\section{Introduction}
We study the scattering of a single charged particle (an electron) by a magnetic monopole.
The magnetic field of the monopole is described by a connection on a $U(1)$ vector bundle $E$ over $M = {\mathbb{R}}^3- \{0\}$, the electron
wave functions are sections of that vector bundle in $L^2(E)$, and there is a self-adjoint Hamiltonian $H$ on this space generating the dynamics. For the scattering problem we identify asymptotic states
which are elements of $L^2({\mathbb{R}}^3)$ with dynamics generated by the free Hamiltonian $ H_0$. Scattering is described by M\o ller wave operators which are proved to exist in a two Hilbert space formulation. They have
the form $ \Omega_{\pm} \Psi = \lim_{t \to \pm \infty} e^{iHt} J e^{-i H_0t} \Psi $ where $J: L^2({\mathbb{R}}^3) \to L^2(E)$ identifies states of the same angular momentum. We also include results in which the monopole Hamiltonian $H$ is perturbed by a potential $V$ yielding a new Hamiltonian $H+V$.
This scattering problem has been previously
treated by Petry \cite{Pet80}, who takes asymptotic states which are also sections in $L^2(E)$ rather than elements of $L^2({\mathbb{R}}^3)$.
Petry's choice of the asymptotic dynamics is somewhat ad hoc and difficult to connect
with the usual free dynamics. This means his scattering results are stated in terms of ``scattering into cones'', rather than the more standard
asymptotic wave functions.
Our two Hilbert space treatment puts the problem more in the mainstream of scattering theory. It opens the door for further developments such as
the perturbing potential, which seems awkward in the Petry formulation.
The paper is organized as follows. In section 2 we review the description of the monopole as a connection on a vector bundle.
In section 3 we review the definition of the free Hamiltonian $H_0$ in spherical coordinates. In section 4 we give a detailed definition of
the monopole Hamiltonian $H$ also in spherical coordinates. It is a map on smooth sections of the vector bundle and we show that it defines a self-adjoint operator on $L^2(E)$. In section 5 we find continuum eigenfunction expansions for the radial parts of both $H_0$ and $H$, and in section 6 this is used to give
detailed estimates on the free dynamics. In section 7 we prove the main result which is the existence of the wave operators. Finally
in section 8 we extend the scattering result by including a potential.
\section{The monopole}
In spherical coordinates $(r, \theta, \phi)$ the magnetic field
for a monopole of strength $n \in {\mathbb{Z}}$, $n \neq 0$, is the two-form
($\star$ is the Hodge star operation)
\begin{equation}
B = \star \frac{n}{r^2} dr = n \sin \theta d \theta d \phi.
\end{equation}
This is singular at the origin but otherwise is closed ($dB=0$) as required by Maxwell's equations. However it is not exact ($B \neq dA$ for any $A$).
If it were exact the integral over the unit sphere $|x|=1$ would be zero, but $ \int_{|x|=1} B = 4 \pi n$. Locally one can take
\begin{equation} \label{acorn1}
A = - n \cos \theta d \phi
\end{equation}
since then $dA = B$.
But this is singular at $x_1=x_2 =0$ as one can see from the representation in Cartesian coordinates
\begin{equation} \label{acorn2}
A = n \frac{ x_3}{|x| } \frac {x_2 dx_1- x_1 dx_2}{x_1^2 + x_2^2}.
\end{equation}
This is a problem since we need the magnetic potential $A$ to formulate the quantum mechanics.
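As a quick numerical sanity check (not part of the paper's argument; all function names below are ours), one can verify that the spherical form (\ref{acorn1}) and the Cartesian form (\ref{acorn2}) of $A$ agree at a point away from the $x_3$-axis:

```python
import math

def A_spherical(x, n=1):
    # components of -n cos(theta) d(phi) via the chain rule,
    # with phi = atan2(x2, x1) and cos(theta) = x3/|x|
    x1, x2, x3 = x
    r = math.sqrt(x1*x1 + x2*x2 + x3*x3)
    rho2 = x1*x1 + x2*x2
    dphi = (-x2/rho2, x1/rho2, 0.0)          # gradient of phi
    return tuple(-n * (x3/r) * c for c in dphi)

def A_cartesian(x, n=1):
    # components read off directly from the Cartesian expression
    x1, x2, x3 = x
    r = math.sqrt(x1*x1 + x2*x2 + x3*x3)
    rho2 = x1*x1 + x2*x2
    return (n*(x3/r)*x2/rho2, -n*(x3/r)*x1/rho2, 0.0)

pt = (0.3, -0.7, 1.1)
diff = max(abs(a - b) for a, b in zip(A_spherical(pt), A_cartesian(pt)))
```

The two expressions agree to machine precision at any point with $x_1^2 + x_2^2 > 0$.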
The remedy is to introduce the vector bundle $E$ defined as follows. First it is a manifold and there is a smooth map $\pi : E \to M $ to $M = {\mathbb{R}}^3- \{ 0\}$
such that each fibre $E_x = \pi^{-1} x $ is a vector space isomorphic to ${\mathbb{C}}$. Further let $U_{\pm} $ be
an open covering of $M$ defined for $0 < \alpha < \frac12 \pi$ as follows.
First in spherical coordinates and then
in Cartesian coordinates
\begin{equation} \label{four}
\begin{split}
U_+ = & \Big \{ x \in M: 0 \leq \theta < \frac{\pi}{2} + \alpha \Big\} = \Big \{ x \in M: 1 \geq \frac{x_3}{|x| } > \cos\Big( \frac{ \pi}{2} + \alpha \Big) \Big\} \\
U_- = & \Big \{ x \in M: \frac {\pi}{2} - \alpha < \theta \leq \pi \Big\} = \Big \{ x \in M: \cos \Big( \frac{\pi}{2} - \alpha \Big) > \frac{x_3}{|x| } \geq -1 \Big\}. \\
\end{split}
\end{equation}
We require that in each region there is a trivialization (diffeomorphism)
\begin{equation}
h_{\pm} : \pi^{-1}( U_{\pm} ) \to U_{\pm} \times {\mathbb{C}}
\end{equation}
such that for $x \in U_{\pm}$ the map $ h_{\pm} : E_x \to \{ x \} \times {\mathbb{C}}$ is a linear isomorphism. They are related by
the transition function in $U_+ \cap U_-$
\begin{equation} \label{piquant}
h_+ h _ - ^{-1} = e^{ 2in \phi }
\end{equation}
where $e^{ 2in \phi } $
acts on the second entry ${\mathbb{C}}$. With the transition functions specified,
$E$ can be constructed as equivalence classes in $M \times {\mathbb{C}}$ with $(x,v) \sim (x, e^{2in \phi}v )$ if $x \in U_+\cap U_-$.
The connection is defined by one-forms $A^{\pm} $ on $U_{\pm} $. To be consistent with (\ref{piquant}) they are related in $U_ + \cap U_ -$ by the gauge transformation
\begin{equation}
A^{+ } = A^{-} + 2n d \phi.
\end{equation}
This is accomplished by taking instead of (\ref{acorn1}),(\ref{acorn2})
\begin{equation} \label{walnut}
A^{\pm} = - n( \cos \theta \mp 1 ) d \phi = n \Big( \frac{ x_3}{|x| } \mp 1 \Big) \frac {x_2 dx_1- x_1 dx_2}{x_1^2 + x_2^2}.
\end{equation}
Each of these satisfies $d A^{\pm} = B$, but now they have no singularity. Indeed on $U_+$ we have for points with $x_3 >0$
\begin{equation} \label{bingo}
\Big| \frac{ x_3}{|x| } - 1 \Big| \leq {\cal O} \Big( \frac {x_1^2 + x_2^2} { x^2_3}\Big).
\end{equation}
So for fixed $x_3>0$ there is no singularity at $x_1=x_2=0$. Points in $U_{+}$ with $x_3 \leq 0$ also have $x_1^2 + x_2^2 >0$ so the
singularity is avoided. Similarly $A^-$ has no singularity on $U_-$.
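The claim $dA^{\pm}=B$ can be checked numerically (an illustrative sketch, not part of the proof; the helper functions are ours): the field strength $F_{jk}=\partial_j A^+_k - \partial_k A^+_j$ computed from (\ref{walnut}) by finite differences should reproduce the monopole field, whose nonzero components are $F_{23} = n x_1/|x|^3$, $F_{31} = n x_2/|x|^3$, $F_{12} = n x_3/|x|^3$.

```python
import math

def A_plus(x, n=1):
    # A^+ from (walnut): n(x3/|x| - 1)(x2 dx1 - x1 dx2)/(x1^2 + x2^2)
    x1, x2, x3 = x
    r = math.sqrt(x1*x1 + x2*x2 + x3*x3)
    f = n * (x3/r - 1.0) / (x1*x1 + x2*x2)
    return (f*x2, -f*x1, 0.0)

def F(j, k, x, h=1e-6):
    # F_jk = d_j A_k - d_k A_j by central differences (0-based indices)
    def d(i, comp):
        xp, xm = list(x), list(x)
        xp[i] += h; xm[i] -= h
        return (A_plus(xp)[comp] - A_plus(xm)[comp]) / (2*h)
    return d(j, k) - d(k, j)

x = (0.4, 0.2, 0.9)   # a point in U_+
r3 = (x[0]**2 + x[1]**2 + x[2]**2) ** 1.5
err = max(abs(F(1, 2, x) - x[0]/r3),    # F_23 = n x1/|x|^3
          abs(F(2, 0, x) - x[1]/r3),    # F_31 = n x2/|x|^3
          abs(F(0, 1, x) - x[2]/r3))    # F_12 = n x3/|x|^3
```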
Now we can define a covariant derivative on sections of $E$. A section of $E$ is a map $\psi : M \to E$ such that $\pi (\psi (x) )= x$. The set of all smooth sections is
denoted $\Gamma ( E)$. For $f \in \Gamma(E)$ we define $\nabla_k f \in \Gamma(E) $
by specifying that for $x \in U_{\pm} $ if $h_{\pm} f(x) = (x,f_{\pm}(x)) $ then $\nabla_k f$ satisfies $h_{\pm} (\nabla_k f (x) ) = (x, (\nabla_k f )_{\pm}(x) )$ where
\begin{equation} \label{sour}
(\nabla_k f ) _{\pm} = ( \partial_k - i A_k^{\pm} ) f_{\pm}.
\end{equation}
Here $A_k^{\pm} $ are the components in $A^{\pm} = \sum_k A^{\pm} _k dx_k$.
This defines a section since in $U_+ \cap U_-$ we have $A_k^{+ } = A_k^{-} + 2n \ \partial \phi/ \partial x_k$ so
\begin{equation}
( \partial_k - i A_k^{+} ) e^{2in \phi} = e^{2in \phi} ( \partial_k - i A_k^{- } ).
\end{equation}
Thus if $ f$ is a section, so that $f_+ = e^{2in \phi} f_-$, then $(\nabla_k f )_+ = e^{2in \phi} ( \nabla_k f )_- $, and hence the pair $(\nabla_k f )_{\pm}$
defines a section.
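The gauge covariance relation $( \partial_1 - i A_1^{+} ) e^{2in \phi}g = e^{2in \phi} ( \partial_1 - i A_1^{- } )g$ can also be checked numerically at a point of the overlap (a sketch with our own function names; $g$ is an arbitrary smooth test function):

```python
import cmath, math

n = 1

def phi(x):
    return math.atan2(x[1], x[0])

def A1(sign, x):
    # first component of A^{+/-} from (walnut); sign = +1 for A^+, -1 for A^-
    x1, x2, x3 = x
    r = math.sqrt(x1*x1 + x2*x2 + x3*x3)
    return n * (x3/r - sign) * x2 / (x1*x1 + x2*x2)

def cov1(sign, f, x, h=1e-6):
    # (d_1 - i A_1^{sign}) f at x, with a central-difference derivative
    xp = (x[0]+h, x[1], x[2]); xm = (x[0]-h, x[1], x[2])
    return (f(xp) - f(xm)) / (2*h) - 1j * A1(sign, x) * f(x)

def g(x):
    # an arbitrary smooth test function in the "-" trivialization
    return (x[0] + 1j*x[1]) * math.exp(-(x[0]**2 + x[1]**2 + x[2]**2))

x0 = (0.5, 0.3, 0.1)   # a point in the overlap U_+ and U_-
gauge = lambda y: cmath.exp(2j*n*phi(y)) * g(y)
err = abs(cov1(+1, gauge, x0) - cmath.exp(2j*n*phi(x0)) * cov1(-1, g, x0))
```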
\section{Free Hamiltonian}
We first review the standard treatment of the free Hamiltonian. This will recall some facts we need and provide a model for the treatment of the monopole Hamiltonian.
The free Hamiltonian on $L^2({\mathbb{R}}^3)$ is minus the Laplacian:
\begin{equation}
H_0 = - \Delta = - \sum_i \partial_i \partial_i
\end{equation}
defined initially on smooth functions.
We study it as a quadratic form and begin by breaking it into radial and angular parts by
\begin{equation} \label{sync}
\begin{split}
( f,H_0 f ) = & \sum_i \| \partial_i f \|^2 \\
= &\sum_{i,j} (\partial_i f , \frac{ x_ix_j}{ |x|^2 } \partial_jf ) + \sum_{i,j} \Big( \partial_if, ( \delta_{i j} - \frac{ x_i x_j}{ |x|^{2} }) \partial_j f \Big) \\
= & \| \frac{1}{|x|}(x \cdot \partial ) f \|^2 + \sum_i \| \frac{1}{|x|} (x \times \partial)_i f \|^2. \\
\end{split}
\end{equation}
The last step follows from $(x \times \partial)_i =\sum_{jk} e_{ijk} x_j\partial_k$ and $\sum_i e_{ijk} e_{i \ell m} = \delta_{j \ell} \delta_{km} - \delta_{j m} \delta_{k \ell} $.
($e_{ijk} $ is the Levi-Civita symbol.)
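The Levi-Civita contraction identity used in the last step is finite and can be verified by direct enumeration (a trivial check, included only as an illustration):

```python
def eps(i, j, k):
    # Levi-Civita symbol e_ijk for indices 0, 1, 2
    return (i - j) * (j - k) * (k - i) // 2

def delta(a, b):
    return 1 if a == b else 0

# sum_i e_ijk e_ilm = delta_jl delta_km - delta_jm delta_kl
ok = all(sum(eps(i, j, k) * eps(i, l, m) for i in range(3))
         == delta(j, l)*delta(k, m) - delta(j, m)*delta(k, l)
         for j in range(3) for k in range(3)
         for l in range(3) for m in range(3))
```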
The skew-symmetric operators $ (x \times \partial)_i $ are recognized as a basis for the representation
of the Lie algebra of the rotation group $SO(3)$ generated by the action of the group on ${\mathbb{R}}^3$.
In quantum mechanics the symmetric operators $L_i = -i (x \times \partial)_i $
are identified as the angular momentum. They satisfy the commutation relations $[L_i,L_j] = \sum_k e_{ijk} i L_k$ or $[L_1,L_2] = i L_3$, etc.
Now we have
\begin{equation}
( f,H_0 f )
= \| \frac{1}{|x|}(x \cdot \partial ) f \|^2 + \sum_i \| \frac{1}{|x|} L_i f \|^2.
\end{equation}
Next we change to spherical coordinates. The term $ |x|^{-1} (x \cdot \partial) f $ becomes $\partial f/\partial r$ and the $L_i$
become
\begin{equation}
\begin{split}
L_1 =& i \Big( \sin \phi \frac{\partial}{ \partial \theta } + \cot \theta \cos \phi \frac{\partial}{ \partial \phi } \Big) \\
L_2 =& i \Big( - \cos \phi \frac{\partial}{ \partial \theta } + \cot \theta \sin \phi \frac{\partial}{ \partial \phi } \Big) \\
L_3 = & - i \frac{\partial}{ \partial \phi }.
\end{split}
\end{equation}
The Hamiltonian in spherical coordinates, still called $H_0$, has become
\begin{equation}
( f,H_0 f) = \| \frac{\partial f}{\partial r} \|^2 + \sum_i \| \frac{1}{r} L_i f \|^2.
\end{equation}
The norms are now in
the space
\begin{equation}
{\cal H}_0 = L^2({\mathbb{R}}^+ \times S^2, r^2 dr d \Omega ) = L^2({\mathbb{R}}^+ , r^2 dr ) \otimes L^2( S^2, d \Omega )
\end{equation}
where ${\mathbb{R}}^+ = (0, \infty)$ and $d \Omega= \sin \theta d \theta d \phi$ is the rotation-invariant measure on $S^2$. The $L_i$ are symmetric in $ L^2( S^2, d \Omega ) $
and after an integration by parts in the radial variable we have
\begin{equation}
H_0 f=- \frac{1}{r^2} \frac{\partial} {\partial r} r^2 \frac{\partial f} {\partial r} + \frac{L^2 f}{r^2}.
\end{equation}
Here $L^2= L_1^2 + L_2^2 + L^2_3 $ is the Casimir operator for the representation of the Lie algebra of the rotation group on $S^2$. This
is a case where it is equal to minus the Laplacian on $S^2$
\begin{equation}
L^2 = - \Delta_2 = -\Big( \frac {1}{\sin \theta} \frac{ \partial} { \partial \theta } \sin \theta \frac{ \partial} { \partial \theta } + \frac {1}{\sin^2 \theta} \frac{ \partial^2} { \partial \phi^2 } \Big)
\end{equation}
as can be checked directly.
The spectrum of $L^2$ on $L^2(S^2, d \Omega) $ is studied by considering the joint spectrum of the commuting operators $L^2, L_3$.
This is a standard problem in quantum mechanics. It is also the problem of breaking down the representation of the rotation group
into irreducible pieces. Just from the commutation relations one finds that $L^2$ can only have the eigenvalues $\ell(\ell+1)$ with $\ell = 0,1,2, \dots$
and that $L_3$ can only have integer eigenvalues $m$ with $|m| \leq \ell$. The corresponding normalized eigenfunctions are the
spherical harmonics $Y_{\ell, m} (\theta, \phi) $ and are explicitly constructed in terms of the Legendre polynomials.
They satisfy
\begin{equation}
\begin{split}
L^2 Y_{\ell,m}= & \ell(\ell+1) Y_{\ell,m} \hspace{1cm} \ell \geq 0 \\
L_3 Y_{\ell,m} = & m Y_{\ell,m} \hspace{1cm} \hspace{1cm} |m| \leq \ell. \\
\end{split}
\end{equation}
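For example $Y_{1,0} = \sqrt{3/4\pi}\, \cos \theta$, and since it is $\phi$-independent the eigenvalue equation $L^2 Y_{1,0} = 2\, Y_{1,0}$ can be checked with finite differences applied to the $-\Delta_2$ formula above (an illustrative sketch; the function names are ours):

```python
import math

def Y10(theta):
    # Y_{1,0}(theta, phi) = sqrt(3/4pi) cos(theta); it is phi-independent
    return math.sqrt(3/(4*math.pi)) * math.cos(theta)

def L2_on(f, theta, h=1e-4):
    # for a phi-independent f, L^2 f = -(1/sin t) d/dt ( sin t df/dt ),
    # evaluated here with nested central differences
    def s_df(t):
        return math.sin(t) * (f(t+h) - f(t-h)) / (2*h)
    return -(s_df(theta+h) - s_df(theta-h)) / (2*h) / math.sin(theta)

theta = 1.1
err = abs(L2_on(Y10, theta) - 2*Y10(theta))   # l = 1: eigenvalue l(l+1) = 2
```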
The spherical harmonics are complete so this gives the full spectrum of $L^2,L_3$ and yields a definition of corresponding self-adjoint operators.
Let ${\cal K} _{0,\ell} $ be the $2\ell+1$ dimensional eigenspace for the eigenvalue $\ell( \ell+1)$ of $L^2$. Then ${\cal K} _{0,\ell} $ is spanned by $\{ Y_{\ell, m} \}_{ |m| \leq \ell} $.
We can then write the Hilbert space as
\begin{equation}
{\cal H}_0 = \bigoplus_{\ell =0}^{\infty} L^2({\mathbb{R}}^+, r^2dr) \otimes {\cal K}_{0, \ell}
\end{equation}
and on smooth functions in this space $H_0 = \bigoplus_{\ell}\ ( h_{0, \ell} \otimes I ) $ where
\begin{equation}
h_{0, \ell} = - \frac{d^2}{dr^2} - \frac{2}{r} \frac{d}{dr} + \frac{\ell(\ell+1) }{r^2}.
\end{equation}
We study the operator $h_{0, \ell} $ further in sections \ref{tick}, \ref{tock}.
\section{Monopole Hamiltonian}
The Hamiltonian for our problem is initially defined on smooth sections $f \in \Gamma (E)$
by
\begin{equation}
H f = - \sum_{k=1}^3 \nabla_k \nabla_k f.
\end{equation}
We want to define it as a self-adjoint operator in $L^2(E)$. In this section we reduce it to a radial problem as for $H_0$. The treatment
more or less follows Wu and Yang
\cite{WuYa76}.
The Hilbert space $L^2(E)$ is
defined as follows. If
$x \in U_{\pm} $ and $v \in E_x$ then $h_{\pm} v = (x, v_{\pm}) $ and we define
$|v| = |v_{\pm}|$. This is unambiguous since if $x \in U_+ \cap U_-$ then $v_{\pm}$ only differ by
a phase and so $|v_ +| =|v_-|$. Similarly if $v,w \in E_x$ we can define $\bar v w \in {\mathbb{C}}$.
The Hilbert space $L^2(E) $ is all measurable sections $f$ such that the norm $\|f \|^2 = \int |f(x) |^2 dx$ is finite with
$(g,f) = \int \overline{g(x)}f(x) dx$.
The covariant derivative $\nabla_k$ is skew-symmetric in this Hilbert space hence the Hamiltonian is symmetric. Indeed if $ \mathrm{ supp } f, \mathrm{ supp } g \subset U_{\pm} $
and $h_{\pm} f(x) = (x, f_{\pm} (x))$, etc. then
\begin{equation} (g, \nabla_k f) = \int \overline{ g_{\pm} } ( \partial_k -i A^{\pm}_k ) f_{\pm}
= - \int \overline { ( \partial_k -i A^{\pm}_k ) g_{\pm} }f_{\pm}
= - (\nabla_k g, f).
\end{equation}
In the general case we write a section $f$ as a sum $f (x) = f_+(x) + f_-(x)$ with $ \mathrm{ supp } f_{\pm} \subset U_{\pm}$.
Now write the Hamiltonian as a quadratic form, and
as in (\ref{sync}) break it into a radial and angular parts
\begin{equation} \label{sync2}
( f,Hf) = \sum_k \| \nabla_k f \|^2
= \| \frac{1}{|x|}(x \cdot \nabla ) f \|^2 + \sum_k \| \frac{1}{|x|} (x \times \nabla)_k f \|^2.
\end{equation}
Now there is a problem. The operators $(x \times \nabla)_k $, although they have something to do with rotations,
no longer give a representation of the Lie algebra of the rotation group.
The commutators now involve extra terms $ [\nabla _j,\nabla_k] =- iF_{jk}$ where $F_{jk}= \partial_jA^{\pm}_k- \partial_kA^{\pm} _j $ is the magnetic field.
This is not special to the monopole but occurs whenever there is an external magnetic field.
The resolution due to Fierz \cite{Frz44} is to add a term proportional to the field strength.
Instead of $- i (x \times \nabla)_k = (x \times -i \nabla)_k$
we define angular momentum operators by
\begin{equation} \label{Fierz}
{\cal L}_k = (x \times -i\nabla)_k -n\frac{x_k }{|x|}.
\end{equation}
These are symmetric and do satisfy the commutation relations $[{\cal L}_i, {\cal L}_j ] = i\sum_k e_{ijk} {\cal L}_k $. This follows from commutators
like
\begin{equation}
\begin{split}
[{\cal L}_i, x_j ] = &\ i \sum_k e_{ijk} x_k \\
[ {\cal L} _i, \nabla_j ] = &\ i \sum_k e_{ijk} \nabla_k.\\
\end{split}
\end{equation}
To give the idea we show that $ [ {\cal L} _1, \nabla_2] = i \nabla_3$. We have
\begin{equation} \label{stun}
\begin{split}
[ {\cal L} _1, \nabla_2] = &-i [ ( x \times \nabla)_1 , \nabla_2] - n [x_1|x|^{-1} , \nabla_2 ] \\
= & -i [ x _2 \nabla_3- x_3 \nabla_2 , \nabla_2] - n x_1x_2 |x|^{-3} \\
= &i\nabla_3 -i x_2 [ \nabla_3, \nabla_2] - n x_1x_2 |x|^{-3} \\
= &i \nabla_3 - x_2 F_{32} - n x_1x_2 |x|^{-3}. \\
\end{split}
\end{equation}
But in $U_{\pm} $ we have $A_3^{\pm} =0$ and taking $A_2^{\pm} $ from (\ref{walnut})
\begin{equation}
\begin{split}
x_2F_{32} = &x_2 \partial_3 A^{\pm} _2 \\
= & x_2 n\partial_3 \Big(\frac{ x_3}{|x|} \mp 1\Big ) \frac {-x_1} {x_1^2 + x_2^2 } \\
= & n x_2 \frac{|x|^2 - x^2_3}{|x|^3} \frac {-x_1} {x_1^2 + x_2^2 } \\
= & - n \frac{x_1 x_2 }{|x|^3}. \\
\end{split}
\end{equation}
Thus the second and third terms in (\ref{stun}) exactly cancel and hence the result.
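The cancellation can also be seen numerically (a sketch, not part of the proof; functions are ours): differentiate $A_2^+$ from (\ref{walnut}) by a central difference and check $x_2 F_{32} = -n x_1 x_2 / |x|^3$.

```python
import math

n = 1

def A2_plus(x):
    # second component of A^+ from (walnut)
    x1, x2, x3 = x
    r = math.sqrt(x1*x1 + x2*x2 + x3*x3)
    return n * (x3/r - 1.0) * (-x1) / (x1*x1 + x2*x2)

x = (0.6, -0.4, 0.8)
h = 1e-6
# F_32 = d_3 A_2 since A_3 = 0 in this trivialization
F32 = (A2_plus((x[0], x[1], x[2]+h)) - A2_plus((x[0], x[1], x[2]-h))) / (2*h)
r3 = (x[0]**2 + x[1]**2 + x[2]**2) ** 1.5
err = abs(x[1]*F32 + n*x[0]*x[1]/r3)   # x2 F_32 should equal -n x1 x2/|x|^3
```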
\bigskip
Now since $[ (x \times \nabla )_k , n x_k |x|^{-1} ]=0$ and $x\cdot (x \times \nabla)=0$
we have that
\begin{equation} \label{key}
{\cal L}^2 = \sum_k {\cal L}_k^2 = \sum_k (x \times -i \nabla )^2_k +n^2.
\end{equation}
The gauge field has no radial component so $x \cdot \nabla = x \cdot \partial $
and $[ (x \times -i\nabla)_k, |x|^{-1} ]= 0 $
so (\ref{sync2}) becomes
\begin{equation} \label{H2}
( f,H f ) = \| \frac{1}{|x|}(x \cdot \partial ) f \|^2 + (f, \frac{1}{|x|^2} ({\cal L}^2 - n^2 )f ).
\end{equation}
\bigskip
Next change to spherical coordinates. The vector bundle $\pi: E \to M$ becomes
a vector bundle $\pi: E' \to {\mathbb{R}}^+ \times S^2$. With $U'_{\pm} \subset S^2$ defined as in (\ref{four})
these have trivializations $h_{\pm}: \pi^{-1} ( {\mathbb{R}}^+ \times U'_{\pm} ) \to ({\mathbb{R}}^+ \times U'_{\pm} ) \times {\mathbb{C}}$
which still have transition functions $h_+ h_-^{-1} = e^{2in \phi} $.
The term $ |x|^{-1} (x \cdot \nabla) f $ becomes $\partial f/\partial r$ and the ${\cal L}_k$ become operators on $\Gamma(E') $
specified by saying that for $x \in {\mathbb{R}}^+ \times U' _{\pm} $, $({\cal L}_k f ) (x) $ satisfies $h_{\pm} ({\cal L}_k f ) (x) = (x, {\cal L}_k^{\pm} f_{\pm}(x) ) $
where
\begin{equation}
\begin{split}
{\cal L}^{\pm} _1 =& i \Big( \sin \phi \frac{\partial}{ \partial \theta } + \cot \theta \cos \phi \frac{\partial}{ \partial \phi } \Big) - n \cos \phi \left( \frac{1 \mp \cos \theta}{\sin \theta} \right) \\
{\cal L}^{\pm} _2 =& i \Big( - \cos \phi \frac{\partial}{ \partial \theta } + \cot \theta \sin \phi \frac{\partial}{ \partial \phi } \Big) - n \sin \phi \left( \frac{1 \mp \cos \theta}{\sin \theta} \right) \\
{\cal L}^{\pm} _3 = & - i \frac{\partial}{ \partial \phi } \mp n. \\
\end{split}
\end{equation}
Note that since $ ( \partial / \partial \phi) e^{2i n\phi} = e^{2i n\phi } \partial / \partial \phi + 2i n $
we have in $U'_+ \cap U'_-$ the required $ {\cal L}^{+ }_i e^{2i n\phi} = e^{2i n\phi } {\cal L}^{-}_i $.
The Hamiltonian in spherical coordinates, still called $H$, has become
\begin{equation} \label{sunny}
( f,H f ) = \| \frac{\partial f}{\partial r} \|^2 + ( f, \frac{1}{r^2} ({\cal L}^2 - n^2) f )
\end{equation}
where now the norms and inner products are in ${\cal H} = L^2(E' , r^2 dr d \Omega )$.
After an integration by parts this implies
\begin{equation} \label{sunny2}
H f= - \frac{1}{r^2} \frac{\partial} {\partial r} r^2 \frac{\partial f} {\partial r} + \frac{1}{r^2} ( {\cal L}^2 - n^2).
\end{equation}
In fact since the transition functions only depend on the angular variables we can make the identification
\begin{equation}
{\cal H} = L^2({\mathbb{R}}^+ , r^2 dr ) \otimes L^2(\tilde E, d \Omega )
\end{equation}
where $\tilde E$ is a vector bundle $\pi: \tilde E \to S^2$ with trivializations $h_{\pm}: \pi^{-1}( U'_{\pm} ) \to U'_{\pm} \times {\mathbb{C}}$ which still satisfy $h_+ h_-^{-1} = e^{2in \phi} $.
Now in (\ref{sunny2}) the operator ${\cal L}^2- n^2$ acts only on the factor $L^2(\tilde E, d \Omega)$.
The joint spectrum of ${\cal L}^2, {\cal L}_3$ has been studied by Wu and Yang \cite{WuYa76}.
The commutation relations again constrain the possible eigenvalues to $\ell(\ell+1) $ and $|m| \leq \ell$.
But now from (\ref{key}) we have ${\cal L}^2 \geq n^2$ so we must have $\ell \geq |n|$. Only states with
non-zero angular momentum exist on the monopole.
Wu and Yang explicitly construct the eigenfunctions in terms of Jacobi polynomials.
The normalized eigenfunctions $\cY_{n, \ell, m} (\theta, \phi)$ are sections of $\tilde E$ in $L^2(\tilde E, d \Omega)$
called \textit{monopole harmonics}. They satisfy
\begin{equation}
\begin{split}
{\cal L}^2 \cY_{n,\ell,m}= & \ell(\ell+1) \cY_{n, \ell,m} \hspace{1cm} \ell \geq |n| \\
{\cal L}_3 \cY_{n,\ell,m} = & m \cY_{n,\ell,m} \hspace{1cm} \hspace{1cm} |m| \leq \ell. \\
\end{split}
\end{equation}
Explicitly they are given in the trivializations on $U'_{\pm} = U_{\pm} \cap S^2 $ by
\begin{equation} \label{elf1}
\cY^{\pm} _{n, \ell,m}(\xi , \phi ) = \textrm{const} (1- \xi)^{\frac12 \alpha } (1+ \xi )^{\frac12 \beta} P^{\alpha ,\beta}_{ \ell + m } (\xi ) e^{i(m \pm n) \phi } \hspace{1cm} \xi = \cos \theta
\end{equation}
where $\alpha = -n-m, \beta = n-m $ and $P^{\alpha ,\beta}_{ \ell + m }$ are Jacobi polynomials given by
\begin{equation} \label{elf2}
P^{\alpha ,\beta}_{ \ell + m } (\xi) = \textrm{const} (1- \xi)^{- \alpha } (1+ \xi )^{ - \beta} \frac{d^{\ell + m}} { d \xi^{\ell + m} }(1- \xi)^{ \alpha + \ell +m } (1+ \xi )^{ \beta + \ell + m}.
\end{equation}
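As a concrete illustration (ours, not in the paper): for $n=1$, $\ell=1$, $m=0$ the Rodrigues-type formula (\ref{elf2}) gives a polynomial proportional to $1-\xi$, so (\ref{elf1}) yields $\cY^+_{1,1,0} \propto \sin \theta \, e^{i\phi}$. One can then check numerically, using the displayed formulas for ${\cal L}^+_k$ with finite-difference derivatives, that ${\cal L}^2 \cY = 2\, \cY$ and ${\cal L}_3 \cY = 0$:

```python
import cmath, math

n = 1  # monopole strength

def Y(theta, phi):
    # cY^+_{1,1,0}: evaluating (elf1)-(elf2) for n=1, l=1, m=0 gives
    # a function proportional to sin(theta) e^{i phi}
    return math.sin(theta) * cmath.exp(1j*phi)

def L(k, f, h=1e-4):
    # cal L^+_k from the displayed formulas; derivatives by central differences
    def dth(t, p): return (f(t+h, p) - f(t-h, p)) / (2*h)
    def dph(t, p): return (f(t, p+h) - f(t, p-h)) / (2*h)
    if k == 1:
        return lambda t, p: (1j*(math.sin(p)*dth(t, p)
                                 + math.cos(t)/math.sin(t)*math.cos(p)*dph(t, p))
                             - n*math.cos(p)*(1 - math.cos(t))/math.sin(t)*f(t, p))
    if k == 2:
        return lambda t, p: (1j*(-math.cos(p)*dth(t, p)
                                 + math.cos(t)/math.sin(t)*math.sin(p)*dph(t, p))
                             - n*math.sin(p)*(1 - math.cos(t))/math.sin(t)*f(t, p))
    return lambda t, p: -1j*dph(t, p) - n*f(t, p)

t0, p0 = 1.0, 0.7
L2Y = sum(L(k, L(k, Y))(t0, p0) for k in (1, 2, 3))   # cal L^2 applied to Y
err_L2 = abs(L2Y - 2*Y(t0, p0))                       # expect l(l+1) = 2
err_L3 = abs(L(3, Y)(t0, p0))                         # expect m = 0
```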
Completeness follows from the completeness of the Jacobi polynomials.
Thus the $\cY_{n,\ell,m}$ give the full spectrum of ${\cal L}^2, {\cal L}_3 $ and yield a definition of these as self-adjoint operators.
Let ${\cal K} _{n ,\ell} $ be the $2\ell+1$ dimensional eigenspace in $L^2(\tilde E, d \Omega) $ for the eigenvalue $\ell( \ell+1)$ of ${\cal L}^2$. Then ${\cal K}_{n, \ell} $ is spanned by the $\{ \cY_{n, \ell, m} \}_{ |m| \leq \ell} $.
We now can write the Hilbert space as
\begin{equation}
{\cal H} = \bigoplus_{\ell =|n| }^{\infty} L^2({\mathbb{R}}^+, r^2dr) \otimes {\cal K}_{n , \ell}
\end{equation}
and on smooth functions in this space $H = \bigoplus_{\ell}\ ( h_{ \ell} \otimes I ) $ where
\begin{equation} \label{thirsty}
h_{ \ell} = - \frac{d^2}{dr^2} - \frac{2}{r} \frac{d}{dr} + \frac{\ell(\ell+1)-n^2 }{r^2}.
\end{equation}
The operators $h_{\ell} $ are essentially self-adjoint on ${\cal C}^{\infty} _0( {\mathbb{R}}^+ ) $. For an operator of this form the condition is that the coefficient of
the $1/r^2 $ term be $\geq \frac34$ (see Reed-Simon \cite{ReSi75}, pp. 159--161 and earlier references). Here we have $ \ell(\ell+1) -n^2 \geq \ell \geq |n| \geq 1$, which suffices. This means
the repulsion from the $1/r^2 $ potential is strong enough to keep the particle away from the origin and a boundary condition at the origin is not
needed.
(This is not the case for the free radial Hamiltonian $h_{0, \ell}$ with $n=0$, $\ell =0$, in which case the $1/r^2$ term is absent and a boundary condition is needed.
That case does not occur in this paper, where $\ell \geq 1$.)
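The limit-point criterion quoted above requires the $1/r^2$ coefficient to be at least $\frac34$; a tiny enumeration (illustrative only) confirms that $\ell(\ell+1)-n^2$ clears this bound for all $\ell \geq |n| \geq 1$ in the sampled range:

```python
# the coefficient of 1/r^2 in h_l is l(l+1) - n^2; its minimum over l >= |n|
# is attained at l = |n|, where it equals |n| >= 1 > 3/4
ok = all(l*(l+1) - n*n >= 0.75
         for n in range(1, 40) for l in range(n, n + 40))
```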
The self-adjoint $h_{\ell }$ determines a unitary group $e^{-ih_{\ell} t} $. This generates a unitary group on ${\cal H}$ and we define $H$ to be the self-adjoint
generator. So $e^{-iHt} = \bigoplus_{\ell} ( e^{-ih_{\ell} t} \otimes I ) $.
\section{Eigenfunction expansions } \label{tick}
Domains of self-adjointness for $h_{0, \ell} $ will be obtained by finding continuum eigenfunction expansions
(c.f. Petry \cite{Pet80}). But we start with a more general operator
\begin{equation}
h(\mu) = - \frac{d^2}{dr^2} - \frac{2}{r} \frac{d}{dr} + \frac{ \mu^2- \frac14 }{r^2}
\end{equation}
with $\mu >0$. As is well-known the continuum eigenfunctions have the form $ (kr )^{-\frac12} J_{\mu} (kr) $ where $J_{\mu} $ is the Bessel function of order $\mu$ regular at the origin
and we have
\begin{equation}
h(\mu)\Big( (kr )^{-\frac12} J_{\mu} (kr) \Big ) = k^2 \Big ( (kr )^{-\frac12} J_{\mu} (kr) \Big ).
\end{equation}
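This eigenvalue equation can be checked by finite differences (a numerical sketch with our own function names): for $\mu = \frac32$ the eigenfunction $(kr)^{-\frac12} J_{3/2}(kr)$ is proportional to the spherical Bessel function $j_1(kr)$, which has a closed form.

```python
import math

def j1(x):
    # spherical Bessel j_1; (kr)^{-1/2} J_{3/2}(kr) is proportional to j_1(kr)
    return math.sin(x)/x**2 - math.cos(x)/x

def h_mu(f, r, mu, h=1e-4):
    # h(mu) f = -f'' - (2/r) f' + (mu^2 - 1/4) f / r^2, central differences
    f1 = (f(r+h) - f(r-h)) / (2*h)
    f2 = (f(r+h) - 2*f(r) + f(r-h)) / h**2
    return -f2 - 2*f1/r + (mu*mu - 0.25) * f(r) / r**2

k = 2.0
f = lambda r: j1(k*r)
err = abs(h_mu(f, 1.3, 1.5) - k*k*f(1.3))   # mu = 3/2 corresponds to l = 1
```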
Expansions in the eigenfunctions are given by Fourier-Bessel transforms and we recall the relevant facts. (See for example Titchmarsh \cite{Tit48}, where
however results are stated with Lebesgue measure $dr$ rather than the $r^2dr$ employed here.)
The transform
\begin{equation}
\psi^{\#}_{\mu} (k)= \int_0^{ \infty} (kr )^{-\frac12} J_{\mu} (kr) \psi (r) r^2 dr
\end{equation}
defined initially for say $\psi$ in the dense domain ${\cal C}^{\infty}_0({\mathbb{R}}^+)$
satisfies
\begin{equation}
\int_0^{ \infty} | \psi^{\#}_{\mu} (k) |^2 k^2 dk = \int_0^{ \infty} | \psi (r) |^2 r^2 dr
\end{equation}
and
extends to a unitary operator from
$L^2( {\mathbb{R}}^+, r^2 dr)$ to $L^2( {\mathbb{R}}^+, k^2 dk) $.
It is its own inverse
\begin{equation}
\psi (r)= \int_0^{ \infty} (kr )^{-\frac12} J_{\mu} (kr) \psi^{\#} _{\mu} (k) k^2 dk.
\end{equation}
Now for $\psi^{\#} _{\mu} \in {\cal C}^{\infty}_0({\mathbb{R}}^+) $ we have that $\psi(r) $ is a smooth function and
\begin{equation} \label{tidy1}
( h(\mu) \psi)(r) = \int_0^{ \infty} (kr )^{-\frac12} J_{\mu} (kr) ( k^2 \psi^{\#} _{\mu}(k)) k^2 dk.
\end{equation}
We use this formula to define $h(\mu)$ as a self-adjoint operator with domain
\begin{equation} \label{tidy2}
D( h(\mu)) = \{ \psi \in L^2( {\mathbb{R}}^+, r^2 dr): k^2 \psi^{\#} _{\mu} (k) \in L^2( {\mathbb{R}}^+, k^2 dk) \}.
\end{equation}
The formula (\ref{tidy1}) provides the spectral resolution and so there is a unitary group
\begin{equation} \label{tidy3}
( e^{-ih(\mu)t} \psi)(r) = \int_0^{ \infty} (kr )^{-\frac12} J_{\mu} (kr) e^{-ik^2t} \psi^{\#} _{\mu} (k) k^2 dk.
\end{equation}
\bigskip
Now if $\mu = \ell + \frac12 $ then $\mu^2- \frac14 = \ell(\ell+1) $ and we have the free operator $h_{0, \ell}$.
Thus with $ \psi^{\#} = \psi^{\#} _{\ell + \frac12} $ the operator $h_{0, \ell} $ is self adjoint on $\{ \psi: k^2 \psi^{\#}(k) \in L^2( {\mathbb{R}}^+, k^2 dk) \}$.
The unitary group $e^{-ih_{0,\ell}t} $ generates a unitary group on ${\cal H}_0$ and we define $H_0$ to be the self-adjoint generator. So
$e^{-iH_0t} = \bigoplus_{\ell} e^{-ih_{0,\ell} t} \otimes I$.
(Note also that if $\mu =( ( \ell + \frac12 )^2 -n^2 )^{\frac12} $ then $\mu^2- \frac14 = \ell(\ell+1) - n^2$ and we have the monopole operator $h_{\ell}$.
We do not use this representation here; see however \cite{Dim20}).
\section{The free dynamics} \label{tock}
We need more detailed control over the domain of the operator $h_{0, \ell} $ and the associated dynamics $e^{-i h_{0, \ell} t}$.
The eigenfunctions can be written
\begin{equation}
\frac{1}{ \sqrt{kr}} J_{\ell + \frac12 }(kr) = \sqrt{ \frac{2}{\pi} } j_{\ell} (kr)
\end{equation}
where $ j_{\ell} (x) $ are the spherical Bessel functions
which are given for
$x>0$ by
\begin{equation}
j_{\ell} (x) = \sqrt{ \frac{\pi}{2x} } J_{\ell+ \frac12} (x) = (-x)^{\ell}\Big( \frac{1}{x} \frac{d}{dx} \Big)^{\ell} \frac {\sin x} {x}.
\end{equation}
They are entire functions which are bounded for $x$ real and have the asymptotics
\begin{equation} \label{asymp}
j_{\ell} (x ) =
\begin{cases} {\cal O}(x^{\ell} ) & x \to 0 \\ {\cal O}(x^{-1} ) & x \to \infty. \\
\end{cases}
\end{equation}
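Both asymptotic regimes are easy to observe numerically for $\ell = 1$ (an illustrative check, not part of the argument): $j_1(x)/x \to \frac13$ as $x \to 0$, and $x\, j_1(x) = \sin x / x - \cos x$ stays bounded as $x \to \infty$.

```python
import math

def j1(x):
    # j_1(x) = sin x / x^2 - cos x / x
    return math.sin(x)/x**2 - math.cos(x)/x

# small-x behaviour: j_1(x) ~ x/3, so j_1(x)/x approaches 1/3
small_ratio = j1(1e-4) / 1e-4

# large-x behaviour: x j_1(x) = sin x / x - cos x stays bounded
large_bound = max(abs(x * j1(x)) for x in (10.0, 100.0, 1000.0, 10000.0))
```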
Now we have the transform pair with $\psi^{\#} = \psi^{\#}_{\ell + \frac12}$
\begin{equation} \label{excel}
\psi^{\#} (k)= \sqrt{ \frac{2}{\pi} } \int_0^{ \infty} j_{\ell} (kr) \psi (r) r^2 dr
\hspace{1cm} \psi (r)= \sqrt{ \frac{2}{\pi} } \int_0^{ \infty} j_{\ell} (kr) \psi^{\#} (k) k^2 dk
\end{equation}
which still define unitary operators. \bigskip
\begin{lem} \label{integrable}
If $\psi^{\#} \in {\cal C}^{\infty}_0({\mathbb{R}}^+ ) $ then for any $N $
\begin{equation} \label{slober}
\psi (r) =
\begin{cases} {\cal O}(r^{\ell} ) & r \to 0 \\ {\cal O}(r^{-N} ) & r \to \infty. \\
\end{cases}
\end{equation}
Furthermore $\psi$ is infinitely differentiable and the derivatives satisfy for any $N$
\begin{equation} \label{slober2}
\psi^{(m)} (r) =
\begin{cases} {\cal O}(r^{\ell-m} ) & r \to 0 \\ {\cal O}(r^{-N} ) & r \to \infty. \\
\end{cases}
\end{equation}
\end{lem}
\bigskip
\noindent{\bf Proof}.
In (\ref{excel}) $k$ is bounded above and below and so $ j_{\ell} (kr) $ has asymptotics (\ref{asymp}) in $r$
and hence $\psi(r) $ satisfies (\ref{slober}) with $N=1$.
To improve the long distance asymptotics
we use the identity
\begin{equation}
x j_{\ell} (x) = (\ell +2 ) j_{\ell+1} (x) + x j' _{\ell+1} (x)
\end{equation}
to write (\ref{excel}) as
\begin{equation}
\begin{split} \sqrt{ \frac{\pi} {2} } \psi (r)
= & r^{-1} \int_0^{ \infty} k r j_{\ell} (kr)( k \psi^{\#} (k) ) dk \\
= & r^{-1} \int_0^{ \infty} (\ell +2 ) j_{\ell+1} (kr) \Big( k \psi^{\#} (k) \Big)dk
+ \int_0^{ \infty} j' _{\ell+1} (kr) \Big( k^2 \psi^{\#} (k) \Big) dk. \\
\end{split}
\end{equation}
The integral in the first term has the same form that we started with and we have an extra $r^{-1} $ in front so the term is ${\cal O}( r^{-2}) $ as $r \to \infty$.
After integrating by parts the second term can be written
\begin{equation}
\begin{split}
\frac{1}{r} \int_0^{ \infty} \frac{d}{dk} j_{\ell+1} (kr) \Big( k^2 \psi^{\#} (k) \Big) dk
= & - \frac{1}{r} \int_0^{ \infty} j_{\ell+1} (kr) \Big( \frac{d}{dk}( k^2 \psi^{\#} (k) )\Big) dk. \\
\end{split}
\end{equation}
Again the integral has the same form that we started with and there is an extra $r^{-1}$ so the term is ${\cal O}(r^{-2} )$.
Thus we have proved $ \psi (r) = {\cal O}(r^{-2})$ as $r \to \infty$. Repeating the argument
gives $ \psi (r) = {\cal O}(r^{-N })$ as $r \to \infty$. Thus (\ref{slober}) is established.
For the derivative we use $ d/dr ( j_{\ell} (kr) ) = kr^{-1} d/dk ( j_{\ell} (kr) )$ and integration by parts to obtain
\begin{equation}
\begin{split} \sqrt{ \frac{\pi} {2} } \frac{d}{dr} \psi (r)
= & \int_0^{ \infty} \frac{k}{r} \frac{d}{dk} j_{\ell} (kr) \psi^{\#} (k) k^2 dk \\
= & \frac{-1}{r} \int_0^{ \infty} j_{\ell} (kr) \frac{d}{dk} \Big( k^3\psi^{\#} (k) \Big) dk. \\
\end{split}
\end{equation}
The integral is of the same form as we have been considering and so has the asymptotics (\ref{slober}).
But we have an extra factor $r^{-1} $ and so (\ref{slober2}) is proved for $m=1$. Repeating the argument
gives the general case. This completes the proof.
\bigskip
The free dynamics (\ref{tidy3}) is now expressed as
\begin{equation} \label{evolve}
( e^{-ih_{0,\ell} t} \psi)(r) = \sqrt{ \frac{2}{\pi} } \int_0^{ \infty} j_{\ell} (kr) e^{-ik^2t} \psi^{\#} (k) k^2 dk.
\end{equation}
\begin{lem} \label{infty}
Let $\psi^{\#} \in {\cal C}^{\infty}_0({\mathbb{R}}^+ ) $ and $N>0$. Then there exists a constant $C$ such that for $0 < r \leq 1, |t| \geq 1$
\begin{equation}
| e^{-ih_{0, \ell} t} \psi (r) | \leq C r^{\ell} |t| ^{-N}.
\end{equation}
\end{lem}
\bigskip
\noindent{\bf Proof}.
In (\ref{evolve}) $k$ is bounded, hence $j_{\ell}(kr) = {\cal O}( r^{\ell} ) $ as $r \to 0$, and hence
$ | e^{-ih_{0, \ell} t} \psi (r) | $ is ${\cal O}( r^{\ell} )$ as $r \to 0$ as in the previous lemma.
Now in (\ref{evolve}) we write
\begin{equation}
e^{-ik^2t} = \frac{1}{-2ikt } \frac{d}{dk} e^{-ik^2t}
\end{equation}
and then integrate by parts. This yields
\begin{equation}
\begin{split}
\label{evolve2}
& \sqrt{ \frac{\pi}{2} } ( e^{-ih_{0, \ell} t} \psi ) (r) \\= & \frac{1}{2it} \int_0^{\infty} e^{-ik^2t}
\frac{d}{dk} \Big( j_{\ell} (kr) \ k \psi^{\#} (k) \Big) dk \\
= & \frac{1}{2it} \int_0^{\infty} e^{-ik^2t}
\Big( k r j'_{\ell} (kr) \psi^{\#} (k) + j_{\ell} (kr) \frac{d}{dk} (k \psi^{\#} (k) ) \Big) dk \\
= & \frac{1}{2it} \int_0^{\infty} e^{-ik^2t}
\Big(- k r j_{\ell+1} (kr) \psi^{\#} (k) + \ell j _{\ell} (kr) \psi^{\#} (k) + j_{\ell} (kr) \frac{d}{dk} (k \psi^{\#} (k) ) \Big) dk. \\
\end{split}
\end{equation}
Here we used the identity
\begin{equation}
x j'_{\ell} (x) = - x j_{\ell+1} (x) + \ell j_{\ell } (x).
\end{equation}
In the integral each term has the same form we started with (possibly with an extra factor of $r$) and so is ${\cal O}(r^{\ell} ) $. But we have gained a power of $t^{-1} $, so this shows that $ | e^{-ih_{0, \ell} t} \psi (r) | \leq {\cal O}(r^{\ell} |t| ^{-1} ) $.
Repeating the argument gives the bound ${\cal O}(r^{\ell} |t| ^{-N} ) $.
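The two spherical-Bessel recurrences used in the last two proofs can be verified numerically with the closed forms of $j_0, j_1, j_2$ (a sketch, with a central-difference derivative standing in for $j'_\ell$):

```python
import math

def j(l, x):
    # closed forms of the spherical Bessel functions j_0, j_1, j_2
    if l == 0:
        return math.sin(x)/x
    if l == 1:
        return math.sin(x)/x**2 - math.cos(x)/x
    return (3/x**3 - 1/x)*math.sin(x) - 3*math.cos(x)/x**2

def dj(l, x, h=1e-6):
    # j_l'(x) by a central difference
    return (j(l, x+h) - j(l, x-h)) / (2*h)

x = 1.7
err1 = abs(x*j(0, x) - (2*j(1, x) + x*dj(1, x)))   # x j_l = (l+2) j_{l+1} + x j'_{l+1}, l = 0
err2 = abs(x*dj(1, x) - (j(1, x) - x*j(2, x)))     # x j_l' = -x j_{l+1} + l j_l, l = 1
```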
\section{Scattering}
Now we are ready to consider the scattering of a charged particle off a magnetic monopole. We use
a two Hilbert space formalism which has been found useful elsewhere (see for example \cite{ReSi79}, p.~34; the idea goes back to Kato \cite{Kat67}).
Recall that the monopole Hilbert space is the space of sections
\begin{equation}
{\cal H} = L^2({\mathbb{R}}^+, r^2dr) \otimes L^2(\tilde E, d \Omega) = \bigoplus_{\ell =|n|}^{\infty} L^2({\mathbb{R}}^+, r^2dr) \otimes {\cal K}_{n , \ell}
\end{equation} with dynamics $e^{-i Ht}$. The asymptotic space is
\begin{equation}
{\cal H}_0 = L^2({\mathbb{R}}^+, r^2dr) \otimes L^2(S^2,d \Omega ) = \bigoplus_{\ell =0}^{\infty} L^2({\mathbb{R}}^+, r^2dr) \otimes {\cal K}_{0 , \ell}
\end{equation}
with dynamics $e^{-iH_0t} $. To compare them we need an identification operator $J: {\cal H}_0 \to {\cal H}$.
We define $J$ by matching angular momentum eigenstates, taking account that for the monopole only states with $\ell \geq |n|$ occur.
Thus we define $J$ as a partial isometry by specifying
\begin{equation}
J ( \psi \otimes Y_{\ell,m} ) = \begin{cases} \psi \otimes \cY_{n, \ell, m} & \hspace{1cm} \ell \geq |n| \\
0 & \hspace{1cm} 0 \leq \ell < |n|. \\
\end{cases}
\end{equation}
The M\o ller wave operators are to be defined on ${\cal H}_0 $ as
\begin{equation}
\Omega_{\pm} \Psi = \lim_{t \to \pm \infty} e^{iHt} J e^{-i H_0t} \Psi
\end{equation}
if the limit exists.
They vanish for $\Psi$ in the subspace of ${\cal H}_0$ with $\ell < |n|$.
The issue is whether they exist for $\Psi$ in
\begin{equation}
{\cal H}_{0, \geq |n|} \equiv \bigoplus_{\ell = |n|} ^{\infty} L^2({\mathbb{R}}^+ , r^2 dr) \otimes {\cal K}_{0, \ell}.
\end{equation}
If they exist then we have identified states with specified asymptotic form
\begin{equation}
e^{-iHt} \Omega_{\pm} \Psi \to J e^{-iH_0t} \Psi \ \textrm{ as } t \to \pm \infty.
\end{equation}
Only states with angular momentum $\ell ( \ell +1)$, $\ell \geq |n|$, occur in the asymptotics.
Then we can define a scattering operator
\begin{equation}
S = \Omega_+^* \Omega_-
\end{equation}
which maps ${\cal H}_{0, \geq |n|} $ to ${\cal H}_{0, \geq |n|} $.
The main result is:
\begin{thm} \label{one}
The wave operators $\Omega_{\pm} $ exist.
\end{thm}
\bigskip
\noindent{\bf Proof}. For $\Psi \in {\cal H}_{0, \geq |n|} $ we have $ \| e^{iHt} J e^{-i H_0t} \Psi \| = \| \Psi \| $ so we can approximate $\Psi$ uniformly in $t$ and it suffices to prove the limit exists for $\Psi$ in a dense set.
In fact it suffices to consider $\Psi = \psi \otimes Y_{\ell, m} $
with $\psi^{\#} \in {\cal C}^{\infty}_0({\mathbb{R}}^+ )$ and $\ell \geq |n| \geq 1$, since finite sums of such vectors are dense.
Since $e^{-iH_0t} \Psi = e^{-i h_{0,\ell} t} \psi \otimes Y_{\ell, m} $ and $e^{iHt} J\Psi = e^{ i h_{\ell} t} \psi \otimes \cY_{n,\ell, m}$
the problem reduces to the existence in $L^2( {\mathbb{R}}^+, r^2 dr ) $ of
\begin{equation}
\lim_{t \to \pm \infty} e^{ih_{\ell} t} e^{-i h_{0, \ell} t} \psi \hspace{1cm} \psi^{\#} \in {\cal C}^{\infty}_0({\mathbb{R}}^+ ).
\end{equation}
To analyze this we need to know that $ e^{-i h_{0, \ell} t} \psi \in D(h_{\ell}) $. It suffices to show that $ \{ \psi: \psi^\# \in {\cal C}^{\infty}_0({\mathbb{R}}^+) \} $
is in $D(h_{\ell}) $. By lemma \ref{integrable} this subspace is contained in the larger subspace
\begin{equation}
{\cal D} \equiv \{ \psi \in {\cal C}^2({\mathbb{R}}^+): \psi \textrm{ has asymptotics (\ref{slober2}) for } m=0,1,2 \}
\end{equation}
so it suffices to show ${\cal D} \subset D(h_{ \ell} ) $.
Note that with these asymptotics the derivatives are still in $L^2( {\mathbb{R}}^+, r^2 dr)$. Indeed with $m \leq 2$ the worst behavior as $r \to 0$ is ${\cal O}(r^{-1})$
and this is still square integrable with the measure $ r^2 dr$. Thus $h_{ \ell} $ acting as derivatives is an operator on ${\cal D}$ and by
integrating by parts it is symmetric. So we have a symmetric extension of the operator $h_{ \ell} $ on ${\cal C}^{\infty}_0 ({\mathbb{R}}^+) $ and the latter is essentially self-adjoint. Thus $h_{\ell}$
on ${\cal D}$ is also essentially self-adjoint with the same closure.
In particular ${\cal D} \subset D(h_{ \ell} ) $.
Let $\Omega_t = e^{ih_{\ell} t} e^{-i h_{0, \ell} t} $.
We now can compute
\begin{equation}
(\Omega_{t'} - \Omega_{t} ) \psi = \int_t^{t'} \frac{d }{ds} \Omega_s \psi \, ds = \int_t^{t'} e^{ih_{\ell} s} ( h_{\ell} - h_{0, \ell} ) e^{-i h_{0, \ell} s} \psi \, ds.
\end{equation}
But
\begin{equation}
h_{\ell} - h_{0, \ell} = \frac{-n^2}{r^2} \equiv v(r)
\end{equation}
and so
\begin{equation}
\| ( \Omega_{t'} - \Omega_{t}) \psi \| \leq \int_t^{t'} \| v e^{-i h_{0, \ell} s} \psi \| ds.
\end{equation}
Now it suffices to show that the function $t \to \| v e^{-i h_{0, \ell} t} \psi \| $ is integrable to obtain a limit.
We write $v = v_1 + v_2$ with supports respectively in $(0,1]$ and $[1, \infty)$. Since $\psi^{\#} \in {\cal C}^{\infty} _0({\mathbb{R}}^+) $ lemma \ref{infty}
says $|( e^{-i h_{0, \ell} t} \psi )(r) | \leq {\cal O}( r^{\ell} |t| ^{- N} ) $ for any $N$ and so
\begin{equation} \label{bisquit}
\begin{split}
\| v_1 e^{-i h_{0, \ell} t} \psi \|^2 = & \int_0^1 \frac{n^4} {r^4} |( e^{-i h_{0, \ell} t} \psi )(r) |^2 r^2 dr
\leq {\cal O}(|t|^{ -2N } ) \int_0^1 r^{2 \ell -2} dr
\leq {\cal O}(|t|^{ -2N } ) \\
\end{split}
\end{equation}
which suffices.
For the $v_2$ term we have
\begin{equation} \label{olive1}
\begin{split}
\| v_2 e^{-i h_{0, \ell} t} \psi \|_{ L^2( {\mathbb{R}}^+, r^2 dr ) }
= & \| v_2 e^{-i h_{0, \ell} t} \psi \otimes Y_{\ell, m} \|_{ L^2( {\mathbb{R}}^+, r^2 dr ) \otimes L^2 (S^2, d\Omega) }
\\
= & \| v_2 e^{-i H_{0 } t} \Psi \|_{ L^2( {\mathbb{R}}^+ \times S^2, r^2 dr d\Omega) }
\\
= & \| v_2 e^{-i H_{0 } t} \Psi \|_{ L^2( {\mathbb{R}}^3 ) }.
\\
\end{split}
\end{equation}
In the last step we have returned to
Cartesian coordinates, so now $v_2= v_2(|\bx|) $ and $ e^{-i H_{0 } t} \Psi $ denotes the unitary (Cartesian) transform of the
$ e^{-i H_{0 } t} \Psi $ defined in spherical coordinates. We want
to show that this time evolution is the usual time evolution defined with the Fourier
transform. First, at $t=0$, $\Psi$ has become
$
\Psi(\bx) = \psi(|\bx| ) Y_{\ell, m} (\bx/|\bx| )
$
with Fourier transform
\begin{equation} \label{tingle}
\tilde \Psi (\bk ) = (2\pi)^{-\frac32} \int e^{i \bk \cdot \bx} \psi(|\bx| ) Y_{\ell, m} (\bx/|\bx| ) d\bx.
\end{equation}
There is a standard expansion of the complex exponential in spherical functions given by the distribution identity (with $k = |\bk|$)
\begin{equation}
e^{i \bk \cdot \bx} = 4 \pi \sum_{\ell=0}^{\infty} \sum_{m = - \ell}^{\ell} i^{\ell}j_{\ell} (k|\bx| ) Y_{\ell,m} ( \bk / | \bk | ) \overline{Y_{\ell,m} (\bx / | \bx| )}.
\end{equation}
Inserting this in (\ref{tingle}) and changing back to spherical coordinates gives
\begin{equation} \label{tingle2}
\tilde \Psi (\bk ) =4\pi i^{\ell}\Big( \int_0^{\infty} j_{\ell} (kr) \psi (r) r^2 dr \Big) Y_{\ell,m} (\bk / | \bk | )
= (2 \pi)^{\frac32} i^{\ell} \psi^{\#} (k) Y_{\ell,m} (\bk / | \bk | ).
\end{equation}
Now replace $\psi$ by $e^{-i h_{0, \ell} t} \psi $. Then $\psi^\#(k) $ becomes $e^{-ik^2t} \psi^\#(k)$ and so $\tilde \Psi(\bk) $ becomes
$e^{- i |\bk|^2 t } \tilde \Psi(\bk) $. Thus
\begin{equation} \label{fourf}
( e^{-i H_0t} \Psi)(\bx) = (2\pi)^{-\frac32} \int e^{- i \bk \cdot \bx} e^{- i |\bk|^2 t } \tilde \Psi(\bk) d\bk
\end{equation}
which is the standard time evolution.
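The spherical-wave expansion of the plane wave used above can be sanity-checked numerically. By the addition theorem, $\sum_m Y_{\ell,m}(\bk/|\bk|)\overline{Y_{\ell,m}(\bx/|\bx|)} = \frac{2\ell+1}{4\pi}P_{\ell}(\cos\gamma)$, it reduces to the Rayleigh expansion $e^{ix\cos\gamma} = \sum_{\ell} (2\ell+1)\, i^{\ell} j_{\ell}(x) P_{\ell}(\cos\gamma)$, which the illustrative sketch below checks for a truncated sum.

```python
import cmath

def spherical_jl(l, x, terms=40):
    # power-series evaluation of the spherical Bessel function j_l
    dfact = 1.0
    for n in range(2 * l + 1, 1, -2):   # (2l+1)!!
        dfact *= n
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= -(x * x / 2.0) / ((k + 1) * (2 * l + 2 * k + 3))
    return x ** l / dfact * total

def rayleigh_sum(x, u, lmax=30):
    # sum_{l <= lmax} (2l+1) i^l j_l(x) P_l(u), Legendre P_l by recurrence
    p_prev, p = 1.0, u                  # P_0(u), P_1(u)
    s = spherical_jl(0, x) + 0j         # l = 0 term
    for l in range(1, lmax + 1):
        s += (2 * l + 1) * (1j ** l) * spherical_jl(l, x) * p
        p_prev, p = p, ((2 * l + 1) * u * p - l * p_prev) / (l + 1)
    return s

x, u = 1.5, 0.3
print(abs(rayleigh_sum(x, u) - cmath.exp(1j * x * u)))  # truncation error, tiny
```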
By lemma \ref{integrable} we have that $ \Psi \in L^1( {\mathbb{R}}^3, d\bx ) $ since
\begin{equation}
\| \Psi \|_1 = \int_{{\mathbb{R}}^3 } \Big| \psi (|\bx|) \, Y_{\ell, m}(\bx/ |\bx| ) \Big| d\bx
= \int_0^{\infty} |\psi (r) | r^2 dr \ \int_{S^2 } |Y_{\ell, m}(\theta, \phi)| d\Omega < \infty.
\end{equation}
Thus $ \Psi \in L^1( {\mathbb{R}}^3, d\bx ) \cap L^2( {\mathbb{R}}^3, d\bx ) $ and in this case (\ref{fourf}) has
the well-known representation
\begin{equation}
( e^{-iH_0t } \Psi ) (\bx) = (4 \pi i t ) ^{-\frac32} \int e^{ i|\bx-\by|^2/4t} \Psi (\by) d \by.
\end{equation}
This gives the estimate $ \| e^{-iH_0t } \Psi \|_{\infty} \leq {\cal O}( |t|^{-3/2} ) $.
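In detail, the decay bound is just the triangle inequality applied to the kernel representation:
\begin{equation*}
\| e^{-iH_0t } \Psi \|_{\infty} \leq (4 \pi |t| ) ^{-\frac32} \int | \Psi (\by) | \, d \by = (4 \pi |t| ) ^{-\frac32} \| \Psi \|_1 = {\cal O}( |t|^{-3/2} ).
\end{equation*}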
Now $v_2(|\bx|) $ is in $ L^2( {\mathbb{R}}^3, d\bx ) $ (since $\int_1^{\infty} r^{-4} r^2 dr < \infty$ ) and so
\begin{equation} \label{olive2}
\| v_2 e^{-i H_0 t} \Psi \|_2 \leq \| v_2 \|_2 \| e^{-i H_0 t} \Psi \|_{\infty}
\leq {\cal O}( |t|^{-3/2} )
\end{equation}
which gives the integrability in $t$. This completes the proof.
\section{Perturbations}
As an indication of the advantages of the present approach we show that it can accommodate perturbations.
Let $V(|x|)$ be a smooth bounded spherically symmetric function on ${\mathbb{R}}^3$. This defines a multiplication operator on sections of
the vector bundle. We consider instead of the Hamiltonian $H$
the perturbed Hamiltonian $H+V$. The corresponding radial Hamiltonian for angular momentum $\ell$ is instead of (\ref{thirsty})
\begin{equation} \label{thirsty2}
h_{\ell} +V = - \frac{d^2}{dr^2} - \frac{2}{r} \frac{d}{dr} +\Big[ \frac{\ell(\ell+1)-n^2 }{r^2} + V(r) \Big].
\end{equation}
As a bounded perturbation of $h_{\ell}$ which is essentially self-adjoint on ${\cal C}^{\infty}_0({\mathbb{R}}^+)$ it is itself essentially self-adjoint on the same
domain. Then $h_{\ell} +V$ generates a unitary group on $L^2({\mathbb{R}}^+ , r^2dr)$, hence a unitary group on the full Hilbert space ${\cal H}$,
and $H+V$ is the generator so
\begin{equation}
e^{-i(H+V) t} = \bigoplus_{\ell =|n|}^{\infty} ( e^{-i(h_{\ell} +V) t} \otimes I ).
\end{equation}
\begin{thm}
Let $V(|x|) $ be a smooth bounded spherically symmetric function in $L^2({\mathbb{R}}^3)$.
Then the wave operators
\begin{equation}
\Omega_{\pm} (V)\Psi = \lim_{t \to \pm \infty} e^{ i(H+V)t } J e^{-iH_0t} \Psi
\end{equation}
exist.
\end{thm}
\bigskip
\noindent{\bf Proof}.
As in the proof of theorem \ref{one},
the proof reduces to showing that
$\| (v + V ) e^{-ih_{0,\ell} t} \psi \|$ is integrable in $t$ for $\psi^{\#} \in {\cal C}^{\infty}_0({\mathbb{R}}^+)$.
We already know this for $v = -n^2/r^2$ so it suffices to consider $\| V e^{-ih_{0,\ell} t} \psi \|$. We split
$V(r) = V_1(r) + V_2(r) $ with supports in $(0,1]$ and $[1, \infty)$. The term $\| V_1 e^{-ih_{0,\ell} t} \psi \|$
is integrable as in (\ref{bisquit}), in fact it is easier since $V_1$ is bounded. For the $V_2$ term we follow the argument (\ref{olive1}) - (\ref{olive2})
and obtain the integrability from the condition $V_2 \in L^2$.
This completes the proof.
\bigskip
\noindent{\bf Remark}. One would like to relax the condition that $V$ be bounded near the origin. A key feature in our method is that $h_{\ell} +V$ should
be essentially self-adjoint on ${\cal C}^{\infty}_0({\mathbb{R}}^+ )$ and referring again to \cite{ReSi75}, this is true if the bracketed expression in (\ref{thirsty2})
is greater than or equal to $\frac34 r^{-2} $ near zero. This expression is bounded below by $r^{-2} + V(r) $ so it suffices that
\begin{equation}
V_1(r) \geq - \frac14 \frac{1}{r^2}.
\end{equation}
With this hypothesis the self-adjointness holds. If we also require that $V_1(r) = {\cal O}(r^{-2}) $ as $r \to 0$, as well as $V_2 \in L^2$,
then the scattering estimates (\ref{bisquit}) and (\ref{olive2}) still hold and the wave operators exist.
Note that the Coulomb potential $V(r) = \pm q/r $ satisfies the $V_1$ conditions; however, the condition $V_2 \in L^2$ is violated.
As in ordinary potential scattering the remedy is to modify the free dynamics (see for example \cite{ReSi79}, p. 169). With this
modification the wave operators would exist for the Coulomb potential as well.
\section{Introduction}
\label{sec:intro}
Consider a general stochastic matching model (GM), as introduced in \cite{MaiMoy16}:
items of various classes enter a system one by one, to be matched by couples. Two items are compatible if and only if their classes
are adjacent in a compatibility graph $G$ that is fixed beforehand. The classes of the entering items are drawn following a prescribed probability measure on ${\mathcal{V}}$, the set
of nodes of $G$.
This model is a variant of the Bipartite Matching model (BM) introduced
in \cite{CKW09}, see also \cite{AdWe}. In the BM, the compatibility graph is bipartite (say ${\mathcal{V}}={\mathcal{V}}_1 \cup {\mathcal{V}}_2$). Along
the various natural applications of this model, the nodes of ${\mathcal{V}}_1$ and ${\mathcal{V}}_2$ represent respectively classes of {customers} and {servers},
{kidneys} and {patients}, blood {givers} and blood {receivers}, {houses} and {applicants}, and so on. The items are matched by couples of ${\mathcal{V}}_1\times{\mathcal{V}}_2$, and also {arrive}
by couples of ${\mathcal{V}}_1\times{\mathcal{V}}_2$.
The classes of the elements of the entering couples are random, and it is assumed in the aforementioned references that
the class of the entering element of ${\mathcal{V}}_1$ is always independent of that of the entering element of ${\mathcal{V}}_2$.
An important generalization of the BM is the so-called {Extended Bipartite Matching} model (EBM) introduced in \cite{BGMa12}, where this independence assumption is relaxed.
Possible entering couples are element of a bipartite {arrival graph} on the bipartition ${\mathcal{V}}_1\cup{\mathcal{V}}_2$.
Importantly, one can observe that the GM is in fact a particular case of EBM, taking the bipartite double cover of $G$ as compatibility graph, and duplicating arrivals
with a copy of an item of the same class.
The main question raised in \cite{MaiMoy16} is the shape of the {stability region} of the model, that is, the set of probability measures on ${\mathcal{V}}$ rendering the corresponding system stable. Partly relying on the aforementioned connection between GM and EBM, and the results of \cite{BGMa12}, \cite{MaiMoy16} show that the stability region is always included in a designated set, namely the set of measures satisfying the natural necessary condition (\ref{eq:Ncond}) below. The form of the stability region is then heavily dependent on the
structural properties of the compatibility graph, and on the {matching policy}, i.e. the rule of choice of a match for an entering item whenever several matches are possible. A matching policy is then said to have a {maximal} stability region for $G$ if the system is stable for any measure satisfying (\ref{eq:Ncond}). It is shown in \cite{MaiMoy16} that a bipartite $G$ makes the stability region empty, that a designated class of graphs (the so-called non-bipartite separable ones - precisely defined below) makes the stability region maximal for all matching policies, and that the policy 'Match the Longest' always has a maximal stability region for a non-bipartite $G$.
Applying fluid (in)stability arguments to a continuous-time version of the GM,
\cite{MoyPer17} show that, aside from a very particular class of graphs, whenever $G$ is not separable there {\em always} exists a policy of the strict priority type that does not have a maximal stability region, and that the 'Uniform' random policy (natural in the case where no information is available to the entering items on the state of the system) never has a maximal stability region,
thereby providing a partial converse of the result in \cite{MaiMoy16}. A related model is studied in \cite{GurWa}, which draws a comparison of matching policies in the case where the matching structure is a particular hypergraph, that is, items may be matched by groups of more than two.
In the first part of this work, we are concerned with the stability region of the GM under the 'First Come, First Matched' (FCFM) policy, consisting in always performing the match of the entering item with the oldest compatible item in the system. Compared to the aforementioned stability studies, this matching policy raises technical problems, mainly due to the infinite dimension of the state space of its natural Markov representation. In the history of study of the BM,
the corresponding 'First Come, First Served' (FCFS) policy was the first one considered in the seminal papers \cite{CKW09,AdWe}, where the existence of a stationary matching under a
natural resource pooling condition analogous to (\ref{eq:Ncond}) is shown. Further, \cite{ABMW17} recently show that the stationary state takes a remarkable product form, obtained using an original dynamic reversibility argument. However, these results cannot be directly applied to the present context, for the GM is not a particular case of a BM, but of an EBM, for which the latter reversibility argument is unlikely to hold in general. Moreover the maximality of the stability region of FCFS for the EBM is conjectured, but left as an open problem in \cite{BGMa12}. We show hereafter that we can in fact construct a reversibility scheme that is closely related to the one proposed in \cite{ABMW17} for the BM, to show that the stability region of FCFM is indeed maximal for the GM, and that the stationary state of the Markov representation also satisfies a product form.
It is well known since the pioneering works of Loynes \cite{Loynes62} and then Borovkov \cite{Bor84}, that backwards schemes and specifically strong backwards coupling convergence, can lead to an explicit construction
of the stationary state of the system under consideration within its stability region.
One can then use pathwise representations to compare systems in steady state, via the stochastic ordering of a given performance metric (see Chapter 4 of \cite{BacBre02} on such comparison results for queues).
Moreover, we know since the seminal work of Propp and Wilson \cite{PW96} that coupling from the past algorithms (which essentially use backwards coupling convergence) provide a powerful tool for simulating the steady state of the system. In the second part of this work, we aim at achieving such results for the general matching model: under various conditions, we construct a stationary version of the system under general stationary ergodic assumptions,
via a stochastic recursive representation of the system on the canonical space of its bi-infinite input.
For this, we first observe that most usual matching policies (including FCFM, the optimal 'Match the Longest' policy, and - possibly randomized - priorities) satisfy a remarkable sub-additive property,
which allows us to construct the appropriate backwards scheme to achieve this explicit construction.
These results lead to stability conditions for various models, under stationary ergodic assumptions that subsume the markovian (i.e., iid) settings.
Second, in some cases (including iid), we construct a unique (up to the natural parity of the model, in a sense that will be specified below) stationary perfect matching, by strong backwards coupling.
The paper is organized as follows. In Section \ref{sec:model} we introduce and formalize our model.
In Section \ref{sec:FCFM} we develop our reversibility scheme for the FCFM system, leading to our main result of the first part, Theorem \ref{thm:main} in subsection \ref{subsec:stabFCFM}, which establishes the existence of a stationary probability under a product form for the natural Markov representation of the system, under the natural condition (\ref{eq:Ncond}).
Our coupling result is then presented in Section \ref{sec:coupling}, including the algebraic study of sub-additive policies in Section \ref{subsec:subadd}, the construction of renovating events {\em \`a la} Borovkov and Foss in Section \ref{subsec:renov}, and the explicit constructions of perfect bi-infinite matchings for sub-additive policies, in Section \ref{subsec:matching}.
\section{The model}
\label{sec:model}
\subsection{General notation}
\label{subsec:notation}
Denote by ${\mathbb R}$ the real line, by ${\mathbb N}$ the set of non-negative integers and by ${\mathbb N}_+$, the subset of positive integers. For any two integers $m$ and $n$, denote by $\llbracket m,n \rrbracket=[m,n] \cap {\mathbb N}$.
For any finite set $A$, let $S_A$ be the group of permutations of $A$, and for all permutation $s \in S_A$ and any $a\in A$, let $s[a]$ be the image of $a$ by $s$.
Let $A^*$ be the set of finite words over the alphabet $A$.
Denote by $\emptyset$, the empty word of $A^*$.
For any word $w \in A^*$ and any subset $B$ of $A$, we
let $\norm{w}_B$ be the number of occurrences of elements of $B$ in $w$.
For any letter $a\in A$, we denote
$\norm{w}_a:=\norm{w}_{\{a\}}$ the number of occurrences of the letter $a$ in $w$, and we let $\norm{w}=\sum_{a\in A} \norm{w}_a$ be the
{\em length} of $w$. For a word $w \in A^*$ of length $\norm{w}=q$, we write $w=w_1w_2...w_q$, i.e. $w_{i}$ is the $i$-th letter
of the word $w$. In the proofs below, we understand the word $w_1...w_k$ as $\emptyset$ whenever $k=0$.
Also, for any $w \in A^*$ and any $i\in \llbracket 1,\norm{w} \rrbracket$, we denote by $w_{[i]}$, the word of length $\norm{w}-1$ obtained from $w$ by deleting
its $i$-th letter. We let $[w]:=(\norm{w}_a)_{a\in A}\in {\mathbb N}^A$ be the {\em commutative
image} of $w$. Finally, a {\em right sub-word} of the word $w=w_1...w_k$ is a word $w_j...w_k$ obtained by deleting the first $j-1$ letters of $w$, for $j \in \llbracket 1,k \rrbracket$.
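As a quick illustration of this notation (a hypothetical Python encoding, with words over the alphabet represented as strings), the sketch below computes $\norm{w}_a$, the commutative image $[w]$, and $w_{[i]}$:

```python
def occurrences(w, a):
    # |w|_a : number of occurrences of the letter a in the word w
    return w.count(a)

def commutative_image(w, alphabet):
    # [w] = (|w|_a)_{a in A}
    return tuple(w.count(a) for a in alphabet)

def delete_letter(w, i):
    # w_[i] : the word w with its i-th letter removed (1-indexed, as in the text)
    return w[:i - 1] + w[i:]

alphabet = "123"
w = "1321"
print(commutative_image(w, alphabet))  # (2, 1, 1)
print(delete_letter(w, 2))             # "121"
```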
For any $p \in {\mathbb N}_+$, a vector $x$ in the set $A^p$ is denoted $x=(x(1),...,x(p))$. For any $i\in\llbracket 1,p \rrbracket$, we denote by $\gre_i$ the $i$-th vector of the canonical basis of ${\mathbb R}^p$,
i.e. $\gre_i(j)=\delta_{ij}$ for any $j\in \llbracket 1,p \rrbracket$. The $\ell_1$ norm of ${\mathbb R}^p$ is denoted $\parallel . \parallel$.
Consider a simple graph $G=({\mathcal{V}},{\mathcal{E}})$, where ${\mathcal{V}}$ denotes the set of nodes, and ${\mathcal{E}} \subset {\mathcal{V}}\times {\mathcal{V}}$ is the set of edges.
We use the
notation $u {\--} v$ for $(u,v) \in {\mathcal{E}}$
and $u {\not\!\!\--} v$ for $(u,v) \not\in {\mathcal{E}}$.
For $U \subset {\mathcal{V}}$, we define $U^c = {\mathcal{V}} \setminus U$ and
\[
{\mathcal{E}}(U) = \{v \in {\mathcal{V}}\,:\, \exists u \in U, \ u
\-- v\}\:.
\]
An {\em independent set} of $G$ is a non-empty subset ${\mathcal{I}} \subset {\mathcal{V}}$
which does not include any pair of neighbors, {\em i.e.} $\bigl(\forall i\neq j \in {\mathcal{I}}, \ i {\not\!\!\--} j\bigr)$.
Let ${\mathbb{I}}(G)$ be the set of independent sets of $G$. An independent set ${\mathcal{I}}$ is said to be {\em maximal} if ${\mathcal{I}} \cup \{j\} \not\in {\mathbb{I}}(G)$ for any $j \not\in {\mathcal{I}}$.
\subsection{Formal definition of the model}
\label{subsec:model}
We consider a {\em general stochastic matching model}, as was defined in \cite{MaiMoy16}: items enter a system one by one, and each of them belongs to a
determinate class. The set of classes is denoted by ${\mathcal{V}}$, and identified with $\llbracket 1,|{\mathcal{V}}|\rrbracket$. We fix a connected simple graph $G=({\mathcal{V}},{\mathcal{E}})$ having set of nodes ${\mathcal{V}}$, termed {\em compatibility graph}.
Upon arrival, any incoming item of class, say, $i \in {\mathcal{V}}$ is either matched with an item present in the buffer, of a class $j$ such that
$i {\--} j$, if any, or if no such item is available, it is stored in the buffer to wait for its match.
Whenever several possible matches are possible for an incoming item $i$, a {\em matching policy} $\phi$ decides what is the match of $i$ without ambiguity.
Each matched pair departs the system right away.
We assume that the successive classes of entering items, and possibly their choices of match, are random.
We fix a probability space $(\Omega,\mathcal F,\mathbb P)$ on which all random variables (r.v.'s, for short) are defined, and view, throughout,
the input as a bi-infinite sequence $\left(V_n,\Sigma_n\right)_{n\in{\mathbb Z}}$ that is defined as follows:
first, for any $n \in {\mathbb Z}$ we let $V_n \in {\mathcal{V}}$ denote the class of the $n$-th incoming item.
Second, we introduce the set
\begin{equation*}
\mathcal S = S_{{\mathcal{E}}(1)} \times ... \times S_{{\mathcal{E}}(|{\mathcal{V}}|)},
\end{equation*}
in other words for any $\sigma = \left(\sigma(1),...,\sigma(|{\mathcal{V}}|)\right) \in \mathcal S$ and $i \in {\mathcal{V}}$,
$\sigma(i)$ is a permutation of the classes of items that are compatible with $i$ (which are identified with their indexes in $\llbracket 1,|{\mathcal{V}}| \rrbracket$).
Any array of permutations $\sigma \in \mathcal S$ is called {\em list of preferences}.
For any $n \in {\mathbb Z}$, we let $\Sigma_n$ denote the list of preferences at time $n$, i.e. if $\Sigma_n=\sigma$ and $V_n=v$, then the permutation $\sigma(v)$ represents the order of preference of the entering $v$-item
at $n$, among the classes of its possible matches.
\medskip
Along the various results in this work, we will consider the following statistical assumptions on the input sequence $\left(V_n,\Sigma_n\right)_{n\in{\mathbb Z}}$,
\begin{enumerate}
\item[\textbf{(H1)}] The sequence $\suitez{\left(V_{n},\Sigma_{n}\right)}$ is stationary and ergodic, drawn at all $n$ from a distribution having first marginal $\mu$ on $\mathcal V$ and second marginal $\nu_\phi$ on $\mathcal S$.
\item[\textbf{(H1')}] The sequence $\suitez{\left(V_{2n},\Sigma_{2n},V_{2n+1},\Sigma_{2n+1}\right)}$ is stationary and ergodic, drawn at all $n$ from a distribution of first marginal $\mu^0$ on $\mathcal V$, third marginal $\mu^1$ on $\mathcal V$, and second and fourth marginals $\nu_\phi$ on $\mathcal S$. We denote $\mu:={\mu^0+\mu^1 \over 2}$.
\item[\textbf{(H1'')}] The sequence $\suitez{\left(V_{2n-1},\Sigma_{2n-1},V_{2n},\Sigma_{2n}\right)}$ is stationary and ergodic, drawn at all $n$ from a distribution of first marginal $\mu^0$ on $\mathcal V$, third marginal $\mu^1$ on $\mathcal V$, and second and fourth marginals $\nu_\phi$ on $\mathcal S$. We denote $\mu:={\mu^0+\mu^1 \over 2}$.
\item[\textbf{(IID)}] The sequence $\suitez{\left(V_{n},\Sigma_{n}\right)}$ is iid from the distribution $\mu \otimes \nu_\phi$ on $\mathcal V\times\mathcal S$.
\end{enumerate}
Under either one of the above conditions, we assume that $\mu$ has full support ${\mathcal{V}}$ (we write $\mu \in \mathcal M({\mathcal{V}})$).
Then, the matching policy $\phi$ will be formalized by an operator mapping the system state onto the next one, given the class of the entering
item and the list of preferences at this time. The matching policies we consider are presented in detail in Section \ref{subsec:pol}.
Altogether, the compatibility graph $G$, the matching policy $\phi$ and the measure $\mu$ (under assumptions (H1) and (IID)) or the measures $\mu^0$ and $\mu^1$ (under assumptions (H1') and (H1'')) fully specify the model, which we denote, for short, the general matching (GM) model associated with $(G,\mu,\phi)$ under assumptions (H1) and (IID), or with $(G,(\mu^0,\mu^1),\phi)$ under assumptions (H1') and (H1'').
\subsection{State spaces}
\label{subsec:state}
Fix the compatibility graph $G=({\mathcal{V}},{\mathcal{E}})$ until the end of this section.
Fix an integer $n_0 \ge 1$, and two realizations $v_1,...v_{n_0}$ of $V_1,...,V_{n_0}$ and $\sigma_1,...,\sigma_{n_0}$ of $\Sigma_1,...,\Sigma_{n_0}$.
Define the two words $z\in {\mathcal{V}}^*$ and $\varsigma\in\mathcal S^*$ respectively by $z:=v_1v_2...v_{n_0}$ and $\varsigma:=\sigma_1\sigma_2...\sigma_{n_0}$.
Then, for any matching policy $\phi$, there exists a unique {\em matching} of the word $z$ associated to $\sigma$, that is, a graph having set of nodes
$\left\{v_1,...,v_{n_0}\right\}$ and whose edges represent the matches performed in the system until time $n_0$, if the successive arrivals are given by $z$ and the lists of preferences by $\sigma$.
This matching is denoted by $M_\phi(z,\varsigma)$.
The state of the system is then defined as the word $Q_\phi(z,\varsigma)\in {\mathcal{V}}^*$, whose letters are the classes of the unmatched items at $n_0$,
i.e. the isolated vertices in the matching $M_{\phi}(z,\varsigma)$, in their order of arrivals. The word $Q_\phi(z,\varsigma)$ is called {\em queue detail} at time $n_0$.
Observe that any admissible queue detail belongs to the set
\begin{equation}
\mathbb W = \Bigl\{ w\in {\mathcal{V}}^*\; : \; \forall (i,j) \in {\mathcal{E}}, \; |w|_i|w|_j=0 \Bigr\}.\label{eq-ncss}
\end{equation}
As will be seen below, depending on the matching policy $\phi$ we can also restrict the available information on the state of the system at time $n_0$, to a vector only keeping track of
the number of items of the various classes remaining unmatched at $n_0$, that is, of the number of occurrences of the various letters of the alphabet ${\mathcal{V}}$ in the word $Q_\phi(z,\varsigma)$.
This restricted state thus equals the commutative image of $Q_{\phi}(z,\varsigma)$, and is called {\em class detail} of the system. It takes values in the set
\begin{equation}
\mathbb X = \Bigl\{x \in {\mathbb N}^{|{\mathcal{V}}|}\,:\,x(i)x(j)=0\mbox{ for any }(i,j)\in {\mathcal{E}}\Bigr\}=\Bigl\{\left[w\right];\,w \in \mathbb W\Bigr\}.\label{eq-css}
\end{equation}
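The constraint defining $\mathbb W$ is straightforward to check; the illustrative sketch below (a hypothetical helper, with the edge set given as pairs of class labels) tests whether a word is an admissible queue detail:

```python
def is_admissible(w, edges):
    # w is in W iff no two letters of w are neighbors in G:
    # for every edge (i, j), |w|_i * |w|_j == 0
    return all(w.count(i) * w.count(j) == 0 for (i, j) in edges)

# path graph 1 - 2 - 3
edges = [("1", "2"), ("2", "3")]
print(is_admissible("113", edges))  # True: 1 and 3 are not neighbors
print(is_admissible("12", edges))   # False: 1 - 2 is an edge
```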
\subsection{Matching policies}
\label{subsec:pol}
We now present and define formally, the set of matching policies which we consider.
\begin{definition}
A matching policy $\phi$ is said {\em admissible} if the choice of match of an incoming item depends {\em solely} on the queue detail
and the list of preferences drawn upon arrival.
\end{definition}
In other words, if a matching policy $\phi$ is admissible there exists a mapping $\odot_{\phi}: \mathbb W\times ({\mathcal{V}}\times \mathcal S) \rightarrow \mathbb W$ such that,
denoting by $w$ the queue detail at a given time, and by $w'$ the queue detail if the input is augmented by the arrival of a couple $(v,\sigma) \in {\mathcal{V}}\times \mathcal S$, then
$w'$ and $w$ are connected by the relation
\begin{equation}
\label{eq:defodot}
w'= w\odot_{\phi} (v,\sigma).
\end{equation}
\subsubsection{FCFM and LFCM}
The first two policies we introduce are
First Come, First Matched and Last Come, First Matched. For both policies, the order of preference of each class is irrelevant, and so the
following construction is independent of the preference $\sigma$.
\paragraph{First Come, First Matched.}
In First Come, First Matched ({\sc fcfm}) the map $\odot_{\textsc{fcfm}}$ is given for all $w \in \mathbb W$ and all couples $(v,\sigma)$, by
$$
w \odot_{\textsc{fcfm}} (v,\sigma) =
\left \{
\begin{array}{ll}
wv & \textrm{if } \; |w|_{{\mathcal{E}}(v)} = 0;\\
w_{\left [\Phi(w,v)\right]} & \textrm{else, where }\Phi(w,v) = \min \left\{k \in \llbracket 1,|w| \rrbracket\,:\,w_k\in{\mathcal{E}}(v)\right\}.
\end{array}
\right .
$$
\paragraph{Last Come, First Matched.}
For the Last Come, First Matched ({\sc lcfm}) matching policy, the updating map $\odot_{\textsc{lcfm}}$ is analogous to $\odot_{\textsc{fcfm}}$, for
$\Phi(w,v) = \max\left\{k \in \llbracket 1,|w| \rrbracket\,:\,w_k\in{\mathcal{E}}(v)\right\}.$
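Both update maps can be sketched directly (illustrative Python, not from the paper: queue details as strings, and the neighborhoods ${\mathcal{E}}(v)$ given by a hypothetical dict `E`):

```python
def match_fcfm(w, v, E):
    # w ⊙ (v, ·): match the entering v-item with the oldest compatible
    # unmatched item, if any; otherwise append v to the queue detail
    compat = [k for k, c in enumerate(w) if c in E[v]]
    if not compat:
        return w + v
    i = min(compat)            # FCFM: oldest compatible item
    return w[:i] + w[i + 1:]

def match_lcfm(w, v, E):
    compat = [k for k, c in enumerate(w) if c in E[v]]
    if not compat:
        return w + v
    i = max(compat)            # LCFM: newest compatible item
    return w[:i] + w[i + 1:]

# path graph 1 - 2 - 3: E(2) = {1, 3}
E = {"1": {"2"}, "2": {"1", "3"}, "3": {"2"}}
print(match_fcfm("311", "2", E))  # "11": the oldest compatible item (3) leaves
print(match_lcfm("311", "2", E))  # "31": the newest compatible item (1) leaves
```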
\subsubsection{Matching policies that only depend on the class detail}
A matching policy $\phi$ is said to be {\em class-admissible} if it can be implemented
by knowing only the class detail of the system. Let us define for any $v \in {\mathcal{V}}$ and $x\in \mathbb X$,
\begin{equation*}
{\mathcal{P}}(x,v) =\Bigl\{j\in {\mathcal{E}}(v)\,:\,x\left(j\right) > 0\Bigl\},\label{eq:setP2}
\end{equation*}
the set of classes of available compatible items with the entering class $v$-item, if the class detail of the system is given by $x$.
Then, a class-admissible policy $\phi$ is fully characterized by the probability distribution $\nu_\phi$ on $\mathcal S$,
together with a mapping $p_\phi$ such that $p_\phi(x,v,\sigma)$ denotes the class of the match chosen
by the entering $v$-item under $\phi$ for a list of preferences $\sigma$, in a system of class detail $x$ such that $\mathcal P(x,v)$ is non-empty.
Then the arrival of $v$ and the draw of $\sigma$ from $\nu_\phi$ corresponds to the following action on
the class detail,
\begin{equation}
\label{eq:defccc}
x \ccc_{\phi} (v,\sigma) = \left \{
\begin{array}{ll}
x+\gre_v &\mbox{ if }\mathcal P(x,v)=\emptyset,\\
x-\gre_{p_\phi(x,v,\sigma)}&\mbox{ else}.
\end{array}
\right .
\end{equation}
\begin{remark}
\label{rem:equiv}
As is easily seen, to any class-admissible policy $\phi$ corresponds an admissible policy, if one makes precise the rule of choice of match for the
incoming items {\em within} the class that is chosen by $\phi$, in the case where more than one item of that class is present in the system.
In this paper, we always make the assumption that within classes, the item chosen is always the {\em oldest} in line, i.e. we always apply an FCFM policy {\em within classes}.
Under this convention, any class-admissible policy $\phi$ is admissible, that is, the mapping $\ccc_\phi$ from
$\mathbb X\times ({\mathcal{V}} \times \mathcal S)$ to $\mathbb X$ can be detailed into a map $\odot_{\phi}$ from
$\mathbb W \times ({\mathcal{V}} \times \mathcal S)$ to $\mathbb W$, as in (\ref{eq:defodot}), that is such that for any queue detail $w$ and any $(v,\sigma)$,
\[\left[w\odot_\phi (v,\sigma)\right] = \left[w\right]\ccc_\phi (v,\sigma).\]
\end{remark}
\paragraph{Random policies.}
In a random policy, the only information that is needed to determine the choice of matches for the incoming items, is whether their various compatible classes have an empty queue or not.
Specifically, the order of preference of each incoming item is drawn upon arrival following the prescribed probability distribution. Then the considered item investigates its compatible classes in that order,
until it finds one having a non-empty buffer, if any. The item is then matched with an item of the latter class. In other words, a list of preferences $\sigma=\left(\sigma(1),...,\sigma\left(|{\mathcal{V}}|\right)\right)$ is drawn from $\nu_\phi$ on $\mathcal S$, and we set
\begin{equation}
p_{\phi}(x,v,\sigma)=\sigma(v)[k],\mbox{ where }k=\min \Bigl\{i \in
\llbracket 1,|{\mathcal{E}}(v)| \rrbracket\,:\,\sigma(v)[i]\in {\mathcal{P}}(x,v)\Bigr\}\label{eq:pphirandom}.
\end{equation}
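The scan defining $p_\phi$ in a random policy is a one-line search; a minimal sketch (the representation of the class detail as a dictionary of counts and the function name are illustrative choices):

```python
def p_random(x, sigma_v):
    """First class in the preference list sigma_v (the drawn order over
    E(v)) whose buffer is non-empty; the caller guarantees one exists."""
    return next(j for j in sigma_v if x.get(j, 0) > 0)
```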
\paragraph{Priority policies.}
A strict priority policy is such that, for any $v\in {\mathcal{V}}$,
the order of preference of $v$ in ${\mathcal{E}}(v)$ is deterministic. This is thus a particular case of random policy in which a list of preference $\sigma^* \in \Sigma$ is
fixed beforehand, in other words $\nu_\phi=\delta_{\sigma^*}$ and (\ref{eq:pphirandom}) holds for $\sigma:=\sigma^*$.
\paragraph{Uniform.}
The uniform policy {\sc u} is another particular case of random policy, such that $\nu_\phi$ is the uniform distribution on $\mathcal S$.
Consequently, for any $i \in {\mathcal{V}}$, the order of preference $\sigma(i)$ over ${\mathcal{E}}(i)$ is drawn uniformly at random among all such orderings.
\paragraph{Match the Longest.}
In 'Match the Longest' ({\sc ml}),
the newly arrived item chooses an item of the compatible class that has the longest line. Ties are broken by a
uniform draw between classes having queues of the same length.
Formally, set for all $x$ and $v$ such that $\mathcal P(x,v) \ne \emptyset$,
\begin{equation*}
L(x,v) =\max\left\{x(j)\,:\,j \in {\mathcal{E}}(v)\right\}\,\quad\mbox{ and }\quad\,
\mathcal L(x,v) =\left\{i\in {\mathcal{E}}(v)\,:\,x\left(i\right)=L(x,v)\right\}\subset {\mathcal{P}}(x,v).
\end{equation*}
Then, set $\nu_\phi$ as the uniform distribution on $\mathcal S$.
If the resulting sample is $\sigma$, we have
\begin{equation*}
p_{\textsc{ml}}(x,v,\sigma) =\sigma(v)[k],\mbox{ where }k=\min \Bigl\{i \in \llbracket 1,|{\mathcal{E}}(v)| \rrbracket\,:\,\sigma(v)[i]\in \mathcal L(x,v)\Bigr\}.
\end{equation*}
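The 'Match the Longest' choice can be sketched as follows, on an illustrative graph; here a direct uniform tie-break stands in for the draw of $\sigma$:

```python
import random

# Illustrative compatibility graph (diamond: 1-2, 2-3, 2-4, 3-4).
EDGES = {1: {2}, 2: {1, 3, 4}, 3: {2, 4}, 4: {2, 3}}

def p_ml(x, v, rng=random):
    """'Match the Longest': pick a compatible non-empty class of maximal
    queue length L(x, v); ties are broken uniformly at random."""
    P = [j for j in EDGES[v] if x.get(j, 0) > 0]
    L = max(x[j] for j in P)                      # L(x, v)
    return rng.choice([j for j in P if x[j] == L])
```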
\paragraph{Match the Shortest.}
The 'Match the Shortest' policy is defined similarly to 'Match
the Longest', except that the shortest queue is chosen instead of
the longest. It is denoted {\sc ms}.
\subsection{Markov representation}
\label{subsec:Markov}
Fix a (possibly random) word
$w \in \mathbb W$ and a word $\varsigma \in \mathcal S^*$ having the same length as $w$. Denote for all $n\ge 0$ by $W^{[w]}_n$ the buffer content at time $n$
(i.e. just before the arrival of item $n$) if the buffer content at time 0 was set to $w$, in other words
\[W^{[w]}_n= Q_\phi\left(wV_0...V_{n-1}\,,\,\varsigma \Sigma_0...\Sigma_{n-1}\right).\]
It follows from (\ref{eq:defodot}) that the buffer-content sequence is stochastic recursive, since we have that
\[\left\{\begin{array}{ll}
W^{[w]}_0 &= w;\\
W^{[w]}_{n+1} &=W^{[w]}_n \odot_\phi (V_n,\Sigma_n),\,n\in{\mathbb N},
\end{array}\right.\quad\mbox{ a.s.}\]
Moreover, it follows from (\ref{eq:defccc}) that for any matching policy $\phi$ that depends only on the class detail of the system (e.g. $\phi=\textsc{random}, \textsc{ml}$ or $\textsc{ms}$),
and for any initial condition as above, the $\mathbb X$-valued sequence $\suite{X_n}$ of class details is also stochastic recursive: for any initial condition $x \in \mathbb X$,
\begin{equation}
\label{eq:recurW}
\left\{\begin{array}{ll}
X^{[x]}_0 &= x;\\
X^{[x]}_{n+1} &=X^{[x]}_n \ccc_\phi (V_n,\Sigma_n),\,n\in{\mathbb N},
\end{array}\right.\quad\mbox{ a.s.}
\end{equation}
\section{A product form for the FCFM model}
\label{sec:FCFM}
Throughout this section, suppose that the input of the system satisfies assumption (IID). Then, for any connected graph $G$ and any admissible policy $\phi$, the queue detail $\suite{W_n}$ is a $\mathbb W$-valued
$\mathcal F_n$-Markov chain, where $\suite{\mathcal F_n}$ is the natural filtration of the sequence $\suite{(V_n,\Sigma_n)}$. This sequence will be termed the {\em natural chain} of the system.
In line with \cite{MaiMoy16}, for any admissible matching policy $\phi$ we define the {\em stability region} associated to $G$ and $\phi$ as the set of measures
\begin{equation}
\label{eq:defstab}
\textsc{stab}(G,\phi) := \left\{\mu \in \mathcal M\left({\mathcal{V}}\right)\,:\,\suite{W_n} \mbox{ is positive recurrent}\right\}.
\end{equation}
Consider also the set
\begin{equation}
\label{eq:Ncond}
\textsc{Ncond}(G):= \left\{\mu \in \mathcal M\left({\mathcal{V}}\right)\,:\,\mbox{for any }{\mathcal{I}} \in {\mathbb{I}}(G), \mu\left({\mathcal{I}}\right) < \mu\left({\mathcal{E}}\left({\mathcal{I}}\right)\right)\right\}
\end{equation}
which, from Theorem 1 in \cite{MaiMoy16}, is non-empty if and only if $G$ is non-bipartite.
From Proposition 2 in [{\em ibid.}], we know that $\textsc{stab}(G,\phi) \subset \textsc{Ncond}(G)$ for any admissible $\phi$.
The policy $\phi$ is said to be {\em maximal} if these two sets coincide. Theorem 2 in [{\em ibid.}] establishes the maximality
of {\sc ml} for any non-bipartite graph; however, priority policies and the uniform policy are in general not maximal (respectively, Theorem 3 and Proposition 7 in \cite{MoyPer17}).
This section is devoted to proving the maximality of First Come, First Matched, by
constructing explicitly the stationary distribution of the natural chain on $\mathbb W$. Interestingly enough, this probability distribution has a remarkable product form, detailed
in (\ref{eq:PiW}).
\begin{theorem}
\label{thm:main}
Let $G=({\mathcal{V}},{\mathcal{E}})$ be a non-bipartite graph. Then the sets $\textsc{stab}(G,\textsc{fcfm})$ and $\textsc{Ncond}(G)$ coincide, in other words the general stochastic matching model $(G,\mu,\textsc{fcfm})$ is stable if and only if
$\mu$ satisfies condition (\ref{eq:Ncond}). In that case, the unique stationary distribution of the natural chain $\suite{W_n}$ is given by
\begin{equation}
\label{eq:PiW}
\Pi_W\left(w\right)=\alpha\prod\limits_{\ell =1}^q {\mu(w_\ell) \over \mu\Bigl({\mathcal{E}}\left(\left\{w_1,...,w_\ell\right\}\right)\Bigr)},\,
\mbox{ for any }w=w_1...w_q \in {\mathcal{V}}^*,
\end{equation}
where $\alpha$ is the normalizing constant of the measure $\Pi_B$ defined by (\ref{eq:PiB}) below.
\end{theorem}
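The unnormalized weight in (\ref{eq:PiW}) is readily evaluated for a given word. A sketch, on an illustrative graph and with the normalization by $\alpha$ omitted:

```python
# Illustrative compatibility graph (diamond: 1-2, 2-3, 2-4, 3-4).
EDGES = {1: {2}, 2: {1, 3, 4}, 3: {2, 4}, 4: {2, 3}}

def pi_w_weight(w, mu):
    """Unnormalized weight of the queue detail w = w_1...w_q:
    the product over l of mu(w_l) / mu(E({w_1, ..., w_l}))."""
    weight, prefix = 1.0, set()
    for c in w:
        prefix.add(c)
        neighborhood = set().union(*(EDGES[j] for j in prefix))
        weight *= mu[c] / sum(mu[j] for j in neighborhood)
    return weight
```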
The remainder of this section is devoted to the proof of Theorem \ref{thm:main}, which, as will be demonstrated below, relies on a subtle reversibility
scheme related to the reversibility argument for the BM model in \cite{ABMW17}. Observe, however, that the GM model is {\em not} a particular case of the BM model,
so the proof below presents many specificities with respect to \cite{ABMW17}.
\subsection{Other notation}
Before proceeding, we first need to introduce an additional piece of notation.
For $w=w_1w_2\,...\,w_q \in {\mathcal{V}}^*$, we denote by $\cev w$ the reversed version of $w$, i.e.
\[\cev w=w_qw_{q-1}...w_2w_1.\]
Let $\td{\mathcal{V}}$ be a {\em disjoint copy} of the set ${\mathcal{V}}$, i.e., $\td{\mathcal{V}}$ is a set of cardinality $|{\mathcal{V}}|$, disjoint from ${\mathcal{V}}$,
and we define the bijection
\[\left\{\begin{array}{ll}
{\mathcal{V}} &\longrightarrow \td{\mathcal{V}};\\
a & \longmapsto \td a.
\end{array}
\right.\]
For any $\td a \in \td {\mathcal{V}}$, let us also denote $\td{\td a}=a$. Then, we say that $\td a$ is the {\em counterpart} of $a$ and vice-versa.
\medskip
Let ${\mathbf V}:={\mathcal{V}} \cup \td{\mathcal{V}}.$ For any word $\mathbf w \in \mathbf V^*$, denote by ${\mathcal{V}}(\mathbf w)$ (respectively, $\td{\mathcal{V}}(\mathbf w)$) the set of letters of ${\mathcal{V}}$ (resp., $\td{\mathcal{V}}$) that are present in $\mathbf w$, in other words
\begin{equation*}
{\mathcal{V}}(\mathbf w) =\Bigl\{a \in {\mathcal{V}}\,:\,\norm \mathbf w_a >0\Bigr\};\quad \quad\td{\mathcal{V}}(\mathbf w) =\Bigl\{\td a \in \td{\mathcal{V}}\,:\,\norm \mathbf w_{\td a} >0\Bigr\}.
\end{equation*}
For any $\mathbf w \in {\mathbf V}^*$, the {\em restriction} of $\mathbf w$ to ${\mathcal{V}}$ (respectively, to $\td{\mathcal{V}} $) is the word
$\mathbf w|_{{\mathcal{V}}} \in {\mathcal{V}}^*$ (resp., $\mathbf w|_{\td{\mathcal{V}}} \in \td{\mathcal{V}}^*$) of size $\norm{\mathbf w}_{\mathcal{V}}$ (resp. of size $\norm{\mathbf w}_{\td{\mathcal{V}}}$),
obtained by keeping only the letters belonging to ${\mathcal{V}}$ (resp. to $\td{\mathcal{V}}$) in $\mathbf w$, in the same order.
The {\em dual} $\td \mathbf w$ of the word $\mathbf w=\mathbf w_1\mathbf w_2...\mathbf w_q \in {\mathbf V}^*$ is the word obtained
by exchanging the letter of $\mathbf w$ belonging to ${\mathcal{V}}$ with their counterpart in $\td{\mathcal{V}}$, and vice-versa. In other words,
\[\td \mathbf w=\td{\mathbf w_1}\,\td{\mathbf w_2}\,...\,\td{\mathbf w_q}.\]
\begin{ex}
Take for instance
$\mathbf w=a\,b\,\td a\,c\,\td b\,\td c\,\td b\,d\,a$. Then we obtain
\begin{align*}
{\mathcal{V}}(\mathbf w) &=\left\{a,b,c,d\right\},\,\quad\,\td{\mathcal{V}}(\mathbf w) =\left\{\td a, \td b,\td c\right\};\\
\mathbf w|_{{\mathcal{V}}} &=a\,b\,c\,d\,a,\,\quad\,\mathbf w|_{\td{\mathcal{V}}}=\td a\,\td b\,\td c\,\td b;\\
\td \mathbf w &=\td a\,\td b\,a\,\td c\,b\,c\,b\,\td d\,\td a,\,\quad\,\cev \mathbf w = a\,d\,\td b\,\td c\,\td b\,c\,\td a\,b\,a.\end{align*}
\end{ex}
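These word operations are straightforward to mechanize. In the sketch below, letters of $\mathcal V$ are encoded as lowercase characters and their counterparts in $\td{\mathcal V}$ as the corresponding uppercase characters; this encoding is a convenience choice, not part of the model:

```python
def rev(w):
    """Reversed word."""
    return w[::-1]

def dual(w):
    """Dual word: exchange every letter with its counterpart."""
    return w.swapcase()

def restrict_V(w):
    """Restriction w|_V: keep only the letters of V, in the same order."""
    return ''.join(c for c in w if c.islower())

def restrict_tV(w):
    """Restriction w|_~V: keep only the letters of ~V, in the same order."""
    return ''.join(c for c in w if c.isupper())
```

On the word of the example above, $\mathbf w = a\,b\,\td a\,c\,\td b\,\td c\,\td b\,d\,a$ is encoded as \texttt{"abAcBCBda"}, and the four operations reproduce the displayed results.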
\subsection{Auxiliary Markov representations}
\label{subsec:auxMarkov}
We now introduce two auxiliary Markov representations of the system: the $\mathbf V^*$-valued Backwards and Forwards detailed chains, similar in construction to the backwards and forwards 'pair by pair detailed FCFS matching processes', introduced in subsection 5.1 of \cite{ABMW17}.
\paragraph{Backwards detailed chain.}
We define the ${\mathbf V}^*$-valued {backwards detailed process} $\suite{B_n}$ as follows:
$B_0=\emptyset$ and for any $n\ge 1$,
\begin{itemize}
\item if $W_n=\emptyset$ (i.e. all the items arrived up to time $n$ are matched at time $n$), then we set $B_n=\emptyset$ as well;
\item if not, we let $i(n)\le n$ be the index of the oldest item in line.
Then, the word $B_n$ is of length $n-i(n)+1$, and for any $\ell \in \llbracket 1,n-i(n)+1 \rrbracket$, we set
\[B_n(\ell)=\left\{\begin{array}{ll}
V_{i(n)+\ell-1} \,\, &\mbox{if $V_{i(n)+\ell-1}$ has not been matched up to time $n$};\\
\td{V_{k}}\,\, &\mbox{if $V_{i(n)+\ell-1}$ is matched at or before time $n$, with item $V_k$ (where $k \le n$)}.\\
\end{array}\right.\]
\end{itemize}
In other words, the word $B_n$ records, for each item entered since the arrival of the oldest unmatched item: its class index, if it is still
unmatched at time $n$; and the copy of the class index of its match, otherwise.
Observe that we necessarily have that $B_n(1)=V_{i(n)}\in {\mathcal{V}}$. Moreover,
the word $B_n$ necessarily contains all the letters of $W_n$. More precisely, we have
\begin{equation}
\label{eq:WB}
W_n = B_n|_{{\mathcal{V}}},\,\,\,n\ge 0.
\end{equation}
It is easily seen that $\suite{B_n}$ also is a ${\mathcal{F}}_n$-Markov chain: for any $n\ge 0$, the value of $B_{n+1}$ can be deduced from that of
$B_n$ and the class $V_{n+1}$ of the item entered at time $n+1$.
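The construction of $B_n$ from an arrival sequence can be sketched as follows, under {\sc fcfm} (each entering item is matched with the oldest compatible item in line). The graph is illustrative, and a copy $\td a$ is encoded by the prefix \texttt{\textasciitilde}:

```python
# Illustrative compatibility graph (diamond: 1-2, 2-3, 2-4, 3-4).
EDGES = {1: {2}, 2: {1, 3, 4}, 3: {2, 4}, 4: {2, 3}}

def backwards_chain(arrivals, edges):
    """Successive values of B_n under FCFM: each unmatched item keeps its
    class; a matched item is replaced by the copy ('~') of its partner's
    class; the prefix before the oldest unmatched item is dropped."""
    line, label, hist = [], {}, []     # line: (index, class), oldest first
    for n, v in enumerate(arrivals):
        for pos, (i, u) in enumerate(line):
            if u in edges[v]:          # FCFM: oldest compatible item
                del line[pos]
                label[i] = '~' + str(v)
                label[n] = '~' + str(u)
                break
        else:
            line.append((n, v))
            label[n] = str(v)
        start = line[0][0] if line else n + 1
        hist.append(''.join(label[i] for i in range(start, n + 1)))
    return hist
```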
\paragraph{Forward detailed chain.}
The ${\mathbf V}^*$-valued {forward detailed process} $\suite{F_n}$ is defined as follows:
$F_0=\emptyset$ and for any $n\ge 1$,
\begin{itemize}
\item if $W_n=\emptyset$, then we also set $F_n=\emptyset$;
\item if not, let $j(n) > n$ be the largest index of an item that is matched with an item entered up to $n$ ($j(n)$ necessarily exists, for otherwise all items entered up to $n$ would have been matched by time $n$, and we would have $W_n=\emptyset$).
Then, the word $F_n$ is of size $j(n)-n$ and for any
$\ell \in \llbracket 1,j(n)-n \rrbracket$, we set
\[F_n(\ell)=\left\{\begin{array}{ll}
V_{n+\ell} \,\, &\mbox{if $V_{n+\ell}$ is not matched with an item arrived up to $n$};\\
\td{V_{k}}\,\, &\mbox{if $V_{n+\ell}$ is matched with item $V_k$, where $k \le n$}.
\end{array}\right.\]
\end{itemize}
In other words, the word $F_n$ records, for each item entered between times $n+1$ and $j(n)$: the copy of the class index of its match, if it is
matched with an item entered up to time $n$; and its own class index, otherwise.
Observe that $F_n\left(j(n)-n\right) \in \td{\mathcal{V}}$ since by definition, the item $V_{j(n)}$ is matched with some $V_k$ for $k\le n$, and therefore $F_n\left(j(n)-n\right)=\td V_k$.
It is also clear that $\suite{F_n}$ is a ${\mathcal{F}}_{n}$-Markov chain, as for any $n\ge 0$, the value of $F_{n+1}$ depends solely on $F_n$
and the class index $V_{j(n)+1}$ of the item entered at time $j(n)+1$.
\begin{ex}
\label{ex:trajBWF}
\emph{Consider the matching graph of Figure \ref{fig:example1}, addressed in Section 5 of \cite{MaiMoy16}
(this is the smallest graph that is neither bipartite nor separable). An arrival scenario together with successive values of the
natural chain, the backwards and the forwards chain are represented in Figure \ref{fig:example2}.}
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}
\draw[-] (2,3) -- (2,2);
\draw[-] (2,2) -- (1,1);
\draw[-] (2,2) -- (3,1);
\draw[-] (1,1) -- (3,1);
\fill (2,3) circle (2pt) node[right] {\small{1}} ;
\fill (2,2) circle (2pt) node[right] {\small{2}} ;
\fill (1,1) circle (2pt) node[below] {\small{3}} ;
\fill (3,1) circle (2pt) node[below] {\small{4}} ;
\end{tikzpicture}
\vspace*{-0.3cm}
\caption[smallcaption]{Matching graph of Example \ref{ex:trajBWF}.}
\label{fig:example1}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}[scale=0.7]
\draw[-] (-2,13) -- (19,13);
\draw[-] (-2,13) -- (-2,-2);
\draw[-] (-2,-2) -- (19,-2);
\draw[-] (19,-2) -- (19,13);
\draw[-,very thick] (-0.7,11.3) -- (-0.7,11.7);
\draw[-] (-1,11.5) -- (11,11.5);
\fill (0,11.5) circle (2pt) node[below] {\small{1}};
\fill (1,11.5) circle (2pt) node[below] {\small{3}};
\fill (2,11.5) circle (2pt) node[below] {\small{4}};
\fill (3,11.5) circle (2pt) node[below] {\small{$2$}};
\fill (4,11.5) circle (2pt) node[below] {\small{3}};
\fill (5,11.5) circle (2pt) node[below] {\small{1}};
\fill (6,11.5) circle (2pt) node[below] {\small{3}};
\fill (7,11.5) circle (2pt) node[below] {\small{2}};
\fill (8,11.5) circle (2pt) node[below] {\small{2}};
\fill (9,11.5) circle (2pt) node[below] {\small{1}};
\fill (10,11.5) circle (2pt) node[below] {\small{4}};
\fill (11,11.5) node[right]{\small{$W_0 =\emptyset,\quad B_0=\emptyset,\quad F_0=\emptyset$}};
%
\draw[-] (-1,10) -- (11,10);
\fill (0,10) circle (2pt) node[below] {\small{1}};
\draw[->, thin] (0,10) .. controls +(up:0.5cm) .. (3,10);
\draw[-,very thick] (0.3,9.8) -- (0.3,10.2);
\fill (1,10) circle (2pt) node[below] {\small{3}};
\fill (2,10) circle (2pt) node[below] {\small{4}};
\fill (3,10) circle (2pt) node[below] {\small{$\not 2 \, \bar 1$}};
\fill (4,10) circle (2pt) node[below] {\small{3}};
\fill (5,10) circle (2pt) node[below] {\small{1}};
\fill (6,10) circle (2pt) node[below] {\small{3}};
\fill (7,10) circle (2pt) node[below] {\small{2}};
\fill (8,10) circle (2pt) node[below] {\small{2}};
\fill (9,10) circle (2pt) node[below] {\small{1}};
\fill (10,10) circle (2pt) node[below] {\small{4}};
\fill (11,10) node[right]{\small{$W_1 =1,\quad B_1=1,\quad F_1=34\bar 1$}};
%
\draw[-] (-1,8.5) -- (11,8.5);
\fill (0,8.5) circle (2pt) node[below] {\small{1}};
\draw[->, thin] (0,8.5) .. controls +(up:0.5cm) .. (3,8.5);
\fill (1,8.5) circle (2pt) node[below] {\small{3}};
\draw[->, thin] (1,8.5) .. controls +(up:0.5cm) .. (2,8.5);
\draw[-,very thick] (1.3,8.3) -- (1.3,8.7);
\fill (2,8.5) circle (2pt) node[below] {\small{$\not 4\,\bar 3$}};
\fill (3,8.5) circle (2pt) node[below] {\small{$\not 2 \, \bar 1$}};
\fill (4,8.5) circle (2pt) node[below] {\small{3}};
\fill (5,8.5) circle (2pt) node[below] {\small{1}};
\fill (6,8.5) circle (2pt) node[below] {\small{3}};
\fill (7,8.5) circle (2pt) node[below] {\small{2}};
\fill (8,8.5) circle (2pt) node[below] {\small{2}};
\fill (9,8.5) circle (2pt) node[below] {\small{1}};
\fill (10,8.5) circle (2pt) node[below] {\small{4}};
\fill (11,8.5) node[right]{\small{$W_2 =13,\quad B_2=13,\quad F_2=\bar 3\bar 1$}};
%
\draw[-] (-1,7) -- (11,7);
\fill (0,7) circle (2pt) node[below] {\small{1}};
\draw[->, thin] (0,7) .. controls +(up:0.5cm) .. (3,7);
\fill (1,7) circle (2pt) node[below] {\small{$\not 3\,\bar 4$}};
\draw[->, thin] (1,7) .. controls +(up:0.5cm) .. (2,7);
\fill (2,7) circle (2pt) node[below] {\small{$\not 4\,\bar 3$}};
\draw[-,very thick] (2.3,6.8) -- (2.3,7.2);
\fill (3,7) circle (2pt) node[below] {\small{$\not 2 \, \bar 1$}};
\fill (4,7) circle (2pt) node[below] {\small{3}};
\fill (5,7) circle (2pt) node[below] {\small{1}};
\fill (6,7) circle (2pt) node[below] {\small{3}};
\fill (7,7) circle (2pt) node[below] {\small{2}};
\fill (8,7) circle (2pt) node[below] {\small{2}};
\fill (9,7) circle (2pt) node[below] {\small{1}};
\fill (10,7) circle (2pt) node[below] {\small{4}};
\fill (11,7) node[right]{\small{$W_3 =1,\quad B_3=1\bar 4\bar 3,\quad F_3=\bar 1$}};
%
\draw[-] (-1,5.5) -- (11,5.5);
\fill (0,5.5) circle (2pt) node[below] {\small{1}};
\draw[->, thin] (0,5.5) .. controls +(up:0.5cm) .. (3,5.5);
\fill (1,5.5) circle (2pt) node[below] {\small{3}};
\draw[->, thin] (1,5.5) .. controls +(up:0.5cm) .. (2,5.5);
\fill (2,5.5) circle (2pt) node[below] {\small{4}};
\fill (3,5.5) circle (2pt) node[below] {\small{2}};
\draw[-,very thick] (3.3,5.3) -- (3.3,5.7);
\fill (4,5.5) circle (2pt) node[below] {\small{3}};
\fill (5,5.5) circle (2pt) node[below] {\small{1}};
\fill (6,5.5) circle (2pt) node[below] {\small{3}};
\fill (7,5.5) circle (2pt) node[below] {\small{2}};
\fill (8,5.5) circle (2pt) node[below] {\small{2}};
\fill (9,5.5) circle (2pt) node[below] {\small{1}};
\fill (10,5.5) circle (2pt) node[below] {\small{4}};
\fill (11,5.5) node[right]{\small{$W_4=\emptyset,\quad B_4=\emptyset,\quad F_4=\emptyset$}};
%
\draw[-] (-1,4) -- (11,4);
\fill (0,4) circle (2pt) node[below] {\small{1}};
\draw[->, thin] (0,4) .. controls +(up:0.5cm) .. (3,4);
\fill (1,4) circle (2pt) node[below] {\small{3}};
\draw[->, thin] (1,4) .. controls +(up:0.5cm) .. (2,4);
\fill (2,4) circle (2pt) node[below] {\small{4}};
\fill (3,4) circle (2pt) node[below] {\small{2}};
\fill (4,4) circle (2pt) node[below] {\small{3}};
\draw[-,very thick] (4.3,3.8) -- (4.3,4.2);
\draw[->, thin] (4,4) .. controls +(up:0.5cm) .. (7,4);
\fill (5,4) circle (2pt) node[below] {\small{1}};
\fill (6,4) circle (2pt) node[below] {\small{3}};
\fill (7,4) circle (2pt) node[below] {\small{$\not 2\,\bar 3$}};
\fill (8,4) circle (2pt) node[below] {\small{2}};
\fill (9,4) circle (2pt) node[below] {\small{1}};
\fill (10,4) circle (2pt) node[below] {\small{4}};
\fill (11,4) node[right]{\small{$W_5 =3,\quad B_5=3,\quad F_5=13\bar 3$}};
%
\draw[-] (-1,2.5) -- (11,2.5);
\fill (0,2.5) circle (2pt) node[below] {\small{1}};
\draw[->, thin] (0,2.5) .. controls +(up:0.5cm) .. (3,2.5);
\fill (1,2.5) circle (2pt) node[below] {\small{3}};
\draw[->, thin] (1,2.5) .. controls +(up:0.5cm) .. (2,2.5);
\fill (2,2.5) circle (2pt) node[below] {\small{4}};
\fill (3,2.5) circle (2pt) node[below] {\small{2}};
\fill (4,2.5) circle (2pt) node[below] {\small{3}};
\draw[->, thin] (4,2.5) .. controls +(up:0.5cm) .. (7,2.5);
\fill (5,2.5) circle (2pt) node[below] {\small{1}};
\draw[-,very thick] (5.3,2.3) -- (5.3,2.7);
\draw[->, thin] (5,2.5) .. controls +(up:0.5cm) .. (8,2.5);
\fill (6,2.5) circle (2pt) node[below] {\small{3}};
\fill (7,2.5) circle (2pt) node[below] {\small{$\not 2\,\bar 3$}};
\fill (8,2.5) circle (2pt) node[below] {\small{$\not 2\,\bar 1$}};
\fill (9,2.5) circle (2pt) node[below] {\small{1}};
\fill (10,2.5) circle (2pt) node[below] {\small{4}};
\fill (11,2.5) node[right]{\small{$W_6 =31,\quad B_6=31,\quad F_6=3\bar 3\bar 1$}};
%
\draw[-] (-1,1) -- (11,1);
\fill (0,1) circle (2pt) node[below] {\small{1}};
\draw[->, thin] (0,1) .. controls +(up:0.5cm) .. (3,1);
\fill (1,1) circle (2pt) node[below] {\small{3}};
\draw[->, thin] (1,1) .. controls +(up:0.5cm) .. (2,1);
\fill (2,1) circle (2pt) node[below] {\small{4}};
\fill (3,1) circle (2pt) node[below] {\small{2}};
\fill (4,1) circle (2pt) node[below] {\small{3}};
\draw[->, thin] (4,1) .. controls +(up:0.5cm) .. (7,1);
\fill (5,1) circle (2pt) node[below] {\small{1}};
\draw[->, thin] (5,1) .. controls +(up:0.5cm) .. (8,1);
\fill (6,1) circle (2pt) node[below] {\small{3}};
\draw[-,very thick] (6.3,0.8) -- (6.3,1.2);
\draw[->, thin] (6,1) .. controls +(up:0.5cm) .. (10,1);
\fill (7,1) circle (2pt) node[below] {\small{$\not 2\,\bar 3$}};
\fill (8,1) circle (2pt) node[below] {\small{$\not 2\,\bar 1$}};
\fill (9,1) circle (2pt) node[below] {\small{1}};
\fill (10,1) circle (2pt) node[below] {\small{$\not 4\,\bar 3$}};
\fill (11,1) node[right] {\small{$W_7=313,\quad B_7=313,\quad F_7=\bar 3\bar 11\bar 3$}};
%
\draw[-] (-1,-0.5) -- (11,-0.5);
\fill (0,-0.5) circle (2pt) node[below] {\small{1}};
\draw[->, thin] (0,-0.5) .. controls +(up:0.5cm) .. (3,-0.5);
\fill (1,-0.5) circle (2pt) node[below] {\small{3}};
\draw[->, thin] (1,-0.5) .. controls +(up:0.5cm) .. (2,-0.5);
\fill (2,-0.5) circle (2pt) node[below] {\small{4}};
\fill (3,-0.5) circle (2pt) node[below] {\small{2}};
\fill (4,-0.5) circle (2pt) node[below] {\small{3}};
\draw[->, thin] (4,-0.5) .. controls +(up:0.5cm) .. (7,-0.5);
\fill (5,-0.5) circle (2pt) node[below] {\small{1}};
\draw[->, thin] (5,-0.5) .. controls +(up:0.5cm) .. (8,-0.5);
\fill (6,-0.5) circle (2pt) node[below] {\small{3}};
\draw[->, thin] (6,-0.5) .. controls +(up:0.5cm) .. (10,-0.5);
\fill (7,-0.5) circle (2pt) node[below] {\small{$\not 2\,\bar 3$}};
\draw[-,very thick] (7.3,-0.7) -- (7.3,-0.3);
\fill (8,-0.5) circle (2pt) node[below] {\small{$\not 2\,\bar 1$}};
\fill (9,-0.5) circle (2pt) node[below] {\small{1}};
\fill (10,-0.5) circle (2pt) node[below] {\small{$\not 4\,\bar 3$}};
\fill (11,-0.5) node[right]{\small{$W_8=13,\quad B_8=13\bar 3,\quad F_8=\bar 11\bar 3$}};
%
\end{tikzpicture}
\caption[smallcaption]{An arrival scenario on the matching graph of Figure \ref{fig:example1},
and the trajectories of the
three Markov chains.}
\label{fig:example2}
\end{center}
\end{figure}
\end{ex}
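The trajectory of the natural chain in Figure \ref{fig:example2} can be reproduced by a direct simulation of {\sc fcfm}; a sketch, on the graph of Figure \ref{fig:example1}:

```python
# Compatibility graph of the example (diamond: 1-2, 2-3, 2-4, 3-4).
EDGES = {1: {2}, 2: {1, 3, 4}, 3: {2, 4}, 4: {2, 3}}

def fcfm_queue(arrivals):
    """Successive queue details W_n under FCFM: each entering item is
    matched with the oldest compatible item in line, if any."""
    w, history = [], []
    for v in arrivals:
        for i, u in enumerate(w):
            if u in EDGES[v]:
                del w[i]        # match with the oldest compatible item
                break
        else:
            w.append(v)         # no compatible item in line: v waits
        history.append(''.join(map(str, w)))
    return history
```

On the arrival scenario of the figure, the first eight values of $W_n$ are recovered.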
\subsection{Reversibility}
\label{subsec:reverse}
For both chains $\suite{B_n}$ and $\suite{F_n}$, a state $\mathbf w \in {\mathbf V}^*$
is said to be {\em admissible} if it can be reached by the chain under consideration under {\sc fcfm}. We denote
\begin{align*}
\textsc{adm}_B &:=\Bigl\{\mathbf w \in {\mathbf V}^*\,:\,\mathbf w \mbox{ is admissible for }\suite{B_n}\Bigr\};\\
\textsc{adm}_F &:=\Bigl\{\mathbf w \in {\mathbf V}^*\,:\,\mathbf w \mbox{ is admissible for }\suite{F_n}\Bigr\}.
\end{align*}
We have the following result:
\begin{proposition}
\label{prop:product}
Suppose that condition (\ref{eq:Ncond}) holds. Then the backwards detailed Markov chain $\suite{B_n}$ and the forwards detailed Markov chain
$\suite{F_n}$ both admit a unique stationary distribution, which is proportional to the following measure:
\begin{equation}
\label{eq:PiB}
\Pi_B\left(\mathbf w\right)=\prod\limits_{i=1}^p \mu(i)^{\parallel \mathbf w\parallel_i+\parallel \td \mathbf w\parallel_i},
\end{equation}
for any admissible state $\mathbf w \in {\mathbf V}^*$ of the respective chain.
\end{proposition}
Observe that by the very definition (\ref{eq:PiB}), the measure of a word $\mathbf w$ does not change whenever any of its letters $a$ is exchanged with $\td a$. In particular, we have $\Pi_B(\cev{\td \mathbf w})=\Pi_B(\mathbf w)$ for any $\mathbf w$.
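This invariance is immediate to check numerically: under (\ref{eq:PiB}), every letter of the word, whether in $\mathcal V$ or in $\td{\mathcal V}$, contributes a factor $\mu$ of its class. A sketch, with an arbitrary illustrative $\mu$ and the lowercase/uppercase encoding of $\mathcal V$ and $\td{\mathcal V}$:

```python
MU = {'a': 0.1, 'b': 0.4, 'c': 0.3, 'd': 0.2}  # illustrative measure mu

def pi_b(w):
    """Unnormalized weight (eq. PiB): each occurrence of class i, whether
    as a letter of V (lowercase) or of ~V (uppercase), contributes mu(i)."""
    weight = 1.0
    for c in w:
        weight *= MU[c.lower()]
    return weight

def rev_dual(w):
    """The reversed dual word."""
    return w.swapcase()[::-1]
```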
Before showing Proposition \ref{prop:product} we need to introduce a couple of technical results. Let us first observe that
\begin{lemma}
\label{lemma:M83}
Let $\mathbf w=\mathbf w_1...\mathbf w_q\in {\mathbf V}^*$. Then $\mathbf w\in \textsc{adm}_B$ if, and only if, the following two conditions hold:
\begin{align}
\label{eq:admissible0}
&\forall k,\ell \in \llbracket 1,q \rrbracket\mbox{ such that }k\ne \ell,\, \mathbf w_{k}\in {\mathcal{V}}\mbox{ and }\mathbf w_\ell\in {\mathcal{V}},\,\,\mathbf w_{k} {\not\!\!\--} \mathbf w_{\ell};\\
\label{eq:admissible}
&\forall k \in \llbracket 1,q \rrbracket,\,\forall j\in \llbracket k+1,q \rrbracket\mbox{ such that } \mathbf w_{k}\in {\mathcal{V}},\,\mathbf w_{j}\in \td{\mathcal{V}},\,\,\mathbf w_{k} {\not\!\!\--} \td{\mathbf w_{j}}.
\end{align}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemma:M83}]
The necessity of (\ref{eq:admissible0}) is obvious: were an element $\mathbf w$ of $\textsc{adm}_B$ to contain two compatible letters of ${\mathcal{V}}$, the two corresponding items would have been matched.
Let us prove the necessity of (\ref{eq:admissible}).
Fix two such indexes $k$ and $j$ in $\llbracket 1,q \rrbracket$.
This means that $V_{i(n)+j-1}$ is matched with an item $V_\ell$ of class $\td{\mathbf w_{j}}$.
Suppose that $\mathbf w_{k} {\--} \td{\mathbf w_{j}}$. Then we have the following alternative,
\begin{itemize}
\item if $\ell < i(n)+k-1$, then $V_{\ell}$ is present in the system when the item $V_{i(n)+k-1}$ of class $\mathbf w_{k}$ enters.
As $\td{\mathbf w_{j}} {\--} \mathbf w_{k}$, these two items would have been matched;
\item if $\ell \in \llbracket i(n)+k-1, i(n)+j-2 \rrbracket$, then $V_{\ell}$ finds $V_{i(n)+k-1}$ available in the system, and thus the
two items would have been matched;
\item if $\ell \in \llbracket i(n)+j, n\rrbracket$, then $V_{\ell}$ finds both $V_{i(n)+k-1}$ and $V_{i(n)+j-1}$ available in the system, and as the policy is FCFM, chooses the oldest one: again, $V_{\ell}$ and $V_{i(n)+k-1}$ are matched.
\end{itemize}
Consequently, $\mathbf w_{k} {\--} \td{\mathbf w_{j}}$ would imply in all cases that $V_{\ell}$ is matched with $V_{i(n)+k-1}$, an absurdity since $V_{i(n)+k-1}$ of class $\mathbf w_{k}$ is still unmatched at $n$.
This completes the proof of necessity.
Regarding sufficiency, fix a state $\mathbf w$ satisfying both (\ref{eq:admissible0}) and (\ref{eq:admissible}):
\[\mathbf w = b_1\,\td{a_{11}}\,\td{a_{12}}\,...\,\td{a_{1k_1}}\,b_2\,\td{a_{21}}...\td{a_{2k_2}}\,b_3....b_q\,\td{a_{q1}}...\td{a_{qk_q}},\]
where $q\ge 1$, $k_\ell \in {\mathbb N}$ for all $\ell$, $b_\ell \in {\mathcal{V}}$ for all $\ell$ and $a_{\ell j} \in {\mathcal{V}}$ for all $\ell,j$.
In particular, from (\ref{eq:admissible0}) we have that $b_i {\not\!\!\--} b_j$ for any $i \ne j$, whereas from (\ref{eq:admissible}), $a_{\ell j} {\not\!\!\--} b_i$ for any $j$ and any $i \le \ell$.
Let us show that the chain $\suite{B_n}$ can reach the state $\mathbf w$. For this we construct inductively an arrival vector $V$ leading to $\mathbf w$ from the state $\emptyset$.
At first, set
\[V:=\left(V_{n-q+1},V_{n-q+2},...,V_{n}\right)=\left(b_1,...,b_q\right).\]
Then, we investigate all elements $\td{a_{\ell j}}$ from left to right, as follows. We start from $a_{11}$:
\begin{itemize}
\item[(1)] if there is no element $\td{a_{\ell' j'}}$ to the right of $\td{a_{11}}$ such that $a_{11} {\--} a_{\ell' j'}$, then set
$V_{n-q-1}=a_{11}$, $V_{n-q}=b_1$, $V_{n-q+1}=c \in {\mathcal{E}}\left(a_{11}\right)$, in a way that the item of class $a_{11}$ is matched with that of class $c$, and
we recover the letter $\td{a_{11}}$ to the right of $b_1$ because the item of class $c$ is matched with $a_{11}$;
\item[(2)] if there exists an element $\td{a_{\ell' j'}}$ to the right of $\td{a_{11}}$ such that $a_{11} {\--} a_{\ell' j'}$,
we investigate all terms $\td{a_{\ell'' j''}}$ to the left of $\td{a_{\ell' j'}}$. If one of them, $\td{a_{\ell'' j''}}$ is such that
$a_{\ell'' j''} {\--} a_{\ell' j'}$, then $a_{\ell' j'}$ could be matched in FCFM with another item arrived before $a_{11}$. Then we do as in case (1):
$V_{n-q-1}=a_{11}$, $V_{n-q}=b_1$, $V_{n-q+1}=c \in {\mathcal{E}}\left(a_{11}\right)$;
\item[(3)] if there exists an element $\td{a_{\ell' j'}}$ to the right of $\td{a_{11}}$ such that $a_{11} {\--} a_{\ell' j'}$,
and no term $\td{a_{\ell'' j''}}$ to the left of $\td{a_{\ell',j'}}$ is such that
$a_{\ell'' j''} {\--} a_{\ell'j'}$, we interpose a term $a_{\ell' j'}$ in $V$ between $b_1$ and $b_2$
(or at the extreme right of $V$ if $q=1$), and a term $a_{\ell j}$ between
$b_{\ell'}$ and $b_{\ell'+1}$ (or at the extreme right of $V$ if $q=\ell'$), in a way that the two corresponding items are matched.
\end{itemize}
Then, by induction we investigate in the same way all the terms $\td{a_{\ell j}}$ not yet considered:
\begin{itemize}
\item[(1)] if there is no element $\td{a_{\ell' j'}}$ to the right of $\td{a_{\ell j}}$ such that $a_{\ell j} {\--} a_{\ell' j'}$, then in $V$
we interpose a letter $a_{\ell j}$ just to the left of $b_1$, and a letter $c \in {\mathcal{E}}\left(a_{\ell j}\right)$ to the left of $b_{\ell+1}$ if $\ell <q$
(or at the extreme right of $V$ if $\ell=q$); the items of classes $c$ and $a_{\ell,j}$ are matched and a term $\td{a_{\ell j}}$ appears at the right place
in the detailed state of the system;
\item[(2)] we do as in case (1) if there exists an element $\td{a_{\ell' j'}}$ to the right of $\td{a_{\ell j}}$ such that $a_{\ell j} {\--} a_{\ell' j'}$,
but one of the not yet investigated terms $\td{a_{\ell'' j''}}$ to the left of $\td{a_{\ell'j'}}$ is such that
$a_{\ell'' j''} {\--} a_{\ell'j'}$;
\item[(3)] if there exists an element $\td{a_{\ell' j'}}$ to the right of $\td{a_{\ell j}}$ such that $a_{\ell j} {\--} a_{\ell' j'}$,
and no term $\td{a_{\ell'' j''}}$ to the left of $\td{a_{\ell'j'}}$ is such that
$a_{\ell'' j''} {\--} a_{\ell'j'}$, then in $V$ we interpose a term $a_{\ell' j'}$ just to the left of $b_{\ell +1}$ (or at the extreme right of $V$ if
$\ell =q$), and a term $a_{\ell j}$ to the immediate left of $b_{\ell'+1}$ (or at the extreme right of $V$ if
$\ell' =q$), in a way that the two corresponding items are matched.
\end{itemize}
We continue this construction until all the letters $\td{a_{\ell j}} \in \mathbf w|_{\td {\mathcal{V}}}$ have been investigated and the corresponding items are matched.
The final arrival vector $V$ that we obtain is of size $q'$, where
\[q'\in \left\llbracket q+\sum_{\ell=1}^q k_{\ell},q+2\sum_{\ell=1}^q k_{\ell}\right\rrbracket.\]
Indeed, the number of
items added to $V$ is at least equal to the number of letters of $\mathbf w|_{\td {\mathcal{V}}}$, and at most equal to twice the latter number (which is the case
if all the corresponding items entered the system before the item of class $b_1$ and are matched after the arrival time of the latter).
Finally, for any $n \ge q+2\sum_{\ell=1}^q k_{\ell}$, if $B_{n-q'}=\emptyset$, the arrival scenario $V$ over the $q'$ following time epochs yields the state $B_n=\mathbf w$. This concludes the proof.
\end{proof}
As a consequence,
\begin{lemma}
\label{lemma:admissible}
The two subsets $\textsc{adm}_B$ and $\textsc{adm}_F$ are in one-to-one correspondence. More precisely, the following is a bijection:
\[\left\{\begin{array}{ll}
\textsc{adm}_B &\longleftrightarrow \textsc{adm}_F\\
\mathbf w &\longleftrightarrow \cev{\td \mathbf w}.
\end{array}\right.\]
\end{lemma}
\begin{proof}
From Lemma \ref{lemma:M83}, it is sufficient to prove that a state $\mathbf w$ belongs to $\textsc{adm}_F$ if and only if
$\cev{\td{\mathbf w}}$ satisfies both (\ref{eq:admissible0}) and (\ref{eq:admissible}). The proof of this statement is similar to that of Lemma \ref{lemma:M83}.
\end{proof}
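The two conditions of Lemma \ref{lemma:M83} are easy to check mechanically. A sketch, on the graph of Figure \ref{fig:example1} with classes renamed $a,b,c,d$ and copies encoded as uppercase letters (both conventions are illustrative):

```python
# Classes a,b,c,d stand for 1,2,3,4 of the diamond graph; uppercase = copies.
EDGES = {'a': {'b'}, 'b': {'a', 'c', 'd'}, 'c': {'b', 'd'}, 'd': {'b', 'c'}}

def is_admissible_B(w):
    """Check conditions (admissible0) and (admissible) of Lemma M83."""
    for k, u in enumerate(w):
        if not u.islower():
            continue
        # (admissible0): no two compatible letters of V may coexist in w
        if any(c.islower() and c in EDGES[u] for c in w):
            return False
        # (admissible): u must not be compatible with the class of any later copy
        if any(c.isupper() and c.lower() in EDGES[u] for c in w[k + 1:]):
            return False
    return True
```

For instance, the state $B_3 = 1\,\bar 4\,\bar 3$ of Figure \ref{fig:example2}, encoded as \texttt{"aDC"}, is admissible, whereas two compatible unmatched classes are not.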
We can now state the following strong connection between the dynamics of $\suite{B_n}$ and $\suite{F_n}$:
\begin{proposition}
\label{prop:kelly}
Let $\Pi_B$ be the measure on ${\mathbf V}^*$ defined by (\ref{eq:PiB}). Then for any two admissible states $\mathbf w,\mathbf w' \in {\mathbf V}^*$ for $\suite{B_n}$,
the states $\cev{\td{\mathbf w}}$ and $\cev{\td{\mathbf w'}}$ are admissible for $\suite{F_n}$ and we have that
\begin{equation}
\label{eq:kelly}
\Pi_B(\mathbf w)\pr{B_{n+1}=\mathbf w' | B_n=\mathbf w} = \Pi_B\left(\cev{\td{\mathbf w'}}\right)\pr{F_{n+1}=\cev{{\td \mathbf w}} | F_n=\cev{\td{\mathbf w'}}}.
\end{equation}
\end{proposition}
\begin{proof}[Proof of Proposition \ref{prop:kelly}]
Fix $\mathbf w \in \textsc{adm}_B$ (so that $\cev{\td{\mathbf w}} \in \textsc{adm}_F$ from Lemma \ref{lemma:admissible}).
We address the 5 possible cases for the transition of $\suite{B_n}$.
In cases (1)-(4) hereafter, we assume that $\mathbf w\ne \emptyset$ and set $\mathbf w=\mathbf w_1...\mathbf w_q\in {\mathbf V}^*$.
Remember that $\mathbf w_1 \in {\mathcal{V}}$.
\medskip
(1) Let us first address the case where $\mathbf w'=\mathbf w a$, for some $a\in {\mathcal{V}}$. Plainly, such a state is admissible if and only if
$a \in {\mathcal{E}}\left({\mathcal{V}}(\mathbf w)\right)^c$. The backwards chain moves from $\mathbf w$ to $\mathbf w a$ at $n+1$ whenever $V_{n+1}=a$, so we have that
\[\pr{B_{n+1}=\mathbf w a | B_n=\mathbf w}=\mu(a).\]
On the other hand, $F_n=\cev{{\td{\mathbf w a}}}=\td a \,\td{\mathbf w_q}\,\td{\mathbf w_{q-1}}...\td{\mathbf w_2}\,\td{\mathbf w_1}$ entails that the item entering at $n+1$
is matched with an item of class $a$ entered before or at time $n$.
Therefore we necessarily have that $F_{n+1}=\td{\mathbf w_q}\,\td{\mathbf w_{q-1}}...\td{\mathbf w_2}\,\td{\mathbf w_1}=\cev{\td\mathbf w}$, in other words
\begin{align*}
\Pi_B\left(\cev{\td{{\mathbf w}a}}\right)\pr{F_{n+1}=\cev{\td \mathbf w} | F_n=\cev{\td{{\mathbf w}a}}}&=\Pi_B\left(\cev{\td{{\mathbf w}a}}\right)\\
&=\Pi_B\left(\cev{\td \mathbf w}\right)\mu(a)\\
&=\Pi_B(\mathbf w)\mu(a)
=\Pi_B(\mathbf w)\pr{B_{n+1}={\mathbf w}a | B_n=\mathbf w}.
\end{align*}
(2) Suppose now that $\mathbf w'=\mathbf w_1...\mathbf w_{k-1}\,\td a \,\mathbf w_{k+1}...\mathbf w_q\td{\mathbf w_{k}}$.
This means that $\mathbf w_{k}\in {\mathcal{V}}$ and that the item $V_{n+1}$ is of class $a$, where
$a \in{\mathcal{E}}\left(\mathbf w_{k}\right)\,\cap\, {\mathcal{E}}\left({\mathcal{V}}\left(\mathbf w_1....\mathbf w_{k-1}\right)\right)^c,$
so that in FCFM, $V_{n+1}$ is matched with the item $V_{i(n)+k-1}$ of class $\mathbf w_{k}$.
Suppose that \[F_n=\cev{\td {\mathbf w'}}=\mathbf w_{k}\td{\mathbf w_q}\,\td{\mathbf w_{q-1}}\,...\td{\mathbf w_{k+1}}\, a \,\td{\mathbf w_{k-1}}\,...\,\td{\mathbf w_1}.\]
Then from Lemma \ref{lemma:M83}, $\mathbf w_{k}$ is not adjacent to any of the elements of ${\mathcal{V}}\left(\td{\mathbf w_q}\,\td{\mathbf w_{q-1}}\,...\,\td{\mathbf w_{k+1}}\right)$.
But $\mathbf w_{k} {\--} a$, so the item $V_{n+1}$ of class $\mathbf w_{k}$ is matched with the item $V_{n+k+2}$ of class $a$, and we have with probability 1,
\[F_{n+1}= \td{\mathbf w_q}\,\td{\mathbf w_{q-1}}\,...\td{\mathbf w_{k+1}}\, \td{\mathbf w_{k}} \,\td{\mathbf w_{k-1}}\,...\,\td{\mathbf w_1}=\cev{\td \mathbf w}.\]
Therefore, in this case,
\begin{align*}
\Pi_B\left(\cev{\td {\mathbf w'}}\right)\pr{F_{n+1}=\cev{\td \mathbf w} | F_n=\cev{\td {\mathbf w'}}} &=\Pi_B\left(\cev{\td{\mathbf w'}}\right)\\
&=\Pi_B\left(\cev{\td \mathbf w}\right)\mu(a)\\
&=\Pi_B(\mathbf w)\mu(a)
=\Pi_B(\mathbf w)\pr{B_{n+1}=\mathbf w' | B_n=\mathbf w}.
\end{align*}
(3) Now, suppose that $\mathbf w'=\mathbf w_{k}\mathbf w_{k+1}...\mathbf w_q\td{\mathbf w_1}$ for some $k \in \llbracket 1,q \rrbracket$.
This means that the class of item $V_{n+1}$ belongs to ${\mathcal{E}}\left(\mathbf w_1\right)$, so $V_{n+1}$ is matched with the oldest item
in line $V_{i(n)}$. Then $\mathbf w_2,\mathbf w_{3},...,\mathbf w_{k-1}$ all belong to $\td{{\mathcal{V}}}$, and so $\mathbf w_{k} \in {\mathcal{V}}$ and is the class of the oldest item in line after $V_{i(n)}$, now becoming the new oldest one.
Suppose that \[F_n=\cev{\td {\mathbf w'}}=\mathbf w_1\td{\mathbf w_q}\,\td{\mathbf w_{q-1}}\,...\td{\mathbf w_{k+1}}\,\td{\mathbf w_{k}}.\] Applying again Lemma \ref{lemma:M83}, we obtain that
$\mathbf w_1$ is not adjacent to any of the elements of the set
${\mathcal{V}}\left(\td{\mathbf w_q}\,\td{\mathbf w_{q-1}}\,...\,\td{\mathbf w_{k}}\right)$, so the state $\cev{\td{\mathbf w'}}$
is admissible. All the same, again in view of Lemma \ref{lemma:M83}, $\mathbf w_1$ is not adjacent to any of the elements $\td{\mathbf w_2},\td{\mathbf w_{3}},....,\td{\mathbf w_{k-1}}$, which are all of ${\mathcal{V}}$. So we obtain
\[F_{n+1}=\td{\mathbf w_q}\,\td{\mathbf w_{q-1}}\,...\td{\mathbf w_{k+1}}\,\td{\mathbf w_k}\,\td{\mathbf w_{k-1}}\,...\td{\mathbf w_2}\,\td{\mathbf w_1}=\cev{\td\mathbf w}\]
if the incoming items $V_{n+2}$,...,$V_{n+k-1}$ are of respective classes $\td{\mathbf w_2},...,\td{\mathbf w_{k-1}}$ and $V_{n+k}$ is of a class
belonging to ${\mathcal{E}}\left(\mathbf w_1\right)$. This occurs with probability
\[\mu\Bigl(\td{\mathbf w_2}\Bigl)\mu\Bigl(\td{\mathbf w_3}\Bigl)...\mu\Bigl(\td{\mathbf w_{k-1}}\Bigl)\mu\Bigl({\mathcal{E}}\left(\mathbf w_1\right)\Bigl).\]
Gathering all the above we obtain that
\begin{align*}
\Pi_B\left(\cev{\td {\mathbf w'}}\right)\pr{F_{n+1}=\cev{\td \mathbf w} | F_n=\cev{\td {\mathbf w'}}} &=\Pi_B\left(\cev{\td {\mathbf w'}}\right)\mu\Bigl(\td{\mathbf w_2}\Bigl)\mu\Bigl(\td{\mathbf w_3}\Bigl)...\mu\Bigl(\td{\mathbf w_{k-1}}\Bigl)\mu\Bigl({\mathcal{E}}\left(\mathbf w_1\right)\Bigl)\\
&=\Pi_B\left(\cev{\td \mathbf w}\right)\mu\Bigl({\mathcal{E}}\left(\mathbf w_1\right)\Bigl)\\
&=\Pi_B(\mathbf w)\mu\Bigl({\mathcal{E}}\left(\mathbf w_1\right)\Bigl)
=\Pi_B(\mathbf w)\pr{B_{n+1}=\mathbf w' | B_n=\mathbf w}.
\end{align*}
(4) Suppose now that $\mathbf w'=\emptyset$, which is possible only if $\mathbf w_1\in {\mathcal{V}}$,
the incoming item $V_{n+1}$ belongs to ${\mathcal{E}}\left(\mathbf w_1\right)$, and if $q \ge 2$,
$\mathbf w_2,...,\mathbf w_q \in \td{\mathcal{V}}$, which implies again from Lemma \ref{lemma:M83} that $\mathbf w_1 {\not\!\!\--} \td{\mathbf w_j}$ for any $j\in \llbracket 2,q \rrbracket$.
Thus, $F_n=\emptyset$ leads to the state $F_{n+1}=\cev{\td {\mathbf w}}=\td{\mathbf w_q}\,\td{\mathbf w_{q-1}}\,....\,\td{\mathbf w_2}\,\td{\mathbf w_1}$
if and only if $V_{n+1}$ is of class $\mathbf w_1$, and then $V_{n+2}$ is of class $\td{\mathbf w_q}$, $V_{n+3}$ is of class $\td{\mathbf w_{q-1}}$,
and so on ..., $V_{n+q}$ is of class $\td{\mathbf w_2}$ and $V_{n+q+1}$ is of a class belonging to ${\mathcal{E}}\left(\mathbf w_1\right).$
This event occurs with probability $\mu\left(\mathbf w_1\right)\mu\left(\td{\mathbf w_q}\right)....\mu\left(\td{\mathbf w_2}\right)\mu\left({\mathcal{E}}\left(\mathbf w_1\right)\right).$
So we obtain
\begin{align*}
\Pi_B\left(\emptyset\right)\pr{F_{n+1}=\cev{\td \mathbf w} | F_n=\emptyset} &=\mu\left(\mathbf w_1\right)\mu\Bigl(\td{\mathbf w_2}\Bigl)\mu\Bigl(\td{\mathbf w_3}\Bigl)...\mu\Bigl(\td{\mathbf w_q}\Bigl)\mu\Bigl({\mathcal{E}}\left(\mathbf w_1\right)\Bigl)\\
&=\Pi_B(\mathbf w)\mu\Bigl({\mathcal{E}}\left(\mathbf w_1\right)\Bigl)
=\Pi_B(\mathbf w)\pr{B_{n+1}=\mathbf w' | B_n=\mathbf w}.
\end{align*}
(5) The only case that remains to be treated is when $\mathbf w=\emptyset$.
Then for any $a\in {\mathcal{V}}$, we obtain $B_{n+1}=a$ provided that $V_{n+1}$ is of class $a$, which occurs
with probability $\mu(a)$.
Then, $F_n=\td{a}$ means that $V_{n+1}$ is matched with an item of class $a$ that entered the system before or at time $n$.
Then we necessarily have that $F_{n+1}=\emptyset$, and thus
\[
\Pi_B\left(\td{a}\right)\pr{F_{n+1}=\emptyset | F_n=\td{a}}=\Pi_B\left(a\right)=\mu(a)
=\Pi_B(\emptyset)\pr{B_{n+1}=a | B_n=\emptyset}.
\]
This completes the proof.
\end{proof}
We can now turn to the proof of Proposition \ref{prop:product}.
\begin{proof}[Proof of Proposition \ref{prop:product}]
We first show that the measure $\Pi_B$ defined by (\ref{eq:PiB}) is finite on $\textsc{adm}_B$ under condition
(\ref{eq:Ncond}). From Lemma \ref{lemma:M83}, we know that for any word $\mathbf w$ in $\textsc{adm}_B$, the letters of ${\mathcal{V}}$ present in $\mathbf w$ form an independent set of $G$,
that is ${\mathcal{V}}(\mathbf w) \in {\mathbb{I}}(G)$, while the intermediate letters of $\td{{\mathcal{V}}}$ have counterparts in ${\mathcal{V}}$ that are not adjacent to any prior letter of $\mathbf w$ in ${\mathcal{V}}$. Therefore we have that
\begin{multline*}
\Pi_B\left(\textsc{adm}_B\right)=\Pi_B(\emptyset)\\
+\sum_{{\mathcal{I}} \in {\mathbb{I}}(G)}\sum\limits_{q\in{\mathbb N}_+}\sum_{\substack{(b_1,...,b_q) \in {\mathcal{I}}^q,\\(k_1,k_2,...,k_q) \in {\mathbb N}^q} }\sum\limits_{\substack{(a_{11},..,a_{1k_1},a_{21},..,a_{q1},..,a_{qk_q}) \in {\mathcal{V}}^{\sum_{l=1}^q k_l}:\\ a_{ij}\in{\mathcal{E}}\left(\{b_1,...,b_i\}\right)^c\mbox{\tiny{for all }}i\in \llbracket 1,q \rrbracket, j \in \llbracket 1,k_i \rrbracket}}\!\!\Pi_B\left(b_1\td{a_{11}}\td{a_{12}}...\td{a_{1k_1}}b_2....b_q\td{a_{q1}}...\td{a_{qk_q}}\right)\\
\begin{aligned}
&=1+\sum_{{\mathcal{I}} \in {\mathbb{I}}(G)}\sum\limits_{q\in{\mathbb N}_+}\,\,\sum_{(b_1,...,b_q) \in {\mathcal{I}}^q}\prod_{i=1}^q \left(\mu(b_i)\left(1+\sum_{k \in {\mathbb N}_+}\,\, \sum\limits_{(a_{i1},...,a_{ik}) \in \left({\mathcal{E}}\left(\{b_1,...,b_i\}\right)^c\right)^{k}}\prod_{\ell=1}^{k}\mu(a_{i\ell})\right)\right)\\
&\le 1+\sum_{{\mathcal{I}} \in {\mathbb{I}}(G)}\sum\limits_{q\in{\mathbb N}_+}\left(1+\sum_{k \in {\mathbb N}_+}\,\, \sum\limits_{(a_{1},...,a_{k}) \in \left({\mathcal{E}}\left({\mathcal{I}}\right)^c\right)^{k}}\prod_{\ell=1}^{k}\mu(a_{\ell})\right)^q\left(\sum_{(b_1,...,b_q) \in {\mathcal{I}}^q}\prod_{i=1}^q \mu(b_i)\right)\\
&\le \sum_{{\mathcal{I}} \in {\mathbb{I}}(G)}\sum\limits_{q\in{\mathbb N}}\left(\sum_{k \in {\mathbb N}}\,\mu\left({\mathcal{E}}\left({\mathcal{I}}\right)^c\right)^k\right)^q\mu({\mathcal{I}})^q\\
&=\sum_{{\mathcal{I}} \in {\mathbb{I}}(G)}\sum\limits_{q\in{\mathbb N}}\left({\mu({\mathcal{I}}) \over \mu\left({\mathcal{E}}({\mathcal{I}})\right)}\right)^q,
\end{aligned}
\end{multline*}
which is clearly finite under (\ref{eq:Ncond}).
Let $\alpha$ be the normalizing constant of $\Pi_B$. Then it suffices to apply Kelly's Lemma (\cite{kelly:79}, Section 1.7):
define for any two admissible states
$\mathbf w,\mathbf w'\in \textsc{adm}_B$
\begin{equation}
\label{eq:defP}
P_{\mathbf w',\mathbf w}={\pr{B_{n+1}=\mathbf w' \mid B_n=\mathbf w}\alpha\Pi_B(\mathbf w) \over \alpha\Pi_B(\mathbf w')}.
\end{equation}
Then, $\Pi_B$ is the only stationary distribution of $\suite{B_n}$
if $P$ defines a transition operator on ${\mathbf V}^*$. But this is a simple consequence of Proposition \ref{prop:kelly}:
for any $\mathbf w' \in {\mathbf V}^*$, we have that
\begin{align*}
\sum_{\mathbf w \in {\mathbf V}^*} P_{\mathbf w',\mathbf w} &= \sum_{\mathbf w \in {\mathbf V}^*} {\pr{B_{n+1}=\mathbf w' \mid B_n=\mathbf w}\alpha\Pi_B(\mathbf w) \over \alpha\Pi_B(\mathbf w')}\\
&= \sum_{\mathbf w \in {\mathbf V}^*} {\Pi_B\left(\cev{\td{\mathbf w'}}\right)\pr{F_{n+1}=\cev{{\td \mathbf w}} | F_n=\cev{\td{\mathbf w'}}} \over \Pi_B(\mathbf w')}\\
&= \sum_{\mathbf w \in {\mathbf V}^*} \pr{F_{n+1}=\cev{{\td \mathbf w}} | F_n=\cev{\td{\mathbf w'}}}\\
&=1,
\end{align*}
where we use the fact that $\suite{F_n}$ is a Markov chain and Lemma \ref{lemma:admissible}. This concludes the proof for $\suite{B_n}$.
We can now reverse the argument by exchanging the roles of $B_n,B_{n+1}$ and $F_n,F_{n+1}$ in the definition (\ref{eq:defP}).
This entails that $\suite{F_n}$ is the reversed Markov chain of $\suite{B_n}$, on a sample space where arrivals are reversed in time and exchanged with their match. In particular $\suite{F_n}$ also has the same
stationary probability $\Pi_B$.
\end{proof}
\subsection{Proof of Theorem \ref{thm:main}}
\label{subsec:stabFCFM}
We are now in a position to prove the main result of this section.
As $\Pi_B$ is the only stationary distribution of $\suite{B_n}$, from (\ref{eq:WB}) it is clearly sufficient to check that
\begin{equation*}
\Pi_{W}(w)=\sum\limits_{\mathbf w\in \textsc{adm}_B:\,\mathbf w|_{{\mathcal{V}}}=w} \Pi_B(\mathbf w)\,\quad\mbox{for any }w\in {\mathcal{V}}^*.
\end{equation*}
Let $w=w_1...w_q \in {\mathcal{V}}^*$. Then, from Lemma \ref{lemma:M83}, any $\mathbf w \in \textsc{adm}_B$ such that $\mathbf w|_{{\mathcal{V}}}=w$ is of the form
\[\mathbf w = w_1\td{a_{11}}\,\td{a_{12}}\,...\,\td{a_{1k_1}}\,w_2\,\td{a_{21}}...\td{a_{2k_2}}\,w_3\,...\,w_q\td{a_{q1}}\,...\,\td{a_{qk_q}},\]
where each element $a_{\ell j}$ is such that $a_{\ell j} {\not\!\!\--} w_{i}$ for any $i \le \ell$.
We therefore obtain that
\begin{align*}
\sum\limits_{\mathbf w\in \textsc{adm}_B:\,\mathbf w|_{{\mathcal{V}}}=w} \Pi_B(\mathbf w)
&= \alpha\prod\limits_{\ell =1}^q \left(\mu(w_\ell)\left(1+\sum_{k\in{\mathbb N}_+}\sum\limits_{(a_{\ell 1},...,a_{\ell k}) \in \left({\mathcal{E}}\left(\left\{w_1,...,w_\ell\right\}\right)^c\right)^k}\prod_{j=1}^{k} \mu\left(a_{\ell j}\right)\right)\right)\\
&=\alpha\prod\limits_{\ell =1}^q \left(\mu(w_\ell)\sum_{k\in{\mathbb N}}\left(\mu\biggl({\mathcal{E}}\Bigl(\left\{w_1,...,w_\ell\right\}\Bigl)^c\biggl)\right)^{k}\right)\\
&=\alpha\prod\limits_{\ell =1}^q {\mu(w_\ell) \over \mu\biggl({\mathcal{E}}\Bigl(\left\{w_1,...,w_\ell\right\}\Bigl)\biggl)}=\Pi_{W}(w),
\end{align*}
which concludes the proof of Theorem \ref{thm:main}.
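The product form can be checked numerically on a toy instance. The sketch below is ours, not part of the paper: it assumes the matching graph is the complete graph on three classes with uniform $\mu$ and the policy {\sc fcfm}. In that case every admissible buffer content is a repetition $i^k$ of a single class, the stability condition (\ref{eq:Ncond}) holds, and the product form gives $\Pi_W(\emptyset)=1/4$ and $\Pi_W(i^k)=(1/4)(1/2)^k$.

```python
import random

# Toy numerical check (our own instance, not the paper's code): matching
# graph = complete graph on classes {0,1,2}, mu uniform, policy FCFM.
# Every admissible buffer content is then of the form i^k, and the product
# form predicts Pi_W(empty) = 1/4.

def empty_frequency(steps, seed=0):
    rng = random.Random(seed)
    buffer = []                 # buffered classes; on K3 they are all equal
    empty = 0
    for _ in range(steps):
        v = rng.randrange(3)
        if buffer and buffer[0] != v:
            buffer.pop(0)       # FCFM: match the oldest compatible item
        else:
            buffer.append(v)    # no compatible item in line
        empty += (not buffer)
    return empty / steps

print(empty_frequency(200_000))  # close to the predicted value 1/4
```

This is only a sanity check of the stationary fraction of time the buffer is empty; it does not replace the proof above.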
\section{Coupling in the General Matching model}
\label{sec:coupling}
In this section we explicitly construct, for a wide range of models, a stationary version of the buffer-content process $\suite{W_n}$.
This will be done beyond the iid case, by the strong backwards coupling method of Borovkov and Foss.
By doing so, we show in which cases a unique stationary buffer content exists and thereby, a unique stationary complete matching, in a sense that will be specified below.
The argument will be more easily developed in an ergodic-theoretical framework that we introduce in Section \ref{subsec:statergo}. Then, we show in Section \ref{subsec:subadd} that many matching
policies (including {\sc ml} and {\sc fcfm}) satisfy a remarkable property of sub-additivity, which will be the essential tool of our proofs, together with the existence of
(strong) erasing words in non-bipartite graphs, introduced in Section \ref{subsec:erase}. Our coupling results will then be given in Section \ref{subsec:renov} under assumption (H1') or (H1''), and in Section \ref{subsec:iid} under assumption (IID).
\subsection{General settings}
\label{subsec:statergo}
The general matching model is intrinsically periodic: arrivals are simple but departures are pairwise, so the size of the system has the parity of the initial size at all even times. In particular, a system started empty can be empty only every other time, and two systems cannot couple unless their initial sizes almost surely have the same parity.
To circumvent this difficulty, we first track the system only at even times. Equivalently, we change the time scale and see the arrivals by {\em pairs} of items (as in the original bipartite matching model \cite{CKW09,BGMa12,ABMW17}), which play different roles: upon arrival, the first one investigates all possible matchings in the buffer according to $\phi$, before possibly considering the second one if no match is available, whereas the second one applies $\phi$ to all available items, including the first one. By doing so, we obtain exactly a GM model as presented thus far, except that we track it at even times.
Throughout this sub-section, suppose that assumption (H1') holds.
To formalize the above observation, we let $\suite{U_n}$ be the buffer content sequence at even times (we will use the term ``{\em even} buffer content''), that is, $U_n=W_{2n}$, $n\in{\mathbb N}$.
We will primarily construct a (possibly unique) stationary version of the sequence $\suite{U_n}$ of even buffer content, by coupling.
For this, we work on the canonical space $\Omega^0:=\left({\mathcal{V}}\times\mathcal S\times{\mathcal{V}}\times\mathcal S\right)^\mathbb Z$
of the bi-infinite sequence $\suitez{\left(V_{2n},\Sigma_{2n},V_{2n+1},\Sigma_{2n+1}\right)}$,
on which we define the bijective shift operator $\theta$ by $\theta\left((\omega_n)_{n\in\mathbb Z}\right)= (\omega_{n+1})_{n\in\mathbb Z}$ for all $(\omega_n)_{n\in \mathbb Z} \in \Omega^0$.
We denote by $\theta^{-1}$ the reciprocal operator of $\theta$, and by $\theta^n$ and $\theta^{-n}$ the $n$-th iterates of
$\theta$ and $\theta^{-1}$, respectively, for all $n\in{\mathbb N}$. We equip $\Omega^0$ with a sigma-field $\mathscr F^0$ and with the image probability measure
$\mathbb P^0$ of the sequence $\suitez{\left(V_{2n},\Sigma_{2n},V_{2n+1},\Sigma_{2n+1}\right)}$ on $\Omega^0$. Observe that under (H1'), $\mathbb P^0$ is compatible with the shift, i.e.
for any ${\mathscr{A}} \in \mathscr F^0$, $\bpr{{\mathscr{A}}}=\bpr{\theta^{-1}{\mathscr{A}}}$, and any $\theta$-invariant event ${\mathscr{B}}$ ({\em i.e.} such that ${\mathscr{B}}=\theta^{-1}{\mathscr{B}}$) is either $\mathbb P^0$-negligible or almost sure.
Altogether, the quadruple $\mathscr Q^0:=\left(\Omega^0,\mathscr F^0,\mathbb P^0,\theta\right)$ is thus stationary ergodic, and will be referred to as the {\em Palm space} of the input at even times.
For more details about this framework, we refer the reader to the monographs \cite{BranFranLis90},
\cite{BacBre02} (Sections 2.1 and 2.5) and \cite{Rob03} (Chapter 7).
Let the r.v. $\left(V^0,\Sigma^0,V^1,\Sigma^1\right)$ be the projection of sample paths onto their 0-coordinate. Thus $\left(V^0,\Sigma^0,V^1,\Sigma^1\right)$ can be interpreted as the input brought to the system at time 0: at 0, an item of class $V^0$ and then an item of class $V^1$ enter the system, having respective lists of preferences $\Sigma^0$ and $\Sigma^1$ over
$\mathcal V$, and the order of arrival between the two is kept track of ($V^0$ and {\em then} $V^1$).
Then for any $n\in \mathbb Z$, the r.v. $\left(V^0\circ\theta^n,\,\Sigma^0\circ\theta^n,\,V^1\circ\theta^n,\,\Sigma^1\circ\theta^n\right)$ corresponds to the input brought to the system at time $n$.
Define the subset
\[\mathbb W_2 = \left\{w \in \mathbb W\,:\,|w|\mbox{ is even }\right\}.\]
For any $\mathbb W_2$-valued r.v. $Y$, we define on $\Omega^0$ the sequence $\suite{U^{[Y]}_n}$ as the even buffer content sequence of the model initiated
at value $Y$, i.e.
\begin{equation}
\label{eq:recurUbar}
\left\{\begin{array}{ll}
U^{[Y]}_0 &= Y;\\
U^{[Y]}_{n+1} &= \left(U^{[Y]}_{n} \odot_\phi (V^0\circ\theta^n,\Sigma^0\circ\theta^n)\right)\odot_\phi (V^1\circ\theta^n,\Sigma^1\circ\theta^n),\,n\in{\mathbb N},
\end{array}\right.
\quad\quad\mathbb P^0\mbox{-a.s.}
\end{equation}
A stationary version of (\ref{eq:recurUbar}) is thus a sequence satisfying (\ref{eq:recurUbar}) and compatible with the shift, i.e.
a sequence $\suitez{U\circ\theta^n}$, where the $\mathbb W_2$-valued r.v. $U$ satisfies the equation
\begin{equation}
\label{eq:recurstatUbar}
U\circ\theta = \left(U \odot_\phi (V^0,\Sigma^0)\right) \odot_\phi (V^1,\Sigma^1),\,\mathbb P^0\mbox{-a.s.,}
\end{equation}
see Section 2.1 of \cite{BacBre02} for details.
To any stationary buffer content $\suitez{U\circ\theta^n}$ corresponds a unique stationary probability for the sequence $\suite{W_{2n}}$ on the original probability space
$(\Omega,\mathcal F,\mathbb P)$.
Moreover, provided that $\bpr{U=\emptyset}>0$, the bi-infinite sequence $\suitez{U\circ\theta^n}$ corresponds on $\mathscr Q^0$ to a unique stationary matching by $\phi$ (we write a $\phi$-{\em matching}),
that is obtained by using the (bi-infinite) family of construction points $\left\{n \in {\mathbb Z}\,:\,U \circ\theta^n =\emptyset\right\}$, and matching the incoming items
by $\phi$, within each finite block between construction points.
In other words, obtaining constructively a stationary buffer-content at even times and thereby, a stationary $\phi$-matching on $\mathbb Z$, amounts to solving on $\mathscr Q^0$ the almost-sure equation (\ref{eq:recurstatUbar}). This will be done by constructing the associated {\em backwards scheme}, as in \cite{Loynes62}:
for a $\mathbb W_2$-valued r.v. $Y$ and any fixed $n \ge 0$, the r.v. $U^{[Y]}_n\circ\theta^{-n}$ represents the even buffer content at time 0, whenever initiated at value $Y$, $n$ time epochs in the past.
Loynes' theorem shows the existence of a solution to (\ref{eq:recurstatUbar}), as the $\mathbb P^0$-almost sure limit of the non-decreasing sequence $\suite{U^{[\emptyset]}_n\circ\theta^{-n}}$, whenever
the random map driving the recursion $\suite{U_n}$ is almost surely non-decreasing in the state variable. As the present model does not exhibit any particular monotonic structure, such a result is {\em a priori} out of reach.
We thus resort to Borovkov and Foss's theory of renovation, see \cite{Foss92,Foss94}.
Following \cite{Bor84}, we say that the buffer content sequence $\suite{U^{[Y]}_n}$ converges with {\em strong backwards coupling} to the stationary buffer content sequence
$\suite{U\circ\theta^n}$ if, $\mathbb P^0$-almost surely, there exists $N^*\ge 0$ such that for all $n \ge N^*$, $U^{[Y]}_{n}\circ\theta^{-n}=U$.
Note that strong backwards coupling implies the (forward) coupling between $\suite{U^{[Y]}_n}$ and $\suite{U\circ\theta^n}$, i.e.
there exists a.s. an integer $N\ge 0$ such that $U^{[Y]}_{n}=U\circ\theta^n$ for all $n \ge N$. In particular the distribution of $U^{[Y]}_{n}$ converges in total variation to that of $U$, see e.g. Section 2.4 of \cite{BacBre02}.
\subsection{Sub-additivity}
\label{subsec:subadd}
We show hereafter that most of the
models we have introduced above satisfy a sub-additivity property
that will prove crucial in the main results of this section.
\begin{definition}[Sub-additivity]
\label{def:subadd}
An admissible matching policy $\phi$ is said to be {\em sub-additive} if,
for all $z',z''\in {\mathcal{V}}^*$, for all $\varsigma',\varsigma''\in\mathcal S^*$ whose letters are drawn by $\nu_\phi$ and such that $|\varsigma'|=|z'|$ and $|\varsigma''|=|z''|$, we have that
$$
\left|Q_\phi(z'z'',\varsigma'\varsigma'')\right| \leq \left|Q_\phi(z',\varsigma')\right| + \left|Q_\phi(z'',\varsigma'')\right|.
$$
\end{definition}
\begin{proposition}
\label{prop:sub}
The matching policies {\sc fcfm}, {\sc lcfm}, Random (including Priorities and {\sc u}) and {\sc ml} are sub-additive.
\end{proposition}
\noindent Before turning to the proof of
Proposition \ref{prop:sub} in the remainder of this section, let us show by a counter-example that, on the other hand, the policy `Match the Shortest' is not sub-additive:
\begin{ex}[{\sc ms} is not sub-additive]
\label{ex:MS}
Take as a matching graph, the graph of Figure \ref{fig:example1}, and the arrival scenario depicted in Figure \ref{fig:example2bis}.
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}
\draw[-] (-1,1) -- (8,1);
\fill (0,1) circle (2pt) node[below] {\small{1}};
\fill (1,1) circle (2pt) node[below] {\small{1}};
\draw[-,very thick] (1.45,0.6) -- (1.45,1.4);
\draw[-,very thick] (1.55,0.6) -- (1.55,1.4);
\fill (2,1) circle (2pt) node[below] {\small{1}};
\fill (3,1) circle (2pt) node[below] {\small{3}};
\fill (4,1) circle (2pt) node[below] {\small{3}};
\draw[-, thin] (4,1) .. controls +(up:0.5cm) .. (7,1);
\fill (5,1) circle (2pt) node[below] {\small{2}};
\draw[-, thin] (2,1) .. controls +(up:0.5cm) .. (5,1);
\fill (6,1) circle (2pt) node[below] {\small{2}};
\draw[-, thin] (3,1) .. controls +(up:0.5cm) .. (6,1);
\fill (7,1) circle (2pt) node[below] {\small{4}};
\draw[-] (-1,-0.5) -- (8,-0.5);
\fill (0,-0.5) circle (2pt) node[below] {\small{1}};
\fill (1,-0.5) circle (2pt) node[below] {\small{1}};
\fill (2,-0.5) circle (2pt) node[below] {\small{1}};
\fill (3,-0.5) circle (2pt) node[below] {\small{3}};
\fill (4,-0.5) circle (2pt) node[below] {\small{3}};
\draw[-, thin] (4,-0.5) .. controls +(up:0.5cm) .. (6,-0.5);
\fill (5,-0.5) circle (2pt) node[below] {\small{2}};
\draw[-, thin] (3,-0.5) .. controls +(up:0.5cm) .. (5,-0.5);
\fill (6,-0.5) circle (2pt) node[below] {\small{2}};
\fill (7,-0.5) circle (2pt) node[below] {\small{4}};
\end{tikzpicture}
\caption[smallcaption]{`Match the Shortest' is not sub-additive.}
\label{fig:example2bis}
\end{center}
\end{figure}
\noindent In Figure \ref{fig:example2bis}, the top row depicts the matchings of $z'=11$ and $z''=133224$ performed separately, and the bottom row, that of the concatenation $z'z''$.
Then, for any $\varsigma'$ and $\varsigma''$ we get $Q_{\textsc{ms}}(z',\varsigma')=11$ and $Q_{\textsc{ms}}(z'',\varsigma'')=\emptyset$, whereas $Q_{\textsc{ms}}(z'z'',\varsigma'\varsigma'')=1114$. So \[\left|Q_{\textsc{ms}}(z'z'',\varsigma'\varsigma'')\right| = 4 > 2 = \left|Q_{\textsc{ms}}(z',\varsigma')\right|+\left|Q_{\textsc{ms}}(z'',\varsigma'')\right|.\]
\end{ex}
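The counter-example can be replayed in a few lines of code. The sketch below is ours and assumes (as the figures suggest) that the matching graph of Figure \ref{fig:example1} is the path $1{\--}2{\--}3{\--}4$; no list of preferences is needed here, since the shortest compatible line is unique at every step of this scenario.

```python
# Match the Shortest on the scenario of the figure above; the helper ms()
# and the path graph are our own assumptions, not notation from the paper.
NEIGH = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}   # assumed graph: path 1-2-3-4

def ms(word):
    """Unmatched items Q_MS(word) under Match the Shortest, in arrival order."""
    counts = {c: 0 for c in NEIGH}
    line = []
    for v in word:
        avail = [c for c in NEIGH[v] if counts[c] > 0]
        if avail:
            k = min(avail, key=lambda c: counts[c])  # shortest compatible line
            counts[k] -= 1
            line.remove(k)                           # the oldest k-item leaves
        else:
            counts[v] += 1
            line.append(v)
    return line

z1, z2 = [1, 1], [1, 3, 3, 2, 2, 4]
print(ms(z1), ms(z2), ms(z1 + z2))   # [1, 1] [] [1, 1, 1, 4]: 4 > 2 + 0
```

Running the scenarios separately leaves $2+0$ unmatched items, while the concatenation leaves 4, reproducing the failure of sub-additivity.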
\subsubsection{Non-expansiveness}
\label{subsec:nonexp}
In the framework of stochastic recursions, the {\em non-expansiveness} property with respect to the $\ell_1$-norm, as introduced by Crandall and Tartar \cite{CT80}, amounts
to the 1-Lipschitz property of the driving map of the recursion. Similarly,
\begin{definition}[Non-expansiveness]
A class-admissible policy $\phi$ is said to be {\em non-expansive} if
for any $x$ and $x'$ in $\mathbb X$, any $v\in {\mathcal{V}}$ and any $\sigma \in \mathcal S$
that can be drawn by $\nu_\phi$,
\begin{equation}
\label{eq:defnonexp1}
\|x'\ccc_{\phi}(v,\sigma) - x\ccc_{\phi}(v,\sigma)\| \le \|x'-x\|.
\end{equation}
\end{definition}
\begin{proposition}
\label{prop:nonexp1}
Any random matching policy (in particular, priority and {\sc u}) is non-expansive.
\end{proposition}
\begin{proof}
The result has been proven for priority and {\sc u} in \cite{MoyPer17}: this is precisely the inductive argument, respectively in the proofs of Lemma 4 and Lemma 7 therein.
As is easily seen, the same argument can be generalized to any random policy $\phi$, once the list of preferences drawn from $\nu_\phi$ is common to both systems.
Indeed, the following consistency property holds: for any states $x$ and $x'$, any incoming item $v$ and any list of preferences $\sigma$ drawn from $\nu_\phi$,
\begin{equation}
\label{eq:consist}
\biggl[\Bigl\{p_{\phi}(x,v,\sigma),p_{\phi}(x',v,\sigma)\Bigl\}\, \subset\, {\mathcal{P}}(x,v) \cap {\mathcal{P}}(x',v)\biggl]\quad \Longrightarrow \quad \biggl[p_{\phi}(x,v,\sigma) = p_{\phi}(x',v,\sigma)\biggl],
\end{equation}
in other words, the choice of match of $v$ cannot be different in the two systems, if both options were available in both systems.
The result follows for any random policy.
\end{proof}
\begin{proposition}
\label{prop:nonexp2}
{\sc ml} is non-expansive.
\end{proposition}
\begin{proof}
The proof is similar to that for random policies,
except for the consistency property (\ref{eq:consist}), which does not hold in this case.
Specifically, an entering item can be matched
with items of two different classes in the two systems, even though the
queues of both classes are non-empty in both systems.
Let us consider that case: specifically, a $v$-item enters the system, and for a common draw $\sigma$ according to the (uniform) distribution
$\nu_{\textsc{ml}}$, we obtain $p_{\textsc{ml}}(x,v,\sigma)=k$ and $p_{\textsc{ml}}(x',v,\sigma)=k'$ for
$\{k,k'\} \subset {\mathcal{P}}(x,v) \cap {\mathcal{P}}(x',v)$ and $k\ne k'$.
Thus we have
\begin{equation}
\label{eq:losers1}
\|x'\ccc_{\textsc{ml}}(v,\sigma) -
x\ccc_{\textsc{ml}}(v,\sigma)\| =\sum_{i \ne k,k'}
|x(i)-x'(i)|+R, \end{equation} where
\[R=\left|(x(k)-1)-x'(k)\right|+\left|x(k')-\left(x'(k')-1\right)\right|.\]
We are in one of the following three cases,
\begin{enumerate}
\item if $x(k) > x'(k)$ and $x'(k') > x(k')$, then
\begin{equation*}
R=\left(x(k)-1-x'(k)\right)+\left(x'(k')-1-x(k')\right)=\left|x(k)-x'(k)\right|+\left|x(k')-x'(k')\right|-2.\end{equation*}
\item if $x(k) \le x'(k)$ and $x'(k') > x(k')$, then
\begin{equation*}
R=\left(x'(k)-x(k)+1\right)+\left(x'(k')-1-x(k')\right)
=\left|x(k)-x'(k)\right|+\left|x(k')-x'(k')\right|.\end{equation*}
\item if $x(k) > x'(k)$ and $x'(k') \le x(k')$, we also have
\begin{equation*}
R=\left(x(k)-1-x'(k)\right)+\left(x(k')-x'(k')+1\right)=\left|x(k)-x'(k)\right|+\left|x(k')-x'(k')\right|.
\end{equation*}
\end{enumerate}
Observe that the case $x(k) \le x'(k)$ and $x'(k') \le x(k')$ cannot
occur. Indeed, by the definition of {\sc ml} we have that
\[x(k') \le x(k)\,\mbox{ and }x'(k) \le x'(k'),\]
which would imply in turn that
\[x(k)=x(k')= x'(k) = x'(k').\]
This is impossible since, in that case, under the common list of preferences $\sigma$ both systems would have
chosen the same match for the new $v$-item.
As a conclusion, in view of (\ref{eq:losers1}), in all possible cases
we obtain that
\begin{equation*}
\|x'\ccc_{\textsc{ml}}(v,\sigma) -
x\ccc_{\textsc{ml}}(v,\sigma)\|
\le \sum_{i \ne k,k'} |x(i)-x'(i)|+\left|x(k)-x'(k)\right|+\left|x(k')-x'(k')\right|
=\|x'-x\|,
\end{equation*}
which concludes the proof.
\end{proof}
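The case analysis above can also be stress-tested numerically. The sketch below is our own encoding, on an arbitrary fixed test graph: a state is a vector of queue lengths, and the common draw $\sigma$ is a uniform permutation of the classes used to break ties among the longest compatible lines. It checks inequality (\ref{eq:defnonexp1}) for {\sc ml} on random pairs of states.

```python
import random

# Randomized check of the l1 non-expansiveness of Match the Longest.
NEIGH = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # an arbitrary test graph

def ml_step(x, v, sigma):
    """One arrival of class v under ML with common preference list sigma."""
    avail = [c for c in NEIGH[v] if x[c] > 0]
    y = list(x)
    if avail:
        longest = max(x[c] for c in avail)
        k = min((c for c in avail if x[c] == longest), key=sigma.index)
        y[k] -= 1                    # match v with an item of class k
    else:
        y[v] += 1                    # no match available: v joins the buffer
    return y

def l1(x, y):
    return sum(abs(a - b) for a, b in zip(x, y))

rng = random.Random(1)
for _ in range(10_000):
    x = [rng.randrange(5) for _ in range(4)]
    xp = [rng.randrange(5) for _ in range(4)]
    v = rng.randrange(4)
    sigma = list(range(4))
    rng.shuffle(sigma)               # common list of preferences for both systems
    assert l1(ml_step(x, v, sigma), ml_step(xp, v, sigma)) <= l1(x, xp)
print("non-expansiveness verified on 10,000 random instances")
```

Note that the global index of $\sigma$ implements the consistency used in the proof: when all four tied queues are equal, both systems break the tie the same way.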
\subsubsection{Proof of Proposition \ref{prop:sub} for Non-expansive policies}
\label{subsec:proofnonexp}
As we prove below, non-expansiveness for the $\ell_1$-norm, which is satisfied by
all random policies (Proposition \ref{prop:nonexp1}) and by {\sc ml} (Proposition \ref{prop:nonexp2}),
readily entails the sub-additivity of the
corresponding model.
Fix a non-expansive matching policy $\phi$. Keeping the
notation of Definition \ref{def:subadd}, set $w'=Q_\phi(z',\varsigma')$, $w''=Q_\phi(z'',\varsigma'')$ and $w=Q_\phi(z'z'',\varsigma'\varsigma'')$. Let the two
arrays $(x_i)_{i=1,...,|z''|}$ and $\left(x'_i\right)_{i=1,...,|z''|}$ be the class
details of the system at arrival times, starting respectively from
an empty system and from a system of buffer content $w'$, and
having a common input $\left(z''_i,\sigma''_i\right)_{i=1,...,|z''|}$,
where the $(\sigma''_i)_{i=1,...,|z''|}$ are drawn from $\nu_\phi$ on $\mathcal S$.
In other words, we set
\[\left\{\begin{array}{ll}
x_0 &= \mathbf 0;\\
x'_0 &= \left[w'\right]\\
\end{array}\right.\]
and by induction,
\[\left\{\begin{array}{ll}
x_{n+1} &= x_{n} \ccc_{\phi} \left(z''_{n+1},\sigma''_{n+1}\right),\,n\in\left\{0,\dots,|z''|-1\right\};\\
x'_{n+1}&=x'_{n} \ccc_{\phi}\left(z''_{n+1},\sigma''_{n+1}\right),\,n\in\left\{0,\dots,|z''|-1\right\}.
\end{array}\right.\]
Applying (\ref{eq:defnonexp1}) at each step, we obtain by induction that for all $n
\in \left\{0,\dots,|z''|\right\}$,
\begin{equation}
\|x'_n - x_n\| \le \|x'_0-x_0\|= |w'|.\label{eq:nonexprec}
\end{equation}
Now observe that by construction, $x_{|z''|}=\left[w''\right]$ and $x'_{|z''|}=\left[w\right]$ which, together with (\ref{eq:nonexprec}), implies that
\begin{equation*}
|w| = \left\|x'_{|z''|}\right\|\le \left\|x'_{|z''|} - x_{|z''|}\right\|+\left\|x_{|z''|}\right\|\le |w'|+|w''|,
\end{equation*}
hence the sub-additivity of $\phi$.
\subsubsection{Proof of Proposition \ref{prop:sub} for {\sc fcfm} and {\sc lcfm}}
\label{subsec:proofFIFOLIFO}
For the disciplines {\sc fcfm} and {\sc lcfm}, we cannot exploit a non-expansiveness property similar to
(\ref{eq:defnonexp1}). Indeed, a common arrival can very well increase the distance between the commutative images of the
queue details of two systems:
\begin{ex}
Consider the graph of Figure \ref{fig:example1}.
Then, regardless of $\sigma$ we have for instance that
\begin{multline*}
\left\|\left[133 \odot_{\textsc{fcfm}} (2,\sigma)\right] - \left[311 \odot_{\textsc{fcfm}}(2,\sigma)\right]\right\|
\\
=\left\|\left[33\right] - \left[11\right]\right\|=\left\|(0,0,2,0)-(2,0,0,0)\right\| =4 > 2 = \left\|(1,0,2,0)-(2,0,1,0)\right\| = \left\|\left[133\right] - \left[311\right]\right\|
\end{multline*}
whereas for {\sc lcfm},
\begin{equation*}
\left\|\left[331\odot_{\textsc{lcfm}}(2,\sigma)\right] - \left[113 \odot_{\textsc{lcfm}}(2,\sigma)\right]\right\|
=\left\|\left[33\right] - \left[11\right]\right\|= 4 > 2 =\left\|\left[331\right] - \left[113\right]\right\|,
\end{equation*}
see Figure \ref{fig:example3}.
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}
\fill (1,0) circle (2pt) node[below] {\small{1}} ;
\fill (2,0) circle (2pt) node[below] {\small{3}} ;
\fill (3,0) circle (2pt) node[below] {\small{3}} ;
\fill (4,0) circle (2pt) node[below] {\small{2}} ;
\draw[-, thin] (4,0) .. controls +(up:0.5cm) .. (1,0);
\draw[-] (0,0)-- (5,0);
\fill (1,-1) circle (2pt) node[below] {\small{3}} ;
\fill (2,-1) circle (2pt) node[below] {\small{1}} ;
\fill (3,-1) circle (2pt) node[below] {\small{1}} ;
\fill (4,-1) circle (2pt) node[below] {\small{2}} ;
\draw[-, thin] (4,-1) .. controls +(up:0.5cm) .. (1,-1);
\draw[-] (0,-1)-- (5,-1);
%
%
\fill (8,0) circle (2pt) node[below] {\small{3}} ;
\fill (9,0) circle (2pt) node[below] {\small{3}} ;
\fill (10,0) circle (2pt) node[below] {\small{1}} ;
\fill (11,0) circle (2pt) node[below] {\small{2}} ;
\draw[-, thin] (11,0) .. controls +(up:0.5cm) .. (10,0);
\draw[-] (7,0)-- (12,0);
\fill (8,-1) circle (2pt) node[below] {\small{1}} ;
\fill (9,-1) circle (2pt) node[below] {\small{1}} ;
\fill (10,-1) circle (2pt) node[below] {\small{3}} ;
\fill (11,-1) circle (2pt) node[below] {\small{2}} ;
\draw[-, thin] (11,-1) .. controls +(up:0.5cm) .. (10,-1);
\draw[-] (7,-1)-- (12,-1);
\end{tikzpicture}
\caption[smallcaption]{FCFM (left) and LCFM (right) are not non-expansive.} \label{fig:example3}
\end{center}
\end{figure}
\end{ex}
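The computations in this example can be checked mechanically. The sketch below (our own illustration, not part of the paper) simulates the {\sc fcfm} and {\sc lcfm} dynamics: each arriving item is matched with the oldest ({\sc fcfm}) or the most recent ({\sc lcfm}) compatible unmatched item in line. The compatibility graph with edges $1{\--}2$ and $2{\--}3$ is a hypothetical stand-in consistent with the matchings displayed above.

```python
# Sketch (not from the paper): the queue detail Q_phi(w) under FCFM/LCFM.
# Each arriving item is matched with the oldest (FCFM) or the most recent
# (LCFM) compatible unmatched item, if any; otherwise it joins the line.

def queue_detail(word, edges, policy="fcfm"):
    queue = []  # unmatched items, in order of arrival
    for item in word:
        compat = [k for k, q in enumerate(queue)
                  if frozenset((q, item)) in edges]
        if compat:
            queue.pop(compat[0] if policy == "fcfm" else compat[-1])
        else:
            queue.append(item)
    return queue

# Hypothetical graph consistent with the example: 1 -- 2 -- 3, with 1, 3
# incompatible.
E = {frozenset((1, 2)), frozenset((2, 3))}
print(queue_detail([1, 3, 3, 2], E, "fcfm"))  # the arriving 2 takes the oldest item, 1
print(queue_detail([3, 3, 1, 2], E, "lcfm"))  # the arriving 2 takes the most recent item, 1
```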
As we cannot apply the arguments of Section \ref{subsec:proofnonexp},
we resort to a direct proof for both {\sc fcfm} (for which our argument is related to the proof of Lemma 4 in \cite{ABMW17}) and {\sc lcfm}.
We keep the notation of Definition \ref{def:subadd}, where we drop for short the dependence
on $\varsigma$ in the notations $M_{\textsc{fcfm}}(.)$ and $M_{\textsc{lcfm}}(.)$, as the various {\sc fcfm} and {\sc lcfm} matchings do not
depend on any list of preferences.
\paragraph{FCFM.}
Start with the policy {\sc fcfm}. We proceed in two steps:
{\bf Step I:} Let $|z'|=1$, and assume that $M_{\textsc{fcfm}}(z'')$ has $K$ unmatched items.
We need to show that $M_{\textsc{fcfm}}(z'z'')$ has at
most $K+1$ unmatched items.
There are three possible cases:
\begin{itemi}
\item[(a)] The item $z'_1$ is unmatched in $M_{\textsc{fcfm}}(z'z'') = M_{\textsc{fcfm}}(z'_1z'')$.
Then, by the definition of {\sc fcfm}, $z'_1 {\not\!\!\--} z''_j$ for any letter $z''_j$ of $z''$.
Again from the definition of {\sc fcfm}, the presence in line of this incompatible item $z'_1$ does not influence the choice of
match of any subsequent item of the word $z''$. Thus the matched pairs in $M_{\textsc{fcfm}}(z'z'')$ are exactly the ones in $M_{\textsc{fcfm}}(z'')$, so there are $K+1$ unmatched items in $M_{\textsc{fcfm}}(z'z'')$.
\item[(b)] The item $z'_1$ gets matched in $M_{\textsc{fcfm}}(z'z'')$ with an item $z''_{j_1}$ that is unmatched in $M_{\textsc{fcfm}}(z'')$. Then, any unmatched item in $M_{\textsc{fcfm}}(z'')$ remains unmatched in $M_{\textsc{fcfm}}(z'z'')$.
On the other hand, for any matched item $z''_i$ in $M_{\textsc{fcfm}}(z'')$ (let $z''_j$ be its match), either $z''_i {\not\!\!\--} z''_{j_1}$, in which case $z''_i$ chooses its match in $M_{\textsc{fcfm}}(z'z'')$ regardless of whether $z''_{j_1}$ is matched or not,
and thus chooses again $z''_j$; or $z''_i {\--} z''_{j_1}$, and then by the {\sc fcfm} property we have $j < j_1$, so $z''_i$ remains matched with $z''_j$ in $M_{\textsc{fcfm}}(z'z'')$. Therefore the matching induced by the letters of $z''$ in $M_{\textsc{fcfm}}(z'z'')$ remains precisely $M_{\textsc{fcfm}}(z'')$, so
$M_{\textsc{fcfm}}(z'z'')$ has $K-1$ unmatched items.
\item[(c)] The item $z'_1$ gets matched with an item $z''_{j_1}$ that was matched in $M_{\textsc{fcfm}}(z'')$ to some item $z''_{i_1}$. The {\sc fcfm} matching of $z'_{1}$ with $z''_{j_1}$
breaks the old match $(z''_{i_1}, z''_{j_1})$, so we now need to search for a new match for $z''_{i_1}$.
Either there is no {\sc fcfm} match for $z''_{i_1}$ and we stop, or we find a match $z''_{j_2}$.
The new pair $(z''_{i_1}, z''_{j_2})$ potentially broke an old pair $(z''_{i_2}, z''_{j_2})$.
We continue in this way, until either $z''_{i_k}$ cannot find a new match, or $z''_{j_k}$ was not previously matched; consequently,
$M_{\textsc{fcfm}}(z'z'')$ has $K+1$ unmatched items in the first case and $K-1$ in the second.
Observe that due to the {\sc fcfm} property, we have $i_{\ell} \leq i_{\ell +1}$ and $j_{\ell} \leq j_{\ell +1}$ for all $\ell< k$.
\end{itemi}
{\bf Step II:} Consider now an arbitrary finite word $z'$. Observe that if $(z'_i,z'_j) \in M_{\textsc{fcfm}}(z')$, then
$(z'_i,z'_j) \in M_{\textsc{fcfm}}(z'z'')$, as is the case for any admissible policy. Thus, denoting $w'= Q_{\textsc{fcfm}}(z')$, we have $Q_{\textsc{fcfm}}(z'z'')=Q_{\textsc{fcfm}}\left(w'z''\right)$.
Denote $w' = w'_1 \ldots w'_p$.
We will consider one by one the items in $w'$, starting from the right to the left. If we denote for all $1 \leq i \leq p$,
$M_{\textsc{fcfm}}^i= M_{\textsc{fcfm}}(w'_{p-i+1}\ldots w'_pz'')$ and $K_i$, the number of unmatched items in $M_{\textsc{fcfm}}^i$, Step I entails by an immediate induction that for all
$1 \leq i \leq p$, $K_{i} \leq i + \left|Q_{\textsc{fcfm}}(z'')\right|.$ Hence we
finally have \[\left|Q_{\textsc{fcfm}}(z'z'')\right|=K_p \le p + \left|Q_{\textsc{fcfm}}(z'')\right| = \left|Q_{\textsc{fcfm}}(z')\right|+\left|Q_{\textsc{fcfm}}(z'')\right|,\]
which concludes the proof for {\sc fcfm}.
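The sub-additivity just proved can also be stress-tested numerically. The following sketch (our own sanity check, not part of the proof) simulates {\sc fcfm} on random words over a hypothetical path graph $1{\--}2{\--}3{\--}4$ and verifies $\left|Q_{\textsc{fcfm}}(z'z'')\right| \le \left|Q_{\textsc{fcfm}}(z')\right|+\left|Q_{\textsc{fcfm}}(z'')\right|$ on each sample:

```python
# Numerical sanity check (not part of the proof) of the sub-additivity
# of FCFM, on random words over a hypothetical path graph 1--2--3--4.
import random

def fcfm_queue(word, edges):
    # FCFM: an arriving item takes the oldest compatible unmatched item.
    queue = []
    for item in word:
        compat = [k for k, q in enumerate(queue)
                  if frozenset((q, item)) in edges]
        queue.pop(compat[0]) if compat else queue.append(item)
    return queue

E = {frozenset((1, 2)), frozenset((2, 3)), frozenset((3, 4))}
rng = random.Random(0)
for _ in range(2000):
    z1 = [rng.randint(1, 4) for _ in range(rng.randint(0, 8))]
    z2 = [rng.randint(1, 4) for _ in range(rng.randint(0, 8))]
    assert len(fcfm_queue(z1 + z2, E)) <= len(fcfm_queue(z1, E)) + len(fcfm_queue(z2, E))
print("sub-additivity holds on all sampled words")
```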
\medskip
\paragraph{LCFM.}
We now turn to {\sc lcfm}, for which we apply the same procedure as above.
{\bf Step I:} Set $|z'|=1$, and assume that $M_{\textsc{lcfm}}(z'')$ has $K$ unmatched items. The three different cases are the same as above:
\begin{itemize}
\item[(a)] If $z'_1$ is unmatched in $M_{\textsc{lcfm}}(z'z'')$, then $z'_1$ is incompatible with $z''_1$, otherwise the two items would have been matched.
In turn, it follows from the definition of {\sc lcfm} that the presence in line of $z'_1$ does not influence the choice of match of any item
$z''_j$ that is matched in $M_{\textsc{lcfm}}(z'')$, even if $z'_1{\--} z''_j$. So $M_{\textsc{lcfm}}\left(z'z''\right)$ has exactly $K+1$ unmatched items.
\item[(b)] Whenever $z'_1$ is matched in $M_{\textsc{lcfm}}(z'z'')$ with an item $z''_{j_1}$ that was unmatched in $M_{\textsc{lcfm}}(z''),$ any
matched item $z''_i$ in $M_{\textsc{lcfm}}(z'')$ that is compatible with $z''_{j_1}$ has found in $z''$ a more recent compatible match $z''_j$.
The matching of $z''_i$ with $z''_j$ still occurs in $M_{\textsc{lcfm}}(z'z'')$.
Thus, as above, the matching induced in $M_{\textsc{lcfm}}(z'z'')$ by the nodes of $z''$ is not affected by the
match $(z'_1,z''_{j_1})$, so there are $K-1$ unmatched items in $M_{\textsc{lcfm}}(z'z'')$.
\item[(c)] Suppose now that $z'_1$ is matched with an item $z''_{j_1}$ that is matched in $M_{\textsc{lcfm}}(z'')$.
We proceed as for {\sc fcfm}, by constructing the new corresponding matchings $\left(z'_1,z''_{j_1}\right)$, $\left(z''_{i_1},z''_{j_2}\right)$, $\left(z''_{i_2},z''_{j_3}\right)$,
and so on, until we reach the same conclusion as for {\sc fcfm} (with the only difference that in {\sc lcfm} the indexes $i_1,i_2,...$ and $j_1,j_2,...$ are not necessarily ordered increasingly).
\end{itemize}
Therefore, at Step I we reach the same conclusions as for {\sc fcfm}.
{\bf Step II:} The construction for {\sc fcfm} remains valid for any
admissible policy, and in particular for {\sc lcfm}.
\subsection{Erasing words}
\label{subsec:erase}
The concepts of {\em erasing words} and {\em strong} erasing words will also be useful in the construction below.
\begin{definition}
Let $G=({\mathcal{V}},{\mathcal{E}})$ be a connected graph, and $\phi$ be an admissible matching policy. Let $u \in \mathbb W_2$.
We say that the word $z\in {\mathcal{V}}^*$ is an {\em erasing word} of $u$ for $(G,\phi)$ if $|z|$ is even and
for any two words $\varsigma'$ and $\varsigma$ possibly drawn by $\nu_\phi$ on $\mathcal S^*$ and having respectively the same size as $z$ and $u$, we have that
\begin{equation}
\label{eq:deferase}
Q_\phi\left(z,\varsigma'\right)=\emptyset\quad \quad \mbox{ and }\quad\quad Q_\phi\left(uz,\varsigma\varsigma'\right)=\emptyset.
\end{equation}
\end{definition}
In other words, an erasing word of $u$ has the twofold property of being perfectly matchable by $\phi$ alone, and together with $u$.
The following proposition guarantees the existence of erasing words for any stabilizable graph and any sub-additive policy.
\begin{proposition}
\label{pro:erasing}
Let $G$ be a non-bipartite graph and $\phi$ be a sub-additive matching policy. Then any word $u\in \mathbb W_2$ admits an erasing
word for $(G,\phi)$.
\end{proposition}
\begin{proof}
As will be clear below, the arguments of this proof do not depend on the drawn lists of preferences, as long as they are fixed upon arrival. For notational convenience, we thus skip this parameter from all notations
(i.e. we write for instance $Q_{\phi}(u)$ instead of $Q_\phi(u,\varsigma)$, and so on). We first show that any admissible word of size 2 admits an erasing word $y$; so let us consider a word $ij$ where $i {\not\!\!\--} j$.
\medskip
As $G$ is connected, $i$ and $j$ are connected at distance, say, $p\ge 2$, i.e. there exists a minimal path $i {\--} i_1 {\--} ... {\--} i_{p-1} {\--} j$ connecting $i$ to $j$.
If $p$ is odd, then just set $y=i_1i_2...i_{p-1}$. Clearly, $Q_\phi(y)=\emptyset$ and, as the path is minimal, in $M_\phi(ijy)$ $i_1$ is matched with $i$, $i_3$ is matched with $i_2$, and so on,
until $i_{p-1}$ is matched with $j$. So $Q_\phi(ijy)=\emptyset$, and (\ref{eq:deferase}) follows.
We now assume that $p$ is even. Set $y^1=i_1i_2...i_{p-1}i_{p-1}$. Then,
in $M_\phi(ijy^1)$ $i_1$ is matched with $i$, $i_3$ with $i_2$, and so on, until both $j$ and $i_{p-2}$ are matched with an $i_{p-1}$ item. So $Q_\phi(ijy^1)=\emptyset$, however
$Q_\phi(y^1) = i_{p-1}i_{p-1}$. But as $G$ is non-bipartite, it contains an odd cycle. Thus (see e.g. the proof of Lemma 3 in \cite{MoyPer17}) there necessarily exists an {\em induced} odd cycle in $G$,
say of length $2r+1$, $r \ge 1$. As $G$ is connected, there exists a path connecting $i_{p-1}$ to each element of the latter cycle. Take the shortest such path (which may intersect with the path between $i$ and $j$, or coincide with a part of it), and denote it $i_{p-1} {\--} j_1 {\--} j_2 {\--} ... {\--} j_q {\--} k_1$, where $k_1$ is the first element of the latter path belonging to the odd cycle, and by $k_1 {\--} k_2 {\--} ... {\--} k_{2r+1} {\--} k_1$, the elements of the cycle.
See an example in Figure \ref{Fig:erasingpath}.
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}
\draw[-,very thick] (0.5,0) -- (1,1);
\draw[-,very thick] (1,1) -- (1.5,1.2);
\draw[-,very thick] (1.5,1.2) -- (2.2,1.2);
\draw[-,dotted] (2.2,1.2) -- (3.2,1.2);
\draw[-,very thick] (3.2,1.2) -- (4,1.2);
\draw[-,very thick] (4,1.2) -- (4.5,0.7);
\draw[-,very thick] (4.5,0.7) -- (5.2,1.7);
\draw[-,very thick] (4.5,0.7) -- (3.5,0.7);
\draw[-,very thick] (3.5,0.7) -- (2.5,0.5);
\draw[-,very thick] (2.5,0.5) -- (2.625,-0.125);
\draw[-,dotted] (2.5,0.5) -- (2.75,-0.75);
\draw[-,very thick] (2.75,-0.75) -- (3,-1);
\draw[-,very thick] (3,-1) -- (3.5,-0.5);
\draw[-,very thick] (3,-1) -- (3.5,-1.5);
\draw[-,very thick] (3.5,-1.5) -- (4,-1.5);
\draw[-,dotted] (4,-1.5) -- (5.5,-1.5);
\draw[-,dotted] (5,-1.5) -- (5.5,-1.5);
\draw[-,very thick] (5.5,-1.5) -- (5.5,-0.5);
\draw[-,very thick] (3.5,-0.5) -- (4,-0.5);
\draw[-,dotted] (4,-0.5) -- (5.5,-0.5);
\draw[-,dotted] (5,-1.5) -- (5.5,-1.5);
\draw[-,very thick] (5,-0.5) -- (5.5,-0.5);
\draw[-,very thick] (5,-1.5) -- (5.5,-1.5);
\fill (0.5,0) circle (2pt) node[above] {\small{$i$}} ;
\fill (1,1) circle (2pt) node[above left] {\small{$i_1$}} ;
\fill (1.5,1.2) circle (2pt) node[above right] {\small{$i_2$}} ;
\fill (2.2,1.2) circle (2pt) node[above right] {\small{$i_3$}} ;
\fill (3.2,1.2) circle (2pt) node[above] {\small{$i_{p-3}$}} ;
\fill (4,1.2) circle (2pt) node[right] {\small{$i_{p-2}$}} ;
\fill (4.5,0.7) circle (2pt) node[below] {\small{$i_{p-1}$}} ;
\fill (5.2,1.7) circle (2pt) node[above] {\small{$j$}} ;
\fill (3.5,0.7) circle (2pt) node[below] {\small{$j_{1}$}} ;
\fill (2.5,0.5) circle (2pt) node[below right] {\small{$j_{2}$}} ;
\fill (2.625,-0.125) circle (2pt) node[right] {\small{$j_{3}$}} ;
\fill (2.75,-0.75) circle (2pt) node[left] {\small{$j_{q}$}} ;
\fill (3,-1) circle (2pt) node[below] {\small{$k_{1}$}} ;
\fill (3.5,-0.5) circle (2pt) node[above] {\small{$k_{2r+1}$}} ;
\fill (3.5,-1.5) circle (2pt) node[below] {\small{$k_{2}$}} ;
\fill (4,-0.5) circle (2pt) node[below] {\small{$k_{2r}$}} ;
\fill (5,-0.5) circle (2pt) node[above] {\small{$k_{r+3}$}} ;
\fill (4,-1.5) circle (2pt) node[below] {\small{$k_{3}$}} ;
\fill (5.5,-0.5) circle (2pt) node[above right] {\small{$k_{r+2}$}} ;
\fill (5.5,-1.5) circle (2pt) node[below right] {\small{$k_{r+1}$}} ;
\fill (5,-1.5) circle (2pt) node[below] {\small{$k_{r}$}} ;
\end{tikzpicture}
\caption[smallcaption]{The path from $i$ to $j$ and then to an odd cycle}
\label{Fig:erasingpath}
\end{center}
\end{figure}
Then set
\[y^2 = j_1 j_1 j_2 j_2 ... j_q j_q k_1 k_1 k_2 k_3 ... k_{2r} k_{2r+1}.\]
We are in the following alternative:
\begin{itemize}
\item if $q$ is even, then in $M_\phi(y^1y^2)$ the two nodes $i_{p-1}$ are matched with the two nodes $j_1$, the two $j_2$ with the two $j_3$, and so on, until the two $j_{q}$ are matched with the two $k_1$, and then, as the
cycle is induced, $k_2$ is matched with $k_3$, $k_4$ with $k_5$ and so on, until $k_{2r}$ is matched with $k_{2r+1}$.
On the other hand, in $M_\phi(y^2)$, the two $j_1$ are matched with the two $j_2$, the two $j_3$ with the two $j_4$, and so on, until the two $j_{q-1}$ are matched with the two $j_q$.
Then, a $k_1$ is matched with $k_2$, $k_3$ with $k_4$ and so on, until $k_{2r-1}$ is matched with $k_{2r}$ and $k_{2r+1}$ is matched with the remaining $k_1$.
\item if $q$ is odd, then the edges of $M_\phi(y^1y^2)$ are as in the first case, until the two nodes $j_{q-1}$ are matched with the two nodes $j_q$. But then, whatever $\phi$ is,
one of the two nodes $k_1$ is matched with $k_2$, $k_3$ with $k_4$, and so on, until $k_{2r-1}$ is matched with $k_{2r}$, and $k_{2r+1}$ is matched with the remaining $k_1$.
Also, in $M_\phi(y^2)$, the two $j_1$ are matched with the two $j_2$, the two $j_3$ with the two $j_4$, and so on, until the two $j_{q-2}$ are matched with the two $j_{q-1}$.
Then, the two $j_q$ are matched with the two $k_1$, $k_2$ is matched with $k_3$, and so on, until $k_{2r}$ is matched with $k_{2r+1}$.
\end{itemize}
In both cases, we obtain that both $Q_\phi(y^1y^2)=\emptyset$ and that $Q_\phi(y^2)=\emptyset$. In particular, as $Q_\phi(ijy^1)=\emptyset$ we have $Q_\phi(ijy^1y^2)=\emptyset$.
Therefore $y=y^1y^2$ is an erasing word for $ij$. See an example in Figure \ref{Fig:erasingword}.
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}[scale=0.95]
\draw[-] (-0.5,0) -- (15.5,0);
\fill (-0.5,0) circle (1.5pt) node[below] {\tiny{$i$}};
\fill (0,0) circle (1.5pt) node[below] {\tiny{$j$}};
\fill (0.5,0) circle (1.5pt) node[below] {\tiny{$i_1$}};
\draw[-, thin] (-0.5,0) .. controls +(up:0.5cm) .. (0.5,0);
\fill (1,0) circle (1.5pt) node[below] {\tiny{$i_2$}};
\fill (1.5,0) circle (1.5pt) node[below] {\tiny{$i_3$}};
\draw[-, thin] (1,0) .. controls +(up:0.5cm) .. (1.5,0);
\fill (2.5,0) node[above] {\small{...}};
\fill (3.5,0) circle (1.5pt) node[below] {\tiny{$i_{p\!-\!2}$}};
\fill (4,0) circle (1.5pt) node[below] {\tiny{\,\,$i_{p\!-\!1}$}};
\draw[-, thin] (3.5,0) .. controls +(up:0.5cm) .. (4,0);
\fill (4.5,0) circle (1.5pt) node[below] {\tiny{\,\,$i_{p\!-\!1}$}};
\draw[-, thin] (0,0) .. controls +(up:0.75cm) .. (4.5,0);
\fill (5,0) circle (1.5pt) node[below] {\tiny{$j_1$}};
\fill (5.5,0) circle (1.5pt) node[below] {\tiny{\,\,$j_1$}};
\fill (6,0) circle (1.5pt) node[below] {\tiny{$j_2$}};
\fill (6.5,0) circle (1.5pt) node[below] {\tiny{\,\,$j_2$}};
\draw[-, thin] (5,0) .. controls +(up:0.5cm) .. (6,0);
\draw[-, thin] (5.5,0) .. controls +(up:0.5cm) .. (6.5,0);
\fill (8,0) node[above] {\small{...}};
\fill (9.5,0) circle (1.5pt) node[below] {\tiny{$j_{q\!-\!1}$}};
\fill (10,0) circle (1.5pt) node[below] {\tiny{\,\,$j_{q\!-\!1}$}};
\fill (10.5,0) circle (1.5pt) node[below] {\tiny{$j_q$}};
\fill (11,0) circle (1.5pt) node[below] {\tiny{\,\,$j_q$}};
\draw[-, thin] (9.5,0) .. controls +(up:0.5cm) .. (10.5,0);
\draw[-, thin] (10,0) .. controls +(up:0.5cm) .. (11,0);
\fill (11.5,0) circle (1.5pt) node[below] {\tiny{$k_1$}};
\fill (12,0) circle (1.5pt) node[below] {\tiny{\,\,$k_1$}};
\fill (12.5,0) circle (1.5pt) node[below] {\tiny{\,\,$k_2$}};
\draw[-, thin] (11.5,0) .. controls +(up:0.5cm) .. (12.5,0);
\fill (13,0) circle (1.5pt) node[below] {\tiny{\,\,$k_3$}};
\fill (13.5,0) circle (1.5pt) node[below] {\tiny{\,\,$k_4$}};
\draw[-, thin] (13,0) .. controls +(up:0.5cm) .. (13.5,0);
\fill (14,0) node[above] {\small{...}};
\fill (14.5,0) circle (1.5pt) node[below] {\tiny{$k_{2r\!-\!1}$}};
\fill (15,0) circle (1.5pt) node[below] {\tiny{\,\,$k_{2r}$}};
\draw[-, thin] (14.5,0) .. controls +(up:0.5cm) .. (15,0);
\fill (15.5,0) circle (1.5pt) node[below] {\tiny{\,\,\,\,\,$k_{2r+1}$}};
\draw[-, thin] (12,0) .. controls +(up:0.5cm) .. (15.5,0);
\draw[-] (0.5,-1.5) -- (15.5,-1.5);
\fill (0.5,-1.5) circle (1.5pt) node[below] {\tiny{$i_1$}};
\fill (1,-1.5) circle (1.5pt) node[below] {\tiny{$i_2$}};
\draw[-, thin] (0.5,-1.5) .. controls +(up:0.5cm) .. (1,-1.5);
\fill (1.5,-1.5) circle (1.5pt) node[below] {\tiny{$i_3$}};
\fill (2,-1.5) circle (1.5pt) node[below] {\tiny{$i_4$}};
\draw[-, thin] (1.5,-1.5) .. controls +(up:0.5cm) .. (2,-1.5);
\fill (2.5,-1.5) node[above] {\small{...}};
\fill (3,-1.5) circle (1.5pt) node[below]{\tiny{$i_{p\!-\!3}$}};
\fill (3.5,-1.5) circle (1.5pt) node[below] {\,\tiny{$i_{p\!-\!2}$}};
\draw[-, thin] (3,-1.5) .. controls +(up:0.5cm) .. (3.5,-1.5);
\fill (4,-1.5) circle (1.5pt) node[below] {\tiny{\,\,$i_{p\!-\!1}$}};
\fill (4.5,-1.5) circle (1.5pt) node[below] {\tiny{\,\,\,\,$i_{p\!-\!1}$}};
\fill (5,-1.5) circle (1.5pt) node[below] {\tiny{\,$j_1$}};
\fill (5.5,-1.5) circle (1.5pt) node[below] {\tiny{\,\,$j_1$}};
\draw[-, thin] (4,-1.5) .. controls +(up:0.5cm) .. (5,-1.5);
\draw[-, thin] (4.5,-1.5) .. controls +(up:0.5cm) .. (5.5,-1.5);
\fill (6,-1.5) circle (1.5pt) node[below] {\tiny{$j_2$}};
\fill (6.5,-1.5) circle (1.5pt) node[below] {\tiny{\,\,$j_2$}};
\fill (7,-1.5) circle (1.5pt) node[below] {\tiny{$j_3$}};
\fill (7.5,-1.5) circle (1.5pt) node[below] {\tiny{\,\,$j_3$}};
\draw[-, thin] (7,-1.5) .. controls +(up:0.5cm) .. (6,-1.5);
\draw[-, thin] (7.5,-1.5) .. controls +(up:0.5cm) .. (6.5,-1.5);
\fill (8,-1.5) node[above] {\small{...}};
\fill (8.5,-1.5) circle (1.5pt) node[below]{\tiny{$j_{q\!-\!2}$}};
\fill (9,-1.5) circle (1.5pt) node[below] {\,\tiny{$j_{q\!-\!2}$}};
\fill (9.5,-1.5) circle (1.5pt) node[below] {\tiny{\,\,$j_{q\!-\!1}$}};
\fill (10,-1.5) circle (1.5pt) node[below] {\tiny{\,\,\,\,$j_{q\!-\!1}$}};
\draw[-, thin] (8.5,-1.5) .. controls +(up:0.5cm) .. (9.5,-1.5);
\draw[-, thin] (9,-1.5) .. controls +(up:0.5cm) .. (10,-1.5);
\fill (10.5,-1.5) circle (1.5pt) node[below] {\tiny{$j_q$}};
\fill (11,-1.5) circle (1.5pt) node[below] {\tiny{\,\,$j_q$}};
\fill (11.5,-1.5) circle (1.5pt) node[below] {\tiny{$k_1$}};
\fill (12,-1.5) circle (1.5pt) node[below] {\tiny{\,\,$k_1$}};
\draw[-, thin] (10.5,-1.5) .. controls +(up:0.5cm) .. (11.5,-1.5);
\draw[-, thin] (11,-1.5) .. controls +(up:0.5cm) .. (12,-1.5);
\fill (12.5,-1.5) circle (1.5pt) node[below] {\tiny{\,\,$k_2$}};
\fill (13,-1.5) circle (1.5pt) node[below] {\tiny{\,\,$k_3$}};
\draw[-, thin] (12.5,-1.5) .. controls +(up:0.5cm) .. (13,-1.5);
\fill (14,-1.5) node[above] {\small{...}};
\fill (15,-1.5) circle (1.5pt) node[below] {\tiny{$k_{2r}$}};
\fill (15.5,-1.5) circle (1.5pt) node[below] {\tiny{\,\,\,\,$k_{2r+1}$}};
\draw[-, thin] (15,-1.5) .. controls +(up:0.5cm) .. (15.5,-1.5);
\end{tikzpicture}
\caption[smallcaption]{The two perfect matchings $M_{\textsc{fcfm}}(ijy^1y^2)$ and $M_{\textsc{fcfm}}(y^1y^2)$, for an even $p$ and an odd $q$.}
\label{Fig:erasingword}
\end{center}
\end{figure}
\medskip
We now consider any word $u \in \mathbb W_2$, say $u=u_1u_2...u_{2r_1}$. First, as we just proved, there exists an erasing word, say $z^{1}$, for the two-letter word $u_{2r_1-1}u_{2r_1}$.
In particular, we have that $Q_\phi\left(u_{2r_1-1}u_{2r_1}z^1\right)=\emptyset.$ Thus, the sub-additivity of $\phi$ entails that
\[\left|Q_\phi\left(uz^1\right)\right| \le \left|Q_\phi\left(u_1u_2...u_{2r_1-2}\right)\right|+\left|Q_\phi\left(u_{2r_1-1}u_{2r_1}z^1\right)\right| = \left|Q_\phi\left(u\right)\right| - 2,\]
in other words the input of $z^1$ strictly decreases the size of the buffer content $u$, that is, if we let $u^{2}=Q_\phi\left(uz^1\right)$, then $u^2$ is of even length $2r_2$, where $r_2 <r_1$.
We then apply the same argument as above for $u^{2}$ instead of $u$: there exists an erasing word $z^2$ for the two-letter word $u^{2}_{2r_2-1}u^{2}_{2r_2}$ gathering the last two letters of $u^{2}$, so
as above,
\[\left|Q_\phi\left(uz^1z^2\right)\right| = \left|Q_\phi\left(u^{2}z^2\right)\right| \le \left|Q_\phi\left(u^{2}\right)\right| - 2 .\]
We can continue this construction by induction, until we reach an index $\ell$ such that
\begin{equation}
\label{eq:OK}
Q_\phi\left(uz^1z^2...z^\ell\right) = \emptyset.
\end{equation}
Observe that, as $z^1,...z^\ell$ are all erasing words, we have that $Q_\phi(z^1)=Q_\phi(z^2)=...=Q_\phi(z^\ell)=\emptyset.$
Thus $Q_\phi(z^1z^2...z^\ell)=\emptyset$, which shows, together with (\ref{eq:OK}), that $z=z^1z^2...z^\ell$ is an erasing word for $u$.
\end{proof}
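As a concrete illustration (our own example, in the spirit of the construction above): on the $5$-cycle $1{\--}2{\--}3{\--}4{\--}5{\--}1$, with $i=1$ and $j=3$ (at distance $p=2$), the word $y=223451$ is an erasing word of $ij=13$ for both {\sc fcfm} and {\sc lcfm}, as the following sketch checks:

```python
# Sketch (our own check): on the 5-cycle, y = 2 2 3 4 5 1 is an erasing
# word of u = 1 3 under both FCFM and LCFM, i.e. both Q(y) and Q(uy)
# are empty.

def queue_detail(word, edges, policy):
    # Greedy matching: an arriving item takes the oldest (FCFM) or most
    # recent (LCFM) compatible unmatched item, if any.
    queue = []
    for item in word:
        compat = [k for k, q in enumerate(queue)
                  if frozenset((q, item)) in edges]
        if compat:
            queue.pop(compat[0] if policy == "fcfm" else compat[-1])
        else:
            queue.append(item)
    return queue

cycle = [1, 2, 3, 4, 5]
E = {frozenset((cycle[k], cycle[(k + 1) % 5])) for k in range(5)}
u, y = [1, 3], [2, 2, 3, 4, 5, 1]
for policy in ("fcfm", "lcfm"):
    assert queue_detail(y, E, policy) == []      # y is perfectly matchable alone
    assert queue_detail(u + y, E, policy) == []  # and together with u = 13
```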
Clearly, uniqueness of the erasing word does not hold. In particular, if $z^1$ and $z^2$ are both erasing words of the same word $u$ for $(G,\phi)$, then so is $z^1z^2$.
Hence the following definition:
\begin{definition}
Let $u \in \mathbb W_2$. An erasing word $z$ of $u$ for $(G,\phi)$ is said to be {\em reduced}, if $z$ cannot be written as $z=z^1z^2$, where $z^1$ and $z^2$ are both non-empty erasing words of $u$.
A reduced erasing word $z$ of $u$ is said to be {\em minimal}, if it is of minimal length among all reduced erasing words of $u$.
\end{definition}
\begin{definition}
A word $z \in {\mathcal{V}}^*$ of even length $2p$ is said to be a {\em strong erasing word} for the graph $G=({\mathcal{V}},{\mathcal{E}})$ and the matching policy $\phi$ if
\begin{enumerate}
\item $z$ is completely matchable by $\phi$ together with any two-letter word, i.e. for any $i,j \in {\mathcal{V}}$ such that $i {\not\!\!\--} j$, and any two words $\varsigma'$ and $\varsigma$ of $\mathcal S^*$
whose letters can possibly be drawn by $\nu_\phi$, and of respective lengths $2p$ and $2$, we have that $Q_\phi\left(ijz,\varsigma\varsigma'\right)=\emptyset$;
\item any right sub-word of $z$ of even length is completely matchable by $\phi$, i.e. for any $\ell \in \llbracket 0,p-1 \rrbracket$ and any $\varsigma'$ of length $2p$ and whose letters can possibly be drawn by $\nu_\phi$,
$Q_\phi\left(z_{2\ell+1}...z_{2p},\varsigma'_{2\ell+1}...\varsigma'_{2p}\right)=\emptyset$.
\end{enumerate}
\end{definition}
Plainly, a strong erasing word for $(G,\phi)$ is an erasing word for any two-letter word $ij$ with $i{\not\!\!\--} j$. Also observe that condition 2 above is typically met whenever
the letters of $z$ form a cycle of $G$; this fact will be exploited below.
\begin{lemma}
\label{lemma:erasing}
Let $\phi$ be a sub-additive matching policy and $z$ be a strong erasing word for $G=({\mathcal{V}},{\mathcal{E}})$ and $\phi$. Then for any $\varsigma' \in \mathcal S^*$ of length $|z|$,
any word $u \in \mathbb W_2$ and any $\varsigma\in \mathcal S^*$ of length $|u|$, we have that $\left|Q_\phi(uz,\varsigma\varsigma')\right| \le \left|Q_\phi(u,\varsigma)\right|-2$.
\end{lemma}
\begin{proof}
From the sub-additivity of $\phi$, if $|u|=2r$,
\begin{align*}
\left|Q_\phi(uz,\varsigma\varsigma')\right|
&\le \left|Q_\phi\left(u_1...u_{2r-2},\varsigma_1...\varsigma_{2r-2}\right)\right| + \left|Q_\phi\left(u_{2r-1}u_{2r}z,\varsigma_{2r-1}\varsigma_{2r}\varsigma'\right)\right|\\
&= \left|Q_\phi\left(u_1...u_{2r-2},\varsigma_1...\varsigma_{2r-2}\right)\right|=\left|Q_\phi(u,\varsigma)\right|-2,
\end{align*}
using the fact that $u_1...u_{2r-2}\in \mathbb W_2$.
\end{proof}
To address the question of existence of strong erasing words for a given pair $(G,\phi)$, we need the following Lemma:
\begin{lemma}
\label{lemma:spanning}
Any connected non-bipartite graph $G=({\mathcal{V}},{\mathcal{E}})$ can be spanned by an odd cycle, i.e. there exists
a cycle of odd length in which all the nodes of ${\mathcal{V}}$ appear at least once.
\end{lemma}
\begin{proof}
As $G$ is non-bipartite, $G$ contains an odd cycle $\mathcal C:=c_1 {\--} c_2 {\--} \, ... \, {\--} c_{2q+1} {\--} c_1.$
Let $p \in {\mathbb N}$ be the number of nodes of ${\mathcal{V}}$ which do not appear in the latter cycle, and denote by
$i_1,...,i_p$, these nodes. By connectedness, there exists for any $j \in \llbracket 1,p \rrbracket$, a minimal path ${\mathcal{P}}_j$ of length, say, $\ell_j$,
from $c_1$ to $i_j$. Then, we can connect $c_1$ to itself by following, first, the cycle $\mathcal C$, and then all the paths ${\mathcal{P}}_j$ from $c_1$ to $i_j$ and then the reversed path of ${\mathcal{P}}_j$ from $i_j$ to $c_1$,
successively for all
$j \in \llbracket 1,p \rrbracket$. The resulting path is a cycle connecting $c_1$ to itself and spanning the whole set ${\mathcal{V}}$, and its length is $2q+1 + \sum_{j=1}^p 2\ell_j$, an odd number.
\end{proof}
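This proof is constructive and can be turned into a short routine. The sketch below (helper names are ours) starts from a given odd cycle, then appends a round trip from $c_1$ to each uncovered node, producing a closed walk of odd length through all of ${\mathcal{V}}$:

```python
# Constructive sketch of the spanning lemma (helper names are ours):
# given a connected non-bipartite graph (as an adjacency dict) and one of
# its odd cycles, build a closed walk of odd length visiting every node.
from collections import deque

def shortest_path(adj, s, t):
    # Plain BFS from s; rebuild the path to t from the predecessor map.
    prev = {s: None}
    dq = deque([s])
    while dq:
        u = dq.popleft()
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                dq.append(v)
    path = [t]
    while prev[path[-1]] is not None:
        path.append(prev[path[-1]])
    return path[::-1]

def spanning_odd_walk(adj, odd_cycle):
    walk = list(odd_cycle) + [odd_cycle[0]]      # c_1 ... c_{2q+1} c_1
    for v in adj:
        if v not in odd_cycle:
            p = shortest_path(adj, odd_cycle[0], v)
            walk += p[1:] + p[-2::-1]            # round trip: even length
    return walk

# Example: a triangle 1-2-3 with a pendant node 4 attached to 3.
adj = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
print(spanning_odd_walk(adj, [1, 2, 3]))  # -> [1, 2, 3, 1, 3, 4, 3, 1]
```

The walk has $2q+1+\sum_j 2\ell_j$ edges, hence odd length, in accordance with the proof.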
Let us recall (see \cite{MaiMoy16} for details) that a connected graph $G=({\mathcal{V}},{\mathcal{E}})$ is said to be {\em separable of order $p$},
$p\ge 2$, if there exists a partition of ${\mathcal{V}}$ into maximal independent sets ${\mathcal{I}}_1,\dots, {\mathcal{I}}_p$, such that
\[
\forall i\neq j,\, \forall u \in {\mathcal{I}}_i,\, \forall v \in {\mathcal{I}}_j,\,\,u {\--} v \:.
\]
In particular, a separable graph $G$ is non-bipartite if and only if its order is at least 3.
\begin{proposition}
\label{pro:erasing2}
The following conditions are sufficient for the existence of a strong erasing word for $(G,\phi)$:
\begin{itemize}
\item[(i)] $G$ is non-bipartite separable and $\phi$ is any admissible policy;
\item[(ii)] $G$ is non-bipartite and $\phi=\textsc{lcfm}$.
\end{itemize}
\end{proposition}
\begin{proof}
\begin{itemize}
\item[(i)] Suppose that $G$ is separable of order $p \ge 3$, and let ${\mathcal{I}}_1,...,{\mathcal{I}}_p$ be the corresponding maximal independent sets.
Let $z$ be a word of length $2p$ which contains exactly two letters of each maximal independent set, but whose last two letters do not represent the same independent set.
Then it is immediate that $z$, as well as any right sub-word of $z$ of even length, is completely matchable by any $\phi$. Second, let $i$ and $j \in {\mathcal{V}}$ be such that $i {\not\!\!\--} j$, which is the case if and only if $i$ and $j$ belong to the same maximal independent set, say ${\mathcal{I}}_\ell$. Then it is immediate that $Q_\phi(ijz)=\emptyset$ for any $\phi$, since any incoming item of a class in any other independent set than ${\mathcal{I}}_\ell$ can be matched on the fly with any element of ${\mathcal{I}}_\ell$.
\item[(ii)]
As an immediate consequence of Lemma \ref{lemma:spanning}, there exists a cycle ${\mathscr{C}}=c_1{\--} c_2 {\--} ... {\--} c_{2q+1}$ such that
\begin{equation}
\label{eq:voisC}
{\mathcal{E}}\left(\left\{c_1,c_2,...,c_{2q+1}\right\}\right)={\mathcal{V}},
\end{equation}
which is true in particular for the odd cycle that spans ${\mathcal{V}}$.
We let $z$ be the word consisting of all the nodes of ${\mathscr{C}}$ visited 4 times in that order, i.e.
\[z=c_1c_2...c_{2q+1}c_1...c_{2q+1}c_1...c_{2q+1}c_1...c_{2q+1}.\]
We again drop the lists of preferences from all notation. First observe that, as ${\mathscr{C}}$ is a cycle, we clearly get
$Q_{\textsc{lcfm}}(z)=\emptyset$, as is the case for any admissible policy. Second, as ${\mathscr{C}}$ is a cycle, it is also clear that any right sub-word of $z$ of even size is completely matchable by any admissible policy.
Now fix $i$ and $j$ in ${\mathcal{V}}$ such that $i {\not\!\!\--} j$. We need to show that $Q_{\textsc{lcfm}}(ijz)=\emptyset$.
For this let us define the following sets for $k \in \{i,j\}$,
\begin{align*}
\mathcal H(k) &= \left\{\mbox{even indexes $2\ell$ in }\llbracket 1, 2q+1 \rrbracket\,:\,c_{2\ell} {\--} k \right\};\\
\mathcal O(k) &= \left\{\mbox{odd indexes $2\ell+1$ in }\llbracket 1, 2q+1 \rrbracket\,:\,c_{2\ell+1} {\--} k \right\}.
\end{align*}
We are in the following alternative:
\begin{enumerate}
\item[Case 1:] $\mathcal O(i) \cup \mathcal O(j) \ne\emptyset$, i.e. $i$ or $j$ (or both) neighbor a node of odd index in ${\mathscr{C}}$. Let $2p+1=\min \mathcal O(i) \cup \mathcal O(j)$.
First observe that, by the definition of {\sc lcfm} all items of even indexes in $\llbracket 1,2p \rrbracket$ are matched with the immediate preceding item of odd index, so the entering
$c_{2p+1}$ item finds only $i$ and $j$ in the system, and is matched with $j$ if $c_{2p+1} {\--} j$, or with $i$ if $j {\not\!\!\--} c_{2p+1}$ and $i {\--} c_{2p+1}$. Let us assume that we are in the first case, the second one can be treated analogously. So we have $Q_{\textsc{lcfm}}\left(ijc_1...c_{2p+1}\right)=i$. Let us now define
\begin{equation*}
\tilde{\mathcal H}(i) = \left\{\mbox{even indexes $2\ell$ in }\llbracket 2p+2, 2q \rrbracket\,:\,c_{2\ell} {\--} i \right\}.
\end{equation*}
We have three sub-cases:
\begin{itemize}
\item[Sub-case 1a:] $\tilde{\mathcal H}(i) \ne \emptyset$.\quad Set $2r=\min \tilde{\mathcal H}(i)$. Then the $i$ item is matched with $c_{2r}$. Indeed, in {\sc lcfm} all items of odd indexes in
$\llbracket 2p+2, 2r \rrbracket$ are matched with the immediate preceding item, even if they are compatible with $i$.
Then, after the $i$ item is matched with the $c_{2r}$ item, all items of odd indexes in $\llbracket 2r+1,2q-1 \rrbracket$ (if the latter is non-empty) in the first exploration of ${\mathscr{C}}$ are matched
with the immediate following item, until the first $c_{2q+1}$ item is matched with the second $c_1$ item. After that, in the second exploration of ${\mathscr{C}}$ all items of even indexes are matched with
the following item of odd index, until the second $c_{2q}$ item is matched with the second $c_{2q+1}$ item, so we get a perfect matching of $ij$ with the first two explorations of ${\mathscr{C}}$.
Then the last two visits of ${\mathscr{C}}$ are perfectly matched on the fly, since ${\mathscr{C}}$ is a cycle. So $Q_{\textsc{lcfm}}(ijz)=\emptyset.$
\item[Sub-case 1b:] $\tilde{\mathcal H}(i)=\emptyset$ and $\mathcal O(i) \ne \emptyset$.\quad
Due to the {\sc lcfm} policy, in the first exploration of ${\mathscr{C}}$ all odd items are matched with the immediate preceding
item of even index, until $c_{2q+1}$, in a way that $Q_{\textsc{lcfm}}(ijc_1...c_{2q+1})=i$. Let $2s+1=\min \mathcal O(i)$. Then the remaining $i$ item is matched with the second $c_{2s+1}$, since
in {\sc lcfm}, all items of even indexes less than $2s+1$ that are compatible with $i$, are matched with the preceding item of odd index. After that, if $s<q$ then all remaining items of
even indexes in the second exploration of ${\mathscr{C}}$ are matched with the immediate following item, until the second $c_{2q}$ item is matched with the second $c_{2q+1}.$
Thus $Q_{\textsc{lcfm}}(ijz)=\emptyset$, and we conclude as in 1a.
\item[Sub-case 1c:] $\tilde{\mathcal H}(i)=\emptyset$ and $\mathcal O(i) = \emptyset$.\quad From (\ref{eq:voisC}) there necessarily exists an even index (take the smallest one)
$2u \in \llbracket 2,2p \rrbracket$ such that $i {\--} c_{2u}$. Then, as in 1b we have $Q_{\textsc{lcfm}}\left(ijc_1...c_{2q+1}\right)=i$. Then, in the second exploration of ${\mathscr{C}}$, in {\sc lcfm} all items of
even indexes are matched with the preceding item of odd index, until the second $c_{2q+1}$ remains unmatched, i.e. $Q_{\textsc{lcfm}}\left(ijc_1...c_{2q+1}c_1...c_{2q+1}\right)=ic_{2q+1}$.
Then the remaining $c_{2q+1}$ item is matched with the third $c_1$, and in the third visit of ${\mathscr{C}}$, all items of even indexes are matched with the following item of odd index, until
$c_{2u}$ is matched with $i$. To finish the third exploration, if $u<q$ then all items of odd index in $\llbracket 2u+1,2q-1 \rrbracket$ are matched with the following item of even index, until
the third $c_{2q+1}$ remains alone unmatched, i.e. $Q_{\textsc{lcfm}}\left(ijc_1...c_{2q+1}c_1...c_{2q+1}c_1...c_{2q+1}\right)=c_{2q+1}$. At this point, the fourth $c_1$ is matched with
the third $c_{2q+1}$, and then in the fourth exploration of ${\mathscr{C}}$ all items of even index are matched with the following item of odd index, until the last $c_{2q}$ is matched with the last
$c_{2q+1}$ item. We end up again with $Q_{\textsc{lcfm}}(ijz)=\emptyset.$
\end{itemize}
\item[Case 2:] $\mathcal O(i) \cup \mathcal O(j) =\emptyset$.\quad In that case $i$ and $j$ both have only neighbors of even indexes in ${\mathscr{C}}$, in particular from (\ref{eq:voisC}) ${\mathcal{H}}(i)$ and ${\mathcal{H}}(j)$ are both non-empty. Let $2p=\min \mathcal H(i)$ and $2p'=\min \mathcal H(j)$. Again from the definition of {\sc lcfm}, in the first exploration of ${\mathscr{C}}$, all items of even indexes are matched with the preceding item of odd index, until $c_{2q+1}$ remains unmatched, so $Q_{\textsc{lcfm}}\left(ijc_1...c_{2q+1}\right)=ijc_{2q+1}$. Then the first $c_{2q+1}$ item is matched with the second $c_1$, and if $2< \min(2p,2p')$, in the second exploration of ${\mathscr{C}}$ all items of even index in $\llbracket 2,\min(2p,2p')-2 \rrbracket$ are matched with the following item of odd index. We have again, two sub-cases:
\begin{itemize}
\item[Sub-case 2a:] $p'\le p$, so in {\sc lcfm} the $j$-item is matched with the second $c_{2p'}$.
In the second exploration of ${\mathscr{C}}$, after the $c_{2p'}$ item has been matched with the $j$ item, if $p'<q$ all items of odd indexes in
$\llbracket 2p'+1,2q-1 \rrbracket$ are matched on the fly with the immediately following item of even index, until only the second $c_{2q+1}$ item remains unmatched,
so $Q_{\textsc{lcfm}}\left(ijc_1...c_{2q+1}c_1...c_{2q+1}\right)=ic_{2q+1}$. Then the second $c_{2q+1}$ is matched with the third $c_1$. In the third exploration, if $p>1$, all items of even indexes
in $\llbracket 2,2p-2 \rrbracket$ are matched with the following item, until the $c_{2p}$ item is matched with $i$.
We then conclude as in 1c, and end up again with $Q_{\textsc{lcfm}}(ijz)=\emptyset.$
\item[Sub-case 2b:] $p < p'$, so the $i$-item is matched with the second $c_{2p}$. Then the $j$ item remains to be matched, and we conclude exactly as in 2a, by matching the $j$ item with the
third $c_{2p'}$ (instead of $i$ with the third $c_{2p}$). This concludes the proof.
\end{itemize}
\end{enumerate}
\end{itemize}
\end{proof}
\begin{remark}
\label{tem:emulatefcfm}
\rm
Observe that the conclusion of (ii) of Proposition \ref{pro:erasing2} clearly holds true for any policy $\phi$ that emulates {\sc lcfm} on the input $ijz$, for
$z=c_1...c_{2q+1}c_1...c_{2q+1}c_1...c_{2q+1}c_1...c_{2q+1}$ (keeping the notation of the above proof), and for any $i {\not\!\--} j \in {\mathcal{V}}$.
This is true in particular if $\phi$ is a priority policy such that for any index $j \ge 2$, $c_{j}$ prioritizes $c_{j-1}$ over any other node, and $c_1$ prioritizes $c_{2q+1}$ over any other node.
\end{remark}
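The {\sc lcfm} dynamics used throughout the above proof can be made concrete by a short simulation. The following sketch (in Python; the helper name \texttt{q\_lcfm} and the encoding of items as single characters are ours, not part of the text) computes the unmatched word $Q_{\textsc{lcfm}}(w)$: each arriving item is matched with the most recently arrived compatible unmatched item, if any, and is buffered otherwise.

```python
def q_lcfm(word, edges):
    """Return the unmatched items (in arrival order) left by LCFM.

    `word` is a string of single-character item classes; `edges` is a
    set of frozensets encoding the compatibility graph G.  Each arriving
    item is matched with the MOST RECENT compatible unmatched item, if
    any; otherwise it joins the buffer.
    """
    buffer = []  # unmatched items, oldest first
    for item in word:
        # scan the buffer from the most recent item backwards
        for k in range(len(buffer) - 1, -1, -1):
            if frozenset((buffer[k], item)) in edges:
                del buffer[k]  # match found: both items leave the system
                break
        else:
            buffer.append(item)  # no compatible item in store
    return "".join(buffer)
```

For instance, on the path $1 {\--} 2 {\--} 3$ the input $132$ leaves the buffer $1$ under {\sc lcfm} (the arriving $2$ picks the more recent $3$), whereas a first-come policy would leave $3$; this is exactly the kind of discrepancy tracked in the case analysis above.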
\subsection{Renovating events}
\label{subsec:renov}
Define the following family of events for any $\mathbb W_2$-valued r.v. $Y$,
\begin{align*}
{\mathscr{A}}_n(Y) &=\left\{U^{[Y]}_{n}=\emptyset\right\}= \Bigl\{Q_\phi(YV^0V^1V^0\circ\theta\,V^1\circ\theta\,...\,V^0\circ\theta^{n-1}V^1\circ\theta^{n-1})=\emptyset\Bigl\},\quad n\ge 0.
\end{align*}
Let us first observe that
\begin{proposition}
\label{thm:renov1}
Let $G=({\mathcal{V}},{\mathcal{E}})$ be a matching graph and $\phi$ be an admissible matching policy.
Suppose that assumption (H1') holds, and let $Y$ be a $\mathbb W_2$-valued random variable.
If the following condition holds:
\begin{equation}
\label{eq:renov0}
\lim_{n\to\infty} \bpr{\bigcap_{k=0}^{\infty} \bigcup_{l=0}^n {\mathscr{A}}_{l}(Y) \cap \theta^k{\mathscr{A}}_{l+k}(Y)}=1,
\end{equation}
then $\suite{U^{[Y]}_{n}}$ converges with strong backwards coupling to a stationary buffer content sequence $\suite{U\circ\theta^n}$.
\end{proposition}
\begin{proof}
Clearly, $\suite{{\mathscr{A}}_n(Y)}$ is a sequence of renovating events of length 1 for $\suite{U^{[Y]}_{n}}$ (see \cite{Foss92,Foss94}).
The result then follows from Theorem 2.5.3 of \cite{BacBre02}.
\end{proof}
The above result applies to \emph{any} initial word of even size, however it does not guarantee the uniqueness of a solution
to (\ref{eq:recurstatUbar}). As we now demonstrate, this stronger property holds at least under the additional assumption
\begin{enumerate}
\item[\textbf{(H2)}] There exists a strong erasing word for $(G,\phi)$.
\end{enumerate}
Let us define the following sets of random variables:
\begin{equation*}
\mathscr Y_2^r=\left\{\mbox{$\mathbb W_2$-valued r.v. $Y$:\, $|Y|\le 2r$ a.s.}\right\},\,r\in {\mathbb N}_+,
\end{equation*}
and let
\[\mathscr Y_2^{\infty}:= \bigcup_{r=1}^{+\infty} \mathscr Y^r_2.\]
\begin{lemma}
\label{lemma:renov}
Let $G=({\mathcal{V}},{\mathcal{E}})$, $\phi$ be a sub-additive policy, and suppose that (H2) holds.
Fix a positive integer $r$, and define the events
\begin{align}
{\mathscr{B}}^r(z^1,...,z^r) &=\left\{V^0V^1\,V^0\circ\theta \,V^1\circ\theta\,....\,V^0\circ\theta^{m-1}V^1\circ\theta^{m-1}=z^1z^2...z^r\right\};\label{eq:defB}\\
{\mathscr{C}}^r_n(z^1,...,z^r) &={\mathscr{A}}_n(\emptyset)\cap\theta^{-n}{\mathscr{B}}^r(z^1,...,z^r),\label{eq:defC}
\end{align}
where the $z^i$'s are (possibly identical) strong erasing words for $(G,\phi)$ and $m=\sum_{i=1}^r |z^i|/2$.
Then for any r.v. $Y \in \mathscr Y_2^r$ and $n \ge 1$, up to a negligible event we have that ${\mathscr{C}}^r_n(z^1,...,z^r) \subset {\mathscr{A}}_{n+m}(Y)$.
\end{lemma}
\begin{proof}
Fix $n\ge 1$, $r \ge 1$ and $Y\in\mathscr Y_2^r$. All the arguments in this proof hold for any fixed sequence of lists of preferences, so we drop again that parameter of all notations for short.
Throughout this proof, let us also fix a sample in ${\mathscr{C}}^r_n(z^1,...,z^r)$.
First, as $U^{[\emptyset]}_{n}=\emptyset$, the matching of the arrivals up to $n$ is complete, i.e. we have that $Q_\phi\left(V^0\,V^1\,V^0\circ\theta\, \,V^1\circ\theta\,...\,V^0\circ\theta^{n-1}\,V^1\circ\theta^{n-1}\right)=\emptyset$.
Thus from the sub-additivity of $\phi$ we have
\begin{align}
\left|U^{[Y]}_{n}\right| &= \left|Q_\phi(Y\,V^0\,V^1\,V^0\circ\theta\, \,V^1\circ\theta\,...\,V^0\circ\theta^{n-1}\,V^1\circ\theta^{n-1})\right|\nonumber\\
&\le \left|Q_\phi(Y)\right| + \left|Q_\phi(V^0\,V^1\,V^0\circ\theta\, \,V^1\circ\theta\,...\,V^0\circ\theta^{n-1}\,V^1\circ\theta^{n-1})\right|
=|Y| \le 2r.\label{eq:rage1}
\end{align}
Now, as each $z^l$ is a strong erasing word for $(G,\phi)$, from Lemma \ref{lemma:erasing} for any $l \in \llbracket 1,r \rrbracket$
we have that
\begin{equation*}
\left|U^{[Y]}_{n+\sum_{i=1}^l |z^i|/2}\right| =\left|Q_\phi\left(U^{[Y]}_{n+\sum_{i=1}^{l-1}|z^i|/2}z^l\right)\right| \le \left|Q_\phi\left(U^{[Y]}_{n+\sum_{i=1}^{l-1}|z^i|/2}\right)\right|-2
= \left|U^{[Y]}_{n+\sum_{i=1}^{l-1}|z^i|/2}\right|-2,
\end{equation*}
where we understand $\sum_{i=1}^0 \cdot$ as $0$. This together with (\ref{eq:rage1}) and the fact that $\left|U^{[Y]}_{n}\right|$ is even, entails that for some index
$n' \in \llbracket n+1,n+m \rrbracket$ (take the smallest one), we have $U^{[Y]}_{n'}=\emptyset$.
Let $k \in \llbracket 0,r \rrbracket$ be the largest integer such that $n' \ge n + \sum_{i=1}^k |z^i|/2$, that is, such that $k$ full strong erasing words $z^1,z^2,...z^k$ (or none if $k=0$) have entered the system until time $n'$.
Let also, if $k<r$, $j \in \llbracket 0,|z^{k+1}| \rrbracket$ be the (even) number of letters of $z^{k+1}$ that have entered the system up to time $n'-1$ included.
In other words the input of letters between times
$n$ and $n'-1$ reads
\[V^0\circ\,\theta^n\,V^1\circ\,\theta^n\,...\,V^0\circ\,\theta^{n'-1}\,V^1\circ\,\theta^{n'-1} = \left\{\begin{array}{ll}
z^1z^2...z^kz^{k+1}_1z^{k+1}_2...z^{k+1}_j, &\mbox{ if $k<r$;}\\
&\\
z^1z^2...z^r,&\mbox{ else}.
\end{array}\right.\]
If $k<r$, then, as any right sub-word of even size of a strong erasing word, $z^{k+1}_{j+1}z^{k+1}_{j+2}...z^{k+1}_{|z^{k+1}|}$ is completely matchable by $\phi$. Thus, as
the following strong erasing words (if $k< r-1$) are also completely matchable, we obtain that
\[U^{[Y]}_{n+m} = Q_\phi\left(z^{k+1}_{j+1}z^{k+1}_{j+2}...z^{k+1}_{|z^{k+1}|}z^{k+2}...z^{r}\right) = \emptyset,\]
which completes the proof.
\end{proof}
We thus have the following
\begin{proposition}
\label{prop:renov2}
Let $G=({\mathcal{V}},{\mathcal{E}})$, $\phi$ be a sub-additive policy, and suppose that (H1') holds together with (H2).
Let $r\in {\mathbb N}_+$. Suppose that there exists a family of (possibly identical) $r$ strong erasing words $z^1,...z^r$ such that
\begin{equation}
\label{eq:renov1}
\lim_{n\to\infty} \bpr{\bigcap_{k=0}^{\infty} \bigcup_{l=0}^n {\mathscr{A}}_l(\emptyset) \cap \theta^k{\mathscr{A}}_{l+k}(\emptyset) \cap \theta^{-l}{\mathscr{B}}^r(z^1,...,z^r)}=1,
\end{equation}
for ${\mathscr{B}}^r(z^1,...,z^r)$ defined by (\ref{eq:defB}). Then, there exists a solution $U^r$ to (\ref{eq:recurstatUbar}) in $\mathscr Y^{\infty}_2$,
to which all sequences $\suite{U^{[Y]}_{n}}$, for $Y\in \mathscr Y^{r}_2$, converge with strong backwards coupling.
\end{proposition}
\begin{proof}
Lemma \ref{lemma:renov} shows that the events $\suite{{\mathscr{C}}^r_n(z^1,...,z^r)}$ defined by (\ref{eq:defC}) form a sequence of renovating
events of length $m=\sum_{i=1}^r |z^i|/2$ for the recursion $\suite{U^{[Y]}_{n}}$, for any $Y\in\mathscr Y_2^r$.
Indeed, for any $n$, on ${\mathscr{C}}^r_n(z^1,...,z^r)$ the value of $U^{[Y]}_{n+m}$ equals the empty set and does not depend on the input up to $n$.
Observe that, by $\theta$-invariance, (\ref{eq:renov1}) is equivalent to the sufficient condition in Theorem 2.5.3 of \cite{BacBre02} applied to
$\suite{{\mathscr{C}}^r_n(z^1,...,z^r)}$. It thus follows again from that Theorem, that all such sequences $\suite{U^{[Y]}_{n}}$ converge with strong backwards coupling to a
solution $U^r$ to (\ref{eq:recurstatUbar}).
\end{proof}
Consequently,
\begin{theorem}
\label{thm:renov2}
If the conditions of Proposition \ref{prop:renov2} are satisfied for any $r\in {\mathbb N}_+$, then the solution $U$ to (\ref{eq:recurstatUbar}) is unique in $\mathscr Y^{\infty}_2$,
and all sequences $\suite{U^{[Y]}_{n}}$, $Y\in \mathscr Y^{\infty}_2$, converge with strong backwards coupling to $U$.
\end{theorem}
\begin{proof}
For any $r\ge 1$ we can apply Proposition \ref{prop:renov2}, and then the same argument to $r+1$, yielding that all sequences $\suite{U^{[Y]}_{n}}$ for $Y \in \mathscr Y_2^{r+1}$,
also converge with strong backwards coupling to a solution $U^{r+1}$. As this is true in particular for any $Y \in \mathscr Y_2^r$, and by uniqueness of the backwards coupling limit,
$U^r$ and $U^{r+1}$ coincide $\mathbb P$-almost surely. We conclude by an immediate induction on $r$ that all solutions $U^r$, $r\ge 1$, coincide almost surely, and let $U^*$ be this common limit.
Uniqueness of the solution of (\ref{eq:recurstatUbar}) can be shown using Remark 2.5.3 in \cite{BacBre02}:
any two solutions $U^*$ and $U^{**}$ in $\mathscr Y^{\infty}_2$, belong to some set $\mathscr Y_2^r$, $r\ge 1$.
Thus for some strong erasing words $z^1,...,z^r$ for $(G,\phi)$, $\suite{{\mathscr{C}}^r_n(z^1,...,z^r)}$ forms a sequence of renovating events for both sequences $\suite{U^*\circ\theta^n}$ and $\suite{U^{**}\circ\theta^n}$ which,
as they converge with strong backwards coupling to the same limit and are both stationary, necessarily coincide almost surely.
\end{proof}
The renovation conditions (\ref{eq:renov0}) and (\ref{eq:renov1}) have the following intuitive interpretation: in (\ref{eq:renov0}), with overwhelming probability,
any recursion started at $Y$ at some point in the past, couples at value $\emptyset$ with the recursion started at $Y$ at time $0$, before a future horizon that goes large.
In condition (\ref{eq:renov1}), for an initially empty system the first $m$ arrivals after this coupling time form a sequence of $r$ strong erasing words for $(G,\phi)$.
As will be shown in Section \ref{subsec:iid}, these conditions take a much simpler form whenever the input is i.i.d. On the other hand, in Example \ref{ex:weak6} we show a simple case
where Theorem \ref{thm:renov2} applies, for a separable graph $G$ and a non-independent input.
We conclude this section by observing that these renovation conditions cannot be satisfied unless the measure $\mu$ introduced in assumption (H1') is an element of $\textsc{Ncond}(G)$ (defined by (\ref{eq:Ncond})).
\begin{proposition}
\label{prop:NcondStatergo}
Under assumption (H1'), conditions (\ref{eq:renov0}) and (\ref{eq:renov1}) entail that $\mu\in\textsc{Ncond}(G).$
\end{proposition}
\begin{proof}
If for some independent set ${\mathcal{I}}$ of $G$,
\[\mu({\mathcal{I}})={\mu^0({\mathcal{I}})+\mu^1({\mathcal{I}}) \over 2} > {\mu^0({\mathcal{E}}({\mathcal{I}})) +\mu^1({\mathcal{E}}({\mathcal{I}})) \over 2},\]
Birkhoff's Theorem entails that the total number of arrivals of elements of ${\mathcal{I}}$ (from elements of $\suitez{V^0\circ\theta^n}$ and $\suitez{V^1\circ\theta^n}$)
almost surely exceeds the total number of arrivals of elements of the neighboring classes of ${\mathcal{I}}$ by a quantity that is of order $n$ in the long run.
All the same, if we replace the inequality above by an equality, the Markov chain is at most null recurrent, as in the proof of Theorem 2 in \cite{MaiMoy16}.
So $\suite{U^{[Y]}_n}$ cannot visit $\emptyset$ infinitely often with probability one. Hence (\ref{eq:renov0}) for $Y = \emptyset$ a.s., and thereby (\ref{eq:renov1}), cannot hold.
\end{proof}
\subsection{Independent Case}
\label{subsec:iid}
In this section, we reformulate the renovation condition (\ref{eq:renov0}) in the particular case where the input is i.i.d.,
and thereby the recursion $\suite{U_n}$ is a Markov chain, under a natural stability condition which we now specify.
Denote for any $\mathbb W_2$-valued r.v. $Y$ and any $j\in\mathbb N^*$, by $\tau_j(Y)$ the $j$-th visit time to $\mathbf{\emptyset}$ (or return time if $Y\equiv \emptyset$)
for the process $\suite{U_n^{[Y]}}$, that is
$$\tau_1(Y) := \inf \{n > 0, U_{n}^{[Y]} = \mathbf{\emptyset} \}, \quad \tau_j(Y) := \inf \{n > \tau_{j-1}(Y), U_{n}^{[Y]} = \mathbf{\emptyset} \}, \; j\geq 2.$$
We define the following stability condition depending on the initial condition $Y$,
\begin{itemize}
\item[\textbf{(H3)}] The stopping time $\tau_1(Y)$ is integrable.
\end{itemize}
Under assumption (IID), the Markov chain $\suite{U_n}$ is clearly irreducible on $\mathbb W_2$. So (H3) holds true whenever the chain is positive recurrent.
Therefore applying Theorem \ref{thm:main} for {\sc fcfm}, and Theorem 2 of \cite{MaiMoy16} we obtain the following list of sufficient conditions
for (H3):
\begin{proposition}
\label{prop:suffH3}
Condition (H3) holds true for any $\mathbb W_2$-valued initial condition $Y$,
whenever $G$ is non-bipartite, (IID) holds, $\mu\in\textsc{Ncond}(G)$, and in either one of the following cases:
\begin{enumerate}
\item $\phi=\textsc{fcfm}$;
\item $\phi=\textsc{ml}$;
\item $\phi$ is any admissible policy and $G$ is separable.
\end{enumerate}
\end{proposition}
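Under (IID), the return time $\tau_1$ appearing in (H3) can also be estimated by direct simulation. The sketch below (in Python; the helper names \texttt{q\_fcfm\_step} and \texttt{tau1} are ours, and we run {\sc fcfm} with i.i.d. uniform arrivals on a triangle, the smallest non-bipartite graph) reports the first return time of the pair-arrival recursion to the empty buffer.

```python
import random

def q_fcfm_step(buffer, arrivals, edges):
    """One step of the pair-arrival recursion under FCFM: append the two
    arriving items one after the other, matching each with the OLDEST
    compatible unmatched item, if any; otherwise the item is buffered."""
    buf = list(buffer)
    for item in arrivals:
        for k in range(len(buf)):  # scan oldest first
            if frozenset((buf[k], item)) in edges:
                del buf[k]
                break
        else:
            buf.append(item)
    return buf

def tau1(nodes, edges, rng, max_steps=10_000):
    """First return time to the empty buffer, starting empty
    (float('inf') if not observed within max_steps)."""
    buf = []
    for n in range(1, max_steps + 1):
        buf = q_fcfm_step(buf, rng.choices(nodes, k=2), edges)
        if not buf:
            return n
    return float("inf")
```

On the triangle with uniform $\mu$ the conditions of Proposition \ref{prop:suffH3} hold, so the empirical return times gathered this way are samples of an integrable random variable.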
We have the following result,
\begin{proposition}
\label{pro:bwiid}
If (IID) holds and $Y \in \mathscr Y_2^{\infty}$, then (H3) entails (\ref{eq:renov0}).
\end{proposition}
\begin{proof}
Fix throughout $\varepsilon>0$, and $r\in {\mathbb N}_+$ such that $Y \in\mathscr Y_2^r$.
First observe that, as a consequence of (H3) the random variable
\[\kappa = \sup\left\{k\in {\mathbb N}\,:\,\tau_1(Y)\circ\theta^{-k}>k\right\}\]
that is, the largest horizon in the past from which the first visit to $\emptyset$ takes place after time 0, is a.s. finite. In particular there exists a positive integer $K_{\varepsilon}$ such that
\begin{equation}
\label{eq:prK}
\bpr{\kappa > K_\varepsilon} < {\varepsilon \over 5}.
\end{equation}
Again in view of (H3), there exists an integer $T_\varepsilon>0$ such that
\begin{equation}
\label{eq:prT}
\bpr{ \tau_1(Y)> T_\varepsilon} < {\varepsilon \over 5},
\end{equation}
and let us denote \[H_\varepsilon:=2K_\varepsilon + 2r + T_\varepsilon.\]
We know from Proposition \ref{pro:erasing} that any word admits at least one minimal erasing word. Also, there are finitely many words in $\mathbb W_2$ of size less than $H_\varepsilon$,
and thus finitely many minimal erasing words of those words. So the following integer is well defined, and depends only on $H_\varepsilon$,
\begin{align}
\ell_\varepsilon &= {1\over 2}\max_{u \in \mathbb W_2: |u| \le H_\varepsilon} \min_{\substack{z\in {\mathcal{V}}^*:\\ z\mbox{ \scriptsize{minimal erasing word of }}u}} |z|.\label{eq:defell}
\end{align}
We now define the sequence $\suitei{\tilde\tau_i}$ (where we drop the dependence on $Y$ for notational convenience), as the following subsequence of $\suitei{\tau_i(Y)}$:
$$\tilde\tau_1 :=\tau_1(Y), \quad \tilde\tau_i := \inf \{n > \tilde\tau_{i-1} + \ell_\varepsilon,\, U_{n}^{[Y]} = \mathbf{\emptyset} \}, \; i\geq 2.$$
Also define the following family of events: for all $k\in{\mathbb N}$ and $i\in{\mathbb N}_+$,
\begin{equation*}
{\mathcal{D}}^{k}_i(Y) =\bigcup_{m=1}^{\ell_\varepsilon} \Bigl\{V^0\circ\theta^{\tilde\tau_i + k}\,V^1\circ\theta^{\tilde\tau_i +k}\,...\,V^0\circ\theta^{\tilde\tau_i+k+m-1}\,V^1\circ\theta^{\tilde\tau_i+k+m-1}
\mbox{ \small{is an erasing word of} }U^{[Y]}_{\tilde\tau_i+k}\Bigl\},
\end{equation*}
and for any $k,n\in{\mathbb N}$,
\begin{equation}
{\mathcal{D}}^{k,n}(Y) =\bigcup_{\substack{i\in{\mathbb N}_+:\\\tilde\tau_i + \ell_\varepsilon \le 2n}} \,\,{\mathcal{D}}^{k}_i(Y),\quad k\in{\mathbb N},\,n \in {\mathbb N}_+. \label{eq:defDkn}
\end{equation}
For any $k \in {\mathbb N}$ and $i\in{\mathbb N}_+$, on $\theta^k{\mathcal{D}}^{k}_i(Y)$ we first have that for some (unique, and even) integer $m \le \ell_\varepsilon$,
$U^{[Y]}_{\tilde\tau_i+k+m}\circ\theta^{-k}= \emptyset$, and second, that $U^{[Y]}_{\tilde\tau_i+m}=\emptyset$, since
\begin{multline*}
\left|U^{[Y]}_{\tilde\tau_i+m}\right|\\
\begin{aligned}
&=\left|Q_\phi\left(Y\,V^0\,V^1\,...\,V^0\circ\theta^{\tilde\tau_i-1}\,V^1\circ\theta^{\tilde\tau_i-1}V^0\circ\theta^{\tilde\tau_i}\,V^1\circ\theta^{\tilde\tau_i}\,...\,V^0\circ\theta^{\tilde\tau_i+m-1}\,V^1\circ\theta^{\tilde\tau_i+m-1}\right)\right|\\
&\le \left|Q_\phi\left(Y\,V^0\,V^1\,...\,V^0\circ\theta^{\tilde\tau_i-1}\,V^1\circ\theta^{\tilde\tau_i-1}\right)\right|
+\left|Q_\phi\left(V^0\circ\theta^{\tilde\tau_i}\,V^1\circ\theta^{\tilde\tau_i}\,...\,V^0\circ\theta^{\tilde\tau_i+m-1}\,V^1\circ\theta^{\tilde\tau_i+m-1}\right)\right|
=0,
\end{aligned}
\end{multline*}
where the two terms in the third line above are zero from the very definitions of $\tilde\tau_i$ and an erasing word.
Consequently, we have that
\begin{equation}
\label{eq:finale1}
\theta^k{\mathcal{D}}^{k,n}(Y) \subseteq \bigcup_{l=0}^n {\mathscr{A}}_{l}(Y) \cap \theta^k{\mathscr{A}}_{l+k}(Y),\quad k,n \in {\mathbb N}.
\end{equation}
Second, fix $n\in{\mathbb N}$ and a sample $\omega \in \{\kappa \le K_\varepsilon\}\cap\bigcap_{k'=0}^{K_\varepsilon}\theta^{k'}{\mathcal{D}}^{k',n}(\emptyset)$ and an integer
$k \ge K_\varepsilon+1$. By the definition of $\kappa$, $U_0\left(\theta^{-k}\omega\right)=Y(\theta^{-k}\omega)$ entails that $U_{k-k'}\left(\theta^{-k}\omega\right)=\emptyset$ for some $k'\le K_\varepsilon$;
in other words $U^{[\emptyset]}_n \left(\theta^{-k'}\omega\right)$ equals $U^{[Y]}_{n+k-k'}\left(\theta^{-k}\omega\right)$ for any $n\ge 0$. But as $\theta^{-k'}\omega \in {\mathcal{D}}^{k',n}(\emptyset)$ by assumption,
we obtain that $\theta^{-k}\omega \in {\mathcal{D}}^{k,n}(Y)$. Consequently we have that
\begin{equation*}
\{\kappa \le K_\varepsilon\}\cap\bigcap_{k'=0}^{K_\varepsilon}\theta^{k'}{\mathcal{D}}^{k',n}(\emptyset) \subseteq \{\kappa \le K_\varepsilon\}\cap\bigcap_{k=K_\varepsilon+1}^{\infty}\theta^k{\mathcal{D}}^{k,n}(Y)
\end{equation*}
and thereby
\[\{\kappa \le K_\varepsilon\}\cap\bigcap_{k=0}^{K_\varepsilon}\theta^k\left({\mathcal{D}}^{k,n}(Y) \cap {\mathcal{D}}^{k,n}(\emptyset)\right) \subseteq \{\kappa \le K_\varepsilon\}\cap\bigcap_{k=0}^{\infty}\theta^k{\mathcal{D}}^{k,n}(Y).\]
This, together with (\ref{eq:finale1}), yields
\begin{equation}
\label{eq:finale2}
\{\kappa \le K_\varepsilon\}\cap\bigcap_{k=0}^{K_\varepsilon}\theta^k\left({\mathcal{D}}^{k,n}(Y) \cap {\mathcal{D}}^{k,n}(\emptyset)\right) \subseteq \{\kappa \le K_\varepsilon\}\cap\bigcap_{k=0}^{\infty} \bigcup_{l=0}^n {\mathscr{A}}_{l}(Y) \cap \theta^k{\mathscr{A}}_{l+k}(Y),\quad\quad n \in {\mathbb N}.
\end{equation}
Now recall (\ref{eq:defell}). In words, $\ell_\varepsilon$ is (half of) the minimal length of word that can accommodate at least one erasing word of any admissible word of even size
bounded by $H_\varepsilon$. Therefore, in view of the iid assumptions the following is a well defined element of $]0,1[$:
\begin{equation}
\label{eq:defbeta}
\beta_\varepsilon = \min_{u \in \mathbb W_2\,:\,|u| \le H_\varepsilon} \bpr{\bigcup_{m=1}^{\ell_\varepsilon}\Bigl\{\mbox{$V^0\,V^1\,V^0\circ\theta\,V^1\circ\theta\,...\,V^0\circ\theta^{m-1}\,V^1\circ\theta^{m-1}$ \small{ is a minimal erasing word of} $u$}\Bigl\}}.
\end{equation}
Let
\[M_\varepsilon=\left\lceil {\mbox{Log}\varepsilon - \mbox{Log}5 - \mbox{Log} (K_\varepsilon+1) \over \mbox{Log}(1-\beta_\varepsilon)}\right\rceil,\]
that is, the least integer that is such that
\begin{equation}
\label{eq:prM}
\left(1-\beta_\varepsilon\right)^{M_\varepsilon} < {\varepsilon \over 5(K_\varepsilon+1)}.
\end{equation}
Again from (H3) and (IID), there exists a positive integer $N_\varepsilon$ such that
\begin{equation}
\label{eq:prN}
\bpr{\tilde \tau_{M_\varepsilon} + \ell_\varepsilon > N_\varepsilon}<{\varepsilon \over 5}.
\end{equation}
All in all, we obtain that for all $n > N_\varepsilon$,
\begin{multline}
\bpr{\overline{\bigcap_{k=0}^{\infty} \bigcup_{l=0}^n {\mathscr{A}}_{l}(Y) \cap \theta^k{\mathscr{A}}_{l+k}(Y)}}\\
\shoveleft{\le \bpr{\overline{\bigcap_{k=0}^{\infty} \bigcup_{l=0}^n {\mathscr{A}}_{l}(Y) \cap \theta^k{\mathscr{A}}_{l+k}(Y)}\cap \{\tilde \tau_{M_\varepsilon} + \ell_\varepsilon \le N_\varepsilon\}
\cap \{\kappa \le K_\varepsilon\} \cap \{\tau_1(Y) \le T_\varepsilon\}}}\\ \shoveright{+\bpr{\tilde \tau_{M_\varepsilon} + \ell_\varepsilon > N_\varepsilon} + \bpr{\kappa > K_\varepsilon}+ \bpr{\tau_1(Y)> T_\varepsilon}}\\
\shoveleft{\le \bpr{\overline{\bigcap_{k=0}^{K_\varepsilon}\theta^k\left({\mathcal{D}}^{k,n}(Y) \cap {\mathcal{D}}^{k,n}(\emptyset)\right)} \,\,\cap \{\tilde \tau_{M_\varepsilon} + \ell_\varepsilon \le N_\varepsilon\} \cap \{\tau_1(Y) \le T_\varepsilon\}}
+ {3\varepsilon \over 5}}\\
\le \sum_{k=0}^{K_\varepsilon}\bpr{\left(\bigcap_{i=1}^{M_\varepsilon}\,\overline{\theta^k{\mathcal{D}}^{k}_i(Y)}\right)\cap \{\tau_1(Y) \le T_\varepsilon\}}
+\sum_{k=0}^{K_\varepsilon}\bpr{\left(\bigcap_{i=1}^{M_\varepsilon}\,\overline{\theta^k{\mathcal{D}}^{k}_i(\emptyset)}\right)\cap \{\tau_1(Y) \le T_\varepsilon\}} + {3\varepsilon \over 5}, \label{eq:finale3}
\end{multline}
where we use (\ref{eq:prK}), (\ref{eq:prT}), (\ref{eq:finale2}) and (\ref{eq:prN}) in the second inequality, and recall (\ref{eq:defDkn}) in the third.
Now let $u_\varepsilon$ be an element of $\mathbb W_2$ such that $\left|u_\varepsilon\right| \le H_\varepsilon$ and that achieves the minimum in (\ref{eq:defbeta}), that is
\[\beta_\varepsilon = \bpr{\bigcup_{m=1}^{\ell_\varepsilon}\Bigl\{\mbox{$V^0\,V^1\,V^0\circ\theta\,V^1\circ\theta\,...\,V^0\circ\theta^{m-1}\,V^1\circ\theta^{m-1}$ \small{is a minimal erasing word of} $u_\varepsilon$}\Bigl\}},\]
and define the events
\begin{align*}
\check{{\mathcal{D}}}_i &= \bigcup_{m=1}^{\ell_\varepsilon} \Bigl\{\mbox{$V^0\circ\theta^{\tilde\tau_i}\,V^1\circ\theta^{\tilde\tau_i}\,...\,V^0\circ\theta^{\tilde\tau_i+m-1}\,V^1\circ\theta^{\tilde\tau_i+m-1}$ \small{is a minimal erasing word of }$u_\varepsilon$}\Bigl\},\quad i\in {\mathbb N}.
\end{align*}
From assumption (IID), the events $\check{{\mathcal{D}}}_i,i\in{\mathbb N}$, are i.i.d., each of probability $\beta_\varepsilon$.
On the other hand, on the event $\{\tau_1(Y) \le T_\varepsilon\}$, for any $0 \le k \le K_\varepsilon$,
\[\left|U^{[Y]}_{\tau_1(Y)+k}\circ\theta^{-k}\right| \le |Y| + 2k + \tau_1(Y) \le 2r + 2K_\varepsilon +T_\varepsilon = H_\varepsilon.\]
Thus, as $Q_\phi\left(V^0\circ\theta^{\tilde\tau_i}\,V^1\circ\theta^{\tilde\tau_i}\,...\,V^0\circ\theta^{\tilde\tau_{i+1}-1}\,V^1\circ\theta^{\tilde\tau_{i+1}-1}\right)=\emptyset$
for all $i$, the sub-additivity of $\phi$ and an immediate induction entail that $\left|U^{[Y]}_{\tilde \tau_i + k}\circ\theta^{-k}\right| \le H_\varepsilon$ for all $i \ge 1$.
Therefore, for any $k \le K_\varepsilon$ and any $i\in {\mathbb N}_+$, by the very definition of $\beta_\varepsilon$ we have that
$\bpr{\theta^k{\mathcal{D}}^{k}_i(Y)} \ge \bpr{\check{{\mathcal{D}}}_i}=\beta_\varepsilon$, and in turn by independence of the $\check{{\mathcal{D}}}_i$'s, that for all $k \le K_\varepsilon$,
\begin{equation}
\label{eq:finale4}
\bpr{\left(\bigcap_{i=1}^{M_\varepsilon}\,\overline{\theta^k{\mathcal{D}}^{k}_i(Y)}\right)\cap \{\tau_1(Y) \le T_\varepsilon\}} \le \prod_{i=1}^{M_\varepsilon}\,\bpr{\overline{\check{{\mathcal{D}}}_i}} = \left(1-\beta_\varepsilon\right)^{M_\varepsilon}.
\end{equation}
All the same, on the event $\{\tau_1(Y) \le T_\varepsilon\}$, for any $0 \le k \le K_\varepsilon$ we have that
\[\left|U^{[\emptyset]}_{\tau_1(Y)+k}\circ\theta^{-k}\right| \le 2k + \tau_1(Y) \le H_\varepsilon,\]
thus we can conclude similarly that
\[\bpr{\left(\bigcap_{i=1}^{M_\varepsilon}\,\overline{\theta^k{\mathcal{D}}^{k}_i(\emptyset)}\right)\cap \{\tau_1(Y) \le T_\varepsilon\}} \le \left(1-\beta_\varepsilon\right)^{M_\varepsilon}.\]
Injecting this together with (\ref{eq:finale4}) and (\ref{eq:prM}) in (\ref{eq:finale3}) entails that, for any $n > N_\varepsilon$,
\[\bpr{\overline{\bigcap_{k=0}^{\infty} \bigcup_{l=0}^n {\mathscr{A}}_{l}(Y) \cap \theta^k{\mathscr{A}}_{l+k}(Y)}} < \varepsilon,\]
which concludes the proof.
\end{proof}
We now prove the uniqueness of the solution using the following forward coupling result,
\begin{proposition}
\label{pro:fwiid}
Suppose that (IID) and (H3) hold. Let $Y$ and $Y^*$ be two elements of $\mathscr Y_2^\infty$.
Then there is forward coupling between $\suite{U^{[Y]}_n}$ and $\suite{U^{[Y^*]}_n}$.
\end{proposition}
\begin{proof}
We aim at proving that the stopping time
\[\rho(Y,Y^*):= \inf\left\{n \ge 0: U^{[Y]}_l = U^{[Y^*]}_l\mbox{ for all }l \ge n\right\}\]
is a.s. finite, that is
\begin{equation}
\label{eq:fwcouple1}
\lim_{n\to\infty}\bpr{\rho(Y,Y^*) \le n}=1.
\end{equation}
Observe that, as the two recursions $\suite{U^{[Y]}_n}$ and $\suite{U^{[Y^*]}_n}$ are driven by the same input, they coalesce as soon as they meet for the first time. Hence,
(\ref{eq:fwcouple1}) holds true in particular if
\begin{equation}
\label{eq:fwcouple2}
\lim_{n\to\infty}\bpr{\bigcup_{l=0}^n \,\Bigl\{U^{[Y]}_l=U^{[Y^*]}_l=\emptyset\Bigl\}}=\lim_{n\to\infty}\bpr{\bigcup_{l=0}^n \,\mathscr A_l(Y)\cap \mathscr A_l(Y^*)}=1.
\end{equation}
From Proposition \ref{pro:bwiid}, the latter holds true whenever we replace $Y^*$ by $U_0^{[Z]}\circ\theta^{-k}$ for any finite $\mathbb W_2$-valued r.v. $Z$ and any $k \in {\mathbb N}$.
The proof of (\ref{eq:fwcouple2}) for any finite $Y^*$ is analogous.
\end{proof}
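The coalescence property used in this proof (two copies of the recursion fed with the same arrivals merge forever once they first meet) is easy to observe numerically. A minimal sketch (in Python; {\sc fcfm} on a triangle with uniform i.i.d. arrivals, helper names ours):

```python
import random

def q_fcfm_step(buffer, arrivals, edges):
    """One FCFM step: each arriving item is matched with the oldest
    compatible unmatched item, if any; otherwise it is buffered."""
    buf = list(buffer)
    for item in arrivals:
        for k in range(len(buf)):
            if frozenset((buf[k], item)) in edges:
                del buf[k]
                break
        else:
            buf.append(item)
    return buf

def coupling_time(y, y_star, nodes, edges, rng, max_steps=10_000):
    """First time two buffers started at y and y_star, and driven by the
    SAME arrival stream, coincide (they then stay equal forever)."""
    a, b = list(y), list(y_star)
    for n in range(1, max_steps + 1):
        pair = rng.choices(nodes, k=2)  # common input for both copies
        a = q_fcfm_step(a, pair, edges)
        b = q_fcfm_step(b, pair, edges)
        if a == b:
            return n
    return float("inf")
```

Since the recursion is a deterministic function of the buffer and the input, equality at the returned time propagates to all later times, which is exactly the coalescence argument above.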
Consequently,
\begin{theorem}
\label{thm:mainiid}
If the policy $\phi$ is sub-additive and assumptions (IID) and (H3) hold, there exists a unique solution $U$ to (\ref{eq:recurstatUbar}) in $\mathscr Y_2^{\infty}$, to which all sequences
$\suite{U^{[Y]}_n}$, for $Y \in \mathscr Y_2^{\infty}$, converge with strong backwards coupling.
\end{theorem}
\begin{proof}
Fix a r.v. $Y\in\mathscr Y_2^{\infty}$, and let $r$ be such that $Y \in \mathscr Y_2^r$. From Proposition \ref{pro:bwiid}, (\ref{eq:renov0}) holds true.
Thus, as assumption (IID) implies (H1'), we can apply Proposition \ref{thm:renov1}: $\suite{U^{[Y]}_n}$ converges with strong backwards coupling, and thereby also in the forward sense, to a stationary sequence
$\suite{U\circ\theta^n}$, where $U\in\mathscr Y_2^{\infty}$. Now, Proposition \ref{pro:fwiid} entails in particular that any couple of such stationary sequences $\suite{U\circ\theta^n}$ and $\suite{U^*\circ\theta^n}$ couple, and therefore coincide almost surely. Thus there exists a unique solution $U$ to (\ref{eq:recurstatUbar}) in $\mathscr Y_2^{\infty}$.
\end{proof}
\subsection{Stationary perfect $\phi$-matchings}
\label{subsec:matching}
Let us summarize the results of Section \ref{sec:coupling} thus far:
a unique (bounded, even) stationary buffer content exists whenever $G$ is non-bipartite, $\phi$ is sub-additive
(which is the case for {\sc fcfm}, {\sc lcfm}, {\sc ml} and {\sc u}), and in either one of the
following cases:
\begin{itemize}
\item (H1') holds for $\mu\in \textsc{Ncond}(G)$, and (H2) holds (which is true in particular if $G$ is separable or $\phi=\textsc{lcfm}$ - see Proposition \ref{pro:erasing2}) together with (\ref{eq:renov1}) for any $r\in{\mathbb N}_+$
- see Theorem \ref{thm:renov2};
\item (IID) holds for $\mu \in \textsc{Ncond}(G)$, and (H3) holds true (which, from Proposition \ref{prop:suffH3}, is the case if $\phi$ is {\sc fcfm} or {\sc ml}, or if $G$ is separable) - see Theorem \ref{thm:mainiid}.
\end{itemize}
With these results in hands, we now address the problem of constructing a stationary bi-infinite perfect matching on the original time scale,
\begin{proposition}
\label{prop:perfectmatch}
Suppose that $G$ is non-bipartite, $\phi$ is sub-additive, and either one of the following is true:
\begin{itemize}
\item (H1), (H1'), (H1''), (H2) and (\ref{eq:renov1}) hold for any $r \in {\mathbb N}_+$;
\item (IID) and (H3) hold.
\end{itemize}
Then there exist exactly two bi-infinite perfect matchings under $\phi$.
\end{proposition}
\begin{proof}
In both cases, (H1) and (H1'') hold true, so we can construct two stationary ergodic quadruples
$\mathscr Q^1:=\left(\Omega^0,\mathscr F^0,\mathbb P^1,\theta\right)$ and $\bar{\mathscr Q}:=\left(\bar\Omega,\bar{\mathscr F},\bar{\mathbb P},\bar\theta\right)$ analogously to $\mathscr Q^0$, as follows:
\begin{itemize}
\item $\mathbb P^1$ is the image measure on $\Omega^0$ of the sequence $\suitez{\left(V_{2n-1},\Sigma_{2n-1},V_{2n},\Sigma_{2n}\right)}$;
\item $\bar\Omega=({\mathcal{V}}\times \mathcal S)^{\mathbb Z}$, $\bar{\mathscr F}$ is the Borel sigma-algebra on $\bar\Omega$, $\bar{\mathbb P}$ is the image measure of
$\suitez{\left(V_n,\Sigma_n\right)}$ on $\bar{\Omega}$ and the shift $\bar\theta$ is defined by $\bar\theta\left(\suitez{\bar\om_n}\right)=\suitez{\bar\om_{n-1}}$ for all samples $\suitez{\bar\om_n}$.
\end{itemize}
In words, $\bar{\mathscr Q}$ is the ergodic quadruple corresponding to the canonical space of the original input, and $\mathscr Q^0$ (respectively, $\mathscr Q^1$) corresponds to the canonical space of the
input of pairs started at even (resp., odd) times in the original time scale.
Our aim is to construct a $\phi$-matching on the original quadruple $\bar{\mathscr Q}$. First observe that in both cases, (H1') is satisfied,
and there exists on $\mathscr Q^0$ (from Theorem \ref{thm:renov1} in the first case, Theorem \ref{thm:mainiid} in the second),
a unique even $\theta$-stationary buffer content $U^0 \in \mathscr Y_2^\infty$, to which all sequences $\suite{U^{[Y]}_n}$ for $Y \in \mathscr Y_2^\infty$ converge with strong backwards coupling,
and such that $\mathbb P^0\left[U^0=\emptyset\right]>0$.
We can apply the exact same arguments on $\mathscr Q^1$, leading to the existence of a unique $\theta$-stationary even buffer content $U^1$ on that space, that is such that
$\mathbb P^1\left[U^1=\emptyset\right]>0$.
Now observe that we can identify $\Omega^0$ to $\bar\Omega$ {\em via} the one-to-one relation
\[\left\{\begin{array}{ll}
\Omega^0 & \longleftrightarrow \,\,\bar\Omega\\
\suitez{(v^0_n,\sigma^0_n,v^1_n,\sigma^1_n)} & \longleftrightarrow\,\, \suitez{(v_n,\sigma_n)} \mbox{ such that }\\
& \quad\quad\quad (v_{2n},\sigma_{2n})= (v^0_{n},\sigma^0_{n})\mbox{ and }(v_{2n+1},\sigma_{2n+1})= (v^1_{n},\sigma^1_{n}),\,n\in\mathbb Z.
\end{array}\right.\]
Up to this bijective transformation, we can also identify $\theta$ to $\bar\theta \circ\bar\theta$, and construct two different buffer contents $\suitez{W^0_n}$ and $\suitez{W^1_n}$ on $\bar\Omega$, as follows:
\begin{equation}
\label{eq:defWfinal0}
\left\{\begin{array}{ll}
W^0_{2n} &=U^0\circ\bar\theta^{2n}\\
W^0_{2n+1} &=\left(U^0\circ\bar\theta^{2n}\right) \odot_\phi \left(V^0\circ\bar\theta^{2n},\Sigma^0\circ\bar\theta^{2n}\right)
\end{array}
\right.\quad\quad n\in \mathbb Z,\,\bar{\mathbb P}-\mbox{ a.s.;}
\end{equation}
\begin{equation}
\label{eq:defWfinal1}
\left\{\begin{array}{ll}
W^1_{2n} &=\left(U^1\circ\bar\theta^{2n-1}\right) \odot_\phi \left(V^0\circ\bar\theta^{2n-1},\Sigma^0\circ\bar\theta^{2n-1}\right)\\
W^1_{2n+1} &=U^1\circ\bar\theta^{2n+1},
\end{array}
\right.\quad\quad n\in \mathbb Z,\,\bar{\mathbb P}-\mbox{ a.s.,}
\end{equation}
which correspond respectively to a buffer content sequence that is stationary on $\mathbb W_2$ at even times, and to a buffer
content sequence that is stationary on $\mathbb W_2$ at odd times. By construction, both sequences $\suitez{W^0_n}$ and $\suitez{W^1_n}$ have infinitely many construction points (the first one at even times, the second at odd times), therefore we can construct from each one a unique perfect $\phi$-matching on $\bar{\Omega}$, the first one depleting at even times and the second one at odd times. The proof is complete.
\end{proof}
An example of the above construction for a separable graph of order 3 is given in Example \ref{ex:weak6}.
\begin{ex}
\rm
\label{ex:weak6}
Consider the following separable compatibility graph on ${\mathcal{V}}=\{1,2,\,...\,,6\}$,
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}
\draw[-] (1,1) -- (2,-0.2);
\draw[-] (1,1) -- (3,0);
\draw[-] (1,1) -- (2,1.2);
\draw[-] (1,1) -- (3,1);
\draw[-] (2,1.2) -- (1,0);
\draw[-] (2,1.2) -- (3,0);
\draw[-] (2,1.2) -- (3,1);
\draw[-] (3,1) -- (1,0);
\draw[-] (3,1) -- (2,-0.2);
\draw[-] (1,0) -- (2,-0.2);
\draw[-] (1,0) -- (3,0);
\draw[-] (2,-0.2) -- (3,0);
\fill (1,1) circle (2pt) node[above] {\small{1}} ;
\fill (2,1.2) circle (2pt) node[above right] {\small{2}} ;
\fill (3,1) circle (2pt) node[above] {\small{3}} ;
\fill (1,0) circle (2pt) node[below] {\small{4}} ;
\fill (2,-0.2) circle (2pt) node[below] {\small{5}} ;
\fill (3,0) circle (2pt) node[below] {\small{6}} ;
\end{tikzpicture}
\caption[smallcaption]{A separable graph of order $3$.}
\label{Fig:separable2}
\end{center}
\end{figure}
Set $\bar\Omega:=\left\{\bar\om_1,\,...\,,\bar\om_6\right\}$, where
\begin{equation}
\label{eq:inputex}
\left\{\begin{array}{ll}
\bar\om_1&=...142356\mathbf{1}42356...,\\
\bar\om_2&=...423561\mathbf{4}23561...,\\
\bar\om_3&=...235614\mathbf{2}35614...,\\
\bar\om_4&=...356142\mathbf{3}56142...,\\
\bar\om_5&=...561423\mathbf{5}61423...,\\
\bar\om_6&=...614235\mathbf{6}14235...,
\end{array}\right. \end{equation}
in which the $0$-coordinate is marked in bold. Equipped with the power-set $\bar{\mathscr F}$ of $\bar{\Omega}$,
the shift $\bar\theta$ defined by $\bar\theta\bar\om_i = \bar\om_{i+1}$ for $i \le 5$ and $\bar\theta\bar\om_6=\bar\om_1$
and $\bar{\mathbb P}$ the uniform probability on $\bar{\Omega}$ (which corresponds to $\mu$ being the uniform probability on ${\mathcal{V}}$), it is immediate that $\bar{\mathscr Q}=\left(\bar{\Omega},\bar{\mathscr F},\bar{\mathbb P},\bar\theta\right)$ is a stationary ergodic quadruple. Following the construction in the proof of Proposition \ref{prop:perfectmatch}, the
canonical space of the ``paired'' input at even times is given by $\Omega^0=\{\om^0_1,\om^0_2,\om^0_3\}$, where
(emphasizing again the $0$-coordinate in bold)
\[\left\{\begin{array}{ll}
\om^0_1&=...14\,\,23\,\,56\,\,\mathbf{14}\,\,23\,\,56\,\,...,\\
\om^0_2&=...23\,\,56\,\,14\,\,\mathbf{23}\,\,56\,\,14\,\,...,\\
\om^0_3&=...56\,\,14\,\,23\,\,\mathbf{56}\,\,14\,\,23\,\,...
\end{array}\right.\]
meaning that at time 0, a 1-item and then a 4-item enter the system for the sample $\om^0_1$,
a 2-item and then a 3-item for $\om^0_2$, and a 5-item and then a 6-item for $\om^0_3$. Likewise, the canonical space of the paired input at odd times is $\Omega^1=\{\om^1_1,\om^1_2,\om^1_3\}$,
where
\[\left\{\begin{array}{ll}
\om^1_1&=...42\,\,35\,\,61\,\,\mathbf{42}\,\,35\,\,61\,\,...,\\
\om^1_2&=...35\,\,61\,\,42\,\,\mathbf{35}\,\,61\,\,42\,\,...,\\
\om^1_3&=...61\,\,42\,\,35\,\,\mathbf{61}\,\,42\,\,35\,\,...
\end{array}\right.\]
When equipped with the uniform probability, both quadruples $\mathscr Q^0$ and $\mathscr Q^1$ thereby obtained are stationary and ergodic.
Set $\phi=\textsc{fcfm}$. It is immediate that both words $142356$ and $423561$ are strong erasing words for $(G,\phi)$ (assertion (i) of Proposition \ref{pro:erasing2}) and that
(\ref{eq:renov1}) holds.
Thus by Theorem \ref{thm:renov1} there exists a unique stationary buffer content $U^0$ on $\mathscr Q^0$ and a unique stationary buffer content $U^1$ on $\mathscr Q^1$,
which are respectively given by
\begin{align*}
U^0(\om^0_1)&=\emptyset,\quad U^0(\om^0_2)=14,\quad U^0(\om^0_3)=\emptyset,\\
U^1(\om^1_1)&=\emptyset,\quad U^1(\om^1_2)=\emptyset,\quad U^1(\om^1_3)=\emptyset.
\end{align*}
We then construct two buffer content sequences $\suitez{W^0_n}$ and $\suitez{W^1_n}$ on $\bar{\mathscr Q}$ from (\ref{eq:defWfinal0}) and (\ref{eq:defWfinal1}), whose values at, e.g., the sample point $\bar\om_1$ are
respectively given by
\[\left\{\begin{array}{ll}
W^0_{6n}(\bar\om_1)&=\emptyset,\quad W^0_{6n+1}(\bar\om_1)=1,\quad W^0_{6n+2}(\bar\om_1)=14,\\
W^0_{6n+3}(\bar\om_1) &=4,\quad W^0_{6n+4}(\bar\om_1)=\emptyset,\quad W^0_{6n+5}(\bar\om_1)=5\,;\quad n\in {\mathbb Z},
\end{array}\right.\]
\[\left\{\begin{array}{ll}
W^1_{6n}(\bar\om_1)&=6,\quad W^1_{6n+1}(\bar\om_1)=\emptyset,\quad W^1_{6n+2}(\bar\om_1)=4,\\
W^1_{6n+3}(\bar\om_1)&=\emptyset,\quad W^1_{6n+4}(\bar\om_1)=3,\quad W^1_{6n+5}(\bar\om_1)=\emptyset\,;\,\quad n\in {\mathbb Z}.
\end{array}\right.\]
Both $\suitez{W^0_n}$ and $\suitez{W^1_n}$ uniquely determine a bi-infinite perfect matching on $\bar{\mathscr Q}$. These two matchings are represented in Figure \ref{Fig:perfectmatch}.
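As a quick sanity check, the buffer dynamics above can be simulated directly. The following Python sketch (the helper names are ours) encodes the graph of Figure \ref{Fig:separable2} by its three non-edges $1\!-\!4$, $2\!-\!5$, $3\!-\!6$, runs the \textsc{fcfm} policy on one period of the input $\bar\om_1$ starting from an empty buffer, and recovers the values of $W^0_n(\bar\om_1)$ listed above.

```python
# FCFM buffer dynamics on the separable graph of Figure Fig:separable2:
# the complete graph on {1,...,6} minus the perfect matching {1-4, 2-5, 3-6}.
NON_EDGES = {frozenset(p) for p in [(1, 4), (2, 5), (3, 6)]}

def compatible(i, j):
    return i != j and frozenset((i, j)) not in NON_EDGES

def fcfm_step(buffer, item):
    """One FCFM transition: the arriving item is matched with the oldest
    compatible item in the buffer, if any; otherwise it joins the buffer."""
    for k, v in enumerate(buffer):
        if compatible(v, item):
            return buffer[:k] + buffer[k + 1:]
    return buffer + [item]

arrivals = [1, 4, 2, 3, 5, 6]   # one period of the input \bar\omega_1
buffer = []                     # W^0_0(\bar\omega_1) is empty
trajectory = [list(buffer)]
for v in arrivals:
    buffer = fcfm_step(buffer, v)
    trajectory.append(list(buffer))

print(trajectory)
# [[], [1], [1, 4], [4], [], [5], []]  -- the values W^0_0,...,W^0_6 above
```

The trajectory reproduces $\emptyset,1,14,4,\emptyset,5,\emptyset$, in agreement with the display above.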
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}
\draw[-] (0,2) -- (11,2);
\draw[-, thin] (0,1) -- (11,1);
\draw[-] (0,0) -- (11,0);
\draw[->, thin] (0,0) .. controls +(up:0.5cm) .. (0.5,0);
\fill (0,2) circle (2pt) node[below] {\small{4}};
\fill (0.5,2) circle (2pt) node[below] {\small{2}};
\draw[->, thin] (-0.5,2) .. controls +(up:0.5cm) .. (0.5,2);
\fill (0,0) circle (2pt) node[below] {\small{4}};
\fill (0.5,0) circle (2pt) node[below] {\small{2}};
\fill (1,2) circle (2pt) node[below] {\small{3}};
\draw[->, thin] (0,2) .. controls +(up:0.5cm) .. (1,2);
\fill (1.5,2) circle (2pt) node[below] {\small{5}};
\draw[->, thin] (1.5,2) .. controls +(up:0.5cm) .. (2,2);
\fill (1,0) circle (2pt) node[below] {\small{3}};
\draw[->, thin] (1,0) .. controls +(up:0.5cm) .. (1.5,0);
\fill (1.5,0) circle (2pt) node[below] {\small{5}};
\fill (2,2) circle (2pt) node[below] {\small{6}};
\fill (2.5,2) circle (2pt) node[below] {\small{1}};
\draw[->, thin] (2.5,2) .. controls +(up:0.5cm) .. (3.5,2);
\fill (2,0) circle (2pt) node[below] {\small{6}};
\draw[->, thin] (2,0) .. controls +(up:0.5cm) .. (2.5,0);
\fill (2.5,0) circle (2pt) node[below] {\small{1}};
\fill (3,2) circle (2pt) node[below] {\small{4}};
\draw[->, thin] (3,2) .. controls +(up:0.5cm) .. (4,2);
\fill (3.5,2) circle (2pt) node[below] {\small{2}};
\fill (3,0) circle (2pt) node[below] {\small{4}};
\draw[->, thin] (3,0) .. controls +(up:0.5cm) .. (3.5,0);
\fill (3.5,0) circle (2pt) node[below] {\small{2}};
\fill (4,2) circle (2pt) node[below] {\small{3}};
\fill (4.5,2) circle (2pt) node[below] {\small{5}};
\draw[->, thin] (4.5,2) .. controls +(up:0.5cm) .. (5,2);
\fill (4,0) circle (2pt) node[below] {\small{3}};
\draw[->, thin] (4,0) .. controls +(up:0.5cm) .. (4.5,0);
\fill (4.5,0) circle (2pt) node[below] {\small{5}};
\fill (5,2) circle (2pt) node[below] {\small{6}};
\fill (5.5,2) circle (2pt) node[below] {\small{1}};
\fill (5.5,1) node[]{$|$} node[below] {\small{0}};
\draw[->, thin] (5.5,2) .. controls +(up:0.5cm) .. (6.5,2);
\fill (5,0) circle (2pt) node[below] {\small{6}};
\draw[->, thin] (5,0) .. controls +(up:0.5cm) .. (5.5,0);
\fill (5.5,0) circle (2pt) node[below] {\small{1}};
\fill (6,2) circle (2pt) node[below] {\small{4}};
\draw[->, thin] (5.5,2) .. controls +(up:0.5cm) .. (6.5,2);
\fill (6.5,2) circle (2pt) node[below] {\small{2}};
\fill (6,0) circle (2pt) node[below] {\small{4}};
\draw[->, thin] (6,0) .. controls +(up:0.5cm) .. (6.5,0);
\fill (6.5,0) circle (2pt) node[below] {\small{2}};
\fill (7,2) circle (2pt) node[below] {\small{3}};
\draw[->, thin] (7.5,2) .. controls +(up:0.5cm) .. (8,2);
\fill (7.5,2) circle (2pt) node[below] {\small{5}};
\draw[->, thin] (6,2) .. controls +(up:0.5cm) .. (7,2);
\fill (7,0) circle (2pt) node[below] {\small{3}};
\draw[->, thin] (7,0) .. controls +(up:0.5cm) .. (7.5,0);
\fill (7.5,0) circle (2pt) node[below] {\small{5}};
\fill (8,2) circle (2pt) node[below] {\small{6}};
\fill (8.5,2) circle (2pt) node[below] {\small{1}};
\draw[->, thin] (8.5,2) .. controls +(up:0.5cm) .. (9.5,2);
\fill (8,0) circle (2pt) node[below] {\small{6}};
\draw[->, thin] (8,0) .. controls +(up:0.5cm) .. (8.5,0);
\fill (8.5,0) circle (2pt) node[below] {\small{1}};
\fill (9,2) circle (2pt) node[below] {\small{4}};
\draw[->, thin] (9,2) .. controls +(up:0.5cm) .. (10,2);
\fill (9.5,2) circle (2pt) node[below] {\small{2}};
\fill (9,0) circle (2pt) node[below] {\small{4}};
\draw[->, thin] (9,0) .. controls +(up:0.5cm) .. (9.5,0);
\fill (9.5,0) circle (2pt) node[below] {\small{2}};
\fill (10,2) circle (2pt) node[below] {\small{3}};
\fill (10.5,2) circle (2pt) node[below] {\small{5}};
\draw[->, thin] (10.5,2) .. controls +(up:0.5cm) .. (11,2);
\fill (10,0) circle (2pt) node[below] {\small{3}};
\draw[->, thin] (10,0) .. controls +(up:0.5cm) .. (10.5,0);
\fill (10.5,0) circle (2pt) node[below] {\small{5}};
\fill (11,2) circle (2pt) node[below] {\small{6}};
\fill (11,0) circle (2pt) node[below] {\small{6}};
\draw[-, thin] (11,0) .. controls +(up:0.5cm) .. (11.5,0);
\end{tikzpicture}
\caption[smallcaption]{The two stationary matchings corresponding to the graph of Figure \ref{Fig:separable2} and the input
(\ref{eq:inputex}).}
\label{Fig:perfectmatch}
\end{center}
\end{figure}
\end{ex}
\subsection{Perfect {\sc fcfm}-matchings in reverse time}
\label{FCFMreverse}
To conclude, let us come back to the {\sc fcfm} model, for which bi-infinite perfect matchings have an interesting property.
First, observe that we can complete the ``exchange'' mechanism introduced in the definition of the backwards and forwards chains in Section \ref{subsec:Markov}, using construction points, as follows:
start from a construction point, and then replace each item, from left to right, by a copy of the class of its match, on the fly, as soon as it is matched.
We illustrate this procedure in Figure \ref{Fig:exchange}, by the completion of the exchanges over two perfectly matched blocks, for the compatibility graph of Figure \ref{fig:example1} and the arrival scenario of Figure
\ref{fig:example2}.
\medskip
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}
\draw[-] (-1,2.5) -- (11,2.5);
\fill (0,2.5) circle (2pt) node[below] {\small{1}};
\fill (1,2.5) circle (2pt) node[below] {\small{3}};
\fill (2,2.5) circle (2pt) node[below] {\small{4}};
\fill (3,2.5) circle (2pt) node[below] {\small{$2$}};
\fill (4,2.5) circle (2pt) node[below] {\small{3}};
\fill (5,2.5) circle (2pt) node[below] {\small{1}};
\fill (6,2.5) circle (2pt) node[below] {\small{3}};
\fill (7,2.5) circle (2pt) node[below] {\small{2}};
\fill (8,2.5) circle (2pt) node[below] {\small{2}};
\fill (9,2.5) circle (2pt) node[below] {\small{1}};
\fill (10,2.5) circle (2pt) node[below] {\small{4}};
\fill (11,2.5) circle (2pt) node[below] {\small{2}};
\draw[-] (-1,1) -- (11,1);
\fill (0,1) circle (2pt) node[below] {\small{1}};
\draw[-, thin] (0,1) .. controls +(up:0.5cm) .. (3,1);
\fill (1,1) circle (2pt) node[below] {\small{3}};
\draw[-, thin] (1,1) .. controls +(up:0.5cm) .. (2,1);
\fill (2,1) circle (2pt) node[below] {\small{4}};
\fill (3,1) circle (2pt) node[below] {\small{$2$}};
\fill (4,1) circle (2pt) node[below] {\small{3}};
\draw[-, thin] (4,1) .. controls +(up:0.5cm) .. (7,1);
\fill (5,1) circle (2pt) node[below] {\small{1}};
\draw[-, thin] (5,1) .. controls +(up:0.5cm) .. (8,1);
\fill (6,1) circle (2pt) node[below] {\small{3}};
\draw[-, thin] (6,1) .. controls +(up:0.5cm) .. (10,1);
\fill (7,1) circle (2pt) node[below] {\small{2}};
\fill (8,1) circle (2pt) node[below] {\small{2}};
\fill (9,1) circle (2pt) node[below] {\small{1}};
\draw[-, thin] (9,1) .. controls +(up:0.5cm) .. (11,1);
\fill (10,1) circle (2pt) node[below] {\small{4}};
\fill (11,1) circle (2pt) node[below] {\small{2}};
\draw[-] (-1,-0.5) -- (11,-0.5);
\fill (0,-0.5) circle (2pt) node[below] {\small{$\bar 2$}};
\draw[-, thin] (0,-0.5) .. controls +(up:0.5cm) .. (3,-0.5);
\fill (1,-0.5) circle (2pt) node[below] {\small{$\bar 4$}};
\draw[-, thin] (1,-0.5) .. controls +(up:0.5cm) .. (2,-0.5);
\fill (2,-0.5) circle (2pt) node[below] {\small{$\bar 3$}};
\fill (3,-0.5) circle (2pt) node[below] {\small{$\bar 1$}};
\fill (4,-0.5) circle (2pt) node[below] {\small{$\bar 2$}};
\draw[-, thin] (4,-0.5) .. controls +(up:0.5cm) .. (7,-0.5);
\fill (5,-0.5) circle (2pt) node[below] {\small{$\bar 2$}};
\draw[-, thin] (5,-0.5) .. controls +(up:0.5cm) .. (8,-0.5);
\fill (6,-0.5) circle (2pt) node[below] {\small{$\bar 4$}};
\draw[-, thin] (6,-0.5) .. controls +(up:0.5cm) .. (10,-0.5);
\fill (7,-0.5) circle (2pt) node[below] {\small{$\bar 3$}};
\fill (8,-0.5) circle (2pt) node[below] {\small{$\bar 1$}};
\fill (9,-0.5) circle (2pt) node[below] {\small{$\bar 2$}};
\draw[-, thin] (9,-0.5) .. controls +(up:0.5cm) .. (11,-0.5);
\fill (10,-0.5) circle (2pt) node[below] {\small{$\bar 3$}};
\fill (11,-0.5) circle (2pt) node[below] {\small{$\bar 1$}};
\end{tikzpicture}
\caption[smallcaption]{Top: the arrival scenario of Figure \ref{fig:example2} (augmented with a 2 item). Middle: the two corresponding blocks, perfectly matched in
{\sc fcfm} with the compatibility graph of Figure \ref{fig:example1}. Bottom: completion of the exchanges by matchings.}
\label{Fig:exchange}
\end{center}
\end{figure}
Now observe the following: after completion of the exchanges on any perfectly matched block by {\sc fcfm}, for any arrival scenario,
reading the arrivals on the matched block from right to left, we see nothing but an {\sc fcfm} matching of the items of classes in
$\td{\mathcal{V}}$. To prove this, let the four nodes $i$, $j$, $k$ and $\ell$ be such that in $G$, $i {\--} k$, $j {\--} k$ and $i {\--} \ell$,
and suppose that, after the exchange, four copies $\td i$, $\td j$, $\td k$ and $\td{\ell}$ are read in that order, in reverse time, i.e. from right to left.
Let us also assume that the {\sc fcfm} rule in reverse time is violated on this quadruple: then
the $\td k$ item is matched with the $\td j$ item while the $\td i$ item is still unmatched, and the latter item is then matched with the $\td{\ell}$ item. This occurs if and only if, in direct time, the four items of
classes $i$, $j$, $k$ and $\ell$ arrive in that order, the $k$ item chooses the $j$ item over the $i$ item for its match, and the unmatched $i$ item is then matched with the $\ell$ item.
This in turn violates the {\sc fcfm} policy, according to which the $k$ item should have been matched with the $i$ item instead of the $j$ item. Hence the assertion above: over any perfectly matched block
in {\sc fcfm}, the block of exchanged items read in reverse time is also perfectly matched in {\sc fcfm} -- see the bottom display of Figure \ref{Fig:exchange}.
Now assume that the conditions of Proposition \ref{prop:perfectmatch} are satisfied for $\phi=\textsc{fcfm}$. Then there exist exactly two bi-infinite perfect {\sc fcfm}-matchings of
the input. Generalizing the above observation to all perfectly matched blocks on $\mathbb Z$, we conclude that there exist exactly two perfect {\sc fcfm}-matchings of the exchanged items in reverse time, corresponding respectively to the two aforementioned perfect {\sc fcfm}-matchings in direct time, after complete exchanges over blocks, read from right to left.
\begin{eg}\label{eg:product-of-hypergroups}
Let $\{H_i\}_{i\in{\bf I}}$ be a family of discrete hypergroups with corresponding weights $\{\omega_i\}_{i\in {\bf I}}$ such that $\omega_i(e_{H_i})=1$ for all $i\in {\bf I}$ except finitely many.
Then $\omega\left((x_i)_{i\in{\bf I}}\right) :=\prod_{i\in{\bf I}} \omega_i(x_i)$ defines a hypergroup weight on the restricted direct product of the hypergroups $\{H_i\}_{i\in {\bf I}}$.
\end{eg}
A discrete hypergroup $H$ is called \ma{finitely generated} if there exists a finite set $F \subseteq H$ with $F= \check{F}$ such that $H=\bigcup_{n\in \mathbb{N}} F^{*n}$; in this case $F$ is called a \ma{finite symmetric generator} of $H$. We define
\begin{equation}\label{eq:tau}
{\tau_F}:H\rightarrow \Bbb{N} \cup \{0\}
\end{equation}
by ${\tau_F}(x):=\inf\{n \in \Bbb{N}: \ x\in F^{*n}\}$ for all $x\neq e$ and ${\tau_F}(e)=0$.
It is straightforward to verify that if $F'$ is another finite symmetric generator of $H$, then $C_1\tau_{F'}\leq {\tau_F} \leq C_2\tau_{F'}$ for some constants $C_1,C_2>0$.
If there is no risk of confusion, we may just use $\tau$ instead of ${\tau_F}$.
\begin{dfn}\label{d:polynomial-exponential}
For a given $\beta\geq 0$, $\omega_\beta(x):=(1+\tau(x))^\beta$ is a central weight on $H$, called a \ma{polynomial weight}. Similarly, for given $C>0$ and $0\leq \alpha \leq 1$, $\sigma_{\alpha,C}(x):=e^{C\tau(x)^\alpha}$ is a central weight on $H$, called an \ma{exponential weight}.
\end{dfn}
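As an elementary illustration, take $H=\mathbb{Z}$ (viewed as a group hypergroup) with $F=\{-1,0,1\}$, so that $\tau_F(x)=|x|$ and $\operatorname{supp}(\delta_x*\delta_y)=\{x+y\}$, reducing the weight condition to $\omega(x+y)\le\omega(x)\omega(y)$. The following sketch (names are ours) checks this inequality for both families on a range of points.

```python
import math

# H = Z as a (group) hypergroup generated by F = {-1, 0, 1}: tau(x) = |x|,
# and supp(delta_x * delta_y) = {x + y}, so the weight condition becomes
# w(x + y) <= w(x) * w(y).
tau = abs

def poly_weight(beta):
    """Polynomial weight (1 + tau(x))^beta."""
    return lambda x: (1 + tau(x)) ** beta

def exp_weight(C, alpha):
    """Exponential weight exp(C * tau(x)^alpha), with 0 <= alpha <= 1."""
    return lambda x: math.exp(C * tau(x) ** alpha)

for w in [poly_weight(2.5), exp_weight(0.7, 0.5)]:
    assert all(w(x + y) <= w(x) * w(y) * (1 + 1e-12)
               for x in range(-30, 31) for y in range(-30, 31))
```

Submultiplicativity rests on $\tau$ being subadditive (here, $|x+y|\le|x|+|y|$) and, for the exponential family, on $t\mapsto t^\alpha$ being subadditive when $\alpha\le 1$; the small tolerance only absorbs floating-point rounding.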
\begin{proposition}\label{p:weights-homomorphisms}
Let $H, H'$ be two discrete hypergroups and $\phi: H \rightarrow H'$ a surjective hypergroup homomorphism. Suppose that $\omega$ is a weight on $H$ such that, for some $\delta>0$, $\omega(x)\geq \delta$ for every $x \in H$. Then $\omega'$, defined by
\[
{\omega}'(y):=\inf\{\omega(x):\ x\in H, \phi(x) =y\} \ \ \ \ \ \ \ (y\in H'),
\]
is a weight on $H'$.
\end{proposition}
\begin{proof}
The proof is immediate once one notes that $\phi: c_c(H) \rightarrow c_c(H')$ satisfies $\norm{\phi(f)}_{\ell^1(H', \omega')} \leq \norm{f}_{\ell^1(H, \omega)}$ and
\[
\omega'(\delta_{\phi(x)} * \delta_{\phi(z)}) =\norm{ \delta_{\phi(x)} * \delta_{\phi(z)} }_{\ell^1(H', \omega')} =\norm{ \phi(\delta_{x} * \delta_{z}) }_{\ell^1(H', \omega')} \leq \norm{ \delta_{x} }_{\ell^1(H, \omega)}\norm{ \delta_{z} }_{\ell^1(H, \omega)} = \omega(x) \omega(z)
\]
for every pair $x,z\in H$.
\end{proof}
\end{subsection}
\begin{subsection}{Weights on $\operatorname{Conj}(G)$}\label{ss:weights-on-Conj(G)-from-G}
Let $(G, \sigma)$ be a weighted group, i.e. $\sigma(xy)\leq \sigma(x)\sigma(y)$ for all $x,y\in G$. We use $\ell^1(G,\sigma)$ to denote the {weighted group algebra} constructed from $\sigma$. Let $Z\ell^1(G,\sigma)$ denote the center of $\ell^1(G,\sigma)$. It is not hard to show that $Z\ell^1(G,\sigma)$ is the set of all $f\in \ell^1(G, \sigma)$ for which $f(yxy^{-1})=f(x)$ for all $x,y\in G$.
The following proposition lets us apply group weights to generate hypergroup weights on $\operatorname{Conj}(G)$. The proof is straightforward, so we omit it here.
\begin{proposition}\label{p:weights-coming-groups}\label{c:relation-between-Zl^1(G,weight)-and-Zl^1(conj(G),center-weight)}
Let $G$ be an FC group possessing a weight $\sigma$. Then the mean function $\omega_\sigma$ defined by $\omega_\sigma(C):={|C|}^{-1}\sum_{t\in C} \sigma(t)$ {$(C\in \operatorname{Conj}(G))$}
is a weight on the hypergroup $\operatorname{Conj}(G)$.
Further, $\ell^1(\operatorname{Conj}(G),\omega_\sigma)$ is isometrically Banach algebra isomorphic to $Z\ell^1(G,\sigma)$.
\end{proposition}
\begin{rem}\label{r:central-weights-on-groups-UNIQUE}
Let $G$ be an FC group and let $\omega$ be a central weight on $\operatorname{Conj}(G)$. Then the mapping $\sigma_\omega$, defined on $G$ by $\sigma_\omega(x):=\omega(C_x)$, is a group weight on $G$. Moreover, $\ell^1(\operatorname{Conj}(G),\omega)$ as a Banach algebra is isometrically isomorphic to $Z\ell^1(G,\sigma_\omega)$.
\end{rem}
\begin{eg}\label{eg:|C|-is-a-weight-on-conj(G)}
Let $G$ be a discrete FC group. The mapping $\omega(C)=|C|$, for $C\in\operatorname{Conj}(G)$, is a central weight on $\operatorname{Conj}(G)$.
\end{eg}
\begin{eg}\label{eg:defined-I-C}
Let $G =\bigoplus_{i\in{\bf I}}{ G_i}$ for a family of finite groups $\{G_i\}_{i\in {\bf I}}$.
Given $C=(C_i)_{i\in {\bf I}}\in\operatorname{Conj}(G)$, define ${\bf I}_C:=\{i\in {\bf I}: C_i\neq e_{G_i}\}$.
For each $\alpha>0$, we define a mapping $\omega_\alpha(C):= \left(1 + \sum_{i\in {\bf I}_C}|C_{i}|\right)^\alpha$
(the sum is finite, since ${\bf I}_C$ is). We show that $\omega_\alpha$ is a central weight on $\operatorname{Conj}(G)$.
To do so, let $E \subseteq CD$ for some $E,C,D\in \operatorname{Conj}(G)$. One can easily show that $E_i\subseteq C_i D_i$ for each $i\in {\bf I}$, whence ${\bf I}_E \subseteq {\bf I}_C \cup {\bf I}_D$.
Therefore,
\begin{eqnarray*}
\omega_\alpha(E) &=& ( 1 + \sum_{i\in {\bf I}_E}|E_{i}|)^\alpha
\leq (1 + \sum_{i\in {\bf I}_E} |C_{{i}}||D_{{i}}|)^\alpha\ \ \ \ \ \text{(by Example~\ref{eg:|C|-is-a-weight-on-conj(G)})}\\
&\leq& \left( 1+ \sum_{i\in {\bf I}_C}|C_{{i}}|\right)^\alpha\;\left( 1+ \sum_{i\in {\bf I}_D}|D_{{i}}|\right)^\alpha= \omega_\alpha(C) \omega_\alpha(D).
\end{eqnarray*}
\end{eg}
A group $G$ is called a group with \ma{finite commutator group}, or \ma{FD}, if its derived subgroup is finite. It is immediate that for such a group $G$, $|C|\leq |G'|$ for every $C\in \operatorname{Conj}(G)$, where $G'$ is the \ma{derived subgroup} of $G$. Therefore, the orders of the conjugacy classes of an FD group are uniformly bounded by $|G'|$. The converse is also true: for an FC group $G$, if the orders of the conjugacy classes are uniformly bounded, then $G$ is an FD group, see \cite[Theorem~14.5.11]{rob}. The following proposition implies that every hypergroup weight on the conjugacy classes of an FD group which is constructed from a group weight (as in Proposition~\ref{p:weights-coming-groups}) is equivalent to a central weight. We omit the proof of the following proposition, as it is straightforward.
\begin{proposition}\label{p:Omega-is-a-weight}
Let $(G,\sigma)$ be a weighted FD group.
Then the hypergroup weight $\omega_z(C):=|G'|^2 \omega_\sigma(C)$, for $C\in\operatorname{Conj}(G)$,
forms a central weight. Here $\omega_\sigma$ is defined as in Proposition~\ref{p:weights-coming-groups}.
\end{proposition}
In contrast to Proposition~\ref{p:Omega-is-a-weight}, we will see in the following examples that there exist weights on FC groups (with infinite derived subgroup) which are not equivalent to any central weight.
\end{subsection}
\begin{eg}\label{eg:weight-not- central}
Let $S_3$ be the symmetric group of order $6$.
Let $\omega$ be defined on $\operatorname{Conj}(S_3)$ by $\omega(C_e)=1$, $\omega(C_{(12)})=2$, and $\omega(C_{(123)})=5$.
One may verify that $\omega$ is a weight on $\operatorname{Conj}(S_3)$.
On the other hand, since $5=\omega(C_{(123)}) \nleqslant \omega(C_{(12)})^2=4$, $\omega$ is not a central weight.
\end{eg}
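This verification can be automated: the sketch below (helper names are ours) computes the hypergroup convolution on $\operatorname{Conj}(S_3)$ directly from the group multiplication, checks the weight inequality $\omega(\delta_C*\delta_D)\le\omega(C)\omega(D)$ for all pairs of classes, and confirms the failure of centrality.

```python
from itertools import permutations
from fractions import Fraction

# S_3 realized as permutations of {0, 1, 2}.
S3 = list(permutations(range(3)))

def mult(s, t):
    return tuple(s[t[i]] for i in range(3))        # composition s o t

def inv(s):
    return tuple(sorted(range(3), key=lambda i: s[i]))

def conj_class(s):
    return frozenset(mult(mult(g, s), inv(g)) for g in S3)

# Classes sorted by size: [C_e (1 elt), C_(123) (2 elts), C_(12) (3 elts)].
classes = sorted({conj_class(s) for s in S3}, key=len)

def convolve(C, D):
    """delta_C * delta_D on Conj(G): uniform average over products."""
    coeff = {}
    for s in C:
        for t in D:
            E = conj_class(mult(s, t))
            coeff[E] = coeff.get(E, Fraction(0)) + Fraction(1, len(C) * len(D))
    return coeff

w = {classes[0]: 1, classes[1]: 5, classes[2]: 2}  # w(C_e), w(C_(123)), w(C_(12))

# Weight inequality: w(delta_C * delta_D) <= w(C) w(D) for all pairs.
for C in classes:
    for D in classes:
        lhs = sum(c * w[E] for E, c in convolve(C, D).items())
        assert lhs <= w[C] * w[D]

# But w is not central: C_(123) lies in supp(delta_{C_(12)} * delta_{C_(12)}),
# while w(C_(123)) = 5 > 4 = w(C_(12))^2.
Ct, Cr = classes[2], classes[1]
assert Cr in convolve(Ct, Ct) and w[Cr] > w[Ct] ** 2
```

For instance $\delta_{C_{(12)}}*\delta_{C_{(12)}}=\tfrac13\delta_{C_e}+\tfrac23\delta_{C_{(123)}}$, so $\omega(\delta_{C_{(12)}}*\delta_{C_{(12)}})=\tfrac{11}{3}\le 4$, while centrality would demand $5\le 4$.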
\begin{eg}\label{eg:whights-which-are-not-central-weights}
Consider the restricted direct product $G=\bigoplus_{n\in\Bbb{N}} S_3$, and define the weight $\omega' := \prod_{n\in \Bbb{N}}\omega$ on $\operatorname{Conj}(G)$, where $\omega$ is the hypergroup weight on $\operatorname{Conj}(S_3)$ defined in Example~\ref{eg:weight-not- central}.
For each $N\in\Bbb{N}$, define $D_N:=\prod_{n\in\Bbb{N}}D_n^{(N)} \in \operatorname{Conj}(G)$, where $D_n^{(N)}=C_{(123)}$ for all $n\in \{1, \ldots, N\}$ and $D_n^{(N)}=C_e$ otherwise. One can verify that $D_N \in \operatorname{supp}(\delta_{E_N}*\delta_{E_N})$ for $E_N=\prod_{n\in\Bbb{N}}E^{(N)}_n \in \operatorname{Conj}(G)$ with $E_n^{(N)}=C_{(12)}$ for all $n\in \{1, \ldots, N\}$ and $E_n^{(N)}=C_e$ otherwise. Therefore
\[
\frac{\omega'(D_N)}{\omega'(E_N)^2}=\prod_{n=1}^{N} \frac{\omega(C_{(123)})}{\omega(C_{(12)})^2} = (5/4)^{N} \rightarrow \infty
\]
as $N\rightarrow \infty$. Hence, $\omega'$ is not equivalent to any central weight.
\end{eg}
We close this subsection with the following corollary of Proposition~\ref{p:weights-homomorphisms}.
\begin{cor}\label{c:Quotient-mapping-Tomega}
Let $G$ be an FC group, $N$ a normal subgroup of $G$, and $\omega$ a weight on $\operatorname{Conj}(G)$ for which there is some $\delta>0$ such that $\omega(C)>\delta$ for every $C\in \operatorname{Conj}(G)$.
Then the mapping $\tilde{\omega}:\operatorname{Conj}(G/N)\rightarrow \Bbb{R}^+$
defined by $\tilde{\omega}(C_{xN}):=\inf\{\omega(C_{xy}):\ y\in N\}$, for $C_{xN} \in \operatorname{Conj}(G/N)$,
forms a weight on $\operatorname{Conj}(G/N)$.
\end{cor}
\begin{subsection}{Weights on duals of compact groups}\label{s:weights-on-^G}
In this subsection, $G$ is a compact group. We recall that for each $\pi \in \widehat{G}$ and $f\in L^1(G)$,
\[
\widehat{f}(\pi):=\int_G f(x) \overline{\pi(x)} dx
\]
is the \ma{Fourier transform} of $f$ at $\pi$. Let $VN(G)$ denote the group von Neumann algebra of $G$, i.e. the von Neumann algebra generated by the left regular representation of $G$. It is well-known that the predual of $VN(G)$, denoted by $A(G)$, is a Banach algebra of continuous functions on $G$; it is called the \ma{Fourier algebra} of $G$. Moreover, for every $f\in A(G)$,
\[
\norm{f}:=\sum_{\pi \in \widehat{G}} d_\pi \norm{\widehat{f}(\pi)}_1 <\infty,
\]
where $\norm{\cdot}_1$ denotes the trace-class operator norm (see \cite[Section~32]{he2}).
In an attempt to find the noncommutative analogue of weights on groups, Lee and Samei in \cite{LeSa} defined a \ma{weight on $A(G)$} to be a densely defined (not necessarily bounded) operator $W$ affiliated with $VN(G)$ and satisfying certain properties mentioned in \cite[Definition~2.4]{LeSa} (see also \cite{sp}).
In particular, they assume that $W$ has a bounded inverse, $W^{-1}$, which belongs to $VN(G)$.
For a weight $W$ on $A(G)$, the \ma{Beurling-Fourier algebra} denoted by $A(G,W)$ is defined to be the set of all $f\in A(G)$ such that
\[
\norm{f}_{A(G,W)}:=\sum_{\pi \in \widehat{G}} d_\pi \norm{\widehat{f}(\pi) \circ W}_1 <\infty.
\]
Indeed, $(A(G,W), \|\cdot\|_{A(G,W)})$ forms a Banach algebra under pointwise multiplication. For abelian groups, the Beurling-Fourier algebra corresponds to the classical weighted group algebra on the dual group. In \cite{LeSa}, the authors also studied Arens regularity and isomorphism to operator algebras for Beurling-Fourier algebras.
\begin{dfn}\label{d:defining-weights-on-^G}
Let $G$ be a compact group and $W$ a weight on $A(G)$. We define a function $\omega_W:\widehat{G} \rightarrow (0,\infty)$ by
\begin{equation}\label{eq:weights-on-^G}
\omega_W(\pi):=\frac{\norm{I_\pi\circ W}_1}{d_\pi}\ \ \ (\pi \in \widehat{G}),
\end{equation}
where $\norm{\cdot}_1$ denotes the trace norm and $I_\pi$ is the identity matrix corresponding to the Hilbert space of $\pi$.
\end{dfn}
As a specific class of weights on the Fourier algebra of a compact group $G$, in \cite{LeSa} (and independently, in \cite{sp}), \ma{central weights} on $A(G)$ are defined.
Indeed, \cite[Theorem~2.12]{LeSa} implies that each central weight $W$ can be represented by a unique function $\omega_W:\widehat{G}\rightarrow (0,\infty)$ such that $\omega_W(\sigma) \leq \omega_W(\pi_1)\omega_W(\pi_2)$ for all $\pi_1,\pi_2,\sigma \in \widehat{G}$ with $\sigma \in \operatorname{supp}(\delta_{\pi_1}*\delta_{\pi_2})$. In this specific case of operator weights, $\omega_W$ matches our definition in Definition~\ref{d:defining-weights-on-^G} of a central weight on the hypergroup $\widehat{G}$. In the following we show that the same is true for a general weight on $A(G)$ as well.
\vskip1.0em
Let us define $
ZA(G,W):=\{f\in A(G,W): f(yxy^{-1})=f(x)\ \text{for all $x,y\in G$}\},$
which is a Banach algebra with the pointwise product and the norm $\norm{\cdot}_{A(G,W)}$. Note that for operator weights $W$ with $\omega_W(\pi)=1$ for every $\pi \in \widehat{G}$, $ZA(G,W)=ZA(G)$. For more on $ZA(G)$, see \cite{zag}.
\begin{theorem}\label{t:ZA(G,W)=l^1(^G,w)}
Let $G$ be a compact group and $W$ a weight on $A(G)$.
Then $\omega_W$ is a weight on the hypergroup $\widehat{G}$ and the weighted hypergroup algebra $\ell^1(\widehat{G},\omega_W)$ is isometrically isomorphic to $ZA(G,W)$.
\end{theorem}
\begin{proof}
Let ${\cal X}(G)$ denote the linear span of all the characters of $G$.
First define a linear mapping ${\mathcal T}: {\cal X}(G) \rightarrow c_c(\widehat{G})$ by
${\mathcal T}(\chi_\pi)=d_\pi\delta_\pi$ for each $\pi \in \widehat{G}$. Let $f=\sum_{i=1}^n \alpha_i \chi_{\pi_i} \in \mathcal{X}(G)$ for $\pi_i \in \widehat{G}$ and $\alpha_i\in \Bbb{C}$. In this case,
\begin{eqnarray*}
\norm{{\mathcal T}(f)}_{\ell^1(\widehat{G},\omega_W)} &=& \sum_{i=1}^n |\alpha_i| d_{\pi_i} \omega_W(\pi_i) = \sum_{i=1}^n |\alpha_i| d_{\pi_i} \frac{\norm{I_{\pi_i}\circ W}_1}{d_{\pi_i}}\\
&=& \sum_{i=1}^n d_{\pi_i} \norm{\frac{\alpha_i}{d_{\pi_i}} I_{\pi_i}\circ W}_1 = \sum_{i=1}^n d_{\pi_i} \norm{ \alpha_i\widehat{\chi}_{\pi_i}(\pi_i)\circ W}_1=\norm{f}_{A(G,W)}.
\end{eqnarray*}
Therefore, ${\mathcal T}$ forms a norm preserving linear mapping.
To show that ${\cal T}$ is an algebra homomorphism note that ${\cal T}(\chi_{\pi_1}\chi_{\pi_2})= {\cal T}(\chi_{\pi_1})* {\cal T}(\chi_{\pi_2})$.
It is known that ${\cal X}(G)$ is dense in $ZA(G,W)$ and clearly $c_c(\widehat{G})$ is dense in $\ell^1(\widehat{G}, \omega_W)$. So ${\mathcal T}$ can be extended as an algebra isomorphism from $ZA(G,W)$ onto $\ell^1(\widehat{G},\omega_W)$ which preserves the norm. In particular, $\ell^1(\widehat{G},\omega_W)$ forms an algebra with respect to its weighted norm and the convolution, and so $\omega_W$ is actually a hypergroup weight on $\widehat{G}$.
\end{proof}
The proof of the following lemma is straightforward so we omit it here.
\begin{lemma}\label{l:dimension-as-a-weight}
Let $G$ be a compact group and $\widehat{G}$ the set of all irreducible representations of $G$, viewed as a discrete commutative hypergroup. Then $\omega_\beta(\pi)=d_\pi^\beta= h(\pi)^{\beta/2}$ is a central weight for each $\beta\geq 0$.
\end{lemma}
In the following, recall $ \sideset{_2}{}\sum$ defined in (\ref{eq:sum-2}).
\begin{eg}\label{eg:weight-driven-from-Z-weights} {\bf \textsf{(Lifting weights from $\Bbb{Z}$ to $\widehat{{\operatorname{SU}}}(2)$)}}
Let $\sigma$ be a weight on the group $\Bbb{Z}$.
We define
\begin{equation}\label{eq:omega-sigma}
\omega_\sigma(\pi_\ell) := \frac{1}{\ell+1} \sideset{_2}{}\sum_{r=-\ell}^{\ell} \sigma(r) \ \ \ \ \ (\ell \in {\mathbb N}_0).
\end{equation}
Recall that the elements of $\widehat{\operatorname{SU}}{(2)}$ can be enumerated as $\pi_\ell$, $\ell \in {\mathbb N}_0$.
Suppose that $m,n\in{\mathbb N}_0$ and without loss of generality $n\geq m$. Then,
\begin{eqnarray*}
\omega_\sigma(\pi_m)\omega_\sigma(\pi_n) &=& \frac{1}{m+1}\sideset{_2}{}\sum_{t=-m}^{m} \sigma(t) \; \frac{1}{n+1} \sideset{_2}{}\sum_{s=-n}^{n} \sigma(s)\\
&\geq&
\frac{1}{(m+1)(n+1)}\sideset{_2}{}\sum_{t=-m}^{m} \sideset{_2}{}\sum_{s=-n}^{n} \sigma(t+s)
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (\dagger)\\
&=& \frac{1}{(m+1)(n+1)}\sideset{_2}{}\sum_{t=n-m}^{n+m} \sideset{_2}{}\sum_{s=-t}^t \sigma(s) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (\ddagger)\\
&=& \sideset{_2}{}\sum_{t=n-m}^{n+m } \frac{(t+1)}{(m+1)(n+1)} \left(\frac{1}{t+1} \sideset{_2}{}\sum_{s=-t}^{t} \sigma(s) \right)\\
&=& \sideset{_2}{}\sum_{t=n-m}^{n+m} \frac{(t+1)}{(m+1)(n+1)} \omega_\sigma(\pi_t)\\
&=& \omega_\sigma(\delta_{\pi_m}*\delta_{\pi_n}).
\end{eqnarray*}
To show that the summations $(\dagger)$ and $(\ddagger)$ are equal, let us arrange $(\dagger)$ as follows.
$$
\begin{array}{l l l l l}
\sigma(-m-n)&+\sigma(-m-n+2)&+\cdots&+\sigma(-m+n-2)&+\sigma(-m+n)\cr
+\sigma(-m-n+2)&+\sigma(-m-n+4)&+\cdots&+\sigma(-m+n)&+\sigma(-m+n+2)\cr
\vdots & \vdots & \ddots & \vdots & \vdots \cr
+\sigma(m-2)&+\sigma(m)&+\cdots&+\sigma(m+n-4)&+\sigma(m+n-2)\cr
+\sigma(m)&+\sigma(m+2)&+\cdots&+\sigma(m+n-2)&+\sigma(m+n)\ .\cr
\end{array}$$
Now the sum of all the entries in the first column and the last row is equal to
$$\sideset{_2}{}\sum_{s=-m-n}^{m+n} \sigma(s)\ .$$
The next column and row give
$$\sideset{_2}{}\sum_{s=-m-n+2}^{m+n-2} \sigma(s)\ ,$$
and so on. So by doing this finitely many times, we get $(\ddagger)$.
Indeed, the weight $\omega_\sigma$ follows from the recipe of Definition~\ref{d:defining-weights-on-^G} applied to the non-central weight $W$ on $A({\operatorname{SU}}(2))$ defined in (\ref{eq:W}) in Appendix~\ref{a:SU(2)}. So instead of the above computations, one could also use Theorem~\ref{t:ZA(G,W)=l^1(^G,w)} to prove that $\omega_\sigma$ is a weight on $\widehat{{\operatorname{SU}}}(2)$.
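Both the rearrangement $(\dagger)=(\ddagger)$ and the resulting submultiplicativity of $\omega_\sigma$ can also be checked numerically. The sketch below (names are ours) does so for the submultiplicative weight $\sigma(r)=a^{|r|}$ on $\Bbb{Z}$ with $a=1.3$; any submultiplicative $\sigma$ would work the same way.

```python
def sum2(f, lo, hi):
    """The step-2 sum: f(lo) + f(lo + 2) + ... + f(hi)."""
    return sum(f(r) for r in range(lo, hi + 1, 2))

sigma = lambda r: 1.3 ** abs(r)                    # a weight on Z
omega = lambda l: sum2(sigma, -l, l) / (l + 1)     # omega_sigma(pi_l)

for m in range(0, 8):
    for n in range(m, 8):
        # the rearrangement (dagger) = (ddagger)
        lhs = sum2(lambda t: sum2(lambda s: sigma(t + s), -n, n), -m, m)
        rhs = sum2(lambda t: sum2(sigma, -t, t), n - m, n + m)
        assert abs(lhs - rhs) < 1e-9
        # submultiplicativity: omega(delta_{pi_m} * delta_{pi_n})
        conv = sum2(lambda t: (t + 1) * omega(t), n - m, n + m) \
               / ((m + 1) * (n + 1))
        assert conv <= omega(m) * omega(n) + 1e-9
```

The first assertion checks the column-and-row regrouping of the array above; the second is exactly the chain of (in)equalities preceding it, with the convolution expanded via the Clebsch-Gordan support $\{\pi_{|n-m|},\pi_{|n-m|+2},\ldots,\pi_{n+m}\}$.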
\end{eg}
\ignore{
\begin{eg}\label{eg:omega_a-on-SU(2)} {\bf \textsf{(Non-central weights on $\widehat{{\operatorname{SU}}}(2)$)}}
Let us define $\omega_a:\widehat{\operatorname{SU}}(2)\rightarrow \Bbb{R}^+$ such that $\omega_a(\pi_\ell):={a^{\ell+1}}/({\ell+1})$
for a fixed constant $a\geq {(\sqrt{5}+1)}/2$. We show that $\omega_a$ is a weight on $\widehat{\operatorname{SU}}(2)$.
For a pair of $ \ell,\ell'$ in $\Bbb{N}_0:=\{0,1,2,3\ldots\}$, without loss of generality suppose that $\ell\geq \ell'$. So we have
\begin{eqnarray*}
\sideset{_2}{}\sum_{r=\ell-\ell'}^{\ell+\ell'} (r+1) \omega_a(\pi_r) &=& \sideset{_2}{}\sum_{r=\ell-\ell'}^{\ell+\ell'} a^{r+1} = a^{\ell-\ell'+1}\; \sideset{_2}{}\sum_{r=0}^{2\ell'} a^{r}\\
&=& a^{\ell-\ell'+1}\; \frac{a^{2\ell' +2 }-1}{a^2 -1} = \frac{a^{\ell + \ell' +3}}{a^2-1} - \frac{a^{\ell - \ell' +1}}{a^2-1}\\
&\leq& a^{\ell+\ell'+2} \left( \frac{a}{a^2-1}\right) \leq \frac{a}{a^2 -1}\omega_a(\ell)(\ell+1)\omega_a(\ell')(\ell'+1).
\end{eqnarray*}
But since $a \geq {(\sqrt{5}+1)}/2$, $a/(a^2-1)\leq 1$; therefore, $\omega_a(\delta_{\pi_\ell}*\delta_{\pi_{\ell'}}) \leq \omega_a({\pi_\ell}) \omega_a({\pi_{\ell'}})$.
Note that
${\omega_a(\pi_{2\ell})}/{\omega_a(\pi_{\ell})^2} \rightarrow \infty$
when $\ell\rightarrow \infty$; while $\pi_{2\ell}\in \operatorname{supp}(\delta_{\pi_\ell}*\delta_{\pi_\ell})$. Hence, not only is $\omega_a$ a non-central weight but also it is not equivalent to any central weight.
\end{eg}
\begin{rem} Let $\sigma(n)=a^n$ for some $a\geq 1$ on $\Bbb{Z}$. Clearly, $\sigma$ is a weight on $\Bbb{Z}$ and therefore, one may consider the weight $\omega_\sigma$ as defined in Example~\ref{eg:weight-driven-from-Z-weights}. For each $\ell\in{\mathbb N}_0$ and $a\geq (1+\sqrt{5})/2$,
\begin{eqnarray*}
\omega_\sigma(\pi_\ell)= \frac{1}{\ell+1} \sideset{_2}{}\sum_{r=-\ell}^{\ell} a^r
= \frac{a^{-\ell}}{\ell+1}\; \frac{a^{2\ell+2}-1}{a^2-1} =\omega_a(\pi_\ell) \; \frac{a^{2\ell+2}-1}{a^{2\ell+1}(a^2-1)},
\end{eqnarray*}
where $\omega_a$ is the weight defined in Examples~\ref{eg:omega_a-on-SU(2)}.
But for every $\ell\in {\mathbb N}_0$ and $a>1$,
\[
\frac{1}{a+1} \leq \frac{a^{2\ell+2} - 1}{a^{2\ell+1}(a^2-1)} \leq \frac{a}{a^2-1}.
\]
This implies that the weights $\omega_\sigma$ and $\omega_a$, respectively in Examples~\ref{eg:weight-driven-from-Z-weights} and \ref{eg:omega_a-on-SU(2)}, are equivalent.
\end{rem}
}
Fix $\beta>0$. One may apply the construction in Example~\ref{eg:weight-driven-from-Z-weights} for
\begin{equation}\label{eq:sigma-beta}
\sigma(\ell):=\left\{
\begin{array}{l l}
1 & 0 \leq \ell\\
( 1 - \ell)^\beta & \ell < 0
\end{array}
\right.\ \ \ (\ell \in \Bbb{Z})
\end{equation}
to construct a hypergroup weight $\omega_\sigma$ on $\widehat{\operatorname{SU}}(2)$. Observe that the weight $\omega_\sigma$ is equivalent to the weight
$\omega_\beta$
defined in Lemma~\ref{l:dimension-as-a-weight}.
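The function $\sigma$ in (\ref{eq:sigma-beta}) is indeed a weight on $\Bbb{Z}$, i.e.\ $\sigma(k+l)\leq\sigma(k)\sigma(l)$; this is elementary to check case by case, and the following sketch (with a hypothetical sample exponent $\beta$) double-checks it numerically:

```python
# Submultiplicativity of sigma from (eq:sigma-beta): sigma(k+l) <= sigma(k)*sigma(l).

beta = 1.7  # any beta > 0; this particular value is just a sample

def sigma(l):
    return 1.0 if l >= 0 else (1 - l) ** beta

for k in range(-40, 41):
    for l in range(-40, 41):
        # small multiplicative slack guards against float rounding
        assert sigma(k + l) <= sigma(k) * sigma(l) * (1 + 1e-12)
```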
We will see in Section~\ref{s:Arens-regularity} that this particular weight gives rise to interesting classes of examples.
To construct weights from subgroups of compact groups, one may consult \cite[Proposition~4.11]{sp}.
\end{subsection}
\begin{subsection}{Weights on polynomial hypergroups}\label{ss:weights-on-polynomials}
Recall that $\widehat{\operatorname{SU}}(2)$ is a particular example of a polynomial hypergroup, namely the one associated with the \ma{Chebyshev polynomials}. Similar arguments can be applied to construct hypergroup weights on polynomial hypergroups from group weights on $\Bbb{Z}$.
\begin{eg}\label{eg:Chebyshev-weight-Arens}
Let $f:\Bbb{N}_0 \rightarrow \Bbb{R}^+$ be an increasing function such that $f(0)=1$.
Then $\omega_f(n)=f(n)+2$ is a central weight on $\Bbb{N}_0$ when it is equipped with the Chebyshev polynomial hypergroup structure of the first type.
Applying the argument in Example~\ref{eg:basic-polynomial}, we can see that $\ell^1(\Bbb{N}_0,\omega_f)$ is isomorphic to the symmetric subalgebra of $A(\Bbb{T},\sigma_f)$, that is $Z_{\pm 1}A(\Bbb{T},\sigma_f):=\{u+\check{u}: u\in A(\Bbb{T},\sigma_f)\}$, for the group weight
\[
\sigma_f(\ell):=\left\{
\begin{array}{l l}
1 & \ell \geq 0\\
f(-\ell) & \ell<0
\end{array}
\right. \ \ \ (\ell\in \Bbb{Z})
\]
\end{eg}
\end{subsection}
\end{section}
\begin{section}{Arens regularity}\label{s:Arens-regularity}
In \cite[Chapter~4]{kam}, Kamyabi-Gol applied the topological center of hypergroup algebras to prove some results about hypergroup algebras and their second duals. For example, in \cite[Corollary~4.27]{kam}, he showed that for a (not necessarily discrete and commutative) hypergroup $H$ (which possesses a Haar measure $h$), $L^1(H,h)$ is Arens regular if and only if $H$ is finite.
Arens regularity of weighted group algebras has been studied by Craw and Young in \cite{yo}. They showed that a locally compact group $G$ has a weight $\omega$ such that $L^1(G,\omega)$ is Arens regular if and only if $G$ is discrete and countable. The monograph \cite{da2} presents a thorough report on Arens regularity of weighted group algebras. In the following we adapt the machinery developed in \cite[Section~8]{da2} to weighted hypergroups. In \cite[Section~3]{da2}, the authors study repeated limit conditions and give a rich variety of results for them; we will use some of these results here.
First let us recall the following definitions.
Let ${\mathcal A}$ be a Banach algebra. For $f,g\in {\mathcal A}$, $\phi\in {\mathcal A}^*$, and $F,G\in {\mathcal A}^{**}$, we define the following module actions.
\[
\begin{array}{l l}
\langle f\cdot \phi, g\rangle:=\langle \phi, gf\rangle,& \langle \phi \cdot f, g\rangle:= \langle \phi, fg\rangle\\
\langle \phi\cdot F, f\rangle := \langle F, f \cdot \phi \rangle, & \langle F\cdot \phi, f\rangle:=\langle F, \phi\cdot f\rangle\\
\langle F\Diamond G, \phi\rangle:=\langle G, \phi\cdot F\rangle, & \langle G \Box F, \phi \rangle := \langle G, F \cdot \phi \rangle.
\end{array}
\]
Let $F,G\in {\mathcal A}^{**}$, and let $(f_\alpha)_\alpha$ and $(g_\beta)_\beta$ be nets in $\mathcal A$ such that $f_\alpha\rightarrow F$ and $g_\beta \rightarrow G$ in the weak$^*$ topology.
One may show that for products $\Box$ and $\Diamond$ of $\mathcal A^{**}$,
\[
F\Box G = w^*-\lim_\alpha w^*-\lim_\beta f_\alpha g_\beta \ \ \text{and} \ \ \ F\Diamond G= w^*-\lim_\beta w^*-\lim_\alpha f_\alpha g_\beta.
\]
The Banach space ${\mathcal A}^{**}$ equipped with either of the multiplications $\Box$ or $\Diamond$ forms a Banach algebra.
The Banach algebra ${\mathcal A}$ is called \ma{Arens regular} if the two products $\Box$ and $\Diamond$ coincide.
Let $c_0(H,\omega^{-1}):=\{ f:H\rightarrow \Bbb{C}:\ \ f\omega^{-1}\in c_0(H)\}$.
Note that $\ell^1(H,\omega)$ is the dual of $c_0(H,\omega^{-1})$. Hence, $\ell^1(H,\omega)^{**}$ can be decomposed as $\ell^1(H,\omega)\bigoplus c_0(H,\omega^{-1})^\perp$
where $c_0(H,\omega^{-1})^\perp:=\{F\in \ell^1(H,\omega)^{**}:\ \langle F,\phi\rangle =0 \; \text{for all $\phi \in c_0(H,\omega^{-1})$}\}$.
To see this decomposition, let $F\in \ell^1(H,\omega)^{**}$; then $f:=F|_{c_0(H,\omega^{-1})}\in \ell^1(H,\omega)$ and consequently $\Phi:=F-f \in c_0(H,\omega^{-1})^\perp$. Therefore, $F=(f,\Phi) \in \ell^1(H,\omega)\bigoplus c_0(H,\omega^{-1})^\perp$.
\begin{proposition}\label{p:Arens-regularity}
Let $(H,\omega)$ be a weighted hypergroup. Then $\ell^1(H,\omega)$ is Arens regular if the multiplications $\Box$ and $\Diamond$ restricted to $c_0(H,\omega^{-1})^{\perp}$ are constantly $0$.
\end{proposition}
\begin{proof}
Let $F=(f,\Phi)$ and $G=(g,\Psi)$ belong to $\ell^1(H,\omega)^{**}$.
First, note that $f\Box \Psi=f\Diamond \Psi$ and $\Phi \Box g=\Phi \Diamond g$, since the two Arens products agree whenever one of the factors lies in the algebra itself.
Thus $F \Box G = (f, \Phi)\Box (g,\Psi) = (fg, f\Box \Psi+ \Phi \Box g) = (fg, f\Diamond \Psi + \Phi \Diamond g) = F\Diamond G$.
\end{proof}
Let us define the bounded function $\Omega_\omega: H\times H \rightarrow (0,1]$ by
\begin{equation}\label{eq:Omega}
\Omega_\omega(x,y):=\frac{\omega(\delta_x*\delta_y)}{\omega(x)\omega(y)}\ \ \ (x,y\in H).
\end{equation}
If there is no risk of confusion, we may use $\Omega$ instead of $\Omega_\omega$.
\vskip1.0em
For a weighted group $(G,\sigma)$, the Arens regularity of weighted group algebras has been characterized completely; \cite[Theorem~8.11]{da2} proves that it is equivalent to the \ma{$0$-clusterness} of the function $\Omega_\sigma$ on $G\times G$, that is
\[
\lim_n \lim_m \Omega_\sigma(x_m, y_n) = \lim_m \lim_n \Omega_\sigma(x_m, y_n)=0
\]
whenever $(x_m)$ and $(y_n)$ are sequences in $G$, each consisting of distinct points, and both repeated limits exist.
A stronger version of $0$-clusterness is called \ma{strong $0$-clusterness} (see \cite[Section~3]{da2}).
We define strongly $0$-cluster functions as presented in \cite[Definition~3.6]{da2} for discrete topological spaces.
\begin{dfn}
Let $X$ and $Y$ be two sets and let $f$ be a bounded function from $X\times Y$ into $\Bbb{C}$. Then $f$ \ma{$0$-clusters strongly} on $X\times Y$ if
\[
\lim_{x\rightarrow\infty} \limsup_{y\rightarrow\infty} f(x,y) = \lim_{y\rightarrow\infty} \limsup_{x\rightarrow \infty} f(x,y) = 0.
\]
\end{dfn}
\vskip1.5em
Let us define the Banach space isomorphism $\kappa: \ell^1(H,\omega)\rightarrow \ell^1(H)$ by $\kappa(f)=f\omega$ for each $f\in \ell^1(H,\omega)$.
Note that for $\kappa^{**}:\ell^1(H,\omega)^{**}\rightarrow \ell^1(H)^{**}$ and $\Phi\in c_0(H,\omega^{-1})^\perp$, one gets
$
\langle \kappa^{**}(\Phi), \phi\rangle = \langle \Phi, \kappa^*(\phi)\rangle$
which is $0$ for all $\phi\in c_0(H)$. Therefore $\kappa^{**}(\Phi)\in c_0(H)^\perp$. The converse (which we do not use here) is also true and straightforward to show.
\vskip1.0em
The following theorem is a generalization of \cite[Theorem~8.8]{da2}. In the proof we use some techniques of
the proof of \cite[Theorem~3.16]{LeSa}.
\begin{theorem}\label{t:Box-product-is-zero}
Let $(H,\omega)$ be a weighted hypergroup and let $\Omega$ $0$-cluster strongly on $H\times H$. Then $\Phi\Box \Psi=0$ and $\Phi\Diamond \Psi=0$ whenever $\Phi,\Psi\in c_0(H,1/\omega)^{\perp}$.
\end{theorem}
\begin{proof}
Let us show the theorem for $\Phi\Box \Psi$, the proof for the other action is similar. Let $\Phi,\Psi\in c_0(H,1/\omega)^\perp$. By Goldstine's theorem, there are nets $(f_\alpha)_\alpha, (g_\beta)_\beta \subseteq \ell^1(H)$ such that $f_\alpha \rightarrow \kappa^{**}(\Phi)$ and $g_\beta \rightarrow \kappa^{**}(\Psi)$ in the weak$^*$ topology of $\ell^1(H)^{**}$ while $\sup_\alpha\norm{f_\alpha}_1\leq 1$ and $\sup_\beta\norm{g_\beta}_1\leq 1$.
So for each $\psi\in \ell^\infty(H)$,
\begin{eqnarray*}
\langle\psi\omega, \kappa^{**}(\Phi\Box \Psi)\rangle= \langle \kappa^*(\psi), \Phi\Box \Psi \rangle= \lim_\alpha \lim_\beta \langle \psi\omega, \kappa^{-1}(f_\alpha) * \kappa^{-1}(g_\beta) \rangle
= \lim_\alpha \lim_\beta \langle \psi\omega, f_\alpha/\omega * g_\beta/\omega \rangle.
\end{eqnarray*}
Thus
\begin{eqnarray*}
|\langle\psi\omega, \kappa^{**}(\Phi\Box \Psi)\rangle|
&=& \lim_\alpha \lim_\beta |\langle \psi\omega, f_\alpha/\omega * g_\beta/\omega\rangle| \\
&=& \lim_\alpha \lim_\beta \left|\sum_{y\in H} \psi(y) \omega(y) \sum_{x,z\in H} \frac{f_\alpha(x)}{\omega(x)} \frac{g_\beta(z)}{\omega(z)} \delta_x*\delta_z(y)\right|\\
&\leq & \limsup_\alpha \limsup_\beta \sum_{x,z\in H} \frac{|f_\alpha(x)|}{\omega(x)} \frac{|g_\beta(z)|}{\omega(z)} \sum_{y\in H} |\psi(y)| \omega(y) \delta_x*\delta_z(y)\\
&\leq & \norm{\psi}_{\ell^\infty(H)}\; \limsup_\alpha \limsup_\beta \sum_{x,z\in H} |f_\alpha(x)||g_\beta(z)| \sum_{y\in H} \frac{\omega(y)}{\omega(x)\omega(z)} \delta_x*\delta_z(y)\\
&=&\norm{\psi}_{\ell^\infty(H)}\; \limsup_\alpha \limsup_\beta \sum_{x,z\in H} |f_\alpha(x)||g_\beta(z)| \Omega(x,z).
\end{eqnarray*}
For a given $\epsilon>0$, since by the hypothesis $\lim_{x}\limsup_{z}\Omega(x,z)=0$, there is a finite set $A\subseteq H$ such that for each $x\in A^c\ (=H\setminus A)$ there exists a finite set $B_{x}\subseteq H$ such that $|\Omega(x,z)|\leq \epsilon$ for each $z\in B_{x}^c:=H\setminus B_x$.
First note that
\begin{eqnarray*}
\limsup_\alpha \limsup_\beta \sum_{x\in A^c}\sum_{z\in B_x^c} |f_\alpha(x)||g_\beta(z)| \Omega(x,z)\leq \limsup_\alpha \limsup_\beta \epsilon \norm{f_\alpha}_1 \norm{g_\beta}_1 \leq \epsilon.
\end{eqnarray*}
Also according to our assumption about $\Phi$ and $\Psi$ and since for each $x\in H$, $\delta_x\in c_0(H,1/\omega)$, $\lim_\alpha f_\alpha(x) =0$ and $\lim_\beta g_\beta(x) =0$.
So for the given $\epsilon>0$, there is $\alpha_0$ such that for all $\alpha_0 \preccurlyeq \alpha$, $|f_\alpha(x)|<\epsilon/|A|$ for all $x\in A$. Moreover, for each $x\in A^c$ there is some $\beta_0^x$ such that for all $\beta$ where $\beta_0^x \preccurlyeq \beta$, $|g_\beta(z)|<\epsilon/|B_x|$ for all $z\in B_x$ (this is possible since $A$ and $B_x$ are finite). Therefore, since $|\Omega(x,z)|\leq 1$,
\begin{eqnarray*}
\limsup_\alpha \limsup_\beta \sum_{x\in A}\sum_{z\in H} |f_\alpha(x)||g_\beta(z)| \Omega(x,z)\leq \limsup_\beta \epsilon \norm{g_\beta}_1 \leq \epsilon
\end{eqnarray*}
and
\begin{eqnarray*}
\limsup_\alpha \limsup_\beta \sum_{x\in A^c}\sum_{z\in B_x} |f_\alpha(x)||g_\beta(z)| \Omega(x,z)
&\leq&
\limsup_\alpha \sum_{x\in A^c} |f_\alpha(x)| \limsup_\beta \sum_{z\in B_x} |g_\beta(z)|\\
&\leq&
\limsup_\alpha \epsilon \norm{f_\alpha}_1 \leq \epsilon.
\end{eqnarray*}
But
\begin{eqnarray*}
\sum_{x,z\in H} |f_\alpha(x)||g_\beta(z)| \Omega(x,z) &=&
\sum_{x\in A^c, z\in B_x^c} |f_\alpha(x)||g_\beta(z)| \Omega(x,z)\\
&+& \sum_{x\in A, z\in H} |f_\alpha(x)||g_\beta(z)| \Omega(x,z)\\
&+&\sum_{x\in A^c, z\in B_x} |f_\alpha(x)||g_\beta(z)| \Omega(x,z),
\end{eqnarray*}
and so, one gets that $|\langle\psi\omega, \kappa^{**}(\Phi\Box \Psi)\rangle| \leq 3 \epsilon \norm{\psi}_\infty$. Since $\epsilon>0$ was arbitrary, this proves the claim of the theorem.
\end{proof}
\begin{theorem}\label{t:Arens-regularity-summary}
Let $(H, \omega)$ be a discrete weighted hypergroup and consider the following conditions:
\begin{enumerate}
\item[$(1)$]{ $\Omega$ $0$-clusters strongly on $H\times H$.}
\item[$(2)$]{ $\Phi \Box \Psi = \Phi \Diamond \Psi =0 $ for all $\Phi, \Psi \in c_0(H, 1/\omega)^{\perp}$.}
\item[$(3)$]{ $\ell^1(H,\omega)$ is Arens regular.}
\end{enumerate}
Then $(1) \Rightarrow (2) \Rightarrow (3)$.
\end{theorem}
\begin{proof}
$(1) \Rightarrow (2)$ by Theorem~\ref{t:Box-product-is-zero}. $(2) \Rightarrow (3)$ is implied from Proposition~\ref{p:Arens-regularity}.
\end{proof}
\begin{rem}
Since cancellation does not necessarily hold in hypergroups, the argument of \cite[Theorem~1]{yo} cannot be applied to show that $(3)$ implies $(1)$.
\end{rem}
\begin{eg}\label{eg:Chebyshev-Arens}
Let ${\mathbb N}_0$ be equipped with the Chebyshev polynomial hypergroup structure of the first type and let $\sigma_f$ be the group weight defined in Example~\ref{eg:Chebyshev-weight-Arens} for an increasing function $f$. One can easily check that if $\lim_{n,m\rightarrow\infty} {f(n+m)}/({f(n)f(m)})=0$,
then $\Omega_{\omega_f}$ $0$-clusters strongly on ${\mathbb N}_0\times {\mathbb N}_0$; hence, $\ell^1({\mathbb N}_0,\omega_f)$ is Arens regular. Indeed, $Z_{\pm 1}A(\Bbb{T},\sigma_f)$ is Arens regular. But note that $A(\Bbb{T},\sigma_f)$ (which is isomorphic to $\ell^1(\Bbb{Z},\sigma_f)$ through the Fourier transform) is not Arens regular, as $\Omega_{\sigma_f}$ does not $0$-cluster on $\Bbb{Z}\times \Bbb{Z}$ (see \cite[Theorem~8.11]{da2}).
\end{eg}
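For instance, with the hypothetical choice $f(n)=n+1$ (so $\omega_f(n)=n+3$), using $\delta_m*\delta_n=\frac{1}{2}\delta_{|m-n|}+\frac{1}{2}\delta_{m+n}$ for $m,n\geq 1$, one computes $\Omega_{\omega_f}(m,n)=1/(\min(m,n)+3)$, which visibly $0$-clusters strongly. A numerical sketch of this computation:

```python
# Omega for the Chebyshev hypergroup (first type) with the sample
# weight omega(n) = f(n) + 2 for f(n) = n + 1.

def omega(n):
    return n + 3

def Omega(m, n):
    if m == 0 or n == 0:   # delta_0 is the identity of the hypergroup
        conv = omega(m + n)
    else:                  # delta_m * delta_n = 1/2 delta_|m-n| + 1/2 delta_{m+n}
        conv = 0.5 * omega(abs(m - n)) + 0.5 * omega(m + n)
    return conv / (omega(m) * omega(n))

# Omega(m, n) = 1/(min(m, n) + 3), so both repeated limits at infinity are 0:
for m in range(40):
    for n in range(40):
        assert abs(Omega(m, n) - 1.0 / (min(m, n) + 3)) < 1e-12
```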
\begin{cor}\label{c:Arens-subadditive}
Let $(H,\omega)$ be a weighted discrete hypergroup such that $\omega$ is a weakly additive weight. If $1/\omega \in c_0(H)$, then $\ell^1(H,\omega)$ is Arens regular.
\end{cor}
\begin{proof}
We have
\begin{eqnarray*}
\lim_{x\rightarrow \infty}\limsup_{y\rightarrow\infty} \frac{\omega(\delta_{x}*\delta_{y})}{\omega(x)\omega(y)}
&\leq & \limsup_{x\rightarrow \infty}\limsup_{y\rightarrow\infty} C\, \frac{\omega(x)+ \omega(y)}{\omega(x)\omega(y)}\\
&=& C \limsup_{x\rightarrow \infty}\limsup_{y\rightarrow\infty} \left(\frac{1}{\omega(x)}+\frac{1}{\omega(y)}\right)=0.
\end{eqnarray*}
Therefore $\Omega$ $0$-clusters strongly on $H\times H$ and hence $\ell^1(H,\omega)$ is Arens regular by Theorem~\ref{t:Arens-regularity-summary}.
\end{proof}
\begin{corollary}\label{c:Arens-of-finite-generated}
Let $H$ be a finitely generated hypergroup. Then for each polynomial weight $\omega_\beta$ ($\beta>0$) on $H$ defined in Definition~\ref{d:polynomial-exponential}, $\ell^1(H,\omega_\beta)$ is Arens regular.
\end{corollary}
\begin{proof}
Here we only need to prove the case for an infinite hypergroup $H$.
Let $F$ be a finite generating set of the hypergroup $H$ containing the identity of $H$, giving rise to the central weight $\omega_\beta$. Recall that $\omega_\beta$ is weakly additive with constant $C=\max\{1,2^{\beta-1}\}$. Moreover, for each $N\in \Bbb{N}$ and $x\in H\setminus F^{*N}$, ${\tau_F}(x)\geq N$; hence,
$\omega_\beta(x)=(1+{\tau_F}(x))^\beta \geq (1+N)^\beta$.
Therefore, $1/\omega_\beta \in c_0(H)$. Subsequently, $\ell^1(H,\omega_\beta)$ is Arens regular, by Corollary~\ref{c:Arens-subadditive}.
\end{proof}
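As a concrete illustration, one can check that the Chebyshev hypergroup $\Bbb{N}_0$ is generated by $F=\{0,1\}$ with $\tau_F(n)=n$, so that $\omega_\beta(n)=(1+n)^\beta$. The following sketch (with a hypothetical sample $\beta$) verifies weak additivity with the constant $C=\max\{1,2^{\beta-1}\}$ and the decay $1/\omega_\beta\in c_0$ used above:

```python
# Weak additivity of omega_beta on the Chebyshev hypergroup N_0,
# where omega_beta(n) = (1 + n)^beta and tau_F(n) = n for F = {0, 1}.

beta = 2.0                       # sample exponent
C = max(1.0, 2 ** (beta - 1))    # weak-additivity constant

def omega(n):
    return (1 + n) ** beta

def conv_weight(m, n):           # omega_beta(delta_m * delta_n)
    if m == 0 or n == 0:
        return omega(m + n)
    return 0.5 * omega(abs(m - n)) + 0.5 * omega(m + n)

for m in range(80):
    for n in range(80):
        assert conv_weight(m, n) <= C * (omega(m) + omega(n)) + 1e-9
assert 1.0 / omega(10 ** 6) < 1e-10   # 1/omega_beta vanishes at infinity
```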
\begin{rem}\label{r:central-screws}
Every finitely generated hypergroup $H$ admits a weight for which the corresponding weighted algebra is Arens regular. On the other hand, an argument similar to \cite[Corollary~1]{yo} may be applied to show that an uncountable discrete hypergroup $H$ does not admit any weight $\omega$ for which $\Omega_\omega$ $0$-clusters strongly.
\end{rem}
\begin{eg}\label{eg:dual-weights-Arens-regular-but-not-the Beurling-Fourier-algebra}
Let $\omega_\beta$ be as defined in Lemma~\ref{l:dimension-as-a-weight} for some $\beta> 0$. Then $\Omega_{\omega_\beta}$ also $0$-clusters strongly on $\widehat{\operatorname{SU}}(2) \times \widehat{\operatorname{SU}}(2)$. Therefore, $\ell^1(\widehat{\operatorname{SU}}(2),\omega_\beta)$, which is isometrically Banach algebra isomorphic to $ZA({\operatorname{SU}}(2), \omega_\beta)$, is Arens regular. On the other hand, $A({\operatorname{SU}}(2),\omega_\beta)$ is not Arens regular. To observe the latter fact, first note that by applying \cite{yo}, we obtain that $\ell^1(\Bbb{Z},\sigma)$ is not Arens regular for $\sigma$ defined in (\ref{eq:sigma-beta}).
Therefore, $A(\Bbb{T},\sigma)$ is not Arens regular. Note that, $\omega_\beta$ can also be rendered using the weight $\sigma$ through the argument of the last paragraph of Subsection~\ref{s:weights-on-^G}.
For the dual spaces $VN(\Bbb{T},\sigma)$ and $VN({\operatorname{SU}}(2), \omega_\beta)$, one may verify that $VN(\Bbb{T},\sigma)$ embeds $*$-weakly in $VN({\operatorname{SU}}(2),\omega_\beta)$ (the details of this embedding will appear in a manuscript by the second named author et al.). Hence, $A(\Bbb{T},\sigma)$ is a quotient of $A({\operatorname{SU}}(2),\omega_\beta)$ and consequently $A({\operatorname{SU}}(2),\omega_\beta)$ is not Arens regular.
\end{eg}
In the following, we generalize some results on ${\operatorname{SU}}(2)$ to all the groups ${\operatorname{SU}}(n)$ of $n\times n$ special unitary matrices over $\Bbb{C}$, based on a recent study of the representation theory of ${\operatorname{SU}}(n)$, \cite{lee1}.
As an example for Lemma~\ref{l:dimension-as-a-weight}, $(\widehat{\operatorname{SU}}(n),\omega_\beta)$ is a weighted discrete commutative hypergroup, where $\omega_\beta(\pi)=d_\pi^\beta$ for some $\beta\geq 0$. See \cite{ful} for the details of the representation theory of ${\operatorname{SU}}(n)$.
There is a one-to-one correspondence between $\widehat{\operatorname{SU}}(n)$ and $n$-tuples $(\pi_1,\ldots,\pi_n)\in {\mathbb N}_0^n$ such that
$\pi_1 \geq \pi_2 \geq \cdots \geq \pi_{n-1} \geq \pi_n=0$.
This parametrization of the representations of ${\operatorname{SU}}(n)$ is called the \ma{dominant weight} parametrization. Using it, the dimension of each representation is given by
\begin{equation}\label{eq:dimention-in-SU(n)}
d_{\pi}=\prod_{1\leq i < j \leq n} \frac{\pi_i -\pi_j + j -i}{j-i}
\end{equation}
where $\pi$ is the representation corresponding to $(\pi_1,\ldots,\pi_n)$. Suppose that $\pi, \nu,\mu$ are representations corresponding to $(\pi_1,\ldots,\pi_n)$, $(\nu_1,\ldots, \nu_n)$, and $(\mu_1,\ldots,\mu_n)$, respectively, such that $\pi \in\operatorname{supp}(\delta_\nu * \delta_\mu)$. Collins, Lee, and \'{S}niady showed in \cite[Corollary~1.2]{lee1} that for each $n\in\Bbb{N}$ there exists some $C_n>0$ such that
\begin{equation}\label{eq:Lee-inequality}
\frac{d_\pi}{d_\mu d_\nu} \leq C_n \left(\frac{1}{1+\mu_1} + \frac{1}{1+\nu_1}\right).
\end{equation}
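The dimension formula (\ref{eq:dimention-in-SU(n)}) is straightforward to evaluate; as a sanity check (a sketch, independent of the argument below), it recovers the familiar dimensions for ${\operatorname{SU}}(2)$ and ${\operatorname{SU}}(3)$:

```python
from fractions import Fraction

def dim(pi):
    # pi = (pi_1, ..., pi_n) with pi_1 >= ... >= pi_n = 0 (dominant weight)
    n = len(pi)
    d = Fraction(1)
    for i in range(n):
        for j in range(i + 1, n):
            d *= Fraction(pi[i] - pi[j] + j - i, j - i)
    return int(d)  # the full product is always an integer

# SU(2): (k, 0) has dimension k + 1
assert [dim((k, 0)) for k in range(5)] == [1, 2, 3, 4, 5]
# SU(3): the fundamental and adjoint representations
assert dim((1, 0, 0)) == 3
assert dim((2, 1, 0)) == 8
```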
Applying (\ref{eq:Lee-inequality}), we prove that $\Omega_{\omega_\beta}$ $0$-clusters strongly on $\widehat{\operatorname{SU}}(n)\times\widehat{\operatorname{SU}}(n)$.
\begin{proposition}\label{p:Arens-of-l1(SU(n))}
For every $\beta>0$, $\ell^1(\widehat{\operatorname{SU}}(n),\omega_\beta)$ is Arens regular.
\end{proposition}
\begin{proof}
Let $(\mu_m)_{m\in \Bbb{N}}$ and $(\nu_k)_{k\in\Bbb{N}}$ be two arbitrary sequences of distinct elements of $\widehat{\operatorname{SU}}(n)$.
Since the elements of $(\mu_m)_{m\in \Bbb{N}}$ are distinct, $\lim_{m\rightarrow\infty} \mu_1^{(m)}=\infty$ where
$\mu_m=(\mu_1^{(m)},\ldots,\mu_n^{(m)})$. The same holds for $\nu_k=(\nu_1^{(k)},\ldots,\nu_n^{(k)})$.
For each arbitrary pair $(m,k)\in \Bbb{N}\times \Bbb{N}$, if $\pi\in\operatorname{supp}(\delta_{\mu_m}*\delta_{\nu_k})$, we have
\[
d_\pi\leq C_n (\frac{1}{1+\mu^{(m)}_1}+\frac{1}{1+\nu^{(k)}_1})d_{\mu_m}d_{\nu_k}.
\]
Hence
\[
\omega_\beta(\pi) \leq C_n^\beta (\frac{1}{1+\mu^{(m)}_1}+\frac{1}{1+\nu^{(k)}_1})^\beta \omega_\beta(\mu_m)\omega_\beta(\nu_k).
\]
Therefore
\begin{eqnarray*}
\omega_\beta(\delta_{\mu_m}*\delta_{\nu_k}) = \sum_{\pi\in\widehat{\operatorname{SU}}(n)}\delta_{\mu_m}*\delta_{\nu_k}(\pi)\omega_\beta(\pi)
\leq C_n^\beta (\frac{1}{1+\mu^{(m)}_1}+\frac{1}{1+\nu^{(k)}_1})^\beta \omega_\beta(\mu_m)\omega_\beta(\nu_k).
\end{eqnarray*}
Or equivalently
\[
\Omega_\beta(\mu_m,\nu_k):=\frac{\omega_\beta(\delta_{\mu_m}*\delta_{\nu_k})}{\omega_\beta(\mu_m)\omega_\beta(\nu_k)} \leq C_n^\beta (\frac{1}{1+\mu^{(m)}_1}+\frac{1}{1+\nu^{(k)}_1})^\beta.
\]
Hence, $\lim_{m\rightarrow\infty}\limsup_{k\rightarrow\infty} \Omega_\beta(\mu_m,\nu_k)=
\lim_{k\rightarrow\infty}\limsup_{m\rightarrow\infty} \Omega_\beta(\mu_m,\nu_k) = 0$.
Since $\widehat{\operatorname{SU}}(n)$ is countable, this argument implies that $\Omega_\beta$ $0$-clusters strongly on $\widehat{\operatorname{SU}}(n)\times \widehat{\operatorname{SU}}(n)$ and, by Theorem~\ref{t:Arens-regularity-summary}, $\ell^1(\widehat{\operatorname{SU}}(n),\omega_\beta)$ is Arens regular.
\end{proof}
\begin{eg}\label{eg:SL(2,2n)-Arens-regularity}
Let $SL(2,2^n)$ denote the finite group of special linear matrices
over the field $\Bbb{F}_{2^n}$ of cardinality $2^n$, for given $n\in\Bbb{N}$.
As a direct result of the character table, \cite{SL(2F)}, for any three conjugacy classes $C_1,C_2,D\in\operatorname{Conj}(SL(2,2^n))$ with $D\subseteq C_1C_2$, one has $|D|\leq 2( |C_1| + |C_2|)$, for all $n$. Let us define the FC group $G$ to be the restricted direct product of $\{SL(2,2^n)\}_{n\in \Bbb{N}}$, i.e.\ $G:= \bigoplus_{n=1}^\infty SL(2,2^n)$.
Therefore, one can easily show that the weight $\omega_\alpha$, defined in Example~\ref{eg:defined-I-C}, is a weakly additive weight with the constant $M=2^\alpha\max\{1, 2^{\alpha-1}\}$. Moreover, since $\lim_{C\rightarrow \infty} \omega_\alpha(C)=\infty$, $\ell^1(\operatorname{Conj}(G),\omega_\alpha)$ is Arens regular, by Corollary~\ref{c:Arens-subadditive}.
\end{eg}
\begin{rem}
Let $\omega$ be a central weight on $\operatorname{Conj}(G)$ for some FC group $G$. Then there is a group weight $\sigma_\omega$, as defined in Remark~\ref{r:central-weights-on-groups-UNIQUE}, such that $\ell^1(\operatorname{Conj}(G), \omega)$ is isometrically Banach algebra isomorphic to $Z\ell^1(G, \sigma_\omega)$.
So one may also use the embedding $\ell^1(\operatorname{Conj}(G), \omega) \hookrightarrow \ell^1(G,\sigma_\omega)$ to study Example~\ref{eg:SL(2,2n)-Arens-regularity} by applying the theorems which are characterizing Arens regularity of weighted group algebras.
\end{rem}
\begin{rem}\label{r:Arens-of-Qutient}
Let $G$ be an FC group and $\sigma$ a group weight on $G$. We defined $\omega_\sigma$, the derived weight on $\operatorname{Conj}(G)$ from $\sigma$ in Proposition~\ref{p:weights-coming-groups}. Recall that in this case $Z\ell^1(G, \sigma)$ is isomorphic to the Banach algebra $\ell^1(\operatorname{Conj}(G), \omega_\sigma)$. If $N$ is a normal subgroup of $G$, we defined a quotient mapping $T_{\omega_\sigma}: \ell^1(\operatorname{Conj}(G),\omega_\sigma) \rightarrow \ell^1(\operatorname{Conj}(G/N),\tilde{\omega}_\sigma)$ in Corollary~\ref{c:Quotient-mapping-Tomega} where $\tilde{\omega}_\sigma(C_{xN})=\inf\{\omega_\sigma(C_{xy}):\ y\in N\}\ \ (C_{xN} \in \operatorname{Conj}(G/N))$.
Let us note that for an Arens regular Banach algebra ${\mathcal A}$, every quotient algebra ${\mathcal A}/\mathcal{I}$ where $\mathcal{I}$ is a closed ideal of ${\mathcal A}$ is Arens regular as well (see \cite[Corollary 3.15]{da2}). Therefore, if $\ell^1(\operatorname{Conj}(G),\omega_\sigma)$ is Arens regular, for every normal subgroup $N$, $\ell^1(\operatorname{Conj}(G/N),\tilde{\omega}_\sigma)$, which is isomorphic to $\ell^1(\operatorname{Conj}(G),\omega_\sigma)/\ker(T_{\omega_\sigma})$, is Arens regular.
\end{rem}
In the final result of this section, we apply some techniques of \cite{yo} to show that for restricted direct products of hypergroups, product weights never give rise to Arens regular algebras.
\begin{proposition}\label{p:non-Arens-of-special-products}
Let $\{ H_i\}_{i\in{\bf I}}$ be an infinite family of non-trivial discrete hypergroups and, for each $i\in{\bf I}$, let $\omega_i$ be a weight on $H_i$ such that $\omega_i(e_{H_i})=1$ for all except finitely many $i\in{\bf I}$. Let $H=\bigoplus_{i\in{\bf I}}H_i$ and $\omega=\prod_{i\in{\bf I}}\omega_i$. Then $\ell^1(H,\omega)$ is not Arens regular.
\end{proposition}
\begin{proof}
Since ${\bf I}$ is infinite, we may suppose that $\Bbb{N}_0\times \Bbb{N}_0 \subseteq {\bf I}$. Define $v_n=(x_i)_{i\in {\bf I}}$ where $x_i=e_{H_i}$ for all $i\in {\bf I}\setminus \{(n,0)\}$ and $x_{(n,0)}$ is a non-identity element of $H_{(n,0)}$, for all $n\in \Bbb{N}$. Similarly define $u_m=(x_i)_{i\in{\bf I}}$ where $x_i=e_{H_i}$ for all $i\in {\bf I}\setminus \{(0,m)\}$ and $x_{(0,m)}$ is a non-identity element of $H_{(0,m)}$, for all $m\in\Bbb{N}$.
Note that for each pair of elements $(n,m)\in\Bbb{N}\times \Bbb{N}$, $\operatorname{supp}(\delta_{v_n}*\delta_{u_m})$ forms a singleton in $H$; moreover, $
\omega(\delta_{v_n}*\delta_{u_m})=\omega(v_n)\omega(u_m)$. Hence, $(\delta_{v_n}*\delta_{u_m})_{(n,m)\in \Bbb{N}\times \Bbb{N}}$ forms a sequence of distinct elements in $\ell^1(H)$.
Let us define $f_n=\delta_{v_n}$ and $g_m=\delta_{u_m}$ for all $n,m\in\Bbb{N}$.
Let $A\subseteq H$ be the set of all points in $\operatorname{supp}(\delta_{v_n}*\delta_{u_m})$ for the pairs $(n,m)$ with $n>m$, and let $\phi\in\ell^\infty(H)$ be the characteristic function of $A$.
Clearly, $\kappa^{-1}(f_n)=\omega^{-1} f_n$ and $\kappa^{-1}(g_m)=\omega^{-1} g_m$ belong to $\ell^1(H,\omega)$ for all $n,m$ and $\kappa^*(\phi)=\omega \phi \in\ell^\infty(H,\omega^{-1})$, for the Banach space isomorphism $\kappa: \ell^1(H,\omega)\rightarrow \ell^1(H)$ where $\kappa(f)=f\omega$ for each $f\in \ell^1(H,\omega)$. Note that
\begin{eqnarray*}
\langle \omega^{-1} f_n * \omega^{-1} g_m, \kappa^*(\phi)\rangle &=& \langle \omega^{-1} f_n * \omega^{-1} g_m, \omega \phi\rangle\\
&=& \sum_{t\in H} (\omega^{-1} f_n * \omega^{-1} g_m)(t) \omega(t) \phi(t) \\
&=& \frac{\omega(v_n*u_m)}{\omega(v_n)\omega(u_m)} \phi(\delta_{v_n}*\delta_{u_m}) \\
&=& \phi(\delta_{v_n}*\delta_{u_m}) = \left\{
\begin{array}{c c}
1 & \text{if $n > m$}\\
0 & \text{if $n\leq m$}
\end{array} \right.
\end{eqnarray*}
Let us recall that for each $n$ and $m$, $\norm{\kappa^{-1}(f_n)}_{\ell^1(H,\omega)}=1$ and $\norm{\kappa^{-1}(g_m)}_{\ell^1(H,\omega)}=1$. So $(\kappa^{-1}(f_n))_{n\in\Bbb{N}}$ and $(\kappa^{-1}(g_m))_{m\in\Bbb{N}}$, as two nets in the unit ball of $\ell^1(H,\omega)$, have two subnets $(f_\alpha)_\alpha$ and $(g_\beta)_\beta$ such that $f_\alpha$ and $g_\beta$ converge weak$^*$ to some $F$ and $G$ in $\ell^1(H,\omega)^{**}$, respectively.
Note that for the element $\kappa^*(\phi)$ defined above, $\langle F\Box G,\kappa^*(\phi)\rangle=0$
while
$ \langle F\Diamond G,\kappa^*(\phi)\rangle= 1$.
Hence $F\Box G\neq F\Diamond G$ and consequently $\ell^1(H,\omega)$ is not Arens regular.
\end{proof}
\end{section}
\begin{section}{Isomorphism to operator algebras}\label{s:Operator-algebra-weighted-hypergroups}
Let $(H,\omega)$ be a weighted discrete hypergroup. In this section, we study the existence of an algebra isomorphism from $\ell^1(H,\omega)$ onto an operator algebra. A Banach algebra ${\mathcal A}$ is called an \ma{operator algebra} if there is a Hilbert space ${\mathcal H}$ such that ${\mathcal A}$ is a closed subalgebra of $\mathcal{B}(\mathcal H)$.
Let $\mathcal A$ be a Banach algebra and let ${\bf m}:{\mathcal A}\times {\mathcal A} \rightarrow {\mathcal A}$ be the bilinear (multiplication) mapping ${\bf m}(f,g)=fg$.
Then $\mathcal A$ is called \ma{injective} if ${\bf m}$ has a bounded extension from ${\mathcal A} \otimes_\epsilon {\mathcal A}$ into $\mathcal A$, where $\otimes_\epsilon$ is the injective tensor product. In this case, we denote the norm of this extension by $\norm{{\bf m}}_\epsilon$.
In \cite[Corollary~2.2]{sa3} it is proved that if a Banach algebra ${\mathcal A}$ is injective then it is isomorphic to an operator algebra. The converse also holds for weighted hypergroup algebras; the proof is similar to the group case in \cite[Theorem~2.8]{sa3} and follows from the little Grothendieck inequality (see \cite{pisier}).
Note that a Banach algebra which is isomorphic to an operator algebra is always Arens regular (\cite[Corollary~2.5.4]{blecher}).
\vskip1.0em
Injectivity of weighted group algebras has been studied before. Initially, Varopoulos \cite{va} studied the group $\Bbb{Z}$ equipped with the weight $\sigma_\alpha(n)= (1+|n|)^\alpha$ for $\alpha\geq 0$ and showed that $\ell^1(\Bbb{Z},\sigma_\alpha)$ is injective if and only if $\alpha>1/2$.
The manuscript~\cite{sa3}, which studied the injectivity question for a wider family of weighted group algebras, developed a machinery applying Littlewood multipliers. In particular, it partially extended Varopoulos's result to finitely generated groups of polynomial growth.
Following the structure of \cite{sa3}, in this section, we study the injectivity or equivalently isomorphism to operator algebras for weighted hypergroup algebras.
In this section, ${\mathcal A}\otimes_\gamma {\mathcal B}$ and ${\mathcal A}\otimes_\epsilon {\mathcal B}$ denote respectively the projective and injective tensor products of Banach spaces ${\mathcal A}$ and ${\mathcal B}$.
We know that $\ell^1(H,\omega)\otimes_\gamma \ell^1(H,\omega)$ is isometrically isomorphic to $\ell^1(H\times H,\omega \times \omega)$. Moreover, $\ell^1(H\times H,\omega \times \omega)^*$ is $\ell^\infty(H\times H,\omega^{-1}\times \omega^{-1})$. Since the injective tensor norm is minimal among all cross-norm Banach space tensor norms, the identity map on $\ell^1(H)\otimes \ell^1(H)$ extends to a contractive mapping
\[
\iota: \ell^1(H) \otimes_\gamma \ell^1(H) \rightarrow \ell^1(H) \otimes_\epsilon \ell^1(H).
\]
Since $\iota$ has dense range,
\begin{equation}\label{eq:e-sits-in-gamma}
\iota^*: (\ell^1(H) \otimes_\epsilon \ell^1(H))^* \rightarrow (\ell^1(H) \otimes_\gamma \ell^1(H))^*=\ell^\infty(H\times H)
\end{equation}
is an injective mapping. Therefore, applying $\iota^*$, one may embed $(\ell^1(H)\otimes_\epsilon \ell^1(H))^*$ into $\ell^\infty(H\times H)$, as a linear subspace of $\ell^\infty(H\times H)$.
Let $H$ be a discrete hypergroup. We define \ma{Littlewood multipliers} of $H$ to be the set of all functions $f:H\times H \rightarrow \Bbb{C}$ for which there exist functions $f_1,f_2: H\times H \rightarrow \Bbb{C}$ with $
f(x,y)=f_1(x,y) + f_2(x,y)$ for $x,y\in H$
such that
\[
\sup_{y\in H}\sum_{x\in H} |f_1(x,y)|^2 <\infty\ \ \text{and} \ \ \sup_{x\in H}\sum_{y\in H} |f_2(x,y)|^2 <\infty.
\]
We denote the set of all Littlewood multipliers by $T^2(H)$ and define the norm $\norm{\cdot}_{T^2(H)}$ by
\[
\norm{f}_{T^2(H)}:=\inf\left\{ \sup_{y\in H}\left( \sum_{x\in H} |f_1(x,y)|^2\right)^{1/2} + \sup_{x\in H}\left( \sum_{y\in H} |f_2(x,y)|^2\right)^{1/2} \right\}
\]
where the infimum is taken over all possible decompositions $f_1,f_2$.
Note that for a decomposition $f_1,f_2$ of $f\in T^2(H)$,
\begin{eqnarray*}
\norm{f}_{\ell^\infty(H\times H)} = \sup_{x,y\in H} |f(x,y)| &\leq& \sup_{x,y\in H} |f_1(x,y)| + \sup_{x,y\in H} |f_2(x,y)|\\
&\leq& \sup_{y\in H}\left( \sum_{x\in H} |f_1(x,y)|^2\right)^{1/2} + \sup_{x\in H}\left( \sum_{y\in H} |f_2(x,y)|^2\right)^{1/2} <\infty,
\end{eqnarray*}
since for a discrete space $H$, $\ell^2(H)\subseteq \ell^\infty(H)$ and $\norm{\cdot}_\infty \leq \norm{\cdot}_2$. Since the decomposition $f_1,f_2$ above is arbitrary, $\norm{f}_{\ell^\infty(H\times H)}\leq \norm{f}_{T^2(H)}$.
Hence $T^2(H)\subseteq \ell^\infty(H\times H)$. Furthermore, for each $\phi\in \ell^\infty(H\times H)$ and $f\in T^2(H)$, $f\phi\in T^2(H)$ and
$\norm{f\phi}_{T^2(H)} \leq \norm{f}_{T^2(H)}\norm{\phi}_\infty$.
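As an illustrative numerical aside (ours, not from the text; truncating the index set to $\{0,\dots,N-1\}$ is an assumption of the sketch), the following checks the inequality $\norm{f}_{\ell^\infty(H\times H)}\le\norm{f}_{T^2(H)}$ for an explicit decomposition $f=f_1+f_2$.

```python
import math

# A small numerical sketch: the index set is truncated to {0, ..., N-1}
# (illustrative assumption) to check ||f||_infty <= ||f||_{T^2} for an
# explicit decomposition f = f1 + f2.
N = 50
f = lambda x, y: 1.0 / ((1 + x) * (1 + y))
f1 = lambda x, y: f(x, y) / 2   # square-summable in x, uniformly in y
f2 = lambda x, y: f(x, y) / 2   # square-summable in y, uniformly in x

col = max(math.sqrt(sum(f1(x, y) ** 2 for x in range(N))) for y in range(N))
row = max(math.sqrt(sum(f2(x, y) ** 2 for y in range(N))) for x in range(N))
t2_bound = col + row            # upper bound for ||f||_{T^2(H)}
sup_norm = max(f(x, y) for x in range(N) for y in range(N))

assert sup_norm <= t2_bound
```

Here the bound stays finite even as $N\to\infty$, since $\sup_y \sum_x |f_1(x,y)|^2$ is controlled by $\sum_x (1+x)^{-2}<\infty$.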
\vskip1.5em
The following theorem is the hypergroup version of \cite[Theorem~2.7]{sa3}. Since the proof is very similar to the group case, we omit it here (a fully detailed proof can be found in \cite{ma-the}). Here ${K_{\bf G}}$ denotes \ma{Grothendieck's constant}, whose existence Grothendieck established in his celebrated ``R{\'e}sum{\'e}''. For a detailed account of Grothendieck's constant, its history, and its approximations, see \cite[Sections~3 and 4]{pisier}.
\begin{theorem}\label{t:littlewood}
Let $I:T^2(H) \rightarrow (\ell^1 (H) \otimes_\gamma \ell^1(H))^*=\ell^\infty(H\times H)$ be the mapping which takes every element of $T^2(H)$ to itself as a bounded function on $H\times H$. Then $I(T^2(H))\subseteq \iota^*((\ell^1(H)\otimes _\epsilon \ell^1(H))^*)$ for the mapping $\iota^*$ defined in (\ref{eq:e-sits-in-gamma}).
Moreover, $J:={\iota^*}^{-1}\circ I: T^2(H) \rightarrow (\ell^1(H)\otimes _\epsilon \ell^1(H))^*$ is bounded and $\norm{J}\leq {K_{\bf G}}$.
\end{theorem}
From now on, we identify $(\ell^1(H)\otimes_\epsilon \ell^1(H))^*$ with its image under the mapping $\iota^*$; hence, $J$ is the identity mapping which takes $T^2(H)$ into $(\ell^1(H)\otimes_\epsilon \ell^1(H))^*$. We present our first main result of this section, which is a generalization of \cite[Theorem~3.1]{sa3}.
\vskip2.0em
\begin{theorem}\label{t:injectiv-l^1(H,omega)-for-T2-weights}
Let $H$ be a discrete hypergroup and let $\omega$ be a weight on $H$ such that $\Omega$, defined in (\ref{eq:Omega}), belongs to $T^2(H)$. Then $\ell^1(H,\omega)$ is injective and, equivalently, isomorphic to an operator algebra. Moreover,
for the multiplication map ${\bf m}$ on $\ell^1(H, \omega)\otimes_\epsilon \ell^1(H, \omega)$, as defined before, $\norm{{\bf m}}_\epsilon \leq {K_{\bf G}} \norm{\Omega}_{T^2(H)}$.
\end{theorem}
\begin{proof}
Let
$
\Gamma_\omega:\ell^1(H\times H, \omega \times \omega) \rightarrow \ell^1(H,\omega)$
be given by
$\Gamma_\omega(f\otimes g):=f*g$
for $f,g\in \ell^1(H,\omega)$. The adjoint ${\Gamma}^*_\omega$ of $\Gamma_\omega$ can be characterized as follows:
\begin{eqnarray*}
{\Gamma}^*_\omega(\phi)(x,y) = \langle {\Gamma}^*_\omega(\phi),\delta_{x}\otimes\delta_{y}\rangle
= \langle \phi, \Gamma_\omega(\delta_x\otimes \delta_y)\rangle
= \langle \phi, \delta_x*\delta_y\rangle
\end{eqnarray*}
for all $\phi \in \ell^\infty(H,\omega^{-1})$ and $x,y \in H$.
Now we define $L$ from $\ell^\infty(H)$ to $\ell^\infty(H\times H)$ such that the following diagram commutes,
\[
\xymatrix{
{\ell^\infty(H,\omega^{-1})} \ar[rr]^{{\Gamma}^*_{\omega}} & & {\ell^\infty(H\times H, \omega^{-1} \times \omega^{-1})}\ar[d]_{R}\\
{\ell^\infty(H)}\ar[u]^{P}\ar[rr]^{L} & & {\ell^\infty(H\times H)}
}
\]
where $P(\varphi)(x)=\varphi(x)\omega(x)$ for $\varphi\in \ell^\infty(H)$ and $R(\phi)(x,y)=\phi(x,y)\omega^{-1}(x)\omega^{-1}(y)$ for $\phi\in \ell^\infty(H\times H, \omega^{-1}\times\omega^{-1})$ and $x,y\in H$.
Hence, one gets
\begin{eqnarray*}
L(\varphi)(x,y) = R\left( {\Gamma}^*_{\omega} \circ P(\varphi)\right)(x,y)
&=& \frac{\left({\Gamma}^*_{\omega} \circ P (\varphi)\right)(x,y)}{\omega(x) \omega(y)}\\
&=& \frac{{\Gamma}^*_{\omega} \left( \omega \varphi \right)(x,y)}{\omega(x) \omega(y)}\\
&=& \frac{ \langle \varphi \omega, \delta_x*\delta_y\rangle}{\omega(x) \omega(y)}\\
&=& \sum_{t\in H} \delta_x*\delta_y(t) \frac{\omega(t)}{\omega(x)\omega(y)} \varphi(t)
\end{eqnarray*}
for all $\varphi\in \ell^\infty(H)$.
Hence,
\[
\left| \sum_{t\in H} \delta_x*\delta_y(t) \frac{\omega(t)}{\omega(x)\omega(y)} \varphi(t)\right|
\leq \sum_{t\in H} \delta_x*\delta_y(t) \frac{\omega(t)}{\omega(x)\omega(y)} |\varphi(t)|
\leq \norm{\varphi}_\infty \Omega(x,y).
\]
So there is a function $v_\varphi:H\times H\rightarrow \Bbb{C}$ such that
\[
\frac{\langle\delta_x*\delta_y,\omega \varphi \rangle}{\omega(x) \omega(y)} = v_\varphi(x,y) \norm{\varphi}_\infty \Omega(x,y)
\]
and $\norm{v_\varphi}_\infty\leq 1$.
Therefore
$L(\varphi) = \Lambda(\varphi) \Omega$
where $\Lambda(\varphi)(x,y):=v_\varphi(x,y) \norm{\varphi}_\infty$
for all $\varphi\in\ell^\infty(H)$.
Since $\Omega$ belongs to $T^2(H)$ and $T^2(H)$ is an $\ell^\infty(H\times H)$-module, $L(\varphi) \in T^2(H)$ and $\norm{L(\varphi)}_{T^2(H)} \leq \norm{\varphi}_\infty \norm{\Omega}_{T^2(H)}$.
Therefore $L(\ell^\infty(H))\subseteq T^2(H) \subseteq (\ell^1(H)\otimes_\epsilon \ell^1(H))^*$.
In this case, using the following diagram with ${\mathcal A}=R^{-1}((\ell^1(H)\otimes_\epsilon \ell^1(H))^*)$,
\[
\xymatrix{
{\ell^\infty(H,\omega^{-1})} \ar[rr]^{{\Gamma}_\omega^*} & &
{\mathcal A} \ar[rr]^{\iota} \ar[d]^{R|_{\mathcal A}} & & {\ell^\infty(H\times H, \omega^{-1} \times \omega^{-1})}\ar[d]_{R}\\
{\ell^\infty(H)}\ar[u]^{P}\ar[rr]^{L} & & (\ell^1(H)\otimes_\epsilon \ell^1(H))^* \ar[rr]^{\iota} & & {\ell^\infty(H\times H)}
}
\]
One can easily verify that ${\mathcal A}=(\ell^1(H,\omega)\otimes_\epsilon \ell^1(H,\omega))^*$.
We have shown that $L$ maps $\ell^\infty(H)$ into $(\ell^1(H)\otimes_\epsilon \ell^1(H))^*$, regarded as a subset of $\ell^\infty(H\times H)$; hence ${\Gamma}_\omega^*$ maps $\ell^\infty(H, \omega^{-1})$ into $(\ell^1(H,\omega)\otimes_\epsilon \ell^1(H, \omega))^*$.
Hence, ${\Gamma}_\omega^*={\bf m}^*$, where $
{\bf m}$ is the multiplication extended to $\ell^1(H,\omega)\otimes_\epsilon \ell^1(H,\omega)$.
Therefore ${\bf m}$ is bounded and $\norm{{\bf m}}=\norm{{\Gamma}^*_\omega}=\norm{R{\Gamma}^*_\omega P}=\norm{L}$. Moreover,
\begin{eqnarray*}
\norm{L(\varphi)}_{(\ell^1(H)\otimes_\epsilon\ell^1(H))^*} &\leq& \norm{J} \; \norm{L(\varphi)}_{T^2(H)}
\leq {K_{\bf G}} \left\|\Omega\right\|_{T^2(H)} \norm{\Lambda(\varphi)}_{\ell^\infty(H\times H)}\\
&\leq& {K_{\bf G}} \left\|\Omega\right\|_{T^2(H)} \norm{\varphi}_{\ell^\infty(H)}
\end{eqnarray*}
for all $\varphi\in \ell^\infty(H)$. Consequently, $\norm{{\bf m}}_\epsilon \leq {K_{\bf G}} \norm{\Omega}_{T^2(H)}$.
\end{proof}
\begin{eg}\label{eg:OP-of-SU(n)}
Let $\omega_\beta$ be the dimension weight defined on $\widehat{\operatorname{SU}}(n)$ in Lemma~\ref{l:dimension-as-a-weight}. As we have shown in the proof of Proposition~\ref{p:Arens-of-l1(SU(n))}, for the polynomial weight $\omega_\beta$, $\beta\geq 0$, and $\mu,\nu \in \widehat{\operatorname{SU}}(n)$,
\[
\Omega_\beta(\mu,\nu) \leq C_n^\beta (\frac{1}{1+\mu_1}+\frac{1}{1+\nu_1})^\beta
\leq A_\beta C_n^\beta \left( \frac{1}{(1+\mu_1)^\beta}+\frac{1}{(1+\nu_1)^\beta}\right),
\]
where $A_\beta=\max\{1,2^{\beta-1}\}$.
To study $\norm{\cdot}_{T^2(\widehat{\operatorname{SU}}(n))}$ for $\Omega_\beta$, let us note that for each $k\in \Bbb{N}\cup\{0\}$, there are at most $(1+k)^{n-2}$ many $\lambda=(\lambda_1,\ldots,\lambda_n) \in \widehat{\operatorname{SU}}(n)$ such that $\lambda_1=k$.
Therefore
\[
\sum_{\lambda\in\widehat{\operatorname{SU}}(n)} \frac{1}{(1+\lambda_1)^{2\beta}} \leq \sum_{k=0}^\infty \frac{(1+k)^{n-2}}{(1+k)^{2\beta}}
\]
where the right-hand side series converges if and only if $2\beta - n +2>1$.
Therefore, for $\beta> (n-1)/2$, $\Omega_\beta\in T^2(\widehat{\operatorname{SU}}(n))$ and by Theorem~\ref{t:injectiv-l^1(H,omega)-for-T2-weights}, $\ell^1(\widehat{\operatorname{SU}}(n),\omega_\beta)$ is injective and consequently isomorphic to an operator algebra. Moreover,
note that
\begin{eqnarray*}
\norm{\Omega_\beta}_{T^2(\widehat{\operatorname{SU}}(n))} &\leq& \left\|(\mu,\nu)\mapsto \frac{A_\beta C_n^\beta}{1+\mu_1}+\frac{A_\beta C_n^\beta}{1+\nu_1}\right\|_{T^2(H)}\\
&\leq& \sup_{\nu\in \widehat{\operatorname{SU}}(n)}\left( \sum_{\mu\in \widehat{\operatorname{SU}}(n)} \left|\frac{ A_\beta C_n^\beta }{1+\mu_1}\right|^2\right)^{1/2} \\
&+ &\sup_{\mu\in \widehat{\operatorname{SU}}(n)}\left( \sum_{\nu\in \widehat{\operatorname{SU}}(n)} \left|\frac{A_\beta C_n^\beta}{1+\nu_1}\right|^2\right)^{1/2}\\
&\leq& A_\beta C_n^\beta 2 \left( \sum_{k=0}^\infty \frac{1}{(1+k)^{2\beta-n+2}}\right)^{1/2}.
\end{eqnarray*}
Hence, for $A_\beta=\max\{1,2^{\beta-1}\}$,
\[
\norm{{\bf m}}_\epsilon \leq 2 {K_{\bf G}} A_\beta C_n^\beta \left( \sum_{k=0}^\infty \frac{1}{(1+k)^{2\beta-n+2}}\right)^{1/2}.
\]
\end{eg}
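The convergence condition above can be spot-checked numerically. The following sketch (ours, not part of the original argument) takes $n=2$ and $\beta=1$, so the exponent $2\beta-n+2$ equals $2$ and the series sums to $\pi^2/6$.

```python
import math

# Illustrative check of the convergence condition 2*beta - n + 2 > 1
# for n = 2, beta = 1; the series is sum_k (1+k)^{-2} = pi^2/6.
n, beta = 2, 1.0
exponent = 2 * beta - n + 2
assert exponent > 1                       # equivalent to beta > (n-1)/2

partial = sum(1.0 / (1 + k) ** exponent for k in range(100000))
assert abs(partial - math.pi ** 2 / 6) < 1e-4
```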
\vskip2.0em
\begin{cor}\label{c:injectiv-l^1(H,omega)-for-general-case}
Let $H$ be a discrete hypergroup and let $\omega$ be a weakly additive weight on $H$ with corresponding constant $C>0$.
Then $\ell^1(H,\omega)$ is injective if $ \sum_{x\in H}{\omega(x)^{-2}}<\infty$.
Moreover,
\[
\norm{{\bf m}}_\epsilon \leq 2C{K_{\bf G}} \left( \sum_{x\in H}\frac{1}{\omega(x)^2} \right)^{1/2}.
\]
\end{cor}
\begin{proof}
Suppose that $ \sum_{x\in H} \omega(x)^{-2}<\infty$. Note that for each $t\in \operatorname{supp}(\delta_x*\delta_y)$,
\begin{eqnarray*}
\frac{\omega(t)}{\omega(x)\omega(y)} \leq C\frac{\omega(x)+\omega(y)}{\omega(x)\omega(y)} = \frac{C}{\omega(x)}+\frac{C}{\omega(y)}.
\end{eqnarray*}
Thus, for the functions $f_1(x,y)={\omega(x)}^{-1}$ and $f_2(x,y)={\omega(y)}^{-1}$,
\begin{eqnarray*}
\norm{\Omega}_{T^2(H)} &\leq& \left\|(x,y)\mapsto \frac{C}{\omega(x)}+\frac{C}{\omega(y)}\right\|_{T^2(H)}\\ &\leq& \left(\sup_{y\in H}\left( \sum_{x\in H} \left|\frac{C}{\omega(x)}\right|^2\right)^{1/2} + \sup_{x\in H}\left( \sum_{y\in H} \left|\frac{C}{\omega(y)}\right|^2\right)^{1/2}\right) \leq 2C \left(\sum_{x\in H} \frac{1}{\omega(x)^2}\right)^{\frac{1}{2}}.
\end{eqnarray*}
Consequently, by Theorem~\ref{t:injectiv-l^1(H,omega)-for-T2-weights}, $\ell^1(H,\omega)$ is injective and $\norm{{\bf m}}_\epsilon$ satisfies the mentioned inequality.
\end{proof}
\begin{eg}\label{eg:chebyshev-OP}
Let $\omega_f$ be the weight constructed from the group weight admitted by a positive increasing function $f$ (see Example~\ref{eg:Chebyshev-weight-Arens}). One can see that, if \[
\sum_{n\in {\mathbb N}_0} \frac{1}{f(n)^2}<\infty\ \ \text{and}\ \ \sup_{n,m\in {\mathbb N}_0} \frac{ f(n+m)}{f(n)+ f(m)} <\infty,
\]
then $\omega_f$ satisfies the conditions of Corollary~\ref{c:injectiv-l^1(H,omega)-for-general-case} and therefore, $\ell^1({\mathbb N}_0,\omega_f)$ is isomorphic to an operator algebra. On the other hand, $\ell^1({\mathbb N}_0,\omega_f)$ can be embedded (isomorphically as a Banach algebra) into $A(\Bbb{T},\sigma_f)$ which is not isomorphic to any operator algebra (as it is not even Arens regular, see Example~\ref{eg:Chebyshev-Arens}).
\end{eg}
\begin{rem}
Note that the condition assumed on $f$ in Example~\ref{eg:chebyshev-OP} implies the Arens regularity condition required in Example~\ref{eg:Chebyshev-Arens}. Compare this with the known fact that every Banach algebra which is isomorphic to an operator algebra is Arens regular.
\end{rem}
\begin{rem}\label{r:operator-algebra-of-conj(SL(2,2^n)}
Let $(\operatorname{Conj}(G), \omega_\alpha)$ be the weighted hypergroup defined in Example~\ref{eg:SL(2,2n)-Arens-regularity}. Note that $\omega_\alpha$ is a weakly additive weight.
One can straightforwardly show that $\sum_{C\in \operatorname{Conj}(G)} \omega(C)^{-2} =\infty$.
Hence, not all weakly additive weights satisfy the other condition mentioned in Corollary~\ref{c:injectiv-l^1(H,omega)-for-general-case}.
\end{rem}
For finitely generated hypergroups, we showed that polynomial weights are weakly additive. In the following, we study operator algebra isomorphism for weighted hypergroup algebras with polynomial weights. Developing a machinery which relates exponential weights to polynomial ones, we also study exponential weights in Subsection~\ref{ss:OP-exponential-weights}. For the case that $H$ is a group, this was achieved in \cite{sa3}.
\begin{cor}\label{c:injectiv-l^1(H,omega)-for-polynomial-weights}
Let $H$ be a finitely generated hypergroup, let $F$ be a generator of $H$ such that $|F^{*n}|\leq D n^d$ for some $d,D>0$, and let $\omega_\beta$ be the polynomial weight on $H$ associated to $F$. If $2\beta>d+1$, then $\ell^1(H,\omega_\beta)$ is injective. Moreover, for $C=\max\{1,2^{\beta-1}\}$,
\[
\norm{{\bf m}}_\epsilon \leq 2C{K_{\bf G}} \left( 1 + \sum_{n=1}^\infty \frac{ D n^d}{(1+n)^{2\beta}} \right)^{1/2} .
\]
\end{cor}
\begin{proof}
To prove this corollary, we mainly rely on Corollary~\ref{c:injectiv-l^1(H,omega)-for-general-case}. Recall that $\omega_\beta$ is weakly additive with constant $C=\max\{1,2^{\beta-1}\}$.
To show the desired bound for $\norm{{\bf m}}_\epsilon$, note that
\begin{eqnarray*}
\sum_{x\in H} \frac{1}{\omega_\beta(x)^2} &=& \sum_{x\in H} \frac{1}{(1+\tau(x))^{2\beta}}
= \sum_{n=0}^\infty \sum_{x\in F^{*n} \setminus F^{*(n-1)}} \frac{1}{(1+n)^{2\beta}}\\
&\leq& 1 + \sum_{n=1}^\infty \frac{|F^{*n}|}{(1+n)^{2\beta}}
\leq 1 + \sum_{n=1}^\infty \frac{ D n^d}{(1+n)^{2\beta}},
\end{eqnarray*}
which is convergent if $2\beta>d+1$.
\end{proof}
\begin{eg}\label{eg:operator-algebra-of-polynomial}
For a polynomial hypergroup $\Bbb{N}_0$, as a finitely generated hypergroup with the generator $F=\{0,1\}$, we have $|F^{*n}|=n+1 \leq 2n$, as we have seen before.
By Corollary~\ref{c:injectiv-l^1(H,omega)-for-polynomial-weights}, for the polynomial weight $\omega_\beta$ with $\beta>1$ associated to $F$, $\ell^1( \Bbb{N}_0,\omega_\beta)$ is injective.
For $C=\max\{1,2^{\beta-1}\}$, Corollary~\ref{c:injectiv-l^1(H,omega)-for-general-case} implies that
\[
\norm{{\bf m}}_\epsilon \leq 2C{K_{\bf G}} \left( \sum_{n=1}^\infty \frac{ 1}{n^{2\beta}} \right)^{1/2}.
\]
\end{eg}
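As a numerical sanity check (ours; the setting $\tau(k)=k$ with $\operatorname{supp}(\delta_m*\delta_n)\subseteq\{|m-n|,\dots,m+n\}$, as for Chebyshev-type polynomial hypergroups, is an illustrative assumption), the weak additivity of $\omega_\beta$ with constant $\max\{1,2^{\beta-1}\}$ can be verified exactly on a finite range:

```python
# Sketch: verify omega_beta(t) <= C (omega_beta(m) + omega_beta(n)) for all
# t in the assumed support {|m-n|, ..., m+n}, with C = max(1, 2**(beta-1)).
beta = 2
C = max(1, 2 ** (beta - 1))          # = 2 for beta = 2
omega = lambda k: (1 + k) ** beta    # polynomial weight, tau(k) = k assumed

ok = all(
    omega(t) <= C * (omega(m) + omega(n))
    for m in range(40) for n in range(40)
    for t in range(abs(m - n), m + n + 1)
)
assert ok
```

The worst case is $t=m+n$, where the bound reduces to the elementary inequality $(a+b-1)^2\le 2(a^2+b^2)$ with $a=1+m$, $b=1+n$.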
\begin{subsection}{Hypergroups with exponential weights}\label{ss:OP-exponential-weights}
The other class of weights introduced for finitely generated hypergroups is the class of exponential weights. As we mentioned before, unlike polynomial weights, exponential weights are not necessarily weakly additive. In this subsection, following \cite{sa3}, we study operator algebra isomorphism for these weights by studying the cases in which $\Omega$ belongs to $T^2(H)$. The following lemma is a hypergroup adaptation of \cite[Theorem~3.3]{sa3}. Since the proof is similar to that of \cite[Lemma~B.2]{ghan}, we omit it here.
\begin{lemma}\label{l:weight-coming-from-p&q}
Suppose that $0<\alpha<1$, $C>0$, and $\beta\geq \max\left\{ 1, \frac{6}{C\alpha(1-\alpha)}\right\}$.
Define the functions $p:[0,\infty) \rightarrow \Bbb{R}$ and $q:(0,\infty)\rightarrow \Bbb{R}$ by
$p(x):=Cx^{\alpha}-\beta \ln(1+x)$ and $q(x):=\frac{p(x)}{x}$.
Let $H$ be a finitely generated hypergroup with a symmetric generator $F$ and $\omega: H\rightarrow (0,\infty)$ such that
\[
\omega(x)=e^{p(\tau_F(x))}=e^{\tau_F(x) q(\tau_F(x))}\ \ \text{for all $x\in H$}.
\]
Then $\omega(t)\leq M \omega(x) \omega(y)$ for all $t,x,y\in H$ such that $t\in x*y$ where
\[
M=\max\{e^{p(z_1)-p(z_2)-p(z_3)}: z_1,\ z_2,z_3\in [0,2K]\cap \Bbb{N}_0\}
\]
and
\[
K=\left( \frac{\beta^2}{C\alpha(1-\alpha)}\right)^{1/\alpha}.
\]
\end{lemma}
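As an illustrative numerical companion (ours; the parameter choices $\alpha=1/2$, $C=4$ are assumptions made so that the grid $[0,2K]\cap\Bbb{N}_0$ stays small), the following sketch computes $K$ and $M$ and spot-checks the lemma's conclusion on a finite range, using that $\tau_F(t)\le\tau_F(x)+\tau_F(y)$ whenever $t\in x*y$.

```python
import math

# Constants K and M of the lemma for sample parameters alpha = 0.5, C = 4.
alpha, C = 0.5, 4.0
beta = max(1.0, 6.0 / (C * alpha * (1 - alpha)))      # = 6 here

def p(x):
    return C * x ** alpha - beta * math.log(1 + x)

K = (beta ** 2 / (C * alpha * (1 - alpha))) ** (1 / alpha)   # = 36^2 = 1296
grid = [p(z) for z in range(int(2 * K) + 1)]
# p(z1) - p(z2) - p(z3) is maximized by maximizing p(z1) and minimizing
# p(z2), p(z3) independently.
M = math.exp(max(grid) - 2 * min(grid))

# Spot-check omega(t) <= M omega(x) omega(y), i.e. e^{p(k)} <= M e^{p(m)+p(n)}
# for k <= m + n, on a finite range.
pv = [p(k) for k in range(100)]
ok = all(
    math.exp(pv[k]) <= M * math.exp(pv[m] + pv[n])
    for m in range(50) for n in range(50) for k in range(m + n + 1)
)
assert ok and M >= 1.0
```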
\begin{theorem}\label{t:injectiv-l^1(H,sigma)-for-exponential-weights}
Let $H$ be a finitely generated hypergroup, let $F$ be a symmetric generator of $H$ such that $|F^{*n}|\leq D n^d$ for some $d,D>0$, and let $\sigma_{\alpha,C}$ be an exponential weight on $H$ for some $0<\alpha<1$ and $C>0$. Then $\ell^1(H,\sigma_{\alpha,C})$ is injective and, equivalently, isomorphic to an operator algebra.
\end{theorem}
\begin{proof}
We define a function $\omega:H \rightarrow (0,\infty)$ by
\[
\omega(x):=\frac{\sigma_{\alpha,C}(x)}{\omega_\beta(x)} = e^{C{\tau_F}(x)^\alpha - \beta \ln(1+{\tau_F}(x))}\ \ (x\in H),
\]
where $\omega_\beta$ is the polynomial weight on $H$ associated to $F$ and
\[
\beta>\max \left\{ 1, \frac{6}{C\alpha(1-\alpha)}, \frac{d+1}{2} \right\};
\]
note that $\omega$ is exactly the weight considered in Lemma~\ref{l:weight-coming-from-p&q}.
Hence, by Lemma~\ref{l:weight-coming-from-p&q}, $\omega(t)\leq M \omega(x)\omega(y)$ for some $M>0$ and all $t,x,y\in H$ such that $t\in x*y$, that is,
\[
\frac{\sigma_{\alpha,C}(t)}{\sigma_{\alpha,C}(x)\sigma_{\alpha,C}(y)} \leq M \frac{\omega_\beta(t)}{\omega_\beta(x)\omega_\beta(y)}.
\]
Consequently,
\[
\frac{\sigma_{\alpha,C}(t)}{\sigma_{\alpha,C}(x)\sigma_{\alpha,C}(y)} \leq M' \left( \frac{1}{(1+\tau(x))^\beta}+\frac{1}{(1+\tau(y))^\beta}\right)
\]
for a modified constant $M'>0$.
Therefore by the proof of Corollary~\ref{c:injectiv-l^1(H,omega)-for-polynomial-weights}, $\Omega_{\sigma_{\alpha,C}}\in T^2(H)$. Now Theorem~\ref{t:injectiv-l^1(H,omega)-for-T2-weights} finishes the proof.
\end{proof}
\begin{eg}
As a result of Theorem~\ref{t:injectiv-l^1(H,sigma)-for-exponential-weights}, and to follow Example~\ref{eg:operator-algebra-of-polynomial}, if $H$ is a polynomial hypergroup on $\Bbb{N}_0$, for each exponential weight $\sigma_{\alpha,C}$ for $0<\alpha<1$ and $C>0$, $\ell^1(H,\sigma_{\alpha,C})$ is injective. Note that this class of hypergroups includes $\widehat{\operatorname{SU}}(2)$.
\end{eg}
\end{subsection}
\end{section}
\section*{Acknowledgements}
For this research, the first author was supported by a Ph.D. Dean's Scholarship at the University of Saskatchewan and a Postdoctoral Fellowship from the Fields Institute for Research in Mathematical Sciences and the University of Waterloo. The second named author was also supported by NSERC Discovery grant no.~409364 and generous support from the Fields Institute. The first named author would also like to express his deep gratitude to Yemon Choi and Nico Spronk for several constructive discussions and suggestions which improved the paper significantly. The authors would also like to thank the referee for many productive comments.
\section{Introduction}
To compare the integral of the product of two functions with the product of their integrals, one may use the celebrated \emph{\v{C}eby\v{s}ev functional}
\begin{align}
\label{identity}\mathcal{T}\left( {f,g} \right) = \frac{1}{{b -
a}}\int_a^b {f\left( t \right)g\left( t \right)dt} - \frac{1}{{b
- a}}\int_a^b {f\left( t \right)dt} \cdot \frac{1}{{b -
a}}\int_a^b {g\left( t \right)dt},
\end{align}
which has important applications in numerical integration and approximation theory.
Two famous inequalities due to P. L. \v{C}eby\v{s}ev
(\cite{Cebysev1}-\cite{Cebysev2}) involve, respectively, two differentiable mappings with bounded first derivatives and two monotonic integrable mappings:
\begin{align}
\label{ceb1}\left| {\mathcal{T}\left( {f,g} \right)} \right| \le
\frac{\left(b-a\right)^2}{12} \left\|
{f^{\prime}}\right\|_{\infty} \left\|
{g^{\prime}}\right\|_{\infty},
\end{align}
and
\begin{align}
\label{ceb2}\frac{1}{{b - a}}\int_a^b {f\left( x \right)g\left( x
\right)dx} \ge\left( {\frac{1}{{b - a}}\int_a^b {f\left( x
\right)dx} } \right)\left( {\frac{1}{{b - a}}\int_a^b {g\left( x
\right)dx} } \right).
\end{align}
The inequality (\ref{ceb1}) is known as the first \v{C}eby\v{s}ev
inequality and (\ref{ceb2}) is called the second \v{C}eby\v{s}ev
inequality.
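As an illustrative numerical sanity check (ours, not part of the original text; the midpoint-rule discretization is an assumption of the sketch), both (\ref{ceb1}) and (\ref{ceb2}) can be verified for the sample pair $f(t)=t^2$, $g(t)=t^3$ on $[0,1]$, for which $\mathcal{T}(f,g)=1/12$ exactly.

```python
# Verify both Chebyshev inequalities for f(t) = t^2, g(t) = t^3 on [0, 1],
# using a midpoint-rule Riemann sum.
a, b, n = 0.0, 1.0, 20000
h = (b - a) / n
ts = [a + (i + 0.5) * h for i in range(n)]

mean = lambda F: sum(F(t) for t in ts) * h / (b - a)
f = lambda t: t ** 2
g = lambda t: t ** 3

T = mean(lambda t: f(t) * g(t)) - mean(f) * mean(g)   # Chebyshev functional
assert abs(T - 1.0 / 12) < 1e-6                        # exact value is 1/12

sup_fp, sup_gp = 2.0, 3.0                              # ||f'||_inf, ||g'||_inf
assert abs(T) <= (b - a) ** 2 / 12 * sup_fp * sup_gp   # first inequality
assert mean(lambda t: f(t) * g(t)) >= mean(f) * mean(g)  # second (f, g increasing)
```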
In recent years many authors have paid serious attention to both inequalities (\ref{ceb1}) and (\ref{ceb2}), studying them through several approaches and in different ways for various types of functions; the reader may refer to \cite{Alomari}, \cite{Atkinson},
\cite{BD}--\cite{Cerone2}, \cite{CebCerDra1},
\cite{DraAcc}--\cite{HH}, \cite{Lupas}, \cite{P} and the
references therein. For a comprehensive list of older results
(before 1994) see \cite{Mitrinovic}, and for a good list of recent
references see \cite{CeroneDragomir}.\\
Among others, in order to study the difference between two
Riemann integral means, Barnett et al. \cite{Barnett1} have proved
the following estimates:
\begin{theorem}
\label{thm01}\emph{Let $f:[a,b]\rightarrow \mathbb{R}$ be an
absolutely continuous function with the property that $f^{\prime
}\in L_{\infty }[a,b]$, i.e.,}
$\left\Vert {f^{\prime }}\right\Vert _{\infty }:=\mathop{\rm ess\,sup}_{t\in \left[ {a,b}\right] }\left\vert {f^{\prime }\left( t\right) }\right\vert .$
\emph{Then for $a\leq c<d\leq b$, we have the inequality}
\begin{align}
& \left\vert {\frac{1}{{b-a}}\int_{a}^{b}{f\left( t\right) dt}-\frac{1}{{d-c}}\int_{c}^{d}{f\left( s\right) ds}}\right\vert \label{Barnett} \\
& \leq \left[ {\frac{1}{4}+\left( {\frac{{\left( {a+b}\right) /2-\left( {c+d}\right) /2}}{{\left( {b-a}\right) -\left( {d-c}\right) }}}\right) ^{2}}\right] \left[ {\left( {b-a}\right) -\left( {d-c}\right) }\right] \left\Vert {f^{\prime }}\right\Vert _{\infty } \notag \\
& \leq \frac{1}{2}\left[ {\left( {b-a}\right) -\left( {d-c}\right) }\right] \left\Vert {f^{\prime }}\right\Vert _{\infty }. \notag
\end{align}
\emph{The constant $1/4$ in the first inequality and $1/2$ in the
second inequality are the best possible.}
\end{theorem}
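As a numerical illustration of Theorem~\ref{thm01} (ours; the choice $f(t)=\sin t$ with $[a,b]=[0,\pi]$, $[c,d]=[1,2]$ is an assumption of the sketch), note that $\Vert f'\Vert_\infty = 1$ here and both means are computable in closed form.

```python
import math

# Check Theorem thm01's two bounds for f(t) = sin t, [a,b] = [0, pi],
# [c,d] = [1, 2]; here ||f'||_inf = 1 and (1/(b-a)) int sin = (cos a - cos b)/(b-a).
a, b, c, d = 0.0, math.pi, 1.0, 2.0
sup_fp = 1.0

mean_ab = (math.cos(a) - math.cos(b)) / (b - a)
mean_cd = (math.cos(c) - math.cos(d)) / (d - c)
lhs = abs(mean_ab - mean_cd)

shift = ((a + b) / 2 - (c + d) / 2) / ((b - a) - (d - c))
bound1 = (0.25 + shift ** 2) * ((b - a) - (d - c)) * sup_fp
bound2 = 0.5 * ((b - a) - (d - c)) * sup_fp

assert lhs <= bound1 <= bound2
```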
Another result presented by Cerone and Dragomir \cite{Cerone} as
follows:
\begin{theorem}
\label{thm04}\emph{Let $f:[a,b]\rightarrow \mathbb{R}$ and let $a\leq c<d\leq b$. The
following bounds hold:}
\begin{align}
& \left\vert {\frac{1}{{b-a}}\int_{a}^{b}{f\left( t\right) dt}-\frac{1}{{d-c
}\int_{c}^{d}{f\left( s\right) ds}}\right\vert \label{Cerone2} \\
& \leq \left\{
\begin{array}{l}
\left[ {\frac{{b-a-\left( {d-c}\right) }}{2}+\left\vert {\frac{{c+d}}{2}
\frac{{a+b}}{2}}\right\vert }\right] \frac{{\bigvee_{a}^{b}\left( f\right)
}{{b-a}}; \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\, \rm
{if}\thinspace
\thinspace $f$\thinspace\thinspace \thinspace \rm{is\thinspace \thinspace of \thinspace \thinspace bounded\thinspace \thinspace variation}\\
\\
L\frac{{\left( {c-a}\right) ^{2}+\left( {b-d}\right)
^{2}}}{{2\left[ {\left( {b-a}\right) -\left( {d-c}\right) }\right]
}};\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
\,\,\,\,\, \rm {if}\thinspace
\thinspace $f$\thinspace\thinspace \thinspace\thinspace \rm {is\thinspace \thinspace} $L$-{\rm {Lipschitzian}} \\\\
\end{array
\right. \notag
\end{align
\end{theorem}
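Both branches of Theorem~\ref{thm04} can be illustrated numerically (ours; the choice $f(t)=\sin t$ with $[a,b]=[0,\pi]$, $[c,d]=[1,2]$ is an assumption): $\sin$ is $1$-Lipschitz and has total variation $2$ on $[0,\pi]$.

```python
import math

# Check both branches of Theorem thm04 for f(t) = sin t, [a,b] = [0, pi],
# [c,d] = [1, 2]; f is 1-Lipschitz with total variation 2 on [0, pi].
a, b, c, d = 0.0, math.pi, 1.0, 2.0
L, total_var = 1.0, 2.0

lhs = abs((math.cos(a) - math.cos(b)) / (b - a)
          - (math.cos(c) - math.cos(d)) / (d - c))

bv_bound = ((b - a - (d - c)) / 2
            + abs((c + d) / 2 - (a + b) / 2)) * total_var / (b - a)
lip_bound = L * ((c - a) ** 2 + (b - d) ** 2) / (2 * ((b - a) - (d - c)))

assert lhs <= bv_bound and lhs <= lip_bound
```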
Recently, Hwang and Dragomir \cite{Hwang} proved the following result for absolutely continuous mappings whose first derivative in absolute value is convex:
\begin{theorem}
\label{thm05}\emph{Let $f:[a,b]\rightarrow \mathbb{R}$ be an absolutely continuous mapping on $[a,b]$. If $|f'|$ is convex, then}
\begin{multline}
\left| {\frac{1}{{b - a}}\int_a^b {f\left( s \right)ds} -
\frac{1}{{y - x}}\int_x^y {f\left( s \right)ds} }
\right|\label{hwang}
\\
\le \frac{1}{6}\left[ {\frac{{\left( {x - a} \right)^2 }}{{b -
a}}\left| {f'\left( a \right)} \right| + I\left( {a,b,x,y}
\right)\left| {f'\left( x \right)} \right| + J\left( {a,b,x,y}
\right)\left| {f'\left( y \right)} \right| + \frac{{\left( {b - y}
\right)^2 }}{{b - a}}\left| {f'\left( b \right)} \right|} \right],
\end{multline}
for all $a\le x<y\le b$, where
\begin{align*}
I\left( {a,b,x,y} \right) &= \frac{{\left( {x - a} \right)^2
\left( {y - x} \right)}}{{\left( {b - a} \right)\left( {b - a - y
+ x} \right)}} - \frac{1}{3}\frac{{\left( {x - a} \right)^3 \left(
{y - x} \right)}}{{\left( {b - a} \right)\left( {b - a - y + x}
\right)^2 }} - \frac{1}{2}\frac{{\left( {x - a} \right)\left( {y -
x} \right)}}{{b - a}}
\\
&\qquad + \frac{1}{6}\frac{{\left( {y - x} \right)\left( {b - a -
y + x} \right)}}{{b - a}} + \frac{{\left( {x - a} \right)^2
}}{{3\left( {b - a}
\right)}},
\end{align*}
\begin{align*}
J\left( {a,b,x,y} \right) &= \frac{{\left( {x - a} \right)^2 \left( {y - x} \right)}}{{\left( {b - a} \right)\left( {b - a - y + x} \right)}} - \frac{{\left( {x - a} \right)^3 \left( {y - x} \right)}}{{3\left( {b - a} \right)\left( {b - a - y + x} \right)^2 }} - \frac{{\left( {x - a} \right)\left( {y - x} \right)}}{{2\left( {b - a} \right)}}
\\
&\qquad+ \frac{{\left( {y - x} \right)\left( {b - a - y + x} \right)}}{{6\left( {b - a} \right)}} + \frac{{\left( {x - a} \right)^2 }}{{3\left( {b - a} \right)}}.
\end{align*}
\end{theorem}
The aim of this paper is to establish new sharp bounds for the \v{C}eby\v{s}ev functional involving various types of functions. Mainly, we present new bounds for the \v{C}eby\v{s}ev functional that combine convex functions with other types of functions, such as absolutely continuous functions, Lipschitz functions and functions of bounded variation.
\section{The case when $f^{\prime}$ or $g^{\prime}$ is convex \label{sec2}}
\begin{thm} \label{thm1}\emph{Let $a,b \in \mathbb{R}$, $a<b$, and let $I$ be a real interval such that $a,b\in I^{\circ}$ (the interior of $I$). Let $f,g:I\rightarrow \mathbb{R}$ be two
absolutely continuous functions on $I$ such that
$\left|{f^{\prime}}\right|$ and $\left|{g^{\prime}}\right|$ are
convex on $[a,b]\subset I^{\circ}$. Then}
\begin{align}
\left\vert {\mathcal{T}\left( {f,g} \right)
}\right\vert\label{eq2.1}
&\leq\frac{\left( {b - a} \right)^2}{48}
\left[ {M\left( {a,b} \right) + N\left( {a,b} \right) + \left|
{M\left( {a,b} \right) - N\left( {a,b} \right)} \right|} \right]
\\
&\leq \frac{\left(b-a\right)^{2}}{12}
\max\{\left| {g^{\prime}\left( a \right)} \right|,\left|
{g^{\prime}\left( b \right)} \right|\}\cdot\max\{\left|
{f^{\prime}\left( a \right)} \right|,\left| {f^{\prime}\left( b
\right)} \right|\},\nonumber
\end{align}
\emph{where}
\begin{align*}
M\left({a,b}\right):=\left| {f'\left( a \right)} \right|\left|
{g'\left( a \right)} \right| + \left| {f'\left( b \right)}
\right|\left| {g'\left( b \right)} \right|,
\end{align*}
\emph{and}
\begin{align*}
N\left({a,b}\right):=\left| {f'\left( b \right)} \right|\left|
{g'\left( a \right)} \right| + \left| {f'\left( a \right)}
\right|\left| {g'\left( b \right)} \right|.
\end{align*}
\emph{The constants $\frac{1}{48}$ and $\frac{1}{12}$ are the best
possible.}
\end{thm}
\begin{proof}
By applying the integration by parts formula; Dragomir in
\cite{Dragomir} obtained the identity (see also \cite{Cerone1}):
\begin{align}
\label{eq2.2}\mathcal{T}\left( {f,g} \right) =
\frac{1}{{\left( {b - a} \right)^2
}}\int_a^b {\left[ {\left( {t - a} \right)\int_a^b {g\left( t
\right)dt} - \left( {b - a} \right)\int_a^t {g\left( s \right)ds}
} \right]f^{\prime}\left( t \right)dt}.
\end{align}
Utilizing the triangle inequality, we have
\begin{align}
&\left\vert {\mathcal{T}\left( {f,g} \right)}\right\vert
\nonumber\\
&\leq
\frac{1}{b-a}\int_{a}^{b}{\left\vert {\left( {t-a}\right) \left[ {\frac{1}{{t-a}}\int_{a}^{t}{g\left( u\right) du}-\frac{1}{{b-a}}\int_{a}^{b}{g\left( u\right) du}}\right] }\right\vert \left\vert f'\left( {t}\right) \right\vert dt} \label{eq2.3}
\end{align}
Since $\left|{g^{\prime}}\right|$ is convex, by setting $y=t$ and $x=a$ in (\ref{hwang}) one may obtain that
\begin{align}
&\left| {\frac{1}{{b - a}}\int_a^b {g\left( s \right)ds} -
\frac{1}{{t - a}}\int_a^t {g\left( s \right)ds} } \right|
\nonumber\\
&\le \frac{1}{6}\left[ {\frac{{\left( {t - a} \right)\left( {b -
t} \right)}}{{b - a}}\left| {g'\left( a \right)} \right| + 2\left(
{b - t} \right)\left| {g'\left( t \right)} \right| + \frac{{\left(
{b - t} \right)^2 }}{{b - a}}\left| {g'\left( b \right)} \right|}
\right]
\nonumber\\
&\le \frac{1}{6}\left[ {\frac{{\left( {t - a} \right)\left( {b -
t} \right) + 2\left( {b - t} \right)^2 }}{{b - a}}\left| {g'\left(
a \right)} \right| + \frac{{2\left( {b - t} \right)\left( {t - a}
\right) + \left( {b - t} \right)^2 }}{{b - a}}\left| {g'\left( b
\right)} \right|} \right] \label{eq2.4}
\\
&\le \frac{1}{2 } \max\{\left| {g'\left( a \right)} \right|,\left|
{g'\left( b \right)} \right|\}\cdot
\left(b-t\right)\label{eq2.5}
\end{align}
where the inequality (\ref{eq2.4}) is deduced from the previous one using the convexity of $\left|{g^{\prime}}\right|$.
Substituting (\ref{eq2.4}) in (\ref{eq2.3}) and using the convexity of $\left|{f^{\prime}}\right|$, we can state that
\begin{align*}
&\left\vert {\mathcal{T}\left( {f,g} \right)}\right\vert
\nonumber\\
&\leq \frac{1}{6\left(b-a\right)^2}\int_{a}^{b} \left[
{\left\{\left( {t - a} \right)^2\left( {b - t} \right) + 2\left(
{t-a}\right)\left( {b - t} \right)^2 \right\}\cdot\left| {g'\left(
a \right)} \right|}\right.
\\
&\qquad\qquad \left.{+ \left\{2\left( {b - t} \right)\left( {t -
a} \right)^2 + \left( {t-a}\right)\left( {b - t} \right)^2
\right\}\cdot\left| {g'\left( b \right)} \right|} \right] \cdot
\left[ {\frac{{b - t}}{{b - a}}\left| {f'\left( b \right)} \right|
+ \frac{{t - a}}{{b - a}}\left| {f'\left( a \right)} \right|}
\right] dt
\nonumber\\
&\leq \frac{1}{6\left(b-a\right)^2} \cdot\frac{{7\left( {b - a}
\right)^4 }}{{60}}\left[ {\left| {f'\left( a \right)}
\right|\left| {g'\left( a \right)} \right| + \left| {f'\left( b
\right)} \right|\left| {g'\left( b \right)} \right|} \right]
\\
&\qquad + \frac{1}{6\left(b-a\right)^2}\cdot \frac{{2\left( {b -
a} \right)^4 }}{{15}}\left[ {\left| {f'\left( b \right)}
\right|\left| {g'\left( a \right)} \right| + \left| {f'\left( a
\right)} \right|\left| {g'\left( b \right)} \right|} \right]
\\
&\le \frac{{7\left( {b - a} \right)^2 }}{{360}}\left[ {\left|
{f'\left( a \right)} \right|\left| {g'\left( a \right)} \right| +
\left| {f'\left( b \right)} \right|\left| {g'\left( b \right)}
\right|} \right] + \frac{{\left( {b - a} \right)^2 }}{{45}}\left[
{\left| {f'\left( b \right)} \right|\left| {g'\left( a \right)}
\right| + \left| {f'\left( a \right)} \right|\left| {g'\left( b
\right)} \right|} \right]
\\
&\le \frac{{\left( {b - a} \right)^2 }}{{24}} \max\{{M\left( {a,b}
\right), N\left( {a,b} \right)} \}
\\
&=\frac{{\left( {b - a} \right)^2 }}{{48}}\left[ {M\left( {a,b}
\right) + N\left( {a,b} \right) + \left| {M\left( {a,b} \right) -
N\left( {a,b} \right)} \right|} \right]
\end{align*}
where $M\left( {a,b} \right)$ and $N\left( {a,b} \right)$ are defined above and we have used the max-law, i.e., $\max \left\{ {c,d} \right\} = \frac{1}{2}\left[ {c + d + \left| {c - d} \right|} \right]$ for all $c,d\in \mathbb{R}$; this proves the first inequality in (\ref{eq2.1}).
To prove the second inequality in (\ref{eq2.1}), we substitute (\ref{eq2.5}) in (\ref{eq2.3}) and use the convexity of $\left|{f^{\prime}}\right|$ to get
\begin{align*}
&\left\vert {\mathcal{T}\left( {f,g} \right)}\right\vert
\nonumber\\
&\leq
\frac{1}{b-a}\int_{a}^{b}{\left\vert {\left( {t-a}\right) \left[ {\frac{1}{{t-a}}\int_{a}^{t}{g\left( u\right) du}-\frac{1}{{b-a}}\int_{a}^{b}{g\left( u\right) du}}\right] }\right\vert \left\vert f'\left( {t}\right) \right\vert dt}
\nonumber\\
&\leq \frac{1}{2\left(b-a\right)^2} \max\{\left| {g'\left( a
\right)} \right|,\left| {g'\left( b \right)} \right|\}\left\{{
\left| {f'\left( b \right)} \right|\int_a^b {\left( {t - a}
\right) \left( {b - t} \right)^2dt} }\right.
\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \left.{+ \left|
{f'\left( a \right)} \right|\int_a^b {\left( {t - a} \right)^2
\left( {b - t} \right) dt} }\right\}
\\
&\le \frac{1}{2\left(b-a\right)^2} \max\{\left| {g'\left( a
\right)} \right|,\left| {g'\left( b \right)}
\right|\}\cdot\max\{\left| {f'\left( a \right)} \right|,\left|
{f'\left( b \right)} \right|\}
\\
&\qquad\qquad\qquad\qquad\qquad\times\left[ {\int_a^b {\left( {t -
a} \right) \left( {b - t} \right)^2dt} + \int_a^b {\left( {t - a}
\right)^2 \left( {b - t} \right) dt} } \right]
\\
&=\frac{\left(b-a\right)^{2}}{12} \max\{\left| {g^{\prime}\left( a
\right)} \right|,\left| {g^{\prime}\left( b \right)}
\right|\}\cdot\max\{\left| {f^{\prime}\left( a \right)}
\right|,\left| {f^{\prime}\left( b \right)} \right|\},
\end{align*}
which proves the second inequality in (\ref{eq2.1}). The sharpness of the first inequality in (\ref{eq2.1}) is attained for $f(x)=\frac{1}{6}x^2$ and $g(x)=x$, $x\in [0,1]$, while the sharpness of the second inequality follows by considering $f(x)=g(x)=x$, $x\in [a,b]$.
\end{proof}
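For illustration (not part of the proof), the sharpness case above can be verified numerically: with $f(x)=g(x)=x$ on $[0,1]$ one has $\mathcal{T}(f,g)=\frac{1}{3}-\frac{1}{4}=\frac{1}{12}$, which coincides with the right-hand side of the second inequality in (\ref{eq2.1}). A minimal Python sketch (the midpoint-rule helpers are our own):

```python
# Numerical check (illustrative only) that the second inequality in
# (2.1) is attained by f(x) = g(x) = x on [0, 1].
def mean(h, a, b, n=100000):
    """Midpoint-rule approximation of (1/(b-a)) * int_a^b h(x) dx."""
    w = (b - a) / n
    return sum(h(a + (k + 0.5) * w) for k in range(n)) * w / (b - a)

def cheb_T(f, g, a, b):
    """Cebysev functional T(f, g) = mean(f*g) - mean(f) * mean(g)."""
    return mean(lambda x: f(x) * g(x), a, b) - mean(f, a, b) * mean(g, a, b)

a, b = 0.0, 1.0
f = g = lambda x: x
T = cheb_T(f, g, a, b)
bound = (b - a) ** 2 / 12 * 1.0 * 1.0      # max|g'| = max|f'| = 1 here
print(T, bound)                            # both approximately 1/12
```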
\begin{thm}\label{thm2}
\emph{Let $f,g:[a,b]\rightarrow \mathbb{R}$ be such that $f$ is
$L$--Lipschitzian on $[a,b]$ and $g$ is absolutely continuous on
$[a,b]$ such that $\left|{g^{\prime}}\right|$ is convex on
$[a,b]$. Then,}
\begin{align}
\label{eq2.6}\left\vert {\mathcal{T}\left( {f,g}
\right)}\right\vert \le L \frac{{\left( {b - a} \right)^2
}}{{24}}\left[ {\left| {g'\left( a \right)} \right| + \left|
{g'\left( b \right)} \right|} \right]\le L \frac{{\left( {b - a}
\right)^2 }}{{12}} \max\{\left| {g^{\prime}\left( a \right)}
\right|,\left| {g^{\prime}\left( b \right)} \right|\}.
\end{align}
\emph{The constants $\frac{1}{24}$ and $\frac{1}{12}$ are the best possible.}
\end{thm}
\begin{proof}
Using the integration by parts formula one may have
\begin{align}
\label{eq2.7}\mathcal{T}\left( {f,g} \right) =
\frac{1}{{\left( {b - a} \right)^2
}}\int_a^b {\left[ {\left( {t - a} \right)\int_a^b {g\left( t
\right)dt} - \left( {b - a} \right)\int_a^t {g\left( s \right)ds}
} \right]df\left( t \right)}.
\end{align}
On the other hand, for an $L$-Lipschitzian mapping $p$ defined on
$[\alpha,\beta]$ and a Riemann integrable function $q$ defined on
$[\alpha,\beta]$, the following inequality is well known in the
literature
\begin{align}
\left| {\int_\alpha ^\beta {q\left( s \right)dp\left( s \right)}
} \right| \le L\int_\alpha ^\beta {\left| {q\left( s
\right)} \right| ds}.\label{eq2.8}
\end{align}
So, as $f$ is $L$--Lipschitzian on $[a,b]$, by (\ref{eq2.7}) and
(\ref{eq2.8}) we have
\begin{align}
\label{eq2.9} \left\vert {\mathcal{T}\left( {f,g}
\right)}\right\vert
\leq \frac{L}{b-a}\int_{a}^{b}{\left( {t-a}\right) \left| {\frac{1}{{
t-a}}\int_{a}^{t}{g\left( u\right)
du}-\frac{1}{{b-a}}\int_{a}^{b}{g\left( u\right) du}}\right|
dt}.
\end{align}
Now, as $\left|{g^{\prime}(x)} \right|$ is convex, setting
$y=t$ and $x=a$ in (\ref{hwang}) shows that (\ref{eq2.4}) holds, so
substituting (\ref{eq2.4}) in (\ref{eq2.9}) and simple computations
yield that
\begin{align*}
\left\vert {\mathcal{T}\left( {f,g} \right)}\right\vert
&\leq \frac{L}{b-a}\int_{a}^{b}{\left( {t-a}\right) \left| {\frac{1}{{
t-a}}\int_{a}^{t}{g\left( u\right)
du}-\frac{1}{{b-a}}\int_{a}^{b}{g\left( u\right) du}}\right| dt}
\\
&\le L \frac{\left({b-a}\right)^2}{24} \left[ {\left| {g'\left( a
\right)} \right| + \left| {g'\left( b \right)} \right|} \right]
\end{align*}
which proves the first inequality in (\ref{eq2.6}) and the
sharpness holds with $f(x)=x$ and $g(x)=\frac{1}{3}x^2$, $x\in
[0,1]$.
The second inequality follows by substituting (\ref{eq2.5})
instead of (\ref{eq2.4}) in (\ref{eq2.9}) and the sharpness
follows by considering $f(x)=g(x)=x$, $x\in [0,1]$.
\end{proof}
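Again for illustration (our own numerical sketch, not part of the proof), the sharpness pair of the first inequality in (\ref{eq2.6}), $f(x)=x$ (so $L=1$) and $g(x)=\frac{1}{3}x^2$ on $[0,1]$, gives $\mathcal{T}(f,g)=\frac{1}{12}-\frac{1}{18}=\frac{1}{36}$, equal to the bound $\frac{L(b-a)^2}{24}\left[|g'(a)|+|g'(b)|\right]$:

```python
# Illustrative check that the first bound in (2.6) is attained by
# f(x) = x (so L = 1) and g(x) = x^2 / 3 on [0, 1].
def mean(h, a, b, n=100000):
    """Midpoint-rule approximation of (1/(b-a)) * int_a^b h(x) dx."""
    w = (b - a) / n
    return sum(h(a + (k + 0.5) * w) for k in range(n)) * w / (b - a)

def cheb_T(f, g, a, b):
    return mean(lambda x: f(x) * g(x), a, b) - mean(f, a, b) * mean(g, a, b)

a, b, L = 0.0, 1.0, 1.0                    # f(x) = x is 1-Lipschitz
f = lambda x: x
g = lambda x: x ** 2 / 3                   # g'(0) = 0, g'(1) = 2/3
T = cheb_T(f, g, a, b)
bound = L * (b - a) ** 2 / 24 * (0.0 + 2.0 / 3.0)
print(T, bound)                            # both approximately 1/36
```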
\begin{rem}\label{remark1}
\emph{In Theorem \ref{thm2}, if $f$ is differentiable and
$f^{\prime}$ is
bounded, i.e., $\left\Vert {f^{\prime }}\right\Vert _{\infty }:={\rm ess}\mathop {\sup }
\limits_{t\in \left[ {a,b}\right] }\left\vert {f^{\prime }\left( t\right)}
\right\vert<\infty$, then we have $L=\left\Vert {f^{\prime }}\right\Vert
_{\infty }$ and thus (\ref{eq2.6}) can be written as:}
\begin{align}
\label{eq2.10}\left\vert {\mathcal{T}\left( {f,g}
\right)}\right\vert \le \frac{{\left( {b - a} \right)^2 }}{{12}}
\max\{\left| {g^{\prime}\left( a \right)} \right|,\left|
{g^{\prime}\left( b \right)} \right|\}\left\Vert {f^{\prime
}}\right\Vert _{\infty }.
\end{align}
\emph{The constant $\frac{1}{12}$ is the best possible.}
\end{rem}
\begin{thm}\label{thm3}
\emph{Let $f,g:[a,b]\rightarrow \mathbb{R}$ be such that $f$ is of
bounded variation on $[a,b]$ and $g$ is absolutely continuous on
$[a,b]$ such that $\left|{g^{\prime}}\right|$ is convex on
$[a,b]$. Then,}
\begin{align}
\label{eq2.11}\left\vert {\mathcal{T}\left( {f,g}
\right)}\right\vert \le \frac{{\left( {b - a} \right)
}}{{16}}\left[ {\left| {g'\left( a \right)} \right| + \left|
{g'\left( b \right)} \right|} \right]\cdot \bigvee_a^b\left(
{f}\right)
\le \frac{{\left( {b - a} \right)
}}{{8}}\max\{\left| {g^{\prime}\left( a \right)} \right|,\left|
{g^{\prime}\left( b \right)} \right|\}\cdot \bigvee_a^b\left(
{f}\right).
\end{align}
\emph{The constants $\frac{1}{16}$ and $\frac{1}{8}$ are the best
possible.}
\end{thm}
\begin{proof}
Using the fact that for a continuous function $p$ defined on
$\left[\alpha,\beta\right]$ and a bounded variation function $q$
on $\left[\alpha,\beta\right]$ the Riemann--Stieltjes integral
$\int_{\alpha}^{\beta} {p\left( t \right)dq\left( t \right)}$
exists and the inequality
\begin{align}
\label{eq2.12}\left| {\int_{\alpha}^{\beta}{p\left( t
\right)dq\left( t \right)} } \right| \le \mathop {\sup }\limits_{t
\in \left[\alpha,\beta\right]} \left| {p\left( t \right)} \right|
\cdot \bigvee_{\alpha}^{\beta}\left( {q} \right),
\end{align}
holds.
Therefore, as $f$ is of bounded variation on $[a,b]$ and $g$ is
absolutely continuous on $[a,b]$, by (\ref{eq2.12}) and
(\ref{eq2.7}) we have
\begin{align}
\label{eq2.13}& \left\vert {\mathcal{T}\left( {f,g} \right)}\right\vert \\
& \leq \frac{1}{b-a} \mathop {\sup }\limits_{t \in \left[ {a,b}
\right]}\left\{
{\left( {t-a}\right) \left| {\frac{1}{{t-a}}\int_{a}^{t}{g\left( u\right)
du}-\frac{1}{{b-a}}\int_{a}^{b}{g\left( u\right) du}}\right|
} \right\}\cdot \bigvee_a^b\left( {f}\right). \nonumber
\end{align}
As $\left|{g^{\prime}(x)} \right|$ is convex, (\ref{eq2.4})
holds. By substituting (\ref{eq2.4}) in (\ref{eq2.13}) we get the
first inequality in (\ref{eq2.11}), and the sharpness holds with
the functions $f(x)= \sgn \left(x-\frac{1}{2}\right)$ and
$g(x)=\frac{1}{2}x^2$, $x\in[0,1]$.
The second inequality in (\ref{eq2.11}) follows by substituting
(\ref{eq2.5}) instead of (\ref{eq2.4}) in (\ref{eq2.13}) and the
sharpness holds with $f(x)= \sgn \left(x-\frac{1}{2}\right)$ and
$g(x)=x$, $x\in[0,1]$.
\end{proof}
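The sharpness pair of the first inequality in (\ref{eq2.11}) can also be checked numerically (an illustration of ours, not part of the proof): with $f(x)=\sgn(x-\frac{1}{2})$, whose total variation on $[0,1]$ is $2$, and $g(x)=\frac{1}{2}x^2$, one gets $\mathcal{T}(f,g)=\frac{1}{8}$, matching $\frac{b-a}{16}\left[|g'(a)|+|g'(b)|\right]\bigvee_a^b(f)$:

```python
# Illustrative check that the first bound in (2.11) is attained by
# f(x) = sgn(x - 1/2) (total variation 2 on [0, 1]) and g(x) = x^2 / 2.
def mean(h, a, b, n=100000):
    """Midpoint-rule approximation of (1/(b-a)) * int_a^b h(x) dx."""
    w = (b - a) / n
    return sum(h(a + (k + 0.5) * w) for k in range(n)) * w / (b - a)

def cheb_T(f, g, a, b):
    return mean(lambda x: f(x) * g(x), a, b) - mean(f, a, b) * mean(g, a, b)

a, b = 0.0, 1.0
f = lambda x: 1.0 if x > 0.5 else -1.0     # sgn(x - 1/2), V_0^1(f) = 2
g = lambda x: x ** 2 / 2                   # g'(0) = 0, g'(1) = 1
T = cheb_T(f, g, a, b)
bound = (b - a) / 16 * (0.0 + 1.0) * 2.0
print(T, bound)                            # both approximately 1/8
```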
\begin{rem}\label{remark2}
\emph{In Theorem \ref{thm3}, if $f$ is differentiable then $
\bigvee_a^b\left( {f}\right) = \int_a^b{|f'(t)|dt}=
\|f^{\prime}\|_1$ and thus (\ref{eq2.11}) can be written as:}
\begin{align}
\label{eq2.14}\left\vert {\mathcal{T}\left( {f,g}
\right)}\right\vert \le \frac{{\left( {b - a} \right) }}{{8}}
\max\{\left| {g^{\prime}\left( a \right)} \right|,\left|
{g^{\prime}\left( b \right)} \right|\}\left\Vert {f^{\prime
}}\right\Vert _{1 }.
\end{align}
\emph{The constant $\frac{1}{8}$ is the best possible.}
\end{rem}
\begin{thm} \label{thm4}\emph{Let $f,g:[a,b]\rightarrow \mathbb{R}$ be two
absolutely continuous functions on $[a,b]$. If $f' \in L_{\alpha}[a,b]$,
$\alpha,\beta>1$, $\frac{1}{\alpha}+\frac{1}{\beta}=1$ and
$\left|{g^{\prime}}\right|$ is convex on $[a,b]$, then}
\begin{align}
\left\vert {\mathcal{T}\left( {f,g} \right)
}\right\vert
\leq \frac{(b-a)^{1+\frac{1}{\beta}}}{2}\cdot {\rm
B}^{\frac{1}{\beta}} \left( { \beta+1,\beta+1} \right)\cdot
\max\{\left| {g'\left( a \right)} \right|,\left| {g'\left( b
\right)} \right|\}\cdot\left\|f'\right\|_{\alpha}.\label{eq2.15}
\end{align}
\emph{The constant $\frac{1}{2}\cdot {\rm B}^{\frac{1}{\beta}}
\left( { \beta+1,\beta+1} \right)$ is the best possible $\forall
\beta>1$, where ${\rm B}\left({\cdot,\cdot}\right)$ is the Euler
beta function.}
\end{thm}
\begin{proof}
As $f^{\prime } \in L_{\alpha}([a,b])$, applying the H\"{o}lder
inequality on the right-hand side of (\ref{eq2.3}), we have
\begin{align}
&\left\vert {\mathcal{T}\left( {f,g} \right)}\right\vert
\nonumber\\
&\leq
\frac{1}{b-a}\int_{a}^{b}{\left\vert {\left( {t-a}\right) \left[ {\frac{1}{{t-a}}
\int_{a}^{t}{g\left( u\right)
du}-\frac{1}{{b-a}}\int_{a}^{b}{g\left( u\right) du}}\right]
}\right\vert \left\vert f'\left( {t}\right) \right\vert
dt}\nonumber
\\
&\leq \frac{1}{b-a}\left(\int_{a}^{b}{\left| {t-a}\right|^{\beta} \left| {\frac{1}{{t-a}}
\int_{a}^{t}{g\left( u\right)
du}-\frac{1}{{b-a}}\int_{a}^{b}{g\left( u\right)
du}}\right|^{\beta} dt}\right)^{1/{\beta}}\label{eq2.16}
\\
&\qquad\qquad\times\left(\int_{a}^{b}{\left\vert f'\left(
{t}\right) \right\vert^{\alpha} dt}\right)^{1/{\alpha}}. \nonumber
\end{align}
As $\left|{g^{\prime}}\right|$ is convex, (\ref{eq2.5}) holds,
so by substituting (\ref{eq2.5}) in (\ref{eq2.16}) we get
\begin{align*}
\left\vert {\mathcal{T}\left( {f,g} \right)}\right\vert &\leq
\frac{1}{2\left(b-a\right)} \cdot\left\|f'\right\|_{\alpha}\cdot
\max\{\left| {g'\left( a \right)} \right|,\left| {g'\left( b
\right)} \right|\}\cdot \left(\int_a^b{ \left(
{t-a}\right)^{\beta} \left( {b-t}\right)^{\beta}dt}
\right)^{\frac{1}{\beta}}
\\
&=\frac{(b-a)^{1+\frac{1}{\beta}}}{2}\cdot {\rm
B}^{\frac{1}{\beta}} \left( {\beta+1,\beta+1} \right)\cdot
\max\{\left| {g'\left( a \right)} \right|,\left| {g'\left( b
\right)} \right|\}\cdot\left\|f'\right\|_{\alpha},
\end{align*}
which proves the inequality (\ref{eq2.15}). The sharpness is
proved in Remark \ref{remark4} below.
\end{proof}
\begin{rem}\label{remark3}
In (\ref{eq2.15}) we have the following particular cases:
\begin{enumerate}
\item If $\alpha=1$ and $\beta=\infty$, then we have
\begin{align}
\label{eq2.17}\left\vert {\mathcal{T}\left( {f,g} \right)
}\right\vert \le \frac{\left(b-a\right)}{8} \cdot \max\{\left|
{g'\left( a \right)} \right|,\left| {g'\left( b \right)}
\right|\}\cdot\left\|f'\right\|_{1}.
\end{align}
\item If $\alpha= \beta=2$, then we have
\begin{align}
\label{eq2.18}\left\vert {\mathcal{T}\left( {f,g} \right)
}\right\vert \le \frac{\left(b-a\right)^{3/2}}{2\sqrt{30}} \cdot
\max\{\left| {g'\left( a \right)} \right|,\left| {g'\left( b
\right)} \right|\}\cdot\left\|f'\right\|_{2}.
\end{align}
\item If $\alpha=\infty$ and $\beta=1$, then we have
\begin{align}
\label{eq2.19}\left\vert {\mathcal{T}\left( {f,g} \right)
}\right\vert \le \frac{\left(b-a\right)^2}{12} \cdot \max\{\left|
{g'\left( a \right)} \right|,\left| {g'\left( b \right)}
\right|\}\cdot\left\|f'\right\|_{\infty}.
\end{align}
\end{enumerate}
The constants $\frac{1}{8},\frac{1}{2\sqrt{30}},\frac{1}{12}$ are
the best possible. \emph{Moreover, if $g^{\prime}$ is bounded then
we can replace $ \max\{\left| {g'\left( a \right)} \right|,\left|
{g'\left( b \right)} \right|\}$ by $\|g^{\prime}\|_{\infty}$ in
all previous inequalities.}
\end{rem}
The sharpness of (\ref{eq2.15}) can be proved in view of the
following remark:
\begin{rem}\label{remark4}
\emph{Let $h\left({\beta}\right)=\frac{1}{2}\cdot {\rm
B}^{\frac{1}{\beta}} \left( { \beta+1,\beta+1} \right)$, $1<\beta
<\infty$. It is worth noting that:}
\begin{itemize}
\item $h\left({\beta}\right)$ \emph{is positive for all $\beta>1$
(see Figure \ref{figer1}).}
\item $h\left({\beta}\right)$\emph{ increases for all $\beta>1$
(see Figure \ref{figer2}).}
\item \emph{$\mathop {\inf }\limits_{\beta \in \left( {1,\infty
} \right)} h\left( \beta \right) = \frac{1}{12}$, which is approached as
$\beta \to 1^+$, and $\mathop {\sup }\limits_{\beta \in \left(
{1,\infty } \right)} h\left( \beta \right) = \frac{1}{8}$, which
is approached as $\beta \to \infty$. Moreover, we have shown that (in
Remarks \ref{remark1}--\ref{remark2}) the constants $\frac{1}{8}$
and $\frac{1}{12}$ are the best possible, so that
$$\frac{1}{12}\le h \left({\beta} \right) \le \frac{1}{8},\,\,\,\,\,\,\forall \beta>1$$ thus
the constant $\frac{1}{2}\cdot {\rm B}^{\frac{1}{\beta}} \left( {
\beta+1,\beta+1} \right)$ is the best possible for all $\beta>1$.}
\end{itemize}
\begin{figure}[!h]
\includegraphics[angle=0,width=2in]{b.eps}
\caption{The graph of $h(\beta)$ $(1<\beta <\infty)$.}
\label{figer1}
\end{figure}
\begin{figure}[!h]
\includegraphics[angle=0,width=2in]{beta.eps}
\caption{The graph of $h^{\prime}(\beta)$ $(1<\beta <\infty)$
which is $>0, \forall \beta$.} \label{figer2}
\end{figure}
\end{rem}
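The properties of $h(\beta)$ listed above can also be checked numerically via $\ln {\rm B}(p,q)=\ln\Gamma(p)+\ln\Gamma(q)-\ln\Gamma(p+q)$. The following Python sketch (illustrative; the grid of $\beta$ values is arbitrary) confirms the monotonicity and the bounds $\frac{1}{12}\le h(\beta)\le \frac{1}{8}$:

```python
import math

# Numerical check (illustrative) of 1/12 <= h(beta) <= 1/8 for
# h(beta) = (1/2) * B(beta+1, beta+1)^(1/beta), evaluated via
# ln B(p, q) = lgamma(p) + lgamma(q) - lgamma(p + q).
def h(beta):
    ln_B = 2 * math.lgamma(beta + 1) - math.lgamma(2 * beta + 2)
    return 0.5 * math.exp(ln_B / beta)

betas = [1.000001, 1.5, 2.0, 5.0, 10.0, 50.0, 200.0]
values = [h(t) for t in betas]
print(values)   # increasing, squeezed between 1/12 and 1/8
```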
The dual case of (\ref{eq2.19}) is incorporated in the following
theorem.
\begin{thm} \label{thm5}\emph{Let $f,g:[a,b]\rightarrow \mathbb{R}$ be two
absolutely continuous functions on $[a,b]$ such that
$\left|{f^{\prime}}\right|$ is convex on $[a,b]$ and $g' \in
L_{\infty}[a,b]$ then}
\begin{align}
\label{eq2.20} \left\vert {\mathcal{T}\left( {f,g} \right)
}\right\vert
\leq \frac{\left(b-a\right)^{2}}{12}
\left\| {g^{\prime}} \right\|_{\infty}\cdot\max\{\left|
{f^{\prime}\left( a \right)} \right|,\left| {f^{\prime}\left( b
\right)} \right|\}.
\end{align}
\emph{The constant $\frac{1}{12}$ is the best possible.}
\end{thm}
\begin{proof}
As in the proof of Theorem \ref{thm1}, since
$\left|{f^{\prime}}\right|$ is convex on $[a,b]$ and $g^{\prime}
\in L_{\infty}[a,b]$, substituting $d=t$ and $c=a$ in
(\ref{Barnett}) and using (\ref{eq2.2}), we have
\begin{align*}
&\left\vert {\mathcal{T}\left( {f,g} \right)}\right\vert
\nonumber\\
&\leq
\frac{1}{b-a}\int_{a}^{b}{\left\vert {\left( {t-a}\right) \left[ {\frac{1}{{t-a}}
\int_{a}^{t}{g\left( u\right)
du}-\frac{1}{{b-a}}\int_{a}^{b}{g\left( u\right) du}}\right]
}\right\vert \left\vert f'\left( {t}\right) \right\vert dt}
\nonumber\\
&\leq \frac{1}{2\left(b-a\right)^3}
\left\|{g^{\prime}}\right\|_{\infty}\left\{{ \left| {f'\left( b
\right)} \right|\int_a^b {\left( {t - a} \right)^2 \left( {b - t}
\right)dt} }\right.
\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \left.{+ \left|
{f'\left( a \right)} \right|\int_a^b {\left( {t - a} \right)
\left( {b - t} \right)^2 dt} }\right\}
\\
&\le \frac{1}{2\left(b-a\right)^3}
\left\|{g^{\prime}}\right\|_{\infty}\cdot\max\{\left| {f'\left( a
\right)} \right|,\left| {f'\left( b \right)} \right|\}
\\
&\qquad\qquad\qquad\qquad\qquad\times\left[ {\int_a^b {\left( {t -
a} \right)^2 \left( {b - t} \right)dt} + \int_a^b {\left( {t - a}
\right) \left( {b - t} \right)^2 dt} } \right]
\\
&=\frac{\left(b-a\right)^{2}}{12}\left\|{g^{\prime}}\right\|_{\infty}\cdot\max\{\left|
{f^{\prime}\left( a \right)} \right|,\left| {f^{\prime}\left( b
\right)} \right|\},
\end{align*}
which proves the required inequality (\ref{eq2.20}).
\end{proof}
\section{The case when $f$ and $g$ are convex (concave)\label{sec3}}
Another interesting inequality due to \v{C}eby\v{s}ev is the
following:
\begin{thm}\cite{Cebysev2}
\label{thm4.1.1}\emph{Let $f,g: \left[ {a,b} \right]\subseteq
\mathbb{R}\to \mathbb{R}$ be two integrable functions on $\left[
{a,b} \right]$ which are both monotonic in the same sense, then}
\begin{align}
\label{eq4.1.1} \frac{1}{{b - a}}\int_a^b {f\left( x
\right)g\left( x \right)dx} \ge\left( {\frac{1}{{b - a}}\int_a^b
{f\left( x \right)dx} } \right)\left( {\frac{1}{{b - a}}\int_a^b
{g\left( x \right)dx} } \right).
\end{align}
\emph{The inequality is reversed if $f$ and $g$ are monotonic in
opposite sense.}
\end{thm}
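As a quick numerical illustration of (\ref{eq4.1.1}) (the example functions are our own choices): for the nondecreasing pair $f(x)=x$, $g(x)=x^3$ on $[0,1]$ one gets $\frac{1}{5}\ge\frac{1}{2}\cdot\frac{1}{4}$, and the inequality reverses for opposite monotonicity:

```python
# Illustrative check of the Cebysev inequality (4.1.1): for f(x) = x and
# g(x) = x^3, both nondecreasing on [0, 1], mean(f*g) >= mean(f)*mean(g).
def mean(h, a, b, n=100000):
    """Midpoint-rule approximation of (1/(b-a)) * int_a^b h(x) dx."""
    w = (b - a) / n
    return sum(h(a + (k + 0.5) * w) for k in range(n)) * w / (b - a)

a, b = 0.0, 1.0
f = lambda x: x
g = lambda x: x ** 3
lhs = mean(lambda x: f(x) * g(x), a, b)    # int x^4 dx = 1/5
rhs = mean(f, a, b) * mean(g, a, b)        # (1/2) * (1/4) = 1/8
print(lhs, rhs)                            # 0.2 >= 0.125

# Monotonic in opposite sense: g2 is nonincreasing, so the
# inequality reverses.
g2 = lambda x: -x ** 3
assert mean(lambda x: f(x) * g2(x), a, b) <= mean(f, a, b) * mean(g2, a, b)
```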
Concerning the positivity of the \v{C}eby\v{s}ev functional
$\mathcal{T}\left( {f,g} \right)$ in (\ref{eq4.1.1}), Atkinson proved that:
\begin{thm}\cite{Atkinson} \label{thm4.1.3}
\emph{If both $f$ and $g$ are twice differentiable and convex on
$[a,b]$ and}
\begin{equation*}
\int_a^b {\left( {t - \frac{{a + b}}{2}} \right)g\left( t
\right)dt}=0,
\end{equation*}
then
\begin{equation}\label{eq4.1.3}
\mathcal{T}\left( {f,g} \right) \ge 0.
\end{equation}
\end{thm}
\noindent One year after Atkinson's result, Lupa\c{s} proved
the following result:
\begin{thm}\cite{Lupas} \label{thm4.1.4}
\emph{If $f,g$ are convex functions on the interval $[a,b]$, then}
\begin{align}\label{eq4.1.4}
\mathcal{T}\left( {f,g} \right)\ge \frac{{12}}{{\left( {b - a}
\right)^3 }}\int_a^b {\left( {t - \frac{{a + b}}{2}}
\right)f\left( t \right)dt} \cdot \int_a^b {\left( {t - \frac{{a
+ b}}{2}} \right)g\left( t \right)dt},
\end{align}
\emph{with equality when at least one of the functions $f$ and $g$
is linear on $[a,b]$.}
\end{thm}
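Lupa\c{s}' inequality (\ref{eq4.1.4}) can be illustrated numerically (our own sketch, with arbitrarily chosen convex examples): on $[0,1]$, $f(t)=g(t)=t^2$ gives $\mathcal{T}(f,g)=\frac{4}{45}\ge\frac{1}{12}$, and replacing $f$ by the linear function $t\mapsto t$ yields equality:

```python
# Illustrative check of Lupas' inequality (4.1.4) on [0, 1] with the
# convex pair f(t) = g(t) = t^2, plus the equality case f(t) = t.
def mean(h, a, b, n=100000):
    """Midpoint-rule approximation of (1/(b-a)) * int_a^b h(x) dx."""
    w = (b - a) / n
    return sum(h(a + (k + 0.5) * w) for k in range(n)) * w / (b - a)

def cheb_T(f, g, a, b):
    return mean(lambda x: f(x) * g(x), a, b) - mean(f, a, b) * mean(g, a, b)

def moment(h, a, b):
    """int_a^b (t - (a+b)/2) h(t) dt."""
    m = (a + b) / 2
    return mean(lambda t: (t - m) * h(t), a, b) * (b - a)

a, b = 0.0, 1.0
f = g = lambda t: t ** 2
lhs = cheb_T(f, g, a, b)                                     # 4/45
rhs = 12 / (b - a) ** 3 * moment(f, a, b) * moment(g, a, b)  # 1/12
print(lhs, rhs)

lin = lambda t: t                          # linear factor => equality
assert abs(cheb_T(lin, g, a, b)
           - 12 * moment(lin, a, b) * moment(g, a, b)) < 1e-6
```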
In 2008, Boer \cite{Boer} obtained a Lupa\c{s} type inequality
(\ref{eq4.1.4}) for $3$-convex functions. In 2012, Belbachir and
Rahmani \cite{BR} generalized Lupa\c{s} inequality for $n$-convex
functions $(n\ge2)$.
\begin{rem}
\emph{By relaxing the assumptions in Theorem \ref{thm4.1.3},
Cerone and Dragomir in \cite{CebCerDra1} refined and proved that
(\ref{eq4.1.3}) holds for a monotonic nondecreasing function $f$
and a continuous function $g$. Another related result for
nondecreasing mappings $f$ and $g$ was obtained in \cite{DraCh}.}
\end{rem}
$\bullet$ \textbf{New Bounds.} An upper bound for the
\v{C}eby\v{s}ev functional of two convex functions is proved in the
following result:
\begin{thm}
\label{thm4.3.24} \emph{Let $f,g:[a,b]\to \mathbb{R}$ be two
convex functions, then}
\begin{align}
\label{eq4.3.52}\mathcal{T}\left( {f,g} \right) \le
\frac{1}{{12}}\left( {f\left( b \right) - f\left( a \right)}
\right)\left( {g\left( b \right) - g\left( a \right)} \right).
\end{align}
\emph{The constant $\frac{1}{12}$ is the best possible in the
sense that it cannot be replaced by smaller one.}
\end{thm}
\begin{proof}
Firstly, we note that for any convex function $h$ defined on
$[a,b]$, we have
\begin{equation*}
h\left( t \right) \le \frac{{t - a}}{{b - a}}h\left( b \right) +
\frac{{b - t}}{{b - a}}h\left( a \right).
\end{equation*}
Using the identity
\begin{align}\label{eq4.3.53}
\mathcal{T}\left( {f,g} \right) &= \frac{1}{{b - a}}\int_a^b
{\left[ {f\left( t \right) - \frac{{f\left( a \right) + f\left( b
\right)}}{2}} \right]\left[ {g\left( t \right) - \frac{1}{{b -
a}}\int_a^b {g\left( s \right)ds} } \right]dt},
\end{align}
and the convexity of $f$ and $g$ on $[a,b]$, we
have
\begin{align*}
\mathcal{T}\left( {f,g} \right) &= \frac{1}{{b - a}}\int_a^b
{\left[ {f\left( t \right) - \frac{{f\left( a \right) + f\left( b
\right)}}{2}} \right]\left[ {g\left( t \right) - \frac{1}{{b -
a}}\int_a^b {g\left( s \right)ds} } \right]dt}
\\
&\le\frac{1}{{b - a}} \int_a^b {\left[ {\frac{{t - a}}{{b -
a}}f\left( b \right) + \frac{{b - t}}{{b - a}}f\left( a \right) -
\frac{{f\left( a \right) + f\left( b \right)}}{2}} \right]}
\\
&\qquad\times \left[ {\frac{{t - a}}{{b - a}}g\left( b \right) +
\frac{{b - t}}{{b - a}}g\left( a \right) - \frac{1}{{b -
a}}\int_a^b {g\left( s \right)ds} } \right]dt
\\
&= \frac{1}{{b - a}}\left\{ {\frac{{\rm{1}}}{{{\rm{12}}}}af\left(
a \right)g\left( b \right) - \frac{{\rm{1}}}{{{\rm{12}}}}af\left(
a \right)g\left( a \right)- \frac{{\rm{1}}}{{{\rm{12}}}}af\left( b
\right)g\left( b \right) + \frac{{\rm{1}}}{{{\rm{12}}}}af\left( b
\right)g\left( a \right)} \right.
\\
&\qquad\qquad\left. {- \frac{1}{{12}}bf\left( a \right)g\left( b
\right)+ \frac{1}{{12}}bf\left( a \right)g\left( a \right)
+ \frac{1}{{12}}bf\left( b \right)g\left( b \right)
- \frac{1}{{12}}bf\left( b \right)g\left( a \right)} \right\}
\\
&= \frac{1}{{12}}\left( {f\left( b \right) - f\left( a \right)}
\right)\left( {g\left( b \right) - g\left( a \right)} \right),
\end{align*}
which gives the desired inequality (\ref{eq4.3.52}). To prove the
sharpness, assume that (\ref{eq4.3.52}) holds with constant $C>0$,
i.e.,
\begin{align}
\label{eq4.3.54}\mathcal{T}\left( {f,g} \right) \le C \left(
{f\left( b \right) - f\left( a \right)} \right)\left( {g\left( b
\right) - g\left( a \right)} \right).
\end{align}
Let $[a,b]=[0,1]$ and consider $f(x)=g(x)=x$, $x\in [0,1]$, so
that we have $\int_0^1 {f\left( x \right)g\left( x \right)dx} =
\frac{1}{3}$ and $\int_0^1 {f\left( x \right)dx} = \int_0^1
{g\left( x \right)dx} = \frac{1}{2}$. Making use of
(\ref{eq4.3.54}) we get
\begin{align*}
\mathcal{T}\left( {f,g} \right)=\frac{1}{3} - \frac{1}{4} =
\frac{1}{12}\le C
\end{align*}
which shows that the constant $\frac{1}{12}$ is the best possible
in (\ref{eq4.3.52}), and thus the proof is completely finished.
\end{proof}
\begin{rem}
\emph{In Theorem \ref{thm4.3.24}, if both $f$ and $g$ are
monotonic in the same sense then}
\begin{align}
0\le\mathcal{T}\left( {f,g} \right) \le \frac{1}{{12}}\left(
{f\left( b \right) - f\left( a \right)} \right)\left( {g\left( b
\right) - g\left( a \right)} \right).
\end{align}
\end{rem}
Now, we may state the reverse of (\ref{eq4.3.52}), as follows:
\begin{thm}
\label{thm4.3.26}\emph{Let $f,g:[a,b]\to \mathbb{R}$ be two
concave functions, then}
\begin{align}
\label{eq4.3.56}\mathcal{T}\left( {f,g} \right) \ge
\frac{1}{{12}}\left( {f\left( b \right) - f\left( a \right)}
\right)\left( {g\left( b \right) - g\left( a \right)} \right).
\end{align}
\emph{The constant $\frac{1}{12}$ is the best possible in the
sense that it cannot be replaced by greater one.}
\end{thm}
\begin{proof}
The proof goes along the same lines as that of Theorem
\ref{thm4.3.24}; we omit the details.
\end{proof}
\begin{rem}
\emph{In Theorem \ref{thm4.3.26}, if both $f$ and $g$ are
monotonic but in opposite sense then}
\begin{align}
0\ge\mathcal{T}\left( {f,g} \right) \ge \frac{1}{{12}}\left(
{f\left( b \right) - f\left( a \right)} \right)\left( {g\left( b
\right) - g\left( a \right)} \right).
\end{align}
\end{rem}
\section{Introduction}
Over the past decades, neural networks have been successfully applied to data modelling due to their universal approximation power for nonlinear maps \cite{Chen1995,Cybenko1989,Hornik1989,Park1991} and their capability of learning from a collection of training samples \cite{Hecht-Nielsen1988,Rumelhart1986}. In practice, however, it is quite challenging to properly determine an appropriate architecture (here, the number of hidden nodes) of a neural network so that the resulting learner model achieves sound performance in both learning and generalization. To resolve this problem, one turns to constructive approaches for building neural networks: starting with a small sized network, hidden nodes (corresponding to sets of input weights and biases) and output weights are generated incrementally until a pre-defined termination criterion is met. As a basis for many applications, it is essential to ensure that the constructive neural network shares the universal approximation property. In \cite{Barron1993}, Barron proposed a greedy learning framework based on the work reported in \cite{Jones1992}, and established some significant results on the convergence rate. In \cite{Kwok1997}, Kwok and Yeung presented a method to construct neural networks through optimizing some objective functions. Theoretically, their proposed algorithm can generate universal approximators for any continuous nonlinear function provided that the activation function meets some conditions. It has been recognized that the process of iteratively searching for an appropriate set of parameters (input weights, biases and output weights) is time-consuming and computationally intensive, even though the universal approximation property holds, which makes such methods very difficult to employ for large-scale data analytics.
Randomized approaches for large-scale computing are highly desirable due to their effectiveness and efficiency \cite{Mahoney2011}. In machine learning for data modelling, randomized algorithms have demonstrated great potential for developing fast learner models and learning algorithms with much less computational cost \cite{LukoandJaeger2009,ScardapaneandWang2017,SI-ProfWang2016}. Readers are strongly recommended to refer to our survey paper \cite{ScardapaneandWang2017} for more details. From the algorithmic perspective, randomized learning techniques for neural networks received attention in the late 1980s \cite{Broomhead1988} and were further developed in the early 1990s \cite{Pao1994,Pao1992,Schmidt1992}. A common and basic idea behind these randomized learning algorithms is a two-step training paradigm, that is, randomly assigning the input weights and biases of the neural network and evaluating the output weights by solving a linear equation system using the well-known least squares method or its regularized versions. From the approximation theory viewpoint, the randomized Radial Basis Function (RBF) networks proposed in \cite{Broomhead1988} and the Random Vector Functional-link (RVFL) networks proposed in \cite{Pao1992} can be regarded as Random Basis Approximators (RBAs) \cite{Tyukin2009}. Thus, it is essential and interesting to look into the universal approximation capability of RBAs in the sense of probability. In \cite{Igelnik1995}, Igelnik and Pao proved that a RVFL network with random parameters (input weights and biases) chosen from the uniform distribution defined over a range can be a universal approximator with probability one for continuous functions. In \cite{Husmeier1999}, Husmeier revisited this significant result and showed that the universal approximation property of RVFL networks also holds for a symmetric interval setting of the random parameter scope if the function to be approximated meets a Lipschitz condition. In \cite{Tyukin2009}, Tyukin and Prokhorov
empirically investigated the feasibility of RBAs for data modelling and showed that a supervisory mechanism is necessary to make RVFL networks applicable. Their experiments clearly indicate that RVFL networks fail to approximate a target function with a very high probability if the setting of the random parameters is improper. This phenomenon was further studied with mathematical justifications in \cite{SI-Gorban2016}. From an implementation point of view, two key parameters are involved in the design of RVFL networks: the number of hidden nodes and the scope of the random parameters. Indeed, as one of the special features of this type of learner model, the first parameter, associated with the modelling accuracy, must be set to a large value. The second parameter is related to the approximation capability of the class of random basis functions. Obviously, these settings play very important roles in successfully building randomized learner models for real world applications. Intuitively, constructive or incremental RVFL (IRVFL) networks may be a possible solution for problem solving. However, our recent work reported in \cite{LiandWang2016} reveals the infeasibility of IRVFL networks if they are incrementally built with random input weights and biases assigned in a fixed scope and their convergence rate satisfies certain conditions. Thus, further research on supervisory mechanisms with adaptive scope setting in the random assignment of the input weights and biases, ensuring the universal approximation capability of the resulting networks, is expected.
To the development of randomized learning techniques for neural networks, our prime and original contribution in this paper is a way of assigning the random parameters under an inequality constraint while adaptively selecting their scope, which ensures the universal approximation property of the built randomized learner models. Indeed, this work is a first step towards a practical implementation of random basis function approximation theory. It should be clarified that one cannot view SCNs as a specific implementation of RVFL networks, due to some remarkable distinctions in the randomization of the learner model. Three algorithmic implementations of SCNs, namely Algorithms SC-I, SC-II and SC-III, are presented with the same supervisory mechanism for configuring the random parameters, but different methods for computing the output weights. Concretely, SC-I employs a constructive scheme that evaluates the output weight only for the newly added hidden node and keeps all of the previously obtained output weights unchanged; SC-II recalculates a part of the current output weights by solving a local least squares problem with a user-specified shifting window size; and SC-III finds all the output weights together by solving a global least squares problem with the current learner model. Our experimental results on a toy example for function approximation and on real world regression tasks demonstrate remarkable improvements in modelling performance, compared with the results obtained by existing methods such as Modified Quickprop (MQ) \cite{Kwok1997} and IRVFL \cite{LiandWang2016}.
The remainder of this paper is organized as follows: Section 2 briefly reviews constructive neural networks and recalls the infeasibility result for IRVFL networks. Section 3 details our proposed stochastic configuration networks with both theoretical analysis and algorithmic descriptions. Section 4 reports our simulation results, and Section 5 concludes this paper with some remarks on further studies.
The following notation is used throughout this paper. Let $\Gamma:=\{g_1, g_2, g_3...\}$ be a set of real-valued functions, span$(\Gamma)$ denote a function space spanned by $\Gamma$; $L_{2}(D)$ denote the space of all Lebesgue measurable functions $f=[f_1,f_2,\ldots,f_m]:\mathbb{R}^{d}\rightarrow \mathbb{R}^{m}$ defined on $D\subset \mathbb{R}^{d}$, with the $L_2$ norm defined as
\begin{equation}\label{multiple_lp}
\|f\|:=\left(\sum_{q=1}^{m}\int_{D}|f_q(x)|^2dx\right)^{1/2}<\infty.
\end{equation}
The inner product of $\theta=[\theta_1,\theta_2,\ldots,\theta_m]:\mathbb{R}^{d}\rightarrow \mathbb{R}^{m}$ and $f$ is defined as
\begin{equation}\label{multiple_inner}
\langle f,\theta\rangle:=\sum_{q=1}^{m}\langle f_q,\theta_q\rangle=\sum_{q=1}^{m}\int_{D}f_q(x)\theta_q(x)dx.
\end{equation}
In the special case that $m=1$, for a real-valued function $\psi:\mathbb{R}^{d}\rightarrow \mathbb{R}$ defined on $D\subset \mathbb{R}^{d}$, its $L_2$ norm becomes $ \|\psi\|:=(\int_{D}|\psi(x)|^2dx)^{1/2}$, while the inner product of $\psi_1$ and $\psi_2$ becomes $\langle \psi_1,\psi_2\rangle=\int_{D}\psi_1(x)\psi_2(x)dx$.
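The norm (\ref{multiple_lp}) and inner product (\ref{multiple_inner}) can be illustrated numerically. In the following Python sketch (the component functions and the domain are arbitrary choices of ours), $f=[f_1,f_2]$ with $f_1(x)=x$, $f_2(x)=x^2$ on $D=[0,1]$ gives $\|f\|=\sqrt{8/15}$, and $\langle f,\theta\rangle=5/6$ for the constant map $\theta=[1,1]$:

```python
# Illustrative computation of the vector-valued L2 norm and inner
# product defined above, via a midpoint quadrature on D = [0, 1].
def integral(h, a, b, n=100000):
    """Midpoint-rule approximation of int_a^b h(x) dx."""
    w = (b - a) / n
    return sum(h(a + (k + 0.5) * w) for k in range(n)) * w

f = [lambda x: x, lambda x: x ** 2]          # f: R -> R^2
theta = [lambda x: 1.0, lambda x: 1.0]       # theta: R -> R^2

norm_f = sum(integral(lambda x, q=q: q(x) ** 2, 0, 1) for q in f) ** 0.5
inner = sum(integral(lambda x, q=q, p=p: q(x) * p(x), 0, 1)
            for q, p in zip(f, theta))
print(norm_f, inner)   # sqrt(8/15) and 5/6
```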
\section{Related Work}
Instead of training a learner model with a fixed architecture, the process of constructive neural networks starts with a small sized network then adds hidden nodes incrementally until an acceptable tolerance is achieved. This approach does not require any prior knowledge about the complexity of the network for a given task. This section briefly reviews some closely related work on constructive neural networks. Some comments on these methods are also given.
Given a target function $f:\mathds{R}^{d}\rightarrow \mathds{R}$, suppose that we have already built an SLFNN with $L-1$ hidden nodes, i.e., $f_{L-1}(x)=\sum_{j=1}^{L-1}\beta_jg_j(w_j^\mathrm{T}x+b_j)$ ($L=1,2,\ldots$, $f_0=0$), and that the current residual error, denoted as $e_{L-1}=f-f_{L-1}$, does not reach an acceptable tolerance level. The construction process is concerned with how to incrementally add $\beta_L$ and $g_L$ ($w_L$ and $b_L$), leading to $f_{L}=f_{L-1}+\beta_Lg_L$, until the residual error falls within an expected tolerance $\epsilon$.
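The incremental step $f_{L}=f_{L-1}+\beta_Lg_L$ can be sketched on a discretized domain. In the following Python illustration (the sigmoidal candidates, the target and all parameter ranges are our own arbitrary choices, not a method advocated here), each new node receives the single-node least squares weight $\beta_L=\langle e_{L-1},g_L\rangle/\|g_L\|^2$, so the residual norm is non-increasing:

```python
import math
import random

# Schematic sketch of one constructive pass: repeatedly add a random
# sigmoidal node g_L with weight beta_L = <e_{L-1}, g_L> / ||g_L||^2,
# the least squares choice for the newly added node alone.
random.seed(0)
xs = [i / 200 for i in range(201)]
target = [math.sin(4 * x) for x in xs]
residual = target[:]                       # e_0 = f - f_0 with f_0 = 0

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

errors = [math.sqrt(dot(residual, residual))]
for L in range(50):
    w, bias = random.uniform(-10, 10), random.uniform(-10, 10)
    gL = [1 / (1 + math.exp(-(w * x + bias))) for x in xs]
    beta = dot(residual, gL) / dot(gL, gL)  # optimal weight for g_L only
    residual = [e - beta * g for e, g in zip(residual, gL)]
    errors.append(math.sqrt(dot(residual, residual)))
print(errors[0], errors[-1])               # residual norm never increases
```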
\subsection{Constructive Neural Networks: Deterministic Methods}
In \cite{Barron1993}, Barron proposed a greedy approximation framework based on Jones' Lemma \cite{Jones1992}. The main result can be stated in the following theorem.
\textbf{Theorem 1 (Barron \cite{Barron1993}).} Given a Hilbert space $G$, suppose $f\in \overline{\mbox{conv}(G)}$ (closure of the convex hull of the set G), and $\|g\|\leq b_g$ for each $g \in G$. Let $b_f>c_f>\sup \|g\|+\|f\|$. For $L=1, 2, \ldots$, the sequence of approximations $\{f_L\}$ is deterministic and described as
\begin{equation}\label{barron_th}
f_L=\alpha_Lf_{L-1}+(1-\alpha_L)g_L,
\end{equation}
where
\begin{equation}
\alpha_L=\frac{b_f^2}{b_f^2+\|f_{L-1}-f\|^2},
\end{equation}
and $g_L$ is chosen such that the following condition holds
\begin{equation}\label{greedy_condition}
\langle f_{L-1}-f,g_L-f\rangle<\frac{b_f^2-c_f^2}{2b_f^2}\|f_{L-1}-f\|^2.
\end{equation}
Then, for every $L\geq1$, $\|f-f_L\|^2\leq \frac{C^{'}}{L}$, where $C^{'}\geq b_f$.
\textbf{Remark 1.} Some optimization techniques are needed in order to find a suitable $g_L$ that meets the condition (\ref{greedy_condition}). In fact, the whole process aims to minimize $\|\alpha_Lf_{L-1}+(1-\alpha_L)g_{L}-f\|$ at each iteration, which is functionally equivalent to choosing $g_L\in G$ that maximizes $\langle f-f_{L-1},g\rangle$. Note that this constructive scheme is only applicable to target functions $f$ belonging to the closure of the convex hull of $G$, which means that convergence for all $L_2$ functions cannot be guaranteed under this framework. In addition, the new learner model $f_L$ comes from a specific convex combination of the previous model $f_{L-1}$ and the newly added term $g_L$, which results in a weaker solution than the one obtained by optimizing the left-hand side of (\ref{greedy_condition}) with respect to the coefficient $\alpha_L$.
In \cite{Kwok1997}, Kwok and Yeung proposed an incremental learning strategy for building SLFNNs and proved the universal approximation property. At each step, the input weights and biases ($w$ and $b$) of the newly added node at the hidden layer are obtained by maximizing an objective function and the output weights are evaluated using the least squares method. This theoretical result can be stated as follows:
\textbf{Theorem 2 (Kwok and Yeung \cite{Kwok1997}).} Let $\Gamma$ be a set of basis functions. Assume that span($\Gamma$) is dense in $L_2$ space and $\forall g\in \Gamma$, $0<\|g\|<b_g$ for some $b_g\in \mathds{R}^{+}$. If $g_L$ is selected so as to maximize $\langle e_{L-1},g_L\rangle^2/\|g_L\|^2$, then $\lim_{L\rightarrow +\infty}\|f-f_L\|=0$.
\textbf{Remark 2.} Theoretically, the convergence property can be guaranteed provided that we can find a $g_L$ that maximizes $\langle e_{L-1},g_L\rangle^2/\|g_L\|^2$. In practice, however, local minima occur frequently when performing gradient-based optimization. Sometimes, during the course of incremental learning, the optimization involved in constructing a new hidden node becomes ineffective because the updating of the hidden parameters is excessively slow, meaning that the reduction of the residual error is close to zero. In fact, this point was discussed in \cite{Kwok1994}, and a Modified Quickprop algorithm was suggested in \cite{Kwok1997} to alleviate this problem. However, this weakness cannot be overcome completely for some complex tasks. In essence, the Modified Quickprop algorithm, which iteratively finds appropriate hidden parameters ($w$ and $b$), may still face obstacles when the optimization of the objective function is searching in a plateau of the error surface. Once $w$ ($b$) at some iterative step falls into a region where both the first and second derivatives of the objective function with respect to $w$ ($b$) are nearly zero, the learning stalls and may be terminated. In a nutshell, derivative-based optimization processes suffer from some inherent shortcomings and are less likely to eventually generate a universal learner.
The constructive methods mentioned above consist of two phases: selecting the hidden parameters (the input weights and biases) according to certain criteria and evaluating the output weights after adding a new hidden node. Although the universal approximation property can be guaranteed by incrementally adding hidden nodes, the resulting SLFNNs usually need quite a few hidden nodes to achieve good learning performance. In practice, seeking a basis function by maximizing $\langle e_{L-1},g_L\rangle^2/\|g_L\|^2$ is very time consuming. Thus, for many real-world applications, deterministic methods for building constructive neural networks seem to have little applicability. As one pathway to generating neural networks with the universal approximation property (corresponding to perfect learning power), randomized approaches have great potential to provide faster and feasible solutions \cite{LukoandJaeger2009,ScardapaneandWang2017,SI-ProfWang2016}.
\subsection{Constructive Random Basis Approximators}
Random vector functional-link (RVFL) networks \cite{Pao1994, Pao1992} can be regarded as a randomized version of SLFNNs, where the input weights and biases are randomly assigned and fixed during the training phase, and the output weights are analytically evaluated by the least squares method \cite{Lancaster1985}. In \cite{Igelnik1995}, Igelnik and Pao theoretically justified its universal approximation property based on Monte-Carlo method with the limit-integral representation of the target function. The main result can be restated as follows:
\textbf{Theorem 3 (Igelnik and Pao \cite{Igelnik1995}).} For any compact set $D\subset\mathds{R}^d$, given $f\in C(D)$ (i.e., the set of all continuous functions defined over $D$) and any activation function $g$ that satisfies $\int_{\mathds{R}}|g(t)|^2 dt<\infty$ or $\int_{\mathds{R}}|g^{'}(t)|^2dt<\infty$, there exist a sufficiently large $L$, a set of output weights $\beta_1,\beta_2,\ldots,\beta_L$, and a probability space $\chi_L$, such that $f_L=\sum_{j=1}^L\beta_jg(x;w_j,b_j)$ can approximate $f$ with arbitrary accuracy with probability one, provided that $w$ and $b$ are randomly assigned over $\chi_L$ and follow a certain distribution. Alternatively, this result can be expressed as
\begin{equation}
\lim_{L\rightarrow+\infty}E\Big(\int_{D}|f(x)-f_{L}(x)|^2dx\Big)=0,
\end{equation}
where $E$ is the expectation operator with respect to the probability space $\chi_L$.
\textbf{Remark 3.} In \cite{Husmeier1999}, Husmeier revisited the universal approximation property of RVFL networks with a symmetric interval setting for the random parameters. Strictly speaking, such a property holds only for target functions satisfying a Lipschitz condition. From our experience, however, most data modelling problems from real-world applications meet this condition. Thus, for simplicity, we adopt the symmetric interval setting for the random parameters in this paper. It should be noted that such a special setting does not limit the development of our framework at all; it is merely a matter of implementation.
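As a minimal illustration of the randomized scheme behind Theorem 3, the sketch below fits an RVFL-style model: the input weights and biases are drawn once from a symmetric interval $[-\lambda,\lambda]$ (the setting adopted in Remark 3) and kept fixed, while the output weights are obtained by the least squares method. The sigmoidal activation and the particular interface are illustrative assumptions, not a reference implementation.

```python
import numpy as np

def rvfl_fit(X, T, L, scale=1.0, seed=None):
    """Fit an RVFL-style model: the input weights and biases are drawn
    once from the symmetric interval [-scale, scale] and kept fixed;
    the output weights are obtained by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-scale, scale, size=(X.shape[1], L))
    b = rng.uniform(-scale, scale, size=L)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # sigmoidal hidden outputs
    beta, *_ = np.linalg.lstsq(H, T, rcond=None)  # least squares output weights
    return W, b, beta

def rvfl_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

With enough hidden nodes and a suitable scope $\lambda$, such a model fits smooth targets well, which is consistent with the probabilistic statement of Theorem 3.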
The theoretical result stated in Theorem 3 is fundamental and significant for building randomized neural networks. Similar to the case of applying SLFNNs to real-world problems, we need to develop effective algorithms to implement such a class of randomized predictive models. It is natural to think of an incremental implementation of RVFL (IRVFL) networks, where the model is built incrementally with random assignment of the input weights and biases and constructive evaluation of the output weights. Although the construction process of IRVFL networks seems to be computationally efficient, unfortunately the universal approximation property of the constructed learner cannot be guaranteed. The following Theorem 4 from our recent work \cite{LiandWang2016} has justified this point.
\textbf{Theorem 4 (Li and Wang \cite{LiandWang2016}).} Let span($\Gamma$) be dense in $L_2$ space and $\forall g\in \Gamma$, $0<\|g\|<b_g$ for some $b_g\in \mathds{R}^{+}$.
Suppose that $g_L$ is randomly generated and $\beta_L$ is given by
\begin{equation}\label{th4_con1}
\beta_{L}=\frac{\langle e_{L-1},g_L\rangle}{\|g_L\|^2}.
\end{equation}
For sufficiently large $L$, if the following conditions hold:
\begin{eqnarray}\label{th4_con2}
\frac{\|e_{L-1}\|^2-\|e_{L}\|^2}{\|e_{L-1}\|^2}\leq\varepsilon_L<1,
\end{eqnarray}
and
\begin{eqnarray}\label{th4_con3}
\lim_{L\rightarrow \infty}\prod_{k=1}^{L}(1-\varepsilon_k)=\varepsilon>0.
\end{eqnarray}
Then, the constructive neural network with random weights, $f_L$, has no universal approximation capability, that is,
\begin{eqnarray}
\lim_{L\rightarrow \infty}\|f-f_L\|\geq\sqrt{\varepsilon}\|f\|.
\end{eqnarray}
Theorem 4 reveals that IRVFL networks may not share the universal approximation property, if the output weights are taken as (\ref{th4_con1}) and the residual error sequence meets conditions (\ref{th4_con2}) and (\ref{th4_con3}). The consequence stated in Theorem 4 still holds if the output weights are evaluated using the least squares method. We state this interesting result in the following Theorem 5.
\textbf{Theorem 5.} Let span($\Gamma$) be dense in $L_2$ space and $\forall g\in \Gamma$, $0<\|g\|<b_g$ for some $b_g\in \mathds{R}^{+}$.
Suppose that $g_L$ is randomly generated and the output weights are calculated by solving the global least square problem, i.e.,
$[\beta_1^{*}, \beta_2^{*},\ldots,\beta_{L}^{*}]=\arg \min_{\beta}\|f-\sum_{j=1}^{L}\beta_jg_j\|$.
For sufficiently large $L$, if (\ref{th4_con2}) and (\ref{th4_con3}) hold for the residual error sequence $\|e_L^{*}\|$ (corresponding to $\|e_L\|$ in Theorem 4), then the constructive neural network with random hidden nodes has no universal approximation capability, that is,
\begin{eqnarray}\label{th5_con}
\lim_{L\rightarrow \infty}\|f-\sum_{j=1}^{L}\beta_j^{*}g_j\|\geq\sqrt{\varepsilon}\|f\|.
\end{eqnarray}
\textbf{Proof.} Simple computations verify that the sequence $\{\|e_{L}^{*}\|^2\}$ is monotonically decreasing and convergent. Indeed, letting $\tilde{\beta}_{L}=\langle e_{L-1}^{*},g_L\rangle/\|g_L\|^2$, we have
\begin{eqnarray}
\|e_{L}^{*}\|^2&\leq&\langle e_{L-1}^{*}-\tilde{\beta}_{L}g_L,e_{L-1}^{*}-\tilde{\beta}_{L}g_L\rangle \nonumber\\
&=&\|e_{L-1}^{*}\|^2-\frac{\langle e_{L-1}^{*},g_L\rangle^2}{\|g_L\|^2}\nonumber\\
&\le& \|e_{L-1}^{*}\|^2.
\end{eqnarray}
Then, (\ref{th5_con}) can be easily obtained by following the proof of Theorem 4.
Clearly, the universal approximation property is conditional for IRVFL networks however the output weights are evaluated. This happens for multiple possible reasons (e.g., the scope setting and/or an improper way of assigning the random parameters) that are hard to characterize mathematically. In this paper, we propose a solution for constructing randomized learner models under a supervisory mechanism to ensure the universal approximation property.
\section{Stochastic Configuration Networks}
The universal approximation property is fundamental to a learner model for data modelling. Logically, one cannot expect to build a neural network with good generalization but poor learning performance. Therefore, it is essential for SCNs to possess the universal approximation property. This section details our proposed SCNs, including proofs of the universal approximation property and algorithm descriptions. Some remarks on algorithmic implementations are also given.
\subsection{Universal Approximation Property}
Given a target function $f:\mathds{R}^{d}\rightarrow \mathds{R}^{m}$, suppose that a SCN with $L-1$ hidden nodes has already been constructed, that is, $f_{L-1}(x)=\sum_{j=1}^{L-1}\beta_jg_j(w_j^\mathrm{T}x+b_j)$ ($L=1,2,\ldots$, $f_0=0$), where $\beta_j=[\beta_{j,1},\ldots,\beta_{j,m}]^\mathrm{T}$. Denote the current residual error by $e_{L-1}=f-f_{L-1}=[e_{L-1,1},\ldots,e_{L-1,m}]$. If $\|e_{L-1}\|$ does not reach a pre-defined tolerance level, we need to generate a new random basis function $g_L$ (with parameters $w_L$ and $b_L$) and evaluate the output weights $\beta_L$ so that the new model $f_{L}=f_{L-1}+\beta_Lg_L$ has an improved residual error. In this paper, we propose a method to randomly assign the input weights and biases under a supervisory mechanism (an inequality constraint), and provide three ways to evaluate the output weights of SCNs with $L$ hidden nodes (i.e., after adding the new node). Mathematically, we can prove that the resulting randomized learner models, incrementally built based on our stochastic configuration idea, are universal approximators.
\textbf{Theorem 6.} Suppose that span($\Gamma$) is dense in $L_2$ space and $\forall g\in \Gamma$, $0<\|g\|<b_g$ for some $b_g\in \mathds{R}^{+}$. Given $0<r<1$ and a nonnegative real number sequence $\{\mu_L\}$ with $\lim_{L\rightarrow+\infty}\mu_L=0$ and $\mu_L\leq (1-r)$, for $L=1,2,\ldots$, define
\begin{equation}\label{delta1}
\delta_{L}=\sum_{q=1}^{m}\delta_{L,q}, \delta_{L,q}=(1-r-\mu_L)\|e_{L-1,q}\|^2, q=1,2,...,m.
\end{equation}
If the random basis function $g_L$ is generated to satisfy the following inequalities:
\begin{equation}\label{step1}
\langle e_{L-1,q},g_L\rangle^2\geq b_g^2\delta_{L,q}, q=1,2,...,m,
\end{equation}
and the output weights are constructively evaluated by
\begin{equation}\label{step2}
\beta_{L,q}=\frac{\langle e_{L-1,q},g_L\rangle}{\|g_L\|^2}, q=1,2,\ldots,m.
\end{equation}
Then, we have $\lim_{L\rightarrow +\infty}\|f-f_L\|=0,$ where $f_L=\sum_{j=1}^{L}\beta_{j}g_j$, $\beta_j=[\beta_{j,1},\ldots,\beta_{j,m}]^{\mathrm{T}}$.
\textbf{Proof.} According to (\ref{step2}), it is easy to verify that $\{\|e_{L}\|^2\}$ is monotonically decreasing. Thus, the sequence $\{\|e_L\|\}$ is convergent as $L\rightarrow\infty$. From (\ref{delta1}), (\ref{step1}) and (\ref{step2}), we have
\begin{eqnarray}
&&\|e_L\|^2-(r+\mu_L)\|e_{L-1}\|^2\nonumber\\&=& \sum_{q=1}^m\left(\langle e_{L-1,q}-\beta_{L,q}g_L,e_{L-1,q}-\beta_{L,q}g_L\rangle-(r+\mu_L)\langle e_{L-1,q},e_{L-1,q}\rangle\right)\nonumber\\
&=&\sum_{q=1}^m\left((1-r-\mu_L)\langle e_{L-1,q},e_{L-1,q}\rangle-2\langle e_{L-1,q},\beta_{L,q}g_L\rangle+\langle\beta_{L,q}g_L,\beta_{L,q}g_L\rangle\right)\nonumber\\
&=&(1-r-\mu_L)\|e_{L-1}\|^2-\frac{\sum_{q=1}^m\langle e_{L-1,q},g_L\rangle^2}{\|g_L\|^2}\nonumber\\
&=&\delta_L-\frac{\sum_{q=1}^m\langle e_{L-1,q},g_L\rangle^2}{\|g_L\|^2}\nonumber\\
&\leq& \delta_L-\frac{\sum_{q=1}^m\langle e_{L-1,q},g_L\rangle^2}{b_g^2}\leq0.
\end{eqnarray}
Therefore, the following inequality holds:
\begin{equation}\label{contract}
\|e_L\|^2\leq r\|e_{L-1}\|^2+\gamma_L, (\gamma_L=\mu_L\|e_{L-1}\|^2\geq 0).
\end{equation}
Note that $\lim_{L\rightarrow+\infty}\gamma_L=0$, by using (\ref{contract}), we can easily show that $\lim_{L\rightarrow +\infty}\|e_{L}\|^2=0$ which implies $\lim_{L\rightarrow +\infty}\|e_{L}\|=0$. This completes the proof of Theorem 6.\\
Theorem 6 provides a constructive scheme, i.e., (\ref{step1}) and (\ref{step2}), that consequently leads to a universal approximator. Unlike the strategy of maximizing some objective function in \cite{Kwok1997}, the supervisory mechanism (\ref{step1}), which aims at finding appropriate $w_L$ and $b_L$ for a new hidden node, weakens the demanding condition of achieving the maximum value of $\langle e_{L-1},g_L\rangle^2$ or $\langle e_{L-1},g_L\rangle^2/\|g_{L}\|^2$ (for the case $m=1$). In fact, the existence of $w_L$ and $b_L$ satisfying (\ref{step1}) can easily be deduced because $\Psi(w,b)=\sum_{q=1}^m\langle e_{L-1,q},g_L\rangle^2/\|g_{L}\|^2$ is a continuous function in the parameter space, and $\delta_L=(1-r-\mu_L)\|e_{L-1}\|^2$ can be far smaller than $\|e_{L-1}\|^2$ once $r$ is selected close to 1. Overall, the supervisory mechanism described in (\ref{step1}) not only makes it possible to randomly assign the hidden parameters, which is more flexible and efficient for generating a new hidden node, but also forces the residual error towards zero along the constructive process.
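In implementation terms, the supervisory mechanism (\ref{step1}) amounts to a simple acceptance test applied to each randomly drawn candidate node. The sketch below evaluates this test on sampled data; the matrix shapes and the candidate interface are illustrative assumptions.

```python
import numpy as np

def passes_supervisory_check(e_prev, h, r, mu, b_g):
    """Empirical version of the supervisory inequality of Theorem 6:
    for every output dimension q,
        <e_{L-1,q}, g_L>^2 >= b_g^2 (1 - r - mu_L) ||e_{L-1,q}||^2.

    e_prev : residual matrix of shape (N, m)
    h      : outputs of the candidate hidden node, shape (N,)
    """
    lhs = (e_prev.T @ h) ** 2                                # shape (m,)
    rhs = b_g**2 * (1.0 - r - mu) * np.sum(e_prev**2, axis=0)
    return bool(np.all(lhs >= rhs))
```

A candidate node well correlated with the residual passes the test, while a node orthogonal to the residual is rejected, which is exactly the data dependence emphasized in Remark 4.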
\textbf{Remark 4.} Our proposed supervisory mechanism in (\ref{step1}) indicates that the random assignment of $w_L$ and $b_L$ should be constrained and data dependent. That is to say, the configuration of the hidden parameters needs to be relevant to the given training samples, rather than relying totally on the distribution and/or scope setting of the random weights and biases. To the best of our knowledge, our attempt at the supervisory mechanism (\ref{step1}) is the first of its kind in the design of randomized learner models, and it fills the gap between solving a global nonlinear optimization problem (usually quite demanding in both time and space, since an iterative solution seems inevitable) and freely assigning the hidden parameters at random without any constraint. It is worth mentioning that Theorem 6 remains valid if the learning parameter $r$ is not fixed but set according to an increasing sequence approaching 1 (always less than 1). This makes the SC algorithm more flexible for randomly searching $w_L$ and $b_L$ even when the residual error becomes smaller. It has been observed that finding appropriate $w_L$ and $b_L$ for the newly added node becomes more challenging as the residual error decreases. In this case, we need to set the value of $r$ extremely close to 1.
It is obvious that $\beta_L=[\beta_{L,1},\ldots,\beta_{L,m}]^{\mathrm{T}}$ in Theorem 6 is analytically evaluated by $\beta_{L,q}=\langle e_{L-1,q},g_L\rangle/\|g_L\|^2$ and remains fixed during further adding steps. This determination scheme, however, may cause a very slow convergence rate for the constructive process. Thus, we consider a recalculation scheme for the output weights, that is, once $g_j (j=1,2,\ldots,L)$ have been generated according to (\ref{step1}), $\beta_1,\beta_2,\ldots,\beta_{L}$ are re-evaluated by minimizing the global residual error. The following Theorem 7 establishes the universal approximation property when the least squares method is applied to update the output weights after each addition.
Let $[\beta_1^{*}, \beta_2^{*},\ldots,\beta_{L}^{*}]=\arg \min_{\beta}\|f-\sum_{j=1}^{L}\beta_jg_j\|$, $e_{L}^{*}=f-\sum_{j=1}^{L}\beta_j^{*}g_j=[e_{L,1}^{*},\ldots,e_{L,m}^{*}]$, and define intermediate values $\tilde{\beta}_{L,q}=\langle e_{L-1,q}^{*},g_L\rangle/\|g_L\|^2$ for $q=1,\ldots,m$ and $\tilde{e}_{L}=e_{L-1}^{*}-\tilde{\beta}_{L}g_L$, where $\tilde{\beta}_{L}=[\tilde{\beta}_{L,1},\ldots,\tilde{\beta}_{L,m}]^{\mathrm{T}}$ and $e_0^{*}=f$.
\textbf{Theorem 7.} Suppose that span($\Gamma$) is dense in $L_2$ space and $\forall g\in \Gamma$, $0<\|g\|<b_g$ for some $b_g\in \mathds{R}^{+}$. Given $0<r<1$ and a nonnegative real number sequence $\{\mu_L\}$ with $\lim_{L\rightarrow+\infty}\mu_L=0$ and $\mu_L\leq (1-r)$, for $L=1,2,\ldots$, define
\begin{equation}
\delta_{L}^{*}=\sum_{q=1}^{m}\delta_{L,q}^{*}, \delta_{L,q}^{*}=(1-r-\mu_L)\|e_{L-1,q}^{*}\|^2, q=1,2,...,m.
\end{equation}
If the random basis function $g_L$ is generated to satisfy the following inequalities:
\begin{equation}\label{step3}
\langle e_{L-1,q}^{*},g_L\rangle^2\geq b_g^2\delta_{L,q}^{*}, q=1,2,...,m,
\end{equation}
and the output weights are evaluated by
\begin{equation}\label{step4}
[\beta_1^{*}, \beta_2^{*},\ldots,\beta_{L}^{*}]=\arg \min_{\beta}\|f-\sum_{j=1}^{L}\beta_jg_j\|.
\end{equation}
Then, we have $\lim_{L\rightarrow +\infty}\|f-f_L^{*}\|=0,$ where $f_L^{*}=\sum_{j=1}^{L}\beta_{j}^{*}g_j$, $\beta_{j}^{*}=[\beta^{*}_{j,1},\ldots,\beta^{*}_{j,m}]^{\mathrm{T}}$.
\textbf{Proof.} It is easy to show that $\|e_{L}^{*}\|^2\leq \|\tilde{e}_{L}\|^2=\|e_{L-1}^{*}-\tilde{\beta}_{L}g_L\|^2\leq \|e_{L-1}^{*}\|^2\leq \|\tilde{e}_{L-1}\|^2$ for $L=1,2,\ldots$, so $\{\|e_{L}^{*}\|^2\}$ is monotonically decreasing and convergent.
Hence, we have
\begin{eqnarray}
&&\|e_L^{*}\|^2-(r+\mu_L)\|e_{L-1}^{*}\|^2\nonumber\\&\leq&\|\tilde{e}_{L}\|^2-(r+\mu_L)\|e_{L-1}^{*}\|^2\nonumber\\
&=&\sum_{q=1}^{m}\left(\langle e_{L-1,q}^{*}-\tilde{\beta}_{L,q}g_L,e_{L-1,q}^{*}-\tilde{\beta}_{L,q}g_L\rangle-(r+\mu_L)\langle e_{L-1,q}^{*},e_{L-1,q}^{*}\rangle\right)\nonumber\\
&=&\sum_{q=1}^{m}\left((1-r-\mu_L)\langle e_{L-1,q}^{*},e_{L-1,q}^{*}\rangle-2\langle e_{L-1,q}^{*},\tilde{\beta}_{L,q}g_L\rangle+\langle\tilde{\beta}_{L,q}g_L,\tilde{\beta}_{L,q}g_L\rangle\right)\nonumber\\
&=&(1-r-\mu_L)\|e_{L-1}^{*}\|^2-\frac{\sum_{q=1}^{m}\langle e_{L-1,q}^{*},g_L\rangle^2}{\|g_L\|^2}\nonumber\\
&=&\delta_L^{*}-\frac{\sum_{q=1}^{m}\langle e_{L-1,q}^{*},g_L\rangle^2}{\|g_L\|^2}\nonumber\\
&\leq& \delta_L^{*}-\frac{\sum_{q=1}^{m}\langle e_{L-1,q}^{*},g_L\rangle^2}{b_g^2}\leq0.
\end{eqnarray}
Using the same arguments in the proof of Theorem 6, we can obtain $\lim_{L\rightarrow +\infty}\|e_{L}^{*}\|=0$, that completes the proof of Theorem 7.
Evaluation of the output weights in Theorem 7 is straightforward with the use of the Moore-Penrose generalized inverse \cite{Lancaster1985}. Unfortunately, this method is infeasible for large-scale data analytics. To resolve this problem, we suggest a trade-off solution based on a window-shifting concept in the global least squares method (i.e., we only optimize a part of the output weights after the number of hidden nodes exceeds a given window size). Indeed, this selective scheme for evaluating the output weights is meaningful and significant for large-scale data processing. Similar to the proof of Theorem 7, the resulting SCN shares the universal approximation property. We state this result in the following Theorem 8 and omit the detailed proof.
\textbf{Theorem 8.} Suppose that span($\Gamma$) is dense in $L_2$ space and $\forall g\in \Gamma$, $0<\|g\|<b_g$ for some $b_g\in \mathds{R}^{+}$. Given $0<r<1$ and a nonnegative real number sequence $\{\mu_L\}$ with $\lim_{L\rightarrow+\infty}\mu_L=0$ and $\mu_L\leq (1-r)$, for a given window size $K$ and $L=1,2,\ldots$, define
\begin{equation}
\delta_{L}^{*}=\sum_{q=1}^{m}\delta_{L,q}^{*}, \delta_{L,q}^{*}=(1-r-\mu_L)\|e_{L-1,q}^{*}\|^2, q=1,2,...,m.
\end{equation}
If the random basis function $g_L$ is generated to satisfy the following inequalities:
\begin{equation}\label{step5}
\langle e_{L-1,q}^{*},g_L\rangle^2\geq b_g^2\delta_{L,q}^{*}, q=1,2,...,m,
\end{equation}
and, as $L\leq K$, the output weights are evaluated by
\begin{equation}\label{step6}
[\beta_1^{*}, \beta_2^{*},\ldots,\beta_{L}^{*}]=\arg \min_{\beta}\|f-\sum_{j=1}^{L}\beta_jg_j\|;
\end{equation}
Otherwise (i.e., $L>K$), the output weights are selectively evaluated (i.e., keep $\beta_1^{*},\ldots,\beta_{L-K}^{*}$ unchanged and renew the remaining $\beta_{L-K+1}, \ldots,\beta_{L}$) by
\begin{equation}\label{step7}
[\beta_{L-K+1}^{*},\beta_{L-K+2}^{*},\ldots,\beta_{L}^{*}]=\arg \min_{\beta_{L-K+1}, \ldots,\beta_{L}}\|f-\sum_{j=1}^{L-K}\beta_j^{*}g_j-\sum_{j=L-K+1}^{L}\beta_jg_j\|.
\end{equation}
Then, we have $\lim_{L\rightarrow +\infty}\|f-f_L^{*}\|=0,$ where $f_L^{*}=\sum_{j=1}^{L}\beta_{j}^{*}g_j$, $\beta_{j}^{*}=[\beta^{*}_{j,1},\ldots,\beta^{*}_{j,m}]^{\mathrm{T}}$.
\subsection{Algorithm Description}
This subsection details the proposed SC algorithms, i.e., SC-I, SC-II and SC-III, which are associated with Theorems 6, 8 and 7, respectively. In general, the main components of our proposed SC algorithms can be summarized as follows:
\begin{itemize}
\item \textbf{Configuration of Hidden Parameters }: Randomly assigning the input weights and biases to meet the constraint (\ref{step1}) ((\ref{step3}) or (\ref{step5})), then generating a new hidden node and adding it to the current learner model.
\item \textbf{Evaluation of Output Weights}: Constructively or selectively determining the output weights of the current learner model.
\end{itemize}
\begin{table}[h!]\label{sc1}
\begin{center}
\begin{tabular}{lll}
\toprule
\textbf{Algorithm SC-I} \\
\midrule
Given inputs $X=\{x_1,x_2,\ldots,x_N\}$, $x_i\in \mathds{R}^{d}$ and outputs $T=\{t_1,t_2,\ldots,t_N\}$, $t_i\in \mathds{R}^{m}$; \\
Set maximum number of hidden nodes $L_{max}$, expected error tolerance $\epsilon$, maximum times\\ of random configuration $T_{max}$; Choose a set of positive scalars $\Upsilon\!=\{\lambda_{min}\!:\!\Delta\lambda\!:\!\lambda_{max}\}$;\\
\midrule
\textbf{1.} Initialize $e_0:=[t_1,t_2,\ldots,t_N]^\mathrm{T}$, $0<r<1$, two empty sets $\Omega$ and $W$;\\
\textbf{2.} \textbf{While} $L\leq L_{max}$ AND $\|e_0\|_{F}>\epsilon$, \textbf{Do}\\
\hspace{7.8mm}{\textbf{Phase 1: Hidden Parameters Configuration (Step 3-17)}} \\
\textbf{3.}\:\:\:\:\:\:\:\textbf{For} $\lambda \in \Upsilon$, \textbf{Do}\\
\textbf{4.}\:\:\:\:\:\:\:\:\:\:\:\:\:\:\textbf{For} $k=1,2\ldots,T_{max}$, \textbf{Do}\\
\textbf{5.}\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:Randomly assign $\omega_L$ and $b_L$ from $[-\lambda,\lambda]^d$ and $[-\lambda,\lambda]$, respectively;\\
\textbf{6.}\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:Calculate $h_{L}$, $\xi_{L,q}$ based on Eq. (\ref{hiddennode}) and (\ref{factor1}), and $\mu_L=(1-r)/(L+1)$;\\
\textbf{7.}\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\textbf{If}\:\:\:$min\{\xi_{L,1},\xi_{L,2},...,\xi_{L,m}\}\geq 0$\\
\textbf{8.}\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\textbf{Save} $w_L$ and $b_L$ in $W$, $\xi_L=\sum_{q=1}^{m}\xi_{L,q}$ in $\Omega$, respectively;\\
\textbf{9.}\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\textbf{Else} go back to \textbf{Step 4}\\
\textbf{10.}\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\textbf{End If}\\
\textbf{11.}\:\:\:\:\:\:\:\:\:\:\:\:\:\textbf{End For}\:(corresponds to \textbf{Step 4}) \\
\textbf{12.}\:\:\:\:\:\:\:\:\:\:\:\:\:\textbf{If}\:\:$W$ is not empty\\
\textbf{13.}\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:Find $w_L^{*}$, $b_L^{*}$ that maximize $\xi_L$ in $\Omega$, and set $H_L=[h^*_1,h^*_2,\ldots,h^*_L]$;\\
\textbf{14.}\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\textbf{Break} (go to \textbf{Step 18}); \\
\textbf{15.}\:\:\:\:\:\:\:\:\:\:\:\:\:\textbf{Else} randomly take $\tau\in (0,1-r)$, renew $r:=r+\tau$, return to \textbf{Step 4};\\
\textbf{16.}\:\:\:\:\:\:\:\:\:\:\:\:\:\textbf{End If}\\
\textbf{17.}\:\:\:\:\:\:\textbf{End For} (corresponds to \textbf{Step 3})\\
\hspace{8.9mm}{\textbf{Phase 2: Output Weights Determination}} \\
\textbf{18.}\:\:\:\:\:\:Calculate $\beta_{L,q}=(e_{L-1,q}^{\mathrm{T}}\cdot h^{*}_L)/(h_L^{*\mathrm{T}}\cdot h^{*}_L)$, \:$q=1,2,\ldots,m$;\\ \textbf{19.}\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:$\beta_{L}=[\beta_{L,1},\ldots,\beta_{L,m}]^{\mathrm{T}}$;\\
\textbf{20.}\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:$e_L=e_{L-1}-\beta_Lh_{L}^{*}$; \\
\textbf{21.}\:\:\:\:\:\:Renew $e_0:=e_L$; $L:=L+1$;\\
\textbf{22.} \textbf{End While}\\
\textbf{23.} \textbf{Return} $\beta_1, \beta_2, \ldots, \beta_L$, $\omega^{*}=[\omega^{*}_1,\ldots,\omega^{*}_L]$ and $b^{*}=[b^{*}_1,\ldots,b^{*}_L]$.\\
\bottomrule
\end{tabular}
\end{center}
\end{table}
Consider a training dataset with inputs $X=\{x_1,x_2,\ldots,x_N\}$, $x_i=[x_{i,1},\ldots,x_{i,d}]^\mathrm{T}\in \mathds{R}^{d}$, and corresponding outputs $T=\{t_1,t_2,\ldots,t_N\}$, where $t_i=[t_{i,1},\ldots,t_{i,m}]^\mathrm{T}\in \mathds{R}^{m}$, $i=1,\ldots,N$. Denote by $e_{L-1}(X)=[e_{L-1,1}(X),e_{L-1,2}(X),\ldots,e_{L-1,m}(X)]^\mathrm{T}\in \mathds{R}^{N\times m}$ the corresponding residual error vector before adding the $L$-th new hidden node, where $e_{L-1,q}(X)=[e_{L-1,q}(x_1),\ldots,e_{L-1,q}(x_N)]\in \mathds{R}^N$ with $q=1,2,\ldots,m$. Let
\begin{equation}\label{hiddennode}
h_L(X)=[g_{L}(w_L^\mathrm{T}x_1+b_L),g_{L}(w_L^\mathrm{T}x_2+b_L),\ldots,g_{L}(w_L^\mathrm{T}x_N+b_L)]^\mathrm{T},
\end{equation}
which stands for the activation of the $L$-th hidden node for the input $x_i$, $i=1,2,\ldots,N$. The (current) hidden layer output matrix can be expressed as $H_L=[h_1,h_2,\ldots,h_L]$.
In practice, the target function is presented as a collection of input-output data pairs. So $\beta_{L,q}=\langle e_{L-1,q},g_L\rangle/\|g_L\|^2$ ($q=1,\ldots,m$) becomes
\begin{equation}
\beta_{L,q}=\frac{e_{L-1,q}(X)^{\mathrm{T}}\cdot h_{L}(X)}{h_{L}(X)^{\mathrm{T}}\cdot h_{L}(X)}, \:\:\:q=1,2,\ldots,m.
\end{equation}
For the sake of brevity, we introduce a set of variables $\xi_{L,q}, q=1,2,\ldots,m$, and use them in the algorithm descriptions (pseudo code):
\begin{equation}\label{factor1}
\xi_{L,q}=\left(\frac{\Big(e_{L-1,q}(X)^{\mathrm{T}}\cdot h_L(X)\Big)^2}{h_L(X)^{\mathrm{T}}\cdot h_L(X)}-(1-r-\mu_L)e_{L-1,q}(X)^{\mathrm{T}}e_{L-1,q}(X)\right).
\end{equation}
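Putting (\ref{hiddennode}) and (\ref{factor1}) together, the node-configuration phase of SC-I (Steps 3-17) can be sketched as below. For brevity, this sketch fixes a single scope $\lambda$ and omits the adaptive renewal of $r$, so it is an illustrative simplification rather than the full algorithm.

```python
import numpy as np

def configure_hidden_node(X, e_prev, lam, r, mu, T_max=100, seed=None):
    """Sketch of Phase 1 of SC-I: draw up to T_max candidate pairs
    (w, b) from [-lam, lam], keep those with min_q xi_{L,q} >= 0, and
    return the candidate with the largest xi_L = sum_q xi_{L,q}.
    Returns None when no candidate satisfies the constraint."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(T_max):
        w = rng.uniform(-lam, lam, size=X.shape[1])
        b = rng.uniform(-lam, lam)
        h = 1.0 / (1.0 + np.exp(-(X @ w + b)))             # node outputs h_L(X)
        xi = (e_prev.T @ h) ** 2 / (h @ h) \
             - (1.0 - r - mu) * np.sum(e_prev**2, axis=0)  # xi_{L,q}
        if np.min(xi) >= 0.0 and (best is None or xi.sum() > best[0]):
            best = (float(xi.sum()), w, b, h)
    return best
```

Adding the accepted node with its analytic output weight $\beta_{L,q}=e_{L-1,q}(X)^{\mathrm{T}}h_L(X)/(h_L(X)^{\mathrm{T}}h_L(X))$ then reduces the residual error, as guaranteed by Theorem 6.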
Recall that for a given window size $K$ in SC-II, as $L\leq K$, the suboptimal solution $\beta^{*}=[\beta^{*}_1,\beta^{*}_2,\ldots,\beta^{*}_L]^{\mathrm{T}}\in \mathds{R}^{L\times m}$ can be computed using the standard least squares method, that is,
\begin{equation}
\beta^{*}=\arg\min_{\beta}\|H_L\beta-T\|_{F}^2=H^{\dagger}_LT,
\end{equation}
where $H^{\dagger}_L$ is the Moore-Penrose generalized inverse \cite{Lancaster1985} and $\|\cdot\|_{F}$ represents the Frobenius norm.
As $L>K$, a portion of the output weights can be evaluated by
\begin{equation}
\beta^{window}=\arg\min_{\beta}\|H_K\beta-(T-\tilde{H}_K\beta^{previous})\|_{F}^2=H^{\dagger}_K(T-\tilde{H}_K\beta^{previous}),
\end{equation}
where $\beta^{window}$ consists of the most recent $\beta_{L-K+1},\ldots,\beta_L$, $H_K$ is composed of the last $K$ columns of $H_L$, i.e., $H_K=[h_{L-K+1},\ldots,h_L]$, $\tilde{H}_K=[h_1,\ldots,h_{L-K}]$ collects the remaining columns, and the previous weights $\beta^{previous}=[\beta_1^{*},\ldots,\beta_{L-K}^{*}]^{\mathrm{T}}$ stay unchanged. It is easy to see that SC-II and SC-III coincide once $K\geq L_{max}$.
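A minimal sketch of this selective evaluation (Steps 8-11 of SC-II): freeze the output weights of the first $L-K$ nodes and solve the least squares problem only over the last $K$ columns of the hidden output matrix. The Moore-Penrose inverse is computed with \texttt{numpy.linalg.pinv}; the shapes are illustrative assumptions.

```python
import numpy as np

def window_output_weights(H, T, beta_frozen, K):
    """Selective (window-based) output weight evaluation of SC-II:
    keep the weights of the first L-K nodes frozen and solve the
    least squares problem only over the last K columns of H.

    H           : hidden output matrix, shape (N, L) with L > K
    beta_frozen : frozen weights of the first L-K nodes, shape (L-K, m)
    """
    H_old, H_win = H[:, :-K], H[:, -K:]
    # Subtract the frozen contribution, then fit the window part.
    beta_win = np.linalg.pinv(H_win) @ (T - H_old @ beta_frozen)
    return np.vstack([beta_frozen, beta_win])
```

Since only an $N\times K$ pseudo-inverse is needed per step, the cost no longer grows with the total number of nodes $L$, which is the point of the trade-off for large-scale data.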
\begin{table}[h!]\label{sc2}
\begin{center}
\begin{tabular}{lcl}
\toprule
\textbf{Algorithm SC-II} \\
\midrule
Given the same items in \textbf{Algorithm SC-I}. Set window size $K<L_{max}$. \\
\midrule
\textbf{1.} Initialize $e_0:=[t_1,t_2,\ldots,t_N]^\mathrm{T}$, $0<r<1$, two empty sets $\Omega$ and $W$;\\
\textbf{2.} \textbf{While} $L\leq L_{max}$ AND $\|e_0\|_{F}>\epsilon$, \textbf{Do}\\
\textbf{3.}\:\:\:\:\:\:\:\:Proceed \textbf{Phase 1 of Algorithm SC-I}; \\
\textbf{4.}\:\:\:\:\:\:\:\:Obtain $H_L=[h^*_1,h^*_2,\ldots,h^*_L]$;\\
\textbf{5.}\:\:\:\:\:\:\:\:\textbf{If}\:\:$L\leq K$\\
\textbf{6.}\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:Calculate $\beta^{*}=[\beta^{*}_1,\beta^{*}_2,\ldots,\beta^{*}_L]^{\mathrm{T}}=H^{\dagger}_LT$;\\
\textbf{7.}\:\:\:\:\:\:\:\:\textbf{Else} \\
\textbf{8.}\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:Set $\tilde{H}_K=H_L(:,1:L-K)$, $H_K=H_L(:,L-K+1:L)$;\\
\textbf{9.}\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:Retrieve $\beta^{previous}=[\beta^{*}_1,\beta^{*}_2,\ldots,\beta^{*}_{L-K}]^{\mathrm{T}}$;\\
\textbf{10.}\:\:\:\:\:\:\:\:\:\:\:\:\:\:Calculate $\beta^{window}=H^{\dagger}_K(T-\tilde{H}_K\beta^{previous})$;\\
\textbf{11.}\:\:\:\:\:\:\:\:\:\:\:\:\:\:Let $\beta^{*}=[\beta^{*}_1,\beta^{*}_2,\ldots,\beta^{*}_L]^{\mathrm{T}}:=\left[\!\!\!
\begin{array}{c}
\beta^{previous} \\
\beta^{window} \\
\end{array}
\!\!\!\!\right]$;\\
\textbf{12.}\:\:\:\:\:\:\:\textbf{End If}\\
\textbf{13.}\:\:\:\:\:\:\:Calculate $e_L=e_{L-1}-\beta_L^{*}h^{*}_L$;\\
\textbf{14.}\:\:\:\:\:\:\:Renew $e_0:=e_L$; $L:=L+1$;\\
\textbf{15.} \textbf{End While}\\
\textbf{16.} \textbf{Return} $\beta^{*}_1, \beta^{*}_2, \ldots, \beta^{*}_L$, $\omega^{*}=[\omega^{*}_1,\ldots,\omega^{*}_L]$ and $b^{*}=[b^{*}_1,\ldots,b^{*}_L]$.\\
\bottomrule
\end{tabular}
\end{center}
\end{table}
\begin{table}[htbp]\label{sc-3}
\begin{center}
\begin{tabular}{lcl}
\toprule
\textbf{Algorithm SC-III} \\
\midrule
Given the same items in \textbf{Algorithm SC-I}.\\
\midrule
\textbf{1.} Initialize $e_0:=[t_1,t_2,\ldots,t_N]^\mathrm{T}$, $0<r<1$, two empty sets $\Omega$ and $W$;\\
\textbf{2.} \textbf{While} $L\leq L_{max}$ AND $\|e_0\|_{F}>\epsilon$, \textbf{Do}\\
\textbf{3.}\:\:\:\:\:\:\:\:Proceed \textbf{Phase 1 of Algorithm SC-I};\\
\textbf{4.}\:\:\:\:\:\:\:\:Obtain $H_L=[h^*_1,h^*_2,\ldots,h^*_L]$;\\
\textbf{5.}\:\:\:\:\:\:\:\:Calculate $\beta^{*}=[\beta^{*}_1,\beta^{*}_2,\ldots,\beta^{*}_L]^{\mathrm{T}}:=H^{\dagger}_LT$;\\
\textbf{6.}\:\:\:\:\:\:\:\:Calculate $e_L=e_{L-1}-\beta_L^{*}h^{*}_L$;\\
\textbf{7.}\:\:\:\:\:\:\:\:Renew $e_0:=e_L$; $L:=L+1$;\\
\textbf{8.} \textbf{End While}\\
\textbf{9.} \textbf{Return} $\beta^{*}_1, \beta^{*}_2, \ldots, \beta^{*}_L$, $\omega^{*}=[\omega^{*}_1,\ldots,\omega^{*}_L]$ and $b^{*}=[b^{*}_1,\ldots,b^{*}_L]$.\\
\bottomrule
\end{tabular}
\end{center}
\end{table}
\textbf{Remark 5.} In SC-I, we employ a scaled sigmoidal function as the hidden-node activation function, for which the parameters $\Upsilon:=\{\lambda_{min}:\Delta\lambda:\lambda_{max}\}$ play an important role in adaptively determining the scope of the random parameters $w$ and $b$. As indicated in Theorem 6, although randomness is beneficial for fast configuration of the newly added hidden node, the search region is potentially data dependent and should not be rigidly fixed. In fact, $\lambda\in \Upsilon$ varies during the course of adding new hidden nodes, as will be demonstrated later in our simulations.
\textbf{Remark 6.} Apart from the expected error tolerance $\epsilon$, termination of SC-I can also be triggered by $L_{max}$ in order to prevent over-fitting. It is worth mentioning that a larger value of $\sum_{q=1}^{m}\langle e_{L-1,q},g_L\rangle^2/\|g_L\|^2$ would probably lead to a faster decrease in the residual error. This is why we perform up to $T_{max}$ random configurations of $w_L$ and $b_L$ and finally return the pair $w^{*}$ and $b^{*}$ with the largest $\xi_L$, as specified in \textbf{Steps 3-17} of Algorithm SC-I. Obviously, such an algorithmic operation helps in building compact SCNs.
\textbf{Remark 7.} As the constructive process proceeds, the random configuration of $w$ and $b$ becomes increasingly difficult as the residual error becomes smaller. To overcome this, we take a time-varying value of $r$ so that $\sum_{q=1}^{m}(1-r-\mu_L)e_{L-1,q}(X)^{\mathrm{T}}e_{L-1,q}(X)$ approaches zero, thereby creating more opportunities to make $\xi_L\geq0$.
\section{Performance Evaluation}
This section reports simulation results on four regression problems, including a function approximation example and three real-world modelling tasks. Performance evaluation covers learning, generalization and efficiency (computing time and the eventual number of hidden nodes required to achieve an expected training error tolerance). Comparisons among our SC algorithms, Modified Quickprop (MQ) \cite{Kwok1997} and IRVFL \cite{LiandWang2016} are carried out. In this study, the sigmoidal activation function $g(x)=1/(1+\exp(-x))$ is used.
\subsection{Data Sets}
Four datasets are employed in our simulation studies: a function approximation example and three real-world regression datasets, \textbf{stock}, \textbf{concrete} and \textbf{compactiv}, downloaded from the KEEL (Knowledge Extraction based on Evolutionary Learning) dataset repository \footnote{KEEL: http://www.keel.es/}.
\begin{itemize}
\item DB 1 was generated by a real-valued function \cite{Tyukin2009}:
\begin{equation}
f(x) = 0.2e^{-(10x-4)^{2}}+0.5e^{-(80x-40)^{2}}+0.3e^{-(80x-20)^{2}}, \:x\in[0,1].
\end{equation}
The training dataset consists of 1000 points randomly generated from the uniform distribution on [0,1]. The test set, of size 300, is generated from a regularly spaced grid on [0,1].
\item Data provided in \textbf{stock} are daily stock prices from January 1988 through October 1991 for ten aerospace companies. The task is to approximate the price of the 10th company using the prices of the other nine. The whole data set (DB2) contains 950 observations with nine input variables and one output variable. We randomly selected 75\% of the samples as the training set, while the test set consists of the remaining 25\%.
\item The third regression task (\textbf{concrete}) is to find the highly nonlinear functional relationship between concrete compressive strength and its ingredients (input features), including cement, blast furnace slag, fly ash, water, super plasticizer, coarse aggregate, fine aggregate, and age. The whole data set (DB 3) contains 1020 instances. Both the training and test sets are formed in the same manner as for DB2.
\item The fourth dataset (\textbf{compactiv}) comes from a real-world application: a computer activity dataset describing the portion of time that CPUs run in user mode, based on 8192 computer system activities collected from a Sun SPARCstation 20/712 with 2 CPUs and 128 MB of memory running in a multi-user university department. Each system activity is evaluated using 21 system measures (as input features). Both the training and test sets are formed in the same way as for DB2 and DB3.
\end{itemize}
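As an illustration, DB 1 can be reproduced with a few lines of Python (the random seed and variable names are our own choices, not specified in the paper):

```python
import math
import random

def f(x):
    """Target function of DB 1."""
    return (0.2 * math.exp(-(10 * x - 4) ** 2)
            + 0.5 * math.exp(-(80 * x - 40) ** 2)
            + 0.3 * math.exp(-(80 * x - 20) ** 2))

random.seed(0)  # assumed seed, for reproducibility only
X_train = [random.random() for _ in range(1000)]  # uniform on [0, 1]
y_train = [f(x) for x in X_train]
X_test = [i / 299 for i in range(300)]            # regular grid on [0, 1]
y_test = [f(x) for x in X_test]
```

The three Gaussian bumps have very different widths, which is what makes this function a demanding single-variable benchmark.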
Both the input and output attributes are normalized into [0,1], and all results reported in this paper are averages over 100 independent trials. The maximum number of random configurations $T_{max}$ is set to 200. For the other parameters, such as the maximum number of hidden nodes $L_{max}$, the expected error tolerance $\epsilon$ and the index $r$, we will specify the corresponding settings later.
Two metrics are used in our comparative studies: modelling accuracy and efficiency. The first one is the widely used Root Mean Square Error:
\begin{equation}
RMSE=\left(\frac{1}{N}\sum_{i=1}^{N}[\sum_{j=1}^L\beta_jg(w_j^{\mathrm{T}}x_i+b_j)-t_i]^2\right)^{1/2},
\end{equation}
where $N$ is the number of samples (training or test) and $L$ represents the number of hidden nodes.
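For reference, the metric above amounts to the following straightforward computation, where `predictions` plays the role of the network output $\sum_{j=1}^L\beta_jg(w_j^{\mathrm{T}}x_i+b_j)$:

```python
import math

def rmse(predictions, targets):
    """Root Mean Square Error over N samples."""
    n = len(targets)
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(predictions, targets)) / n)
```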
For the sake of comparing the efficiency of different methods, we recorded both the time (measured in seconds) spent on building the learner model and the number of hidden nodes needed to achieve the expected error tolerance.
\subsection{Results}
Table 1 and Table 2 show the training and test results of MQ, IRVFL, SC-I, SC-II and SC-III on DB 1-4, where average values and standard deviations over 100 independent trials are reported. In particular, we set the learning rate in MQ to 0.05 and its maximum number of iterations to 200 for all the regression tasks. In IRVFL, the random parameters ($w$ and $b$) are taken from the uniform distribution over [-1,1]. Table 1 clearly shows that SC-I, SC-II and SC-III outperform the other methods in terms of both learning and generalization. For each setting of $L$, the training errors of MQ and IRVFL are larger than those of the SC algorithms, among which SC-III demonstrates the best performance compared against SC-I and SC-II on both training and test datasets. In Table 2, both MQ and the SC algorithms perform well in comparison with the results obtained from IRVFL.
For efficiency comparisons, it can be observed from Table 3 that MQ, IRVFL and SC-I all fail to achieve the expected training error tolerance ($\epsilon=0.05$) within an acceptable time period for DB1, whilst both IRVFL and SC-I could not achieve the given training error for DB2. Here, we only report the results for DB1 and DB2, as the other two case studies show similar outcomes. The missing values in Table 3 (marked as `-') for time cost and test performance are caused by the extremely slow learning rate in these scenarios. As a practical consideration, if the patience parameter mentioned in \cite{Kwok1997} is given, the learning phase of the MQ, IRVFL and SC-I algorithms will be terminated to avoid meaningless time cost. Consequently, it seems impossible for those methods to meet the expected error tolerance in a reasonable time slot. The proposed SC-II and SC-III algorithms, however, perform quite well and achieve the specified training error bound with few hidden nodes. Moreover, compared against the MQ method, the SC-II and SC-III algorithms generate fewer hidden nodes and demonstrate better generalization performance.
\begin{table}[h!]
\centering
\footnotesize
{\caption{Performance Comparison among MQ, IRVFL, SC-I, SC-II and SC-III on DB 1. $K=15$ was used in SC-II. }\label{tab:1}}
\begin{center}
\begin{tabular}{c|cc|cc}\hline
\multirow{2}*{Algorithms} &\multicolumn{2}{c}{Training}\vline & \multicolumn{2}{c}{Test}\\
\cline{2-5}
&$L=25$ & $L=50$ & $L=25$& $L=50$ \\
\hline
MQ & 0.1031$\pm$0.0001 & 0.1030$\pm$0.0001 & 0.1011$\pm$0.0003 & 0.1011$\pm$0.0003 \\
IRVFL & 0.1630$\pm$0.0008& 0.1626$\pm$0.0005& 0.1622$\pm$0.0012 & 0.1617$\pm$0.0008 \\
SC-I & 0.0927$\pm$0.0020& 0.0887$\pm$0.0018& 0.0912$\pm$0.0021 & 0.0870$\pm$0.0019 \\
SC-II & \textbf{0.0435}$\pm$\textbf{0.0061} & \textbf{0.0366}$\pm$\textbf{0.0049}& \textbf{0.0411}$\pm$\textbf{0.0064}& \textbf{0.0337}$\pm$\textbf{0.0049}\\
SC-III & \textbf{0.0332}$\pm$\textbf{0.0065} & \textbf{0.0097}$\pm$\textbf{0.0036}& \textbf{0.0308}$\pm$\textbf{0.0060} & \textbf{0.0100}$\pm$\textbf{0.0033} \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[h!]
\centering
\footnotesize
{\caption{Performance Comparison among MQ, IRVFL and SC-I, SC-II and SC-III on DB 2, DB3 and DB4. $K=15$ was used in SC-II. }\label{tab:2}}
\begin{center}
\begin{tabular}{c|c|cc|cc}\hline
\multirow{2}*{Data Sets} &\multirow{2}*{Algorithms} &\multicolumn{2}{c}{Training}\vline & \multicolumn{2}{c}{Test}\\
\cline{3-6}
&&$L=25$ & $L=50$ & $L=25$& $L=50$ \\
\hline
\multirow{5}*{DB2} & MQ & 0.0624$\pm$0.0058 & 0.0410$\pm$0.0014& 0.0611$\pm$0.0061 & 0.0407$\pm$0.0017 \\
&IRVFL & 0.2121$\pm$0.0260 & 0.1853$\pm$0.0248& 0.2057$\pm$0.0268 & 0.1787$\pm$0.0237 \\
&SC-I & 0.0963$\pm$0.0032& 0.0881$\pm$0.0026& 0.0925$\pm$0.0031 & 0.0851$\pm$0.0024\\
&SC-II & \textbf{0.0437}$\pm$\textbf{0.0014} & \textbf{0.0395}$\pm$\textbf{0.0008}& \textbf{0.0427}$\pm$\textbf{0.0017} & \textbf{0.0391}$\pm$\textbf{0.0010} \\
&SC-III & \textbf{0.0409}$\pm$\textbf{0.0010} & \textbf{0.0327}$\pm$\textbf{0.0007}& \textbf{0.0403}$\pm$\textbf{0.0012} & \textbf{0.0347}$\pm$\textbf{0.0012} \\
\hline
\multirow{5}*{DB3} & MQ & 0.1096$\pm$0.0042 & 0.0910$\pm$0.0014& 0.0999$\pm$0.0048 & 0.0869$\pm$0.0021 \\
&IRVFL & 0.2045$\pm$0.0189 & 0.1929$\pm$0.0135& 0.2109$\pm$0.0214 & 0.1983$\pm$0.0166 \\
&SC-I & 0.1401$\pm$ 0.0027& 0.1358$\pm$0.0022& 0.1315$\pm$0.0026& 0.1258$\pm$0.0017\\
&SC-II & \textbf{0.0994}$\pm$\textbf{0.0018} & \textbf{0.0943}$\pm$\textbf{0.0011}& \textbf{0.0925}$\pm$\textbf{0.0029} & \textbf{0.0874}$\pm$\textbf{0.0018} \\
&SC-III & \textbf{0.0969}$\pm$\textbf{0.0013} & \textbf{0.0835}$\pm$\textbf{0.0012}& \textbf{0.0898}$\pm$\textbf{0.0020} & \textbf{0.0850}$\pm$\textbf{0.0025} \\
\hline
\multirow{5}*{DB4} & MQ & 0.0840$\pm$0.0052 & 0.0600$\pm$0.0071& 0.0848$\pm$0.0053 & 0.0624$\pm$0.0075 \\
&IRVFL & 0.2002$\pm$0.0391 & 0.1924$\pm$0.0283& 0.1958$\pm$0.0386 & 0.1882$\pm$0.0281 \\
&SC-I & 0.1207$\pm$0.0036& 0.1137$\pm$0.0038& 0.1169$\pm$0.0038&0.1105$\pm$ 0.0040\\
&SC-II & \textbf{0.0760}$\pm$\textbf{0.0034} & \textbf{0.0579}$\pm$\textbf{0.0029}& \textbf{0.0773}$\pm$\textbf{0.0039} & \textbf{0.0593}$\pm$\textbf{0.0036} \\
&SC-III & \textbf{0.0678}$\pm$\textbf{0.0038} & \textbf{0.0394}$\pm$\textbf{0.0016}& \textbf{0.0697}$\pm$\textbf{0.0044} & \textbf{0.0418}$\pm$\textbf{0.0021} \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[h!]
\centering
\footnotesize
{\caption{Efficiency comparison among MQ, IRVFL and SC-I, SC-II and SC-III on DB 1 and DB2. Window size $K=15$ was used in SC-II.}\label{tab:3}}
\begin{center}
\begin{tabular}{c|c|cc|c}\hline
\multirow{2}*{Data Sets} &\multirow{2}*{Algorithms} &\multicolumn{2}{c}{Efficiency ($\epsilon=0.05$)}\vline & \multirow{2}*{Test}\\
\cline{3-4}
& &Time (\emph{s})& Nodes \\
\hline
\multirow{5}*{DB1} & MQ & - & - &- \\
&IRVFL & -& - &-\\
&SC-I & - & -&-\\
&SC-II & 0.4029$\pm$0.1690 & 26.6700$\pm$8.7514 &0.0463$\pm$0.0021\\
&SC-III & \textbf{0.2737}$\pm$\textbf{0.0719} & \textbf{19.9000}$\pm$\textbf{3.6167}&\textbf{0.0435}$\pm$\textbf{0.0047} \\
\hline
\multirow{5}*{DB2} & MQ & 1.3157$\pm$0.6429 & 34.1809$\pm$3.5680 &0.0532$\pm$0.0175\\
&IRVFL & -& - &-\\
&SC-I & - & -&-\\
&SC-II & 0.2193$\pm$0.0454 & 16.7200$\pm$2.1513 &0.0488$\pm$0.0016\\
&SC-III & \textbf{0.2083}$\pm$\textbf{0.0317} & \textbf{16.3600}$\pm$\textbf{1.5987}&\textbf{0.0483}$\pm$\textbf{0.0020} \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure*}[h!]
\centering
\subfigure[]{\includegraphics[width=0.46\textwidth]{figures/New_f1_train_new.eps}}
\subfigure[]{\includegraphics[width=0.46\textwidth]{figures/New_f1_test_new.eps}}
\caption{Performance of Modified QuickProp (MQ), IRVFL, SC-I, SC-II and SC-III with 150 additive nodes on DB 1: (a) Average training RMSE and (b) Average test RMSE}\label{fig:1}
\end{figure*}
\begin{figure*}[h!]
\centering
\subfigure[]{\includegraphics[width=0.46\textwidth]{figures/New_f2_train_new.eps}}
\subfigure[]{\includegraphics[width=0.46\textwidth]{figures/New_f2_test_new.eps}}
\caption{Performance of Modified QuickProp (MQ), IRVFL, SC-I, SC-II and SC-III with 150 additive nodes on DB 2: (a) Average training RMSE and (b) Average test RMSE}\label{fig:2}
\end{figure*}
The advantages of the proposed SC algorithms are clearly visible from Figure 1 and Figure 2, in which the curves are plotted based on the average values over 100 independent runs. The following observations can be made from these results.
\begin{itemize}
\item In Figure 1, it can be observed that the error decreasing rates of our proposed SC algorithms are much higher than those of the other two methods. At the same time, their corresponding test errors also keep decreasing as new hidden nodes are added. Furthermore, similar to the results reported in Table 1, SC-II and SC-III perform much better than SC-I, which has a slower decreasing rate (though still faster than MQ and IRVFL). It should be noted that window size $K=15$ is used in SC-II, so the error decreasing trends of SC-II and SC-III should be consistent with each other when $L\leq 15$. This fact is clearly illustrated in Figure 1.
\item For the MQ and IRVFL algorithms on DB 1, both the training and test errors stabilize at a certain level but remain unacceptable (the actual training RMSE is larger than 0.1). This extremely slow decreasing rate makes it impossible to build a learner model under time constraints, even when $L$ takes a larger value. In comparison, SC-I shows a clearly improved decreasing rate compared against MQ after adding 150 hidden nodes.
\item In Figure 2, MQ, SC-II, and SC-III converge faster than the other two algorithms, of which IRVFL exhibits the slowest decreasing rate in both training and test phases. Although MQ outperforms SC-I in this case, its error decreasing rate is slower than that of SC-II and SC-III when $L\leq 50$. Indeed, there is a negligible difference among the curves of SC-II, SC-III and MQ when $L>50$. On the whole, the error curves of SC-III always stay at the bottom, which corresponds to the numerical results reported in Table 3.
\end{itemize}
\begin{table}[h!]
\centering
\footnotesize
{\caption{Performance of SC-II with different window sizes on DB 1. $L=35$}\label{tab:4}}
\begin{tabular}{c|cc|cc|cc}\hline
\multirow{2}*{Window Size} &\multicolumn{2}{c|}{Training} & \multicolumn{2}{c|}{Test}& \multicolumn{2}{c}{Efficiency ($\epsilon=0.05$)}\\
\cline{2-7}
&Mean & STD & Mean& STD &Time(\emph{s}) &Nodes \\
\hline
K=5& 0.0682 & 0.0049& 0.0672 & 0.0049 & 1.36 &99.99\\
K=10& 0.0530 & 0.0061 & 0.0516 & 0.0064 & 0.72 & 50.85\\
K=15& 0.0439 & 0.0062 & 0.0416 & 0.0068 & 0.34 & 25.07\\
K=20& 0.0364 & 0.0069 & 0.0338 & 0.0064 & 0.28 & 21.21\\
K=25& 0.0325 & 0.0073 & 0.0303 & 0.0067 & 0.26 & 20.29\\
K=30& 0.0267 & 0.0056 & 0.0252 & 0.0049 & 0.25 & 19.62\\
K=35& 0.0248 & 0.0069 & 0.0233 & 0.0062 & 0.25 & 19.75\\
\hline
\end{tabular}
\end{table}
\begin{table}[h!]
\centering
\footnotesize
{\caption{Performance of SC-II with different window sizes $K$ on DB 2. $L=25$}\label{tab:5}}
\begin{tabular}{c|cc|cc|cc}\hline
\multirow{2}*{Window Size} &\multicolumn{2}{c|}{Training} & \multicolumn{2}{c|}{Test}& \multicolumn{2}{c}{Efficiency ($\epsilon=0.05$)}\\
\cline{2-7}
&Mean & STD & Mean& STD &Time(\emph{s}) &Nodes \\
\hline
K=5& 0.0613 & 0.0039& 0.0601 & 0.0037 & 0.88 & 66.42\\
K=10& 0.0492 & 0.0024 & 0.0493 & 0.0027 & 0.31& 23.48\\
K=15& 0.0443 & 0.0014 & 0.0436 & 0.0020& 0.21 & 16.62\\
K=20& 0.0420 & 0.0012& 0.0412& 0.0015 & 0.20& 16.16\\
K=25& 0.0411 & 0.0010 & 0.0405 & 0.0013 & 0.20 & 16.35\\
\hline
\end{tabular}
\end{table}
Tables 4 and 5 report some results on the robustness of the modelling performance with respect to the window size $K$ in SC-II. It should be noted that we fixed the number of hidden nodes to be added in order to perform the robustness analysis. Furthermore, for the purpose of examining efficiency, we set the training error tolerance for each task (the same as in Table 3) and recorded the actual time cost and number of hidden nodes required for achieving that tolerance. All of the results reported in Tables 4 and 5 are averaged over 100 independent runs. For DB 1, seven cases ($K=5, 10, 15, 20, 25, 30, 35$) are considered and the number of hidden nodes is set to 35. It can be clearly seen that both training and test performance improve as the value of $K$ increases. Importantly, small $K$ values may lead to inferior results; for example, the training and test RMSEs with $K=5$, $K=10$ and $K=15$ are out of the tolerance level. On the other hand, the efficiency of SC-II becomes higher as the value of $K$ increases. Apart from the scenarios $K=5$, $K=10$ and $K=15$, the differences among the remaining cases are negligible: the number of hidden nodes is about 20 while the whole time cost is around 0.26$s$ (see Table 4). For DB 2, five cases ($K=5, 10, 15, 20, 25$) are examined while the number of hidden nodes is set to 25. Similarly, all of the results, except for $K=5$ and $K=10$, are comparable. In particular, for $K=15,20,25$, the time cost and number of hidden nodes required for achieving the error tolerance ($\epsilon=0.05$) are around 0.2$s$ and 16, respectively.
\begin{figure*}[h!]
\centering
\subfigure[IRVFL, $\lambda=1$]{\includegraphics[width=0.32\textwidth]{figures/New_f3_1.eps}}
\subfigure[IRVFL, $\lambda=100$]{\includegraphics[width=0.32\textwidth]{figures/New_f3_2.eps}}
\subfigure[IRVFL, $\lambda=200$]{\includegraphics[width=0.32\textwidth]{figures/New_f3_3.eps}}
\subfigure[SC-I]{\includegraphics[width=0.32\textwidth]{figures/New_f3_4.eps}}
\subfigure[SC-II, $K=15$]{\includegraphics[width=0.32\textwidth]{figures/New_f3_5.eps}}
\subfigure[SC-III]{\includegraphics[width=0.32\textwidth]{figures/New_f3_6.eps}}
\caption{Approximation performance of IRVFL (with different setting of $\lambda$), SC I, SC-II and SC-III on DB 1}\label{fig:3}
\end{figure*}
Figure 3 depicts the modelling performance (on the test dataset) of IRVFL, SC-I, SC-II and SC-III on DB 1. Firstly, the importance of the scale factor $\lambda$ in the activation function, which directly determines the range of the random parameters, is examined under different settings. Secondly, the infeasibility of IRVFL is illustrated in comparison with the proposed SC algorithms, where the effectiveness and benefits of the supervisory mechanism of SCNs can be clearly observed. The maximum number of hidden nodes to be added is set to 100 for the four methods examined here. From Figure 3(a), (b), (c), it is apparent that IRVFL networks perform poorly. In fact, we attempted to add 20,000 hidden nodes to see the performance of IRVFL networks with different values of $\lambda$. Unfortunately, the final result is very similar to that depicted in Figure 3(c), which is aligned with our findings in \cite{LiandWang2016}. In comparison, the test performance of SC-I shown in Figure 3(d) is much better than that of IRVFL, which corresponds to the records in Table 1 and the error curves in Figure 1, respectively. In addition, both SC-II and SC-III outperform SC-I, as the recalculation of $\beta_L$ contributes greatly to constructing a universal approximator with a much more compact structure.
\begin{figure*}[h!]
\centering
\subfigure[Training]{\includegraphics[width=0.46\textwidth]{figures/New_f4_train_new.eps}}
\subfigure[Test]{\includegraphics[width=0.46\textwidth]{figures/New_f4_test_new.eps}}
\caption{Comparison between SC-I and IRVFL with various types of activation function }\label{fig:4}
\end{figure*}
\subsection{Discussions}
This section briefly explains why the performance of our proposed SC algorithms is better than that of the MQ and IRVFL algorithms. Then, further remarks are provided to highlight some characteristics of SCNs.
Although the universal approximation property can be ensured by maximizing an objective function (see Section 2), the Modified Quickprop (MQ) algorithm, which aims at iteratively finding appropriate hidden parameters ($w$ and $b$) for the new hidden node, may face obstacles in carrying out the optimization of this objective function. In essence, it is a gradient-ascent algorithm that takes second-order information into account, and it may be problematic when searching in a plateau of the error surface. That is to say, when the trained parameters ($w$ and $b$) at a certain iterative step fall into a region where both the first- and second-order derivatives of the objective function are nearly zero, learning basically comes to a halt and can be terminated by a patience parameter for the purpose of time-saving. The overall speed of parameter updating might be quite slow. This issue seems severe for regression tasks. In implementation, MQ works less flexibly and seems very likely to fail in constructing a universal approximator, not to mention some inherent flaws including the selection of initial weights, the learning rate, and a reasonable stopping criterion. Besides, there is a considerable computational burden in dealing with large-scale problems. The unique feature of the proposed SCNs is the use of randomness in learner model design under a supervisory mechanism. Compared against the Modified Quickprop algorithm, SC-II and SC-III are computationally manageable and easy to implement.
In addition, the output weights $\beta_L$ in IRVFL are analytically calculated by $e_{L-1}(X)^T\cdot h^{*}_L/(h_L^{*T}\cdot h^{*}_L)$ and then remain fixed in later phases. This constructive approach for computing the output weights, together with the completely free random assignment of the hidden parameters, may cause a very slow decreasing rate of the residual error sequence, so that the learner model may fail to approximate the target function. Although this evaluation method for the output weights is also applied in SC-I, the whole construction process works successfully in building a universal approximator. Finally, some features of our proposed SCNs are summarized as follows:
\begin{itemize}
\item The selection of $r$, which to some extent directly determines the error decreasing speed, is quite important. In our experiments, its setting is not fixed but based on an increasing sequence starting from 0.9 and approaching 1. This makes the random search for $w_L$ and $b_L$ feasible. As the constructive process proceeds, the current residual error becomes smaller, which makes the configuration task for $w_L$ and $b_L$ more challenging (i.e., more search attempts are needed as $L$ becomes larger). From our experience, this difficulty can be effectively alleviated by setting $r$ extremely close to 1.
\item In the SC algorithms, we used the scaled sigmoidal function in the hidden nodes, whose $\lambda$ may keep varying during the learning course. This implies that the scope setting for $w$ and $b$ should not be fixed. In our simulations, $\lambda$ is automatically decided from a given set $\{1,5,15,30,50,100,150,200\}$, together with the setup of $r$. This strategy makes it possible for the built learner to possess multi-scale random basis functions, in contrast to RVFL networks (as argued in \cite{SI-Gorban2016}) or IRVFL, and it inherently improves the probability of finding an appropriate setting of the input weights and biases.
\item In implementation, $T_{max}$ controls the pool size of the random basis function candidates. It is related to both opportunity and efficiency, so we need to choose this parameter carefully with a trade-off in mind. This operation aims at finding the most appropriate pair of $w$ and $b$ that returns the largest $\xi_L$ (see Algorithm SC-I). Roughly speaking, this manipulation can be viewed as an alternative way to find a `suboptimal' solution of the maximization problem discussed in \cite{Kwok1997}.
\item Based on the theoretical analysis given in Section 3, many types of activation functions can be used in SCNs, such as the Gaussian, sine, cosine and tanh functions. Figure 4 shows both the training and test performance of SC-I and IRVFL with different choices of activation function. It can be clearly seen that our SC-I algorithm is more efficient and effective than IRVFL, as the random assignment of $w$ and $b$ from [-1,1] is unworkable no matter what kind of activation function is employed in the hidden layer.
\item As a compromise between SC-I and SC-III, SC-II provides more flexibility in dealing with large-scale data modelling tasks by virtue of its distinctive window-based recalculation of the output weights. By optimizing part of the output weights according to a given window size, SC-II not only outperforms SC-I (whose evaluation of the output weights is not based on any objective and is never adjusted during the construction process), but also has some advantages over SC-III in building universal approximators for large-scale data analytics. In practice, when the number of training samples is extremely large and $L>K$, the computational burden of calculating $H^{\dagger}_K$ in SC-II is far less than that of calculating $H^{\dagger}_L$ in SC-III. Besides, the robustness of SC-II with regard to the window size is favourable once $K$ is assigned from a reasonable range, as shown in Tables 4 and 5. It is meaningful and interesting to explore further characteristics of SC-II in the future.
\end{itemize}
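The interplay between $r$ and $\lambda$ described in the first two items above can be sketched as a nested search. The concrete loop order and the particular sequences below are our assumptions for illustration; the paper only states that both parameters vary during learning:

```python
import math
import random

R_SEQ = [0.9, 0.99, 0.999, 0.9999]            # r increasing towards 1
LAMBDAS = [1, 5, 15, 30, 50, 100, 150, 200]   # candidate scope factors

def find_feasible_node(X, e, T_max=200):
    """Widen the search scope and push r towards 1 until some (w, b) gives xi >= 0."""
    for lam in LAMBDAS:
        for r in R_SEQ:
            for _ in range(T_max):
                w = random.uniform(-lam, lam)
                b = random.uniform(-lam, lam)
                g = [1.0 / (1.0 + math.exp(-(w * x + b))) for x in X]
                gg = sum(gi * gi for gi in g)
                eg = sum(ei * gi for ei, gi in zip(e, g))
                ee = sum(ei * ei for ei in e)
                if eg * eg / gg - (1.0 - r) * ee >= 0.0:
                    return w, b, lam, r
    return None  # no admissible node found within the whole schedule
```

The point of the schedule is that a node which is infeasible for small $\lambda$ and $r$ may become admissible once the scope widens and $r$ approaches 1.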
\section{Conclusions}
The framework proposed in this paper provides an alternative solution for building randomized learner models with supervisory mechanisms. As a powerful tool for fast data modelling, SCNs can be built incrementally by stochastically configuring the input weights and biases of each hidden node, and by determining the output weights via constructive evaluation or by solving a linear optimization problem. Each SCN model can be regarded as a specific implementation of our SC algorithms, which in theory ensure convergence provided that newly generated nodes keep being added to the model. From our hands-on experience, as demonstrated in the simulations, SC-III exhibits the best performance in terms of convergence rate; SC-I converges slowest but still outperforms IRVFL, although they evaluate the output weights in the same way. It should be pointed out that SC-II, equipped with a given window size for updating the output weights, makes a good trade-off between efficiency and scalability (i.e., for large-scale data analytics).
In this work, we focus on the SLFNN architecture with a sigmoidal basis/activation function for the hidden nodes. Some immediate extensions to the RVFL architecture (with direct links between the inputs and the outputs) and to Echo State Networks (with recurrent feedback at the hidden layer) can be made easily. As a technical contribution to the machine learning community, the stochastic configuration idea proposed in this paper is significant and applicable to data representation with deep learning \cite{Graves2013,Hinton2006,Hinton2006science,LeCun2015}. Much research on SCNs and their applications remains to be explored, and some of it has already been done by our group. For instance, deep SCNs (DSCNs) and robust SCNs (RSCNs) have been developed for data representation and regression, and for uncertain data analytics, respectively, which will be reported in our forthcoming publications. We are also extending the present algorithms to ensemble learning, online learning and distributed learning. Beyond these studies, the following research directions are expected: robustness analyses of the modelling performance with respect to some key parameters involved in the design of SCNs; and a guideline for selecting the basis/activation function so that the modelling performance can be maximized.
\bibliographystyle{elsarticle-num}
\section{Introduction}
For the purpose of this paper, a normal function is a function $f:\aleph_1\rightarrow\aleph_1$ that is strictly increasing and continuous at limit stages, i.\,e.~we demand that
\begin{enumerate}[label=(\roman*)]
\item $\alpha<\beta$ implies $f(\alpha)<f(\beta)$ and that
\item $f(\lambda)=\sup_{\alpha<\lambda}f(\alpha)$ holds for any limit ordinal $\lambda$.
\end{enumerate}
Equivalently, $f$ is the unique strictly increasing enumeration of a closed and unbounded (club) subset of~$\aleph_1$. It is easy to see that the fixed points of any normal function~$f$ do again form an $\aleph_1$-club. The normal function that enumerates these fixed points is called the derivative of $f$ and is denoted by $f'$. Let us agree to call a normal function $g$ an upper derivative of $f$ if $f(g(\alpha))=g(\alpha)$ holds for any ordinal~$\alpha<\aleph_1$. Note that such a function~$g$ must majorize the derivative $f'$ of~$f$. As an example we consider the function $f(\alpha)=\omega^\alpha$ from ordinal arithmetic. In this case $f'(\alpha)=\varepsilon_\alpha$ is the $\alpha$-th $\varepsilon$-number. The notion of normal function plays an important role in proof theory (see~e.\,g.~\cite[Chapter~V]{schuette77}) and has interesting computability-theoretic properties (due to~\cite{marcone-montalban}). More generally, one can consider normal functions on the class of ordinals. A fundamental example from set theory is the function $f(\alpha)=\aleph_\alpha$ that enumerates the infinite cardinals. However, such normal functions are beyond the scope of the present paper.
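For the example $f(\alpha)=\omega^\alpha$, the least fixed point can be obtained as the supremum of the finite iterates of~$f$, which recovers the familiar description of~$\varepsilon_0$:
\begin{equation*}
\varepsilon_0=f'(0)=\sup_{n<\omega}f^n(0)=\sup\{0,\,1,\,\omega,\,\omega^\omega,\,\omega^{\omega^\omega},\,\dots\}.
\end{equation*}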
The above construction of derivatives via clubs uses the fact that~$\aleph_1$ is a regular cardinal. One can also build derivatives by transfinite recursion, which relies on collection or replacement. In the present paper we construct derivatives in a much weaker setting. In very informal terms, we show that the following statements are equivalent over a suitable base theory:
\begin{enumerate}[label=(\arabic*$^0$)]
\item Every normal function has a derivative.
\item Every normal function has an upper derivative.
\item Transfinite induction holds for any $\Pi^1_1$-formula.
\end{enumerate}
To establish this equivalence we will give precise sense to the following argument: To see that~(1$^0$) implies~(2$^0$) it suffices to observe that any derivative is an upper derivative. To prove the direction from~(2$^0$) to~(3$^0$) we must establish induction for a $\Pi^1_1$-formula $\varphi(\gamma)$ up to an arbitrary ordinal~$\alpha$. Using the Kleene normal form theorem one obtains countable trees $\mathcal T_\gamma$ with
\begin{equation*}
\varphi(\gamma)\leftrightarrow\text{``$\mathcal T_\gamma$ is well-founded"}.
\end{equation*}
The assumption that $\varphi$ is progressive along the ordinals can then be expressed as
\begin{equation*}
\forall_{\beta<\gamma}\text{``$\mathcal T_\beta$ is well-founded"}\rightarrow\text{``$\mathcal T_\gamma$ is well-founded"}.
\end{equation*}
Assume that this statement is witnessed by a binary function $h$ on the ordinals, in the sense that we have
\begin{equation*}
\forall_{\beta<\gamma}\operatorname{otp}(\mathcal T_\beta)\leq\delta\rightarrow\operatorname{otp}(\mathcal T_\gamma)\leq h(\gamma,\delta)
\end{equation*}
for any ordinals $\gamma$ and $\delta$, where $\operatorname{otp}(\mathcal T)$ denotes the order type of~$\mathcal T$. To avoid the dependency on~$\gamma$ we set $h_0(\delta)=\sup_{\gamma<\alpha}h(\gamma,\delta)$ (alternatively one could set $h_0(\delta)=\sup_{\gamma\leq\delta} h(\gamma,\delta)$, avoiding the reference to the fixed bound $\alpha$). Now form a normal function~$f$ with $h_0(\delta)\leq f(\delta+1)$, e.\,g.~by setting $f(\delta)=\sum_{\gamma<\delta}1+h_0(\gamma)$. Then statement~(2$^0$) allows us to consider an upper derivative~$g$ of $f$. Let us show that we have
\begin{equation*}
\operatorname{otp}(\mathcal T_\gamma)\leq g(\gamma+1)
\end{equation*}
for all $\gamma<\alpha$. For $\beta<\gamma$ we may inductively assume $\operatorname{otp}(\mathcal T_\beta)\leq g(\beta+1)\leq g(\gamma)$. By the above we obtain
\begin{equation*}
\operatorname{otp}(\mathcal T_\gamma)\leq h(\gamma,g(\gamma))\leq h_0(g(\gamma))\leq f(g(\gamma)+1)\leq f(g(\gamma+1))=g(\gamma+1).
\end{equation*}
So $g$ witnesses that $\mathcal T_\gamma$ is well-founded for any $\gamma<\alpha$. This yields $\forall_{\gamma<\alpha}\varphi(\gamma)$, which is the conclusion of transfinite induction. To see that~(3$^0$) implies~(1$^0$) we will construct notation systems for the values~$f'(\alpha)$, relative to a given normal function~$f$. The crucial fact that the notation system for $f'(\alpha)$ is well-founded (and hence represents an ordinal) will be established by transfinite induction on~$\alpha$.
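For completeness, let us verify that the function $f(\delta)=\sum_{\gamma<\delta}(1+h_0(\gamma))$ used above does what is required: it is strictly increasing since each summand is at least~$1$, it is continuous at limits since the sum at a limit stage is the supremum of the preceding partial sums, and it satisfies
\begin{equation*}
f(\delta+1)=\sum_{\gamma<\delta+1}\bigl(1+h_0(\gamma)\bigr)=f(\delta)+1+h_0(\delta)\geq h_0(\delta).
\end{equation*}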
In order to make the result from the previous paragraph precise we will use the framework of reverse mathematics. This research program uncovers equivalences between different mathematical and foundational statements in the language of second order arithmetic (see~\cite{simpson09} for an introduction). As the base theory for our investigation we take~$\mathbf{ACA}_0$. In second order arithmetic the above statement~(3$^0$) corresponds to the following assertion:
\begin{enumerate}
\item[(3)] Induction for $\Pi^1_1$-formulas is available along any countable well-order.
\end{enumerate}
We will refer to this assertion as $\Pi^1_1$-bar induction, in order to distinguish it from the principle of transfinite induction along a specific (class- or set-sized) well-order. Let us recall that $\Pi^1_1$-bar induction is well-established in reverse mathematics: Simpson~\cite{simpson-bar-induction} has shown that it is equivalent to $\Sigma^1_1$-dependent choice and to $\Pi^1_2$-reflection for $\omega$-models, also over~$\mathbf{ACA}_0$.
To formalize statements (1$^0$) and (2$^0$) in second-order arithmetic we will rely on \mbox{J.-Y.}~Girard's notion of dilator~\cite{girard-pi2,girard-intro}. For the purpose of the present paper, a (coded) prae-dilator is a particularly uniform functor $n\mapsto T_n$ from natural numbers to linear orders (full details can be found in Section~\ref{sect:normal-dils-so} below). Girard has observed that the uniformity allows to extend $T$ beyond the natural numbers. In~\cite{freund-thesis,freund-computable} the first author has given a detailed description of linearly ordered notation systems $D^T_X$ that are computable in $T$ and the linear order~$X$. This yields an endofunctor $X\mapsto D^T_X$ of linear orders, which one may call a class-sized prae-dilator. If $D^T_X$ is well-founded for every well-order~$X$, then $T$ is called a (coded) dilator. In this case $\alpha\mapsto\operatorname{otp}(D^T_\alpha)$ defines a function on the ordinals. A condition under which this function is normal has been identified by P.~Aczel~\cite{aczel-phd,aczel-normal-functors} (even before Girard had introduced dilators in the full sense). This leads to a notion of normal prae-dilator, which will be defined in Section~\ref{sect:normal-dils-so}. In the same section we will characterize (upper) derivatives on the level of normal prae-dilators. Once all this is made precise, we can take the following as our formalization of statement~(2$^0$) in second order arithmetic:
\begin{enumerate}
\item[(2)] Any normal dilator $T$ has an upper derivative $S$ such that $X\mapsto D^S_X$ preserves well-foundedness (so that $S$ is again a normal dilator).
\end{enumerate}
The advantage of this principle is that it is relatively easy to state and does not depend on a specific construction of derivatives. Its disadvantage is that it confounds the following two questions: How strong is the assertion that any normal dilator has an upper derivative? And how much strength is added by the demand that the upper derivative preserves well-foundedness? We want to disentangle these questions in our formalization of statement~(1$^0$). To see how this works, let us recall that Aczel~\cite{aczel-phd,aczel-normal-functors} has explicitly constructed a derivative~$\partial T$ of a given normal prae-dilator~$T$. In Section~\ref{sect:constructin-derivative} we will show that $\partial T_n$ can be represented by a term system. In view of this representation $\mathbf{RCA}_0$ proves that $\partial T$ exists and is a derivative of~$T$. What $\mathbf{RCA}_0$ cannot show is that $D^{\partial T}$ preserves well-foundedness whenever $D^T$ does. This suggests formalizing statement~(1$^0$) as the following assertion:
\begin{enumerate}
\item[(1)] If $T$ is a normal dilator, then $D^{\partial T}_X$ is well-founded for any well-order~$X$.
\end{enumerate}
As $\mathbf{RCA}_0$ proves that any derivative is an upper derivative it will be immediate that (1) implies~(2). This means that the entire strength of these two principles is concentrated in the preservation of well-foundedness, which answers the questions that we have raised after the formulation of principle~(2).
Let us summarize the content of the following sections: As explained above, Section~\ref{sect:normal-dils-so} introduces (upper) derivatives on the level of normal prae-dilators and gives a precise formalization of statement~(2). In Section~\ref{sect:upper-deriv-bi} we prove that~(2) implies~(3), by giving precise meaning to the argument from the beginning of this introduction. Section~\ref{sect:constructin-derivative} contains the construction of $\partial T$ in $\mathbf{RCA}_0$, which yields the implication from~(1) to~(2). In Section~\ref{sect:bi-deriv-wf} we prove that (3) implies~(1), using $\Pi^1_1$-induction along $X$ to establish that $D^{\partial T}_X$ is well-founded. At the end of the paper we will thus have shown that (1), (2) and (3) are equivalent (see Theorem~\ref{thm:main-result} for the official statement of this result).
In the rest of this introduction we put our result into context. Let us first discuss implications for the predicative foundation of mathematics: The predicative stance originated with H.~Weyl's {\em Das Kontinuum} \cite{weyl-continuum} from 1908, and may be characterized by the imposition of a constraint on set formation that countenances only that which is implicit in accepting the natural number structure as a completed totality. Based on a proposal due to G.~Kreisel, the modern logical analysis of predicativity (given the natural numbers) was carried out by S.~Feferman~\cite{feferman64} and K.~Sch\"utte~\cite{schuette64} in 1964. It is couched in terms of provability in an autonomous transfinite progression of ramified theories of sets which are based on classical logic and assume the existence of the set of natural numbers. The existence of further sets is regimented by a hierarchy of levels to be generated in an autonomous way. At each level, sets are asserted to exist only via definitions in which quantification over sets must be restricted to lower levels. The further condition of autonomy requires that one may ascend to a level $\alpha$ only if the existence of a well-ordering of order type $\alpha$ has been established at some level $\beta<\alpha$. Feferman and Sch\"utte independently showed that the least non-autonomous ordinal for this progression of theories is the recursive ordinal $\Gamma_0$. Set-theoretically, the constructible sets up to $\Gamma_0$ form the minimal model of the aforementioned progression. A connection with our result arises because derivatives of normal functions (and transfinite hierarchies of derivatives) provide the intuition behind the usual notation system for~$\Gamma_0$ (see e.\,g.~\cite{schuette77}). This does not imply, however, that the abstract notion of derivative (relative to an arbitrary normal function) is predicatively acceptable. 
Indeed our result shows that it is not: We prove that the existence of derivatives of normal functions is equivalent to $\Pi^1_1$-bar induction and hence to $\Sigma^1_1$-dependent choice (all over $\mathbf{ACA}_0$). Now the least ordinal~$\tau$ such that the constructible sets up to $\tau$ form a model of $\Sigma^1_1$-dependent choice is the first \mbox{non-recursive} ordinal $\omega_1^{\operatorname{CK}}$ (see \cite{kreisel62}), which is much larger than~$\Gamma_0$. As a consequence, the principle of $\Sigma^1_1$-dependent choice does not possess a prima facie predicative justification. By the result of our paper the same applies to the principle that the derivative of every normal dilator preserves well-foundedness. On the other hand, all sufficiently concrete consequences of these principles hold in predicative mathematics: The extension of $\mathbf{ACA}_0$ by $\Sigma^1_1$-dependent choice is $\Pi^1_2$-conservative over the theory
$\mathbf{(\Pi^1_0\textbf{-}CA)_{\omega^{\omega}}}$, which allows for $\omega^\omega$ iterations of arithmetical comprehension (due to A.~Cantini
\cite{cantini86}). The latter is a predicative theory in its entirety.
Let us also compare our result to a theorem of T.~Arai~\cite{arai-derivatives}. Roughly speaking, this theorem states that the following are equivalent over~$\mathbf{ACA}_0$:
\begin{itemize}
\item The order $D^{\partial T}_X$ is well-founded for every well-order $X$.
\item Any set is contained in a countable coded $\omega$-model of the statement that ``$D^T_X$~is well-founded for every well-order~$X$''.
\end{itemize}
This formulation of Arai's result should be read with some reservation: Arai does not represent normal functions by dilators. Instead his result relies on the assumption that we are given formulas that define term systems for~$D^T_X$ and $D^{\partial T}_X$, which must satisfy certain conditions. In particular this approach does not allow one to quantify over dilators, as required for our result. On an informal level Arai's result can be read as a pointwise version of ours: Recall that $\Pi^1_1$-bar induction is equivalent to $\omega$-model reflection for $\Pi^1_2$-formulas. Assume that we want to establish this reflection principle for a formula~$\varphi$. Girard has shown that the notion of dilator is $\Pi^1_2$-complete (see D.~Norman's proof in~\cite[Annex~8.E]{girard-book-part2}, which will also play an important role in Section~\ref{sect:constructin-derivative} below). Thus one may hope to construct a normal prae-dilator $T$ such that $\varphi$ is equivalent to the statement that ``$D^T_X$ is well-founded for every well-order~$X$''. Using our principle~(1) one could conclude that ``$D^{\partial T}_X$ is well-founded for every well-order~$X$''. By Arai's result this would yield the desired $\omega$-models of $\varphi$. When we started working on the present paper we planned to derive the equivalence between (1), (2) and (3) from Arai's result, by making the given argument precise. However, this has met with so many technical obstacles that it turned out easier to give a completely new proof.
To conclude this introduction we compare our result to a theorem of the first author~\cite{freund-thesis,freund-equivalence,freund-categorical,freund-computable}, which says that the following are equivalent over $\mathbf{RCA}_0$:
\begin{itemize}
\item Every dilator has a well-founded Bachmann-Howard fixed point.
\item The principle of $\Pi^1_1$-comprehension holds.
\end{itemize}
To explain what this means we point out that the first principle quantifies over arbitrary dilators~$T$, rather than just over normal ones. This includes cases where we have $\operatorname{otp}(D^T_\alpha)>\alpha$ for any ordinal~$\alpha$, so that $D^T$ cannot have a well-founded fixed point. The best we can hope for is a function $\vartheta:D^T_\alpha\rightarrow\alpha$ that is ``almost'' order-preserving (see~\cite{freund-equivalence} for a precise definition). If such a function exists, then $\alpha$ is called a Bachmann-Howard fixed point of $T$. This name has been chosen since the conditions on~$\vartheta$ are inspired by properties of the collapsing function used to define the Bachmann-Howard ordinal (cf.~in particular~\cite{rathjen-weiermann-kruskal}). It is worth noting that the notion of Bachmann-Howard fixed point is most interesting for dilators that are not normal (see the proof of~\cite[Proposition~3.3]{freund-computable} for an instructive example). As is well-known, $\Pi^1_1$-comprehension is much stronger than $\Pi^1_1$-bar induction. Thus the results of~\cite{freund-thesis} and the present paper help to explain why collapsing functions, rather than derivatives, are the crucial feature of strong ordinal notation systems.
\section{Normal dilators in second order arithmetic}\label{sect:normal-dils-so}
In the present section we define and investigate (prae-) dilators in the setting of reverse mathematics. Our approach is based on the work of Girard~\cite{girard-pi2} and on details worked out by the first author~\cite{freund-thesis,freund-computable}. We will also characterize normal prae-dilators and their (upper) derivatives.
Let us first discuss some category-theoretic prerequisites: To turn the class of (countable) linear orders into a category we take the order embeddings (strictly increasing functions) as morphisms. By the category of natural numbers we mean the full subcategory with the finite orders $n=\{0,\dots,n-1\}$ (ordered as usual) as objects. Note that this yields a small category equivalent to the category of finite orders. The equivalence is witnessed by the increasing enumerations $\operatorname{en}_a:|a|\rightarrow a$, where $|a|=\{0,\dots,|a|-1\}$ denotes the cardinality of the finite order~$a$. For each embedding $f:a\rightarrow b$ there is a unique increasing function $|f|:|a|\rightarrow |b|$ with
\begin{equation*}
\operatorname{en}_b\circ|f|=f\circ\operatorname{en}_a.
\end{equation*}
Thus~$|\cdot|$ and $\operatorname{en}$ become a functor and a natural isomorphism. We will also consider the finite subset functor $[\cdot]^{<\omega}$ on the category of sets, with
\begin{align*}
[X]^{<\omega}&=\text{``the set of finite subsets of $X$''},\\
[f]^{<\omega}(a)&=\{f(x)\,|\,x\in a\}.
\end{align*}
Of course, $[n]^{<\omega}$ is the full power set of $\{0,\dots,n-1\}$. We will often write the arguments of a functor $T$ as subscripts, so that a morphism $f:X\rightarrow Y$ is transformed into $T_f:T_X\rightarrow T_Y$. When we want to avoid iterated subscripts we revert to the notation $T(f):T(X)\rightarrow T(Y)$. Hereditarily finite sets with the natural numbers as urelements can be coded by natural numbers. It is straightforward to see that basic relations and operations on these sets are primitive recursive in the codes. This allows us to introduce the following notion in the theory~$\mathbf{RCA}_0$, as in~\cite{freund-computable}:
\begin{definition}[$\mathbf{RCA}_0$]\label{def:coded-prae-dilator}
A coded prae-dilator consists of
\begin{enumerate}[label=(\roman*)]
\item a functor $T$ from the category of natural numbers to the category of linear orders with fields $T_n\subseteq\mathbb N$ and
\item a natural transformation $\operatorname{supp}^T:T\Rightarrow[\cdot]^{<\omega}$ such that any $\sigma\in T_n$ lies in the range of $T_{\iota_\sigma\circ\operatorname{en}_\sigma}$, where
\begin{equation*}
|\operatorname{supp}^T_n(\sigma)|\xrightarrow{\mathmakebox[2em]{\operatorname{en}_\sigma}}\operatorname{supp}^T_n(\sigma)\xhookrightarrow{\mathmakebox[2em]{\iota_\sigma}}n=\{0,\dots,n-1\}
\end{equation*}
factors the unique morphism with range $\operatorname{supp}^T_n(\sigma)\subseteq n$.
\end{enumerate}
\end{definition}
More precisely, the functor~$T$ is represented by the sets
\begin{align*}
T^0&=\{\langle 0,n,\sigma\rangle\,|\,\sigma\in T_n\}\cup\{\langle 1,n,\sigma,\tau\rangle\,|\,\sigma<_{T_n}\tau\},\\
T^1&=\{\langle f,\sigma,\tau\rangle\,|\,T_f(\sigma)=\tau\}
\end{align*}
of natural numbers. The natural transformation $\operatorname{supp}^T$ is represented by the set
\begin{equation*}
\operatorname{supp}^T=\{\langle n,\sigma,a\rangle\,|\,\operatorname{supp}^T_n(\sigma)=a\}.
\end{equation*}
Thus an expression such as $\sigma\in T_n$ is an abbreviation for $\langle 0,n,\sigma\rangle\in T^0$, which is a $\Delta^0_1$-formula in $\mathbf{RCA}_0$. The statement that $T$ is a coded prae-dilator is easily seen to be arithmetical in the sets $T^0,T^1,\operatorname{supp}^T\subseteq\mathbb N$.
As an example we consider the sets
\begin{equation*}
\omega^n=\{\langle n_0,\dots,n_{k-1}\rangle\,|\,n>n_0\geq\dots\geq n_{k-1}\}
\end{equation*}
with the lexicographic order (it may help to think of $\langle n_0,\dots,n_{k-1}\rangle$ as the formal Cantor normal form $\omega^{n_0}+\dots+\omega^{n_{k-1}}$). To obtain a functor we map each morphism $f:n\rightarrow m$ to the embedding $\omega^f:\omega^n\rightarrow\omega^m$ with
\begin{equation*}
\omega^f(\langle n_0,\dots,n_{k-1}\rangle)=\langle f(n_0),\dots,f(n_{k-1})\rangle.
\end{equation*}
If we define $\operatorname{supp}^\omega_n:\omega^n\rightarrow[n]^{<\omega}$ by
\begin{equation*}
\operatorname{supp}^\omega_n(\langle n_0,\dots,n_{k-1}\rangle)=\{n_0,\dots,n_{k-1}\},
\end{equation*}
then we get a coded prae-dilator, as one readily verifies.
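To see the definitions in action, note that in $\omega^3$ we have
\begin{equation*}
\langle 2,0\rangle<_{\omega^3}\langle 2,1\rangle<_{\omega^3}\langle 2,2,2\rangle,
\end{equation*}
in accordance with the inequalities $\omega^2+\omega^0<\omega^2+\omega^1<\omega^2+\omega^2+\omega^2$ between the corresponding Cantor normal forms. Naturality of the supports can be checked on the embedding $f:3\rightarrow 5$ with $f(i)=i+2$: we get $\omega^f(\langle 2,0\rangle)=\langle 4,2\rangle$ and indeed
\begin{equation*}
[f]^{<\omega}(\operatorname{supp}^\omega_3(\langle 2,0\rangle))=\{2,4\}=\operatorname{supp}^\omega_5(\omega^f(\langle 2,0\rangle)).
\end{equation*}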
Let us now discuss how the coded prae-dilator $n\mapsto\omega^n$ from the previous paragraph can be extended beyond the category of natural numbers: The idea is to view the numbers $n_i$ as variables for the elements of a given order. For example the pair $\langle\{\beta,\gamma\},\langle 1,1,0\rangle\,\rangle$ with ordinals $\alpha>\beta>\gamma$ represents the ordinal~\mbox{$\omega^\beta+\omega^\beta+\omega^\gamma$}. Note that any ordinal below $\omega^\alpha$ can be represented in this way. To make the representations unique we require that the numbers in the second component are as small as possible. Thus $\langle\{\beta,\gamma\},\langle 3,3,1\rangle\,\rangle$ would not be a valid representation. In order to formulate this requirement in general we will rely on the observation that we have $\langle 1,1,0\rangle\in\omega^2=\omega^{|\{\beta,\gamma\}|}$. One should also demand that all elements of the first component do occur in the second. Thus $\langle\{\beta,\gamma,\delta\},\langle 1,1,0\rangle\,\rangle$ with $\alpha>\beta>\gamma>\delta$ would not be a valid representation. This can be expressed via the condition~$\operatorname{supp}^\omega_{|\{\beta,\gamma\}|}(\langle 1,1,0\rangle)=\{0,1\}=2=|\{\beta,\gamma\}|$. In general we have the following construction, which has been given in~\cite{freund-computable}:
\begin{definition}[$\mathbf{RCA}_0$]\label{def:coded-prae-dilator-reconstruct}
Consider a coded prae-dilator $T=(T,\operatorname{supp}^T)$. For each order~$X$ we define a set $D^T_X$ and a binary relation $<_{D^T_X}$ on $D^T_X$ by
\begin{gather*}
D^T_X=\{\langle a,\sigma\rangle\,|\,a\in[X]^{<\omega}\text{ and }\sigma\in T_{|a|}\text{ and }\operatorname{supp}^T_{|a|}(\sigma)=|a|\},\\
\langle a,\sigma\rangle<_{D^T_X}\langle b,\tau\rangle\Leftrightarrow T_{|\iota_a^{a\cup b}|}(\sigma)<_{T_{|a\cup b|}}T_{|\iota_b^{a\cup b}|}(\tau),
\end{gather*}
where $\iota_a^{a\cup b}:a\hookrightarrow a\cup b$ and $\iota_b^{a\cup b}:b\hookrightarrow a\cup b$ denote the inclusions between suborders of~$X$. Given an embedding $f:X\rightarrow Y$, we define $D^T_f:D^T_X\rightarrow D^T_Y$ by
\begin{equation*}
D^T_f(\langle a,\sigma\rangle)=\langle [f]^{<\omega}(a),\sigma\rangle.
\end{equation*}
To define a family of functions $\operatorname{supp}^{D^T}_X:D^T_X\Rightarrow[X]^{<\omega}$ we set
\begin{equation*}
\operatorname{supp}^{D^T}_X(\langle a,\sigma\rangle)=a
\end{equation*}
for each order~$X$.
\end{definition}
In order to see that $D^T_f(\langle a,\sigma\rangle)$ still satisfies the uniqueness conditions (i.\,e.~that we have $\sigma\in T_{|[f]^{<\omega}(a)|}$ and $\operatorname{supp}^T_{|[f]^{<\omega}(a)|}(\sigma)=|[f]^{<\omega}(a)|$) it suffices to note that $[f]^{<\omega}(a)$ has the same cardinality as~$a$. The following shows that the conditions from Definition~\ref{def:coded-prae-dilator} extend beyond the category of natural numbers (in part~(ii) one could replace $\iota_{\langle a,\sigma\rangle}$ by $\iota_{\langle a,\sigma\rangle}\circ\operatorname{en}_a$, since $\operatorname{en}_a:|a|\rightarrow a$ is an isomorphism).
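As an illustration, consider the prae-dilator $n\mapsto\omega^n$ from above and elements $x<_Xy$ of an order~$X$. The pairs $\langle\{x\},\langle 0\rangle\rangle$ and $\langle\{x,y\},\langle 1,0\rangle\rangle$ lie in $D^\omega_X$ (they represent $\omega^x$ and $\omega^y+\omega^x$). To compare them we set $a=\{x\}$ and $b=\{x,y\}$, form $a\cup b=\{x,y\}$ and compute
\begin{equation*}
\omega^{|\iota_a^{a\cup b}|}(\langle 0\rangle)=\langle 0\rangle<_{\omega^2}\langle 1,0\rangle=\omega^{|\iota_b^{a\cup b}|}(\langle 1,0\rangle),
\end{equation*}
so that $\langle\{x\},\langle 0\rangle\rangle<_{D^\omega_X}\langle\{x,y\},\langle 1,0\rangle\rangle$ holds, as one would expect.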
\begin{proposition}[$\mathbf{RCA}_0$]\label{prop:reconstruct-class-sized-dil}
If $T$ is a coded prae-dilator, then
\begin{enumerate}[label=(\roman*)]
\item the maps $X\mapsto(D^T_X,<_{D^T_X})$ and $f\mapsto D^T_f$ form an endofunctor on the category of linear orders and
\item the map $X\mapsto\operatorname{supp}^{D^T}_X$ is a natural transformation between $D^T$ and $[\cdot]^{<\omega}$, with the property that any $\langle a,\sigma\rangle\in D^T_X$ lies in the range of $D^T_{\iota_{\langle a,\sigma\rangle}}$, where
\begin{equation*}
\iota_{\langle a,\sigma\rangle}:\operatorname{supp}^{D^T}_X(\langle a,\sigma\rangle)=a\hookrightarrow X
\end{equation*}
is the inclusion.
\end{enumerate}
\end{proposition}
\begin{proof}
In~\cite[Lemma~2.4]{freund-computable} the same has been shown in a stronger base theory (we point out that the uniqueness conditions are crucial for the linearity of $<_{D^T_X}$). It is straightforward to check that the proof goes through in~$\mathbf{RCA}_0$.
\end{proof}
While $D^T$ is a class-sized object, its restriction $D^T\!\restriction\!\mathbb N$ to the category of natural numbers can be constructed in~$\mathbf{RCA}_0$. The following is similar to~\cite[Proposition~2.5]{freund-computable}. Nevertheless we give a detailed proof, since we want to refer to it later.
\begin{lemma}[$\mathbf{RCA}_0$]\label{lem:class-sized-restrict}
If $T$ is a coded prae-dilator, then so is $D^T\!\restriction\!\mathbb N$. In this case we get a natural isomorphism $\eta^T:D^T\!\restriction\!\mathbb N\Rightarrow T$ by setting
\begin{equation*}
\eta^T_n(\langle a,\sigma\rangle)=T_{\iota_a\circ\operatorname{en}_a}(\sigma),
\end{equation*}
where $\iota_a:a\hookrightarrow n$ is the inclusion.
\end{lemma}
\begin{proof}
The previous proposition implies that $D^T\!\restriction\!\mathbb N$ is a coded prae-dilator. To see that $\eta^T_n$ is order preserving we consider an inequality $\langle a,\sigma\rangle<_{D^T_n}\langle b,\tau\rangle$. According to Definition~\ref{def:coded-prae-dilator-reconstruct} this amounts to $T_{|\iota_a^{a\cup b}|}(\sigma)<_{T_{|a\cup b|}}T_{|\iota_b^{a\cup b}|}(\tau)$. Write $\iota_{a\cup b}:a\cup b\hookrightarrow n$ and observe
\begin{equation*}
\iota_{a\cup b}\circ\operatorname{en}_{a\cup b}\circ|\iota_a^{a\cup b}|=\iota_{a\cup b}\circ\iota_a^{a\cup b}\circ\operatorname{en}_a=\iota_a\circ\operatorname{en}_a.
\end{equation*}
Applying $T_{\iota_{a\cup b}\circ\operatorname{en}_{a\cup b}}$ to both sides of the above inequality we obtain
\begin{multline*}
\eta^T_n(\langle a,\sigma\rangle)=T_{\iota_a\circ\operatorname{en}_a}(\sigma)=T_{\iota_{a\cup b}\circ\operatorname{en}_{a\cup b}}\circ T_{|\iota_a^{a\cup b}|}(\sigma)<_{T_n}{}\\
{}<_{T_n}T_{\iota_{a\cup b}\circ\operatorname{en}_{a\cup b}}\circ T_{|\iota_b^{a\cup b}|}(\tau)=T_{\iota_b\circ\operatorname{en}_b}(\tau)=\eta^T_n(\langle b,\tau\rangle).
\end{multline*}
To establish naturality we consider an order preserving function~$f:n\rightarrow m$. Write $\iota_{[f]^{<\omega}(a)}:[f]^{<\omega}(a)\hookrightarrow m$ and observe that we have
\begin{equation*}
f\circ\iota_a\circ\operatorname{en}_a=\iota_{[f]^{<\omega}(a)}\circ\operatorname{en}_{[f]^{<\omega}(a)},
\end{equation*}
as both sides are order isomorphisms between $|a|=|[f]^{<\omega}(a)|$ and $[f]^{<\omega}(a)\subseteq n$. We can deduce
\begin{multline*}
\eta^T_m\circ D^T_f(\langle a,\sigma\rangle)=\eta^T_m(\langle[f]^{<\omega}(a),\sigma\rangle)=T_{\iota_{[f]^{<\omega}(a)}\circ\operatorname{en}_{[f]^{<\omega}(a)}}(\sigma)=\\
=T_f\circ T_{\iota_a\circ\operatorname{en}_a}(\sigma)=T_f\circ\eta^T_n(\langle a,\sigma\rangle).
\end{multline*}
By the definition of coded prae-dilator any $\sigma\in T_n$ can be written as $\sigma=T_{\iota_a\circ\operatorname{en}_a}(\sigma_0)$ with $a=\operatorname{supp}^T_n(\sigma)$ and $\sigma_0\in T_{|a|}$. In view of
\begin{equation*}
[\iota_a\circ\operatorname{en}_a]^{<\omega}(\operatorname{supp}^T_{|a|}(\sigma_0))=\operatorname{supp}^T_n(T_{\iota_a\circ\operatorname{en}_a}(\sigma_0))=\operatorname{supp}^T_n(\sigma)=a
\end{equation*}
we have $\operatorname{supp}^T_{|a|}(\sigma_0)=|a|$ and hence $\langle a,\sigma_0\rangle\in D^T_n$. Since $\eta^T_n(\langle a,\sigma_0\rangle)=\sigma$ holds by construction we can conclude that $\eta^T_n$ is surjective.
\end{proof}
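For the prae-dilator $n\mapsto\omega^n$ the isomorphism $\eta^\omega_n$ simply substitutes the elements of~$a$ for the exponents: e.\,g.~for $a=\{1,3\}\subseteq 4$ the map $\iota_a\circ\operatorname{en}_a$ sends $0$ to $1$ and $1$ to $3$, so that we get
\begin{equation*}
\eta^\omega_4(\langle\{1,3\},\langle 1,0\rangle\rangle)=\omega^{\iota_a\circ\operatorname{en}_a}(\langle 1,0\rangle)=\langle 3,1\rangle\in\omega^4,
\end{equation*}
which matches the reading of both sides as the Cantor normal form $\omega^3+\omega^1$.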
As indicated in the introduction, the following notion plays a crucial role (there is no ambiguity since the two obvious definitions of well-foundedness are equivalent in $\mathbf{RCA}_0$, see e.\,g.~\cite[Lemma~2.3.12]{freund-thesis}):
\begin{definition}[$\mathbf{RCA}_0$]\label{def:coded-dilator}
A coded prae-dilator~$T$ is called a coded dilator if $D^T_X$ is well-founded for every well-order~$X$.
\end{definition}
Let us briefly discuss (prae-) dilators from the perspective of a theory that allows for more general class-sized objects: In such a theory one would declare that a class-sized prae-dilator consists of
\begin{itemize}
\item an endofunctor $T$ on the category of linear orders and
\item a natural transformation $\operatorname{supp}^T:T\Rightarrow[\cdot]^{<\omega}$ such that any $\sigma\in T_X$ lies in the range of $T_{\iota_\sigma}$, where $\iota_\sigma:\operatorname{supp}^T_X(\sigma)\hookrightarrow X$ is the inclusion.
\end{itemize}
If $T_X$ is well-founded for every well-order~$X$, then one would call $T=(T,\operatorname{supp}^T)$ a class-sized dilator. In~\cite[Remark~2.2.2]{freund-thesis} we have verified that the existence of (necessarily unique and hence natural) support functions $\operatorname{supp}^T_X$ is equivalent to the assertion that $T$ preserves direct limits and pullbacks. It follows that the given definition of dilators is equivalent to the original one by Girard~\cite{girard-pi2} (but our prae-dilators are not quite equivalent to Girard's pre-dilators, since the latter must satisfy an additional monotonicity condition that is automatic in the well-founded case). Consider a class-sized prae-dilator~$T$ such that we have $T_n\subseteq\mathbb N$ for every number~$n$. Then the restriction $T\!\restriction\!\mathbb N$ is a coded prae-dilator. Proposition~\ref{prop:reconstruct-class-sized-dil} tells us that $D^{T\restriction\mathbb N}$ is a class-sized prae-dilator. The equivalence from Lemma~\ref{lem:class-sized-restrict} is readily extended to a natural isomorphism between $D^{T\restriction\mathbb N}$ and $T$ (see~\cite[Proposition~2.5]{freund-computable}), which means that we have reconstructed the class-sized prae-dilator $T$ from its set-sized restriction. In view of $D^{T\restriction\mathbb N}_X\cong T_X$ it is immediate that $T\!\restriction\!\mathbb N$ is a coded dilator if~$T$ is a class-sized dilator. The converse is somewhat more subtle, since Definition~\ref{def:coded-dilator} only quantifies over well-orders with field $X\subseteq\mathbb N$. Girard~\cite[Theorem~2.1.15]{girard-pi2} has shown that it suffices to test the preservation of well-foundedness on countable orders. Thus it is true that $D^T$ is a class-sized dilator for any coded dilator~$T$. In second order arithmetic we can consider the orders $T_X$ and the isomorphisms $D^{T\restriction\mathbb N}_X\cong T_X$ when $T$ is a specific class-sized prae-dilator with a computable construction. 
This can be useful when $T_X$ has a more transparent description than $D^{T\restriction\mathbb N}_X$ (as in the example above, where the term $\omega^\beta+\omega^\beta+\omega^\gamma\in\omega^\alpha$ is more intelligible than the expression $\langle\{\beta,\gamma\},\langle 1,1,0\rangle\,\rangle\in D^\omega_\alpha$). On the other hand, second order arithmetic cannot reason about class-sized prae-dilators in general (i.\,e.~quantify over them). Thus we will mostly be concerned with coded prae-dilators, which are more important on a theoretical level. We will often omit the specification ``coded'' to improve readability.
Arguing in a sufficiently strong set theory, each coded dilator $T$ induces a function $\alpha\mapsto\operatorname{otp}(D^T_\alpha)$ on the ordinals. To see that this function need not be normal we consider the coded dilator that maps $n$ to the order
\begin{equation*}
T_n=\{0,\dots,n-1\}\cup\{\Omega\}
\end{equation*}
with a new biggest element~$\Omega$. Its action on a morphism $f:n\rightarrow m$ and the support functions $\operatorname{supp}^T_n:T_n\rightarrow[n]^{<\omega}$ are given by
\begin{equation*}
T_f(\sigma)=\begin{cases}
f(\sigma) & \text{if $\sigma\in\{0,\dots,n-1\}$},\\
\Omega & \text{if $\sigma=\Omega$},
\end{cases}\quad
\operatorname{supp}^T_n(\sigma)=\begin{cases}
\{\sigma\} & \text{if $\sigma\in\{0,\dots,n-1\}$},\\
\emptyset & \text{if $\sigma=\Omega$}.
\end{cases}
\end{equation*}
It is straightforward to check that
\begin{equation*}
D^T_X=\{\langle\{x\},0\rangle\,|\,x\in X\}\cup\{\langle\emptyset,\Omega\rangle\}\qquad\text{(with $0\in 1\subseteq T_1$ and $\Omega\in T_0$)}
\end{equation*}
is isomorphic to $X\cup\{\Omega\}$ (where $\Omega$ is still the biggest element). Thus we have $\operatorname{otp}(D^T_\alpha)=\alpha+1$, which means that the function induced by $T$ is not continuous at limit stages and does not have any fixed points.
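To verify one instance of this isomorphism by hand, compare $\langle\{x\},0\rangle$ and $\langle\emptyset,\Omega\rangle$ according to Definition~\ref{def:coded-prae-dilator-reconstruct}: with $a=\{x\}$ and $b=\emptyset$ we have $a\cup b=\{x\}$, and the inclusions yield
\begin{equation*}
T_{|\iota_a^{a\cup b}|}(0)=0<_{T_1}\Omega=T_{|\iota_b^{a\cup b}|}(\Omega),
\end{equation*}
so that $\langle\{x\},0\rangle<_{D^T_X}\langle\emptyset,\Omega\rangle$ holds for every $x\in X$.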
To analyze the given counterexample we observe that the functor~$T$ from the previous paragraph does not preserve initial segments: Given that the range of $f:n\rightarrow m$ is an initial segment of $m$, we cannot infer that the range of $T_f$ is an initial segment of $T_m$ (since it contains the element~$\Omega$). Indeed, Aczel~\cite{aczel-phd,aczel-normal-functors} and Girard~\cite{girard-pi2} have identified preservation of initial segments as the crucial condition that reconciles categorical continuity, i.\,e.~preservation of direct limits, and the usual notion of continuity at limit ordinals (paraphrasing Girard). More precisely, Aczel focuses on initial segments of the form
\begin{equation*}
X\!\restriction\!x=\{y\in X\,|\,y<_Xx\},
\end{equation*}
where $x$ is an element of the linear order~$X=(X,<_X)$. It will be convenient to have the following notation: For $a,b\in[X]^{<\omega}$ we abbreviate
\begin{equation*}
a<^{\operatorname{fin}}_X b\quad\Leftrightarrow\quad\forall_{x\in a}\exists_{y\in b}\,x<_X y.
\end{equation*}
The relation $\leq^{\operatorname{fin}}_X$ is defined in the same way, with $\leq_X$ in place of $<_X$. We omit the subscript when we refer to the usual order on the natural numbers or on the ordinals. In the case of a singleton we write $a<^{\operatorname{fin}}_X y$ rather than $a<^{\operatorname{fin}}_X\{y\}$. Note that this makes $a<^{\operatorname{fin}}_Xx$ equivalent to $a\subseteq X\!\restriction\!x$. The following is fundamental:
\begin{lemma}[$\mathbf{RCA}_0$]\label{lem:range-dil-support}
If $T$ is a coded prae-dilator, then we have
\begin{equation*}
\operatorname{rng}(D^T_f)=\{\langle a,\sigma\rangle\in D^T_Y\,|\,a\subseteq\operatorname{rng}(f)\}
\end{equation*}
for any order embedding~$f:X\rightarrow Y$.
\end{lemma}
\begin{proof}
For the inclusion $\subseteq$ it suffices to recall $D^T_f(\langle b,\sigma\rangle)=\langle [f]^{<\omega}(b),\sigma\rangle$. Conversely, induction on the size of $a\subseteq\operatorname{rng}(f)$ yields a finite $b\subseteq X$ with $[f]^{<\omega}(b)=a$. Then $\langle a,\sigma\rangle\in D^T_Y$ is the image of $\langle b,\sigma\rangle\in D^T_X$ (observe $|b|=|a|$).
\end{proof}
Preservation of initial segments can now be characterized as follows:
\begin{corollary}[$\mathbf{RCA}_0$]\label{cor:initial-segments-supports}
Consider a coded prae-dilator $T$ and a linear order $X$. The following are equivalent for any elements $x\in X$ and $\rho\in D^T_X$:
\begin{enumerate}[label=(\roman*)]
\item We have $\operatorname{rng}(D^T_{\iota_x})=D^T_X\!\restriction\!\rho$, where $\iota_x:X\!\restriction\!x\hookrightarrow X$ is the inclusion.
\item For any $\langle a,\sigma\rangle\in D^T_X$ we have
\begin{equation*}
\langle a,\sigma\rangle<_{D^T_X}\rho\quad\Leftrightarrow\quad a<^{\operatorname{fin}}_X x.
\end{equation*}
\end{enumerate}
\end{corollary}
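To connect this with our example, consider the prae-dilator $n\mapsto\omega^n$ and the element $\rho=\langle\{x\},\langle 0\rangle\rangle\in D^\omega_X$, which represents the ordinal~$\omega^x$. Here condition~(ii) amounts to the familiar fact that an ordinal in Cantor normal form lies below $\omega^x$ precisely if all its exponents lie below~$x$; this instance of the equivalence will be established in general by Proposition~\ref{prop:reconstruct-normal-dil} below.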
We will see that a coded dilator with the following property does induce a normal function on the ordinals.
\begin{definition}[$\mathbf{RCA}_0$]\label{def:coded-normal-dil}
A normal prae-dilator consists of a (coded) prae-dilator~$T$ and a natural family of order embeddings $\mu^T_n:n\rightarrow T_n$ such that we have
\begin{equation*}
\sigma<_{T_n}\mu^T_n(m)\quad\Leftrightarrow\quad\operatorname{supp}^T_n(\sigma)<^{\operatorname{fin}} m
\end{equation*}
for all numbers $m<n$ and all elements $\sigma\in T_n$.
\end{definition}
Note that the family of functions $\mu^T_n$ can be represented by the set
\begin{equation*}
\mu^T=\{\langle n,m,\rho\rangle\,|\,\mu^T_n(m)=\rho\}
\end{equation*}
of natural numbers. As an example we recall the coded dilator $n\mapsto\omega^n$ considered above. It is straightforward to verify that we obtain a normal dilator by setting
\begin{equation*}
\mu^\omega_n(m)=\langle m\rangle\in\omega^n
\end{equation*}
for all numbers $m<n$. Recall that $\langle m\rangle$ corresponds to the formal Cantor normal form $\omega^m$. This suggests to think of $\mu^\omega_n$ as the restriction of the normal function $\alpha\mapsto\omega^\alpha$ to the finite ordinal~$n$. A formal version of this idea can be found in the proof of Proposition~\ref{prop:normal-dil-fct} below. Before we can formulate it we must extend $\mu^T$ beyond the category of natural numbers. This relies on the following observation:
\begin{lemma}[$\mathbf{RCA}_0$]\label{lem:support-mu}
If $T=(T,\mu^T)$ is a normal prae-dilator, then we have
\begin{equation*}
\operatorname{supp}^T_n(\mu^T_n(m))=\{m\}
\end{equation*}
for all numbers $m<n$.
\end{lemma}
\begin{proof}
Define $\iota:1\rightarrow n$ by $\iota(0)=m$. By the naturality of $\mu^T$ and $\operatorname{supp}^T$ we get
\begin{equation*}
\operatorname{supp}^T_n(\mu^T_n(m))=\operatorname{supp}^T_n(\mu^T_n(\iota(0)))=[\iota]^{<\omega}(\operatorname{supp}^T_1(\mu^T_1(0)))\subseteq\operatorname{rng}(\iota)=\{m\}.
\end{equation*}
So it remains to show that we cannot have $\operatorname{supp}^T_n(\mu^T_n(m))=\emptyset$. The latter would imply $\operatorname{supp}^T_n(\mu^T_n(m))<^{\operatorname{fin}} m$ and hence $\mu^T_n(m)<_{T_n}\mu^T_n(m)$, which is impossible.
\end{proof}
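In the example of the normal dilator $n\mapsto\omega^n$ the lemma can be confirmed directly: we have
\begin{equation*}
\operatorname{supp}^\omega_n(\mu^\omega_n(m))=\operatorname{supp}^\omega_n(\langle m\rangle)=\{m\},
\end{equation*}
in accordance with the fact that the Cantor normal form $\omega^m$ mentions precisely the exponent~$m$.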
In particular the lemma yields $\operatorname{supp}^T_{|\{x\}|}(\mu^T_1(0))=|\{x\}|$, which secures the uniqueness condition needed for the following construction:
\begin{definition}[$\mathbf{RCA}_0$]\label{def:extend-normal-transfos}
Let $T$ be a normal prae-dilator. For each order~$X$ we define $D^{\mu^T}_X:X\rightarrow D^T_X$ by setting
\begin{equation*}
D^{\mu^T}_X(x)=\langle\{x\},\mu^T_1(0)\rangle
\end{equation*}
for all elements $x\in X$.
\end{definition}
The reader may have noticed that only the value $\mu^T_1(0)$ was needed in order to extend $\mu^T$ to arbitrary linear orders. To state the equivalence from Definition~\ref{def:coded-normal-dil} for all numbers~$n$ it is nevertheless convenient to consider the entire family of functions $\mu^T_n:n\rightarrow T_n$ as given. Let us verify that the defining property of normal prae-dilators extends to all linear orders:
\begin{proposition}[$\mathbf{RCA}_0$]\label{prop:reconstruct-normal-dil}
If $T$ is a normal prae-dilator, then the functions $D^{\mu^T}_X:X\rightarrow D^T_X$ form a natural family of order embeddings. Furthermore we have
\begin{equation*}
\langle a,\sigma\rangle<_{D^T_X}D^{\mu^T}_X(x)\quad\Leftrightarrow\quad a<^{\operatorname{fin}}_X x
\end{equation*}
for any order~$X$ and any element $\langle a,\sigma\rangle\in D^T_X$.
\end{proposition}
\begin{proof}
To show that~$D^{\mu^T}_X$ is an embedding we consider $x_0<_Xx_1$. For $j\in\{0,1\}$ we write $\iota_j:\{x_j\}\hookrightarrow\{x_0,x_1\}$. Using the naturality of $\mu^T$ and the fact that $\mu^T_2$ is order preserving we get
\begin{equation*}
T_{|\iota_0|}(\mu^T_1(0))=\mu^T_2(|\iota_0|(0))=\mu^T_2(0)<_{T_2}\mu^T_2(1)=\mu^T_2(|\iota_1|(0))=T_{|\iota_1|}(\mu^T_1(0)).
\end{equation*}
According to Definition~\ref{def:coded-prae-dilator-reconstruct} this yields
\begin{equation*}
D^{\mu^T}_X(x_0)=\langle\{x_0\},\mu^T_1(0)\rangle<_{D^T_X}\langle\{x_1\},\mu^T_1(0)\rangle=D^{\mu^T}_X(x_1),
\end{equation*}
as desired. To see that $D^{\mu^T}$ is natural we compute
\begin{equation*}
D^T_f(D^{\mu^T}_X(x))=\langle[f]^{<\omega}(\{x\}),\mu^T_1(0)\rangle=\langle\{f(x)\},\mu^T_1(0)\rangle=D^{\mu^T}_Y(f(x)).
\end{equation*}
It remains to establish the stated equivalence: First assume that we have
\begin{equation*}
\langle a,\sigma\rangle<_{D^T_X}D^{\mu^T}_X(x)=\langle\{x\},\mu^T_1(0)\rangle.
\end{equation*}
Write $\iota_0:a\hookrightarrow a\cup\{x\}$ and $\iota_1:\{x\}\hookrightarrow a\cup\{x\}$ for the inclusions. By definition of the order on $D^T_X$ we have
\begin{equation*}
T_{|\iota_0|}(\sigma)<_{T_{|a\cup\{x\}|}}T_{|\iota_1|}(\mu^T_1(0))=\mu^T_{|a\cup\{x\}|}(|\iota_1|(0)).
\end{equation*}
Using the equivalence from Definition~\ref{def:coded-normal-dil} we can deduce
\begin{equation*}
[|\iota_0|]^{<\omega}(|a|)=[|\iota_0|]^{<\omega}(\operatorname{supp}^T_{|a|}(\sigma))=\operatorname{supp}^T_{|a\cup\{x\}|}(T_{|\iota_0|}(\sigma))<^{\operatorname{fin}} |\iota_1|(0).
\end{equation*}
This implies $a<^{\operatorname{fin}}_X x$, as desired. To establish the converse implication one follows the argument backwards, noting that $a<^{\operatorname{fin}}_X x$ implies $[|\iota_0|]^{<\omega}(|a|)<^{\operatorname{fin}} |\iota_1|(0)$.
\end{proof}
Working in a sufficiently strong set theory, we can now prove that normal dilators do induce normal functions. This result is due to Aczel~\cite[Theorem~2.11]{aczel-phd}.
\begin{proposition}\label{prop:normal-dil-fct}
If $T$ is a normal dilator, then $\alpha\mapsto\operatorname{otp}(D^T_\alpha)$ is a normal function.
\end{proposition}
\begin{proof}
As a preparation we observe the following: Writing $\iota_x:X\!\restriction\!x\hookrightarrow X$ for the inclusion, we can combine Corollary~\ref{cor:initial-segments-supports} and Proposition~\ref{prop:reconstruct-normal-dil} to see that the range of $D^T_{\iota_x}$ is equal to $D^T_X\!\restriction\!D^{\mu^T}_X(x)$. Since $D^T_{\iota_x}$ is an order embedding this yields
\begin{equation*}
D^T_{X\restriction x}\cong D^T_X\!\restriction\!D^{\mu^T}_X(x)
\end{equation*}
for any order~$X$ and any~$x\in X$. Now we prove that $\alpha\mapsto\operatorname{otp}(D^T_\alpha)$ is strictly increasing: If we have $\alpha<\beta$, then $\alpha$ is isomorphic (and, with the usual set-theoretic definition of ordinals, even equal) to $\beta\!\restriction\!\alpha$. As $D^T$ is functorial (see Proposition~\ref{prop:reconstruct-class-sized-dil}) we get $D^T_\alpha\cong D^T_{\beta\restriction\alpha}$. Together with the above observation this yields
\begin{equation*}
\operatorname{otp}(D^T_\alpha)=\operatorname{otp}(D^T_{\beta\restriction\alpha})=\operatorname{otp}(D^T_\beta\!\restriction\!D^{\mu^T}_\beta(\alpha))<\operatorname{otp}(D^T_\beta).
\end{equation*}
To conclude that $\alpha\mapsto\operatorname{otp}(D^T_\alpha)$ is a normal function we must establish
\begin{equation*}
\operatorname{otp}(D^T_\lambda)\leq\sup\{\operatorname{otp}(D^T_\alpha)\,|\,\alpha<\lambda\}
\end{equation*}
when $\lambda$ is a limit ordinal. Given an element $\langle a,\sigma\rangle\in D^T_\lambda$, pick an ordinal $\alpha<\lambda$ with $a<^{\operatorname{fin}}\alpha$. Then Lemma~\ref{lem:range-dil-support} tells us that $\langle a,\sigma\rangle$ lies in the range of $D^T_{\iota_\alpha}$. By the above we obtain
\begin{equation*}
\operatorname{otp}(D^T_\lambda\!\restriction\!\langle a,\sigma\rangle)<\operatorname{otp}(D^T_\lambda\!\restriction\!D^{\mu^T}_\lambda(\alpha))=\operatorname{otp}(D^T_\alpha).
\end{equation*}
Since $\langle a,\sigma\rangle\in D^T_\lambda$ was arbitrary this implies the claim.
\end{proof}
Our next goal is to define upper derivatives of normal prae-dilators. Recall that, on the level of normal functions, $g$ is an upper derivative of $f$ if we have $f\circ g(\alpha)\leq g(\alpha)$ for every ordinal~$\alpha$ (note that $\geq$ is automatic). The inequality can be witnessed by an order embedding. This suggests that we look at compositions of, and natural transformations between, coded prae-dilators. To form $T\circ S$ we must extend $T$ beyond the natural numbers, since $S_n$ can be an infinite order:
\begin{definition}[$\mathbf{RCA}_0$]\label{def:compose-dils}
Let $T$ and $S$ be coded prae-dilators. For each number~$n$ and each morphism $f:n\rightarrow m$ we put
\begin{equation*}
(T\circ S)_n=D^T(S_n),\qquad (T\circ S)_f=D^T(S_f),
\end{equation*}
where $D^T(S_n)$ is ordered according to Definition~\ref{def:coded-prae-dilator-reconstruct}. We also define a family of functions $\operatorname{supp}^{T\circ S}_n:(T\circ S)_n\rightarrow[n]^{<\omega}$ by setting
\begin{equation*}
\operatorname{supp}^{T\circ S}_n(\langle a,\tau\rangle)=\bigcup_{\sigma\in a}\operatorname{supp}^S_n(\sigma)
\end{equation*}
for each number~$n$.
\end{definition}
It is straightforward to see that $\mathbf{RCA}_0$ proves the existence of $T\circ S$. Crucially, the extension $D^{T\circ S}$ recovers the composition of~$D^T$ and $D^S$:
\begin{proposition}[$\mathbf{RCA}_0$]\label{prop:compose-dils-rca}
If $T$ and $S$ are coded (prae-) dilators, then so is $T\circ S$. We get a natural collection of isomorphisms $\zeta^{T,S}_X:D^T(D^S_X)\rightarrow D^{T\circ S}_X$ by setting
\begin{equation*}
\zeta^{T,S}_X(\langle\{\langle a_1,\sigma_1\rangle,\dots,\langle a_k,\sigma_k\rangle\},\tau\rangle)=\langle a_1\cup\dots\cup a_k,\langle\{S_{|\iota_1|}(\sigma_1),\dots,S_{|\iota_k|}(\sigma_k)\},\tau\rangle\,\rangle,
\end{equation*}
where $\iota_j:a_j\hookrightarrow a_1\cup\dots\cup a_k$ are the inclusion maps. Furthermore we have
\begin{equation*}
\operatorname{supp}^{D^{T\circ S}}_X(\zeta^{T,S}_X(\sigma))=\bigcup\{\operatorname{supp}^{D^S}_X(\rho)\,|\,\rho\in\operatorname{supp}^{D^T}_{D^S_X}(\sigma)\}
\end{equation*}
for any element $\sigma\in D^T(D^S_X)$.
\end{proposition}
\begin{proof}
One readily verifies that $T\circ S$ is a functor, using Proposition~\ref{prop:reconstruct-class-sized-dil}. The naturality of $\operatorname{supp}^{T\circ S}$ follows from the naturality of $\operatorname{supp}^S$. To see that the support condition from Definition~\ref{def:coded-prae-dilator} is satisfied we consider an arbitrary $\langle a,\tau\rangle\in(T\circ S)_n$. Abbreviate $c=\operatorname{supp}^{T\circ S}_n(\langle a,\tau\rangle)$ and observe that $\operatorname{supp}^S_n(\sigma)\subseteq c$ holds for any~$\sigma\in a$. Using the support condition for $S$ we get $\sigma\in\operatorname{rng}(S_{\iota_c\circ\operatorname{en}_c})$, where $\iota_c:c\hookrightarrow n$ is the inclusion. Induction on $|a|$ yields a finite set $b\subseteq S_{|c|}$ with $[S_{\iota_c\circ\operatorname{en}_c}]^{<\omega}(b)=a$. Since $S_{\iota_c\circ\operatorname{en}_c}$ is an embedding we have $|b|=|a|$ and hence $\langle b,\tau\rangle\in D^T(S_{|c|})$. In view of
\begin{equation*}
(T\circ S)_{\iota_c\circ\operatorname{en}_c}(\langle b,\tau\rangle)=D^T(S_{\iota_c\circ\operatorname{en}_c})(\langle b,\tau\rangle)=\langle [S_{\iota_c\circ\operatorname{en}_c}]^{<\omega}(b),\tau\rangle=\langle a,\tau\rangle
\end{equation*}
we learn that $\langle a,\tau\rangle$ lies in the range of~$(T\circ S)_{\iota_c\circ\operatorname{en}_c}$, as required. If $T$ and $S$ are coded dilators, then~$D^T(D^S_X)$ is well-founded for any well-order~$X$. The claim that $T\circ S$ is a coded dilator will follow once we have proved $D^T(D^S_X)\cong D^{T\circ S}_X$. To show that the given equation for $\zeta^{T,S}_X$ defines such an isomorphism we first check that
\begin{equation*}
\sigma=\langle\{\langle a_1,\sigma_1\rangle,\dots,\langle a_k,\sigma_k\rangle\},\tau\rangle\in D^T(D^S_X)
\end{equation*}
implies
\begin{equation*}
\zeta^{T,S}_X(\sigma)=\langle a_1\cup\dots\cup a_k,\langle\{S_{|\iota_1|}(\sigma_1),\dots,S_{|\iota_k|}(\sigma_k)\},\tau\rangle\,\rangle\in D^{T\circ S}_X.
\end{equation*}
Assuming that the pairs $\langle a_j,\sigma_j\rangle$ are all distinct, we see that $\sigma\in D^T(D^S_X)$ requires $\tau\in T_k$ and $\operatorname{supp}^T_k(\tau)=k$. Definition~\ref{def:coded-prae-dilator-reconstruct} also shows that $\langle a_i,\sigma_i\rangle<_{D^S_X}\langle a_j,\sigma_j\rangle$ implies $S_{|\iota_i|}(\sigma_i)<_{S_{|c|}}S_{|\iota_j|}(\sigma_j)$, where we abbreviate $c=a_1\cup\dots\cup a_k$. Thus the set $\{S_{|\iota_1|}(\sigma_1),\dots,S_{|\iota_k|}(\sigma_k)\}$ is still of cardinality~$k$, which yields
\begin{equation*}
\rho:=\langle\{S_{|\iota_1|}(\sigma_1),\dots,S_{|\iota_k|}(\sigma_k)\},\tau\rangle\in D^T(S_{|c|})=(T\circ S)_{|c|}.
\end{equation*}
To conclude $\zeta^{T,S}_X(\sigma)\in D^{T\circ S}_X$ it remains to establish $\operatorname{supp}^{T\circ S}_{|c|}(\rho)=|c|$. In view of $\sigma\in D^T(D^S_X)$ we must have $\langle a_j,\sigma_j\rangle\in D^S_X$ and hence $\operatorname{supp}^S_{|a_j|}(\sigma_j)=|a_j|$. Together with the naturality of $\operatorname{supp}^S$ we indeed get
\begin{equation*} \operatorname{supp}^{T\circ S}_{|c|}(\rho)=\bigcup_{j=1,\dots,k}\operatorname{supp}^S_{|c|}(S_{|\iota_j|}(\sigma_j))=\bigcup_{j=1,\dots,k}[|\iota_j|]^{<\omega}(\operatorname{supp}^S_{|a_j|}(\sigma_j))=|c|.
\end{equation*}
It is straightforward to check that $\zeta^{T,S}$ is natural, i.\,e.~that we have
\begin{equation*}
\zeta^{T,S}_Y\circ D^T(D^S_f)=D^{T\circ S}_f\circ\zeta^{T,S}_X
\end{equation*}
for any embedding $f:X\rightarrow Y$. Using naturality, the claim that $\zeta^{T,S}_X$ is order preserving can be reduced to the case where $X=n$ is a natural number. There it follows from the observation that $\zeta^{T,S}_n$ factors as
\begin{equation*}
D^T(D^S_n)\xrightarrow{D^T(\eta^S_n)}D^T(S_n)=(T\circ S)_n\xrightarrow{(\eta^{T\circ S}_n)^{-1}} D^{T\circ S}_n,
\end{equation*}
where $\eta^S_n$ and $\eta^{T\circ S}_n$ are the isomorphisms from Lemma~\ref{lem:class-sized-restrict}. To establish that $\zeta^{T,S}_X$ is surjective we consider an arbitrary element $\langle c,\langle\{\rho_1,\dots,\rho_k\},\tau\rangle\rangle\in D^{T\circ S}_X$. Define $a_j=[\operatorname{en}_c]^{<\omega}(\operatorname{supp}^S_{|c|}(\rho_j))$ and write $\iota_j:a_j\hookrightarrow c$ for the inclusions. Using the support condition for $S$ we get an element $\sigma_j\in S_{|a_j|}$ with $\rho_j=S_{|\iota_j|}(\sigma_j)$. In view of
\begin{equation*}
[\operatorname{en}_c\circ |\iota_j|]^{<\omega}(\operatorname{supp}^S_{|a_j|}(\sigma_j))=[\operatorname{en}_c]^{<\omega}(\operatorname{supp}^S_{|c|}(\rho_j))=a_j
\end{equation*}
we have $\langle a_j,\sigma_j\rangle\in D^S_X$. One can check that $\langle\{\langle a_1,\sigma_1\rangle,\dots,\langle a_k,\sigma_k\rangle\},\tau\rangle\in D^T(D^S_X)$ is the desired preimage under $\zeta^{T,S}_X$. The support formula given in the proposition follows by unravelling definitions.
\end{proof}
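Working in a sufficiently strong set theory again, the proposition shows that composition of coded dilators tracks composition of the induced ordinal functions: writing $\gamma=\operatorname{otp}(D^S_\alpha)$, so that $\gamma\cong D^S_\alpha$, the functoriality of $D^T$ (cf.~Proposition~\ref{prop:reconstruct-class-sized-dil}) yields $D^T_\gamma\cong D^T(D^S_\alpha)$ and hence

```latex
% Composition of coded dilators induces composition of ordinal functions:
\begin{equation*}
\operatorname{otp}(D^{T\circ S}_\alpha)
=\operatorname{otp}(D^T(D^S_\alpha))
=\operatorname{otp}\bigl(D^T_{\operatorname{otp}(D^S_\alpha)}\bigr),
\end{equation*}
```

where the first equality holds via the isomorphism $\zeta^{T,S}_\alpha$.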
We should also consider compositions in the normal case:
\begin{definition}[$\mathbf{RCA}_0$]\label{def:compose-normal-dils}
Let $T=(T,\mu^T)$ and $S=(S,\mu^S)$ be normal prae-dilators. We define a family of functions $\mu^{T\circ S}_n:n\rightarrow(T\circ S)_n=D^T(S_n)$ by setting
\begin{equation*}
\mu^{T\circ S}_n(m)=D^{\mu^T}_{S_n}\circ \mu^S_n(m)=\langle\{\mu^S_n(m)\},\mu^T_1(0)\rangle
\end{equation*}
for all numbers $m<n$.
\end{definition}
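As a quick sanity check, the support of these elements is exactly what Lemma~\ref{lem:support-mu} leads us to expect:

```latex
% Support of the composed normality witness:
\begin{equation*}
\operatorname{supp}^{T\circ S}_n(\mu^{T\circ S}_n(m))
=\operatorname{supp}^{T\circ S}_n(\langle\{\mu^S_n(m)\},\mu^T_1(0)\rangle)
=\operatorname{supp}^S_n(\mu^S_n(m))=\{m\},
\end{equation*}
```

where the middle equality uses the support from Definition~\ref{def:compose-dils} and the last one uses Lemma~\ref{lem:support-mu} for $S$.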
We verify the expected property:
\begin{lemma}[$\mathbf{RCA}_0$]\label{lem:compose-normal-dils}
If $(T,\mu^T)$ and $(S,\mu^S)$ are normal prae-dilators, then so is~$(T\circ S,\mu^{T\circ S})$. Furthermore we have
\begin{equation*}
\zeta^{T,S}_X\circ D^{\mu^T}_{D^S_X}\circ D^{\mu^S}_X=D^{\mu^{T\circ S}}_X
\end{equation*}
for any order $X$, where $\zeta^{T,S}_X$ is the isomorphism from Proposition~\ref{prop:compose-dils-rca}.
\end{lemma}
\begin{proof}
Proposition~\ref{prop:compose-dils-rca} tells us that $T\circ S$ is a prae-dilator. The fact that $\mu^{T\circ S}$ is a natural transformation is readily deduced from Proposition~\ref{prop:reconstruct-normal-dil}. To verify the equivalence from Definition~\ref{def:coded-normal-dil} we consider an arbitrary element $\rho=\langle\{\sigma_1,\dots,\sigma_k\},\tau\rangle$ of $(T\circ S)_n=D^T(S_n)$. By Proposition~\ref{prop:reconstruct-normal-dil} and the normality of $S$ we get
\begin{align*}
\rho<_{(T\circ S)_n}\mu^{T\circ S}_n(m)=D^{\mu^T}_{S_n}\circ \mu^S_n(m)&\Leftrightarrow\{\sigma_1,\dots,\sigma_k\}<^{\operatorname{fin}}_{S_n}\mu^S_n(m)\\
&\Leftrightarrow\operatorname{supp}^{T\circ S}_n(\rho)=\bigcup_{i=1,\dots,k}\operatorname{supp}^S_n(\sigma_i)<^{\operatorname{fin}} m.
\end{align*}
The equality asserted in the lemma can be verified by unravelling definitions.
\end{proof}
Let us now look at natural transformations between coded prae-dilators. To define their extensions beyond the natural numbers we will use the following result of Girard (the given proof is similar to that of~\cite[Proposition~2.3.15]{girard-pi2}):
\begin{lemma}[$\mathbf{RCA}_0$]\label{lem:dilator-cartesian}
Any natural transformation $\nu:T\Rightarrow S$ between coded prae-dilators satisfies ${\operatorname{supp}^S}\circ\nu=\operatorname{supp}^T$.
\end{lemma}
\begin{proof}
Consider a number~$n$ and an element $\sigma\in T_n$. By the support condition from Definition~\ref{def:coded-prae-dilator} we have $\sigma=T_{\iota_\sigma\circ\operatorname{en}_\sigma}(\sigma_0)$ for some $\sigma_0\in T_m$, with $m=|\operatorname{supp}^T_n(\sigma)|$. Using the naturality of $\nu$ and $\operatorname{supp}^S$ we get
\begin{multline*}
\operatorname{supp}^S_n(\nu_n(\sigma))=\operatorname{supp}^S_n(\nu_n\circ T_{\iota_\sigma\circ\operatorname{en}_\sigma}(\sigma_0))=\operatorname{supp}^S_n(S_{\iota_\sigma\circ\operatorname{en}_\sigma}\circ\nu_m(\sigma_0))=\\
=[\iota_\sigma\circ\operatorname{en}_\sigma]^{<\omega}(\operatorname{supp}^S_m(\nu_m(\sigma_0)))\subseteq\operatorname{rng}(\iota_\sigma)=\operatorname{supp}^T_n(\sigma).
\end{multline*}
Aiming at a contradiction, let us now assume that there is a $k\in\operatorname{supp}^T_n(\sigma)$ that does not lie in $\operatorname{supp}^S_n(\nu_n(\sigma))$. Consider the functions $f_1,f_2:n\rightarrow n+1$ with
\begin{equation*}
f_1(i)=\begin{cases}
i & \text{if $i\leq k$},\\
i+1 & \text{if $i>k$},
\end{cases}\qquad
f_2(i)=\begin{cases}
i & \text{if $i<k$},\\
i+1 & \text{if $i\geq k$}.
\end{cases}
\end{equation*}
Observe that we have
\begin{equation*}
k=f_1(k)\in[f_1]^{<\omega}(\operatorname{supp}^T_n(\sigma))=\operatorname{supp}^T_{n+1}(T_{f_1}(\sigma)),
\end{equation*}
as well as
\begin{equation*}
k\notin\operatorname{rng}(f_2)\supseteq[f_2]^{<\omega}(\operatorname{supp}^T_n(\sigma))=\operatorname{supp}^T_{n+1}(T_{f_2}(\sigma)).
\end{equation*}
Thus $T_{f_1}(\sigma)$ and $T_{f_2}(\sigma)$ are distinguished by their supports. Since $\nu_{n+1}$ is injective we obtain
\begin{equation*}
S_{f_1}\circ\nu_n(\sigma)=\nu_{n+1}\circ T_{f_1}(\sigma)\neq\nu_{n+1}\circ T_{f_2}(\sigma)=S_{f_2}\circ\nu_n(\sigma).
\end{equation*}
By Definition~\ref{def:coded-prae-dilator} we may write $\nu_n(\sigma)=S_{\iota_{\nu_n(\sigma)}\circ\operatorname{en}_{\nu_n(\sigma)}}(\tau_0)$. Since~$k$ is not contained in $\operatorname{rng}(\iota_{\nu_n(\sigma)})=\operatorname{supp}^S_n(\nu_n(\sigma))$ we have
\begin{equation*}
f_1\circ\iota_{\nu_n(\sigma)}\circ\operatorname{en}_{\nu_n(\sigma)}=f_2\circ \iota_{\nu_n(\sigma)}\circ\operatorname{en}_{\nu_n(\sigma)}.
\end{equation*}
Thus we get
\begin{equation*}
S_{f_1}\circ\nu_n(\sigma)=S_{f_1}\circ S_{\iota_{\nu_n(\sigma)}\circ\operatorname{en}_{\nu_n(\sigma)}}(\tau_0)=S_{f_2}\circ S_{\iota_{\nu_n(\sigma)}\circ\operatorname{en}_{\nu_n(\sigma)}}(\tau_0)=S_{f_2}\circ\nu_n(\sigma),
\end{equation*}
which contradicts the inequality established above.
\end{proof}
Let us remark that ${\operatorname{supp}^S}\circ\nu=\operatorname{supp}^T$ is equivalent to the assertion that $\nu$ is Cartesian (i.\,e.~that the naturality squares for~$\nu$ are pullbacks). Thus the latter holds for any natural transformation between prae-dilators, as pointed out by P.~Taylor~\cite{taylor98} (the first author would like to thank Thomas Streicher for this reference and for enlightening explanations). For us the lemma is important since it ensures that the uniqueness condition $\operatorname{supp}^T_{|a|}(\sigma)=|a|$ from Definition~\ref{def:coded-prae-dilator-reconstruct} is preserved under natural transformations, which justifies the definition of $D^\nu$:
\begin{definition}[$\mathbf{RCA}_0$]\label{def:morphism-dils}
Given coded prae-dilators $T$ and $S$, any natural transformation $\nu:T\Rightarrow S$ is called a morphism of coded prae-dilators. If $T=(T,\mu^T)$ and $S=(S,\mu^S)$ are normal and we have $\nu\circ\mu^T=\mu^S$, then $\nu$ is called a morphism of normal prae-dilators. To extend~$\nu$ beyond the category of natural numbers we define $D^\nu_X:D^T_X\rightarrow D^S_X$ by setting
\begin{equation*}
D^\nu_X(\langle a,\sigma\rangle)=\langle a,\nu_{|a|}(\sigma)\rangle
\end{equation*}
for each linear order~$X$.
\end{definition}
Let us verify that the extension of a morphism has the expected property:
\begin{lemma}[$\mathbf{RCA}_0$]\label{lem:morph-dilators-extend}
If $\nu:T\Rightarrow S$ is a morphism of (normal) prae-dilators, then the maps $D^\nu_X:D^T_X\rightarrow D^S_X$ form a natural transformation (and $D^\nu_X\circ D^{\mu^T}_X=D^{\mu^S}_X$ holds for every order~$X$). Furthermore we have ${\operatorname{supp}^{D^S}_X}\circ D^\nu_X=\operatorname{supp}^{D^T}_X$.
\end{lemma}
\begin{proof}
To see that each function $D^\nu_X:D^T_X\rightarrow D^S_X$ is order preserving it suffices to observe that $T_{|\iota_a^{a\cup b}|}(\sigma)<_{T_{|a\cup b|}}T_{|\iota_b^{a\cup b}|}(\tau)$ implies
\begin{equation*}
S_{|\iota_a^{a\cup b}|}(\nu_{|a|}(\sigma))=\nu_{|a\cup b|}(T_{|\iota_a^{a\cup b}|}(\sigma))<_{S_{|a\cup b|}}\nu_{|a\cup b|}(T_{|\iota_b^{a\cup b}|}(\tau))=S_{|\iota_b^{a\cup b}|}(\nu_{|b|}(\tau)),
\end{equation*}
using the naturality of $\nu$ and the fact that $\nu_{|a\cup b|}$ is order preserving. The naturality of $D^\nu$ follows from the fact that we have $|[f]^{<\omega}(a)|=|a|$ for any order preserving function $f$. If $\nu$ is a morphism of normal prae-dilators, then we get
\begin{equation*}
D^\nu_X\circ D^{\mu^T}_X(x)=D^\nu_X(\langle\{x\},\mu^T_1(0)\rangle)=\langle\{x\},\nu_1\circ\mu^T_1(0)\rangle=\langle\{x\},\mu^S_1(0)\rangle=D^{\mu^S}_X(x).
\end{equation*}
The relation between the supports is immediate in view of Definition~\ref{def:coded-prae-dilator-reconstruct}.
\end{proof}
As suggested by the last line of equations, one can show that the general condition $\nu_n\circ\mu^T_n=\mu^S_n$ follows from the special case $n=1$ (write $m<n$ as $m=\iota(0)$ with $\iota:1\rightarrow n$ and use naturality). In practice it is just as straightforward to verify the condition for arbitrary~$n$. We now have all ingredients to define upper derivatives on the level of coded prae-dilators:
\begin{definition}[$\mathbf{RCA}_0$]\label{def:upper-derivative}
Let $T$ be a normal prae-dilator. An upper derivative of~$T$ consists of a normal prae-dilator $S$ and a morphism $\xi:T\circ S\Rightarrow S$ of normal prae-dilators.
\end{definition}
With the previous definition we have completed our formalization of statement~(2) from the introduction (where $S$ stands for $(S,\xi)$). Of course we want to know that we have recovered the notion of upper derivative for normal functions. This fact can be established in a sufficiently strong set theory:
\begin{proposition}\label{prop:upper-deriv-real}
Consider normal dilators $T$ and $S$. If there is a natural transformation $\xi:T\circ S\Rightarrow S$, then the normal function $\alpha\mapsto\operatorname{otp}(D^S_\alpha)$ is an upper derivative of the normal function $\alpha\mapsto\operatorname{otp}(D^T_\alpha)$.
\end{proposition}
Before we prove the proposition, let us remark that $X\mapsto D^T_X$ automatically preserves well-foundedness if~$X\mapsto D^S_X$ does, since
\begin{equation*}
D^\xi_X\circ\zeta^{T,S}_X\circ D^T(D^{\mu^S}_X):D^T_X\rightarrow D^S_X
\end{equation*}
is an embedding ($\zeta^{T,S}$ is the natural isomorphism from Proposition~\ref{prop:compose-dils-rca}).
\begin{proof}
In view of Proposition~\ref{prop:normal-dil-fct} it suffices to establish $\operatorname{otp}(D^T_\gamma)\leq\gamma$ for any value~$\gamma=\operatorname{otp}(D^S_\alpha)$, for which we then have $\gamma\cong D^S_\alpha$. The required inequality is witnessed by the embeddings
\begin{equation*}
D^T_\gamma\cong D^T(D^S_\alpha)\xrightarrow{\mathmakebox[2em]{\zeta^{T,S}_\alpha}}D^{T\circ S}_\alpha\xrightarrow{\mathmakebox[2em]{D^\xi_\alpha}}D^S_\alpha\cong\gamma,
\end{equation*}
where the first isomorphism uses the functoriality of $D^T$ (cf.~Proposition~\ref{prop:reconstruct-class-sized-dil}).
\end{proof}
To conclude the discussion of upper derivatives we record an immediate consequence of Lemmas~\ref{lem:compose-normal-dils}~and~\ref{lem:morph-dilators-extend}. The equality in the corollary has an intuitive meaning, which will become clear in the proof of Theorem~\ref{thm:equalizer-to-derivative}.
\begin{corollary}[$\mathbf{RCA}_0$]\label{cor:deriv-normal-witness}
Assume that $(S,\xi)$ is an upper derivative of a normal prae-dilator~$T$. Then we have
\begin{equation*}
D^\xi_X\circ\zeta^{T,S}_X\circ D^{\mu^T}_{D^S_X}\circ D^{\mu^S}_X=D^{\mu^S}_X
\end{equation*}
for any order~$X$.
\end{corollary}
As a final topic of this section we consider derivatives of normal prae-dilators. On the level of normal functions the derivative is the upper derivative with the smallest possible values. In a categorical setting this is naturally expressed via the notion of initial object. To make this precise we need the following construction:
\begin{definition}[$\mathbf{RCA}_0$]\label{def:comp-morphs}
Given coded prae-dilators $T,S^1,S^2$ and a natural transformation $\nu:S^1\Rightarrow S^2$, we define a family of functions $T(\nu)_n:(T\circ S^1)_n\rightarrow (T\circ S^2)_n$ by setting
\begin{equation*}
T(\nu)_n=D^T(\nu_n)
\end{equation*}
for each number~$n$.
\end{definition}
We verify the expected properties:
\begin{lemma}[$\mathbf{RCA}_0$]
Let $T$ be a (normal) prae-dilator. If $\nu:S^1\Rightarrow S^2$ is a morphism of (normal) prae-dilators, then so is $T(\nu):T\circ S^1\Rightarrow T\circ S^2$. Furthermore we have
\begin{equation*}
D^{T(\nu)}_X\circ\zeta^{T,S^1}_X=\zeta^{T,S^2}_X\circ D^T(D^\nu_X)
\end{equation*}
for each order~$X$, where $\zeta^{T,S^i}$ are the natural isomorphisms from Proposition~\ref{prop:compose-dils-rca}.
\end{lemma}
\begin{proof}
Using Proposition~\ref{prop:reconstruct-class-sized-dil} and the naturality of $\nu$ one readily shows that $T(\nu)$ is a natural family of embeddings. If $\nu$ is a morphism of normal prae-dilators, then we invoke the naturality of $D^{\mu^T}$ (due to Proposition~\ref{prop:reconstruct-normal-dil}) to get
\begin{equation*}
T(\nu)_n\circ\mu^{T\circ S^1}_n=D^T(\nu_n)\circ D^{\mu^T}_{S^1_n}\circ\mu^{S^1}_n=D^{\mu^T}_{S^2_n}\circ\nu_n\circ\mu^{S^1}_n=D^{\mu^T}_{S^2_n}\circ\mu^{S^2}_n=\mu^{T\circ S^2}_n,
\end{equation*}
which shows that $T(\nu)$ is a morphism of normal prae-dilators. The equality asserted in the lemma can be verified by unravelling definitions.
\end{proof}
Let us introduce a last ingredient for the definition of derivatives:
\begin{definition}[$\mathbf{RCA}_0$]\label{def:morph-upper-derivs}
Consider a normal prae-dilator $T$ with upper derivatives $(S^1,\xi^1)$ and $(S^2,\xi^2)$. A morphism of upper derivatives is a morphism $\nu:S^1\Rightarrow S^2$ of normal prae-dilators that satisfies $\nu\circ\xi^1=\xi^2\circ T(\nu)$.
\end{definition}
Finally, we can characterize derivatives on the level of coded prae-dilators:
\begin{definition}[$\mathbf{RCA}_0$]\label{def:derivative}
A derivative of a normal prae-dilator $T$ is an upper derivative $(S,\xi)$ of $T$ that is initial in the following sense: For any upper derivative $(S',\xi')$ of $T$ there is a unique morphism $\nu:S\Rightarrow S'$ of upper derivatives.
\end{definition}
Due to the form of the given definition it is clear that the derivative of a normal prae-dilator is unique up to isomorphism of upper derivatives. Other properties of the derivative are harder to establish. In Sections~\ref{sect:constructin-derivative} and~\ref{sect:bi-deriv-wf} we will show that the assumptions of the following theorem hold when $(S,\xi)$ is a derivative of~$T$. This leads to an unconditional version of the result, which will be stated as Corollary~\ref{cor:deriv-dil-to-fct}.
\begin{theorem}\label{thm:equalizer-to-derivative}
Let $(S,\xi)$ be an upper derivative of a normal dilator $T$. Assume that the maps $\xi_n:(T\circ S)_n\rightarrow S_n$ are surjective (so that $\xi$ is an isomorphism), that
\begin{equation*}
\begin{tikzcd}
n\ar{r}{\mu^S_n} &[2em] S_n\arrow[r,shift left,"\operatorname{Id}_{S_n}"]\arrow[r,shift right,swap,"\xi_n\circ D^{\mu^T}_{S_n}"]&[5em] S_n
\end{tikzcd}
\end{equation*}
is an equalizer diagram for every~$n$, and that $X\mapsto D^S_X$ preserves well-foundedness. Then $\alpha\mapsto\operatorname{otp}(D^S_\alpha)$ is the derivative of the normal function~$\alpha\mapsto\operatorname{otp}(D^T_\alpha)$.
\end{theorem}
Before we prove the theorem we motivate the equalizer condition: By Definition~\ref{def:compose-normal-dils} and the fact that $\xi$ is a morphism of normal prae-dilators we get
\begin{equation*}
\xi_n\circ D^{\mu^T}_{S_n}\circ\mu^S_n=\xi_n\circ\mu^{T\circ S}_n=\mu^S_n.
\end{equation*}
So the assumption that the equalizer diagrams commute is automatic. After Definition~\ref{def:coded-normal-dil} we have explained that $\mu^T$ can be seen as an internal version of the function~$\alpha\mapsto\operatorname{otp}(D^T_\alpha)$. Intuitively speaking, the equalizer condition thus demands that any ordinal $\alpha$ with $\operatorname{otp}(D^T_\alpha)=\alpha$ lies in the range of $\alpha\mapsto\operatorname{otp}(D^S_\alpha)$.
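To connect this with the motivating example (under the assumption, suggested at the beginning of this section but not verified here, that $\operatorname{otp}(D^\omega_\alpha)=\omega^\alpha$), the equalizer condition demands

```latex
% Assumption (motivating example only): otp(D^omega_alpha) = omega^alpha.
\begin{equation*}
\omega^\alpha=\alpha\quad\Longrightarrow\quad\alpha=\operatorname{otp}(D^S_\beta)\text{ for some ordinal }\beta,
\end{equation*}
```

matching the classical fact that the derivative $\alpha\mapsto\varepsilon_\alpha$ of $\alpha\mapsto\omega^\alpha$ attains every $\varepsilon$-number.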
\begin{proof}
As a preparation we lift the assumptions of the theorem to the level of class-sized dilators: In view of Definition~\ref{def:morphism-dils} it is straightforward to show that each function $D^\xi_X:D^{T\circ S}_X\rightarrow D^S_X$ is an isomorphism. From Corollary~\ref{cor:deriv-normal-witness} we know that
\begin{equation*}
\begin{tikzcd}
X\ar{r}{D^{\mu^S}_X} &[2em] D^S_X\arrow[r,shift left,"\operatorname{Id}_{D^S_X}"]\arrow[r,shift right,swap,"D^\xi_X\circ\zeta^{T,S}_X\circ D^{\mu^T}_{D^S_X}"]&[5em] D^S_X
\end{tikzcd}
\end{equation*}
is a commutative diagram. To show that it defines an equalizer we consider an arbitrary element $\langle a,\sigma\rangle\in D^S_X$. Invoking Definitions~\ref{def:extend-normal-transfos} and~\ref{def:morphism-dils}, as well as the proof of Proposition~\ref{prop:compose-dils-rca}, we see
\begin{multline*}
D^\xi_X\circ\zeta^{T,S}_X\circ D^{\mu^T}_{D^S_X}(\langle a,\sigma\rangle)=D^\xi_X\circ\zeta^{T,S}_X(\langle\{\langle a,\sigma\rangle\},\mu^T_1(0)\rangle)=\\
=D^\xi_X(\langle a,\langle\{\sigma\},\mu^T_1(0)\rangle\rangle)=\langle a,\xi_{|a|}(\langle\{\sigma\},\mu^T_1(0)\rangle)\rangle=\langle a,\xi_{|a|}\circ D^{\mu^T}_{S_{|a|}}(\sigma)\rangle.
\end{multline*}
If this value is equal to $\langle a,\sigma\rangle$, then we have $\xi_{|a|}\circ D^{\mu^T}_{S_{|a|}}(\sigma)=\sigma$. Thus the equalizer condition from the theorem yields $\sigma=\mu^S_{|a|}(m)$ for some $m<|a|$. According to Definition~\ref{def:coded-prae-dilator-reconstruct} and Lemma~\ref{lem:support-mu} we must have
\begin{equation*}
|a|=\operatorname{supp}^S_{|a|}(\sigma)=\operatorname{supp}^S_{|a|}(\mu^S_{|a|}(m))=\{m\}.
\end{equation*}
This forces $m=0$ and $|a|=1$, say $a=\{x\}$. We can conclude
\begin{equation*}
\langle a,\sigma\rangle=\langle\{x\},\mu^S_1(0)\rangle=D^{\mu^S}_X(x)\in\operatorname{rng}(D^{\mu^S}_X),
\end{equation*}
as required to make the above an equalizer diagram. Based on these preparations we can now prove the actual claim of the theorem: Write $f$ for the normal function $\alpha\mapsto\operatorname{otp}(D^T_\alpha)$ and $f'$ for its derivative. Proposition~\ref{prop:upper-deriv-real} yields $\operatorname{otp}(D^S_\beta)\geq f'(\beta)$. We may thus define an order embedding $f'_S:\beta\rightarrow D^S_\beta$ by stipulating
\begin{equation*}
\operatorname{otp}(D^S_\beta\!\restriction\!f'_S(\alpha))=f'(\alpha).
\end{equation*}
To make use of the equalizer diagram from the beginning of the proof we need
\begin{equation*}
D^\xi_\beta\circ\zeta^{T,S}_\beta\circ D^{\mu^T}_{D^S_\beta}\circ f'_S(\alpha)=f'_S(\alpha).
\end{equation*}
Since $D^\xi_\beta\circ\zeta^{T,S}_\beta:D^T(D^S_\beta)\rightarrow D^S_\beta$ is an isomorphism this reduces to
\begin{equation*}
\operatorname{otp}(D^T(D^S_\beta)\!\restriction\!D^{\mu^T}_{D^S_\beta}\circ f'_S(\alpha))=f'(\alpha).
\end{equation*}
By the proof of Proposition~\ref{prop:normal-dil-fct} and the functoriality of $D^T$, the left side is indeed equal to
\begin{equation*}
\operatorname{otp}(D^T(D^S_\beta\!\restriction\!f'_S(\alpha)))=\operatorname{otp}(D^T_{f'(\alpha)})=f(f'(\alpha))=f'(\alpha).
\end{equation*}
Now the universal property of equalizers yields an embedding $g:\beta\rightarrow\beta$ that satisfies $D^{\mu^S}_\beta\circ g=f'_S$. Since $g$ is a strictly increasing function on the ordinals we have $\alpha\leq g(\alpha)$. Thus, again invoking the proof of Proposition~\ref{prop:normal-dil-fct}, we get
\begin{equation*}
\operatorname{otp}(D^S_\alpha)=\operatorname{otp}(D^S_\beta\!\restriction\!D^{\mu^S}_\beta(\alpha))\leq\operatorname{otp}(D^S_\beta\!\restriction\!D^{\mu^S}_\beta\circ g(\alpha))=\operatorname{otp}(D^S_\beta\!\restriction\!f'_S(\alpha))=f'(\alpha).
\end{equation*}
We have already seen $\operatorname{otp}(D^S_\alpha)\geq f'(\alpha)$. So we can conclude that $\alpha\mapsto\operatorname{otp}(D^S_\alpha)$ coincides with the derivative~$f'$ of the normal function $\alpha\mapsto\operatorname{otp}(D^T_\alpha)$, as desired.
\end{proof}
In Example~\ref{ex:equalizers-without-deriv} we will exhibit an upper derivative $(S,\xi)$ that satisfies the equalizer condition but fails to be a derivative in the sense of Definition~\ref{def:derivative}. This shows that the equalizer condition does not suffice to characterize derivatives on the categorical level. The relevance of Example~\ref{ex:equalizers-without-deriv} is somewhat diminished by the fact that $X\mapsto D^S_X$ does not preserve well-foundedness.
\section{From upper derivative to $\Pi^1_1$-bar induction}\label{sect:upper-deriv-bi}
In this section we prove that bar induction for $\Pi^1_1$-formulas follows from the principle that every normal dilator has an upper derivative that preserves well-foundedness (i.\,e.~that is again a normal dilator). To establish this result we follow the informal argument given at the beginning of the introduction.
The first major goal of the section is to reconstruct the functions~$h$ and~$f$ from the informal argument in terms of dilators (the function $h_0$ corresponds to an intermediate step that will be omitted). Since the notion of dilator is \mbox{$\Pi^1_2$-complete} (due to Girard) it makes sense that this is possible. Indeed our reconstruction of~$h$ is inspired by Norman's proof of $\Pi^1_2$-completeness (see~\cite[\mbox{Annex 8.E}]{girard-book-part2}). To get a usable result we will have to adapt his argument to the specific form of bar induction. Our reconstruction of~$f$ can be read as a proof that the more restrictive class of normal dilators is $\Pi^1_2$-complete as well.
Let us begin with some terminology: Given a set $Y$, we write $Y^{<\omega}$ for the tree of finite sequences with entries in $Y$. If $Y=(Y,<_Y)$ is a linear order, then the Kleene-Brouwer order (also known as Lusin-Sierpi\'nski order) on $Y^{<\omega}$ is defined by
\begin{equation*}
\sigma<_{\operatorname{KB}(Y)}\tau\quad\Leftrightarrow\quad\begin{cases}
\text{either $\sigma$ is a proper end extension of $\tau$},\\
\text{or we have $(\sigma)_j<_Y(\tau)_j$ for the smallest $j$ with $(\sigma)_j\neq(\tau)_j$.}
\end{cases}
\end{equation*}
In the second clause $(\sigma)_j$ refers to the $j$-th entry of $\sigma$, for $j<\operatorname{len}(\sigma)$ below the length of $\sigma$ (note that such a $j$ exists when neither sequence is an end extension of the other). If we want to emphasize that $\mathcal T$ is ordered as a subtree of $Y^{<\omega}$, then we say that it carries the Kleene-Brouwer order with respect to~$Y$. The symbol~$<_{\operatorname{KB}}$ will be reserved for the Kleene-Brouwer order with respect to~$\mathbb N$ (ordered as usual). Recall that a branch of $\mathcal T\subseteq Y^{<\omega}$ is given by a function $f:\mathbb N\rightarrow Y$ such that we have $f[n]\in\mathcal T$ for all $n\in\mathbb N$, where the sequence
\begin{equation*}
f[n]=\langle f(0),\dots,f(n-1)\rangle
\end{equation*}
lists the first $n$ values of $f$. It is well-known that $\mathbf{ACA}_0$ proves the characteristic property of the Kleene-Brouwer order: If $Y$ is a well-order, then $\mathcal T$ has no branch if, and only if, the Kleene-Brouwer order with respect to~$Y$ is well-founded on~$\mathcal T$ (to adapt the proof from~\cite[Lemma~V.1.3]{simpson09}, which treats the case $Y=\mathbb N$, one observes that~$\mathbb N$ embeds into any infinite sub\-order~$Y_0\subseteq Y$, e.\,g.~as an initial segment).
Given an order $X=(X,<_X)$, an $X$-indexed family of orders is given as a set
\begin{equation*}
Y=\{\langle x,y\rangle\,|\,x\in X\text{ and }y\in Y_x\},
\end{equation*}
where $Y_x=(Y_x,<_{Y_x})$ is an order for each $x\in X$. The dependent sum $\Sigma_{x\in X}Y_x$ (or shorter~$\Sigma Y$) is the order with underlying set $Y$ and order relation given by
\begin{equation*}
\langle x,y\rangle<_{\Sigma Y}\langle x',y'\rangle\quad\Leftrightarrow\quad\begin{cases}
\text{either $x<_X x'$,}\\
\text{or $x=x'$ and $y<_{Y_x}y'$}.
\end{cases}
\end{equation*}
For $x\in X$ we write $\Sigma_{x'<_Xx}Y_{x'}$ (or shorter $\Sigma_x Y$) for the suborder that contains all pairs $\langle x',y\rangle\in\Sigma Y$ with $x'<_Xx$. If $X$ is a well-order, then $\Sigma Y$ is well-founded if, and only if, $Y_x$ is well-founded for every $x\in X$, provably in $\mathbf{RCA}_0$ (the first components of a descending sequence in $\Sigma Y$ must become constant with some value~$x\in X$, from which point on the second components form a descending sequence in~$Y_x$). The product $X\times Y$ of two orders is explained as the special case where we have $Y_x=Y$ for all $x\in X$. Let us mention one other construction that will be needed later: Given an order $Y=(Y,<_Y)$, we write
\begin{equation*}
Y^\bot=\{\bot\}\cup Y
\end{equation*}
for the extension of $Y$ by a new minimal element (i.\,e.~we have $\bot<_{Y^\bot}y<_{Y^\bot}y'$ for any $y,y'\in Y$ with $y<_Yy'$). If $f:Y\rightarrow Z$ is an embedding, then we get an embedding $f^\bot:Y^\bot\rightarrow Z^\bot$ by setting
\begin{equation*}
f^\bot(y)=\begin{cases}
f(y) & \text{if $y\in Y\subseteq Y^\bot$,}\\
\bot & \text{if $y=\bot$.}
\end{cases}
\end{equation*}
One readily verifies that the construction is functorial (and in fact a dilator), in the sense that we have $(g\circ f)^\bot=g^\bot\circ f^\bot$ for functions $f,g$ of suitable (co-)domain.
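Both the dependent-sum order and the $\bot$-extension are simple enough to render as code. A small Python sketch, under our own representation (pairs for elements of $\Sigma Y$, \texttt{None} standing in for $\bot$):

```python
def sigma_less(p, q, lt_X, lt_Y):
    """Order on the dependent sum Sigma_{x in X} Y_x, with elements
    coded as pairs (x, y); lt_Y(x) yields the order on Y_x."""
    (x, y), (x2, y2) = p, q
    if x != x2:
        return lt_X(x, x2)
    return lt_Y(x)(y, y2)

BOT = None  # our stand-in for the new minimal element of Y^bot

def bot_lift(f):
    """Lift an embedding f: Y -> Z to f^bot: Y^bot -> Z^bot."""
    return lambda y: BOT if y is BOT else f(y)
```

Functoriality of the $\bot$-extension is then visible pointwise: \texttt{bot\_lift} applied to a composite agrees with the composite of the lifts.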
We will be particularly interested in dependent sums of the form $\Sigma\mathcal T=\Sigma_{x\in X}\mathcal T_x$, where $X$ is a well-order and each $\mathcal T_x$ is a subtree of $\mathbb N^{<\omega}$, with the usual Kleene-Brouwer order. In this situation we call $\mathcal T$ an $X$-indexed family of $\mathbb N$-trees. As in the informal argument from the introduction, the idea is that the well-foundedness of $\mathcal T_x$ corresponds to the fact that some $\Pi^1_1$-formula $\varphi$ holds at $x\in X$. Above we have seen that $\Sigma_x\mathcal T$ is well-founded if, and only if, $\mathcal T_y$ is well-founded for all~$y<_X x$. Thus it makes sense to call $\mathcal T$ progressive at $x\in X$ if we have
\begin{equation*}
\text{``$\Sigma_x\mathcal T$ is well-founded''}\rightarrow\text{``$\mathcal T_x$ is well-founded''}.
\end{equation*}
If $\mathcal T$ is progressive at every~$x\in X$, then it is called progressive along~$X$.
To conclude these introductory remarks we recall our approach to prae-dilators, as detailed in the previous section: By Definition~\ref{def:coded-prae-dilator} a (coded) prae-dilator is a particularly uniform functor from natural numbers to linear orders. According to Definition~\ref{def:coded-prae-dilator-reconstruct} and Proposition~\ref{prop:reconstruct-class-sized-dil} any coded prae-dilator $T$ can be extended into an endofunctor $D^T$ of linear orders, which one may call a class-sized prae-dilator. The connection between $T$ and $D^T$ is further illuminated by Lemma~\ref{lem:class-sized-restrict} and the discussion after Definition~\ref{def:coded-dilator}. In brief, coded prae-dilators are important because they can be represented by subsets of the natural numbers, so that we can formalize general statements about (i.\,e.~quantify over) these objects in the language of second order arithmetic. To understand the behaviour of a coded prae-dilator~$T$, however, we must usually consider the full class-sized prae-dilator $D^T$. In particular Definition~\ref{def:coded-dilator} tells us that $T$ is a coded dilator (rather than just a prae-dilator) if $D^T_X$ is well-founded for every well-order~$X$. Unfortunately the intuitive meaning of the orders $D^T_X$ constructed according to Definition~\ref{def:coded-prae-dilator-reconstruct} is sometimes hard to grasp. When this is the case it can help to give a more transparent description of an order that is isomorphic to $D^T_X$, as in Lemma~\ref{lem:H-0-alternative} below.
Let us now describe our reconstruction of the function $h$: The ordinal~$\alpha$ and the induction formula $\varphi$ that appear in the informal argument from the introduction correspond to a well-order~$X$ and an $X$-indexed family $\mathcal T$ of $\mathbb N$-trees. In order to represent the function~$\delta\mapsto h(\gamma,\delta)$ with $\gamma<\alpha$ we will construct a prae-dilator~$H[x]$ such that $\mathcal T$ is progressive at $x\in X$ if, and only if, $H[x]$ is a dilator. Considering the contra\-positive of the implication from left to right, we see that we should ensure the following: If $D^{H[x]}_Z$ is ill-founded for some well-order~$Z$, then $\Sigma_x\mathcal T$ must be well-founded while $\mathcal T_x$ is not. Inspired by Normann's proof of \mbox{$\Pi^1_2$-completeness}, the idea is to define $D^{H[x]}_Z$ as (an order isomorphic~to) a tree: Along each potential branch one searches for an embedding of $\Sigma_x\mathcal T$ into~$Z$ and, simultaneously, for a descending sequence in~$\mathcal T_x$. This leads to the following construction:
\begin{definition}[$\mathbf{RCA}_0$]\label{def:H-0}
Consider a well-order~$X$, an $X$-indexed family $\mathcal T$ of \mbox{$\mathbb N$-trees} and an~$x\in X$. For each $n\in\mathbb N$ we define $H[x]_n=H[X,\mathcal T,x]_n$ as the tree of all sequences $\langle\langle n_0,s_0\rangle,\dots,\langle n_{k-1},s_{k-1}\rangle\rangle\in(n^\bot\times\mathbb N)^{<\omega}$ that satisfy the following:
\begin{enumerate}[label=(\roman*)]
\item Whenever $j_1,j_2<k$ code elements $j_i=\langle y_i,\sigma_i\rangle\in\Sigma_x\mathcal T$, we have $n_{j_i}\in n$ (i.\,e.~$n_{j_i}\neq\bot$) and
\begin{equation*}
\langle y_1,\sigma_1\rangle<_{\Sigma\mathcal T}\langle y_2,\sigma_2\rangle\quad\Rightarrow\quad n_{j_1}< n_{j_2}.
\end{equation*}
\item We have $\langle s_0,\dots,s_{k-1}\rangle\in\mathcal T_x$.
\end{enumerate}
The order $<_{H[x]_n}$ on $H[x]_n$ is the Kleene-Brouwer order with respect to~$n^\bot\times\mathbb N$. For a strictly increasing function $f:n\rightarrow m$ we define $H[x]_f:H[x]_n\rightarrow H[x]_m$ by
\begin{equation*}
H[x]_f(\langle\langle n_0,s_0\rangle,\dots,\langle n_{k-1},s_{k-1}\rangle\rangle)=\langle\langle f^\bot(n_0),s_0\rangle,\dots,\langle f^\bot(n_{k-1}),s_{k-1}\rangle\rangle.
\end{equation*}
To define a family of functions $\operatorname{supp}^{H[x]}_n:H[x]_n\rightarrow[n]^{<\omega}$ we set
\begin{equation*}
\operatorname{supp}^{H[x]}_n(\langle\langle n_0,s_0\rangle,\dots,\langle n_{k-1},s_{k-1}\rangle\rangle)=\{n_j\,|\,j<k\text{ and }n_j\neq\bot\}
\end{equation*}
for each number $n$.
\end{definition}
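To make the clauses of the preceding definition concrete, the following Python sketch decides membership in $H[x]_n$ from finitely much data. The coding of $\Sigma_x\mathcal T$ by natural numbers is passed in explicitly as a dictionary; all names and representations are our own illustration, not part of the formalization:

```python
BOT = None  # stand-in for the element bot of n^bot

def in_H(seq, n, coded, lt_sigma, in_Tx):
    """Membership test for H[x]_n.  seq is a list of pairs (n_j, s_j);
    coded maps those j < len(seq) that code an element of Sigma_x T
    to that element; lt_sigma compares elements of Sigma T; in_Tx
    tests membership in the tree T_x."""
    # clause (ii): the second components must form a sequence in T_x
    if not in_Tx([s for (_, s) in seq]):
        return False
    # clause (i): on coded positions, j |-> n_j avoids bot and is
    # order preserving from Sigma_x T into n
    for j1, e1 in coded.items():
        for j2, e2 in coded.items():
            n1, n2 = seq[j1][0], seq[j2][0]
            if n1 is BOT or n1 >= n or n2 is BOT or n2 >= n:
                return False
            if lt_sigma(e1, e2) and not n1 < n2:
                return False
    return True
```

Positions outside of $\Sigma_x\mathcal T$ are unconstrained in clause~(i), which the code reflects by quantifying only over the keys of \texttt{coded}.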
The conditions from Definition~\ref{def:coded-prae-dilator} are straightforward to verify:
\begin{lemma}[$\mathbf{RCA}_0$]\label{lem:H-0-prae-dil}
The orders and functions that we have constructed in the previous definition form a (coded) prae-dilator $H[x]=H[X,\mathcal T,x]$.
\end{lemma}
To understand the behaviour of the prae-dilator $H[x]$ we need to look at the orders $D^{H[x]}_Z$ from Definition~\ref{def:coded-prae-dilator-reconstruct}. Unfortunately the intuitive meaning of these orders is not particularly transparent. On a more technical level we can observe that $D^{H[x]}_Z$ is not given as the Kleene-Brouwer order on a tree (note that its elements are pairs). For these reasons it is useful to have an alternative description:
\begin{definition}[$\mathbf{RCA}_0$]\label{def:H-0-alternative}
Given an order $Z$, we define $H[x]_Z$ as the tree of all sequences
\begin{equation*}
\langle\langle z_0,s_0\rangle,\dots,\langle z_{k-1},s_{k-1}\rangle\rangle\in(Z^\bot\times\mathbb N)^{<\omega}
\end{equation*}
that satisfy the following:
\begin{enumerate}[label=(\roman*)]
\item Whenever $j_1,j_2<k$ code elements $j_i=\langle y_i,\sigma_i\rangle\in\Sigma_x\mathcal T$, we have $z_{j_i}\in Z$ (i.\,e.~$z_{j_i}\neq\bot$) and
\begin{equation*}
\langle y_1,\sigma_1\rangle<_{\Sigma\mathcal T}\langle y_2,\sigma_2\rangle\quad\Rightarrow\quad z_{j_1}<_Z z_{j_2}.
\end{equation*}
\item We have $\langle s_0,\dots,s_{k-1}\rangle\in\mathcal T_x$.
\end{enumerate}
The order on $H[x]_Z$ is the Kleene-Brouwer order with respect to $Z^\bot\times\mathbb N$.
\end{definition}
The reader may have observed that Definitions~\ref{def:H-0} and~\ref{def:H-0-alternative} are extremely similar. We have decided to state them separately since there is a conceptual difference: The finite order $n=\{0,\dots,n-1\}$ in Definition~\ref{def:H-0} is coded by a natural number, while the (finite or infinite) order $Z$ in Definition~\ref{def:H-0-alternative} is given as a subset of~$\mathbb N$. In particular the expression $H[x]_n$ is ambiguous, but its meaning is always clear from the context. Let us establish the expected relation:
\begin{lemma}[$\mathbf{RCA}_0$]\label{lem:H-0-alternative}
For each order~$Z$ we get an isomorphism $D^{H[x]}_Z\cong H[x]_Z$ by stipulating
\begin{multline*}
\langle a,\langle\langle n_0,s_0\rangle,\dots,\langle n_{k-1},s_{k-1}\rangle\rangle\,\rangle\mapsto\\
\langle\langle(\iota_a\circ\operatorname{en}_a)^\bot(n_0),s_0\rangle,\dots,\langle(\iota_a\circ\operatorname{en}_a)^\bot(n_{k-1}),s_{k-1}\rangle\rangle
\end{multline*}
where $\iota_a\circ\operatorname{en}_a:|a|\rightarrow Z$ is the unique embedding with range $a\in[Z]^{<\omega}$.
\end{lemma}
\begin{proof}
To verify the claim one follows the proof of Lemma~\ref{lem:class-sized-restrict}.
\end{proof}
We now show the most important property of the prae-dilator $H[x]$: If $\Sigma_x\mathcal T$ embeds into~$Z$, then $\mathcal T_x$ embeds into $D^{H[x]}_Z$. Crucially, a value of the second embedding can already be computed from a finite approximation to the first. In order to make this precise we consider the finite orders
\begin{equation*}
\Sigma_x\mathcal T\cap k=\{j<k\,|\,\text{$j$ is (the code of) an element of $\Sigma_x\mathcal T$}\},
\end{equation*}
with the same order relation as on $\Sigma_x\mathcal T$. Let us also recall that $\mathcal T_x$ carries the Kleene-Brouwer order $<_{\operatorname{KB}}$ with respect to~$\mathbb N$.
\begin{theorem}[$\mathbf{RCA}_0$]\label{thm:emd-finite}
Consider a well-order~$X$ and an $X$-indexed family $\mathcal T$ of $\mathbb N$-trees. There is a function $E:\Sigma\mathcal T\rightarrow\mathbb N$ such that the following holds for any element $x\in X$, any $\sigma,\sigma_1,\sigma_2\in\mathcal T_x$ and any order~$Z$:
\begin{enumerate}[label=(\roman*)]
\item Given a (finite) embedding $e:\Sigma_x\mathcal T\cap\operatorname{len}(\sigma)\rightarrow Z$, we have
\begin{equation*}
\langle\operatorname{rng}(e),E(\langle x,\sigma\rangle)\rangle\in D^{H[x]}_Z.
\end{equation*}
\item If $e_1:\Sigma_x\mathcal T\cap\operatorname{len}(\sigma_1)\rightarrow Z$ and $e_2:\Sigma_x\mathcal T\cap\operatorname{len}(\sigma_2)\rightarrow Z$ coincide on the intersection of their domains, then we have
\begin{equation*}
\sigma_1<_{\operatorname{KB}}\sigma_2\quad\Rightarrow\quad \langle\operatorname{rng}(e_1),E(\langle x,\sigma_1\rangle)\rangle<_{D^{H[x]}_Z}\langle\operatorname{rng}(e_2),E(\langle x,\sigma_2\rangle)\rangle.
\end{equation*}
\end{enumerate}
\end{theorem}
\begin{proof}
We begin by defining $E(\langle x,\sigma\rangle)$ for given $x\in X$ and $\sigma=\langle s_0,\dots,s_{k-1}\rangle\in\mathcal T_x$. For $j\in\Sigma_x\mathcal T\cap k$ we define $n_j<|\Sigma_x\mathcal T\cap k|$ by stipulating that $j$ is the $n_j$-th element of $\Sigma_x\mathcal T\cap k$. For all $j<k$ outside of $\Sigma_x\mathcal T$ we set $n_j=\bot$. Now we put
\begin{equation*}
E(\langle x,\sigma\rangle)=\langle\langle n_0,s_0\rangle,\dots,\langle n_{k-1},s_{k-1}\rangle\rangle.
\end{equation*}
To establish~(i) we first need $E(\langle x,\sigma\rangle)\in H[x]_{|\operatorname{rng}(e)|}$. Since $e$ is injective its range has the same cardinality as $\Sigma_x\mathcal T\cap k$ (continuing the notation from above, so that~\mbox{$k=\operatorname{len}(\sigma)$}). Thus we do have $n_j\in|\operatorname{rng}(e)|^\bot$ for all~$j<k$. Clause~(i) of Definition~\ref{def:H-0} is satisfied by construction. Clause~(ii) amounts precisely to the assumption $\sigma\in\mathcal T_x$. To complete the verification of~(i) we need
\begin{equation*}
\operatorname{supp}^{H[x]}_{|\operatorname{rng}(e)|}(E(\langle x,\sigma\rangle))=|\operatorname{rng}(e)|.
\end{equation*}
For the crucial inclusion $\supseteq$ it suffices to observe that any $n\in|\operatorname{rng}(e)|=|\Sigma_x\mathcal T\cap k|$ is the position of some $j\in\Sigma_x\mathcal T\cap k$. To prove property~(ii) we compose with the order isomorphism from Lemma~\ref{lem:H-0-alternative}. If $j$ is the $n_j$-th element of~$\Sigma_x\mathcal T\cap k$, then $e(j)$ is the $n_j$-th element of $\operatorname{rng}(e)$. Thus (still with the same notation as above) we see that $\langle\operatorname{rng}(e),E(\langle x,\sigma\rangle)\rangle$ corresponds to
\begin{equation*}
\langle\langle e_\bot(0),s_0\rangle,\dots,\langle e_\bot(k-1),s_{k-1}\rangle\rangle\in H[x]_Z,
\end{equation*}
where $e_\bot:k\rightarrow Z^\bot$ extends $e$ by the values $e_\bot(j)=\bot$ for $j\notin\Sigma_x\mathcal T$. With this description it is straightforward to check property~(ii): Assume that we have
\begin{equation*}
\sigma_1=\langle s_0,\dots,s_{k-1}\rangle<_{\operatorname{KB}}\langle s'_0,\dots,s'_{l-1}\rangle=\sigma_2
\end{equation*}
and that $e_1:\Sigma_x\mathcal T\cap k\rightarrow Z$ and $e_2:\Sigma_x\mathcal T\cap l\rightarrow Z$ coincide below $\min\{k,l\}$. Since $H[x]_Z$ carries the Kleene-Brouwer order with respect to $Z^\bot\times\mathbb N$ we get
\begin{multline*}
\langle\langle (e_1)_\bot(0),s_0\rangle,\dots,\langle (e_1)_\bot(k-1),s_{k-1}\rangle\rangle<_{H[x]_Z}\\
\langle\langle (e_2)_\bot(0),s'_0\rangle,\dots,\langle (e_2)_\bot(l-1),s'_{l-1}\rangle\rangle.
\end{multline*}
Up to the isomorphism $H[x]_Z\cong D^{H[x]}_Z$ this is just as required.
\end{proof}
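The combinatorial core of the function $E$ from the preceding proof (the passage from $\sigma$ to the positions $n_j$) can be sketched in Python; the finite order $\Sigma_x\mathcal T\cap k$ is passed as a list in increasing order, and \texttt{None} again stands in for $\bot$ (a hypothetical illustration under our own encoding):

```python
def E(sigma, coded_below):
    """Compute E(<x, sigma>) following the proof: n_j is the rank of
    j in the finite order Sigma_x T cap len(sigma).  coded_below(k)
    returns the codes j < k of elements of Sigma_x T, listed in
    increasing Sigma T-order."""
    order = coded_below(len(sigma))
    rank = {j: i for i, j in enumerate(order)}
    # rank.get(j) is None -- our stand-in for bot -- off Sigma_x T
    return [(rank.get(j), s) for j, s in enumerate(sigma)]
```

Note that the value depends only on $\sigma$ and on $\Sigma_x\mathcal T\cap\operatorname{len}(\sigma)$, which is the finitary character exploited in Theorem~\ref{thm:emd-finite}.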
We can now deduce the connection with the premise of $\Pi^1_1$-bar induction. As mentioned before, this part of our argument is similar to Normann's proof that the notion of dilator is \mbox{$\Pi^1_2$-complete} (see~\cite[Annex~8.E]{girard-book-part2}). We choose the base theory $\mathbf{ACA}_0$ since we will apply the characteristic property of the Kleene-Brouwer order:
\begin{corollary}[$\mathbf{ACA}_0$]\label{prop:H-0-captures-prog}
An $X$-indexed family $\mathcal T$ of $\mathbb N$-trees is progressive at $x\in X$ if, and only if, $H[x]$ is a dilator.
\end{corollary}
\begin{proof}
By Lemma~\ref{lem:H-0-prae-dil} we already know that $H[x]$ is a prae-dilator. Thus we need to show that the implication
\begin{equation*}
\text{``$\Sigma_x\mathcal T$ is well-founded''}\rightarrow\text{``$\mathcal T_x$ is well-founded''}
\end{equation*}
is equivalent to the assertion that $D^{H[x]}_Z$ is well-founded for every well-order~$Z$. Aiming at the contrapositive of the first direction, assume that $Z$ is a well-order such that $D^{H[x]}_Z$ is ill-founded. By Lemma~\ref{lem:H-0-alternative} and the characteristic property of the Kleene-Brouwer order we get a branch in the tree $H[x]_Z\subseteq(Z^\bot\times\mathbb N)^{<\omega}$. In view of Definition~\ref{def:H-0-alternative} the first components of this branch form an embedding of $\Sigma_x\mathcal T$ into the well-order $Z$, witnessing the premise of the above implication. The second components of our branch form a branch in the tree $\mathcal T_x$, so that the latter is ill-founded. Thus the implication fails and the first direction is established. For the other direction we assume that $H[x]$ is a dilator. So if the premise of the above implication holds, then $D^{H[x]}_{\Sigma_x\mathcal T}$ is a well-order. To conclude we show that $\mathcal T_x$ embeds into the latter. This is straightforward if we take $Z=\Sigma_x\mathcal T$ in Theorem~\ref{thm:emd-finite}: Considering the inclusions $\Sigma_x\mathcal T\cap\operatorname{len}(\sigma)\hookrightarrow\Sigma_x\mathcal T$ we see that
\begin{equation*}
\mathcal T_x\ni\sigma\mapsto\langle\Sigma_x\mathcal T\cap\operatorname{len}(\sigma),E(\langle x,\sigma\rangle)\rangle\in D^{H[x]}_{\Sigma_x\mathcal T}
\end{equation*}
is the desired embedding.
\end{proof}
Our next goal is to reconstruct the normal function $f$ from the informal argument in the introduction. To streamline the presentation we will not give a detailed reconstruction of the intermediate function~$h_0$. Nevertheless it helps to observe that $h_0$ could be represented by a prae-dilator $H^0$ with $H^0_n=\Sigma_{x\in X}H[x]_n$. This set is ordered by the usual order on a dependent sum, which places $\langle x,\tau\rangle$ before $\langle x',\tau'\rangle$ if we have either $x<_X x'$ or $x=x'$ and $\tau<_{H[x]_n}\tau'$. In the following it will be very convenient to have a more uniform presentation of the order $(\Sigma_{x\in X}H[x]_n)^\bot$, which extends the aforementioned order by a new minimal element. For this purpose we extend the $X$-indexed family of prae-dilators $H[x]$ to a family indexed by $X^\bot$: Define $H[\bot]=H[X,\mathcal T,\bot]$ as the constant prae-dilator with values
\begin{equation*}
H[\bot]_n=\{\star\},
\end{equation*}
where $\star$ is some new symbol. Its action on morphisms and the support functions are given by $H[\bot]_f(\star)=\star$ and $\operatorname{supp}^{H[\bot]}_n(\star)=\emptyset$. Then we have
\begin{equation*}
(\Sigma_{x\in X} H[x]_n)^\bot\cong\Sigma_{x\in X^\bot}H[x]_n,
\end{equation*}
where the isomorphism sends $\bot$ to $\langle\bot,\star\rangle$ and leaves $\langle x,\tau\rangle$ with $x\in X$ unchanged. The point is that all elements of the right side are pairs, which will save us many case distinctions. Invoking Definition~\ref{def:coded-prae-dilator-reconstruct} we see that $D^{H[\bot]}_Z$ consists of the single element $\langle\emptyset,\star\rangle$. Thus $H[\bot]$ is a dilator and we have
\begin{equation*}
\left(\Sigma_{x\in X}D^{H[x]}_Z\right)^\bot\cong\Sigma_{x\in X^\bot}D^{H[x]}_Z.
\end{equation*}
Let us now define the prae-dilator $F$ that reconstructs the function~$f$ from the informal argument. The crucial point is that~$F$ is normal, as we shall see below.
\begin{definition}[$\mathbf{RCA}_0$]\label{def:dil-F}
Consider a well-order~$X$ and an $X$-indexed family $\mathcal T$ of $\mathbb N$-trees. For each number $n$ we define
\begin{equation*}
F_n=F[X,\mathcal T]_n=\Sigma_{N\in n}\Sigma_{x\in X^\bot}H[X,\mathcal T,x]_N.
\end{equation*}
Omitting one pair of parentheses, we write the elements of $F_n$ in the form $\langle N,x,\tau\rangle$ with $N\in n=\{0,\dots,n-1\}$, $x\in X^\bot$ and $\tau\in H[x]_N$. The order on $F_n$ is the usual order on a dependent sum, which coincides with the lexicographic order on the triples $\langle N,x,\tau\rangle$. Given an embedding~$f:n\rightarrow m$, we write $f\!\restriction\!N:N\rightarrow f(N)$ for the restriction of $f$. Then we define $F_f:F_n\rightarrow F_m$ by
\begin{equation*}
F_f(\langle N,x,\tau\rangle)=\langle f(N),x,H[x]_{f\restriction N}(\tau)\rangle.
\end{equation*}
The functions $\operatorname{supp}^F_n:F_n\rightarrow[n]^{<\omega}$ are given as
\begin{equation*}
\operatorname{supp}^F_n(\langle N,x,\tau\rangle)=\{N\}\cup\operatorname{supp}^{H[x]}_N(\tau).
\end{equation*}
Finally we define a family of functions $\mu^F_n:n\rightarrow F_n$ by setting
\begin{equation*}
\mu^F_n(N)=\langle N,\bot,\star\rangle
\end{equation*}
for all numbers~$N<n$.
\end{definition}
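The action of $F$ on morphisms, its support function and the family $\mu^F$ from the preceding definition can be sketched in Python, representing an element of $H[x]_N$ as its sequence of pairs with \texttt{None} standing in for $\bot$ (restricted to triples with $x\in X$; a hypothetical rendering, not part of the formal development):

```python
BOT = None  # stand-in for bot

def H_action(g, tau):
    """Action of H[x] on an embedding g: N -> M (g given as the list
    of its values): apply g^bot to the first components."""
    return [(BOT if nj is BOT else g[nj], s) for nj, s in tau]

def F_action(f, triple):
    """F_f(<N, x, tau>) = <f(N), x, H[x]_{f|N}(tau)>; the
    restriction f|N is the initial slice f[:N]."""
    N, x, tau = triple
    return (f[N], x, H_action(f[:N], tau))

def supp_F(triple):
    N, x, tau = triple
    return {N} | {nj for nj, _ in tau if nj is not BOT}

def mu_F(N):
    return (N, BOT, 'star')   # the triple <N, bot, star>
```

The last clause mirrors $\mu^F_n(N)=\langle N,\bot,\star\rangle$; since $\langle\bot,\star\rangle$ is minimal in $\Sigma_{x\in X^\bot}H[x]_N$, comparison with \texttt{mu\_F(N)} is decided by the first component alone.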
Let us verify the following:
\begin{proposition}[$\mathbf{RCA}_0$]\label{prop:F-normal}
The orders and functions from the previous definition form a normal prae-dilator $F=F[X,\mathcal T]$.
\end{proposition}
\begin{proof}
Using Lemma~\ref{lem:H-0-prae-dil} it is straightforward to show that $F$ is a functor and that $\operatorname{supp}^F$ is a natural transformation. To conclude that $F$ is a prae-dilator we must verify the support condition from clause~(ii) of Definition~\ref{def:coded-prae-dilator}. To do so we consider an arbitrary $\sigma=\langle N,x,\tau\rangle\in F_n$. We abbreviate $a:=\operatorname{supp}^{H[x]}_N(\tau)\subseteq\{0,\dots,N-1\}$ and write $\iota_\sigma\circ\operatorname{en}_\sigma:|a|+1\rightarrow n$ for the embedding with range $\operatorname{supp}^F_n(\sigma)=a\cup\{N\}$. Since the restriction $(\iota_\sigma\circ\operatorname{en}_\sigma)\!\restriction\!|a|:|a|\rightarrow N$ has range~$a$, the support condition for $H[x]$ yields a $\tau_0\in H[x]_{|a|}$ with $\tau=H[x]_{(\iota_\sigma\circ\operatorname{en}_\sigma)\restriction|a|}(\tau_0)$. By construction we have $\langle |a|,x,\tau_0\rangle\in F_{|a\cup\{N\}|}$ and $\sigma=F_{\iota_\sigma\circ\operatorname{en}_\sigma}(\langle |a|,x,\tau_0\rangle)$, which completes the proof of the support condition for $F$. One readily verifies that $\mu^F$ is a natural family of embeddings (for naturality we recall $H[\bot]_{f\restriction N}(\star)=\star$). In view of Definition~\ref{def:coded-normal-dil} it remains to establish
\begin{equation*}
\sigma<_{F_n}\mu^F_n(N)\quad\Leftrightarrow\quad\operatorname{supp}^F_n(\sigma)<^{\operatorname{fin}} N
\end{equation*}
for arbitrary $\sigma\in F_n$ and $N<n$. For the first direction we recall that $\langle\bot,\star\rangle$ is the smallest element of $\Sigma_{x\in X^\bot}H[x]_N$. Thus any $\sigma<_{F_n}\mu^F_n(N)=\langle N,\bot,\star\rangle$ must have the form $\sigma=\langle N',x,\tau\rangle$ with $N'<N$. In view of $\operatorname{supp}^{H[x]}_{N'}(\tau)\in[N']^{<\omega}$ we get
\begin{equation*}
\operatorname{supp}^F_n(\sigma)=\{N'\}\cup\operatorname{supp}^{H[x]}_{N'}(\tau)<^{\operatorname{fin}} N.
\end{equation*}
For the converse we also write $\sigma=\langle N',x,\tau\rangle$. In view of $N'\in\operatorname{supp}^F_n(\sigma)$ the right side of the desired equivalence implies $N'<N$ and thus $\sigma<_{F_n}\mu^F_n(N)$.
\end{proof}
Lemma~\ref{lem:H-0-alternative} provides a transparent description of the orders $D^{H[x]}_Z$ (recall that the latter consist of pairs $\langle a,\tau\rangle$ with $a\in[Z]^{<\omega}$ and $\tau\in H[x]_{|a|}$, see Definition~\ref{def:coded-prae-dilator-reconstruct}). We now describe $D^F_Z$ relative to these orders:
\begin{definition}[$\mathbf{RCA}_0$]\label{def:F-alternative}
Given an order $Z$, we put
\begin{equation*}
F_Z=\{\langle z,x,\langle a,\tau\rangle\rangle\in Z\times\Sigma_{x\in X^\bot}D^{H[x]}_Z\,|\,a<^{\operatorname{fin}}_Z z\}.
\end{equation*}
The order on $F_Z$ is the indicated product order, which coincides with the lexicographic order on the triples $\langle z,x,\langle a,\tau\rangle\rangle$ (we again omit a pair of angle parentheses).
\end{definition}
As in the case of Definition~\ref{def:H-0-alternative}, the expression $F_n$ with $n\in\mathbb N$ is now ambiguous, but its meaning will always be clear from the context. The following proof consists in a technical verification, which the reader may wish to skip at first reading.
\begin{lemma}[$\mathbf{RCA}_0$]\label{lem:F-alternative}
For each order~$Z$ the clause
\begin{equation*}
\chi^F_Z(\langle z,x,\langle a,\tau\rangle\rangle)=\langle a\cup\{z\},\langle |a|,x,\tau\rangle\rangle
\end{equation*}
defines an isomorphism $\chi^F_Z:F_Z\rightarrow D^F_Z$.
\end{lemma}
\begin{proof}
To see that $\chi^F_Z$ has values in $D^F_Z$ we consider an arbitrary $\langle z,x,\langle a,\tau\rangle\rangle\in F_Z$. In view of Definition~\ref{def:coded-prae-dilator-reconstruct} we have $\tau\in H[x]_{|a|}$ and $\operatorname{supp}^{H[x]}_{|a|}(\tau)=|a|$. This yields
\begin{equation*}
\langle |a|,x,\tau\rangle\in F_{|a|+1}\quad\text{and}\quad\operatorname{supp}^F_{|a|+1}(\langle |a|,x,\tau\rangle)=\{|a|\}\cup\operatorname{supp}^{H[x]}_{|a|}(\tau)=|a|+1.
\end{equation*}
The condition $a<^{\operatorname{fin}}_Z z$ ensures $|a\cup\{z\}|=|a|+1$. Thus we indeed have
\begin{equation*}
\chi^F_Z(\langle z,x,\langle a,\tau\rangle\rangle)=\langle a\cup\{z\},\langle |a|,x,\tau\rangle\rangle\in D^F_Z.
\end{equation*}
To prove that $\chi^F_Z$ is order preserving we consider an inequality
\begin{equation*}
\langle z_0,x_0,\langle a_0,\tau_0\rangle\rangle<_{F_Z}\langle z_1,x_1,\langle a_1,\tau_1\rangle\rangle.
\end{equation*}
We write $\iota_j:a_j\cup\{z_j\}\hookrightarrow a_0\cup a_1\cup\{z_0,z_1\}=:c$ with $j\in\{0,1\}$ for the inclusions. Furthermore, let $\operatorname{en}_c:|c|\rightarrow c$ and $\operatorname{en}_j:|a_j\cup\{z_j\}|\rightarrow a_j\cup\{z_j\}$ denote the increasing enumerations. As in the previous section, the function $|\iota_j|:|a_j\cup\{z_j\}|\rightarrow|c|$ is determined by the property that it is order preserving and makes the following diagram commute:
\begin{equation*}
\begin{tikzcd}
{|a_j\cup\{z_j\}|} \ar{r}{|\iota_j|}\ar{d}{\operatorname{en}_j} & {|c|} \ar{d}{\operatorname{en}_c}\\
a_j\cup\{z_j\} \ar{r}{\iota_j} & c.
\end{tikzcd}
\end{equation*}
According to Definition~\ref{def:coded-prae-dilator-reconstruct} the desired inequality between the values of $\chi^F_Z$ is equivalent to
\begin{equation*}
F_{|\iota_0|}(\langle|a_0|,x_0,\tau_0\rangle)<_{F_{|c|}}F_{|\iota_1|}(\langle|a_1|,x_1,\tau_1\rangle).
\end{equation*}
By the definition of $F$ the latter amounts to
\begin{equation*}
\langle|\iota_0|(|a_0|),x_0,H[x_0]_{|\iota_0|\restriction|a_0|}(\tau_0)\rangle<_{F_{|c|}}\langle|\iota_1|(|a_1|),x_1,H[x_1]_{|\iota_1|\restriction|a_1|}(\tau_1)\rangle.
\end{equation*}
To establish this inequality we first assume that the given inequality between the arguments of $\chi^F_Z$ holds because of $z_0<_Zz_1$. In view of the condition $a_j<^{\operatorname{fin}}_Z z_j$ we have $\operatorname{en}_j(|a_j|)=z_j$ and thus
\begin{equation*}
\operatorname{en}_c\circ|\iota_0|(|a_0|)=\iota_0\circ\operatorname{en}_0(|a_0|)=z_0<_Zz_1=\iota_1\circ\operatorname{en}_1(|a_1|)=\operatorname{en}_c\circ|\iota_1|(|a_1|).
\end{equation*}
This implies $|\iota_0|(|a_0|)<|\iota_1|(|a_1|)$, so that the required inequality holds. A similar argument shows that $z_0=z_1$ implies $|\iota_0|(|a_0|)=|\iota_1|(|a_1|)$. The case where we have $z_0=z_1$ and $x_0<_Xx_1$ is now immediate. Finally assume $z_0=z_1$, $x_0=x_1=:x$ and
\begin{equation*}
\langle a_0,\tau_0\rangle<_{D^{H[x]}_Z}\langle a_1,\tau_1\rangle.
\end{equation*}
It is straightforward to check that the restriction $|\iota_j|\!\restriction\!|a_j|$ makes the following diagram commute, where the vertical arrows are the increasing enumerations:
\begin{equation*}
\begin{tikzcd}
{|a_j|} \ar{r}{|\iota_j|\restriction|a_j|}\ar{d} &[2em] {|a_0\cup a_1|} \ar{d}\\
a_j \arrow[r,hook] & a_0\cup a_1.
\end{tikzcd}
\end{equation*}
In view of Definition~\ref{def:coded-prae-dilator-reconstruct} we can conclude
\begin{equation*}
H[x]_{|\iota_0|\restriction|a_0|}(\tau_0)<_{H[x]_{|a_0\cup a_1|}}H[x]_{|\iota_1|\restriction|a_1|}(\tau_1),
\end{equation*}
which implies the required inequality. To show that $\chi^F_Z$ is surjective we consider an arbitrary $\langle b,\langle N,x,\tau\rangle\rangle\in D^F_Z$. According to Definition~\ref{def:coded-prae-dilator-reconstruct} we must have
\begin{equation*}
|b|=\operatorname{supp}^F_{|b|}(\langle N,x,\tau\rangle)=\{N\}\cup\operatorname{supp}^{H[x]}_N(\tau),
\end{equation*}
which forces $|b|=N+1$ and $\operatorname{supp}^{H[x]}_N(\tau)=N$. In particular $b$ is non-empty, so that we can write $b=a\cup\{z\}$ with $a<^{\operatorname{fin}}_Z z$. In view of $|a|=N$ we get $\langle a,\tau\rangle\in D^{H[x]}_Z$ and then $\langle z,x,\langle a,\tau\rangle\rangle\in F_Z$, as well as $\chi^F_Z(\langle z,x,\langle a,\tau\rangle\rangle)=\langle b,\langle N,x,\tau\rangle\rangle$.
\end{proof}
By combining previous results we obtain the following:
\begin{corollary}[$\mathbf{ACA}_0$]\label{cor:F-dil}
Consider a well-order~$X$. An $X$-indexed family $\mathcal T$ of $\mathbb N$-trees is progressive along $X$ if, and only if, $F[X,\mathcal T]$ is a normal dilator.
\end{corollary}
\begin{proof}
In view of Corollary~\ref{prop:H-0-captures-prog}, Proposition~\ref{prop:F-normal} and the previous lemma it suffices to show that $Z\mapsto F_Z$ preserves well-foundedness if, and only if, $Z\mapsto D^{H[x]}_Z$ preserves well-foundedness for all $x\in X$. If the latter holds, then
\begin{equation*}
Z\times\Sigma_{x\in X^\bot}D^{H[x]}_Z
\end{equation*}
is well-founded for any well-order~$Z$ (recall that $D^{H[\bot]}_Z=\{\langle\emptyset,\star\rangle\}$ consists of a single element, so that it is well-founded in any case). Since $F_Z$ is contained in that order it must be well-founded itself. To establish the other direction we show the following: For any $x\in X$ the order $D^{H[x]}_Z$ can be embedded into $F_{Z^\top}$, where $Z^\top=Z\cup\{\top\}$ extends $Z$ by a new maximal element. Let us write $\iota:Z\hookrightarrow Z^\top$ for the inclusion. Definition~\ref{def:coded-prae-dilator-reconstruct} and Proposition~\ref{prop:reconstruct-class-sized-dil} tell us that $D^{H[x]}_\iota(\langle a,\tau\rangle)=\langle[\iota]^{<\omega}(a),\tau\rangle=\langle a,\tau\rangle$ defines an embedding of $D^{H[x]}_Z$ into $D^{H[x]}_{Z^\top}$. Since any $a\in[Z]^{<\omega}$ satisfies $a<^{\operatorname{fin}}_{Z^\top}\top$ we see that $\langle a,\tau\rangle\mapsto\langle\top,x,\langle a,\tau\rangle\rangle$ is the desired embedding of $D^{H[x]}_Z$ into $F_{Z^\top}$.
\end{proof}
With the previous result we have completed our reconstruction of the functions $h$ and $f$ that appear in the informal argument from the introduction. The latter proceeds by considering an upper derivative $g$ of $f$. It then invokes induction on~$\gamma$ to show that each tree $\mathcal T_\gamma$ can be embedded into the ordinal~$g(\gamma+1)\leq g(\alpha)$. In the following we recover this crucial part of the informal argument on the level of dilators (recall that $\alpha$ is represented by the well-order~$X$). It is remarkable that this is possible in a weak base theory, even though the informal argument uses transfinite induction.
\begin{theorem}[$\mathbf{RCA}_0$]\label{thm:embed-Sigma-T}
Consider a well-order~$X$ and an $X$-indexed family $\mathcal T$ of $\mathbb N$-trees. Assume that $G$ and $\xi:F\circ G\Rightarrow G$ form an upper derivative of the normal prae-dilator $F=F[X,\mathcal T]$. Then $\Sigma\mathcal T$ can be embedded into the order $D^G_X$.
\end{theorem}
\begin{proof}
As a preparation we specify two functions that are implicit in the given data: By combining Proposition~\ref{prop:compose-dils-rca}, Lemma~\ref{lem:morph-dilators-extend} and Lemma~\ref{lem:F-alternative} we get an embedding
\begin{equation*}
\xi^F:=D^\xi_X\circ\zeta^{F,G}_X\circ\chi^F_{D^G_X}:F_{D^G_X}\rightarrow D^G_X.
\end{equation*}
From Definitions~\ref{def:coded-normal-dil},~\ref{def:extend-normal-transfos} and~\ref{def:upper-derivative} we know that $G=(G,\mu^G)$ must be a normal prae-dilator and does, as such, give rise to an order preserving function
\begin{equation*}
D^{\mu^G}_X:X\rightarrow D^G_X.
\end{equation*}
Based on these observations we now construct the desired embedding
\begin{equation*}
J:\Sigma\mathcal T\rightarrow D^G_X.
\end{equation*}
The informal argument would suggest constructing the functions $\mathcal T_x\ni\sigma\mapsto J(\langle x,\sigma\rangle)$ by recursion on $x\in X$, but this requires a recursion principle that is not available in our base theory. We will instead use a more finitary form of course-of-values recursion: Recall that $\Sigma_x\mathcal T\cap k$ consists of those $j<k$ that are (codes for) elements of $\Sigma_x\mathcal T$. We assume that the code of any $\langle x,\sigma\rangle\in\Sigma\mathcal T$ is at least as big as the length of the sequence $\sigma$. The value $J(\langle x,\sigma\rangle)$ may then depend on the finite function
\begin{equation*}
e_{x,\sigma}:=J\!\restriction\!(\Sigma_x\mathcal T\cap\operatorname{len}(\sigma)):\Sigma_x\mathcal T\cap\operatorname{len}(\sigma)\rightarrow D^G_X.
\end{equation*}
After describing the construction of $J$ we will set up an induction which ensures that $e_{x,\sigma}$ is an embedding. When this is the case Theorem~\ref{thm:emd-finite} yields an element
\begin{equation*}
J^0(\langle x,\sigma\rangle):=\langle\operatorname{rng}(e_{x,\sigma}),E(\langle x,\sigma\rangle)\rangle\in D^{H[x]}(D^G_X).
\end{equation*}
In view of Definition~\ref{def:F-alternative} we can now state the recursive clause for $J$ as
\begin{equation*}
J(\langle x,\sigma\rangle)=\xi^F(\langle D^{\mu^G}_X(x),x,J^0(\langle x,\sigma\rangle)\rangle).
\end{equation*}
To conclude the proof we show the following by simultaneous induction on $j$:
\begin{enumerate}[label=(\roman*)]
\item If $j$ codes an element of $\Sigma\mathcal T$, then we have $J(j)\in D^G_X$.
\item If $j_1,j_2\leq j$ code elements of $\Sigma\mathcal T$, then we have
\begin{equation*}
j_1<_{\Sigma\mathcal T}j_2\quad\Rightarrow\quad J(j_1)<_{D^G_X}J(j_2).
\end{equation*}
\item If $j$ codes an element of $\Sigma_x\mathcal T$, then we have $J(j)<_{D^G_X} D^{\mu^G}_X(x)$.
\end{enumerate}
To verify the induction step for~(i) we write $j=\langle x,\sigma\rangle$. Parts~(i) and~(ii) of the induction hypothesis guarantee that $e_{x,\sigma}$ is an embedding with values in~$D^G_X$, as promised above. We have seen that this yields $J^0(\langle x,\sigma\rangle)\in D^{H[x]}(D^G_X)$. In view of Definition~\ref{def:F-alternative} we also need
\begin{equation*}
\operatorname{rng}(e_{x,\sigma})<^{\operatorname{fin}}_{D^G_X} D^{\mu^G}_X(x).
\end{equation*}
This holds by part~(iii) of the induction hypothesis. To establish the induction step for~(ii) we consider an inequality
\begin{equation*}
j_1=\langle x_1,\sigma_1\rangle<_{\Sigma\mathcal T}\langle x_2,\sigma_2\rangle=j_2.
\end{equation*}
If we have $x_1<_Xx_2$, then we get $D^{\mu^G}_X(x_1)<_{D^G_X}D^{\mu^G}_X(x_2)$ and thus
\begin{equation*}
\langle D^{\mu^G}_X(x_1),x_1,J^0(\langle x_1,\sigma_1\rangle)\rangle<_{F_{D^G_X}}\langle D^{\mu^G}_X(x_2),x_2,J^0(\langle x_2,\sigma_2\rangle)\rangle.
\end{equation*}
As $\xi^F$ is order preserving this implies $J(j_1)<_{D^G_X}J(j_2)$. Now assume that $j_1<_{\Sigma\mathcal T}j_2$ holds because we have $x_1=x_2=:x$ and $\sigma_1<_{\operatorname{KB}}\sigma_2$ (recall that $\mathcal T_x$ carries the usual Kleene-Brouwer order). Since $e_{x,\sigma_1}$ and $e_{x,\sigma_2}$ are restrictions of the same function, they coincide on the intersection of their domains. Thus Theorem~\ref{thm:emd-finite} yields
\begin{equation*}
J^0(\langle x_1,\sigma_1\rangle)<_{D^{H[x]}(D^G_X)}J^0(\langle x_2,\sigma_2\rangle),
\end{equation*}
which again implies $J(j_1)<_{D^G_X}J(j_2)$. Finally we prove the induction step for~(iii). As a preparation we recall that $D^{H[\bot]}(D^G_X)$ consists of the single element $\langle\emptyset,\star\rangle$. In view of Lemma~\ref{lem:F-alternative}, Definition~\ref{def:dil-F} and Definition~\ref{def:extend-normal-transfos} we have
\begin{multline*}
\chi^F_{D^G_X}(\langle D^{\mu^G}_X(x),\bot,\langle\emptyset,\star\rangle\rangle)=\langle\{D^{\mu^G}_X(x)\},\langle 0,\bot,\star\rangle\rangle=\\
=\langle\{D^{\mu^G}_X(x)\},\mu^F_1(0)\rangle=D^{\mu^F}_{D^G_X}\circ D^{\mu^G}_X(x).
\end{multline*}
Together with Corollary~\ref{cor:deriv-normal-witness} we get
\begin{equation*}
\xi^F(\langle D^{\mu^G}_X(x),\bot,\langle\emptyset,\star\rangle\rangle)=D^\xi_X\circ\zeta^{F,G}_X\circ D^{\mu^F}_{D^G_X}\circ D^{\mu^G}_X(x)=D^{\mu^G}_X(x).
\end{equation*}
To deduce~(iii) we observe that any $j\in\Sigma_x\mathcal T$ has the form $j=\langle y,\sigma\rangle$ with $y<_Xx$. The latter implies
\begin{equation*}
\langle D^{\mu^G}_X(y),y,J^0(\langle y,\sigma\rangle)\rangle<_{F_{D^G_X}}\langle D^{\mu^G}_X(x),\bot,\langle\emptyset,\star\rangle\rangle.
\end{equation*}
By the above this yields
\begin{equation*}
J(j)<_{D^G_X}\xi^F(\langle D^{\mu^G}_X(x),\bot,\langle\emptyset,\star\rangle\rangle)=D^{\mu^G}_X(x),
\end{equation*}
as required.
\end{proof}
The following result completes our reconstruction of the informal argument and establishes the implication (2)$\Rightarrow$(3) that we have discussed in the introduction. The notions that appear in statement~(2) have been made precise in Section~\ref{sect:normal-dils-so}.
\begin{corollary}\label{cor:deriv-implies-BI}
For each $\Pi^1_1$-formula $\varphi(x)$ (possibly with further free variables) the following is provable in $\mathbf{ACA}_0$: If every normal dilator $F$ has an upper derivative $(G,\xi)$ such that $G$ is a dilator, then $\varphi$ satisfies bar induction, i.\,e.~we have
\begin{equation*}
\forall_{x\in X}(\forall_{y<_Xx}\varphi(y)\rightarrow\varphi(x))\rightarrow\forall_{x\in X}\varphi(x)
\end{equation*}
for any well-order $X=(X,<_X)$.
\end{corollary}
\begin{proof}
By the Kleene normal form theorem (see~\cite[Lemma~V.1.4]{simpson09}) there is a bounded arithmetical formula $\theta(\sigma,x)$ such that $\mathbf{ACA}_0$ proves
\begin{equation*}
\varphi(x)\leftrightarrow\forall_f\exists_n\theta(f[n],x).
\end{equation*}
Here the universal quantifier ranges over all functions $f:\mathbb N\rightarrow\mathbb N$. Let us recall that $f[n]=\langle f(0),\dots,f(n-1)\rangle$ denotes the sequence that contains the first $n$ values of such a function. Given a sequence $\sigma=\langle \sigma_0,\dots,\sigma_{\operatorname{len}(\sigma)-1}\rangle$ and a number $n\leq\operatorname{len}(\sigma)$, we similarly write $\sigma[n]=\langle \sigma_0,\dots,\sigma_{n-1}\rangle$. We can now define an $X$-indexed family $\mathcal T=\{\langle x,\sigma\rangle\,|\,x\in X\text{ and }\sigma\in\mathcal T_x\}$ of $\mathbb N$-trees by setting
\begin{equation*}
\mathcal T_x=\{\sigma\in\mathbb N^{<\omega}\,|\,\forall_{n\leq\operatorname{len}(\sigma)}\neg\theta(\sigma[n],x)\}
\end{equation*}
for every $x\in X$. Observe that $\forall_n\neg\theta(f[n],x)$ is equivalent to the assertion that $f$ is a branch in $\mathcal T_x$. Thus we have
\begin{equation*}
\varphi(x)\leftrightarrow\text{``$\mathcal T_x$ is well-founded''},
\end{equation*}
where $\mathcal T_x$ carries the usual Kleene-Brouwer order with respect to~$\mathbb N$. Let us now assume that the premise of the desired induction statement holds. Then $\mathcal T$ is progressive along $X$, using the terminology that we have introduced at the beginning of this section. Consider the prae-dilator $F=F[X,\mathcal T]$ that is constructed according to~Definition~\ref{def:dil-F}. From Corollary~\ref{cor:F-dil} we learn that $F$ is a normal dilator. By the assumption of the present corollary we get a dilator $G$ and a natural transformation $\xi:F\circ G\rightarrow G$ that form an upper derivative of $F$. Theorem~\ref{thm:embed-Sigma-T} tells us that $\Sigma\mathcal T$ can be embedded into the order $D^G_X$. The latter is well-founded, because $X$ is a well-order and $G$ is a dilator. Hence $\Sigma\mathcal T$ is well-founded as well. It follows that $\mathcal T_x$ is well-founded for every $x\in X$, which yields the conclusion $\forall_{x\in X}\varphi(x)$ of the desired induction statement.
\end{proof}
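The Kleene-Brouwer order on the trees $\mathcal T_x$ can be made concrete. The sketch below (an illustration, not part of the formal development) compares finite sequences over $\mathbb N$; on a tree of such sequences this order is well-founded exactly when the tree has no infinite branch, which is the equivalence used in the proof above.

```python
def kb_less(sigma, tau):
    """sigma <_KB tau: either sigma properly extends tau, or sigma
    branches to the left of tau at the first position where the two
    sequences differ."""
    for i in range(min(len(sigma), len(tau))):
        if sigma[i] != tau[i]:
            return sigma[i] < tau[i]
    return len(sigma) > len(tau)      # proper extensions come first

assert kb_less((1, 0), (1,))          # extending a sequence moves downwards
assert kb_less((0, 7), (1,))          # branching left moves downwards
assert not kb_less((1,), (1, 0))
```

An infinite branch of a tree thus yields an infinite descending sequence in the Kleene-Brouwer order, and conversely.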
\section{Constructing the derivative}\label{sect:constructin-derivative}
In the present section we show how to construct a derivative $\partial T$ of a given normal prae-dilator $T$. We will see that $\mathbf{RCA}_0$ proves the existence of $\partial T$, as well as the fact that it is a derivative. As a consequence we obtain the implication (1)$\Rightarrow$(2) from the introduction. What $\mathbf{RCA}_0$ cannot show is that~$\partial T$ is a dilator (i.\,e.~that $X\mapsto D^{\partial T}_X$ preserves well-foundedness) whenever $T$ is one: Due to Corollary~\ref{cor:deriv-implies-BI} this statement implies $\Pi^1_1$-bar induction. The converse implication, which amounts to (3)$\Rightarrow$(1) from the introduction, will be established in Section~\ref{sect:bi-deriv-wf}.
The construction of $\partial T$ can also be exploited to establish general results about derivatives. This relies on the fact that derivatives are essentially unique, as observed after Definition~\ref{def:derivative}. We will use this approach to show that the assumptions of Theorem~\ref{thm:equalizer-to-derivative} are automatic when $(S,\xi)$ is a derivative of $T$.
As mentioned in the introduction, a categorical construction of derivatives has already been given by Aczel~\cite{aczel-phd,aczel-normal-functors}. In the following we give a rather informal presentation of his approach in the terminology of the present paper (in particular we are rather liberal about the distinction between coded and class-sized dilators, cf.~the discussion after Definition~\ref{def:coded-dilator}). Given a normal dilator $T=(T,\mu^T)$ and an order~$X$, Aczel's idea was to define the value $\partial T_X$ as the direct limit of the diagram
\begin{equation*}
\begin{tikzcd}[column sep = large]
X\arrow[r,"\mu^T_X"] & T_X\arrow[r,"T(\mu^T_X)"] & T^2_X:=T(T_X)\arrow[r,"T^2(\mu^T_X):=T(T(\mu^T_X))"] &[2cm] \cdots.
\end{tikzcd}
\end{equation*}
As a direct limit, $\partial T_X$ comes with compatible embeddings $j^n_X:T^n_X\rightarrow\partial T_X$. By the universal property the functions
\begin{equation*}
T(j^n_X)\circ T^n(\mu^T_X):T^n_X\rightarrow T(\partial T_X)
\end{equation*}
glue to an embedding of $\partial T_X$ into $T(\partial T_X)$. The latter is an isomorphism since $T$ preserves direct limits. Thus $\partial T_X$ is a fixed-point of $T$, as one would expect if $\partial T$ is to be a derivative. Furthermore, Aczel could show that $\partial T$ preserves well-foundedness if $T$ does. This is a non-trivial matter, since well-foundedness is not preserved under direct limits in general. The proof that it is preserved under the specific limit constructed above makes crucial use of the assumption that $T$ preserves initial segments (cf.~the discussion before Lemma~\ref{lem:range-dil-support}). Finally, Aczel has shown that $\alpha\mapsto\operatorname{otp}(\partial T_\alpha)$ is the derivative of the normal function $\alpha\mapsto\operatorname{otp}(T_\alpha)$. Let us mention that he did not give an explicit characterization of derivatives on the level of functors, i.\,e.~he did not formulate an analogue of Definition~\ref{def:derivative}.
In order to recover Aczel's construction in $\mathbf{RCA}_0$ we need to approach the direct limit in a particularly finitistic way. Our idea is to represent $\partial T_X$ by a system of~terms. To~see how this works, recall that we want to ensure the existence of an iso\-morphism~$\xi_X:T(\partial T_X)\rightarrow\partial T_X$. In view of Lemma~\ref{lem:class-sized-restrict} (and the discussion after Definition~\ref{def:coded-dilator}) any element of $T(\partial T_X)$ corresponds to a pair $\langle a,\sigma\rangle\in D^T(\partial T_X)$, where $a\subseteq\partial T_X$ is finite and $\sigma\in T_{|a|}$ satisfies $\operatorname{supp}^T_{|a|}(\sigma)=|a|$. Assuming that the elements of $a$ are already represented by terms, we can add a term $\xi\langle a,\sigma\rangle\in\partial T_X$ that represents the value of $\langle a,\sigma\rangle$ under $\xi_X$. To make this idea precise we switch back to the rigorous framework of coded prae-dilators, as introduced in Section~\ref{sect:normal-dils-so}. In particular we want to construct $\partial T$ as a coded prae-dilator, which leads us to focus on the values $\partial T_n$ for the finite orders $n=\{0,\dots,n-1\}$. Let us first specify the underlying set of the order $\partial T_n$:
\begin{definition}[$\mathbf{RCA}_0$]\label{def:derivative-term-system}
Consider a normal prae-dilator $T=(T,\operatorname{supp}^T,\mu^T)$. For each number $n$ we define a term system $\partial T_n$ by the following inductive clauses:
\begin{enumerate}[label=(\roman*)]
\item We have a term $\mu_m\in\partial T_n$ for every number $m<n$.
\item Given a finite set $a\subseteq\partial T_n$ of terms and a $\sigma\in T_{|a|}$ with $\operatorname{supp}^T_{|a|}(\sigma)=|a|$, we get a term $\xi\langle a,\sigma\rangle\in\partial T_n$, provided that the following holds: If we have $a=\{\mu_m\}$ for some $m<n$, then $\sigma$ must be different from $\mu^T_1(0)\in T_1$.
\end{enumerate}
\end{definition}
Note that the term systems $\partial T_n$ are uniformly computable (with respect to~$n$), so that $\mathbf{RCA}_0$ proves the existence of the set
\begin{equation*}
\{\langle n,s\rangle\,|\,s\in\partial T_n\}.
\end{equation*}
This is crucial if we want to extend $\partial T$ into a coded prae-dilator (cf.~the discussion after Definition~\ref{def:coded-prae-dilator}). In order to understand the proviso in clause~(ii) one should think of $\mu_m$ as a notation for~$f'(m)$, where $f$ is the normal function induced by $T$ and $f'$ is its derivative. In the proof of Proposition~\ref{prop:normal-dil-fct} we have seen that $D^{\mu^T}$ amounts to an internal version of the function $f$. Together with Definition~\ref{def:extend-normal-transfos} we see that $\langle\{\mu_m\},\mu^T_1(0)\rangle=D^{\mu^T}_{\partial T_n}(\mu_m)$ corresponds to $f(f'(m))$. Since the latter is equal to $f'(m)$ the terms $\xi\langle\{\mu_m\},\mu^T_1(0)\rangle$ and $\mu_m$ would represent the same ordinal. To keep our notations unique, the first of these terms has been excluded in clause~(ii). A formal version of this intuitive explanation will play a role in the proof of Theorem~\ref{thm:partial-derivative}. The following notion of term length will be used to define the order on $\partial T_n$:
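To see the term system at work, the following sketch specialises Definition~\ref{def:derivative-term-system} and the order from Definition~\ref{def:derivative-order} to the toy normal prae-dilator $T=1+\mathrm{Id}$. The choice of $T$ and the tuple encoding of terms are ours, for illustration only.

```python
import functools

# Toy normal prae-dilator T = 1 + Id: the order T_k is {0,...,k}, where 0
# has empty support, 1+m has support {m}, and mu^T_k(m) = 1+m.  The induced
# normal function is f(alpha) = 1 + alpha, so the derivative should be
# f'(alpha) = omega + alpha.

def mu(m):
    return ('mu', m)

def xi(a, sigma):
    # proviso of clause (ii): xi<{mu_m}, mu^T_1(0)> is excluded, since it
    # would denote the same ordinal f(f'(m)) = f'(m) as the term mu_m
    assert not (len(a) == 1 and a[0][0] == 'mu' and sigma == 1)
    return ('xi', tuple(a), sigma)

def less(s, t):
    """The order on terms, following the clauses for <_{dT_n}."""
    if s[0] == 'mu' and t[0] == 'mu':
        return s[1] < t[1]
    if s[0] == 'mu':                   # t = xi<b, tau>: s below some r in b
        return any(s == r or less(s, r) for r in t[1])
    if t[0] == 'mu':                   # s = xi<a, sigma>: all of a below t
        return all(less(r, t) for r in s[1])
    a, sigma, b, tau = s[1], s[2], t[1], t[2]
    key = functools.cmp_to_key(
        lambda x, y: -1 if less(x, y) else 1 if less(y, x) else 0)
    union = sorted(set(a) | set(b), key=key)
    def emb(c, rho):                   # T_{|iota_c^{a u b}|}(rho) for T = 1+Id
        return 0 if rho == 0 else 1 + union.index(sorted(c, key=key)[rho - 1])
    return emb(a, sigma) < emb(b, tau)

# The omega-chain c_0 < c_1 < ... sits below every mu_m, so the term
# system with n generators has order type omega + n = f'(n), as expected.
c0 = xi((), 0)
c1 = xi((c0,), 1)
assert less(c0, c1) and less(c1, mu(0)) and less(mu(0), mu(1))
assert not less(mu(0), c1) and not less(c0, c0)
```

In this toy case the excluded terms $\xi\langle\{\mu_m\},\mu^T_1(0)\rangle$ are precisely the only candidates that could lie between consecutive $\mu$-terms, so their exclusion leaves the expected order type $\omega+n$.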
\begin{definition}[$\mathbf{RCA}_0$]\label{def:deriv-term-length}
For each $n$ we define a length function $L^{\partial T}_n:\partial T_n\rightarrow\mathbb N$ by recursion over the build-up of terms, setting
\begin{equation*}
L^{\partial T}_n(s)=\begin{cases}
\ulcorner s\urcorner & \text{if $s=\mu_m$},\\
\max\{\ulcorner s\urcorner,1+\textstyle\sum_{t\in a}2\cdot L^{\partial T}_n(t)\} & \text{if $s=\xi\langle a,\sigma\rangle$},
\end{cases}
\end{equation*}
where $\ulcorner s\urcorner$ denotes the code (G\"odel number) of the term $s$.
\end{definition}
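The length function is a straightforward structural recursion. In the sketch below terms are encoded as nested tuples ('mu', m) and ('xi', a, sigma), and len(repr(s)) serves as a stand-in for the G\"odel number of $s$ (both choices are ours, for illustration; any injective coding would do). The two asserted properties are exactly what the subsequent recursion arguments rely on.

```python
def L(s):
    g = len(repr(s))              # stand-in for the Goedel number of s
    if s[0] == 'mu':
        return g
    return max(g, 1 + sum(2 * L(t) for t in s[1]))

t = ('xi', (('mu', 3), ('xi', (), 0)), 1)
# the two properties that justify induction and recursion on term length:
assert len(repr(t)) <= L(t)                  # codes are bounded by lengths
assert all(2 * L(r) < L(t) for r in t[1])    # subterms shrink by a factor 2
```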
Note that $\ulcorner s\urcorner$ coincides with $s$ if Definition~\ref{def:derivative-term-system} is already arithmetized. The role of the G\"odel numbers is to justify certain applications of induction and recursion over the length of terms: In view of $\ulcorner s\urcorner\leq L^{\partial T}_n(s)$ a quantifier of the form
\begin{equation*}
\forall_{s\in\partial T_n}(L^{\partial T}_n(s)\leq l\rightarrow\dots)
\end{equation*}
is bounded. Thus such a quantifier does not lead out of the $\Sigma^0_1$-formulas, for which induction is available in $\mathbf{RCA}_0$. To construct a binary relation $<_{\partial T_n}$ on $\partial T_n$, the following definition decides $s<_{\partial T_n}t$ by recursion on $L^{\partial T}_n(s)+L^{\partial T}_n(t)$. In case we have $s=\xi\langle a,\sigma\rangle$ and $t=\xi\langle b,\tau\rangle$ we can assume that the restriction of $<_{\partial T_n}$ to $a\cup b$ is already determined (note that $r\in a$ yields $2\cdot L^{\partial T}_n(r)<L^{\partial T}_n(s)$, so that we can decide $r<_{\partial T_n}r$). In particular we can check whether $<_{\partial T_n}$ is a linear order on the finite set $a\cup b$. If it is, then we may refer to the functions $|\iota_a^{a\cup b}|$ and $|\iota_b^{a\cup b}|$ from Definition~\ref{def:coded-prae-dilator-reconstruct} (see also the second paragraph of Section~\ref{sect:normal-dils-so}). More explicitly, we can write $\operatorname{en}_a:|a|\rightarrow a$ and $\operatorname{en}_{a\cup b}:|a\cup b|\rightarrow a\cup b$ for the unique increasing enumerations with respect to~$<_{\partial T_n}$. Then the function $|\iota_a^{a\cup b}|:|a|\rightarrow|a\cup b|$ is characterized by the fact that it satisfies $\operatorname{en}_{a\cup b}\circ|\iota_a^{a\cup b}|=\iota_a^{a\cup b}\circ\operatorname{en}_a$, where $\iota_a^{a\cup b}:a\hookrightarrow a\cup b$ is the inclusion. Before Lemma~\ref{lem:range-dil-support} we have seen that a linear order $<_X$ on a set $X$ induces relations $<^{\operatorname{fin}}_X$ and $\leq^{\operatorname{fin}}_X$ between finite subsets of $X$. In the following we use $<^{\operatorname{fin}}_{\partial T_n}$ and $\leq^{\operatorname{fin}}_{\partial T_n}$ as abbreviations, without assuming that $<_{\partial T_n}$ is linear on the relevant parts of $\partial T_n$. In particular we have
\begin{equation*}
s\leq^{\operatorname{fin}}_{\partial T_n} b\qquad\Leftrightarrow\qquad\exists_{r\in b}\,s\leq_{\partial T_n}r
\end{equation*}
for any element $s\in\partial T_n$ and any finite subset $b\subseteq\partial T_n$. Note that $s\leq_{\partial T_n}r$ abbreviates ${s<_{\partial T_n}r}\lor {s=r}$, where the second disjunct refers to the equality of terms. We can now state the definition of the desired order relation:
\begin{definition}[$\mathbf{RCA}_0$]\label{def:derivative-order}
For each $n$ we define a binary relation $<_{\partial T_n}$ on~$\partial T_n$. Invoking recursion on $L^{\partial T}_n(s)+L^{\partial T}_n(t)$, we stipulate that $s<_{\partial T_n}t$ holds if, and only if, one of the following is satisfied:
\begin{enumerate}[label=(\roman*)]
\item We have $s=\mu_m$ and
\begin{itemize}
\item either $t=\mu_k$ and $m<k$,
\item or $t=\xi\langle b,\tau\rangle$ and $s\leq^{\operatorname{fin}}_{\partial T_n} b$.
\end{itemize}
\item We have $s=\xi\langle a,\sigma\rangle$ and
\begin{itemize}
\item either $t=\mu_k$ and $a<^{\operatorname{fin}}_{\partial T_n}t$,
\item or we have $t=\xi\langle b,\tau\rangle$, the restriction of $<_{\partial T_n}$ to $a\cup b$ is linear, and we have $T_{|\iota_a^{a\cup b}|}(\sigma)<_{T_{|a\cup b|}}T_{|\iota_b^{a\cup b}|}(\tau)$.
\end{itemize}
\end{enumerate}
\end{definition}
To show that $<_{\partial T_n}$ is a linear order we will need the following auxiliary result:
\begin{lemma}[$\mathbf{RCA}_0$]\label{lem:supports-order-normal}
If $T$ is a normal prae-dilator, then we have
\begin{equation*}
\sigma\leq_{T_k}\tau\quad\Rightarrow\quad\operatorname{supp}^T_k(\sigma)\leq^{\operatorname{fin}}\operatorname{supp}^T_k(\tau)
\end{equation*}
for any number $k$ and arbitrary elements $\sigma,\tau\in T_k$.
\end{lemma}
\begin{proof}
If the conclusion of the implication is false, then we have $\operatorname{supp}^T_k(\tau)<^{\operatorname{fin}} m$ for some $m\in\operatorname{supp}^T_k(\sigma)$. Note that $\operatorname{supp}^T_k(\sigma)<^{\operatorname{fin}} m$ must fail. In view of Definition~\ref{def:coded-normal-dil} we obtain $\tau<_{T_k}\mu^T_k(m)\leq_{T_k}\sigma$, so that the premise of our implication is false.
\end{proof}
We can now establish the expected fact:
\begin{lemma}[$\mathbf{RCA}_0$]\label{lem:deriv-linear}
Given a normal prae-dilator $T$ and any number $n$, the relation $<_{\partial T_n}$ is a linear order on the term system $\partial T_n$.
\end{lemma}
\begin{proof}
It is straightforward to see that $s<_{\partial T_n}s$ must fail for every $s$, based on the fact that the linear order $<_{T_k}$ is antisymmetric for any number~$k$. We now show
\begin{gather*}
s<_{\partial T_n}t\lor s=t\lor t<_{\partial T_n} s,\\
r<_{\partial T_n}s\land s<_{\partial T_n}t\rightarrow r<_{\partial T_n}t
\end{gather*}
by simultaneous induction on $L^{\partial T}_n(s)+L^{\partial T}_n(t)$ resp.~$L^{\partial T}_n(r)+L^{\partial T}_n(s)+L^{\partial T}_n(t)$. Trichotomy is immediate if we have $s=\mu_m$ and $t=\mu_k$ with $m,k<n$. If we have $s=\mu_m$ and $t=\xi\langle b,\tau\rangle$, then the induction hypothesis yields $s\leq^{\operatorname{fin}}_{\partial T_n}b$ or~$b<^{\operatorname{fin}}_{\partial T_n}s$. In the first case we get $s<_{\partial T_n}t$ while the second leads to $t<_{\partial T_n}s$. Now assume that we have $s=\xi\langle a,\sigma\rangle$ and $t=\xi\langle b,\tau\rangle$. The simultaneous induction hypothesis ensures that $<_{\partial T_n}$ is linear on $a\cup b$ (note that $s'<_{\partial T_n}t'\land t'<_{\partial T_n}s'\rightarrow s'<_{\partial T_n}s'$ is covered, due to the factor $2$ in the definition of $L^{\partial T}_n$). It is easy to conclude unless we have $T_{|\iota_a^{a\cup b}|}(\sigma)=T_{|\iota_b^{a\cup b}|}(\tau)$. In this case the naturality of $\operatorname{supp}^T$ yields
\begin{alignat*}{3}
[|\iota_a^{a\cup b}|]^{<\omega}(|a|)&=[|\iota_a^{a\cup b}|]^{<\omega}(\operatorname{supp}^T_{|a|}(\sigma))&&=\operatorname{supp}^T_{|a\cup b|}(T_{|\iota_a^{a\cup b}|}(\sigma))&&={}\\
{}&=\operatorname{supp}^T_{|a\cup b|}(T_{|\iota_b^{a\cup b}|}(\tau))&&=[|\iota_b^{a\cup b}|]^{<\omega}(\operatorname{supp}^T_{|b|}(\tau))&&=[|\iota_b^{a\cup b}|]^{<\omega}(|b|).
\end{alignat*}
Composing both sides with $[\operatorname{en}_{a\cup b}]^{<\omega}$ we get $a=b$. Then $|\iota_a^{a\cup b}|$ and $|\iota_b^{a\cup b}|$ must be the identity on $|a|=|a\cup b|=|b|$. As $T$ is functorial we get $\sigma=\tau$ and hence~$s=t$. To establish transitivity one needs to distinguish several cases according to the form of the terms $r,s$ and $t$. In the first interesting case we have $r=\mu_m$, $s=\xi\langle a,\sigma\rangle$ and $t=\xi\langle b,\tau\rangle$. Invoking the previous lemma we see that $s<_{\partial T_n}t$ implies
\begin{equation*}
[|\iota_a^{a\cup b}|]^{<\omega}(|a|)=\operatorname{supp}^T_{|a\cup b|}(T_{|\iota_a^{a\cup b}|}(\sigma))\leq^{\operatorname{fin}}\operatorname{supp}^T_{|a\cup b|}(T_{|\iota_b^{a\cup b}|}(\tau))=[|\iota_b^{a\cup b}|]^{<\omega}(|b|).
\end{equation*}
Again we compose both sides with $[\operatorname{en}_{a\cup b}]^{<\omega}$, to get $a\leq^{\operatorname{fin}}_{\partial T_n}b$. In view of $r<_{\partial T_n}s$ we have $\mu_m\leq^{\operatorname{fin}}_{\partial T_n}a$. Using the induction hypothesis we can infer $\mu_m\leq^{\operatorname{fin}}_{\partial T_n}b$ and thus $r<_{\partial T_n}t$. Let us now consider $r=\xi\langle a,\sigma\rangle$, $s=\mu_m$ and~$t=\xi\langle b,\tau\rangle$. In this situation $r<_{\partial T_n}s<_{\partial T_n}t$ amounts to $a<^{\operatorname{fin}}_{\partial T_n}s\leq^{\operatorname{fin}}_{\partial T_n}b$, which implies that $b\leq^{\operatorname{fin}}_{\partial T_n}a$ must~fail. Similarly to the previous case we can conclude
\begin{equation*}
\operatorname{supp}^T_{|a\cup b|}(T_{|\iota_b^{a\cup b}|}(\tau))\not\leq^{\operatorname{fin}}\operatorname{supp}^T_{|a\cup b|}(T_{|\iota_a^{a\cup b}|}(\sigma)).
\end{equation*}
Note that we can refer to $|\iota_a^{a\cup b}|$ and $|\iota_b^{a\cup b}|$, since the simultaneous induction hypothesis ensures that $<_{\partial T_n}$ is linear on $a\cup b$. Using the previous lemma and trichotomy for $<_{T_{|a\cup b|}}$ we obtain $T_{|\iota_a^{a\cup b}|}(\sigma)<_{T_{|a\cup b|}}T_{|\iota_b^{a\cup b}|}(\tau)$ and hence $r<_{\partial T_n}t$. To establish transitivity for $r=\xi\langle a,\sigma\rangle$, $s=\xi\langle b,\tau\rangle$ and $t=\xi\langle c,\rho\rangle$ it suffices to consider the inclusions into $a\cup b\cup c$ and to use transitivity for $<_{T_{|a\cup b\cup c|}}$.
\end{proof}
We will see that the following turns $n\mapsto\partial T_n$ into a functor:
\begin{definition}[$\mathbf{RCA}_0$]\label{def:deriv-functor}
Given a strictly increasing function $f:n\rightarrow l$, we define a function $\partial T_f:\partial T_n\rightarrow\partial T_l$ by recursion over the build-up of terms, setting
\begin{align*}
\partial T_f(\mu_m)&=\mu_{f(m)},\\
\partial T_f(\xi\langle a,\sigma\rangle)&=\xi\langle [\partial T_f]^{<\omega}(a),\sigma\rangle.
\end{align*}
\end{definition}
The fact that $\partial T_f$ has values in $\partial T_l$ is established as part of the following proof:
\begin{lemma}[$\mathbf{RCA}_0$]\label{lem:deriv-embedding}
If $f:n\rightarrow l$ is strictly increasing, then $\partial T_f:\partial T_n\rightarrow\partial T_l$ is an order embedding.
\end{lemma}
\begin{proof}
By simultaneous induction on $L^{\partial T}_n(r)$ resp.~$L^{\partial T}_n(s)+L^{\partial T}_n(t)$ one can verify
\begin{gather*}
r\in\partial T_n\rightarrow\partial T_f(r)\in\partial T_l,\\
s<_{\partial T_n}t\rightarrow\partial T_f(s)<_{\partial T_l}\partial T_f(t).
\end{gather*}
Let us consider the first claim for $r=\xi\langle a,\sigma\rangle$: The simultaneous induction hypothesis implies that $\partial T_f$ is order preserving and hence injective on $a$. In particular we have $|[\partial T_f]^{<\omega}(a)|=|a|$. Furthermore, it is easy to see that $[\partial T_f]^{<\omega}(a)=\{\mu_k\}$ implies $a=\{\mu_m\}$ with $k=f(m)$. Invoking Definition~\ref{def:derivative-term-system} we can now conclude that $r\in\partial T_n$ implies $\partial T_f(r)=\xi\langle [\partial T_f]^{<\omega}(a),\sigma\rangle\in\partial T_l$. To verify that $\partial T_f$ is order preserving we distinguish cases according to the form of $s$ and $t$. In the first interesting case we have $s=\mu_m<_{\partial T_n}\xi\langle b,\tau\rangle=t$ because of $s\leq^{\operatorname{fin}}_{\partial T_n}b$. By the induction hypothesis we obtain $\partial T_f(s)\leq^{\operatorname{fin}}_{\partial T_l}[\partial T_f]^{<\omega}(b)$ and hence
\begin{equation*}
\partial T_f(s)=\mu_{f(m)}<_{\partial T_l}\xi\langle [\partial T_f]^{<\omega}(b),\tau\rangle=\partial T_f(t).
\end{equation*}
Let us also consider the case where $s=\xi\langle a,\sigma\rangle<_{\partial T_n}\xi\langle b,\tau\rangle=t$ holds because we have $T_{|\iota_a^{a\cup b}|}(\sigma)<_{T_{|a\cup b|}}T_{|\iota_b^{a\cup b}|}(\tau)$. To infer $\partial T_f(s)<_{\partial T_l}\partial T_f(t)$ it suffices to show
\begin{equation*}
\left|\iota_{[\partial T_f]^{<\omega}(a)}^{[\partial T_f]^{<\omega}(a\cup b)}\right|=|\iota_a^{a\cup b}|\qquad\text{and}\qquad\left|\iota_{[\partial T_f]^{<\omega}(b)}^{[\partial T_f]^{<\omega}(a\cup b)}\right|=|\iota_b^{a\cup b}|.
\end{equation*}
By the definition of the functor $|\cdot|$ (cf.~the second paragraph of Section~\ref{sect:normal-dils-so}), the first of these equations reduces to
\begin{equation*}
\operatorname{en}_{[\partial T_f]^{<\omega}(a\cup b)}\circ|\iota_a^{a\cup b}|=\iota_{[\partial T_f]^{<\omega}(a)}^{[\partial T_f]^{<\omega}(a\cup b)}\circ\operatorname{en}_{[\partial T_f]^{<\omega}(a)}.
\end{equation*}
The induction hypothesis tells us that $\partial T_f$ is order preserving on $a\cup b$. Since the increasing enumeration of a finite order is uniquely determined this yields
\begin{equation*}
\operatorname{en}_{[\partial T_f]^{<\omega}(a\cup b)}=\partial T_f\circ\operatorname{en}_{a\cup b}:|[\partial T_f]^{<\omega}(a\cup b)|=|a\cup b|\rightarrow[\partial T_f]^{<\omega}(a\cup b).
\end{equation*}
Together with $\operatorname{en}_{a\cup b}\circ|\iota_a^{a\cup b}|=\iota_a^{a\cup b}\circ\operatorname{en}_a$ we indeed get
\begin{align*}
\operatorname{en}_{[\partial T_f]^{<\omega}(a\cup b)}\circ|\iota_a^{a\cup b}|&=\partial T_f\circ\operatorname{en}_{a\cup b}\circ|\iota_a^{a\cup b}|=\partial T_f\circ\iota_a^{a\cup b}\circ\operatorname{en}_a=\\
{}&=\iota_{[\partial T_f]^{<\omega}(a)}^{[\partial T_f]^{<\omega}(a\cup b)}\circ\partial T_f\circ\operatorname{en}_a=\iota_{[\partial T_f]^{<\omega}(a)}^{[\partial T_f]^{<\omega}(a\cup b)}\circ\operatorname{en}_{[\partial T_f]^{<\omega}(a)}.
\end{align*}
The equation with $b$ at the place of $a$ is established in the same way.
\end{proof}
To get a normal prae-dilator (cf.~Definitions~\ref{def:coded-prae-dilator} and~\ref{def:coded-normal-dil}) we need the following:
\begin{definition}[$\mathbf{RCA}_0$]\label{def:deriv-support}
For each $n$ we define a function $\operatorname{supp}^{\partial T}_n:\partial T_n\rightarrow[n]^{<\omega}$ by induction on the build-up of terms, setting
\begin{align*}
\operatorname{supp}^{\partial T}_n(\mu_m)&=\{m\},\\
\operatorname{supp}^{\partial T}_n(\xi\langle a,\sigma\rangle)&=\textstyle\bigcup_{t\in a}\operatorname{supp}^{\partial T}_n(t).
\end{align*}
To define a family of functions $\mu^{\partial T}_n:n\rightarrow\partial T_n$ we put
\begin{equation*}
\mu^{\partial T}_n(m)=\mu_m
\end{equation*}
for all numbers $m<n$.
\end{definition}
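The support function is again a simple structural recursion. With terms encoded as nested tuples ('mu', m) and ('xi', a, sigma) (a hypothetical encoding chosen for illustration), it can be sketched as follows:

```python
# Sketch of supp^{dT}_n: the support of mu_m is {m}, and the support of
# xi<a, sigma> collects the supports of the subterms in a.
def supp(s):
    if s[0] == 'mu':
        return {s[1]}
    return set().union(*(supp(t) for t in s[1]))

assert supp(('mu', 4)) == {4}
assert supp(('xi', (('mu', 2), ('xi', (), 0)), 1)) == {2}
assert supp(('xi', (), 0)) == set()
```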
Let us verify that $\partial T=(\partial T,\operatorname{supp}^{\partial T},\mu^{\partial T})$ has the expected property:
\begin{proposition}[$\mathbf{RCA}_0$]\label{prop:deriv-normal-prae-dilator}
If $T$ is a normal prae-dilator, then so is $\partial T$.
\end{proposition}
\begin{proof}
From Lemma~\ref{lem:deriv-linear} we know that $\partial T_n$ is a linear order for any number~$n$. Lemma~\ref{lem:deriv-embedding} tells us that $f\mapsto\partial T_f$ maps morphisms to morphisms. A straightforward induction over the build-up of terms establishes the functoriality of $\partial T$ and the naturality of $\operatorname{supp}^{\partial T}$. By another induction one can prove the implication
\begin{equation*}
\operatorname{supp}^{\partial T}_n(s)\subseteq a\rightarrow s\in\operatorname{rng}(\partial T_{\iota_a\circ\operatorname{en}_a}),
\end{equation*}
where $\iota_a:a\hookrightarrow n$ denotes the inclusion of a given $a\subseteq n$. For $a=\operatorname{supp}^{\partial T}_n(s)$ this amounts to the support condition from clause~(ii) of Definition~\ref{def:coded-prae-dilator}. We have thus established that $\partial T$ is a coded prae-dilator. The functions $\mu^{\partial T}_n:n\rightarrow\partial T_n$ clearly form a natural family of embeddings. In view of Definitions~\ref{def:derivative-order} and~\ref{def:deriv-support} a~straightforward induction on the build-up of $s$ shows
\begin{equation*}
s<_{\partial T_n}\mu^{\partial T}_n(m)\qquad\Leftrightarrow\qquad\operatorname{supp}^{\partial T}_n(s)<^{\operatorname{fin}} m.
\end{equation*}
According to Definition~\ref{def:coded-normal-dil} this means that $\partial T$ is normal.
\end{proof}
Our next goal is to turn $\partial T$ into an upper derivative of $T$. According to Definition~\ref{def:upper-derivative} we need to construct a morphism $\xi^T:T\circ\partial T\Rightarrow\partial T$ of normal prae-dilators. Concerning the notion of composition, Definitions~\ref{def:compose-dils} and~\ref{def:coded-prae-dilator-reconstruct} tell us that any element of $(T\circ\partial T)_n=D^T(\partial T_n)$ has the form $\langle a,\sigma\rangle$, where $a\subseteq\partial T_n$ is finite and $\sigma\in T_{|a|}$ satisfies $\operatorname{supp}^T_{|a|}(\sigma)=|a|$. In view of Definition~\ref{def:derivative-term-system} this justifies the following construction:
\begin{definition}[$\mathbf{RCA}_0$]\label{def:deriv-xiT}
For each $n$ we define a function $\xi^T_n:(T\circ\partial T)_n\rightarrow\partial T_n$ by setting
\begin{equation*}
\xi^T_n(\langle a,\sigma\rangle)=\begin{cases}
\mu_m & \text{if $\langle a,\sigma\rangle=\langle\{\mu_m\},\mu^T_1(0)\rangle$ with $m<n$},\\
\xi\langle a,\sigma\rangle & \text{if $\langle a,\sigma\rangle$ has a different form}.
\end{cases}
\end{equation*}
\end{definition}
The following result is important, as it yields the implication (1)$\Rightarrow$(2) from the introduction of this paper.
\begin{proposition}[$\mathbf{RCA}_0$]\label{prop:partial-upper-deriv}
If $T$ is a normal prae-dilator, then $(\partial T,\xi^T)$ is an upper derivative of $T$.
\end{proposition}
\begin{proof}
In view of Definition~\ref{def:upper-derivative} we must establish that $\xi^T:T\circ\partial T\Rightarrow\partial T$ is a morphism of normal prae-dilators, as characterized by Definition~\ref{def:morphism-dils}. Let us first show that each function $\xi^T_n:(T\circ\partial T)_n\rightarrow\partial T_n$ is an embedding. To make the results from Section~\ref{sect:normal-dils-so} applicable we observe that Definition~\ref{def:extend-normal-transfos} yields
\begin{equation*}
\langle\{\mu_m\},\mu^T_1(0)\rangle=D^{\mu^T}_{\partial T_n}(\mu_m).
\end{equation*}
To see that $s<_{D^T(\partial T_n)}t$ implies $\xi^T_n(s)<_{\partial T_n}\xi^T_n(t)$ we now distinguish cases according to the form of $s$ and $t$. First assume that we have
\begin{equation*}
s=\langle\{\mu_m\},\mu^T_1(0)\rangle=D^{\mu^T}_{\partial T_n}(\mu_m)<_{D^T(\partial T_n)}D^{\mu^T}_{\partial T_n}(\mu_k)=\langle\{\mu_k\},\mu^T_1(0)\rangle=t.
\end{equation*}
From Proposition~\ref{prop:reconstruct-normal-dil} we know that $D^{\mu^T}_{\partial T_n}$ is an embedding. Thus we indeed get
\begin{equation*}
\xi^T_n(s)=\mu_m<_{\partial T_n}\mu_k=\xi^T_n(t).
\end{equation*}
Now consider the case
\begin{equation*}
s=\langle a,\sigma\rangle<_{D^T(\partial T_n)}D^{\mu^T}_{\partial T_n}(\mu_k)=\langle\{\mu_k\},\mu^T_1(0)\rangle=t,
\end{equation*}
where $\langle a,\sigma\rangle$ is not of the form $\langle\{\mu_m\},\mu^T_1(0)\rangle$. By Proposition~\ref{prop:reconstruct-normal-dil} we get $a<^{\operatorname{fin}}_{\partial T_n}\mu_k$. Invoking Definition~\ref{def:derivative-order} we can conclude
\begin{equation*}
\xi^T_n(s)=\xi\langle a,\sigma\rangle<_{\partial T_n}\mu_k=\xi^T_n(t).
\end{equation*}
The case where $s=\langle\{\mu_m\},\mu^T_1(0)\rangle$ while $t=\langle b,\tau\rangle$ has a different form is treated analogously (infer $\mu_m\leq^{\operatorname{fin}}_{\partial T_n}b$ from the fact that $b<^{\operatorname{fin}}_{\partial T_n}\mu_m$ must fail). Finally we consider the case where we have
\begin{equation*}
s=\langle a,\sigma\rangle<_{D^T(\partial T_n)}\langle b,\tau\rangle=t
\end{equation*}
and neither $s$ nor $t$ is of the form $\langle\{\mu_m\},\mu^T_1(0)\rangle$ with $m<n$. By Definition~\ref{def:coded-prae-dilator-reconstruct} we get $T_{|\iota_a^{a\cup b}|}(\sigma)<_{T_{|a\cup b|}}T_{|\iota_b^{a\cup b}|}(\tau)$. In view of Definition~\ref{def:derivative-order} this yields
\begin{equation*}
\xi^T_n(s)=\xi\langle a,\sigma\rangle<_{\partial T_n}\xi\langle b,\tau\rangle=\xi^T_n(t).
\end{equation*}
Let us now show that $\xi^T$ is natural: Given a strictly increasing function~$f:n\rightarrow k$, we invoke Definitions~\ref{def:compose-dils} and~\ref{def:coded-prae-dilator-reconstruct} to obtain
\begin{equation*}
\xi^T_k\circ(T\circ\partial T)_f(\langle a,\sigma\rangle)=\xi^T_k(\langle[\partial T_f]^{<\omega}(a),\sigma\rangle).
\end{equation*}
First assume $\langle a,\sigma\rangle=\langle\{\mu_m\},\mu^T_1(0)\rangle$. Using Definition~\ref{def:deriv-functor} we compute
\begin{multline*}
\xi^T_k\circ(T\circ\partial T)_f(\langle a,\sigma\rangle)=\xi^T_k(\langle\{\mu_{f(m)}\},\mu^T_1(0)\rangle)=\mu_{f(m)}=\\
=\partial T_f(\mu_m)=\partial T_f\circ\xi^T_n(\langle a,\sigma\rangle).
\end{multline*}
Now assume that $\langle a,\sigma\rangle$ does not have the form $\langle\{\mu_m\},\mu^T_1(0)\rangle$. Then $\langle[\partial T_f]^{<\omega}(a),\sigma\rangle$ does not have this form either. Again by Definition~\ref{def:deriv-functor} we get
\begin{equation*}
\xi^T_k\circ(T\circ\partial T)_f(\langle a,\sigma\rangle)=\xi\langle[\partial T_f]^{<\omega}(a),\sigma\rangle=\partial T_f(\xi\langle a,\sigma\rangle)=\partial T_f\circ\xi^T_n(\langle a,\sigma\rangle).
\end{equation*}
To conclude that $\xi^T:T\circ\partial T\Rightarrow\partial T$ is a morphism of normal prae-dilators we must show $\xi^T\circ\mu^{T\circ\partial T}=\mu^{\partial T}$ (cf.~Definition~\ref{def:morphism-dils}). By Definition~\ref{def:compose-normal-dils} we indeed get
\begin{equation*}
\xi^T_n\circ\mu^{T\circ\partial T}_n(m)=\xi^T_n(\langle\{\mu^{\partial T}_n(m)\},\mu^T_1(0)\rangle)=\xi^T_n(\langle\{\mu_m\},\mu^T_1(0)\rangle)=\mu_m=\mu^{\partial T}_n(m)
\end{equation*}
for all numbers $m<n$.
\end{proof}
In the construction of $\partial T$ we have only added terms that were needed as values of $\xi:T\circ\partial T\Rightarrow\partial T$. The resulting minimality of $\partial T$ leads to the following:
\begin{theorem}[$\mathbf{RCA}_0$]\label{thm:terms-derivative}
Assume that $T$ is a normal prae-dilator. Then $(\partial T,\xi^T)$ is a derivative of $T$.
\end{theorem}
\begin{proof}
The previous proposition tells us that $(\partial T,\xi^T)$ is an upper derivative. In view of Definition~\ref{def:derivative} we assume that $S$ and $\xi':T\circ S\Rightarrow S$ form an upper derivative of $T$ as well. We must show that there is a unique morphism $\nu:\partial T\Rightarrow S$ of upper derivatives. Let us begin with existence: Note that the normality of $S$ is witnessed by a natural family of embeddings $\mu^S_n:n\rightarrow S_n$. Also recall that $(T\circ S)_n=D^T(S_n)$ consists of pairs~$\langle b,\sigma\rangle$, where $b\subseteq S_n$ is finite and $\sigma\in T_{|b|}$ satisfies $\operatorname{supp}^T_{|b|}(\sigma)=|b|$. For each $n$ we define $\nu_n:\partial T_n\rightarrow S_n$ by recursion over the build-up of terms, setting
\begin{align*}
\nu_n(\mu_m)&=\mu^S_n(m),\\
\nu_n(\xi\langle a,\sigma\rangle)&=\xi'_n(\langle[\nu_n]^{<\omega}(a),\sigma\rangle).
\end{align*}
It is not immediately clear that the second clause produces values in~$S_n$: To see that $\xi\langle a,\sigma\rangle\in\partial T_n$ implies $\langle[\nu_n]^{<\omega}(a),\sigma\rangle\in D^T(S_n)$ we need $|[\nu_n]^{<\omega}(a)|=|a|$, which relies on the fact that $\nu_n$ is order preserving and hence injective. This suggests that we verify
\begin{gather*}
r\in\partial T_n\rightarrow\nu_n(r)\in S_n,\\
s<_{\partial T_n}t\rightarrow\nu_n(s)<_{S_n}\nu_n(t)
\end{gather*}
by simultaneous induction on $L^{\partial T}_n(r)$ resp.~$L^{\partial T}_n(s)+L^{\partial T}_n(t)$. To establish that $\nu_n$ is order preserving one needs to consider different possibilities for the form of $s$ and~$t$. The first interesting case is
\begin{equation*}
s=\xi\langle a,\sigma\rangle<_{\partial T_n}\mu_m=t.
\end{equation*}
According to Definition~\ref{def:derivative-order} we have $a<^{\operatorname{fin}}_{\partial T_n}\mu_m$, so that the induction hypothesis yields $[\nu_n]^{<\omega}(a)<^{\operatorname{fin}}_{S_n}\mu^S_n(m)$. By Definition~\ref{def:coded-normal-dil} we get $\operatorname{supp}^S_n(\nu_n(r))<^{\operatorname{fin}} m$ for all~$r\in a$. Using Lemma~\ref{lem:dilator-cartesian} and Definition~\ref{def:compose-dils} we obtain
\begin{equation*}
\operatorname{supp}^S_n(\xi'_n(\langle[\nu_n]^{<\omega}(a),\sigma\rangle))=\operatorname{supp}^{T\circ S}_n(\langle[\nu_n]^{<\omega}(a),\sigma\rangle)=\bigcup_{r\in a}\operatorname{supp}^S_n(\nu_n(r))<^{\operatorname{fin}} m.
\end{equation*}
By the other direction of Definition~\ref{def:coded-normal-dil} this implies
\begin{equation*}
\nu_n(s)=\xi'_n(\langle[\nu_n]^{<\omega}(a),\sigma\rangle)<_{S_n}\mu^S_n(m)=\nu_n(t),
\end{equation*}
as desired. Let us next consider the case where
\begin{equation*}
s=\mu_m<_{\partial T_n}\xi\langle b,\tau\rangle=t
\end{equation*}
holds because of $\mu_m\leq^{\operatorname{fin}}_{\partial T_n} b$. Similarly to the above one can deduce that the statements $\operatorname{supp}^S_n(\nu_n(t))<^{\operatorname{fin}} m$ and $\nu_n(t)<_{S_n}\mu^S_n(m)=\nu_n(s)$ must fail. In order to conclude $\nu_n(s)<_{S_n}\nu_n(t)$ we shall now establish $\nu_n(s)\neq\nu_n(t)$. According to Definition~\ref{def:compose-normal-dils} the normality of $T\circ S$ is witnessed by the functions
\begin{equation*}
m\mapsto\mu^{T\circ S}_n(m)=\langle\{\mu^S_n(m)\},\mu^T_1(0)\rangle.
\end{equation*}
Since $\xi'$ is a morphism of normal prae-dilators we get
\begin{equation*}
\nu_n(s)=\mu^S_n(m)=\xi'_n\circ\mu^{T\circ S}_n(m)=\xi'_n(\langle\{\mu^S_n(m)\},\mu^T_1(0)\rangle).
\end{equation*}
Invoking the injectivity of the embedding $\xi'_n$ we learn that $\nu_n(s)=\nu_n(t)$ would imply $\langle\{\nu_n(\mu_m)\},\mu^T_1(0)\rangle=\langle[\nu_n]^{<\omega}(b),\tau\rangle$. By induction hypothesis $\nu_n$ is injective on $b\cup\{\mu_m\}$. Hence $\nu_n(s)=\nu_n(t)$ would even yield $t=\xi\langle\{\mu_m\},\mu^T_1(0)\rangle$. This possibility, however, has been excluded in Definition~\ref{def:derivative-term-system}. Finally we assume that
\begin{equation*}
s=\xi\langle a,\sigma\rangle<_{\partial T_n}\xi\langle b,\tau\rangle=t
\end{equation*}
holds because of $T_{|\iota_a^{a\cup b}|}(\sigma)<_{T_{|a\cup b|}}T_{|\iota_b^{a\cup b}|}(\tau)$. The induction hypothesis ensures that~$\nu_n$ is order preserving on $a\cup b$. As in the proof of Lemma~\ref{lem:deriv-embedding} one can show
\begin{equation*}
\left|\iota_{[\nu_n]^{<\omega}(a)}^{[\nu_n]^{<\omega}(a\cup b)}\right|=|\iota_a^{a\cup b}|\qquad\text{and}\qquad\left|\iota_{[\nu_n]^{<\omega}(b)}^{[\nu_n]^{<\omega}(a\cup b)}\right|=|\iota_b^{a\cup b}|.
\end{equation*}
Invoking Definition~\ref{def:coded-prae-dilator-reconstruct} one then obtains $\langle [\nu_n]^{<\omega}(a),\sigma\rangle<_{D^T(S_n)}\langle [\nu_n]^{<\omega}(b),\tau\rangle$. Since $\xi'_n$ is an embedding of $(T\circ S)_n=D^T(S_n)$ into $S_n$ this implies
\begin{equation*}
\nu_n(s)=\xi'_n(\langle [\nu_n]^{<\omega}(a),\sigma\rangle)<_{S_n}\xi'_n(\langle [\nu_n]^{<\omega}(b),\tau\rangle)=\nu_n(t).
\end{equation*}
So far we have established that each function $\nu_n$ is an embedding of $\partial T_n$ into $S_n$. To conclude that these embeddings form a morphism of prae-dilators we must show that they are natural: Given a strictly increasing function $f:n\rightarrow k$, we establish $\nu_k\circ\partial T_f(s)=S_f\circ\nu_n(s)$ by induction on the build-up of $s$. In the case of $s=\mu_m$ we invoke the naturality of $\mu^S$ to get
\begin{equation*}
\nu_k\circ\partial T_f(\mu_m)=\nu_k(\mu_{f(m)})=\mu^S_k(f(m))=S_f(\mu^S_n(m))=S_f\circ\nu_n(\mu_m).
\end{equation*}
Let us now establish the induction step for $s=\xi\langle a,\sigma\rangle$. In view of Definitions~\ref{def:compose-dils} and~\ref{def:coded-prae-dilator-reconstruct} the induction hypothesis yields
\begin{equation*}
(T\circ S)_f(\langle[\nu_n]^{<\omega}(a),\sigma\rangle)=\langle[S_f]^{<\omega}\circ[\nu_n]^{<\omega}(a),\sigma\rangle=\langle[\nu_k]^{<\omega}\circ[\partial T_f]^{<\omega}(a),\sigma\rangle.
\end{equation*}
Together with the naturality of $\xi':T\circ S\Rightarrow S$ we get
\begin{multline*}
\nu_k\circ\partial T_f(\xi\langle a,\sigma\rangle)=\nu_k(\xi\langle[\partial T_f]^{<\omega}(a),\sigma\rangle)=\xi'_k(\langle[\nu_k]^{<\omega}\circ[\partial T_f]^{<\omega}(a),\sigma\rangle)=\\
=\xi'_k((T\circ S)_f(\langle[\nu_n]^{<\omega}(a),\sigma\rangle))=S_f(\xi'_n(\langle[\nu_n]^{<\omega}(a),\sigma\rangle))=S_f\circ\nu_n(\xi\langle a,\sigma\rangle),
\end{multline*}
as required. Next we observe
\begin{equation*}
\nu_n\circ\mu^{\partial T}_n(m)=\nu_n(\mu_m)=\mu^S_n(m),
\end{equation*}
which shows that $\nu$ is a morphism of normal prae-dilators. To conclude that we have a morphism of upper derivatives we need to establish $\nu\circ\xi^T=\xi'\circ T(\nu)$. First observe that Definitions~\ref{def:comp-morphs} and~\ref{def:coded-prae-dilator-reconstruct} yield
\begin{equation*}
T(\nu)_n(\langle a,\sigma\rangle)=D^T(\nu_n)(\langle a,\sigma\rangle)=\langle[\nu_n]^{<\omega}(a),\sigma\rangle.
\end{equation*}
If $\langle a,\sigma\rangle\in(T\circ\partial T)_n$ is not of the form $\langle\{\mu_m\},\mu^T_1(0)\rangle$, then we obtain
\begin{equation*}
\nu_n\circ\xi^T_n(\langle a,\sigma\rangle)=\nu_n(\xi\langle a,\sigma\rangle)=\xi'_n(\langle[\nu_n]^{<\omega}(a),\sigma\rangle)=\xi'_n\circ T(\nu)_n(\langle a,\sigma\rangle).
\end{equation*}
In the remaining case Definition~\ref{def:deriv-xiT} yields
\begin{equation*}
\nu_n\circ\xi^T_n(\langle\{\mu_m\},\mu^T_1(0)\rangle)=\nu_n(\mu_m)=\mu^S_n(m).
\end{equation*}
Invoking Definition~\ref{def:compose-normal-dils} and the fact that $\xi':T\circ S\Rightarrow S$ is a morphism of normal prae-dilators we also get
\begin{alignat*}{3}
\xi'_n\circ T(\nu)_n(\langle\{\mu_m\},\mu^T_1(0)\rangle)&=\xi'_n(\langle\{\nu_n(\mu_m)\},\mu^T_1(0)\rangle)&&={}\\
{}&=\xi'_n(\langle\{\mu^S_n(m)\},\mu^T_1(0)\rangle)&&=\xi'_n\circ\mu^{T\circ S}_n(m)=\mu^S_n(m).
\end{alignat*}
To complete the proof we must show that $\nu$ is unique: Given an arbitrary morphism $\nu':\partial T\Rightarrow S$ of upper derivatives, we establish $\nu'_n(s)=\nu_n(s)$ by induction on the build-up of $s$. In the case of $s=\mu_m$ we invoke Definition~\ref{def:deriv-support} and the assumption that $\nu'$ is a morphism of normal prae-dilators to get
\begin{equation*}
\nu'_n(\mu_m)=\nu'_n\circ\mu^{\partial T}_n(m)=\mu^S_n(m)=\nu_n(\mu_m).
\end{equation*}
Given a term $s=\xi\langle a,\sigma\rangle$, we observe that the induction hypothesis implies
\begin{equation*}
T(\nu')_n(\langle a,\sigma\rangle)=D^T(\nu'_n)(\langle a,\sigma\rangle)=\langle[\nu'_n]^{<\omega}(a),\sigma\rangle=\langle[\nu_n]^{<\omega}(a),\sigma\rangle.
\end{equation*}
Together with the assumption that $\nu'$ is a morphism of upper derivatives we obtain
\begin{equation*}
\nu'_n(\xi\langle a,\sigma\rangle)=\nu'_n\circ\xi^T_n(\langle a,\sigma\rangle)=\xi'_n\circ T(\nu')_n(\langle a,\sigma\rangle)=\xi'_n(\langle[\nu_n]^{<\omega}(a),\sigma\rangle)=\nu_n(\xi\langle a,\sigma\rangle),
\end{equation*}
as required.
\end{proof}
We have described a construction that yields a derivative $\partial T$ of a given normal prae-dilator~$T$. Since derivatives are essentially unique, the construction of $\partial T$ can be exploited to prove general properties of derivatives. The following result establishes some of the assumptions from Theorem~\ref{thm:equalizer-to-derivative}. The remaining assumption, which states that $X\mapsto D^S_X$ preserves well-foundedness, will be considered in the next section (in view of Corollary~\ref{cor:deriv-implies-BI} this will require a stronger base theory).
\begin{theorem}[$\mathbf{RCA}_0$]\label{thm:partial-derivative}
Consider a normal prae-dilator $T$. If $(S,\xi)$ is a derivative of $T$, then $\xi:T\circ S\Rightarrow S$ is a natural isomorphism. Furthermore
\begin{equation*}
\begin{tikzcd}
n\ar{r}{\mu^S_n} &[2em] S_n\arrow[r,shift left,"\operatorname{Id}_{S_n}"]\arrow[r,shift right,swap,"\xi_n\circ D^{\mu^T}_{S_n}"]&[5em] S_n
\end{tikzcd}
\end{equation*}
is an equalizer diagram for every number~$n$.
\end{theorem}
\begin{proof}
The definition of derivative ensures that $\xi$ is a natural transformation. To conclude that it is a natural isomorphism we show that $\xi_n:(T\circ S)_n\rightarrow S_n$ is surjective for each~$n$. Since both $(S,\xi)$ and $(\partial T,\xi^T)$ are derivatives of $T$, there is an isomorphism $\nu:S\Rightarrow\partial T$ of upper derivatives (cf.~the remark after Definition~\ref{def:derivative}). In view of Definitions~\ref{def:morph-upper-derivs} and~\ref{def:comp-morphs} we have
\begin{equation*}
\nu_n\circ\xi_n=\xi^T_n\circ T(\nu)_n=\xi^T_n\circ D^T(\nu_n).
\end{equation*}
Now $\nu_n$ is bijective, and it is straightforward to infer that the same holds for $D^T(\nu_n)$. So it suffices to show that $\xi^T_n$ is surjective. Aiming at the latter, we first observe that Definitions~\ref{def:deriv-support} and~\ref{def:morphism-dils} yield
\begin{equation*}
\mu_m=\mu^{\partial T}_n(m)=\xi^T_n\circ\mu^{T\circ\partial T}_n(m)\in\operatorname{rng}(\xi^T_n).
\end{equation*}
It remains to consider an element $\xi\langle a,\sigma\rangle\in \partial T_n$. In view of Definitions~\ref{def:derivative-term-system},~\ref{def:coded-prae-dilator-reconstruct} and~\ref{def:compose-dils} we have $\langle a,\sigma\rangle\in D^T(\partial T_n)=(T\circ\partial T)_n$. Thus we get
\begin{equation*}
\xi\langle a,\sigma\rangle=\xi^T_n(\langle a,\sigma\rangle)\in\operatorname{rng}(\xi^T_n),
\end{equation*}
which completes the proof that $\xi$ is a natural isomorphism. After the statement of Theorem~\ref{thm:equalizer-to-derivative} above we have observed that the given equalizer diagram is automatically commutative. To establish that $\mu^S_n$ is an equalizer of $\xi_n\circ D^{\mu^T}_{S_n}$ and the identity we must show that
\begin{equation*}
\xi_n\circ D^{\mu^T}_{S_n}(s)=s\quad\Rightarrow\quad s\in\operatorname{rng}(\mu^S_n)
\end{equation*}
holds for any element $s\in S_n$. To reduce the claim to the special case with $(\partial T,\xi^T)$ at the place of $(S,\xi)$ we apply $\nu_n$ to both sides of the antecedent. Using the naturality of~$D^{\mu^T}$, which is provided by Proposition~\ref{prop:reconstruct-normal-dil}, this yields
\begin{equation*}
\nu_n(s)=\nu_n\circ\xi_n\circ D^{\mu^T}_{S_n}(s)=\xi^T_n\circ D^T(\nu_n)\circ D^{\mu^T}_{S_n}(s)=\xi^T_n\circ D^{\mu^T}_{\partial T_n}\circ\nu_n(s).
\end{equation*}
Assuming the special case of the desired implication, we obtain $\nu_n(s)\in\operatorname{rng}(\mu^{\partial T}_n)$, say $\nu_n(s)=\mu^{\partial T}_n(m)$. Since $\nu$ is a morphism of normal prae-dilators, this implies
\begin{equation*}
\nu_n\circ\mu^S_n(m)=\mu^{\partial T}_n(m)=\nu_n(s).
\end{equation*}
Invoking the injectivity of $\nu_n$ we see $s=\mu^S_n(m)\in\operatorname{rng}(\mu^S_n)$, which is the conclusion of the general case. It remains to establish the special case for $\partial T$. Aiming at the contrapositive of the desired implication, let us assume that $s\in\partial T_n$ is not of the form $\mu^{\partial T}_n(m)=\mu_m$. Then Definitions~\ref{def:extend-normal-transfos} and~\ref{def:deriv-xiT} yield
\begin{equation*}
\xi^T_n\circ D^{\mu^T}_{\partial T_n}(s)=\xi^T_n(\langle\{s\},\mu^T_1(0)\rangle)=\xi\langle\{s\},\mu^T_1(0)\rangle.
\end{equation*}
The term on the right cannot be equal to $s$, which it contains as a proper subterm (one can also appeal to the fact that $s$ is shorter in the sense of Definition~\ref{def:deriv-term-length}).
\end{proof}
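Let us record an elementary reading of the equalizer condition, which results from combining the commutativity of the diagram with the implication established in the proof: together with the injectivity of $\mu^S_n$, the theorem asserts
\begin{equation*}
\operatorname{rng}(\mu^S_n)=\{s\in S_n\,|\,\xi_n\circ D^{\mu^T}_{S_n}(s)=s\}.
\end{equation*}
This parallels the usual characterization of the derivative of a normal function as the function that enumerates its fixed points.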
To conclude this section we show that the conditions from the previous theorem do not suffice to characterize derivatives on the categorical level:
\begin{example}\label{ex:equalizers-without-deriv}
Define a normal dilator $T$ by setting $T_n=\{0,\dots,n-1\}$, $T_f=f$, $\operatorname{supp}^T_n(m)=\{m\}$ and $\mu^T_n(m)=m$. Furthermore, consider the sets
\begin{equation*}
S_n=\mathbb Z+n=\{\hat p\,|\,p\in\mathbb Z\}\cup\{m\,|\,0\leq m<n\}
\end{equation*}
with the expected ordering (i.\,e.~such that $\hat p<_{S_n}\hat q<_{S_n}m<_{S_n}k$ holds for all $p<q$ from~$\mathbb Z$ and all $m<k$ from $\{0,\dots,n-1\}$). To turn $S$ into a prae-dilator we set
\begin{align*}
S_f(\sigma)&=\begin{cases}
f(m) & \text{if $\sigma=m\in\{0,\dots,n-1\}$,}\\
\sigma & \text{if $\sigma=\hat p$ with $p\in\mathbb Z$,}
\end{cases}\\
\operatorname{supp}^S_n(\sigma)&=\begin{cases}
\{m\} & \text{if $\sigma=m\in\{0,\dots,n-1\}$,}\\
\emptyset & \text{if $\sigma=\hat p$ with $p\in\mathbb Z$.}
\end{cases}
\end{align*}
Let us point out that $S$ is not a dilator, since $D^S_n\cong S_n$ is ill-founded (cf.~Lemma~\ref{lem:class-sized-restrict}). Be that as it may, we obtain a normal prae-dilator by setting
\begin{equation*}
\mu^S_n(m)=m\in\{0,\dots,n-1\}\subseteq S_n.
\end{equation*}
Since all supports with respect to $T$ are singletons we have
\begin{equation*}
(T\circ S)_n=D^T(S_n)=\{\langle\{\sigma\},0\rangle\,|\,\sigma\in S_n\}.
\end{equation*}
Thus we can define $\xi:T\circ S\Rightarrow S$ by setting
\begin{equation*}
\xi_n(\langle\{\sigma\},0\rangle)=\begin{cases}
m & \text{if $\sigma=m\in\{0,\dots,n-1\}$,}\\
\widehat{p+1} & \text{if $\sigma=\hat p$ with $p\in\mathbb Z$.}
\end{cases}
\end{equation*}
One can check that $(S,\xi)$ is an upper derivative of $T$. It is easy to see that~$\xi$ is an isomorphism, as $p\mapsto p+1$ is an automorphism of $\mathbb Z$. Furthermore, the diagram from Theorem~\ref{thm:partial-derivative} defines an equalizer: Assume that we have
\begin{equation*}
\sigma=\xi_n\circ D^{\mu^T}_{S_n}(\sigma)=\xi_n(\langle\{\sigma\},\mu^T_1(0)\rangle)=\xi_n(\langle\{\sigma\},0\rangle).
\end{equation*}
In view of $p\neq p+1$ we cannot have $\sigma=\hat p$ with $p\in\mathbb Z$. Thus we must have $\sigma=m$ for some number $m\in\{0,\dots,n-1\}$. It follows that
\begin{equation*}
\sigma=m=\mu^S_n(m)
\end{equation*}
lies in the range of $\mu^S_n$, as required for the equalizer condition. Thus $S$ and $\xi$ satisfy the conclusion of the previous theorem. Nevertheless they do not form a derivative of $T$. Otherwise we would get a morphism $S\Rightarrow\partial T$ of upper derivatives. This is impossible since $S_n=\mathbb Z+n$ is infinite while $\partial T_n$ is finite: In view of $T_1=\{\mu^T_1(0)\}$ Definition~\ref{def:derivative-term-system} yields $\partial T_n=\{\mu_m\,|\,0\leq m<n\}$.
\end{example}
\section{From $\Pi^1_1$-bar induction to preservation of well-foundedness}\label{sect:bi-deriv-wf}
In this section we use $\Pi^1_1$-bar induction to prove the following: If~$T$ is a normal dilator, then $X\mapsto D^{\partial T}_X$ preserves well-foundedness, so that $\partial T$ is a normal dilator as well. This establishes the implication (3)$\Rightarrow$(1) from the introduction. Together with the results of the previous sections we learn that (1), (2) and~(3) are equivalent over $\mathbf{ACA}_0$. Invoking Theorems~\ref{thm:equalizer-to-derivative} and~\ref{thm:partial-derivative} we will also be able to conclude the following: If $(S,\xi)$ is a derivative of a normal dilator $T$, then $\alpha\mapsto\operatorname{otp}(D^S_\alpha)$ is the derivative (in the usual sense) of the normal function $\alpha\mapsto\operatorname{otp}(D^T_\alpha)$.
The construction from the previous section yields a derivative~$\partial T$ of a given normal prae-dilator~$T$. To assess whether $\partial T$ is a dilator we must consider the orders $D^{\partial T}_X$ (cf.~Definitions~\ref{def:coded-prae-dilator-reconstruct} and~\ref{def:coded-dilator}). These will be approximated as follows:
\begin{definition}[$\mathbf{RCA}_0$]\label{def:partial-up-to-x}
Consider a normal prae-dilator $T$, as well as a linear order $X=(X,<_X)$. We set
\begin{equation*}
\partial T^x_X=\{\langle a,s\rangle\in D^{\partial T}_X\,|\,a<^{\operatorname{fin}}_X x\}
\end{equation*}
for any element $x\in X$.
\end{definition}
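As an illustration (which is not needed for the formal development), consider $X=\mathbb N$ with the usual order and $x=n$: here the condition $a<^{\operatorname{fin}}_{\mathbb N}n$ states that every element of $a$ is smaller than $n$, so that we obtain
\begin{equation*}
\partial T^n_{\mathbb N}=\{\langle a,s\rangle\in D^{\partial T}_{\mathbb N}\,|\,a\subseteq\{0,\dots,n-1\}\}.
\end{equation*}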
To distinguish the expressions $\partial T_X^x$ and $\partial T_n$ (cf.~Definition~\ref{def:derivative-term-system}) it suffices to observe that the latter has no superscript (note that we have $\partial T_n^m\subseteq D^{\partial T}_n$ rather than $\partial T_n^m\subseteq\partial T_n$ in case $X=n=\{0,\dots,n-1\}$). We will argue by induction on~$x$ to show that the suborders $\partial T^x_X\subseteq D^{\partial T}_X$ are well-founded. Assuming that $X$ is non-empty and has no maximal element, we clearly have
\begin{equation*}
D^{\partial T}_X=\bigcup_{x\in X}\partial T^x_X.
\end{equation*}
In general, the union (or direct limit) of compatible well-orders need not itself be well-founded. On the other hand, it is straightforward to see that an order is well-founded if it is the union of well-founded initial segments. In the present situation we can combine Propositions~\ref{prop:deriv-normal-prae-dilator} and~\ref{prop:reconstruct-normal-dil} to get the following:
\begin{lemma}[$\mathbf{RCA}_0$]\label{lem:partialTx-initial}
Consider a normal prae-dilator $T$ and a linear order $X$. For any $x\in X$ we have
\begin{equation*}
\partial T^x_X=D^{\partial T}_X\!\restriction\!D^{\mu^{\partial T}}_X(x)=\{\sigma\in D^{\partial T}_X\,|\,\sigma<_{D^{\partial T}_X}D^{\mu^{\partial T}}_X(x)\}.
\end{equation*}
In particular $\partial T^x_X$ is an initial segment of $D^{\partial T}_X$.
\end{lemma}
The assumption that $T$ and hence $\partial T$ is normal is crucial for the previous lemma and for many of the following results (cf.~the remarks before Lemma~\ref{lem:range-dil-support}, as well as the discussion of Aczel's construction at the beginning of Section~\ref{sect:constructin-derivative}). Assuming that $\partial T^y_X$ is well-founded for every $y<_X x$, the lemma allows us to conclude that $\bigcup_{y<_Xx}\partial T_X^y$ is well-founded. To complete the induction step one needs to deduce the well-foundedness of $\partial T^x_X$. For this purpose we approximate~$\partial T^x_X$ by distinguishing terms of different height (cf.~Definition~\ref{def:derivative-term-system}):
\begin{definition}[$\mathbf{RCA}_0$]\label{def:approximations-height}
Let $T$ be a normal prae-dilator. We define a family of functions $\operatorname{ht}^{\partial T}_n:\partial T_n\rightarrow\mathbb N$ by induction over the build-up of terms, setting
\begin{align*}
\operatorname{ht}^{\partial T}_n(\mu_m)&=0,\\
\operatorname{ht}^{\partial T}_n(\xi\langle a, \sigma\rangle)&=\begin{cases}
\operatorname{ht}^{\partial T}_n(s)+1 & \text{if $s$ is the $<_{\partial T_n}$-maximal element of $a$},\\
1 & \text{if $a=\emptyset$}.
\end{cases}
\end{align*}
Given an order $X$ and an element $x\in X$, we put
\begin{equation*}
\partial T^{x,k}_X=\bigcup_{y<_X x}\partial T^y_X\cup\{\langle a,s\rangle\in\partial T^x_X\,|\,\operatorname{ht}^{\partial T}_{|a|}(s)\leq k\}
\end{equation*}
for every number~$k$.
\end{definition}
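To get a feeling for the height function, assume that $T_0$ contains an element~$\sigma$ (note that $\operatorname{supp}^T_0(\sigma)=\emptyset=0$ holds automatically, so that $\xi\langle\emptyset,\sigma\rangle$ lies in $\partial T_n$). Unfolding the definition we obtain
\begin{equation*}
\operatorname{ht}^{\partial T}_n(\mu_m)=0,\qquad\operatorname{ht}^{\partial T}_n(\xi\langle\emptyset,\sigma\rangle)=1,\qquad\operatorname{ht}^{\partial T}_n(\xi\langle\{\xi\langle\emptyset,\sigma\rangle\},\mu^T_1(0)\rangle)=2.
\end{equation*}
Thus $\operatorname{ht}^{\partial T}_n(s)$ can be read as the nesting depth of~$\xi$ along $<_{\partial T_n}$-maximal elements of the first components.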
According to Definition~\ref{def:deriv-functor} and Lemma~\ref{lem:deriv-embedding}, any strictly increasing function $f:n\rightarrow k$ yields an embedding $\partial T_f:\partial T_n\rightarrow\partial T_k$. We will need to know that these embeddings respect our height functions:
\begin{lemma}[$\mathbf{RCA}_0$]\label{lem:heights-functorial}
Consider a normal prae-dilator $T$ and a strictly increasing function $f:n\rightarrow k$. We have
\begin{equation*}
\operatorname{ht}^{\partial T}_k(\partial T_f(s))=\operatorname{ht}^{\partial T}_n(s)
\end{equation*}
for any element $s\in\partial T_n$.
\end{lemma}
\begin{proof}
The claim can be verified by a straightforward induction on the build-up of~$s$. Concerning the case $s=\xi\langle a,\sigma\rangle$, we point out that $\partial T_f(s')$ is $<_{\partial T_k}$-maximal in $[\partial T_f]^{<\omega}(a)$ if $s'$ is $<_{\partial T_n}$-maximal in $a$.
\end{proof}
Yet again, it will be crucial that Definition~\ref{def:approximations-height} provides an approximation by initial segments. To show that this is the case we need a partial converse to Lemma~\ref{lem:supports-order-normal}:
\begin{lemma}[$\mathbf{RCA}_0$]\label{lem:normal-height-ineq}
If $T$ is a normal prae-dilator, then we have
\begin{equation*}
\operatorname{supp}^{\partial T}_n(s)\leq^{\operatorname{fin}}\operatorname{supp}^{\partial T}_n(t)\text{ and }\operatorname{ht}^{\partial T}_n(s)<\operatorname{ht}^{\partial T}_n(t)\quad\Rightarrow\quad s<_{\partial T_n} t
\end{equation*}
for any number $n$ and arbitrary elements $s,t\in\partial T_n$.
\end{lemma}
\begin{proof}
We establish the claim by induction on $L^{\partial T}_n(s)+L^{\partial T}_n(t)$, relying on the length function from Definition~\ref{def:deriv-term-length}. To prove the induction step we distinguish cases according to the form of $s$ and $t$. In any case we assume $\operatorname{supp}^{\partial T}_n(s)\leq^{\operatorname{fin}}\operatorname{supp}^{\partial T}_n(t)$ and~$\operatorname{ht}^{\partial T}_n(s)<\operatorname{ht}^{\partial T}_n(t)$. Let us first consider terms $s=\mu_m$ and~$t=\xi\langle b,\tau\rangle$. In this case we need neither the induction hypothesis nor the assumption about heights: In view of Definition~\ref{def:deriv-support} we have $\{m\}\leq^{\operatorname{fin}}\operatorname{supp}^{\partial T}_n(t)$, so that $\operatorname{supp}^{\partial T}_n(t)<^{\operatorname{fin}} m$ must fail. Invoking Definition~\ref{def:coded-normal-dil} in conjunction with Proposition~\ref{prop:deriv-normal-prae-dilator} we obtain
\begin{equation*}
s=\mu_m=\mu^{\partial T}_n(m)\leq_{\partial T_n}t.
\end{equation*}
Since $s$ and $t$ are different terms we can conclude $s<_{\partial T_n}t$. Now consider $s=\xi\langle a,\sigma\rangle$ and $t=\mu_k$. Definition~\ref{def:derivative-order} tells us that $s<_{\partial T_n}t$ is equivalent to $a<^{\operatorname{fin}}_{\partial T_n}t$. The latter is trivial if $a$ is empty. Otherwise we consider the maximal element $s'\in a$. In view of Definition~\ref{def:deriv-support} we get $\operatorname{supp}^{\partial T}_n(s')\leq^{\operatorname{fin}}\operatorname{supp}^{\partial T}_n(t)$. Clearly we also have $\operatorname{ht}^{\partial T}_n(s')<\operatorname{ht}^{\partial T}_n(t)$ and~$L^{\partial T}_n(s')<L^{\partial T}_n(s)$. Thus we obtain $s'<_{\partial T_n}t$ by induction hypothesis. Since $s'\in a$ is maximal this establishes $a<^{\operatorname{fin}}_{\partial T_n}t$, as needed. Finally we consider $s=\xi\langle a,\sigma\rangle$ and $t=\xi\langle b,\tau\rangle$. In the proof of Lemma~\ref{lem:deriv-linear} we have seen that $s<_{\partial T_n}t$ holds if $b\leq^{\operatorname{fin}}_{\partial T_n} a$ fails. In order to refute $b\leq^{\operatorname{fin}}_{\partial T_n} a$ we observe that $\operatorname{ht}^{\partial T}_n(s)<\operatorname{ht}^{\partial T}_n(t)$ implies $b\neq\emptyset$. Let $t'\in b$ be maximal with respect to~$<_{\partial T_n}$. To complete the proof it suffices to establish $a<^{\operatorname{fin}}_{\partial T_n}t'$. Yet again this is trivial if $a$ is empty. Otherwise the claim reduces to $s'<_{\partial T_n}t'$, where $s'\in a$ is maximal. The maximality of $t'$ and Lemma~\ref{lem:supports-order-normal} ensure that $\operatorname{supp}^{\partial T}_n(r)\leq^{\operatorname{fin}}\operatorname{supp}^{\partial T}_n(t')$ holds for all $r\in b$. In view of Definition~\ref{def:deriv-support} this yields
\begin{equation*}
\operatorname{supp}^{\partial T}_n(s')\leq^{\operatorname{fin}}\operatorname{supp}^{\partial T}_n(s)\leq^{\operatorname{fin}}\operatorname{supp}^{\partial T}_n(t)\leq^{\operatorname{fin}}\operatorname{supp}^{\partial T}_n(t').
\end{equation*}
As we also have $\operatorname{ht}^{\partial T}_n(s')<\operatorname{ht}^{\partial T}_n(t')$, the induction hypothesis yields $s'<_{\partial T_n} t'$.
\end{proof}
For our approximations of $\partial T^x_X$ we get the following:
\begin{proposition}[$\mathbf{RCA}_0$]\label{prop:partial-xk-initial}
Consider a normal prae-dilator $T$, an order $X$ and an element $x\in X$. For any number $k$ the order $\partial T^{x,k}_X$ is an initial segment of $\partial T^x_X$.
\end{proposition}
\begin{proof}
Given $\langle a,s\rangle\in\partial T^{x,k}_X$ and $\langle b,t\rangle\in\partial T^x_X$, we must show that $\langle b,t\rangle\leq_{D^{\partial T}_X}\langle a,s\rangle$ implies $\langle b,t\rangle\in\partial T^{x,k}_X$. If we have $\langle a,s\rangle\in\partial T^y_X$ for some $y<_X x$, then we can conclude by Lemma~\ref{lem:partialTx-initial}. So we may assume $\operatorname{ht}^{\partial T}_{|a|}(s)\leq k$. Aiming at the contra\-positive of the desired implication, let us assume that $\langle b,t\rangle\in\partial T^{x,k}_X$ fails. Then we have $\operatorname{ht}^{\partial T}_{|b|}(t)>k$, and $b<^{\operatorname{fin}}_X y$ must fail for all $y<_X x$. In view of $a<^{\operatorname{fin}}_X x$ we get
\begin{equation*}
a\leq^{\operatorname{fin}}_X b\qquad\text{and}\qquad\operatorname{ht}^{\partial T}_{|a|}(s)<\operatorname{ht}^{\partial T}_{|b|}(t).
\end{equation*}
To complete the proof of the contra\-positive we must show $\langle a,s\rangle<_{D^{\partial T}_X}\langle b,t\rangle$. In view of Definition~\ref{def:coded-prae-dilator-reconstruct} this amounts to
\begin{equation*}
\partial T_{|\iota_a^{a\cup b}|}(s)<_{\partial T_{|a\cup b|}}\partial T_{|\iota_b^{a\cup b}|}(t).
\end{equation*}
In order to show this inequality it suffices to establish the assumptions of Lemma~\ref{lem:normal-height-ineq}, which we shall do in the rest of the proof. Recall that $\operatorname{supp}^{\partial T}$ is natural, that we have $\operatorname{en}_{a\cup b}\circ|\iota_a^{a\cup b}|=\iota_a^{a\cup b}\circ\operatorname{en}_a$ (see the beginning of Section~\ref{sect:normal-dils-so}), and that $\langle a,s\rangle\in D^{\partial T}_X$ requires $\operatorname{supp}^{\partial T}_{|a|}(s)=|a|$ (see Definition~\ref{def:coded-prae-dilator-reconstruct}). Combining these facts we obtain
\begin{multline*}
[\operatorname{en}_{a\cup b}]^{<\omega}(\operatorname{supp}^{\partial T}_{|a\cup b|}(\partial T_{|\iota_a^{a\cup b}|}(s)))=[\operatorname{en}_{a\cup b}]^{<\omega}\circ[|\iota_a^{a\cup b}|]^{<\omega}(\operatorname{supp}^{\partial T}_{|a|}(s))=\\
=[\iota_a^{a\cup b}]^{<\omega}\circ[\operatorname{en}_a]^{<\omega}(|a|)=a.
\end{multline*}
In the same way one can establish
\begin{equation*}
[\operatorname{en}_{a\cup b}]^{<\omega}(\operatorname{supp}^{\partial T}_{|a\cup b|}(\partial T_{|\iota_b^{a\cup b}|}(t)))=b.
\end{equation*}
Above we have shown $a\leq^{\operatorname{fin}}_X b$. Since $\operatorname{en}_{a\cup b}$ is order preserving we can now conclude
\begin{equation*}
\operatorname{supp}^{\partial T}_{|a\cup b|}(\partial T_{|\iota_a^{a\cup b}|}(s))\leq^{\operatorname{fin}}\operatorname{supp}^{\partial T}_{|a\cup b|}(\partial T_{|\iota_b^{a\cup b}|}(t)),
\end{equation*}
which is the first of the assumptions needed for Lemma~\ref{lem:normal-height-ineq}. Above we have also seen $\operatorname{ht}^{\partial T}_{|a|}(s)<\operatorname{ht}^{\partial T}_{|b|}(t)$. Together with Lemma~\ref{lem:heights-functorial} we now get
\begin{equation*}
\operatorname{ht}^{\partial T}_{|a\cup b|}(\partial T_{|\iota_a^{a\cup b}|}(s))<\operatorname{ht}^{\partial T}_{|a\cup b|}(\partial T_{|\iota_b^{a\cup b}|}(t)).
\end{equation*}
This establishes the second assumption of Lemma~\ref{lem:normal-height-ineq} and completes the proof.
\end{proof}
The crucial induction step relies on the assumption that $T$ is a dilator:
\begin{proposition}[$\mathbf{RCA}_0$]\label{prop:wf-height-progressive}
Consider a normal dilator $T$, a linear order $X$, an element $x\in X$, and a number $k$. If $\partial T^{x,k}_X$ is well-founded, then so is $\partial T^{x,k+1}_X$.
\end{proposition}
\begin{proof}
Assume that $\partial T^{x,k}_X$ is a well-order. As $T$ is a dilator it follows that $D^T(\partial T^{x,k}_X)$ is a well-order as well. In order to conclude that $\partial T^{x,k+1}_X$ is well-founded it suffices to show that this order can be embedded into~$D^T(\partial T^{x,k}_X)$. First observe that
\begin{equation*}
D^T(\partial T^{x,k}_X)=\{\langle a,\sigma\rangle\in D^T(D^{\partial T}_X)\,|\,a\subseteq\partial T^{x,k}_X\}
\end{equation*}
is a suborder of $D^T(D^{\partial T}_X)$ (apply Lemma~\ref{lem:range-dil-support} to the inclusion map $\partial T^{x,k}_X\hookrightarrow D^{\partial T}_X$). Also recall that $\partial T$ comes with a natural transformation $\xi^T:T\circ\partial T\Rightarrow\partial T$, which is an isomorphism by the proof of Theorem~\ref{thm:partial-derivative}. In view of Definition~\ref{def:morphism-dils} and Proposition~\ref{prop:compose-dils-rca} we obtain an isomorphism
\begin{equation*}
D^{\xi^T}_X\circ\zeta^{T,\partial T}_X:D^T(D^{\partial T}_X)\rightarrow D^{\partial T}_X.
\end{equation*}
It suffices to show that $\partial T^{x,k+1}_X$ is contained in the image of $D^T(\partial T^{x,k}_X)$ under this isomorphism, which is equivalent to the assertion that
\begin{equation*}
D^{\xi^T}_X\circ\zeta^{T,\partial T}_X(\sigma)\in\partial T^{x,k+1}_X\quad\Rightarrow\quad\sigma\in D^T(\partial T^{x,k}_X)
\end{equation*}
holds for any element $\sigma\in D^T(D^{\partial T}_X)$. To establish this fact we write
\begin{equation*}
\sigma=\langle\{\langle a_1,s_1\rangle,\dots,\langle a_n,s_n\rangle\},\tau\rangle,
\end{equation*}
such that the pairs $\langle a_j,s_j\rangle$ are displayed in increasing order. If we have $n=0$, then $\sigma\in D^T(\partial T^{x,k}_X)$ is immediate. Thus we assume $n>0$ for the rest of the proof. Under this assumption, Proposition~\ref{prop:partial-xk-initial} and Lemma~\ref{lem:partialTx-initial} imply that $\sigma\in D^T(\partial T^{x,k}_X)$ is equivalent to $\langle a_n,s_n\rangle\in\partial T^{x,k}_X$. By Proposition~\ref{prop:compose-dils-rca} and Definition~\ref{def:morphism-dils} we have
\begin{equation*}
D^{\xi^T}_X\circ\zeta^{T,\partial T}_X(\sigma)=\langle c,\xi^T_{|c|}(\langle\{\partial T_{|\iota_1|}(s_1),\dots,\partial T_{|\iota_n|}(s_n)\},\tau\rangle)\rangle,
\end{equation*}
where $\iota_j:a_j\hookrightarrow a_1\cup\dots\cup a_n=:c$ are the inclusions. In view of $a_n\subseteq c$ we learn that $D^{\xi^T}_X\circ\zeta^{T,\partial T}_X(\sigma)\in\partial T^y_X$ implies $\langle a_n,s_n\rangle\in\partial T^y_X$, for any $y\in X$. To complete the proof it suffices to establish the implication
\begin{equation*}
\operatorname{ht}^{\partial T}_{|c|}(\xi^T_{|c|}(\langle\{\partial T_{|\iota_1|}(s_1),\dots,\partial T_{|\iota_n|}(s_n)\},\tau\rangle))\leq k+1\quad\Rightarrow\quad\operatorname{ht}^{\partial T}_{|a_n|}(s_n)\leq k.
\end{equation*}
In view of Definition~\ref{def:deriv-xiT} we distinguish two cases: First assume that we have
\begin{equation*}
\xi^T_{|c|}(\langle\{\partial T_{|\iota_1|}(s_1),\dots,\partial T_{|\iota_n|}(s_n)\},\tau\rangle)=\xi\langle\{\partial T_{|\iota_1|}(s_1),\dots,\partial T_{|\iota_n|}(s_n)\},\tau\rangle.
\end{equation*}
Invoking Definition~\ref{def:coded-prae-dilator-reconstruct} we see that the map $\langle a_j,s_j\rangle\mapsto\partial T_{|\iota_j|}(s_j)$ is order preserving. Thus the values $\partial T_{|\iota_j|}(s_j)$ are displayed in increasing order as well. By Lemma~\ref{lem:heights-functorial} and our definition of heights we get
\begin{equation*}
\operatorname{ht}^{\partial T}_{|a_n|}(s_n)=\operatorname{ht}^{\partial T}_{|c|}(\partial T_{|\iota_n|}(s_n))<\operatorname{ht}^{\partial T}_{|c|}(\xi^T_{|c|}(\langle\{\partial T_{|\iota_1|}(s_1),\dots,\partial T_{|\iota_n|}(s_n)\},\tau\rangle)),
\end{equation*}
which yields the desired implication. Now assume that we have
\begin{equation*}
\xi^T_{|c|}(\langle\{\partial T_{|\iota_1|}(s_1),\dots,\partial T_{|\iota_n|}(s_n)\},\tau\rangle)=\mu_m
\end{equation*}
for some $m<|c|$. This can only happen if we have $\partial T_{|\iota_n|}(s_n)=\mu_m$. We then get
\begin{equation*}
\operatorname{ht}^{\partial T}_{|a_n|}(s_n)=\operatorname{ht}^{\partial T}_{|c|}(\partial T_{|\iota_n|}(s_n))=0\leq k,
\end{equation*}
which is the conclusion of the desired implication.
\end{proof}
To deduce the main result of this section we extend the base theory by the principle of bar induction for $\Pi^1_1$-formulas (abbreviated $\mathbf{\Pi^1_1}\textbf{-BI}$).
\begin{theorem}[$\mathbf{RCA}_0+\mathbf{\Pi^1_1}\textbf{-BI}$]\label{thm:BI-yields-deriv}
If $T$ is a normal dilator, then so is $\partial T$.
\end{theorem}
\begin{proof}
From Proposition~\ref{prop:deriv-normal-prae-dilator} we know that $\partial T$ is a normal prae-dilator. In view of Definition~\ref{def:coded-dilator} it remains to establish that $D^{\partial T}_X$ is well-founded for any well-order~$X$. It suffices to consider the case where $X$ is a limit order, i.\,e.~a non-empty order without a maximal element: If $X$ itself does not have this property, then we replace it by the order~$X+\omega$, in which the initial segment $X$ is followed by a copy of the natural numbers. By Proposition~\ref{prop:reconstruct-class-sized-dil} the inclusion $X\hookrightarrow X+\omega$ yields an embedding of $D^{\partial T}_X$ into $D^{\partial T}_{X+\omega}$, so that the former order is well-founded if the latter~is. For the rest of this proof we assume that $X$ is a well-founded limit order. In view of Definition~\ref{def:partial-up-to-x} we then have
\begin{equation*}
D^{\partial T}_X=\bigcup_{x\in X}\partial T_X^x.
\end{equation*}
Let us argue that $D^{\partial T}_X$ is well-founded if $\partial T_X^x$ is well-founded for every $x\in X$: To find a minimal element of a non-empty set $Y\subseteq D^{\partial T}_X$, pick an element $x\in X$ such that $Y\cap\partial T_X^x$ is non-empty. The well-foundedness of $\partial T_X^x$ provides a minimal element $\sigma\in Y\cap\partial T_X^x$. From Lemma~\ref{lem:partialTx-initial} we know that $\partial T_X^x$ is an initial segment of~$D^{\partial T}_X$. It follows that $\sigma$ is minimal in the entire set $Y$, as required. Invoking the principle of $\Pi^1_1$-bar induction, we shall now establish the well-foundedness of $\partial T_X^x$ by induction on $x\in X$. In the induction step we argue that Definition~\ref{def:approximations-height} and Proposition~\ref{prop:partial-xk-initial} allow us to write
\begin{equation*}
\partial T^x_X=\bigcup_{k\in\mathbb N}\partial T^{x,k}_X
\end{equation*}
as a union of initial segments. Once again it follows that $\partial T^x_X$ is well-founded if $\partial T^{x,k}_X$ is well-founded for every number $k$. We argue by side induction on $k$ to show that the latter is the case. Note that induction over the natural numbers is available as a particular instance of bar induction (alternatively one could combine the main and side induction into a single induction over $X\times\omega$). The side induction step is provided by Proposition~\ref{prop:wf-height-progressive}. To complete the proof it is thus enough to establish the base of the side induction. As a preparation we consider an element $\langle a,s\rangle\in\partial T_X^x$ with~$\operatorname{ht}^{\partial T}_{|a|}(s)=0$. In view of Definition~\ref{def:approximations-height} we must have $s=\mu_m$ for some number~$m<|a|$. Together with Definitions~\ref{def:coded-prae-dilator-reconstruct} and~\ref{def:deriv-support} we obtain
\begin{equation*}
|a|=\operatorname{supp}^{\partial T}_{|a|}(\mu_m)=\{m\}.
\end{equation*}
This forces $m=0$ and $|a|=1$, say $a=\{z\}$ with $z\in X$. Altogether we get
\begin{equation*}
\langle a,s\rangle=\langle\{z\},\mu_0\rangle,
\end{equation*}
where $\langle a,s\rangle\in\partial T_X^x$ ensures $z<_X x$. Let us now distinguish two cases: First assume that $x\in X$ is a limit or zero (i.\,e.~for every $z<_X x$ there is a $y<_X x$ with $z<_X y$). Then Definition~\ref{def:approximations-height} yields
\begin{equation*}
\partial T^{x,0}_X=\bigcup_{y<_X x}\partial T^y_X.
\end{equation*}
By Lemma~\ref{lem:partialTx-initial} this is a union of initial segments. The main induction hypothesis ensures that $\partial T^y_X$ is well-founded for every $y<_X x$. It follows that $\partial T^{x,0}_X$ is well-founded, as required. Now assume that $x$ is the successor of an element~$z\in X$, so that $y<_X x$ is equivalent to $y\leq_X z$. In view of the above we obtain
\begin{equation*}
\partial T^{x,0}_X=\partial T^z_X\cup\{\langle\{z\},\mu_0\rangle\}.
\end{equation*}
Once again the main induction hypothesis tells us that $\partial T^z_X$ is well-founded. Since $\partial T^{x,0}_X$ is a finite extension of this order, it must be well-founded itself. We have thus established the base of the side induction, which completes the proof.
\end{proof}
To shed further light on the previous proof we point out that we have
\begin{equation*}
\partial T^z_X\cup\{\langle\{z\},\mu_0\rangle\}=\{\sigma\in D^{\partial T}_X\,|\,\sigma\leq_{D^{\partial T}_X} D^{\mu^{\partial T}}_X(z)\},
\end{equation*}
due to Definitions~\ref{def:extend-normal-transfos} and~\ref{def:deriv-support} as well as Lemma~\ref{lem:partialTx-initial}. Together with the conclusions of the previous sections we obtain the main result of this paper:
\begin{theorem}[$\mathbf{ACA}_0$]\label{thm:main-result}
The following are equivalent:
\begin{enumerate}[label=(\arabic*)]
\item If $T$ is a normal dilator, then $D^{\partial T}_X$ is well-founded for any well-order~$X$.
\item Any normal dilator $T$ has an upper derivative $(S,\xi)$ such that $X\mapsto D^S_X$ preserves well-foundedness (which means that $S$ is again a normal dilator).
\item The principle of $\Pi^1_1$-bar induction is valid.
\end{enumerate}
\end{theorem}
Note that statements~(1) and~(2) are each expressed by a single formula, relying on the formalization of dilators in Section~\ref{sect:normal-dils-so}. To express~(3) by a single formula one uses a truth definition for $\Pi^1_1$-sentences.
\begin{proof}
The implication (1)$\Rightarrow$(2) follows from Proposition~\ref{prop:partial-upper-deriv}, which asserts that $(\partial T,\xi^T)$ is an upper derivative of $T$ (in fact we have a derivative, by Theorem~\ref{thm:terms-derivative}). By Corollary~\ref{cor:deriv-implies-BI} we get (2)$\Rightarrow$(3) (note that the proof uses arithmetical comprehension, via the Kleene normal form theorem and the characteristic property of the Kleene-Brouwer order). The implication (3)$\Rightarrow$(1) holds by Theorem~\ref{thm:BI-yields-deriv}.
\end{proof}
As in the previous section, results about $\partial T$ transfer to arbitrary derivatives:
\begin{corollary}[$\mathbf{RCA}_0+\mathbf{\Pi^1_1}\textbf{-BI}$]\label{cor:deriv-is-dilator}
Consider a normal dilator~$T$. If $(S,\xi)$ is a derivative of $T$, then $X\mapsto D^S_X$ preserves well-foundedness.
\end{corollary}
\begin{proof}
By Definition~\ref{def:derivative} and Proposition~\ref{prop:partial-upper-deriv} there is a morphism $\nu:S\Rightarrow\partial T$ of upper derivatives. According to Lemma~\ref{lem:morph-dilators-extend} we get an embedding
\begin{equation*}
D^\nu_X:D^S_X\rightarrow D^{\partial T}_X
\end{equation*}
for each linear order~$X$. Together with Theorem~\ref{thm:BI-yields-deriv} it follows that $D^{\partial T}_X$ is well-founded whenever $X$ is a well-order.
\end{proof}
Working in a sufficiently strong set theory, one can deduce the following unconditional version of Theorem~\ref{thm:equalizer-to-derivative}. This result provides further justification for our categorical definition of derivatives:
\begin{corollary}\label{cor:deriv-dil-to-fct}
Let $T$ be a normal dilator. If $(S,\xi)$ is a derivative of $T$, then the function $\alpha\mapsto\operatorname{otp}(D^S_\alpha)$ is the derivative of the normal function $\alpha\mapsto\operatorname{otp}(D^T_\alpha)$.
\end{corollary}
\begin{proof}
According to Theorem~\ref{thm:partial-derivative} and Corollary~\ref{cor:deriv-is-dilator} the assumptions of Theorem~\ref{thm:equalizer-to-derivative} are satisfied whenever $(S,\xi)$ is a derivative of $T$.
\end{proof}
\bibliographystyle{amsplain}
\section{Introduction}
Swarm robot systems, which are developed to perform given tasks using a number of robots, are expected to have strong environmental adaptability and high fault tolerance compared with systems consisting of a single robot.
The development of small and inexpensive robots has opened up the possibility of practical application of such systems~\cite{Barca2013Swarm,Brambilla2013Swarm,Rubenstein2009Scalable}.
One of the main challenges to controlling swarms is to overcome the trade-off relationship between global optimality and scalability~\cite{Tan2015Handbook}.
More specifically, in order to obtain the optimal control input for each robot,
the states of all other robots have to be taken into consideration, and therefore the computational time for this task increases dramatically as the number of robots increases.
On the other hand, if each robot uses only local information in order to reduce the computational time, then
ensuring global optimality becomes difficult.
Such a trade-off between optimality and scalability is commonly known as a problem inherent to general large-scale systems, including swarm robot systems~\cite{Lee2008Cyber,Chee-YeeChong2003Sensor,Baillieul2007Control}.
The \emph{mean field game (MFG)} is a relatively new framework for deriving a macroscopic model of the agent density
while taking into account the microscopic dynamics of each agent~\cite{Lasry2007Mean}.
More specifically, the MFG approximates the $N$-dimensional optimal control problem by a one-dimensional problem, where $N$ is the number of agents constituting the swarm.
Two key assumptions are made in order to realize the problem reduction: \emph{homogeneity} and \emph{unlimitedness}.
\emph{Homogeneity} means that the dynamics and evaluation functions are common to all of the agents, and \emph{unlimitedness} indicates that the number of agents is infinitely large, i.e., $N \to \infty $, which are fairly reasonable assumptions in many large-scale systems.
As shown below, the assumptions are properly exploited in deriving the macroscopic model
such that the optimal input is obtained with a much shorter computational time than for the $N$-dimensional problem,
while taking into account the influence of the agents nearby via a distribution function.
The MFG is hence a possible approach to meet the challenges in controlling swarm robot systems.
In the present study, we extend the original MFG and propose two methods, namely, the \emph{model predictive mean field game (MP-MFG)} and the \emph{best reply strategy (BRS)}.
A specific task to which the proposed methods are applied, that is, the \emph{optimal coverage control problem}, is considered.
In this problem, each agent seeks the optimal inputs that distribute the entire group
uniformly throughout the space while keeping each agent's energy consumption as small as possible.
This task is fundamental in swarm robot control and has a wide range of applications, such as optimal placement of sensor networks and efficient rescue of human life in the event of a disaster~\cite{Choset2001Coverage}.
Various methods have been proposed to solve this coverage control problem, but guaranteeing optimality is still a difficult problem when the number of robots is very large~\cite{Cortes2004Coverage,Izumi2014Coverage,Inoue2019Distributeda}.
The main contributions of this research are summarized as follows:
\begin{itemize}
\item The MP-MFG is presented, in which the agent solves the MFG with a fixed prediction time at each time, for the optimal coverage of the swarm robots.
\item The BRS is formulated, in which the agent determines the input without prediction at each time.
\item A theorem is proved stating that the MP-MFG coincides with the BRS in the limit of vanishing prediction time.
\item The numerical results for the MP-MFG and BRS are provided, showing that the performance of the MP-MFG improves with increasing prediction time and that the solution of the MP-MFG approaches that of the BRS as the prediction time vanishes.
\end{itemize}
\begin{figure}[t]
\centering
\includegraphics[width=80mm,bb=0.000000 0.000000 918.000000 654.000000]{./figs/illust.pdf}
\caption{Schematic diagram describing the links between $N$-players optimal control problem, mean field game, model predictive mean field game, and best reply strategy. }
\label{fig:diagram}
\end{figure}
An overview of the MFG, the MP-MFG, and the BRS is given in Fig.~\ref{fig:diagram}.
\section{Related Research}
Coverage control, in which robots are controlled to spread throughout the space, is known as a basic task for swarm robot systems, together with consensus control, in which the robots are gathered in one place~\cite{shamma2008cooperative}.
A common approach for coverage control problems is to divide the space into small regions called Voronoi regions, and each robot moves to the center of gravity of its Voronoi region~\cite{Cortes2004Coverage,Izumi2014Coverage,Inoue2019Distributeda}.
This method guarantees local optimality in the sense that each robot moves in the direction in which its evaluation function is reduced.
However, as the number of robots increases, a long computational time is required in order to obtain the Voronoi region and its center of gravity.
An alternative approach is to stochastically select the direction of movement based on the information that each robot has locally observed~\cite{Mesquita2008Optimotaxis,Inoue2019Stochastic}.
This method allows a low computational cost, but the global optimality is sacrificed.
In contrast, the methods discussed herein guarantee a fixed calculation time, regardless of the number of robots, and ensure the global optimality, i.e., that each robot reduces its evaluation function.
The MFG is a concept that was proposed rather recently by Lasry and Lions in 2007~\cite{Lasry2007Mean,Gomes2014Mean}.
Up to now, various types of agent groups have been modeled with the MFG, such as vehicles on the road~\cite{chevalier2015micro, Huang2019GameTheoretic,Festa2017Mean,Tanaka2019Linearly}, pedestrians~\cite{Lachapelle2011mean,Burger2013mean,Dogbe2010Modeling}, robots~\cite{Chen2018Steering,Liu2019Swarm}, and players in stock trading~\cite{Cardaliaguet2018Mean}.
In these examples, the optimal input design and the behavior of the entire group have been analyzed.
In the present study, we propose a method using the MFG approach for swarm robots control, similar to Refs.~\cite{Chen2018Steering,Liu2019Swarm}.
The difference from previous studies is in the method of generating and applying the optimal input.
While originally the optimal input at a fixed time $t\in[0,T]$ is calculated beforehand,
we herein adopt the model predictive strategy, in which the input is generated by observing the group information at each time $t\in[0,T]$ and solving the MFG in the time window $\tau\in[t,t+\calT]$. As a result, the feedforward controller of the previous studies is replaced by a feedback controller, which reduces the influence of disturbances and modeling errors.
The best reply strategy was formulated by Degond in 2014~\cite{Degond2014Meanfield}.
Here, in addition to the assumptions of homogeneity and unlimitedness, \emph{reactivity}, where the input at each time is determined from the information at that time, is assumed.
Unlike the MFG, predicted information is not reflected in the input, so the generated input is myopic and the control performance is sacrificed.
On the other hand, the BRS has an advantage in computational load, because only an initial value problem for one partial differential equation (PDE) needs to be solved, whereas the MFG requires solving two PDEs, one with an initial condition and the other with a terminal condition.
Thus far, the BRS has only been applied to a few examples, such as pedestrian groups~\cite{Barker2019mean}.
In addition, few studies have analyzed the performance of the BRS, besides the discussion in the equilibrium state~\cite{Barker2019Comparing}.
The present application of the BRS to a practical example of a swarm robot system and the numerical analysis of its time evolution are new.
In addition, this is the first analysis of the relation between the MFG and the BRS in time-dependent situations.
\section{Problem Formulation}
Consider $N$ mobile robots in $ \bbR^n $ space, where $n \in\{1,2,3\}$.
The position of the $i$-th robot at time $t \ (\in \bbR_+)$ is denoted as $x_i (t) \ (\in \bbR^n)$, and its dynamics are represented by the It\^o-type stochastic differential equation:
\begin{align}\label{eq:general_dynamics}
\rmd x_i(t) &= \left(f(x_i(t)) + u_i(t)\right) \rmd t + \sigma \rmd w_i(t),\ t\in[0,T],\\
x_i(0) &= x_i^0,
\end{align}
where the first term on the right-hand side represents the dynamics of the robot. The function $ f: \bbR^n \to \bbR^n $ is a given Lipschitz continuous function, and the variable $ u_i (t) \in \calU$ is the input of robot $i$ at time $ t $, where $\calU \subseteq \bbR^n$ is the set of admissible inputs.
The second term on the right-hand side is a noise term representing the individual difference of the robot. Here, $ w_i (t) \ (\in \bbR ^ n) $ are independent variables following the standard Wiener process, and $ \bbE \{w_i (t) \} = 0 $ and $ \bbE \{w_i (t) w_i (t) ^ \top \} = t I_n $ hold, where $ I_n $ is an $ n $-dimensional identity matrix.
Moreover, $ \sigma \in \bbR ^ {n \times n} $ is a given constant matrix representing the magnitude of noise.
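As a concrete illustration, the dynamics \eqref{eq:general_dynamics} can be simulated with the Euler-Maruyama scheme. The following sketch (Python, one-dimensional state for simplicity; the function names, the feedback $u = -x$, and all parameter values are illustrative assumptions, not taken from this paper) integrates one robot's trajectory:

```python
import numpy as np

def euler_maruyama(f, u, x0, sigma, dt, n_steps, rng):
    """Integrate dx = (f(x) + u(x, t)) dt + sigma dw for one robot
    (scalar state for simplicity) with the Euler-Maruyama scheme."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        t = k * dt
        drift = f(x[k]) + u(x[k], t)
        x[k + 1] = x[k] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

# Illustrative run: no intrinsic drift (f = 0), feedback u = -x pulling
# the robot toward the origin, small noise.
rng = np.random.default_rng(0)
path = euler_maruyama(lambda x: 0.0, lambda x, t: -x, x0=2.0,
                      sigma=0.1, dt=0.01, n_steps=500, rng=rng)
```

For small $\rmd t$ the scheme approximates the solution of the stochastic differential equation; here the feedback drives the robot toward the origin while the noise term keeps it fluctuating around it.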
We define an evaluation function to assess the input $u_i (t)$ of the $i$-th robot from time $ 0 $ to $ T \ (\in \bbR_+) $ as follows:
\begin{align}
\begin{split}\label{eq:general_eval_func}
& J_i(\{u_i(t)\}_{t\in[0,T]}) =\\
&\quad \bbE\left\{ \int_0^T \left[ \frac{1}{2}u_i(t)^\top u_i(t) + h\left(x_i(t), x_{i^-}(t)\right)\right] \rmd t \right\},
\end{split}
\end{align}
where the first term in the integrand is for keeping the input small, and the second term is for evaluating the entirety of the robots' situation.
Moreover, $ x_{i ^-} (t) $ is a vector in which the positions of all the robots except robot $ i $ are arranged: $x_{i^-}:=[x_1^\top,\ldots, x_{i-1}^\top,\ x_{i+1}^\top,\ldots, x_{N}^\top]^\top$.
In general, it is impossible to find an input that minimizes Eq.~\eqref{eq:general_eval_func} for every robot $ i = 1, \ldots, N $.
This is because Eq.~\eqref{eq:general_eval_func} includes the variables of multiple robots, and the input that minimizes the evaluation function of a certain robot $ i $ may cause the evaluation function of another robot $ j $ to deteriorate.
Therefore, we change the argument of Eq.~\eqref{eq:general_eval_func} to $ J_i (\{u_i (t), u_{i^-} (t) \}_{t \in [0, T]})$ and attempt to find an input that satisfies the following inequality for any $u_i\in\calU$:
\begin{align}\label{eq:nash_equilibrium}
J_i(\{u_i^*(t),u_{i^-}^*(t)\}_{t\in[0,T]}) \le J_i(\{u_i(t),u_{i^-}^*(t)\}_{t\in[0,T]}).
\end{align}
This set of inputs $ u_i ^ * \ (i = 1, \ldots, N) $ is referred to as a Nash equilibrium.
In order to find a Nash equilibrium, it is necessary to solve an optimal control problem in which Eq.~\eqref{eq:general_dynamics} and Eq.~\eqref{eq:general_eval_func} are coupled.
Solving this problem becomes significantly more difficult as the number of robots $N$ increases.
\section{Derivation of the Mean Field Game}
This section derives an MFG, which is a method for approximating a Nash equilibrium.
We start by formulating the Hamilton-Jacobi-Bellman equation describing the time evolution of the minimum of the cost function and the Fokker-Planck equation describing the time evolution of the density distribution of the robots.
Then, the MFG is derived by coupling these two equations.
\subsection{Hamilton-Jacobi-Bellman equation}\label{sec:HJB}
In the evaluation function of Eq.~\eqref{eq:general_eval_func}, suppose that the second term of the integrand is written using the density function as follows:
\begin{align}
h\left(x_i(t), x_{i^-}(t)\right) = \bar h(x_i(t),\rho_{i^-}(x_i(t),t)), \label{eq:eval_density}
\end{align}
where the function $\rho_{i^-}$ is the empirical density function defined as
\begin{align}\label{eq:density}
\rho_{i^-}(x,t) := \frac{1}{N-1}\sum_{j\ne i} \delta(x - x_j(t)).
\end{align}
The aforementioned computational difficulty appears to have been resolved in Eq.~\eqref{eq:eval_density}, because the term $ x_{i ^-} $ for the other robots has disappeared.
This is not actually the case, of course, because $ \rho_{i ^-} (x, t) $ depends on $ x_{i ^-} $.
However, as will be described later, $ \rho_{i ^-} (x, t) $ is approximated by a function $ \rho (x, t) $ obtained from another equation, and is therefore treated as known from now on.
When the density function $ \rho (x, t) $ is known, the optimal control problem is known to be characterized by a partial differential equation called the \emph{Hamilton-Jacobi-Bellman equation (HJB equation)}.
We define the value function as the minimum value of the evaluation function \eqref{eq:general_eval_func} over the time interval $[t, T]$:
\begin{align}
\begin{split}
&V(x_i,t) := \\
&\min_{u_i([t,T])}\bbE \left\{ \int_t^T \left[ \frac{1}{2}u_i(\tau)^\top u_i(\tau) + \bar h(x_i(\tau),\rho(x_i(\tau),\tau)) \right] \rmd \tau \right\}.
\end{split}
\end{align}
Then, the HJB equation for Eq.~\eqref{eq:general_dynamics} is defined as follows~\cite{Fleming2006Controlled}:
\begin{align}
\begin{split}
-\partial_t V(x ,t) &= \bar h(x,\rho(x,t)) + \partial_x V(x,t)^\top f(x) \\
&\quad -\frac{1}{2}\partial_{x}V(x,t)^\top\partial_{x}V(x,t)\\
&\quad + \frac{1}{2}\mathrm{Tr} \left(\sigma\sigma^\top \partial_{xx} V(x,t)\right),\label{eq:general_HJB}
\end{split}\\
V(x ,T) &= 0,\label{eq:general_HJB_terminal}
\end{align}
where $\partial_{x } {V}(x ,t)$ and $\partial_{x x } {V}(x ,t)$ denote the gradient and the Hessian of $V$ with respect to $x$, respectively.
The HJB equation is solvable backward in time using the terminal condition \eqref{eq:general_HJB_terminal}:
if the value function $ V (x, t) $ at a certain time is known as a function of the state, and its derivatives $ \partial_{x} {V} (x, t)$ and $\partial_{xx} {V} (x, t) $ are also calculated, then $ V (x, t) $ can be propagated in the reverse time direction using a discrete-time approximation of Eq.~\eqref{eq:general_HJB}.
Once the solution $ V (x, t) $ is given, the optimal input is obtained as $ u (x, t) =-\partial_x V (x, t) $.
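To make the backward sweep concrete, the following one-dimensional sketch discretizes Eq.~\eqref{eq:general_HJB} with an explicit finite-difference scheme (Python; $f = 0$, a frozen running cost $\bar h(x) = x^2$ in place of the density coupling, and all grid parameters are illustrative assumptions, not taken from this paper):

```python
import numpy as np

def solve_hjb_backward(h_bar, f, sigma, x, T, n_t):
    """Backward explicit finite-difference sweep for the 1D HJB equation
    -dV/dt = h_bar(x) + V_x f(x) - V_x**2 / 2 + (sigma**2 / 2) V_xx
    with terminal condition V(x, T) = 0.  Boundaries use the one-sided
    stencils of np.gradient; a toy sketch, not a production solver."""
    dt = T / n_t
    dx = x[1] - x[0]
    V = np.zeros((n_t + 1, x.size))  # V[k] approximates V(., k * dt)
    for k in range(n_t, 0, -1):
        Vx = np.gradient(V[k], dx)
        Vxx = np.gradient(Vx, dx)
        rhs = h_bar(x) + Vx * f(x) - 0.5 * Vx**2 + 0.5 * sigma**2 * Vxx
        V[k - 1] = V[k] + dt * rhs
    u = -np.gradient(V[0], dx)  # optimal feedback at t = 0
    return V, u

x = np.linspace(-1.0, 1.0, 101)
V, u0 = solve_hjb_backward(h_bar=lambda x: x**2, f=lambda x: 0.0 * x,
                           sigma=0.1, x=x, T=0.1, n_t=200)
```

The recovered feedback $u = -\partial_x V(x, 0)$ points toward the origin, as expected for a running cost that penalizes distance from it; a production solver would rather use an implicit or upwind scheme for stability.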
\subsection{Fokker-Planck equation}\label{sec:FP}
When deriving the HJB equation, the density function $ \rho (x, t) $ was given as a known function.
However, the actual population density is determined by each robot moving according to the optimal input.
The time evolution of this density distribution is called the \emph{Fokker-Planck equation (FP equation)}.
Assume that the input of each robot is given by a known function $ u (x, t) $ of position and time, i.e., $u_i(t) = u(x_i(t), t)$.
In addition, assume that in the limit $ N \to \infty $ the empirical density \eqref{eq:density} of all robots at the initial time converges to a common probability density function $ \rho ^ 0 (x) $.
Then, the FP equation representing the time evolution of the robots' density distribution $ \rho (x, t) $ is written as follows~\cite{Gardiner2009Stochastic}:
\begin{align}
\begin{split}\label{eq:FP}
\partial_t \rho(x,t) &=
-\sum_{j=1}^n \partial_j \left(\left(f(x) + u(x,t)\right)_j\rho(x,t)\right)\\
&\quad + \frac{1}{2}\mathrm{Tr} \left(\sigma\sigma^\top \partial_{xx}\rho(x,t)\right),
\end{split}\\
\rho(x,0)&=\rho^0(x).\label{eq:FP_initial_cond}
\end{align}
The FP equation is solvable forward in time using the initial condition \eqref{eq:FP_initial_cond}:
if the density function $ \rho (x, t) $ at a certain time is known as a function of the state and its derivatives are also calculated, then $ \rho (x, t) $ can be propagated in the forward time direction using a discrete-time approximation of Eq.~\eqref{eq:FP}.
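Analogously, the forward sweep of Eq.~\eqref{eq:FP} can be sketched in one dimension (Python; the drift $-x$, the grid, and the clipping/renormalization used to crudely stabilize the explicit scheme are illustrative assumptions, not taken from this paper):

```python
import numpy as np

def solve_fp_forward(drift, sigma, x, rho0, T, n_t):
    """Forward explicit finite-difference sweep for the 1D FP equation
    d(rho)/dt = -d/dx(drift(x) * rho) + (sigma**2 / 2) * d2(rho)/dx2.
    Clipping and renormalization crudely enforce the positivity and
    unit mass that the exact equation preserves automatically."""
    dt = T / n_t
    dx = x[1] - x[0]
    rho = rho0.copy()
    for _ in range(n_t):
        div = np.gradient(drift(x) * rho, dx)
        lap = np.gradient(np.gradient(rho, dx), dx)
        rho = rho + dt * (-div + 0.5 * sigma**2 * lap)
        rho = np.clip(rho, 0.0, None)
        rho /= rho.sum() * dx  # renormalize total mass to one
    return rho

# Illustrative run: a density bump centered at x = 0.5 is driven
# toward the origin by the drift -x while diffusion spreads it.
x = np.linspace(-1.0, 1.0, 101)
dx = x[1] - x[0]
rho0 = np.exp(-((x - 0.5) ** 2) / 0.02)
rho0 /= rho0.sum() * dx
rho_T = solve_fp_forward(lambda x: -x, sigma=0.1, x=x,
                         rho0=rho0, T=0.5, n_t=1000)
```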
\subsection{Mean Field Game}
The HJB equation is used to obtain an optimal input based on the assumption that the density distribution of the robots is known at an arbitrary time.
On the other hand, the FP equation is used to determine the time evolution of the density distribution assuming that the optimal input is known at an arbitrary time.
A system that combines both equations is called a \emph{mean field game (MFG)}:
\begin{align}
\begin{split}
-\partial_t V(x ,t) &= \bar h(x,\rho(x,t)) + \partial_x V(x,t)^\top f(x)\\
&\quad -\frac{1}{2}\partial_{x}V(x,t)^\top\partial_{x}V(x,t)\\
&\quad + \frac{1}{2}\mathrm{Tr} \left(\sigma\sigma^\top \partial_{xx} V(x,t)\right),
\end{split}\label{eq:MFG_HJB} \\
\begin{split}
\partial_t \rho(x,t) &=
-\sum_{j=1}^n \partial_j \left(\left(f(x) -\partial_{x}V(x,t)\right)_j\rho(x,t)\right)\\
&\quad
+ \frac{1}{2}\mathrm{Tr} \left(\sigma\sigma^\top \partial_{xx}\rho(x,t)\right)
,
\end{split}\label{eq:MFG_FP}\\
V(x ,T) &= 0,\label{eq:MFG_HJB_terminal}\\
\rho(x,0)&=\rho^0(x).\label{eq:MFG_FP_initial}
\end{align}
The HJB equation is solved in the inverse time direction from the terminal condition, and the FP equation is solved in the forward time direction from the initial condition.
Accordingly, the MFG is a forward-backward-type equation, including both the initial condition and the terminal condition.
Among the several possible methods for solving such an equation, the simplest solution is to repeatedly perform two steps of 1) solving the forward equation while the backward variables are fixed and 2) solving the backward equation while the forward variables are fixed.
This method is shown in Algorithm~\ref{alg:HJB-FP}.
What is obtained as a result of solving the MFG is an optimal input of the robot swarm and a density function of the robot swarm under the input.
Since the MFG involves approximations in its derivation process, the resulting input does not exactly achieve the Nash equilibrium \eqref{eq:nash_equilibrium}.
However, the solution of the MFG is known to give an appropriate approximation of the Nash equilibrium, called an $\epsilon$-Nash equilibrium:
when the solution $ u (x, t) $ of the MFG is used as the control input of the $N$ players, $ u_i ^ * (t) = u (x_i (t), t) $, there exists a sequence $\{\epsilon_N\}$ satisfying $\epsilon_N \searrow 0$ as $N\to\infty$ such that for all $i=1,\ldots,N$ and for all $\{u_i(t)\}_{t\in[0,T]}$,
\begin{align}
J_i(\{u_i^*(t),u_{i^-}^*(t)\}_{t\in[0,T]}) \le J_i(\{u_i(t),u_{i^-}^*(t)\}_{t\in[0,T]}) + \epsilon_N,
\end{align}
which shows the convergence to the Nash equilibrium~\cite{Caines2017Mean,bensoussan2013mean,Gueant2011Mean,Porretta2017weak}.
\begin{algorithm}[t]\label{alg:HJB-FP}
\caption{Numerical Solution of Mean Field Game}
\KwIn{$V_0(x,t)$, $\rho_0(x,t)$: Initial values of the mean field game \eqref{eq:MFG_HJB}--\eqref{eq:MFG_FP_initial}; $\epsilon$: Convergence threshold}
\KwOut{$V(x,t)$, $\rho(x,t)$: Solution of the mean field game \eqref{eq:MFG_HJB}--\eqref{eq:MFG_FP_initial}}
$V(x,t)\leftarrow V_0(x,t)$, $\rho(x,t)\leftarrow \rho_0(x,t)$\\
$z\leftarrow +\infty$\\
\While{$z>\epsilon$}{
$V_\text{old}(x,t)\leftarrow V(x,t)$, $\rho_\text{old}(x,t)\leftarrow \rho(x,t)$\\
Update $V(x,t)$ by backwardly solving Eq.~\eqref{eq:general_HJB} with fixed $\rho(x,t)$\\
Update $\rho(x,t)$ by forwardly solving Eq.~\eqref{eq:FP} with fixed $V(x,t)$\\
$z\leftarrow \|V-V_\text{old}\| + \|\rho-\rho_\text{old}\|$
}
\end{algorithm}
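The alternating forward-backward sweep of Algorithm~\ref{alg:HJB-FP} can be sketched in a few lines of numerical code. The following Python fragment is an illustration only, not part of the formulation above: it assumes a one-dimensional periodic domain $[0,1)$, zero drift ($f=0$), scalar noise, explicit Euler time stepping with centered finite differences, and hypothetical names (\texttt{solve\_mfg}, \texttt{h\_bar}, \texttt{rho0}).

```python
import numpy as np

def solve_mfg(h_bar, rho0, sigma=0.2, T=1.0, nx=64, nt=400,
              n_iter=50, tol=1e-6):
    """Fixed-point iteration for the MFG system (1D, periodic, f = 0).

    h_bar : callable h_bar(x, rho) giving the coupling cost on the grid
    rho0  : callable giving the initial density on the grid
    """
    dx, dt = 1.0 / nx, T / nt
    x = np.arange(nx) * dx

    def Dx(g):   # centered first derivative, periodic boundary
        return (np.roll(g, -1) - np.roll(g, 1)) / (2 * dx)

    def Dxx(g):  # centered second derivative, periodic boundary
        return (np.roll(g, -1) - 2 * g + np.roll(g, 1)) / dx**2

    V = np.zeros((nt + 1, nx))
    rho = np.tile(rho0(x), (nt + 1, 1))
    for _ in range(n_iter):
        V_old, rho_old = V.copy(), rho.copy()
        # backward sweep of the HJB equation, terminal condition V(x, T) = 0
        V[-1] = 0.0
        for k in range(nt - 1, -1, -1):
            V[k] = V[k + 1] + dt * (h_bar(x, rho[k + 1])
                                    - 0.5 * Dx(V[k + 1]) ** 2
                                    + 0.5 * sigma**2 * Dxx(V[k + 1]))
        # forward sweep of the FP equation, initial condition rho(x, 0) = rho0
        rho[0] = rho0(x)
        for k in range(nt):
            flux = -Dx(V[k]) * rho[k]      # optimal drift is -dV/dx
            rho[k + 1] = rho[k] + dt * (-Dx(flux)
                                        + 0.5 * sigma**2 * Dxx(rho[k]))
        if np.abs(V - V_old).max() + np.abs(rho - rho_old).max() < tol:
            break
    return x, V, rho
```

For the coverage example treated later, one could pass, e.g., \texttt{h\_bar = lambda x, r: np.log(np.maximum(r, 1e-12))}; the explicit scheme additionally requires the usual diffusion stability restriction on the time step.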
\section{Proposed Methods}\label{sec:Proposed}
This section derives a method using the MP-MFG, by extending the MFG as a feedback control method.
We also derive the BRS, which is realized as the limit of reducing the prediction time in the MP-MFG.
\subsection{Model Predictive Mean Field Games}
The MFG is a framework for calculating the feedforward optimal input of a robot swarm over a fixed time interval from $t=0$ to $t=T$.
Considering the use of the MFG for real robot control, modeling errors and disturbances may cause errors between the predicted time evolution of the population and the actual time evolution.
In order to prevent such errors, it is desirable to perform feedback control in which a control input is sequentially generated based on the observed information of the entire group.
To achieve this, we propose a \emph{model predictive mean field game (MP-MFG)}, in which at each time $t\in[0,T]$, robots repeatedly solve the MFG of the time interval $ [t, t + \calT] $, where $ \calT \in \bbR_+ $ denotes a fixed prediction time.
By using the model predictive strategy, the density function observed at each time can be used as an initial condition of the MFG, so that the effects of modeling errors and disturbances are expected to be reduced.
More specifically, in the MP-MFG, instead of the evaluation function of Eq.~\eqref{eq:general_eval_func}, the following function is minimized:
\begin{align}
\begin{split}
&J_i(\{u_i(t)\}_{t\in[t,t+\calT]}) =\\
&\ \bbE\left\{ \int_t^{t+\calT} \left[ \frac{1}{2}u_i(\tau)^\top u_i(\tau) + \bar h\left(x_i(\tau), \rho_{i^-}(x_i(\tau),\tau) \right)\right] \rmd \tau \right\}.
\end{split}\label{eq:eval_func_MP}
\end{align}
Then, at each time $ t \in [0, T] $, the following MFG in the time window $ \tau \in [t, t + \calT] $ is solved:
\begin{align}
\begin{split}
-\partial_\tau \hat V(x ,t,\tau) &= \bar h(x,\hat \rho(x,t,\tau)) + \partial_x \hat V(x,t,\tau)^\top f(x)\\
&\quad -\frac{1}{2}\partial_{x}\hat V(x,t,\tau)^\top\partial_{x}\hat V(x,t,\tau)\\
&\quad + \frac{1}{2}\mathrm{Tr}\left(\sigma\sigma^\top\partial_{xx} \hat V(x,t,\tau)\right),
\end{split}\label{eq:MP-MFG_HJB}\\
\begin{split}
\partial_\tau \hat \rho(x,t,\tau) &=\\
\quad -\sum_{j=1}^n & \partial_j \left(\left(f(x) -\partial_{x}\hat V(x,t,\tau)\right)_j\hat \rho(x,t,\tau)\right)\\
\quad + \frac{1}{2}&\mathrm{Tr}\left(\sigma\sigma^\top \partial_{xx}\hat \rho(x,t,\tau)\right)
,
\end{split}\label{eq:MP-MFG_FP}\\
\hat V(x ,t,\tau&=t+\calT) = 0,\label{eq:MP-MFG_HJB_terminal}\\
\hat\rho(x,t,\tau&=t) = \rho(x,t),\label{eq:MP-MFG_FP_initial}
\end{align}
where $\hat \rho(x,t,\tau)$ represents the density distribution of the agents, predicted in the time window $ \tau \in [t, t + \calT] $, and $\hat V(x,t,\tau)$ is the value function used in the MP-MFG.
As shown in Eq.~\eqref{eq:MP-MFG_FP_initial}, the observed density distribution $\rho(x,t)$ is used as the initial condition of the predicted distribution $\hat \rho(x,t,t)$.
Each agent solves Eqs.~\eqref{eq:MP-MFG_HJB}--\eqref{eq:MP-MFG_FP_initial} at each time $t\in [0,T]$, and moves according to the optimal input $u(x,t) = -\partial_x \hat V(x,t,\tau=t)$.
This is represented as the FP equation:
\begin{align}
\begin{split}
\partial_t \rho(x,t) &=
-\sum_{j=1}^n \partial_j \left(\left(f(x) -\partial_x\hat V(x,t,t) \right)_j\rho(x,t)\right)\\
&\quad
+ \frac{1}{2}\mathrm{Tr}\left(\sigma\sigma^\top \partial_{xx}\rho(x,t)\right)
,
\end{split}\label{eq:MP-MFG_dynamics}\\
\rho(x,0)&=\rho^0(x).\label{eq:MP-MFG_dynamics_initial}
\end{align}
A summary of the MP-MFG as an algorithm is shown in Algorithm~\ref{alg:MP-MFG}.
\begin{algorithm}[t]\label{alg:MP-MFG}
\caption{Numerical Solution of Model Predictive Mean Field Game}
\KwIn{$\calT\in \bbR_+$: Prediction time for MP-MFG \eqref{eq:MP-MFG_HJB}--\eqref{eq:MP-MFG_dynamics_initial};\
$\Delta t\ (\le \calT)$: Time step for numerically solving MP-MFG;\
$\rho^0(x)$: Initial density function for MP-MFG}
\KwOut{$\rho(x,t)$, $u(x,t)$: Solution of MP-MFG \eqref{eq:MP-MFG_HJB}--\eqref{eq:MP-MFG_dynamics_initial}}
$t\leftarrow 0$\\
$\rho(x,t)\leftarrow \rho^0(x)$\\
$\hat\rho(x,t,t)\leftarrow \rho(x,t)$\\
\While{$t<T$}{
Obtain $u(x,t)\ (=-\partial_x \hat V(x,t,t))$ by solving Eqs.~\eqref{eq:MP-MFG_HJB}--\eqref{eq:MP-MFG_FP_initial} with Algorithm~\ref{alg:HJB-FP}, using $\hat\rho(x,t,t)$\\
Calculate $\rho(x,t+\Delta t)$ by numerically solving Eq.~\eqref{eq:MP-MFG_dynamics}.\\
$t\leftarrow t+\Delta t$\\
$\hat\rho(x,t,t)\leftarrow \rho(x,t)$
}
\end{algorithm}
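The receding-horizon loop of Algorithm~\ref{alg:MP-MFG} can be sketched numerically as well. The fragment below is purely illustrative and self-contained: it assumes a one-dimensional periodic domain, zero drift ($f=0$), explicit Euler stepping with centered finite differences, and a fixed number of inner fixed-point iterations per window; \texttt{mp\_mfg\_step} and \texttt{run\_mp\_mfg} are invented names.

```python
import numpy as np

def mp_mfg_step(rho, V, h_bar, sigma, horizon, nx, n_sub, n_iter=20):
    """Solve the short-horizon MFG on [t, t + horizon] (1D, periodic, f = 0)
    and return the feedback input u(x, t) = -dV/dx at the window start."""
    dx, dt = 1.0 / nx, horizon / n_sub
    x = np.arange(nx) * dx
    Dx = lambda g: (np.roll(g, -1) - np.roll(g, 1)) / (2 * dx)
    Dxx = lambda g: (np.roll(g, -1) - 2 * g + np.roll(g, 1)) / dx**2
    rho_hat = np.tile(rho, (n_sub + 1, 1))   # predicted density in the window
    for _ in range(n_iter):
        V[-1] = 0.0                          # terminal condition
        for k in range(n_sub - 1, -1, -1):   # backward HJB sweep
            V[k] = V[k + 1] + dt * (h_bar(x, rho_hat[k + 1])
                                    - 0.5 * Dx(V[k + 1]) ** 2
                                    + 0.5 * sigma**2 * Dxx(V[k + 1]))
        rho_hat[0] = rho                     # observed density as initial data
        for k in range(n_sub):               # forward FP sweep
            flux = -Dx(V[k]) * rho_hat[k]
            rho_hat[k + 1] = rho_hat[k] + dt * (
                -Dx(flux) + 0.5 * sigma**2 * Dxx(rho_hat[k]))
    return -Dx(V[0]), V                      # u(x, t) and warm start

def run_mp_mfg(h_bar, rho0, sigma, T, horizon, nx=64, n_sub=20, n_outer=200):
    """Receding-horizon (MP-MFG) simulation of the actual density rho(x, t)."""
    dx, dt = 1.0 / nx, T / n_outer
    x = np.arange(nx) * dx
    Dx = lambda g: (np.roll(g, -1) - np.roll(g, 1)) / (2 * dx)
    Dxx = lambda g: (np.roll(g, -1) - 2 * g + np.roll(g, 1)) / dx**2
    rho, V = rho0(x), np.zeros((n_sub + 1, nx))
    for _ in range(n_outer):
        u, V = mp_mfg_step(rho, V, h_bar, sigma, horizon, nx, n_sub)
        rho = rho + dt * (-Dx(u * rho) + 0.5 * sigma**2 * Dxx(rho))
    return x, rho, u
```

Passing the value function of the previous window as a warm start is a design choice of this sketch, not part of the algorithm above; it merely speeds up the inner fixed-point iteration when the horizon slides by a small $\Delta t$.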
\subsection{Best Reply Strategy}
Assume that the function $\bar h(x,\rho(x,t))$ in the evaluation function of the MP-MFG \eqref{eq:eval_func_MP} is written using the function $\tilde h:\bbR^n\times\bbR_+\to\bbR$ and prediction time $\calT$ as
\begin{align}
\bar h(x,\rho(x,t)) = \frac{1}{\calT} \tilde h(x,\rho(x,t)).
\end{align}
Then, the \emph{best reply strategy (BRS)} is obtained by setting inputs as
\begin{align}\label{eq:BRS_input}
u(x,t) = -\partial_x \tilde h(x,\rho(x,t)),
\end{align}
and the density dynamics as
\begin{align}
\begin{split}
\partial_t \rho(x,t) &=
-\sum_{j=1}^n \partial_j \left(\left(f(x) -\partial_x \tilde h(x,\rho(x,t)) \right)_j\rho(x,t)\right)\\
&\quad
+ \frac{1}{2}\mathrm{Tr} \left(\sigma\sigma^\top \partial_{xx}\rho(x,t)\right)
,
\end{split}\label{eq:BRS_density}\\
\rho(x,0)&=\rho^0(x).\label{eq:BRS_density_initial}
\end{align}
In the MP-MFG, the forward-backward system of partial differential equations, Eqs.~\eqref{eq:MP-MFG_HJB} through \eqref{eq:MP-MFG_FP_initial}, must be solved at each time, whereas the BRS only requires solving the initial value problem of a single partial differential equation, given by Eqs.~\eqref{eq:BRS_density} and \eqref{eq:BRS_density_initial}.
This has the advantage of reducing the calculation time.
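A numerical sketch of the BRS is correspondingly short. The fragment below is illustrative only, assuming a one-dimensional periodic domain, explicit Euler stepping, and centered finite differences; \texttt{brs\_simulate} and \texttt{grad\_h} are invented names, where \texttt{grad\_h} returns $\partial_x \tilde h(x,\rho(x,t))$ on the grid.

```python
import numpy as np

def brs_simulate(rho0, grad_h, f, sigma, T, nx=100, nt=2000):
    """Explicit Euler solver for the BRS density dynamics (1D, periodic).

    grad_h : callable grad_h(x, rho, Dx) returning d/dx h_tilde(x, rho(x, t))
    f      : callable giving the drift f(x) on the grid
    """
    dx, dt = 1.0 / nx, T / nt
    x = np.arange(nx) * dx
    Dx = lambda g: (np.roll(g, -1) - np.roll(g, 1)) / (2 * dx)
    Dxx = lambda g: (np.roll(g, -1) - 2 * g + np.roll(g, 1)) / dx**2
    rho = rho0(x)
    for _ in range(nt):
        u = -grad_h(x, rho, Dx)                          # BRS input
        rho = rho + dt * (-Dx((f(x) + u) * rho)          # FP drift term
                          + 0.5 * sigma**2 * Dxx(rho))   # diffusion term
    return x, rho, u
```

For $\tilde h(x,\rho)=\ln \rho$ one would pass \texttt{grad\_h = lambda x, r, Dx: Dx(r) / r}.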
As shown in the following theorem, the BRS is obtained as a limit of reducing the prediction time $ \calT$ of the MP-MFG.
\begin{theorem}\label{thm:MFG_BRS_convergence}
Assume that the evaluation function of Eq.~\eqref{eq:eval_func_MP} is given by
\small
\begin{align}
\begin{split}
&J_i(\{u_i(t)\}_{t\in[t,t+\calT]}) =\\
&\bbE\left\{ \int_t^{t+\calT} \left[ \frac{1}{2}u_i(\tau)^\top u_i(\tau) + \frac{1}{\calT}\tilde h\left(x_i(\tau), \rho_{i^-}(x_i(\tau),\tau) \right)\right] \rmd \tau \right\},
\end{split}\label{eq:eval_correspond_BRS}
\end{align}
\normalsize
and that the solution of the corresponding MP-MFG, Eqs.~\eqref{eq:MP-MFG_HJB} through \eqref{eq:MP-MFG_dynamics_initial}, is given by smooth functions $\rho_{\calT}(x,t)$ and $u_{\calT}(x,t)\ (=-\partial_x \hat V(x,t,t))$.
In addition, assume that the solution of the BRS, Eqs.~\eqref{eq:BRS_input} through \eqref{eq:BRS_density_initial}, is given by bounded functions $\tilde\rho(x,t)$ and $\tilde u(x,t)$.
Then, for any $(x,t)\in \bbR^n\times [0,T]$,
\begin{align}
\lim_{\calT\to +0} \rho_{\calT}(x,t)=\tilde \rho(x,t),\label{eq:density_correspondence}\\
\lim_{\calT\to +0} u_{\calT}(x,t)=\tilde u(x,t),
\end{align}
hold.
\end{theorem}
\begin{proof}
The HJB equation of MP-MFG corresponding to the evaluation function \eqref{eq:eval_correspond_BRS} is calculated as
\begin{align}
\begin{split}
-\partial_\tau \hat V(x,t ,\tau) &= \frac{1}{\calT}\tilde h(x,\hat \rho(x,t,\tau)) + \partial_x \hat V(x,t,\tau)^\top f(x)\\
&\quad -\frac{1}{2}\partial_{x}\hat V(x,t,\tau)^\top\partial_{x}\hat V(x,t,\tau)\\
&\quad + \frac{1}{2}\mathrm{Tr}\left(\sigma\sigma^\top\partial_{xx} \hat V(x,t,\tau)\right).
\end{split}
\end{align}
Setting $\tau=t$ and applying a finite difference approximation with time step equal to the prediction time $\calT$, we obtain
\begin{align}
\begin{split}
-&\frac{\hat V(x,t ,t+\calT)-\hat V(x,t ,t)}{\calT}\\
&= \frac{1}{\calT}\tilde h(x,\hat \rho(x,t,t)) + \partial_x\hat V(x,t,t)^\top f(x)\\
&\quad -\frac{1}{2}\partial_{x}\hat V(x,t,t)^\top\partial_{x}\hat V(x,t,t)\\
&\quad + \frac{1}{2}\mathrm{Tr}\left(\sigma\sigma^\top\partial_{xx} \hat V(x,t,t)\right) + O(\calT).
\end{split}
\end{align}
Multiplying both sides by $ \calT $ yields
\begin{align}
\begin{split}
-&\hat V(x,t ,t+\calT)+\hat V(x,t ,t)\\
&= \tilde h(x,\hat \rho(x,t,t)) + \calT\left\{c(x,t)+O(\calT)\right\},
\end{split}
\end{align}
where $c(x,t)$ is a function independent of $\calT$.
Since the terminal condition of Eq.~\eqref{eq:MP-MFG_HJB_terminal} holds, and the function $c(x,t)$ is bounded because of the smoothness assumption on $u_\calT(x,t)$, we obtain
\begin{align}\label{eq:V=h}
\lim_{\calT\to +0}\hat V(x,t ,t) &= \tilde h(x,\hat \rho(x,t,t))\\
&= \tilde h(x,\rho(x,t)),
\end{align}
from which the optimal input of the MP-MFG is represented as
\begin{align}
\begin{split}
u_{\calT}(x,t) &= -\partial_x \hat V(x,t ,t)\\
& \to -\partial_x \tilde h(x,\rho(x,t)),\ \text{as }\calT\to+0.
\end{split}
\end{align}
This corresponds to the input of the BRS \eqref{eq:BRS_input}.
Then, the FP equation of the MP-MFG (Eq.~\eqref{eq:MP-MFG_FP}) and the FP equation of the BRS (Eq.~\eqref{eq:BRS_density}) are equal, and Eq.~\eqref{eq:density_correspondence} is also confirmed to hold.
\end{proof}
\section{Numerical Examples}
In this section, we numerically calculate the MP-MFG formulated in Sec.~\ref{sec:Proposed} and evaluate the performance of the solution for various prediction times.
We also perform numerical calculations for the BRS, in order to confirm the validity of Theorem~\ref{thm:MFG_BRS_convergence}.
We consider the optimal coverage control of robots moving in one-dimension space $ \Omega: = [0,1] $ with a periodic boundary condition.
The dynamics of each robot are defined as follows:
\begin{align}\label{eq:robot_dynamics}
\rmd x_i(t) &= u_i(t) \rmd t + \sigma \rmd w_i(t),\ t\in[0,T]
\end{align}
where $ x_i (t) \in \Omega $ is the position of the robot, $ u_i (t) \in [-1,1] $ is the input of the robot, and $\sigma\in \bbR$ is the gain of the noise.
Each robot attempts to scatter throughout the space while minimizing its energy consumption.
To achieve this purpose, we design the evaluation function of the MP-MFG as
\begin{align}
\begin{split}
&J_i(\{u_i(t)\}_{t\in[t,t+\calT]}) =\\
&\quad \bbE\left\{ \int_t^{t+\calT} \left[ \frac{1}{2}u_i(\tau)^2 + \frac{1}{\calT}\ln(\rho_{i^-}(x,\tau))\right] \rmd \tau \right\}.
\end{split}\label{eq:robot_eval_func}
\end{align}
The second term of the integrand is multiplied by $ 1 / \calT $ in order to confirm the correspondence between the MP-MFG and the BRS, as stated in Theorem~\ref{thm:MFG_BRS_convergence}.
In all numerical calculations, a Gaussian function having a peak at $ x = 0.5 $ was used as the initial density distribution $ \rho ^ 0 (x) $:
\begin{align}
\rho^0(x)=\frac{1}{\sqrt{0.02\pi}}\exp\left\{-\frac{(x-0.5)^2}{0.02}\right\}.
\end{align}
The noise gain of the dynamics is set as $ \sigma ^ 2 = 0.05 $.
Then, the equations for input generation in the MP-MFG are written as:
\begin{align}
\begin{split}
-\partial_\tau \hat V(x,t ,\tau) &= \frac{1}{\calT}\ln(\hat \rho(x,t,\tau))\\
&\quad - \frac{1}{2}\partial_{x}\hat V(x,t,\tau)^\top\partial_{x}\hat V(x,t,\tau)\\
&\quad + \frac{\sigma^2}{2} \partial_{xx} \hat V(x,t,\tau),
\end{split}\label{eq:robot_MP-MFG_HJB} \\
\begin{split}
\partial_\tau \hat \rho(x,t,\tau) &=
\partial_x \left(\partial_{x}\hat V(x,t,\tau)\hat \rho(x,t,\tau)\right)\\
&\quad + \frac{\sigma^2}{2}\partial_{xx}\hat \rho(x,t,\tau),
\end{split}\label{eq:robot_MP-MFG_FP}\\
\hat V(x,t ,t+\calT) &= 0,\label{eq:robot_MP-MFG_HJB_terminal}\\
\hat\rho(x,t,t)&=\rho(x,t),\label{eq:robot_MP-MFG_FP_initial}
\end{align}
and the dynamics of the density distribution is written as
\begin{align}\label{eq:robot_MP-MFG_dynamics}
\partial_t \rho(x,t) &=
\partial_x \left(\partial_{x}\hat V(x,t,t)\rho(x,t)\right)
+ \frac{\sigma^2}{2}\partial_{xx}\rho(x,t).
\end{align}
\begin{figure*}[t]
\centering
\includegraphics[width=160mm,bb=0.000000 0.000000 987.000000 288.000000]{./figs/m_MPMFG.pdf}
\caption{Time evolution of the density function $\rho(x,t)$ obtained by solving Eqs.~\eqref{eq:MP-MFG_HJB} through \eqref{eq:MP-MFG_dynamics_initial} for various prediction times $\calT$. (a) $\calT=0.01$, (b) $\calT=0.1$, (c) $\calT=1.0$. }
\label{fig:MP-MFG_density}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=160mm,bb=0.000000 0.000000 987.000000 288.000000]{./figs/u_MPMFG.pdf}
\caption{Time evolution of the optimal input function $u(x,t)$ obtained by solving Eqs.~\eqref{eq:MP-MFG_HJB} through \eqref{eq:MP-MFG_dynamics_initial} for various prediction times $\calT$. (a) $\calT=0.01$, (b) $\calT=0.1$, (c) $\calT=1.0$. }
\label{fig:MP-MFG_input}
\end{figure*}
The results of calculating the MP-MFG (Eqs.~\eqref{eq:robot_MP-MFG_HJB} through \eqref{eq:robot_MP-MFG_dynamics}) using Algorithm~\ref{alg:MP-MFG} are shown in Figs.~\ref{fig:MP-MFG_density} and \ref{fig:MP-MFG_input}.
The horizontal axis of each figure represents the domain of each function $ \Omega = [0,1] $, and the vertical axis represents the time $ t \in [0,1] $.
We compare the solutions for three prediction times: $ \calT \in\{0.01, 0.1, 1.0\}$.
First, Fig.~\ref{fig:MP-MFG_density} is a plot of the density function.
For all prediction times, the robots are dispersed over time, and the dispersion speed becomes faster as the prediction time becomes smaller.
This is because the second term of the integrand of the evaluation function (Eq.~\eqref{eq:robot_eval_func}) has order $O(\calT^{-1})$, so that small prediction times lead to excessive input generation.
Next, Fig.~\ref{fig:MP-MFG_input} is a plot of the input function.
In all calculations, the peak of the robots' distribution is suppressed as robots located at $x>0.5$ move to the right and robots located at $x<0.5$ move to the left.
When the prediction time is short, particularly in the transient state, each robot moves at $u = \pm 1.0$, which is the maximum (or minimum) value of the input.
This means that the energy consumption of each robot increases as the prediction time decreases.
\begin{figure}[t]
\centering
\includegraphics[width=80mm,bb=0.000000 0.000000 392.540986 267.635810]{./figs/compare_eval_func.pdf}
\caption{Time evolution of the evaluation function (Eq.~\eqref{eq:eval_total}) for various prediction times $\calT$. }
\label{fig:MP-MFG_eval}
\end{figure}
In order to quantitatively confirm the above observations, we define and calculate the following evaluation function, assessing the overall performance of the robot swarm:
\begin{align}\label{eq:eval_total}
\bar J(t) := \int_\Omega \left[ \frac{1}{2}u(x,t)^2 + \ln(\rho(x,t)) \right]\rho(x,t)\rmd x.
\end{align}
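On a discrete grid, $\bar J(t)$ reduces to a simple quadrature. The following fragment is an illustrative sketch only (\texttt{eval\_functional} is an invented name), assuming equally spaced grid points and a Riemann sum.

```python
import numpy as np

def eval_functional(u, rho, dx):
    """Riemann-sum approximation of
    J_bar(t) = integral of [u^2 / 2 + ln(rho)] * rho dx over the domain."""
    return float(np.sum((0.5 * u**2 + np.log(rho)) * rho) * dx)
```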
We plot $\bar J(t)$ for various prediction times in Fig.~\ref{fig:MP-MFG_eval}.
For all prediction times, the value of the evaluation function decreases as time elapses.
This is a result of reducing the second term of the evaluation function, which is related to the density of the robots, by moving the robots in the direction opposite to the distribution peak.
In addition, as the prediction time increases, the value of the evaluation function (Eq.~\eqref{eq:eval_total}) decreases, particularly in the transient state.
This is because an excessively large input is applied when the prediction time is short, so that the first term related to the input of robots in the evaluation function becomes large.
In the stationary state, the peak of the distribution is sufficiently suppressed, so that the values of the evaluation functions are approximately the same for all the prediction times.
\begin{figure*}[t]
\centering
\includegraphics[width=120mm,bb=0.000000 0.000000 885.000000 346.000000]{./figs/m_u_BRS.pdf}
\caption{Time evolution of the density function $\rho(x,t)$ and input function $u(x,t)$ obtained by solving Eqs.~\eqref{eq:BRS_input}, \eqref{eq:BRS_density}, and \eqref{eq:BRS_density_initial}. (a) Density function $\rho(x,t)$, (b) Input function $u(x,t)$. }
\label{fig:BRS}
\end{figure*}
\begin{figure}[t]
\centering
\includegraphics[width=80mm,bb=0.000000 0.000000 398.426767 269.185601]{./figs/compare_solution.pdf}
\caption{Distance between the solutions of the model predictive mean field game and the best reply strategy for various prediction times $\calT$. }
\label{fig:MFG_BRS_converge}
\end{figure}
Next, a numerical calculation is performed on the system of Eq.~\eqref{eq:robot_dynamics} with the BRS (Eqs.~\eqref{eq:BRS_input} through \eqref{eq:BRS_density_initial}).
The input in the BRS, corresponding to the evaluation function of Eq.~\eqref{eq:robot_eval_func}, is calculated as follows:
\begin{align}
u(x,t) = -\frac{1}{\rho(x,t)}\partial_x\rho(x,t).
\end{align}
The FP equation under this input is written as the following diffusion equation:
\begin{align}
\partial_t \rho(x,t) &= \frac{\sigma^2+2}{2}\ \partial_{xx}\rho(x,t).\label{eq:robot_BRS_FP}
\end{align}
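This reduction to a diffusion equation can also be checked numerically. The following fragment is illustrative only (centered differences on a periodic grid, with the same Gaussian initial density as above): it verifies that inserting $u=-(1/\rho)\,\partial_x \rho$ into the drift term of the FP equation reproduces the diffusion coefficient $(\sigma^2+2)/2$.

```python
import numpy as np

# Check: with u = -(1/rho) * d rho/dx, the FP right-hand side equals the
# diffusion right-hand side (sigma^2 + 2)/2 * d^2 rho/dx^2.
nx, sigma2 = 200, 0.05
dx = 1.0 / nx
x = np.arange(nx) * dx
rho = 1 / np.sqrt(0.02 * np.pi) * np.exp(-(x - 0.5) ** 2 / 0.02)

Dx = lambda g: (np.roll(g, -1) - np.roll(g, 1)) / (2 * dx)
u = -Dx(rho) / rho                                   # BRS input
fp_rhs = -Dx(u * rho) + sigma2 / 2 * Dx(Dx(rho))     # FP right-hand side
diff_rhs = (sigma2 + 2) / 2 * Dx(Dx(rho))            # diffusion right-hand side
err = np.max(np.abs(fp_rhs - diff_rhs))              # rounding error only
```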
Figure~\ref{fig:BRS} shows the BRS calculated from these equations, where Fig.~\ref{fig:BRS}(a) is the time evolution of the density function and Fig.~\ref{fig:BRS}(b) is the time evolution of the input function.
As in the case of the MP-MFG, the peak is reduced with time by each robot moving in the direction opposite to the distribution peak.
In addition, comparing Fig.~\ref{fig:BRS} with Figs.~\ref{fig:MP-MFG_density} and \ref{fig:MP-MFG_input}, the density and input obtained by the BRS appear to be close to those obtained by the MP-MFG with the short prediction time.
In order to confirm this, we calculate the norm of the difference between the functions obtained in the MP-MFG ($\rho_\calT(x,t)$ and $u_\calT(x,t)$) and the functions obtained in the BRS ($\tilde\rho(x,t)$ and $\tilde u(x,t)$):
\begin{align}
D(\rho_\calT, \tilde\rho, u_\calT, \tilde u) := \|\rho_\calT-\tilde\rho\| + \|u_\calT-\tilde u\|,
\end{align}
and plot the norm in Fig.~\ref{fig:MFG_BRS_converge}.
In the MP-MFG, the shorter the prediction time is, the closer the solution is to the solutions given by the BRS.
Thus, the statement of Theorem~\ref{thm:MFG_BRS_convergence} is numerically confirmed.
\section{Conclusion}
In the present paper, we proposed methods for solving the optimal coverage control problem of swarm robot systems using the MP-MFG and the BRS.
Numerical calculations show that the performance of the MP-MFG improves as the prediction time increases (see Fig.~\ref{fig:MP-MFG_eval}),
and the MP-MFG and the BRS asymptotically coincide in the limit where the prediction time goes to $0$ (see Fig.~\ref{fig:MFG_BRS_converge}).
While the original MFG is a feedforward controller, the proposed MP-MFG is a feedback controller that uses the information of the entire group at each time.
Therefore, the MP-MFG is considered to be robust against modeling errors and disturbances. The next step in this research is to confirm this robustness. The proposed method is widely applicable not only to swarm robot systems but also to many other systems composed of large populations of homogeneous agents,
and extending the scope of application of the proposed method is among the subjects for future study.
\section*{Acknowledgment}
The authors would like to thank Dr.~Norikazu Saito and Dr.~Hiroyoshi Mitake of the University of Tokyo for useful discussions.
\section{The \hess\ extra-galactic sky}
\textit{H.E.S.S.}, a system of four atmospheric Cherenkov telescopes sensitive to $\gamma$-ray photons above $100$ GeV, has detected more than 20 extra-galactic sources since the beginning of scientific operations in 2003. \\ Apart from the starburst galaxy \textit{NGC~253}, the VHE extra-galactic sky seen by \hess\ is entirely composed of AGNs. Among these, blazars (BL Lac objects and FSRQ) are by far the most important sub-class. In the unified AGN model \cite{Urry}, these objects are radio-loud AGNs whose relativistic jet is pointed towards the observer. The emitting region is thought to be a very dense zone inside the jet, filled with a high-energy particle ($e^\pm$) population and a magnetic field of the order of $10^{-2}$-$10^{-1}$G. The high energy emission is usually interpreted as inverse Compton scattering of synchrotron photons (synchrotron self-Compton model, SSC), or external photons (external inverse Compton model), off the high energy particles in the emitting region. The measured flux is enhanced by relativistic effects.\\
Apart from the FSRQ \textit{PKS~1510-089} \cite{1510} and the two radio-galaxies \textit{Cen~A} \cite{CenA} and \textit{M~87} \cite{M87,M87bis}, all the AGNs detected with \hess\ are BL Lac objects.\\
The list of the extra-galactic sources detected by \hess\ up to now is reported in Table~\ref{tablepippo}, while the cumulative number of extra-galactic sources as a function of time is shown in Fig.~\ref{simp_fig}.
Recently, an important improvement has been achieved through the development of enhanced analysis techniques \cite{Yvonne,Mathieu,Ohm}, which permit the detection of faint $\gamma$-ray sources with shorter observation times compared to the standard Hillas reconstruction technique \cite{Hillas}.\\
In the following, we concentrate on some of the most recent \hess\ results on extra-galactic sources, namely the detection of VHE emission from the BL Lac objects \textit{SHBL~J001355.9-185406}, \textit{1RXS~J101015.9-311909}, \trezedouze, \aplib\ and the BL Lac candidate \hesscandidate.\\
\begin{table}[t]
\begin{center}
\begin{tabular}{c|c|c}
\hline
& Redshift & Ref. \\
\hline
\textbf{Starburst Galaxies}& &\\
\hline
NGC 253 & $8.14\times10^{-4}$ & \cite{NGC253}\\
\hline
\textbf{Radio Galaxies}& &\\
\hline
M 87 & $0.0044$& \cite{M87,M87bis}\\
Centaurus A & $0.00183$ & \cite{CenA}\\
\hline
\textbf{FSRQs}& &\\
\hline
PKS 1510-089 & $0.36$ & \cite{1510}\\
\hline
\textbf{BL Lacs}& &\\
\hline
SHBL J001355.9-185406 & $0.095$ & \cite{0013}\\
RGB J0152+017 & $0.08$ &\cite{0152}\\
1ES 0229+200& $0.14$ &\cite{0229}\\
1ES 0347-121& $0.188$ &\cite{0347}\\
1ES 0414+009& $0.287$ & \cite{0414}\\
PKS 0447-439& $>0.176$ & \cite{0447}\\
PKS 0548-322& $0.069$ &\cite{0548}\\
1RXS J101015.9-311909& $0.143$ & \cite{Texas}\\
1ES 1101-232& $0.186$ &\cite{1101}\\
Mrk 421& $0.031$ &\cite{Mrk421}\\
1ES 1312-423& $0.105$ & \cite{Texas}\\
AP Lib & $0.049$ & \cite{APLib, APLibtexas}\\
PG 1553+113& $0.43-0.58$ \cite{1553redshift} & \cite{1553,1553bis}\\
HESS J1943+213& $>0.14$ &\cite{1943}\\
PKS 2005-489& $0.071$ &\cite{2005,2005bis}\\
PKS 2155-304& $0.116$ & \cite{2155,2155bis,2155ter}\\
& & \cite{2155quater,2155quinquies,2155sexies}\\
H 2356-309& $0.165$ &\cite{2356,2356bis}\\
\hline
\end{tabular}
\caption{Extra-galactic sources detected by \hess\ }
\label{tablepippo}
\end{center}
\end{table}
\section{Recent \hess\ results on extra-galactic sources}
\subsection{\shbl\ }
\shbl\ is a high-energy-peaked BL Lac object (HBL) \cite{shblcatalog} located at a redshift of $z=0.095$. The VHE emission from this very weak source is detected by \hess\ at $\sim5$ standard deviations ($\sigma$) above $300$ GeV, during $40$ hours of observation live-time taken between 2008 and 2010 \cite{0013}. The excess map is plotted in Fig. \ref{dixdixplot}. The TeV flux from the source corresponds to $\sim1\%$ of the flux from the Crab nebula (``Crab units''). Following the \hess\ announcement, the presence of a GeV counterpart was claimed by the \fermi\ collaboration \cite{0013Fermi}: the flux of the source above $100$ MeV is $(9\pm7)\times10^{-10}\ \textrm{ph}/\textrm{cm}^2/\textrm{s}$, with a photon index of $1.5\pm0.2$. However, the source is not present in the \fermi\ 2-year catalog (2FGL) \cite{2FGL}.
\subsection{\dixdix\ }
\dixdix\ is a bright X-ray source detected for the first time by \textit{ROSAT} \cite{Rosat}, and identified as a blazar at redshift $z=0.143$ \cite{dizdizid}. The VHE $\gamma$-ray emission from this object has been detected by \hess\ at $7\sigma$, in $49$ hours of observation live-time taken between 2006 and 2010 \cite{Texas}. The excess map is plotted in Fig. \ref{dixdixplot}. The time-averaged photon index of the source is $3.1\pm0.5_{stat}$, with a flux equal to $\sim0.008$ Crab units. The source is present in the 2FGL catalog : the flux between $300$ MeV and $100$ GeV is $(3.5\pm0.9)\times10^{-9}\ \textrm{ph}/\textrm{cm}^2/\textrm{s}$, the photon index being $2.2\pm0.1$. No significant flux variability has been detected in the \hess\ data.
\begin{figure}[!t]
\vspace{5mm}
\centering
\includegraphics[width=3.3in]{icrc0913_fig01.ps}
\caption{Cumulative number of extra-galactic sources detected by \hess\ as a function of the year}
\label{simp_fig}
\end{figure}
\begin{figure*}[t!]
\vspace{5mm}
\centering
\includegraphics[width=6in]{icrc0913_fig02.ps}
\caption{Excess map centered on \shbl\ (left) and \dixdix\ (right). Taken from \cite{Texas}.}
\label{dixdixplot}
\end{figure*}
\subsection{\trezedouze\ }
\trezedouze\ is an X-ray source classified as a blazar at a redshift of $z=0.105$ \cite{Einstein}. The object was discovered serendipitously in the field of view of \hess\ observations centered on the radio galaxy \textit{Centaurus A} (see Fig.\ref{trezedouzeplot}). The total observation live-time (corrected for the lower acceptance due to the $2^\circ$ offset from the center of the field of view) is $\sim65$ hours. The TeV emission from \trezedouze\ is detected by \hess\ at $\sim7\sigma$ with a flux equal to $0.004$ Crab units \cite{Texas}. The source is not present in the \fermi\ 2-year catalog and no significant variability has been detected in the \hess\ data set.
\begin{figure}[!t]
\vspace{5mm}
\centering
\includegraphics[width=3.5in]{icrc0913_fig03.ps}
\caption{\hess\ significance map of the sky region around \trezedouze\, with the radio-galaxy \textit{Cen A} in the same field-of-view \cite{Texas}}
\label{trezedouzeplot}
\end{figure}
\subsection{\aplib\ }
\aplib\ is a nearby AGN ($z=0.049$), classified as a low-energy-peaked BL Lac object (LBL) \cite{APLibid}. The TeV emission from \aplib\ is detected by \hess\ at $7\sigma$ in $11$ hours of observation live-time taken in 2010 \cite{APLib, APLibtexas}. The VHE spectral index of the source is $\Gamma$=$2.5\pm0.2$, and its flux is $\sim0.02$ Crab units. The object is detected by \fermi\ as well, with an integrated $0.3$-$300$ GeV flux of $(1.9\pm0.1)\times10^{-8} \textrm{ph}/\textrm{cm}^2/\textrm{s}$, and a spectral index of $2.1\pm0.1$ \cite{APLibtexas}. The SED of \aplib\ (see Fig. \ref{aplibsed}\cite{APLibtexas}) shows a very broad high energy bump, which covers the X-ray to VHE $\gamma$-ray energy band. Such a broad component is usually explained by assuming inverse Compton emission on the external photon field (i.e. broad line region, accretion disk), which is thought to be important in LBLs \cite{Ghisellini}. \aplib\ is the third LBL ever detected at VHE (together with \textit{BL~Lac} and \textit{S5~0716+714}).
\begin{figure*}[t!]
\vspace{5mm}
\centering
\includegraphics[width=5in]{icrc0913_fig04.ps}
\caption{Spectral energy distribution of \aplib. See \cite{APLibtexas} for details.}
\label{aplibsed}
\end{figure*}
\subsection{\hesscandidate\ }
\hesscandidate\ is a point-like source detected by \hess\ during the VHE galactic survey \cite{1943}. Its position is consistent with the unidentified hard X-ray source \textit{IGR J19443+2117}. The source is detected at $7.9\sigma$ in $\sim25$ hours of observation live-time: its flux corresponds to $\sim0.02$ Crab units, with a photon index of $3.1\pm0.3_{stat}$ between $470$ GeV and $6$ TeV. \\
The study of the spectral energy distribution of this source, including radio, infrared and X-ray data, suggests an extra-galactic origin of the emission. A gamma-ray binary hypothesis is disfavored mainly due to the lack of a plausible massive stellar counterpart in optical/infrared and the absence of orbital variability. A pulsar wind nebula origin is disfavored by the very soft TeV spectrum and the absence of extended X-ray counterparts. On the other hand, the overall spectral energy distribution is consistent with an extreme high-energy-peaked blazar, with a synchrotron peak energy $>1$ keV. The lower limit on the redshift, based on the expected flux from the host galaxy, is
$z>0.14$.
\section{Summary and conclusions}
A summary of the \hess\ extra-galactic sky was presented, with particular emphasis on the latest TeV-emitting AGNs detected by \hess\ .\\
The increasing number of detections claimed by \hess\ in the last years is mainly due to the development of new high-performance analysis techniques. In the near future, the study of the VHE sky will be significantly improved by the construction of a fifth 24m-diameter telescope: in this new configuration (\hess\ \textit{II}), the increased sensitivity and the lower energy threshold (down to $30$ GeV) will permit the detection of fainter and more distant objects, further populating the VHE extra-galactic sky. \hess\ \textit{II} is currently under construction and should be fully operational by the end of 2012.
\section{Acknowledgements}
The support of the Namibian authorities and of the University of Namibia in facilitating the
construction and operation of \hess\ is gratefully acknowledged, as is the support by the German
Ministry for Education and Research (BMBF), the Max Planck Society, the French Ministry
for Research, the CNRS-IN2P3 and the Astroparticle Interdisciplinary Programme of the CNRS,
the U.K. Science and Technology Facilities Council (STFC), the IPNP of the Charles University,
the Polish Ministry of Science and Higher Education, the South African Department of Science
and Technology and National Research Foundation, and by the University of Namibia. We appreciate
the excellent work of the technical support staff in Berlin, Durham, Hamburg, Heidelberg,
Palaiseau, Paris, Saclay, and in Namibia in the construction and operation of the equipment.
\section{Introduction}
Negative electron affinity (NEA) materials are of great interest for photocathodes due to their high photoelectron yield and low thermal emittance \citep{Spicer_1977,Machuca_2000, Cui_2000}. The electron affinity of diamond depends on the exact surface, i.e. chemical species, orientation and reconstruction, and is readily adjustable between +1.7 eV and -1.3 eV with only oxygen and hydrogen as chemisorbed atoms \citep{Maier_2001}. If the surfaces are terminated by hydrogen, they reveal a comparatively low work function and true NEA, i.e. a conduction band minimum (CBM) above the vacuum level at the solid-vacuum interface. This boosts the photoelectron yield by orders of magnitude, paving the way for a highly efficient photocathode. Photo-excitation happens in the bulk and electrons are emitted into vacuum when they reach the surface, even if they have thermalized to the CBM. This is in contrast to metals, where only photoelectrons excited within the thermalization length below the surface can escape into vacuum. The combination of NEA, high thermal conductivity and mechanical robustness under imperfect vacuum condition make diamond a desirable material for photocathodes.\\
As the electron emittance, a measure of beam quality, is directly connected to the electron source size, nanosized emitters are favoured. Sharp tungsten tips are known as the brightest electron sources in scanning and transmission electron microscopes because of their extremely small virtual source size, which can be even smaller than the nanometer-sized geometrical source size \citep{Spence2013,Ehberger2015}. Coating such a sharp tungsten tip with a NEA material like hydrogen-terminated diamond holds promise for an even brighter photocathode. Since it is important to maintain a small source size, the diamond layer should be thin. The thickness also defines the mean migration time of the excited carriers to the surface. This migration influences the electron pulse duration after pulsed photo-excitation.\\
All these arguments favour a thin and dense diamond coating on a sharp tungsten tip. For the deposition of such thin layers, a high nucleation/seeding density is crucial. The shorter the mean distance between neighbouring nucleation sites, the thinner the resulting dense film can be. Sufficiently high seed densities require appropriate adhesion of the seeds to the substrate, e.g. via electrostatic forces \cite{Hees_2011,Mandal2017}. Rheological forces occurring during the drying process of dispersed particle suspensions influence the local seeding density and can lead to phenomena like ring stains, also known as the ``coffee ring effect'' \citep{Deegan1997}. Therefore, zeta potential adjustment and counteracting of rheological forces are essential for the control of the seeding densities. Moreover, the morphology of the nanodiamond films will play a crucial role for the electron emission properties as well. $sp^2$-bonded carbon specifically located at the grain boundaries of the film is beneficial in terms of providing sufficient conductivity through the film to prevent charging during operation. \\
Previous work reported the coating of tips based on electrophoresis \citep{Zhirnov1996, Alimova1999, Choi1996}, with bias-enhanced nucleation during chemical vapour deposition (CVD) \citep{Liu1994,Albin1999}, paraffin wax and CVD \citep{Palomino2014} and ultrasonic seeding in nanodiamond slurry with and without a carburization step \citep{Tzeng2008}.
In this work, we report a new recipe for the fabrication of diamond-coated tungsten tips via dip-seeding, nitrogen gas flow and CVD. Furthermore, we characterize the structure of the resulting diamond coating. The results are promising for a high brightness diamond-based electron source. This approach is also valuable for the fabrication of samples for local electrode atom probe tomography to investigate the spatial distribution of dopants in ultrathin nanocrystalline diamond films.
\begin{figure*}
\centering
\includegraphics[width=12cm]{N2_flow_tips_rev}
\caption{SEM images of diamond coated tungsten tips with different nitrogen flow after dip-seeding. a)-c) Zero flow: The seeding density clearly decreases along the tip shank and only few grains are found close to the apex. Homogeneous coating of the apex could not be achieved. d)-f) Moderate flow: The entire tip including the apex is densely covered with nanodiamond. g)-i) Strong flow: Here, the apex region is selectively coated with 20~nm thin diamond. Only the first 400~nm are coated with a dense layer of diamond. From $\sim$ 800~nm behind the apex, the coating is almost absent. The inset in i) is the same tip prior to deposition.}
\label{fig:N2_flow_tips}
\end{figure*}
\section{Experimental}
\subsection{Tip fabrication}
Tungsten wire is etched electrochemically with 3 mol/L aqueous NaOH via the lamellae drop-off technique \citep{Klein1997}: a thin film of the electrolyte is trapped in a ring-shaped gold electrode. A bias voltage of 6 V is applied between this gold cathode and the tungsten wire, which acts as the anode, until the wire is etched through. A second gold electrode with trapped electrolyte underneath the cathode detects the drop-off and shuts off the etching potential within less than 1 $\mu$s to prevent post-etching and blunting of the tip. Freshly etched tips are rinsed with deionised water to remove electrolyte residue. The resulting radii of tungsten tips prepared with this method are typically 5-20~nm.
\subsection{Diamond seeding}
Sharp tungsten tips are dip-seeded for a few seconds in monodisperse nanodiamond suspensions of crystal diameter of 4-6~nm. Both 0.025 wt.\% aqueous suspension from Carbodeon and 0.025 wt.\% in dimethyl sulfoxide:methanol 1:3 from Adamas Nanotechnologies were used in this work with comparable results. Due to the oxidized tungsten surface after etching, the zeta potential of the freshly etched tungsten surface is presumably negative at the pH of the seeding suspensions. To ensure good adhesion of the diamond seeds, highly zeta-positive hydrogenated seeds are therefore used. Even though the exact zeta potential of the tungsten surface and of the seeds is not known, this qualitative approach has worked reliably on flat samples and nanotips. Experiments with zeta-negative seeds on flat tungsten samples showed more than one order of magnitude lower seeding densities. Directly after seeding of a nanotip by dip-coating, it is blown dry with pressurized nitrogen gas directed from the shaft towards the tip apex with adjustable flow rates between $0.5 - 2.5 \frac{L}{sec}$ through a nozzle with 3.5 mm diameter.
\subsection{Diamond deposition}
Diamond deposition is performed in a home-built microwave plasma-enhanced chemical vapor deposition (MPECVD) chamber at 2.5 GHz frequency and 512 W microwave power, at a pressure of 49 mbar using 50 sccm hydrogen and 2 sccm methane flow. The sample holder is heated to 600 \degree C and then lifted into the plasma. The plasma additionally heats the sample so that the local temperature of the tip apex region is expected to be higher than 600 \degree C. Deposition times range between 2 and 20 min with a growth rate of approximately 10 $\frac{nm}{min}$. After terminating the diamond growth by switching off the microwave power and sample heater, the sample cools down in a hydrogen atmosphere at 45 mbar. This recipe reliably results in a hydrogen-terminated diamond surface with negative electron affinity \citep{Cui2000,Maier_2001}.
\subsection{Characterisation}
Scanning electron microscopy (SEM) is used for routine imaging of the tip geometry and the morphology of the diamond films. In addition, a coated tungsten tip is characterized by imaging, electron diffraction and electron energy loss spectroscopy (EELS) in a Titan Themis transmission electron microscope (TEM) operating at 200~kV. The microscope is equipped with C$_s$-correctors both at the illumination and imaging side and a Gatan GIF Quantum ER spectrometer. The sample wire was inserted in a Nanofactory STM-TEM holder. The image corrector was tuned to negative C$_s$ imaging condition where the first pass band corresponds to a resolution of 1{\AA}. We noticed that upon illumination of the tip area, the effective lens aberration can suffer from strong drift at high dose rate. Therefore, a moderate to low dose rate and careful grounding of the STM-TEM holder is necessary to obtain high-resolution TEM (HRTEM) images with good quality. The single electron energy loss (EEL) spectra were acquired directly in TEM diffraction-coupled mode with the largest collection angle - i.e. without objective aperture - to suppress the anisotropic effect in the study of graphite \citep{Leapman1983}. \\
The EEL spectrum image was acquired in scanning TEM (STEM) mode with an effective collection angle of 30 mrad, a pixel size of 1.1 nm and a short dwell time to balance the sample drift.
The low-loss and high-loss regions (i.e., the zero-loss and carbon 1s regions in this study) are acquired with a dispersion of 0.25 eV/channel. The standard Fourier-Log deconvolution method using the recorded zero-loss and plasmon peaks is applied to account for multiple scattering \citep{Egerton2011}.
We use an approximate quantification scheme to extract the $sp^2$/$sp^3$ ratio neglecting the anisotropy of the scattering cross section of the graphitic components.
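The idea behind such a window-based estimate can be illustrated with a toy calculation. The sketch below (plain Python; the peak widths, amplitudes and window limits are illustrative assumptions and not the actual scheme of the appendix) builds a synthetic carbon K-edge from Gaussian components at the peak positions quoted in the text (285 eV for $\pi^*$, 289/292 eV for $\sigma^*$) and compares the integrated $\pi^*$ window against the $\sigma^*$ window:

```python
import math

def gauss(e, mu, sigma, amp):
    """Single Gaussian peak used to model one EELS component."""
    return amp * math.exp(-((e - mu) ** 2) / (2 * sigma ** 2))

def synth_spectrum(e, sp2_amp, sp3_amp):
    """Toy carbon K-edge: a pi* peak near 285 eV (sp2) plus broad
    sigma* components near 289/292 eV (sp3). Positions follow the
    peaks quoted in the text; widths and amplitudes are illustrative."""
    return (gauss(e, 285.0, 1.0, sp2_amp)
            + gauss(e, 289.0, 2.0, sp3_amp)
            + gauss(e, 292.0, 2.5, sp3_amp))

def window_integral(energies, counts, lo, hi):
    """Trapezoidal integral of the spectrum over [lo, hi] eV."""
    total = 0.0
    for (e0, c0), (e1, c1) in zip(zip(energies, counts),
                                  zip(energies[1:], counts[1:])):
        if e0 >= lo and e1 <= hi:
            total += 0.5 * (c0 + c1) * (e1 - e0)
    return total

def sp2_fraction(energies, counts):
    """Ratio of the pi* window to the total (pi* + sigma*) signal."""
    i_pi = window_integral(energies, counts, 283.0, 287.0)
    i_sigma = window_integral(energies, counts, 287.0, 297.0)
    return i_pi / (i_pi + i_sigma)

energies = [280.0 + 0.25 * i for i in range(100)]   # 0.25 eV/channel, as in the text
graphitic = [synth_spectrum(e, 1.0, 0.5) for e in energies]
diamondlike = [synth_spectrum(e, 0.1, 1.0) for e in energies]
print(sp2_fraction(energies, graphitic), sp2_fraction(energies, diamondlike))
```

A spectrum with a stronger 285 eV component yields a larger ratio, which is the qualitative behaviour mapped pixel-wise in the STEM-EELS evaluation; the anisotropy of the graphitic cross section is neglected here, as in the text.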
\section{Results and discussion}
\subsection{Diamond seeding and coating}
Diamond films deposited after dip-seeding on tungsten foils show clear signs of evaporation dynamics and their influence on the local seeding density (see \ref{ch:AppendixA}). A related effect can be observed when tungsten tips are dip-seeded with nanodiamond suspensions. Without dry-blowing of the dip-seeded tips, continuous and homogeneous coating with diamond was achieved at the shank of the tip, as can be seen in fig. \ref{fig:N2_flow_tips}a)-c). The high nucleation density at the shank is presumably a result of the high positive zeta potential of the seeds and the negative zeta potential of the tungsten surface. However, the density of diamond crystallites, indicating the seeding density, clearly decreases towards the tip. Solvent evaporation and the accompanying forces seem to redistribute the seeds, which are pushed away from the tip. To counteract this effect, we adopt a controlled flow of nitrogen gas towards the apex during the drying process. Without the nitrogen gas flow, not a single one of ten samples was covered with diamond at the apex. \\
Using pressurized nitrogen gas for dry-blowing immediately after the dip-seeding and consecutive MPECVD, diamond was successfully grown on the tip apex with an 82\% success rate (14 out of 17 samples).
At moderate flow rates (0.5 - 1.0 L/sec), homogeneous coating both at the shank and at the apex is achieved, as can be seen in fig. \ref{fig:N2_flow_tips}d)-f). Even the sharpest tips with approximately 5~nm radius were successfully coated with this technique (fig. \ref{fig:TEM}).
At high flow rates (up to 2.5 L/sec), the tungsten tips can even be coated selectively at the apex within less than 1 $\mu$m with 20~nm thin diamond (fig. \ref{fig:N2_flow_tips}g)-i)). From these observations we deduce that the nitrogen flow successfully counteracts the migration of the seeds away from the apex. If the flow is high enough, the seeds start migrating towards the apex and remain there only.
\subsection{Structural and chemical characterization}
Carbon deposited by MPECVD results in various phases as graphite, diamond and amorphous carbon depending on the exact parameters. The nanocrystalline diamond (NCD) films are expected to be composites of graphitic and diamond phases. Their morphology and composition will have decisive influence on the electron emission properties from coated tips. In order to elucidate the structural details we performed an extensive TEM study on an ultrasharp tungsten tip (apex radius approximately 5~nm) covered by a 100~nm thin NCD film.
Figure~\ref{fig:TEM}a) shows a bright-field image of the tip. The tungsten is seen dark in the center and is covered by a gray layer of NCD. The selected area electron diffraction (SAED) pattern shown in fig.~\ref{fig:TEM}b) was acquired using an aperture covering an area with about 200~nm radius over the tip region showing diffraction rings perfectly matching the powder pattern of diamond (red circles in fig. \ref{fig:TEM}b)). This confirms that the coated layer is dominated by diamond crystallites. Some additional weak spots that do not belong to diamond can be attributed to tungsten and graphite. Although the experimental diffraction ring pattern fills each circle completely, some sparse segments and strong spots can be seen especially on the \{220\} diffraction ring. \\
\begin{figure}
\centering
\includegraphics[width=7.5 cm]{fig2_rev}
\caption{The columnar nature of the diamond grains on the polycrystalline tungsten tip becomes visible under TEM inspection.
(a) bright-field TEM image of the diamond coated tungsten tip. The radius of curvature of the coated tip is 100~nm, while the initial W tip radius is $\sim$ 5nm.
(b) selected area electron diffraction (SAED) pattern of the sample. The calculated diamond powder ring pattern (using kinematical diffraction theory, red circles) is superimposed on the acquired pattern, indicating that the coating is dominated by crystalline diamond. Some weak spots can be assigned to tungsten and graphite as is exemplarily illustrated by the black arrows.
(c-e) dark-field images of the sample with the objective aperture placed at different positions of the \{111\} diffraction ring as indicated in (b) by the colored circles. Single grains oriented such that the \{111\} Bragg condition is satisfied are revealed this way, exposing the columnar shape as well as the characteristic grain size of 20~nm. The high resolution image in fig. \ref{fig:HRTEM} is obtained at the dashed black box in (a) and the EEL spectrum shown in fig. \ref{fig:EELS} is acquired in the blue dashed circle region in (a).
}
\label{fig:TEM}
\end{figure}
This is due to the textured structure of the diamond grains. However, the texture can hardly be retrieved from this diffraction pattern alone and will be subject of future research. In order to reveal the shape of the diamond grains more clearly, a series of dark-field images was recorded with the objective aperture placed at different azimuth location of the diamond \{111\} diffraction ring as indicated by the coloured circles in fig.~\ref{fig:TEM}b). The corresponding images are displayed in fig.~\ref{fig:TEM}(c-e). The columnar shape of the diamond grains with a width of about 20~nm is clearly evidenced by these dark-field images. The grain columns seem to align themselves at a small angle to the surface normal. On closer inspection, the columnar grains are also faintly visible in the bright-field image in fig.~\ref{fig:TEM}a). \\
Fig.~\ref{fig:EELS} presents the background-subtracted EEL spectrum recorded from the tip region marked by the dashed circle in fig.~\ref{fig:TEM}(a), as well as graphitic and diamond reference spectra. The fine structure of the carbon K-edge in the EEL spectrum reflects the orbital character of the conduction band states. Transitions into $sp^2-\sigma^*$ antibonding states, which form the upper part of the graphite conduction band, create a broad and featureless band in the EEL spectrum with a maximum at 292~eV. The diamond conduction band with $sp^3-\sigma^*$ character is also reflected as a broad band in the K-edge EEL spectrum with a threshold at about 290 eV and peaks at 292, 297 and 305 eV. These peaks are well resolved in the spectrum of the coated tip (fig. \ref{fig:EELS}) and their presence is a clear proof that the coating is diamond \citep{Chang2016}.
\begin{figure}
\centering
\includegraphics[width=7.5 cm]{EELS_tip_with_refs}
\caption{EEL spectrum acquired in the blue dashed circle region in fig.~\ref{fig:TEM}(a) with characteristic peaks of nanocrystalline diamond, in-situ graphite reference (black) and bulk diamond (blue) \citep{Chang2016}.}
\label{fig:EELS}
\end{figure}
Both the tip and the graphitic reference spectrum show a well resolved peak at 285 eV loss energy that is assigned to electron transitions into $sp^2-\pi^*$ antibonding states. As the $sp^2-\pi^*$ signal is absent for monocrystalline diamond, the integral over appropriate energy windows holds quantitative information on the ratio of $sp^2$- to $sp^3$-bonded carbon \citep{Lossy1995,Cuomo1991,Pappas_1992,Berger1988}. Fig. \ref{fig:STEM-EELS} shows the pixel-wise evaluated $sp^2$ to $sp^3$ ratio from the spatially-resolved STEM-EELS spectra (for details of the evaluation, see \ref{ch:AppendixB}). The map reveals that the average $sp^2$ content is larger at the apex and that paths of high $sp^2$ content are present which align with the axes of the single grains (fig. \ref{fig:TEM} \& \ref{fig:STEM-EELS}). The large $sp^2$ content at the apex is attributed to a larger seeding density at the apex, as fig. \ref{fig:N2_flow_tips} shows that the seeds adhere well to the apex after dry-blowing with nitrogen.
\begin{figure}
\centering
\includegraphics[width=7.5 cm]{fig4-v4}
\caption{Qualitative map of the ratio of $sp^2$- to $sp^3$-bonded carbon. (a) Evaluated map after the simple formula and processing method in \ref{ch:AppendixB}. More $sp^2$-bonded carbon is found in the axial region of the apex. The high ratio at the surface is due to carbon deposition (contamination) during the STEM-EELS measurement. The (deconvoluted and background subtracted) spectra denoted at positions b, c and d are shown in (b)-(d), respectively. (e) Intensity of extracted Gaussian peak 1 (G1) at 285~eV and (f) sum of Gaussian peaks 2 and 3 (G2+G3) at 289 and 292~eV, respectively.}
\label{fig:STEM-EELS}
\end{figure}
Further insight into the morphology of the film, specifically the location of the $sp^2$-bonded tissue, is given by the HRTEM image in fig. \ref{fig:HRTEM}. Wavy lattice fringes with a characteristic distance of 354~pm corresponding to the interlayer spacing of graphite can be seen. Some fringes are marked in white in fig. \ref{fig:HRTEM} for easier identification. Apparently, the graphitic components form contiguous paths between the diamond crystallites, which promises sufficient conductivity of the composite film to prevent charging of the tip during electron emission in future applications. \\
At the tip, one can see a few grains showing 2D lattice fringes. From the lattice fringe distances, one can attribute the lattice plane indices and plane normal directions. A small region as marked by the dotted box in fig. \ref{fig:HRTEM} is magnified in the inset with its Fourier transform on the upper right side. The \{111\} and \{220\} lattice planes can be easily recognized. However, drawing general conclusions about texture requires a thinner coating and will be subject of future studies.
\begin{figure}
\centering
\includegraphics[width=7.5 cm]{Fig6-rev}
\caption{HRTEM image of the diamond coating at the apex of the tungsten tip. Graphitic paths between the grains with interplane distance of 0.354~nm are visible.
The dashed boxed region is magnified as inset with its Fourier transform shown on the right side.}
\label{fig:HRTEM}
\end{figure}
\section{Conclusion}
Tungsten tips with apex radii down to 5~nm have been successfully coated with dense nanocrystalline diamond films with a thickness as small as 20~nm. Solvent evaporation after seeding has a large effect on the local seeding density, especially at strongly curved surfaces, and must be engineered appropriately. To counteract evaporation forces, we adopt a nitrogen gas flow towards the tip apex. Diamond deposition on the shaft only, on the apex only, as well as homogeneous coating of shaft and apex is achieved by variation of the nitrogen flow. We achieved the growth of 20~nm thin diamond limited to less than 1 $\mu$m away from the tip apex by this technique. EELS and electron diffraction of a coated tungsten tip confirm the presence of diamond with a fraction of $sp^2$-bonded carbon, identified as graphitic paths between grains via HRTEM images. A spatially resolved STEM-EELS measurement shows an elevated fraction of the relative $sp^2$-content at the tip apex. Furthermore, a columnar radial growth of diamond crystallites with a typical grain size of 20~nm is revealed. We expect that these diamond-coated tungsten tips with negative electron affinity offer a great potential for the use as high brightness photocathodes both in dc and ultrafast laser-triggered operation.
\section{Acknowledgements}
We thank J. Litzel for setting up and modernizing the MPECVD reactor. This work was supported by the ERC grant "Near Field Atto", DFG research training group "In-situ Microscopy with Electrons, X-rays and Scanning Probes"(GRK 1896) and DFG collaborative research center "Synthetic carbon allotropes" (CRC 953).
\section{Literature}
\bibliographystyle{prsty}
\section{Introduction}
\IEEEPARstart{I}{t} is known that orthogonal frequency division multiplexing (OFDM) without channel knowledge at the transmitter, such as the case in digital radio and television broadcasting, depends on channel coding to achieve good performance \cite{Sari95,Wilson95,Wang04}.
Channel coding allows OFDM to exploit the channel frequency diversity by using the non-faded subcarriers to recover the information carried on attenuated subcarriers. By contrast, the single-carrier (SC) scheme is able to exploit such diversity even in the absence of channel coding, since each transmitted symbol spreads throughout the entire used band due to its smaller duration when compared to OFDM. Furthermore, the cyclic-prefix (CP) and the one-tap equalizer techniques, which allow low complexity equalization, are not a privilege of OFDM and can as well be applied to the SC scheme \cite{Falconer02}. It is also possible to enhance the performance of the SC scheme with little additional complexity by using the decision-feedback equalizer (DFE). In addition, it was shown by \cite{Cioffi95_p1} that using a DFE without error propagation, \emph{i.e.}, an ideal DFE, with an unbiased minimum mean square error (MMSE) criterion the channel capacity can be achieved. These facts have spawned many comparisons between OFDM and SC.
Analytical results were provided by \cite{Wilson95} and \cite{Aue98}. The latter, by using the cutoff rate, analyzes the effect of the coding rate considering 4-quadrature amplitude modulation (QAM), a two-tap block-fading channel configuration scenario and just a SC with linear equalization. The former shows that both the OFDM and SC-DFE schemes can exploit the frequency diversity on frequency selective block-fading channels, but there are no results on channel capacities differences. This subject is addressed in \cite{Franceschini08}, in which the authors conjecture that the SC scheme with an optimal receiver has equal or better performance than OFDM for a uniform power spectrum transmission. It also considers that the possible rate difference will be larger when the channel is more frequency selective.
In this letter, we analyze such behavior using channel capacity results and Jensen's inequality \cite{Cover06} and we show that the conjecture presented by \cite{Franceschini08} is valid for any given channel but only for 4- and 16-QAM. For higher-order modulations, the conjecture is true in almost all scenarios, but in some very particular cases the OFDM scheme can present some marginal capacity advantage over the ideal SC-DFE system.
This letter is organized as follows. In Section \ref{sec:System_Model}, we describe the system model. In Section \ref{sec:capacity}, the comparison between the schemes is done through the use of the channel capacity for different modulation cardinalities and any channel, using concavity function analysis and Jensen's inequality. Finally, conclusions are stated in Section \ref{sec:conclusions}.
\section{System Model}
\label{sec:System_Model}
In block transmission schemes using the CP approach, a CP of length $L-1$ is appended before each block $\mathbf{s}=\left[s_0,\cdots, s_{N-1}\right]^T$ is transmitted. These samples are linearly convolved with a channel $\mathbf{h}=\left[h_0,\cdots, h_{L-1},0,\cdots,0\right]^T$, with $L<N$, and a zero-mean circularly complex white Gaussian noise with variance $\sigma_v^2$ is added to the channel output. By removing the CP at the receiver, the received signal can be represented by $\mathbf{r}=\boldsymbol{\mathcal{H}}\mathbf{s}+\mathbf{v}$, where $\boldsymbol{\mathcal{H}}$ is a circulant matrix whose first column is given by $\mathbf{h}$ and $\mathbf{v}=\left[v_0,\cdots, v_{N-1}\right]^T$ is the added noise. Without loss of generality, we assume that $\mathbf{h}$ has unit norm. In the following, we build the OFDM and SC-DFE schemes upon this model.
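As a sanity check on this model, the sketch below (plain Python, with an illustrative two-tap unit-norm channel) builds the circulant matrix $\boldsymbol{\mathcal{H}}$ and verifies numerically that $\mathbf{F}\boldsymbol{\mathcal{H}}\mathbf{F}^H$ is diagonal, with the diagonal given by the DFT of $\mathbf{h}$ (normalization conventions for the diagonal vary with how $\mathbf{F}$ is scaled):

```python
import cmath

N = 8
h = [0.8, 0.6] + [0.0] * (N - 2)          # unit-norm illustrative 2-tap channel

# Circulant channel matrix: entry (r, c) is h[(r - c) mod N].
H_circ = [[h[(r - c) % N] for c in range(N)] for r in range(N)]

# Unitary DFT matrix F_{mn} = exp(-2*pi*i*m*n/N) / sqrt(N).
F = [[cmath.exp(-2j * cmath.pi * m * n / N) / N ** 0.5 for n in range(N)]
     for m in range(N)]
FH = [[F[j][i].conjugate() for j in range(N)] for i in range(N)]   # F^H

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

D = matmul(matmul(F, H_circ), FH)

# The off-diagonal entries vanish: with the CP, the channel is diagonal
# in the frequency domain and one-tap equalization becomes possible.
off = max(abs(D[i][j]) for i in range(N) for j in range(N) if i != j)
lam = [sum(h[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
       for k in range(N)]
diag_err = max(abs(D[k][k] - lam[k]) for k in range(N))
print(off, diag_err)
```

Both residuals are at machine precision, confirming that the circulant structure induced by the CP is what enables the one-tap equalizer used below.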
\subsection{Orthogonal Frequency Division Multiplexing}
Consider $\mathbf{x}=\left[x_0,\cdots, x_{N-1}\right]^T$ the block of QAM symbols to be transmitted in $N$ subcarriers. This is achieved by multiplying $\mathbf{x}$ by the unitary inverse discrete Fourier transform (IDFT) matrix of dimension $N\times N$, which results in $\mathbf{s}=\mathbf{F}^H\mathbf{x}$. In the receiver, we apply a discrete Fourier transform (DFT) $\mathbf{F}$ to $\mathbf{r}$ and we obtain $\mathbf{y}=\mathbf{H}\mathbf{x}+\mathbf{Fv}$,
where $\mathbf{H}=\mathbf{F}\boldsymbol{\mathcal{H}}\mathbf{F}^H=\rm{diag}\{\mathbf{Fh}\}$, and $\mathbf{Fv}$ is also a zero-mean circularly complex white Gaussian noise with variance $\sigma_v^2$, due to the use of the unitary DFT.
We can interpret each element of $\mathbf{y}$ as an additive white Gaussian noise (AWGN) channel with a complex gain. Thus, the equalizer used to estimate the transmitted symbols can be reduced to phase and magnitude compensations, the so-called one-tap equalizer, which do not change the SNR in each subcarrier. Then, the estimated symbols are $\tilde{\mathbf{x}}=\mathbf{Q}\mathbf{y},$ where ${\mathbf{Q}}=\rm{diag}\{\left[Q_0,\cdots, Q_{N-1}\right]\}=\mathbf{H}^{-1}$.
\subsection{Single Carrier-Decision Feedback Equalizer}
In order to make the capacity comparison feasible, we have to make some simplifications. The first one is that there is no error propagation, since it is hard to analyze and it is known to considerably affect the DFE performance. In practice, this can be approximated by using iterative block DFE schemes that account for the reliability of the fed-back decisions at the expense of some additional computational complexity \cite{Ben05, Sari06, Hanzo11}. In this paper, in order to eliminate error propagation, we assume that the transmitted sequence is known at the receiver and is fed back into the feedback filter. The second assumption is related to the causality imposed by the feedback filter and the constraint imposed by the circular convolution needed for the one-tap equalization. This requires initializing the feedback filter inputs with the CP samples, but these symbols are only decided at the end of the block, which implies non-causality. In practice, this can be solved by using the unique word (UW) technique \cite{Witschnig02,Falconer02}, a fixed pseudo-random sequence known to the receiver that is used in place of the CP. Nevertheless, if we use the same DFT length as with the CP technique, the UW technique has a lower spectral efficiency. A possible solution to make both schemes have almost the same efficiency is to consider a DFT block much larger than the UW or CP lengths. However, in order to simplify calculations and to keep the same transmission structure, we assume the non-causality hypothesis and use the CP method in the analysis.
The SC scheme can be viewed as a linearly precoded OFDM where the symbols $\mathbf{x}$ are precoded by the unitary DFT matrix $\mathbf{F}$ (\emph{i.e.}, $\mathbf{s}=\mathbf{F}^H\mathbf{F}\mathbf{x}=\mathbf{x}$) and the received signal, after the one-tap equalizer, must be decoded by the unitary IDFT matrix $\mathbf{F}^H$.
However, SC allows us to improve its performance by using a non-linear filter, the DFE, which uses past decisions on the received signal to remove intersymbol-interference. Thus, using the hypotheses discussed in the beginning of this subsection, the SC-DFE output can be written as $\tilde{\mathbf{x}}=\mathbf{F}^H\left[\mathbf{QF}\boldsymbol{\mathcal{H}}+\mathbf{BF}\right]\mathbf{x}+\mathbf{F}^H\mathbf{QFv}$, where ${\mathbf{Q}}=\rm{diag}\{\left[Q_0,\cdots, Q_{N-1}\right]\}$ and ${\mathbf{B}}=\rm{diag}\{\left[B_0,\cdots, B_{N-1}\right]\}$ are the feedforward and feedback filters in the frequency domain respectively.
The chosen criterion to obtain the filter coefficients is the unbiased MMSE criterion, since it is shown by \cite{Cioffi95_p1} that an ideal DFE with such criterion can achieve the channel capacity. The coefficients can be calculated as shown in \cite{Benvenuto02,Falconer_DFE02}. Also, with regard to the MMSE SC-DFE, the number of feedback coefficients that maximizes the performance is equal to the channel memory \cite{Valcarce04}.
\section{Capacity Analysis}
\label{sec:capacity}
In this section, we analyze the capacity difference between the ideal SC-DFE and OFDM schemes. Initially, in order to calculate the OFDM capacity, let us define the signal-to-noise ratio (SNR) for each subcarrier as:
\begin{equation}
\gamma_k=\gamma\left|H_k\right|^2,
\end{equation}
where $H_k$ is the $k^{th}$ element of the diagonal of $\mathbf{H}$ and $\gamma=\frac{\sigma_x^{2}}{\sigma_v^2}$ is the average SNR, since $\sum_{k=0}^{N-1}\left|H_k\right|^2=1$, due to the unitary norm channel and the unitary DFT.
Let us first consider a Gaussian input. In this case, the capacity per real dimension under uniform power allocation is the average capacity of the subcarriers:
\begin{equation}
C_{\mathrm{OFDM}}=\frac{1}{N}\sum_{k=0}^{N-1}{\frac{1}{2}\log_2\left(1+\gamma_k\right)}.
\label{eq:cap_ofdm_gauss}
\end{equation}
It is noteworthy that a single channel code can be used to code the information bits to form $\mathbf{x}$ as long as the information rate is lower than \eqref{eq:cap_ofdm_gauss} \cite{Root68}.
For the ideal SC-DFE scheme, in order to calculate its capacity, we must obtain the SNR at its output. Let us first assume a system without CP and an infinite-length feedforward filter. In such a case, it is known that the SNR at the output of such an MMSE-DFE equalizer is given by \cite[p.663]{Proakis08}:
\begin{equation}
\gamma_{\mathrm{DFE},\infty}=\exp\left\{\frac{T}{2\pi}\int_{-\pi/T}^{\pi/T}\log\left(\frac{\sigma_v^2+\sigma_x^2\left|H(e^{j\omega T})\right|^2}{\sigma_v^2}\right)d\omega\right\}-1,
\label{eq:SNR_dfe_gauss_infinity}
\end{equation}
where $H(e^{j\omega T})$ is the discrete time Fourier transform of the channel impulse response at the frequency $\omega$ and $T$ is the symbol period.
Since the CP guarantees that an FIR channel without spectral nulls can be perfectly inverted by another FIR filter, and since the transmission power and bandwidth remain the same when the CP is added, the SNR at the SC-DFE output is obtained by discretizing the SNR given by (\ref{eq:SNR_dfe_gauss_infinity}) at the frequencies $\omega(k)=2\pi k/NT$:
\begin{equation}
\gamma_\mathrm{DFE}=\exp\left\{\frac{1}{N}\sum_{k=0}^{N-1}{\log\left(1+\gamma_k\right)}\right\}-1,
\label{eq:DFE_SNR}
\end{equation}
since $\frac{\sigma_v^2+\sigma_x^2\left|H\left(e^{j\omega(k)T}\right)\right|^2}{\sigma_v^2}=1+\gamma_k$.
Therefore, also assuming that a Gaussian modulation is used, so that the equalizer output is also Gaussian, the capacity per real dimension of the ideal SC-DFE is:
\begin{equation}
C_{\mathrm{DFE}}=\frac{1}{2}\log_2\left(1+\gamma_\mathrm{DFE}\right).
\label{eq:cap_dfe_gauss}
\end{equation}
By placing \eqref{eq:DFE_SNR} in \eqref{eq:cap_dfe_gauss}, it results exactly in \eqref{eq:cap_ofdm_gauss}; hence, the OFDM and ideal SC-DFE systems present the same capacity.
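This equality is purely algebraic and can be checked numerically. The sketch below (plain Python, with arbitrary illustrative per-subcarrier SNRs standing in for $\gamma_k=\gamma|H_k|^2$) evaluates \eqref{eq:cap_ofdm_gauss} and \eqref{eq:DFE_SNR}-\eqref{eq:cap_dfe_gauss} and confirms that they coincide for any set of $\gamma_k$:

```python
import math
import random

random.seed(1)
N = 64
# Arbitrary positive per-subcarrier SNRs, standing in for gamma_k.
gamma_k = [random.expovariate(0.2) for _ in range(N)]

# OFDM: average of the per-subcarrier Gaussian capacities, eq. (2).
C_ofdm = sum(0.5 * math.log2(1 + g) for g in gamma_k) / N

# Ideal unbiased MMSE SC-DFE: output SNR from eq. (4), capacity from eq. (5).
gamma_dfe = math.exp(sum(math.log(1 + g) for g in gamma_k) / N) - 1
C_dfe = 0.5 * math.log2(1 + gamma_dfe)

print(C_ofdm, C_dfe)   # identical up to rounding
```

The agreement follows directly from $\log_2(1+\gamma_{\mathrm{DFE}})=\frac{1}{\ln 2}\cdot\frac{1}{N}\sum_k\ln(1+\gamma_k)$, so for Gaussian inputs the two schemes share the same capacity on every channel.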
However, such an analysis considers that Gaussian symbols are transmitted, which is not the case in practice. For ordinary modulations, such as $M$-ary QAM, the capacity can still be numerically evaluated. Considering an AWGN channel and only square $M$-QAM schemes, the associated capacity is twice the capacity of a $\sqrt{M}$-pulse amplitude modulation (PAM) \cite{Cover06}:
\begin{equation}
C^{\text{M-QAM}}(\gamma)=-2\int\limits_{-\infty}^{+\infty}{f_{R}\left(r,\gamma\right)\log_2f_{R}\left(r,\gamma\right)dr}-\log_2{\frac{2\pi e\sigma_x^2}{\gamma}},
\label{eq:Cap_mqam}
\end{equation}
where $f_{R}\left(r,\gamma\right)=\sqrt{\frac{\gamma}{2M\pi\sigma_x^2}}\sum_{m=-\sqrt{M}/2}^{\sqrt{M}/2-1}e^{-\frac{\gamma\left(r-(2m+1)\right)^2}{2\sigma_x^2}},$
which corresponds to the sum of Gaussian distributions with variance $\sigma_x^2/\gamma$, centered at the $\sqrt{M}$-PAM points and weighted by $1/\sqrt{M}$ in order to have unit area.
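Equation \eqref{eq:Cap_mqam} can be evaluated with a simple trapezoidal integration. In the sketch below (plain Python), $\sigma_x^2$ is taken as $(M-1)/3$, the per-dimension power of the unnormalized PAM levels $\pm 1, \pm 3, \ldots$, so that $\gamma$ plays the role of the per-dimension SNR; this normalization, as well as the grid and integration limits, are illustrative choices:

```python
import math

def mqam_capacity(M, gamma_db):
    """Numerical evaluation of eq. (7): capacity in bits per complex
    symbol of square M-QAM on an AWGN channel at SNR gamma."""
    gamma = 10 ** (gamma_db / 10)
    sqm = int(round(math.sqrt(M)))
    sigma_x2 = (M - 1) / 3.0           # power of the +/-1, +/-3, ... levels (assumption)
    var = sigma_x2 / gamma             # Gaussian noise variance per real dimension
    levels = [2 * m + 1 for m in range(-sqm // 2, sqm // 2)]
    pref = math.sqrt(gamma / (2 * M * math.pi * sigma_x2))

    def f(r):                          # the mixture density f_R(r, gamma)
        return pref * sum(math.exp(-gamma * (r - a) ** 2 / (2 * sigma_x2))
                          for a in levels)

    # Trapezoidal integration of -f*log2(f) over the support of the mixture.
    lo, hi = levels[0] - 8 * math.sqrt(var), levels[-1] + 8 * math.sqrt(var)
    n = 4000
    dr = (hi - lo) / n
    ent = 0.0
    for i in range(n + 1):
        fr = f(lo + i * dr)
        if fr > 0:
            w = 0.5 if i in (0, n) else 1.0
            ent += -w * fr * math.log2(fr) * dr
    return 2 * ent - math.log2(2 * math.pi * math.e * sigma_x2 / gamma)

print(mqam_capacity(16, 30))   # saturates near log2(16) = 4 bits
print(mqam_capacity(16, 0))    # stays just below the Gaussian capacity of 1 bit
```

At high SNR the result saturates at $\log_2 M$, while at low SNR it stays just below the Gaussian capacity $\log_2(1+\gamma)$; this saturation is exactly what drives the OFDM/SC-DFE gap discussed next.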
Equation (\ref{eq:Cap_mqam}) has a shape similar to the Gaussian capacity, but it has an asymptotic gap of 1.53 dB (\emph{i.e.}, the shaping gain) when $M\to\infty$ and it will saturate at $\log_2M$ for finite $M$. This saturation is the main reason that OFDM in frequency-selective channels will present a capacity degradation when compared to the ideal SC-DFE scheme. Due to the saturation, for certain average SNR values, some subcarriers cannot attain a capacity close to what would be attained with a larger $M$ at the same SNR, and are thus not able to provide the extra capacity needed to compensate for the attenuated subcarriers. To the best of the authors' knowledge, such a limitation has only been observed in references \cite{Franceschini08} and \cite{Wesel95}, but the latter lacks a detailed explanation and the former, although it provides compelling evidence, does not provide conclusive proofs on the capacity comparison of OFDM with SC schemes.
Before establishing a detailed analysis, let us first illustrate the origin of the OFDM capacity limitations. To this end, let us consider that any residual ISI of the QAM signal at the ideal SC-DFE output can be modeled as Gaussian noise, by virtue of the central limit theorem. Hence, the capacity of the ideal SC-DFE system can be evaluated by applying \eqref{eq:DFE_SNR} in~\eqref{eq:Cap_mqam}:
\begin{equation}
C^{\text{M-QAM}}_{\text{DFE}}=C^{\text{M-QAM}}\left(\gamma_{\mathrm{DFE}}\right).
\label{eq:Cap_mqam_sccp}
\end{equation}
On the other hand, the OFDM capacity is the average of the capacities in the different subcarriers:
\begin{equation}
\label{eq:Cap_mqam_ofdm}
C^{\text{M-QAM}}_{\text{OFDM}}=\frac{1}{N}\sum_{k=0}^{N-1}{C^{\text{M-QAM}}\left(\gamma_k\right)}.
\end{equation}
If we have $\gamma\rightarrow\infty$, then (\ref{eq:Cap_mqam_sccp}) and (\ref{eq:Cap_mqam_ofdm}) converge to $\log_2M$. If $\gamma$ is small enough for all $k$ to have $C^{\text{M-QAM}}(\gamma_k)\approx C^{\text{M}'\text{-QAM}}(\gamma_k)$, with {$M'\gg~M$}, then the capacities are practically equal. However, if $C^{\text{M-QAM}}(\gamma_k)$ falls close to the saturation region of (\ref{eq:Cap_mqam}) for certain values of $k$, then the OFDM capacity will be inferior to that of the ideal SC-DFE.
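This mechanism can be reproduced with a short numerical sketch. We stress that the code below uses the crude surrogate $\min\left(\log_2(1+\gamma),\log_2M\right)$ for the saturating $M$-QAM capacity (an approximation we introduce purely for illustration; it ignores the shaping gap) together with the relation $\gamma_{\mathrm{DFE}}=\exp\left(\frac{1}{N}\sum_{k}\ln\left(1+\gamma_k\right)\right)-1$:

```python
import math

def gamma_dfe(snrs):
    # ideal SC-DFE SNR: gamma_DFE = exp( (1/N) * sum ln(1 + gamma_k) ) - 1
    return math.exp(sum(math.log(1 + g) for g in snrs) / len(snrs)) - 1

def cap_sat(g, M):
    # crude surrogate for the saturating M-QAM capacity (illustration only)
    return min(math.log2(1 + g), math.log2(M))

def compare(snrs, M):
    """Return (ideal SC-DFE capacity, OFDM capacity) under the surrogate."""
    c_dfe = cap_sat(gamma_dfe(snrs), M)
    c_ofdm = sum(cap_sat(g, M) for g in snrs) / len(snrs)
    return c_dfe, c_ofdm
```

With subcarrier SNRs $\{100,1\}$ and $M=16$, the strong subcarrier saturates at 4 bits and OFDM loses capacity with respect to the ideal SC-DFE; with $M=1024$ no subcarrier saturates and, by construction of $\gamma_{\mathrm{DFE}}$, both surrogate capacities coincide.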
As an example and without loss of generality, we calculate the OFDM and SC-DFE capacities for the unit-norm channel with zeros at $0.95\exp\left(\pm j 0.9\pi\right)$, considering 16- and 64-QAM modulations, $N=8$ and $\gamma= 11$ dB. The results are depicted in Fig.~\ref{Fig:cap_highsnr}, where we show the ideal SC-DFE and OFDM capacities, as well as the OFDM capacity of each one of its subcarriers.
As can be seen, the OFDM and SC-DFE capacities are practically the same for 64-QAM. However, for the same average SNR, when using 16-QAM, there is a capacity difference between them. This is due to the fact that the capacity of some subcarriers (the ones with higher gains) falls close to the capacity saturation region and cannot compensate for the lower capacity of the attenuated subcarriers. It is worth noting that the capacity of the SC-DFE has barely changed, since $\gamma_{\mathrm{DFE}}$ does not depend on the cardinality and, for such an SNR, the capacities of 16- and 64-QAM are almost the same.
From the results discussed in the previous paragraph, we can predict that larger deviations of $\left|H_k\right|^2$ will be more prone to create differences between the capacities of the OFDM and ideal SC-DFE schemes.
In the next subsection, we show
that the ideal SC-DFE is always better than OFDM for 4- and 16-QAM, and that for higher cardinalities, OFDM can be marginally better in some specific cases.
\begin{figure}
\centering
\includegraphics[scale=.47]{cap_highSNR2.eps}\vspace{-0.2cm}
\caption{Capacity for the OFDM and SC-DFE systems considering 16-QAM and 64-QAM for a SNR value equal to $\gamma=11$ dB. The ($\nabla$) and the ($\Box$) represent the SC-DFE and OFDM capacities respectively and the (o) represents the capacity of each subcarrier.}
\label{Fig:cap_highsnr}
\end{figure}
\subsection{Theoretical Capacity Comparison between the ideal SC-DFE and OFDM using QAM}\label{ap:Cap_teo}
In order to compare \eqref{eq:Cap_mqam_sccp} and \eqref{eq:Cap_mqam_ofdm}, let us first define $\phi\left(x\right)=\log\left(x+1\right)$, so that $\gamma_{DFE}$ can be expressed as:
\begin{equation}
\gamma_{DFE}=\phi^{-1}\left(\frac{1}{N}\sum_{k=0}^{N-1}\phi\left(\gamma_k\right)\right).
\end{equation}
In the following, we can write:
\begin{equation}
\tau\left(x\right)=C^{\text{M-QAM}}\left(\phi^{-1}(x)\right),
\end{equation}
so that \eqref{eq:Cap_mqam_sccp} and \eqref{eq:Cap_mqam_ofdm} can be rewritten as
\begin{equation}
C^{\text{M-QAM}}_{\text{DFE}}\left(\gamma\right)=\tau\left(\frac{1}{N}\sum_{k=0}^{N-1}\phi\left(\gamma_k\right)\right),
\end{equation}
and
\begin{equation}
C^{\text{M-QAM}}_{\text{OFDM}}\left(\gamma\right)=\frac{1}{N}\sum_{k=0}^{N-1}\tau\left(\phi\left(\gamma_k\right)\right).
\end{equation}
Therefore, it suffices to analyze the concavity of:
\begin{equation}
\label{eq:tau}
\tau\left(x\right)=C^{\text{M-QAM}}\left(\exp(x)-1\right).
\end{equation}
The concavity properties of \eqref{eq:tau} can be analyzed through its second derivative. If the second derivative is positive, the function $\tau\left(x\right)$ is convex in the considered interval; otherwise, $\tau\left(x\right)$ is concave. In the concave case, Jensen's inequality yields:
\begin{equation}
\tau\left(\frac{1}{N}\sum_{k=0}^{N-1}\phi\left(\gamma_k\right)\right) \geq \frac{1}{N}\sum_{k=0}^{N-1}\tau\left(\phi\left(\gamma_k\right)\right),
\end{equation}
which means that $C^{\text{M-QAM}}_{\text{DFE}}\left(\gamma\right)\geq C^{\text{M-QAM}}_{\text{OFDM}}\left(\gamma\right)$.
In Fig. \ref{Fig:concavity}, we show the numerically evaluated second derivative of the function $\tau(x)$ for different values of $M$. We can observe that it is non-positive for $M$ equal to 4 and 16. For the other cardinalities, there are intervals of $x$ in which the second derivative is positive, and thus OFDM may present a higher capacity than the ideal SC-DFE, depending on the channel. Nevertheless, the second derivative of $\tau(x)$ assumes only small positive values, from which we conclude that the performance advantage tends to be small even for larger cardinalities, where these intervals of $x$ get broader. For instance, the ratio $C^{\text{M-QAM}}_{\text{DFE}}/C^{\text{M-QAM}}_{\text{OFDM}}$ for the channel $H(z)=0.1624(1+z^{-4})+0.4546(z^{-1}+z^{-3})+0.7307z^{-2}$ is shown in Fig.~\ref{Fig:comp_cap} for 1024-QAM. This channel was chosen because it emphasizes the advantage of the OFDM scheme over a certain range of SNR; even so, its capacity is less than 1\% superior to that of the ideal SC-DFE, and the same channel yields a noticeable advantage for the SC-DFE in higher SNR regimes. In general, for 64- and 256-QAM, the OFDM capacity advantage is even smaller, when it exists at all. In particular, for 64-QAM, the second derivative of $\tau(x)$ is positive only for a small interval of $x$ comprised between 2.568 and 2.724, where it attains a maximum value of $3.58\times10^{-4}$. Thus, for practical purposes, the SC-DFE capacity can be considered equal or superior to the OFDM capacity for any channel at this modulation cardinality.
\begin{figure}[!t]
\centering
\includegraphics[scale=.5]{tau_concavity.eps}\vspace{-0.2cm}
\caption{Second derivative of $\tau(x)$}
\label{Fig:concavity}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale=.5]{teste.eps}\vspace{-0.2cm}
\caption{Capacity ratio between $C^{1024\text{-QAM}}_{\mathrm{DFE}}$ and $C^{1024\text{-QAM}}_{\mathrm{OFDM}}$ for 1024-QAM, $H(z)=0.1624(1+z^{-4})+0.4546(z^{-1}+z^{-3})+0.7307z^{-2}$ and N=512.}
\label{Fig:comp_cap}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
In this letter, we have compared the channel capacities of the ideal SC-DFE and OFDM schemes for square $M$-QAM modulations in frequency selective channels. We first showed that the subcarriers whose capacity is close to the limit imposed by the modulation cannot fulfill the expected capacity to compensate for the attenuated subcarriers. Then, using Jensen's inequality and the square $M$-QAM capacity, we proved that the ideal SC-DFE capacity is larger than the OFDM capacity for 4- and 16-QAM for any given channel. We have also analyzed higher-order square QAM; in that case, the OFDM scheme may surpass the ideal SC-DFE capacity, but only by a very small amount and only in specific scenarios.
\bibliographystyle{IEEEbib}
\section{Introduction}\label{introduction}
Integrated circuits are made of several layers. In a nutshell, the bottom layers contain the transistors, while the other layers (called {\em metal layers}) are used to connect the different components to comply with the designed functionalities of the device. We usually distinguish two types of components in metal layers: {\em vias} and {\em `wires'}. The former are used for vertical connections and the latter for horizontal ones, see Fig. \ref{integrated_circuit} for an illustration.
\begin{center}
\includegraphics[scale = 0.25]{circuit.pdf}
\captionof{figure}{A 3D view of an integrated circuit (source: \url{https://commons.wikimedia.org/wiki/File:Silicon_chip_3d.png})}\label{integrated_circuit}
\end{center}
Integrated circuit manufacturing involves the production of each layer of the circuit iteratively. The core technique used in production is {\em lithography}. In brief, the idea is to etch each layer by exposing a photosensitive material to a light source through a mask: this creates a kind of {\em mould}, which is then filled with an appropriate conductor material (the mould is later removed through some chemical process). Optical distortion might however result in the fusion of components if attention is not paid to keep the minimum distance between any two components above a certain threshold, called {\em the lithography distance}. When some components are below this distance, the production of the mould is typically decomposed into several rounds of lithography: sub-masks are defined for (feasible) subsets of components, and the mould is produced in sequence, a process called {\em multiple patterning}. The process requires the proper alignment of the sub-masks which is a major challenge as the number of rounds increases. Hence multiple patterning induces additional costs and time in the production process and the industry tries to keep the number of patterning steps small. The problem of finding the minimum number of patterning steps readily translates into a vertex coloring problem. Indeed, one can build a graph $G$ whose node set is the set of components and two components are adjacent if they are at a distance less or equal to the lithography distance. This graph is sometimes called the {\em conflict graph}. The minimum number of rounds needed to produce the mould is the chromatic number of $G$.
DSA-aware Multiple Patterning (DSA-MP) is a technique that combines Directed Self-Assembly (DSA) technology with lithography in order to go beyond the resolution limit imposed by lithography alone in the fabrication of integrated circuits. Due to the nature of the process, it is particularly useful for the manufacturing of {vias}. DSA-MP allows to introduce within a same sub-mask some vias that are closer than the lithography distance, under certain conditions. The main idea is to intentionally fuse some vias within larger objects, called {\em guiding patterns}, and then to correct the corresponding design with DSA in a second step. More precisely, the mould, possibly obtained after applying several rounds of lithography, is filled with a {\em block copolymer} in a random state that self-organizes in a structured way after a chemical reaction is triggered: if the guiding patterns are designed appropriately, and thus if the vias that are fused satisfy some specific properties, one of the two polymers assembles into a set of cylinders that can then be used for via creation after some additional processing. We refer the reader to \cite{IP} for more details about integrated circuit manufacturing and DSA-aware multiple patterning.
One of the core problems in DSA-MP is again that of minimizing the number of lithography steps, as the production time and costs are still dominated by lithography. We showed in \cite{IP} that several variants are of interest in the industry. One such variant consists in fusing ``small chains of vias'' and it reduces to the following variant of vertex coloring: given a graph $G=(V,E)$, a subset $F$ of edges of $E$ and an integer $k\geq 0$, color the vertices of $G$ with a minimum number of colors so that each color induces, in $G$, a disjoint union of paths of $G_F=(V,F)$ of length at most $k$. We call this problem the {\em $k$-path coloring problem}. Concretely, the node set of $G$ is the set of vias in a layer, the edges correspond to the pairs of vias that are closer than the lithography distance, $F$ contains the pairs of vias whose distance is within a certain range specific to the DSA technology used (but at most the lithography distance), and $k$ is a small number. The Electronic Design Automation (EDA) industry is especially interested in the $1$-path and the $2$-path coloring problems as they correspond to the current technological capabilities. Regular (vertex) coloring is equivalent to $0$-path coloring, which proves that (i) $k$-path coloring is NP-hard in general and (ii) that we can only improve upon standard multiple patterning by choosing $k\geq 1$. The problem was introduced under the same name\footnote{Some authors use the same terminology for another variant of graph coloring, see for instance \cite{Frick,Johns,Mynhardt}.} in \cite{Akiyama} in the special case where $F=E$. In particular, it is known that $k$-path $L$-colorability is already hard for $L=2$ and $k=1$ \cite{Jinjiang1} and for $L=3$ and $k=2$ \cite{Jinjiang2}.
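To make the feasibility condition concrete, the sketch below (an illustration of ours, with hypothetical names; vertices are $0,\dots,n-1$ and \texttt{color} is a candidate assignment) checks whether a coloring is a valid $k$-path coloring, i.e. whether every color class induces, in $G$, a disjoint union of $F$-paths with at most $k$ edges:

```python
def valid_k_path_coloring(n, E, F, k, color):
    """Check that each color class induces, in G = (V, E), a disjoint
    union of F-paths with at most k edges."""
    Fset = {frozenset(e) for e in F}
    for c in set(color):
        verts = [v for v in range(n) if color[v] == c]
        vset = set(verts)
        adj = {v: [] for v in verts}
        for (u, v) in E:
            if u in vset and v in vset:
                if frozenset((u, v)) not in Fset:
                    return False            # induced edge outside F
                adj[u].append(v)
                adj[v].append(u)
        if any(len(adj[v]) > 2 for v in verts):
            return False                    # a vertex of degree >= 3
        seen = set()
        for s in verts:                     # walk each path from an endpoint
            if s in seen or len(adj[s]) == 2:
                continue
            seen.add(s)
            edges, prev, cur = 0, None, s
            while True:
                nxt = [w for w in adj[cur] if w != prev]
                if not nxt:
                    break
                prev, cur = cur, nxt[0]
                seen.add(cur)
                edges += 1
            if edges > k:
                return False                # an F-path with more than k edges
        if len(seen) != len(verts):
            return False                    # a cycle remained
    return True
```

Such a checker is useful to validate the output of any heuristic or exact algorithm for the problem.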
Mentor Graphics has developed in-house heuristics to solve the problem quickly. A typical design can be made of several hundred thousand vias per layer and Mentor Graphics' heuristics can solve (approximately) these instances in less than a second. However, the company has an interest in fast exact algorithms as well. Initially, its main interest in exact approaches lay in the possibility to assess the quality of its heuristics. However, because of our first encouraging results with integer programming \cite{IP}, the company understood that exact approaches might actually find their way to production. While testing different integer programming models in \cite{IP}, we observed that typical industrial instances exhibit some structure. We indeed noticed that most instances are extremely sparse and typically `tree-like'. The main reason is that the design of the circuit (and the placement of vias in particular) is made with lithography constraints in mind, so that distances between vias are kept as large as possible. In this project, we decided to test whether exact algorithms exploiting the `tree-like' property could be competitive with heuristics (in terms of computational time) for production. A well-known measure of ``tree-likeness'' is the notion of treewidth introduced by Robertson and Seymour \cite{Robertson_seymour_1986}.
\begin{definition}{(Tree decomposition ~\cite{Robertson_seymour_1986}).}
Let $G=(V,E)$ be a graph. A {\em tree decomposition} of $G$ is a pair $(X,T)$ where $T$ is a tree and $X=\{X_1, \dots, X_n\}$ is the set of nodes of $T$, called {\em bags} such that $\forall i \in \{1, \dots, n\}$, $X_i \subseteq V$ and the three following conditions are verified:
\begin{enumerate}
\item $\cup_{i \in \{1, \dots, n\}} X_i = V$.
\item $\forall (u,v) \in E, \exists X_i \in X$ such that $u,v \in X_i$.
\item $\forall u \in V$, the bags containing $u$ induce a connected sub-tree of $T$.
\end{enumerate}
\end{definition}
The {\em width} of a tree decomposition is the size of the largest bag minus one. The {\em treewidth} of a graph $G$ is the minimum width over all possible tree decomposition of $G$. In particular the treewidth of a tree is $1$ (each edge can be used as a bag).
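The three conditions of a tree decomposition are easy to verify programmatically. The following sketch (our own illustrative checker; bags are Python sets indexed by the nodes of $T$, and $T$ is assumed to be a valid tree) tests a candidate pair $(X,T)$:

```python
def is_tree_decomposition(V, E, bags, tree_edges):
    """Check the three conditions of a tree decomposition of G = (V, E).
    `bags` maps a node of T to a set of vertices of G; `tree_edges` are
    the edges of T (assumed to form a tree)."""
    # 1. every vertex of G belongs to some bag
    if set().union(*bags.values()) != set(V):
        return False
    # 2. every edge of G is contained in some bag
    for (u, v) in E:
        if not any(u in b and v in b for b in bags.values()):
            return False
    # 3. for every vertex, the bags containing it induce a connected subtree
    for x in V:
        nodes = {i for i, b in bags.items() if x in b}
        adj = {i: [] for i in nodes}
        for (i, j) in tree_edges:
            if i in nodes and j in nodes:
                adj[i].append(j)
                adj[j].append(i)
        start = next(iter(nodes))
        seen, stack = {start}, [start]
        while stack:             # DFS restricted to the bags containing x
            cur = stack.pop()
            for w in adj[cur]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        if seen != nodes:
            return False
    return True
```

The width of the candidate decomposition is then simply the size of its largest bag minus one.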
There are many combinatorial optimization problems that are hard in general but that can be solved in polynomial time on graphs with bounded treewidth (see for instance \cite{parametrizedcomplexity}). Such problems include stable set, dominating set, vertex coloring, Steiner tree, feedback vertex set, hamiltonian path, etc... Besides, Courcelle~\cite{Courcelle_1990} has shown that a large class of problems on graphs can be solved in linear time when restricted to graphs with bounded treewidth. Courcelle's theorem essentially states that every graph property that can be formulated in {\em monadic second order logic} (MSOL)\footnote{Monadic second order logic in graphs is a formulation of a property of a graph using logical connectors ($\land$, $\lor$ $\lnot$, $\iff$, etc...), quantification on vertices and sets of vertices ($\forall v \in V$, $\exists v \in V$, $\forall V' \subseteq V$, $\exists V' \subseteq V$), quantification on edges and sets of edges ($\forall e \in E$, $\exists e \in E$, $\forall E' \subseteq E$, $\exists E' \subseteq E$), membership tests ($e\in F$, $v\in W$, etc...) and incidence tests ($v$ endpoint of $e$, $(u,v)\in E$).} can be decided in linear time when restricted to graphs with bounded treewidth. It is possible to prove that $k$-path $L$-colorability falls under Courcelle's theorem umbrella, see \cite{Dehia}. It then follows that $k$-path coloring is polynomial in graphs with bounded treewidth \footnote{It is part of folklore that the chromatic number of a graph with treewidth $w$ is at most $w+1$ (we can `greedily' color the vertices as there is always a vertex of degree less or equal to $w$, see for instance \cite{Bodlaender2}). Hence the $k$-path chromatic number is also bounded by $w+1$. We can in particular restrict testing for $k$-path $L$-colorability to $L=1,...,w+1$.}. Unfortunately Courcelle's theorem, albeit linear in the graph size, is considered impractical \cite{parametrizedcomplexity}: {\em ``Courcelle's theorem and its
variants should be regarded primarily as classification tools, whereas designing efficient dynamic-programming routines on tree decompositions requires
`getting your hands dirty' and constructing the algorithm explicitly.''} In this paper, we develop such a direct dynamic programming approach for the $k$-path $L$-coloring problem and we test the performances of the corresponding algorithm on real-world instances from Mentor Graphics arising from DSA-aware Multiple Patterning for $k=1$ and $k=2$ (the cases of interest in the industry).
\subsection*{Additional definitions and notations}
Given a graph $G=(V,E)$ and a subset $E'$ of edges of $E$, we call an {\em $E'$-neighbor} of a vertex $v$ a vertex $u$ such that $(u,v)\in E'$. Also, an $E'$-path denotes a path with edges in $E'$. For a graph $G$, we sometimes denote by $V(G)$ the set of vertices of $G$, by $E(G)$ the set of edges of $G$, and, for $U\subseteq V(G)$, by $E[U]$ the set of edges of $G$ induced by the vertices in $U$ (that is, with both extremities in $U$).
We call a pair $(G,f)$, where $G=(V,E)$ is a graph and $f$ is a positive weight function on its edge set, a {\em weighted graph}. We can represent a weighted graph by its {\em (weighted) adjacency matrix}, that is a matrix $A$ of size $|V|\times |V|$ such that $A(u,v)=f((u,v))>0$ for all $u,v\in V: (u,v)\in E$ and $A(u,v)=0$ for all $u,v\in V: (u,v)\not\in E$. Note that the {\em support} of $A$, that is, the 0/1 matrix of same dimension as $A$ whose ones indicate pairs $(u,v)$ for which $A(u,v)>0$, is the (standard) adjacency matrix of $G$. A graph $G$ can be considered as a weighted graph with weight function $f=\bf 1$.
Let $P$ be a path of $G$ with node set $v_0,...,v_{p}$, for some integer $p\geq 0$, and edge set $\{(v_i,v_{i+1})$ for $i=0,...,p-1\}$ and let $U\subseteq V$. The vertices of $P$ with smallest and largest index $i$ such that $v_i\in U$ are called the $U$-extremities of $P$. If $P$ has only one node in $U$, we call the corresponding node a $U$-two-extremity. By opposition, we call $v_0,v_p$ the {\em true} extremities of $P$. We call a node of $P$ {\em internal} if it is not a true extremity.
Given a path $P$ such that $V(P)\cap U\neq \emptyset$ and a function $f:E(P)\mapsto {\mathbb R}^+ \setminus \{0\}$ , we define the {\em trace on $U$} of the weighted path $(P,f)$ as the weighted graph obtained from $P$ by `shrinking' the internal nodes of $P$ not in $U$. More formally, if $i_1<...<i_{l}$ are the indices of the vertices of $U$ on $P$, for some integer $l\geq 1$, the trace of $(P,f)$ on $U$ is the weighted graph $(P',f')$, where $P'$ is the path with vertex set $\{v_0,v_{i_1},...,v_{i_{l}},v_{p}\}$ and edge set $\{(v_0,v_{i_1}),(v_{i_{l}},v_{p})\} \cup \{(v_{i_j},v_{i_{j+1}})$ for $j=1,...,l-1\}$, and for any edge $e$ of $P'$, $f'(e)$ is the length with respect to $f$ of the (sub)path of $P$ in-between the two end points of $e$. We define the trace on $U$ of a union of disjoint paths as the union of the traces of each path intersecting $U$.
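The trace operation can be stated in a few lines of code. The sketch below (an illustration with names of our own choosing) computes the trace on $U$ of a single weighted path given as its vertex sequence and edge weights; as in the definition, the true extremities are always kept:

```python
def trace_on_U(path, weights, U):
    """Trace on U of a weighted path.  `path` is the vertex sequence
    v_0, ..., v_p and `weights[i]` is the weight of edge (v_i, v_{i+1}).
    Internal vertices not in U are shrunk; the weight of a new edge is
    the total weight of the shrunk sub-path."""
    keep = sorted({0, len(path) - 1} | {i for i, v in enumerate(path) if v in U})
    verts = [path[i] for i in keep]
    new_w = [sum(weights[keep[j]:keep[j + 1]]) for j in range(len(keep) - 1)]
    return verts, new_w
```

For example, shrinking the internal vertex `c` of the path `a-b-c-d` (unit weights) on $U=\{b\}$ yields the path `a-b-d` with weights $1$ and $2$.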
A {\em rooted tree decomposition} is a tree decomposition where a bag is chosen as a root and the edges of the tree are oriented in the direction of the root bag. In a rooted tree decomposition $(X,T)$, we can naturally define {\em children} and {\em parent} bags and then, for a bag $X_i$, we denote by $V_i$ the union of the bags in the subtree of $T$ rooted at $X_i$, and by $G_i$ the subgraph of $G$ induced by the nodes in $V_i$.
\section{Dynamic programming}
We now develop a dynamic programming algorithm to solve the $k$-path $L$-coloring problem on graphs with bounded treewidth. It is convenient to present the algorithm on a {\em nice tree decomposition}.
\begin{definition}{(Nice tree decomposition).}
Let $G=(V,E)$ be a graph. A nice tree decomposition is a rooted tree decomposition of $G$ where every bag $X_i$ of the tree has at most two children and is one of the four following types:
\begin{itemize}
\item \textbf{Leaf bag:} $X_i$ has no children and contains only one vertex $v \in V$, i.e. $|X_i|=1$.
\item \textbf{Introduce bag:} $X_i$ has exactly one child bag noted $X_j$ such that $X_i = X_j \cup \{v\}$ for some $v \in V$ (and $v \not \in V_j$).
\item \textbf{Forget bag:} $X_i$ has exactly one child bag noted $X_j$ such that $X_i = X_j \backslash \{v\}$ for some $v\in X_j$.
\item \textbf{Join bag:} $X_i$ has exactly two children noted $X_{j_1}$ and $X_{j_2}$ such that $X_i = X_{j_1} = X_{j_2}$.
\end{itemize}
\end{definition}
Kloks \cite{Kloks_1994} proved that a tree decomposition can be converted into a nice tree decomposition (with at most four times the number of vertices in $G$) in linear time, while preserving the same treewidth. We thus assume that we are given such a nice tree decomposition.
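For reference, the four bag types can be recognized mechanically; a minimal sketch (function name ours; bags are represented as Python sets) classifying a bag from the bags of its children:

```python
def nice_bag_type(bag, children):
    """Classify a bag of a rooted tree decomposition according to the four
    types of a nice tree decomposition (None if it matches no type)."""
    if not children:
        return 'leaf' if len(bag) == 1 else None
    if len(children) == 1:
        child = children[0]
        if len(bag) == len(child) + 1 and child < bag:   # X_i = X_j U {v}
            return 'introduce'
        if len(bag) == len(child) - 1 and bag < child:   # X_i = X_j \ {v}
            return 'forget'
        return None
    if len(children) == 2 and bag == children[0] == children[1]:
        return 'join'
    return None
```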
\begin{theorem} There exists an algorithm that, given a graph $G=(V,E)$, a set of edges $F \subseteq E$, and a nice tree decomposition $(X,T)$ of $G$ of width $w$ solves the $k$-path $L$-coloring problem in time $O(L^w.k^{2(3w)^2}.(3w)^2.|T|)$.
\end{theorem}
\begin{proof}
Let $G=(V,E)$ be a graph, let $F \subseteq E$, and let us consider a nice tree decomposition $(X,T)$ of $G$ of width $w$. The general idea behind a dynamic programming approach to a decision problem on graphs with bounded treewidth is to evaluate in each bag $X_i$ whether there is a solution to the problem restricted to $G_i$, and to store enough information about the corresponding solutions to propagate and extend them from the children bag(s) to the parent node iteratively. Hence, in our setting, for each bag $X_i$, we would like to know if there exists a $k$-path $L$-coloring of $G_i$. A $k$-path $L$-coloring of $G_i$ obviously restricts to a $k$-path $L$-coloring of the graph(s) associated with its children bag(s). Hence we can try to identify the $k$-path $L$-colorings of $G_i$ by checking which $k$-path $L$-colorings of the children graph(s) can be extended to a $k$-path $L$-coloring of $G_i$. Now, because we want to design an efficient algorithm for testing $k$-path $L$-colorability of $G$, we cannot keep a full list of all $k$-path $L$-colorings of $G_j$ for all bags $X_j$, as the list could grow exponentially as we move up the tree. Because each color of a $k$-path $L$-coloring induces a disjoint union of $F$-paths, and because of the structure of a tree decomposition, it is enough, as we will discuss in detail below, to keep the colors of the vertices in $X_i$ and the trace on $X_i$ of the $F$-paths in each color. We call such a solution a {\em partial solution\footnote{This is the standard terminology in the field.}} as it can be extended to build a $k$-path $L$-coloring of $G_i$.
Because we can encode, for each bag $X_i$, the (disjoint) union of the traces on $X_i$ of the $F$-paths of each color as a weighted graph with at most $3w$ vertices\footnote{Each path in the trace has at least one node from $X_i$ by definition and, in the worst case, there is exactly one such node for each path plus two additional neighbors.} (we call this weighted graph the {\em trace of the solution}), the number of partial solutions is bounded by $O(L^w.k^{(3w)^2})$ for each bag, which is constant for $k$ and $w$ bounded (we could obviously use better data structures to improve upon this value). We can now explain why it is enough to keep track of the trace of each solution of $G_i$ for bag $X_i$ to build all partial solutions as we move up the tree decomposition. We need to distinguish according to the different types of bag.
\begin{itemize}
\item \textbf{Leaf bag:} In a leaf bag $X_i$, we have $|X_i|=\{v\}$ for some $v\in V$ and we can enumerate all $k$-path $L$-coloring of $G_i$ by enumerating all possible coloring of $v$. The partial solutions coincide with the solutions for $G_i$ so the trace is trivial in this case: it consists in the graph with vertex set $\{v\}$ and edge set $\{\}$, and any weight function as the set of edges is empty.
\item \textbf{Forget bag:} In a forget bag $X_i$, we delete a vertex $v$ from a child bag $X_j$ i.e. $X_i = X_j \setminus \{v\}$ for some $v\in V$. In such a node of the tree decomposition, any partial solution $P_j$ for $X_j$ yields a partial solution $P_i$ for $X_i$. Indeed, any $k$-path $L$-coloring for $G_j$ that would be consistent with $P_j$ would again be a $k$-path $L$-coloring of $G_i$ as $G_i$ and $G_j$ coincide. The traces of the solution on $X_i$ and $X_j$ differ though, but we can easily recover the trace on $X_i$ from the trace on $X_j$. Indeed, we can update the coloring and the trace as follows: the coloring is kept identical but it is restricted to the nodes of $X_i$ and the new trace is simply the trace on $X_i$ of the trace in $P_j$ (that is we simply `shrink' $v$).
\item \textbf{Introduce bag:} In an introduce bag $X_i$, we add a vertex $v$ to a child bag $X_j$, i.e. $X_i = X_j \cup \{v\}$ for some $v \in V$ (and $v \not \in V_j$). In order to check whether a $k$-path $L$-coloring $S_j$ of $G_j$ can be extended to a $k$-path $L$-coloring of $G_i$, we only need to check which coloring $c(v)$ of $v$ is compatible with $S_j$. Adding $v$ to a color set adds edges in the subgraph of $G$ induced by the nodes of color $c(v)$. We want the resulting graph to be a union of disjoint $F$-paths of $G_i$ of length at most $k$. Let $G^{c(v)}_i$ (resp. $G^{c(v)}_j$) be the subgraph of $G_i$ (resp. $G_j$) induced by the nodes of color $c(v)$. Because of the structure of a tree decomposition, $v$ is only adjacent to vertices of $X_i$ in $G_i$. It follows that in order to check whether $S_j$ can be extended, it is enough to check that the edges of the set ${\mathcal E}\subseteq E$ incident to both $v$ and a node of color $c(v)$ in $X_i$ are all in $F$, and that adding $v$ and $\mathcal E$ (with weight one) to the trace $(H,f)$ of $G^{c(v)}_j$ on $X_j$ yields a weighted graph $({H',f'})$ whose support ${H'}$ is a union of disjoint paths of length at most $k$ with respect to ${f'}$ (note that adding $v$ might merge two previously disjoint paths). This can be checked in linear time by adapting for instance a depth first search algorithm. In case of a positive result, substituting $(H',f')$ for $(H,f)$ actually yields the trace on $X_i$ of the extension of $S_j$ to $G_i$.
\item \textbf{Join bag:} In a join bag $X_i$, we want to check which partial solutions obtained for two different graphs $G_{j_1}$, $G_{j_2}$ associated with the two children bags $X_{j_1},X_{j_2}$ are ``compatible'', that is, would be the restriction of a $k$-path $L$-coloring of $G_i$. The logic is pretty similar to the previous situation. Let $S_{j_1}$ and $S_{j_2}$ be two solutions for $G_{j_1}$ and $G_{j_2}$ respectively. We first need to check that common vertices (that is, vertices in $X_i$) are colored in the same way in both solutions. Then we need to check that the graph induced by each color set is a (disjoint) union of $F$-paths of length at most $k$. Because there is no edge between vertices in $V_{j_1}\setminus X_i$ and vertices in $V_{j_2}\setminus X_i$ by the properties of the tree decomposition, it follows that the only obstruction can come from the fact that, in a color, the union of the edges of the disjoint $F$-paths in each solution (the union is indeed the induced graph) is not a union of disjoint $F$-paths of length at most $k$ . We can restrict attention to the trace of the solutions on $X_i$ and we only need to check that the {\em union of the trace graphs} of same color induces a disjoint union of paths of length at most $k$. There is a subtlety though and we need to be careful about the definition of the union of the trace graphs. Here we mean the union of all paths from the trace graphs with the condition that two edges with same extremities are considered identical (and thus are not duplicated in the union) only if their weight is $1$ in both solutions (note that this can only happen to edges in $E[X_i]$ as edges of length one that do not connect two vertices in $X_i$ contain at least one vertex in $V_{j_1} \setminus X_i$ or $V_{j_2} \setminus X_i$ and can thus only appear in one of the two solutions). 
Indeed, otherwise they must be considered different, as they correspond to pieces of paths of $G$ whose internal nodes (in $V_{j_1} \setminus X_i$ or $V_{j_2} \setminus X_i$) were shrunk, and the union graph should be seen as a multi-graph. This is to deal with the special case where an edge of the traces of same color corresponds to an edge of $G$ between two vertices of $X_i$: such an edge can (and will) be part of both partial solutions for $G_{j_1}$ and $G_{j_2}$. In case of a positive outcome, the weighted graph whose (weighted) adjacency matrix $A_i$ is $\max(A_{j_1}, A_{j_2})$ (where $A_{j_1}$ and $A_{j_2}$ are the (weighted) adjacency matrices of the traces in the partial solutions for $G_{j_1}$ and $G_{j_2}$ respectively) yields the trace on $X_i$ of the associated $k$-path $L$-coloring of $G_i$.
\end{itemize}
The discussion above shows that we do not miss any partial solution as we move up the tree (and that each partial solution we build is actually valid). Hence there exists a $k$-path $L$-coloring of $G$ if and only if there exists a partial solution in the root node of the tree decomposition when applying the procedure above.
The computational time at each node of the nice tree decomposition is dominated by the join bag case. For each coloring of the vertices of $X_i$, we then need to check whether each pair of traces of possible partial solutions for $X_{j_{1}}$ and $X_{j_{2}}$ is compatible: in the worst case we need to check $L^w$ colorings and $k^{(3w)^2} \times k^{(3w)^2}$ pairs of traces, and checking that the union of the traces in each color still yields a union of disjoint paths of length at most $k$ can be done in time $O((3w)^2)$, by first building the weighted adjacency matrix of the union of the traces (and checking that it does not contain multi-edges) and by then adapting the depth first search algorithm, since each weighted adjacency matrix has size at most $(3w)^2$. The overall complexity is thus bounded by $O(L^w.k^{2(3w)^2}.(3w)^2.|T|)$.
\end{proof}
\section{Numerical Experiments}
In ~\cite{Arnborg_1987}, Arnborg et al.\ proved that deciding whether the treewidth of a graph $G$ is at most $w$, where $w\geq 0$, is NP-complete. However, there are good heuristics to determine a tree decomposition of a given graph $G$ with a width `close to' the treewidth. Different heuristics are presented and compared in \cite{Bodlaender3}. Moreover, Arnborg et al.~\cite{Arnborg_1987} showed that for every fixed value of $w$, there is a polynomial-time algorithm that finds a tree decomposition of width $w$ (if it exists).
We propose to solve DSA-MP on real instances arising from integrated circuit manufacturing by first using a heuristic to get a `small' tree decomposition of the graph, and by then using the dynamic programming algorithm described in the previous section to solve the problem (actually, we tailored the algorithms to the cases $k=1$ and $k=2$ to make them slightly simpler to implement; see \cite{Dehia} for the details about the corresponding implementations). We used D-FLAT to implement the corresponding algorithm~\cite{DFLAT_2014}. D-FLAT has the advantage of implementing different state-of-the-art heuristics to find a close-to-optimal nice tree decomposition, and offers a generic language to describe how to extend solutions for each type of node of the nice tree decomposition. We ran D-FLAT iteratively to solve the $k$-path $L$-colorability problem, starting from $L=2$ and increasing $L$ until a solution was found. All tests were done on a machine equipped with an Intel(R) Xeon(R) CPU E5-2640 2.60 GHz and 529GB of memory. As already observed, typical industrial designs are made of several hundred thousand vias, but because the placement of the vias is made so as to anticipate as much as possible the conflicts that may arise from lithography, the `conflict graph' is usually extremely sparse and contains only hundreds or thousands of different connected components. Since the optimization of each connected component can be parallelized, we focus attention on the computational time for each connected component individually.
We report hereafter computational experiments on 23 connected components of increasing size arising from true industrial instances in Table \ref{results}. We can see that the linear time complexity is confirmed experimentally.
\begin{table}[h!]
\begin{center}
\scalebox{0.8}{\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
Instance name & $|V|$ & $|E|$ & $|F|$ & $\omega(G)$ & $\Delta(G)$ & DFLAT\_TW(G) & $\chi_{path}^{1}$ & cpu time (sec) & $\chi_{path}^2$ & cpu time (sec) \\
\hline
industrial\_1 & 54 & 56 & 52 & 3 & 4 & 2 & 2 & 0.35 & 2 & 0.92 \\
industrial\_2 & 61 & 86 & 85 & 3 & 5 & 2 & 2 & 0.75 & 2 & 1.62 \\
industrial\_3 & 89 & 112 & 107 & 3 & 5 & 2 & 2 & 0.51 & 2 & 1.5 \\
industrial\_4 & 90 & 113 & 109 & 3 & 5 & 2 & 2 & 0.98 & 2 & 1.21 \\
industrial\_5 & 96 & 111 & 105 & 3 & 4 & 2 & 2 & 0.55 & 2 & 1.73 \\
industrial\_6 & 98 & 126 & 120 & 3 & 5 & 3 & 2 & 0.84 & 2 & 3.45 \\
industrial\_7 & 102 & 144 & 141 & 3 & 5 & 2 & 2 & 0.94 & 2 & 1.47 \\
industrial\_8 & 111 & 140 & 137 & 3 & 5 & 2 & 2 & 1.45 & 2 & 2.21 \\
industrial\_9 & 114 & 131 & 125 & 3 & 4 & 2 & 2 & 0.79 & 2 & 2.21 \\
industrial\_10 & 116 & 155 & 151 & 3 & 5 & 3 & 2 & 1.15 & 2 & 2.5 \\
industrial\_11 & 119 & 142 & 136 & 3 & 4 & 3 & 2 & 1.41 & 2 & 4.02 \\
industrial\_12 & 128 & 159 & 149 & 3 & 5 & 2 & 2 & 1.17 & 2 & 2.83\\
industrial\_13 & 137 & 167 & 160 & 3 & 5 & 3 & 2 & 1.22 & 2 & 3.05\\
industrial\_14 & 159 & 196 & 188 & 3 & 5 & 2 & 2 & 1.42 & 2 & 3.53\\
industrial\_15 & 173 & 224 & 216 & 3 & 5 & 3 & 2 & 1.83 & 2 & 3.44\\
industrial\_16 & 382 & 396 & 339 & 3 & 4 & 2 & 2 & 2.57 & 2 & 4.95\\
industrial\_17 & 969 & 1001 & 900 & 3 & 3 & 2 & 2 & 7.26 & 2 & 12.52\\
industrial\_18 & 993 & 1009 & 927 & 3 & 4 & 2 & 2 & 6.19 & 2 & 12.63\\
industrial\_19 & 997 & 1047 & 906 & 3 & 4 & 2 & 3 & 8.86 & 2 & 12.82\\
industrial\_20 & 998 & 1024 & 924 & 3 & 4 & 2 & 3 & 8.65 & 2 & 12.95\\
industrial\_21 & 1900 & 1937 & 1804 & 3 & 4 & 2 & 3 & 21.09 & 2 & 26.87\\
industrial\_22 & 1912 & 1960 & 1809 & 3 & 4 & 2 & 3 & 18.22 & 2 & 26\\
industrial\_23 & 1937 & 1996 & 1812 & 3 & 4 & 2 & 2 & 6.29 & 2 & 27.02 \\
\hline
\end{tabular}}
\caption{Industrial instances characteristics and results: $\omega(G)$ is the size of the maximum clique in $G$, $\Delta(G)$ is the maximum degree of $G$, DFLAT\_TW(G) is the width of the tree decomposition returned by D-FLAT heuristics, and $\chi_{path}^{k}$ is the $k$-path chromatic number.}\label{results}
\end{center}
\end{table}
All instances could be solved in less than 30 seconds. The 23 industrial instances used in this study share similar properties with the pseudo-industrial instances generated in \cite{IP}, when the resolution limit is set to 31nm. We thus compared the two approaches on the same set of instances (and on the same machine) and the results are reported in Table \ref{results2}.
\begin{table}[h!]
\begin{center}
\scalebox{0.65}{\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|l|}
\hline
Instance & $|V| $ &$|E| $ &$|F| $ &$\omega(G)$ &$\Delta(G)$ & DFLAT\_TW(G) & $\chi_{path}^{1}$ & DFLAT time 1 (sec) & IP 1 time (sec) & $\chi_{path}^2$ & DFLAT time 2(sec) & IP 2 time (sec) \\
\hline
clip1\_31& 191& 242& 242 & 3& 5 &3 &2 &2.71 &0.8 &2 &465.74 &1.93\\
clip2\_31& 139& 188& 188 & 3& 5 &3 &3 &21.15 &1.54 &2 &64.89 &2.11\\
clip3\_31& 98& 117& 108 & 3& 4 &2 &2 &0.8 &0.11 &2 &1.69 &0.34\\
clip4\_31& 120& 147& 139 & 3& 4 &3 &2 &0.87 &0.49 &2 &9.45 &0.46\\
clip5\_31& 170& 213& 213 & 3& 5 &3 &3 &5.45 &0.99 &2 &9.57 &1.52\\
clip6\_31& 178& 229& 229 & 3& 5 &3 &2 &2.1 &0.68 &2 &38.45 &1.89\\
clip7\_31& 203& 256& 223 & 3& 5 &3 &3 &8.83 &0.94 &2 &92.05 &0.73\\
clip8\_31& 122& 162& 160 & 3& 5 &3 &2 &0.83 &0.45 &2 &2.89 &1.35\\
clip9\_31& 152& 193& 193 & 3& 4 &3 &2 &2.8 &0.28 &2 &31.13 &1.96\\
clip10\_31&139& 175& 175 & 4& 4 &3 &2 &0.91 &0.51 &2 &2.3 &0.64\\
\hline
\end{tabular}}
\caption{Pseudo-industrial instances characteristics and results: $\omega(G)$ is the size of the maximum clique in $G$, $\Delta(G)$ is the maximum degree of $G$, DFLAT\_TW(G) is the width of the tree decomposition returned by D-FLAT heuristics, DFLAT k time (resp. IP k time) represents the time used by DFLAT (resp. IP) to solve the k-path coloring problem, and $\chi_{path}^{k}$ is the $k$-path chromatic number.}\label{results2}
\end{center}
\end{table}
It appears that the best integer programming formulations from \cite{IP} outperform the dynamic programming approach on these instances. It is not completely clear to us whether the technique could be made competitive with (or even beat) IP on typical industrial instances by tailoring the tree-decomposition heuristics to the industrial setting and by refining the implementation of our dynamic programming approach (possibly with improved data structures to speed up the algorithm). However, the results obtained for the larger pseudo-industrial instances used in \cite{IP} (when the resolution limit is set to 39nm and 49nm) did not encourage us to pursue this line of research:
\begin{itemize}
\item For instances with treewidth four, the computational times were still ``reasonable'' enough to hope that a better implementation could make the technique competitive: they range from several minutes to one hour, while the IP formulation can solve all instances within seconds or minutes (note that the size of the instances, i.e., the largest connected component, is now in the interval $[816,3850]$).
\item For instances with treewidth five or more, however, the computation times become prohibitive compared to IP: several hours or more for dynamic programming versus a few minutes for IP.
\end{itemize}
Even though we could also use additional `tricks' to reduce the size of the instances (for instance, if an edge of $E\setminus F$ disconnects some connected component, the problem can be solved independently on both subgraphs and the solutions recombined later, provided the $k$-path chromatic number is at least two), we did not investigate this direction further, as the same techniques could also be exploited by the IP model, and it is this latter direction that is currently being investigated by Mentor Graphics to see whether IP can be made competitive with their in-house heuristics, which can solve all instances in less than a second (the corresponding `tricks' are already implemented in Mentor Graphics' heuristics).
\section{Acknowledgment}
This project has been partly supported by the Association Nationale de la Recherche et de la Technologie (Convention CIFRE 2015/0553).
\bibliographystyle{abbrv}
\section{Introduction}
Spiking Neural Networks (SNNs) have gained tremendous attention towards ultra-low-power machine learning \cite{roy2019towards}. SNNs leverage spatio-temporal information of unary spike data to achieve energy-efficient processing in resource-constrained edge devices \cite{davies2018loihi,akopyan2015truenorth}. However, in the case of large-scale tasks such as image classification, the model size of SNNs significantly increases.
Unfortunately, edge devices typically have limited on-chip memory, rendering large-scale SNN deployment impractical. To this end, recent works have proposed various unstructured SNN pruning techniques to achieve high weight sparsity in SNNs \cite{kim2022exploring,deng2021comprehensive}.
Although unstructured pruning manages to compress the SNN models into the available memory resources, sparse SNNs encounter a \textbf{workload-imbalance problem} \cite{loadimbalance}. The workload-imbalance problem stems from the conventional weight stationary dataflow \cite{eyeriss} adopted in sparse accelerators \cite{gospa,sparten,scnn}. In weight stationary dataflow, filters are divided into several groups and kept stationary inside processing elements (PEs) for filter reuse. However, different filter groups inevitably have different densities of non-zero weights, due to the random weight connections resulting from unstructured pruning. As a result, different PEs end up with unbalanced workloads. Since all PEs run in parallel, PEs with smaller workloads need to wait for the PE with the largest workload. This results in low utilization and imposes idle cycles, which increases the running latency and wastes leakage energy.
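The effect can be reproduced with a small sketch (the sizes, sparsity level, and row-wise grouping below are illustrative, not the accelerator's actual mapping):

```python
import numpy as np

# After ~90% unstructured pruning, equal-sized filter groups end up with very
# different numbers of non-zero weights, i.e., unbalanced per-PE workloads.
rng = np.random.default_rng(0)
num_pes, num_filters, weights_per_filter = 4, 16, 9
w = rng.standard_normal((num_filters, weights_per_filter))
w[rng.random(w.shape) < 0.9] = 0.0          # random unstructured sparsity

groups = np.array_split(np.arange(num_filters), num_pes)  # 4 filters per PE
workloads = [int(np.count_nonzero(w[g])) for g in groups]
# PEs with fewer non-zero weights finish early and idle until the heaviest one.
```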
\begin{figure}[t]
\centerline{\includegraphics[width=0.95\linewidth]{figure/intro_comparison.pdf}}
\vspace*{-3mm}
\caption{Comparison between u-Ticket and state-of-the-art workload balance methods. Overall, u-Ticket recovers the PE utilization up to $100\%$ for extremely sparse networks with 98\% weight sparsity (here, we consider
VGG-16). Please note that u-Ticket does not introduce any hardware area overhead, and is thus the best fit for SNNs
($\uparrow$: higher is better, $\downarrow$: lower is better).
}
\vspace*{-1.5em}
\label{intro_comp}
\end{figure}
To address the workload-imbalance problem, various methods have been proposed in the prior sparse accelerator designs.
However, they cannot be efficiently applied to SNNs for the following reasons.
\textbf{(1) {Requiring extra hardware:}}
The prior methods require extra hardware (\eg deep FIFOs or permuting units) \cite{sparten,gospa,eie,column,ese} to balance the workloads. For instance, applying the hardware-based (FIFOs \cite{gospa} and permuting networks \cite{sparten}) workload balancing methods to SNNs requires approximately 18\% and 13\% of extra chip area (see Fig. \ref{intro_comp}). Consequently, the improvements in PE utilization come at the cost of additional hardware resources, which should be avoided for SNNs, whose running environments are typically resource-constrained edge devices.
\textbf{(2) {Limited to low sparsity:}}
As shown in Fig. \ref{intro_comp}, the solutions from prior sparse accelerators \cite{gospa,sparten} only work at low sparsity (roughly 60\% and 35\% on VGG-16), which is not sufficient for SNNs' extremely low-power edge deployment. Moreover, the workload-imbalance problem naturally gets more difficult to solve in the high weight sparsity regime. Hence, the exploration of workload balancing for extremely sparse networks ($>95\%$ weight sparsity) is missing in prior works. Considering the above-mentioned problems, we need an SNN-friendly solution to address the workload imbalance.
To this end, we propose u-Ticket, an iterative workload-balanced pruning method for SNNs that can effectively achieve high weight sparsity and minimize the workload imbalance problem simultaneously.
Our method is based on Lottery Ticket Hypothesis (LTH) \cite{frankle2018lottery} which states that sub-networks with similar accuracy can be found in over-parameterized networks by repeating \textit{training-pruning-initialization} stages.
Different from the standard LTH method \cite{kim2022exploring} where the pruned networks are naively used for the next round, we either remove or recover weight connections to balance workloads across all PEs before sending the networks to re-initialization (see Fig. \ref{fig:method:lth_vs_uticket}).
Compared to prior workload-balancing methods (see Fig. \ref{intro_comp}), the u-Ticket approach improves PE utilization by up to 100\% (70\% for \cite{gospa} and 92\% for \cite{sparten}) while maintaining filter sparsity of 98\% (60\% for \cite{gospa} and 35\% for \cite{sparten}), at iso-accuracy with the standard LTH-based pruning baseline \cite{kim2022exploring}. Furthermore,
since our method balances the workload during the pruning process, u-Ticket does not incur any additional hardware overhead for deployment.
We summarize the key contributions as follows:
\begin{enumerate}
\item We propose u-Ticket which discovers highly sparse SNNs with optimal PE utilization.
The discovered sparse SNN model achieves a similar level of accuracy, weight sparsity, and spike sparsity to the standard LTH baseline \cite{kim2022exploring} while improving the utilization up to $100\%$.
\item By balancing the workload, u-Ticket reduces the running latency and energy cost by up to $76.9\%$ and $63.8\%$, respectively, compared to the standard LTH method.
\item We extend the prior sparse accelerator\cite{gospa} and propose an energy estimation model for sparse SNNs.
\item To validate the proposed u-Ticket, we conduct experiments on two representative deep architectures (i.e., VGG-16 \cite{vgg} and ResNet-19 \cite{resnet}) across three public datasets including CIFAR10 \cite{krizhevsky2009learning}, Fashion-MNIST \cite{xiao2017fashion} and SVHN \cite{netzer2011reading}.
\end{enumerate}
\begin{figure}[t]
\begin{center}
\def0.5{0.5}
\begin{tabular}{@{\hskip 0.0\linewidth}c@{\hskip 0.0\linewidth}c@{\hskip 0.00\linewidth}c@{\hskip 0.00\linewidth}c@{}c}
\hspace{-10mm}
\includegraphics[width=1.0\linewidth]{figure/uticket_LTH.pdf}
\\
\end{tabular}
\caption{ Illustration of the concept of the proposed u-Ticket. Our u-Ticket consists of training (\textbf{step1}), pruning (\textbf{step2}), adjusting weight connections based on workload (\textbf{step3}), and re-initialization (\textbf{step4}). We repeat these steps for multiple rounds.
Note, the standard LTH method consists of training (\textbf{step1}), pruning (\textbf{step2}), and re-initialization (\textbf{step4}), which does not consider the utilization of the pruned SNNs.}
\vspace{-5.5mm}
\label{fig:method:lth_vs_uticket}
\end{center}
\end{figure}
\begin{figure}[h]
\centerline{\includegraphics[width=0.99\linewidth]{figure/intro_figure_new_new.pdf}}
\vspace*{-3mm}
\caption{Example utilization and latency resulting from imbalanced and balanced workloads under the same model sparsity. With unstructured pruning, non-zero weights have a random distribution across the four groups, leading to unbalanced workloads across PEs as shown on the left (PE0 has four weights assigned, while PE1 and PE2 only have one).}
\vspace*{-1em}
\label{imbalance_u}
\end{figure}
\section{Background}
\subsection{Spiking Neural Networks}
Spiking Neural Networks (SNNs) process the unary temporal signal through multi-layer weight connections.
Instead of the ReLU neuron for non-linear activation, recent SNN works use a Leaky-Integrate-and-Fire (LIF) neuron, which contains a memory called the membrane potential.
The membrane potential captures temporal spike information by integrating incoming spikes and generating output spikes accordingly.
Suppose a LIF neuron $i$ has a membrane potential $u_{i}^{t}$ at timestep $t$. We can formulate the discrete neuronal dynamics \cite{wu2018spatio,fang2021incorporating} by:
\begin{equation}
u_i^t = \lambda u_i^{t-1} + \sum_j w_{ij}s^t_j.
\label{eq:LIF}
\end{equation}
Here, $\lambda$ is the leaky factor for decaying the membrane potential through time. The $s^t_j$ stands for the output spike from a neuron $j$ at timestep $t$.
The $w_{ij}$ denotes a weight connection between neuron $j$ in the previous layer and neuron $i$ in the current layer.
If the membrane potential reaches a firing threshold, the neuron generates an output spike, and the membrane potential is reset to zero.
Similar to ANNs, we train the weight connection $w_{ij}$ in all layers.
Our weight optimization is based on the recently proposed surrogate gradient learning, which assumes approximated gradient function for the non-differentiable LIF neuron \cite{neftci2019surrogate}.
We use $tanh(\cdot)$ approximation following the previous work \cite{fang2021incorporating}.
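As a minimal illustration of the LIF dynamics in Eqn. \ref{eq:LIF}, the following sketch implements one timestep with a hard reset to zero after firing (the surrogate gradient only matters during training and is omitted; the function name and threshold value are ours):

```python
import numpy as np

def lif_step(u, w, s_in, lam=0.9, v_th=1.0):
    """One LIF timestep: leak, integrate weighted input spikes, fire, reset."""
    u = lam * u + w @ s_in                # u_i^t = lambda*u_i^{t-1} + sum_j w_ij*s_j^t
    s_out = (u >= v_th).astype(u.dtype)   # fire when the threshold is reached
    u = u * (1.0 - s_out)                 # reset fired neurons to zero
    return u, s_out

# Two neurons, two input spikes: neuron 0 integrates 1.2 and fires (then resets),
# neuron 1 integrates 0.2 and keeps it in its membrane potential.
w = np.array([[0.6, 0.6], [0.1, 0.1]])
u, s = lif_step(np.zeros(2), w, np.array([1.0, 1.0]))
```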
\subsection{Lottery Ticket Hypothesis}
The Lottery Ticket Hypothesis (LTH) \cite{frankle2018lottery} states that a dense neural network contains sparse sub-networks (\textit{i}.\textit{e}., winning tickets) with similar accuracy compared to the original dense network.
The winning tickets are found by multiple rounds of magnitude pruning.
Specifically, suppose we have a dense network $f(x;\theta)$ with randomly-initialized parameter weights $\theta \in \mathbb{R}^{n}$.
In the first round, the dense network $f(x;\theta)$ is trained to convergence (\textbf{step1} in Fig. \ref{fig:method:lth_vs_uticket}). Based on the trained weights, we prune $p\%$ weight connections with the lowest absolute weight values (\textbf{step2} in Fig. \ref{fig:method:lth_vs_uticket}).
We represent this pruning operation as a binary mask $m \in \{0, 1\}^{n}$.
In the next round, we reinitialize the pruned network with the original initialization parameters $f(x;\theta \odot m)$ (\textbf{step4} in Fig. \ref{fig:method:lth_vs_uticket}), where $\odot$ represents the element-wise product.
The \textit{training-pruning-initialization} stages are repeated for multiple rounds.
In the SNN domain, Kim \textit{et al}. \cite{kim2022exploring} recently applied LTH to deep SNNs, resulting in high weight sparsity ($\sim $98\%) for VGG and ResNet architectures.
However, they do not consider the workload imbalance problem in sparse SNNs.
Different from the previous work, we adjust weight connections to improve utilization at each pruning round (\textbf{step3}), which reduces latency by up to $77\%$ and energy cost by up to $64\%$ compared to the standard LTH \cite{kim2022exploring} while maintaining both sparsity and accuracy.
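The per-round magnitude pruning at the heart of LTH can be sketched as follows (a simplified stand-in for the full training pipeline; `p` is the per-round pruning rate and the 0/1 array plays the role of the mask $m$):

```python
import numpy as np

def magnitude_prune(theta_trained, mask, p=0.2):
    """Zero out the fraction p of smallest-magnitude weights still alive in mask."""
    alive = np.abs(theta_trained[mask == 1])
    threshold = np.quantile(alive, p)        # magnitude cutoff among survivors
    return (mask * (np.abs(theta_trained) > threshold)).astype(mask.dtype)

# One round on a toy weight vector: the two smallest weights get pruned.
theta = np.linspace(0.1, 1.0, 10)
mask = magnitude_prune(theta, np.ones(10), p=0.2)
```

Repeating this on re-initialized weights, round after round, yields the \textit{training-pruning-initialization} loop described above.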
\subsection{Workload Imbalance Problem}
\label{problem_sec}
In the context of neural network accelerators, dataflow refers to the input and weight mapping strategy on the hardware.
To this effect, recent works \cite{scnn,sparten,gospa,spinalflow,sata} have demonstrated the efficacy of the weight stationary dataflow towards efficient deployment of sparse networks and SNNs.
For weight-stationary dataflow, different weights are cast to different PEs and stay inside the PE until they are maximally reused across all the relevant computations. More specifically, during the running time, depending on the memory capacity of the hardware, each layer's filter kernels will be grouped in a chosen pattern and sent to each PE. As shown in Fig. \ref{imbalance_u}, due to the randomness in unstructured pruning, the number of non-zero elements (or workload) allocated to each PE varies significantly. Moreover, the workload imbalance is persistent irrespective of the grouping method chosen. Note, here we define the number of non-zero weights assigned to a PE as the workload.
In this case, the wasted resources in PEs are based on the difference between the largest workload and the average of all other workloads. To quantitatively measure the portion of non-wasted resources, we use the utilization metric\cite{loadimbalance}, given by
\begin{equation}
\label{eq:imbalance}
\mu = 1-
\frac{T_{max} - T_{avg}}{T_{max}} \cdot \frac{n}{n-1},
\end{equation}
where $T_{max}$ and $T_{avg}$ are the slowest and the average processing times among the PEs, and $n$ is the number of PEs. The metric quantifies the percentage of processing time during which the PEs, excluding the slowest one, are engaged in useful work.
In Fig. \ref{lth_problem}, we show how the utilization degrades as the weight sparsity of the SNN increases under the standard LTH method \cite{kim2022exploring}. The preliminary result shows that in the final round, the utilization can be as low as $59\%$ for VGG-16 on CIFAR10. Here, we assume that the total number of PEs is 16, and the utilization is averaged across all layers (weighted by parameter count).
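The metric in Eqn. \ref{eq:imbalance} is easy to compute from a list of per-PE workloads, assuming processing time is proportional to workload:

```python
def utilization(workloads):
    """Utilization of a PE array given per-PE workloads (non-zero weight counts)."""
    n = len(workloads)
    t_max = max(workloads)        # the slowest PE dictates the layer latency
    t_avg = sum(workloads) / n
    return 1.0 - (t_max - t_avg) / t_max * n / (n - 1)
```

A perfectly balanced array yields $\mu = 1$, while, for example, the workloads $(4, 1, 1, 2)$ yield $\mu = 1/3$.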
\begin{figure}[t]
\centerline{\includegraphics[width=0.8\linewidth]{figure/utilization_rounds_1.pdf}}
\vspace*{-3mm}
\caption{Sparsity and utilization across pruning rounds for the standard LTH method without utilization awareness. The pruning is done for 13 rounds on VGG-16 being trained for image classification on CIFAR10 with 16 PEs.}
\vspace*{-1mm}
\label{lth_problem}
\end{figure}
\section{u-Ticket}
To resolve the workload imbalance problem, we propose u-Ticket where we achieve high utilization in sparse SNNs during iterative pruning.
In this section, we first present the algorithm to train sparse SNNs while maintaining high utilization.
We then provide details of the proposed PE design and the energy model to map the u-Ticket on the hardware.
\subsection{Algorithmic Approach}
\begin{algorithm}[t]\small
\caption{u-Ticket}\label{alg:u_rec}
\textbf{Input}: SNNs $f(x;\theta)$ with randomly-initialized parameter weights $\theta \in \mathbb{R}^{n}$, connectivity mask $m_i \in \{0, 1\}^{n}$ at iteration $i$, total pruning round $N_{Round}$, total number of layer $L$, number of PEs $n$, Workload of a PE $d$, Workload list of a layer $W^{l}$.
\\
\textbf{Output}: Pruned $f(x;\theta_{trained} \odot m_N)$
\begin{algorithmic}[1]
%
\State {initialize $m_1$ with $1$} \Comment{{\color{purple}\textsc{No pruning in the first round}}}
\For{$i \gets $ 1 to $N_{Round}$}
\State {$f(x;\theta \odot m_i)$} \Comment{{\color{purple}\textsc{ Iterative magnitude pruning}}}
\State {$f(x;\theta_{trained} \odot m_i) \gets Train(f(x;\theta \odot m_i))$}
\State {$\hat{m}_{i} \gets Prune (f(x;\theta_{trained} \odot m_i)$)}
\For{$l \gets $ 1 to $L$} \Comment{{\color{purple}\textsc{ Layer-wise adjustment}}}
\State{$W^{l} \gets GetWorkloadList(f(x;\theta^{l}_{trained} \odot \hat{m}^{l}_{i}), n$)}
\State{$d^{l}_{avg} \gets GetAverage(W^{l})$}
\For{$d$ in $W_{l}$}\Comment{{\color{purple}\textsc{ Adjust each workload group}}}
\If{$d < d^{l}_{avg}$}
\State {$m^{l}_{i+1} \gets RandomlyRecover(\hat{m}^{l}_{i}, d_{avg}-d)$}
\Else
\State {$m^{l}_{i+1} \gets RandomlyRemove(\hat{m}^{l}_{i}, d-d_{avg})$}
\EndIf
\EndFor
\EndFor
\EndFor
\end{algorithmic}
\label{algorithm: overall}
\end{algorithm}
Our u-Ticket pruning consists of multiple rounds similar to LTH \cite{frankle2018lottery}.
For each round, we train the networks till convergence, prune the low-magnitude weight connections, balance the workload of PEs by recovering or removing the weight connections, and finally re-initialize the weights.
The main idea is to ensure a balanced workload between PEs after unstructured pruning in each round.
The overall u-Ticket process is described in Algorithm \ref{alg:u_rec}.
For each round, the pruned SNN from the previous round is re-initialized. After that, the model is trained and pruned, where we obtain the connectivity mask $\hat{m}_i$ with imbalanced PE workloads.
To increase the utilization, we first compute the workload for each PE, constructing the PE workload list $W^l$ for each layer. Based on $W^l$, we calculate the average workload $d_{avg}^l$ for layer $l$.
Then, we go through each workload $d$ in $W^l$, and randomly recover ($d_{avg}^l-d$) weight connections if the PE's workload $d$ is smaller than the average workload $d_{avg}^l$.
Otherwise, ($d-d_{avg}^l$) weight connections are pruned.
After the workload adjustment, every workload $d$ has the same magnitude, which ensures the optimal utilization $\mu$.
We repeat the above-mentioned stages for $N$ rounds.
In our method, we use the average workload $d_{avg}^l$ across all PEs at layer $l$ as the reference to recover/remove weight connections.
The reasons behind this design choice are as follows:
(1) If we look at only part of the PE workloads to decide on a reference workload, we obtain a sub-optimal solution.
(2) The cost of checking all PE workloads is negligible compared to the overall iterative training-pruning-initialization process. We find that on an RTX 2080Ti GPU, the time cost of our workload-balancing method is only 0.11\% of one complete LTH searching round.
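The recover/remove adjustment of Algorithm \ref{alg:u_rec} can be sketched as follows, assuming each PE's filter group is given as a flattened 0/1 connectivity mask (the helper names are ours):

```python
import numpy as np

def balance_masks(group_masks, rng):
    """Equalize per-PE workloads: recover pruned connections in light groups,
    remove surviving connections in heavy groups, toward the layer average."""
    d_avg = int(round(sum(int(m.sum()) for m in group_masks) / len(group_masks)))
    balanced = []
    for m in group_masks:
        m = m.copy()
        d = int(m.sum())
        if d < d_avg:          # RandomlyRecover(d_avg - d)
            off = np.flatnonzero(m == 0)
            m[rng.choice(off, d_avg - d, replace=False)] = 1
        elif d > d_avg:        # RandomlyRemove(d - d_avg)
            on = np.flatnonzero(m == 1)
            m[rng.choice(on, d - d_avg, replace=False)] = 0
        balanced.append(m)
    return balanced

# Four PE groups with workloads (4, 1, 1, 2) are equalized to the average 2.
rng = np.random.default_rng(0)
masks = [np.array([1, 1, 1, 1, 0, 0, 0, 0]),
         np.array([1, 0, 0, 0, 0, 0, 0, 0]),
         np.array([0, 1, 0, 0, 0, 0, 0, 0]),
         np.array([1, 1, 0, 0, 0, 0, 0, 0])]
out = balance_masks(masks, rng)
```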
\subsection{Hardware Mapping}
\subsubsection{Processing Elements (PEs)}
To get an accurate energy estimation, we need to map the sparse SNN to a proper hardware design. We develop our PE design based on \cite{gospa}, one of the state-of-the-art sparse accelerators, to support the running of sparse SNNs. Please note that our method of balancing the workloads works on any sparse accelerator design as long as it utilizes the weight stationary dataflow.
First, the non-zero weights, input spikes, and their corresponding metadata (indices) are read from the DRAM. The weights are represented in the weight sparsity pattern (WSP) format \cite{gospa}, while the spike activations are represented in the standard compressed sparse row (CSR) format. We use four timesteps for the SNN in our experiments; thus we can group every two activations into one byte (each activation has four unary spikes).
Then, an activation processing unit (APU, outside PEs) filters out the zero activation (0-spikes across four timesteps) and sends the non-zero activation together with their position indices (decoded from CSR) to the PE arrays. The position indices help to match the non-zero weights and activation in 2-D convolution.
At the PE level, each PE contains four 16-bit AND gates, 256 24-bit accumulators, and one 1024 $\times$ 16 bits SRAM-based scratch-pad. We further extend the 256 accumulators with 256 LIF units for generating the output spikes. Each LIF unit is equipped with four 24-bit registers for storing the membrane potential across four timesteps.
Fig. \ref{inner_PE} illustrates the overall architecture and the computation flow inside the PE. We process the network in a tick-batched manner \cite{spinalflow}. At step \circled{1}, the non-zero weights together with their WSPs are mapped to each PE. At step \circled{2}, the spike activations $S_{in}$ together with their position indices are sent to the PE. Based on the weight's WSP and the activation's position index, the selector unit outputs the matched non-zero weight. At steps \circled{3} and \circled{4}, the dot-product operations between the input spikes and the matched weights are carried out, and the partial sums are stored according to their position indices. At step \circled{5}, the partial sums for each timestep are sequentially sent to the LIF units to generate the output spikes for each timestep. Note that steps \circled{3} - \circled{5} are repeated four times to match the four timesteps used in our SNN model (only 1 bit of $S_{in}$ is cast to the PE at a time in step \circled{2}).
\begin{figure}[t]
\centerline{\includegraphics[width=0.9\linewidth]{figure/arc_new_new.pdf}}
\vspace*{-2mm}
\caption{Overall architecture and the detailed inner architecture of PE. Here APU denotes the activation processing unit.}
\vspace*{-1em}
\label{inner_PE}
\end{figure}
\subsubsection{Energy Modeling}
\label{sec:model}
We do the simulation for the full architecture. Since u-Ticket balances the workloads between PEs, the majority of the improvements can be found at the PE level. Thus, we focus on energy estimation at the PE level in this work.
We extend the energy model from \cite{sata} to estimate the total energy:
\begin{equation}\label{eq:pe_energy_total}
E_{total}= N_{work} \cdot (E_{PE}^{d}\cdot(1-S_{in}^{spa}) + E_{PE}^{l}) + N_{idle} \cdot E_{PE}^{l},
\end{equation}
where $E_{PE}^{d}$ and $E_{PE}^{l}$ are the dynamic and leakage energy of a single PE processing one input spike. As shown in \cite{sata}, there is no extra cost for skipping zero-spike computations in SNNs. Thus, we directly include the spike sparsity term $S_{in}^{spa}$ in Eqn. \ref{eq:pe_energy_total} to account for the dynamic energy saved by skipping zero spikes. Here, $N_{work}$ is defined as the total number of cycles in which PEs are doing useful work, and $N_{idle}$ denotes the total number of cycles in which PEs are waiting in an idle state.
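Eqn. \ref{eq:pe_energy_total} translates directly into code; the per-cycle energy values below are placeholders, not the synthesized values:

```python
def total_energy(n_work, n_idle, spike_sparsity, e_dyn=2.0, e_leak=0.1):
    """E_total = N_work*(E_dyn*(1 - S_spa) + E_leak) + N_idle*E_leak."""
    return n_work * (e_dyn * (1.0 - spike_sparsity) + e_leak) + n_idle * e_leak
```

Balancing the workloads shrinks $N_{idle}$ (ideally to zero), which removes the trailing leakage term, while skipping zero spikes scales down only the dynamic part.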
\section{EXPERIMENT}
\subsection{Experimental Settings}
\subsubsection{Software Configuration}
To validate the u-Ticket pruning method, we evaluate it on three public datasets: CIFAR10~\cite{krizhevsky2009learning}, Fashion-MNIST~\cite{xiao2017fashion}, and SVHN~\cite{netzer2011reading}. We choose two representative deep network architectures: VGG-16 \cite{vgg} and ResNet-19 \cite{resnet}. We implement the networks in PyTorch and set the number of timesteps $T$ to 4 for all experiments. We use the state-of-the-art direct encoding technique, which has been shown to train SNNs on image classification datasets with very few timesteps. We use the same training configurations as in \cite{kim2022exploring}.
\subsubsection{Hardware Configuration}
We report the utilization, latency, work cycles, and idle cycles based on our PyTorch-based simulator which simulates the running-time distribution of the weights to PEs. We use the weights grouping method as in \cite{gospa,sata} with 16 PEs. The PE level energy is estimated with the model in Section \ref{sec:model} with all computing units synthesized in Synopsys Design Compiler at 400MHz using 32nm CMOS technology and the memory units simulated in CACTI. We set the standard LTH method \cite{kim2022exploring} without utilization-awareness as our baseline and use the same estimation model to get the speed-up and energy results.
\begin{table}[t!]
\centering
\caption{Comparison of accuracy, sparsity of filters and spikes between our method and the standard LTH method.}
\vspace*{-1mm}
\begin{adjustbox}{max width =0.93\linewidth}
\begin{tabular}{@{\extracolsep{4pt}}llccc}
\toprule
{Dataset} & {Method} & {Acc.(\%)} & {Sparsity(\%)} & {Sparsity(\%)}\\
& & & {(filters)} & {(spikes)}\\
\midrule
\multicolumn{5}{c}{{VGG-16}\cite{vgg}} \\
\midrule
{CIFAR10} & LTH\cite{kim2022exploring} & \textbf{91.0}& 98.2 & 84.8\\
& {u-Ticket} (ours) & 90.7 & \textbf{98.4} & \textbf{85.9} \\
\addlinespace[0.4em]
{FMNIST} & LTH\cite{kim2022exploring} & \textbf{94.6} & 98.2 & \textbf{83.9}\\
& {u-Ticket} (ours) & 94.0 & \textbf{98.5} & 81.4\\
\addlinespace[0.4em]
{SVHN}& LTH\cite{kim2022exploring} & \textbf{95.5} & 98.2 & \textbf{84.9}\\
& {u-Ticket} (ours) & 94.8 & \textbf{98.5} & 80.1\\
\midrule
\multicolumn{5}{c}{{ResNet-19}\cite{resnet}} \\
\midrule
{CIFAR10} & LTH\cite{kim2022exploring} & \textbf{91.0} & 97.6 & 64.1\\
& {u-Ticket} (ours) & 90.3 & \textbf{98.4} & \textbf{68.3}\\
\addlinespace[0.4em]
{FMNIST} & LTH\cite{kim2022exploring} & \textbf{94.4} & 98.2 & 60.1\\
& {u-Ticket} (ours) & 93.3 & \textbf{99.0} & \textbf{62.9}\\
\addlinespace[0.4em]
{SVHN} & LTH\cite{kim2022exploring} & \textbf{95.1} & 97.6 & 63.6\\
& {u-Ticket} (ours) & 94.6 & \textbf{98.6} & \textbf{68.2}\\
\bottomrule
\end{tabular}
\label{valid_tab}
\end{adjustbox}
\end{table}
\begin{figure*}[t]
\begin{center}
\def0.5{0.5}
\begin{tabular}{@{}c@{}c@{}c@{}c}
\includegraphics[width=0.25\linewidth]{figure/new_work_results.pdf} &
\includegraphics[width=0.25\linewidth]{figure/new_idle_results.pdf}&
\includegraphics[width=0.25\linewidth]{figure/new_latency_results.pdf} &
\includegraphics[width=0.265\linewidth]{figure/utilization_result.pdf} \\
{\hspace{3mm}(a)} & {\hspace{3mm}(b)} &{ \hspace{3mm}(c)}& { \hspace{5mm}(d) } \\
\end{tabular}
\caption{The layerwise performance comparison between LTH and u-Ticket on four metrics, \textit{i}.\textit{e}., (a) work cycles, (b) idle cycles, (c) latency, (d) utilization. We conduct experiments with VGG-16 architecture on CIFAR10.
}
\label{fig:layerwise_speed}
\vspace{-5mm}
\end{center}
\end{figure*}
\subsection{Experimental Results}
\subsubsection{Validation Result}
We summarize the validation results in Table \ref{valid_tab}. The results confirm that our method works well for deep SNNs (less than $\sim $1\% accuracy drop). We also compare the sparsity of filters and spikes between these two methods. u-Ticket has a slightly higher filter sparsity, due to the extra reduction in weight connections to ensure balanced workloads for each PE. At the same time, u-Ticket keeps a similar level of spike sparsity on VGG-16 and has better spike sparsity on ResNet-19. While higher spike sparsity will bring better energy efficiency, a spike sparsity that is too high will cause an accuracy drop in deep SNNs \cite{li2021differentiable}. This explains the accuracy-sparsity tradeoff on ResNet-19 (on average 0.76\% accuracy drop with 3.5\% sparsity gain).
\subsubsection{Hardware Performance}
We consider four metrics in this section (\textit{i}.\textit{e}., work cycles, idle cycles, latency, and utilization).
\begin{itemize}
\item \textbf{Work cycles} ($N_{work}$ in Eqn. 4): Sum of total work cycles for every PE across all the layers in the network.
\item \textbf{Idle cycles} ($N_{idle}$ in Eqn. 4): Sum of total idle cycles for every PE across all the layers in the network.
\item \textbf{Latency}: Time required by PEs to process all the layers in the network. The latency is normalized with respect to the time required for a PE to process one input spike.
\item \textbf{Utilization}: We use Eqn. \ref{eq:imbalance} to compute the utilization for each layer.
To compute the utilization of the network, we calculate the weighted average utilization.
\end{itemize}
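Under the definitions above, the network-level numbers can be sketched with a minimal model that assumes each layer's latency is set by its busiest PE; the exact weighting in Eqn. \ref{eq:imbalance} may differ from the capacity weighting used here:

```python
# Sketch of the four reported hardware metrics for a multi-layer network.
# Each inner list holds the per-PE work cycles of one layer; PEs within a
# layer run in parallel, so the layer's latency is its busiest PE.
def network_metrics(per_layer_pe_loads):
    total_work = total_idle = total_latency = 0
    for loads in per_layer_pe_loads:
        busiest = max(loads)
        total_work += sum(loads)
        total_idle += busiest * len(loads) - sum(loads)
        total_latency += busiest
    # capacity-weighted average utilization across layers
    utilization = total_work / (total_work + total_idle)
    return total_work, total_idle, total_latency, utilization

work, idle, latency, util = network_metrics([[10, 10], [30, 10]])
```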
The hardware improvement results are summarized in Table \ref{speed_tab}. By iteratively applying the utilization recovery during pruning, u-Ticket recovers the utilization up to $100\%$ in the final pruning round, thus eliminating almost all the idle cycles for PEs. Because the workloads are rebalanced among PEs, the network can exploit more parallelism from the PE array, significantly reducing the running latency. The number of work cycles stays similar on both networks. We further visualize the layerwise speedup results for VGG-16 on CIFAR10 in Fig. \ref{fig:layerwise_speed}. Overall, the layerwise work cycles and latency share similar trends between the two methods. Furthermore, u-Ticket removes more idle cycles in earlier layers due to their larger feature map sizes.
\begin{table}[t!]
\centering
\caption{Comparison of work cycles, idle cycles, latency, and utilization between u-Ticket and the standard LTH.}
\vspace*{-1mm}
\begin{adjustbox}{max width =\linewidth}
\begin{tabular}{@{\extracolsep{4pt}}llcccc}
\toprule
{Dataset} & {Method} &{Work} & {Idle} & {Latency} & {Utilization} \\
& & \textbf{($\times 1e8$)} & \textbf{($\times 1e8$)} & \textbf{($\times 1e8$)} & \\
\midrule
\multicolumn{6}{c}{{VGG-16}\cite{vgg}} \\
\midrule
{CIFAR10} & LTH\cite{kim2022exploring} & {1.34} & 0.41 & 0.11 & 0.59\\
& {u-Ticket} (ours) & \textbf{0.94} & \textbf{0.00} & \textbf{0.06} & \textbf{1.00} \\
\addlinespace[0.4em]
{FMNIST} & LTH\cite{kim2022exploring} & 1.10 & 0.42 & 0.10 & 0.57\\
& u-Ticket (ours)& \textbf{0.81} & \textbf{0.00} & \textbf{0.05} & \textbf{1.00}\\
\addlinespace[0.4em]
{SVHN} & LTH\cite{kim2022exploring} & 1.15 & 0.69 & 0.12 & 0.47\\
& {u-Ticket} (ours) & \textbf{0.86} & \textbf{0.00} & \textbf{0.05} & \textbf{1.00}\\
\midrule
\multicolumn{6}{c}{{ResNet-19}\cite{resnet}} \\
\midrule
{CIFAR10} & LTH\cite{kim2022exploring} & 1.66 & 1.73 & 0.21 &0.31 \\
& {u-Ticket} (ours) & \textbf{1.10} & \textbf{0.00} & \textbf{0.07} & \textbf{1.00}\\
\addlinespace[0.4em]
{FMNIST} & LTH\cite{kim2022exploring} & 1.26 & 1.89 & 0.20 & 0.27\\
& {u-Ticket} (ours) & \textbf{0.73} & \textbf{0.00} & \textbf{0.05} & \textbf{1.00}\\
\addlinespace[0.4em]
{SVHN} & LTH\cite{kim2022exploring} & 1.27 & 1.34 & 0.16 & 0.30\\
& {u-Ticket} (ours) & \textbf{0.84} & \textbf{0.00} & \textbf{0.05} & \textbf{1.00}\\
\bottomrule
\end{tabular}
\label{speed_tab}
\end{adjustbox}
\end{table}
\subsubsection{Energy Performance}
In this section, we further show the energy efficiency improvements of u-Ticket over the standard LTH baseline. The energy differences are visualized in Fig. \ref{fig:energy_rsults_ablation} (a), from which we observe that balancing the workloads yields substantial energy savings. For CIFAR10, FMNIST, and SVHN, we reduce the energy cost by $41.8\%$, $35.4\%$, and $37.2\%$ on VGG-16, and $55.5\%$, $63.8\%$, and $56.1\%$ on ResNet-19, respectively.
The main source of energy cost reduction comes from the elimination of idle cycles and the reduction of latency, which ultimately reduces the leakage energy of the hardware. ResNet-19, whose network is deeper, suffers more from the workload imbalance problem and thus has more idle cycles and longer latency compared to VGG-16. By eliminating almost all the idle cycles, u-Ticket brings more energy cost reduction to ResNet-19 compared to VGG-16.
\subsubsection{Analysis of Sparsity} We study the effects of the u-Ticket method under different weight sparsity. We measure the energy difference between u-Ticket and the LTH baseline at different pruning rounds for both ResNet-19 and VGG-16 on the CIFAR10 dataset. The result is visualized in Fig. \ref{fig:energy_rsults_ablation} (b). As observed, with an increase in weight sparsity, the benefits of using u-Ticket get larger. This is due to the degradation of the utilization in LTH as aforementioned in Fig. \ref{lth_problem}.
\subsubsection{Analysis of \#PEs} We further study the effects of changing the number of PEs. We run u-Ticket for VGG-16 on CIFAR10 with 2, 4, 8, 16, 32, and 64 PEs and report the results in Table~\ref{tab:exp:numpe_ablation}. While the energy cost changes only slightly with an increasing number of PEs, the latency decreases roughly linearly. Considering that the area of the PE array also grows linearly with the number of PEs, we conduct most of our experiments with 16 PEs, which is a suitable trade-off point.
\begin{table}[t]
\centering
\caption{Changes in normalized energy cost and normalized latency as the total number of PEs increases. Results are normalized with respect to the energy and latency of 2 PEs.}
\vspace*{-1mm}
\begin{adjustbox}{max width =\linewidth}
\begin{tabular}{@{\extracolsep{4pt}}lccccccc}
\toprule
Number of PEs & 2 & 4 & 8 & 16& 32& 64\\
\midrule
Normalized Energy & 1 & 1.01 & 1.02 & 0.95 & 1.02 & 1.04\\
Normalized Latency & 1 & 0.49 & 0.25 & 0.12 & 0.06 & 0.03\\
\bottomrule
\end{tabular}
\label{tab:exp:numpe_ablation}
\end{adjustbox}
\end{table}
\begin{figure}[t]
\begin{center}
\def0.5{0.5}
\begin{tabular}{@{}c@{}c}
\includegraphics[width=0.59\linewidth]{figure/new_energy_diff.pdf} &
\includegraphics[width=0.42\linewidth]{figure/new_gain_sparsity_new.pdf}
\\
{ (a)} & { (b)}
\end{tabular}
\caption{(a) Comparison of the normalized energy cost between two networks and across three datasets. The energy results are normalized to the energy required by a PE to process one input spike.
(b) Percentage of normalized energy reduction compared to the LTH baseline for different weight sparsity.
}
\vspace{-4mm}
\label{fig:energy_rsults_ablation}
\end{center}
\end{figure}
\begin{figure}[t]
\centerline{\includegraphics[width=0.9\linewidth]{figure/new_pie_pe.pdf}}
\vspace*{-3mm}
\caption{Comparison of the energy breakdown between u-Ticket and the LTH baseline. MAC\_L, MEM\_L, and LIF\_L denote the leakage energy for MAC, memory, and LIF operations, while MAC\_D, MEM\_D, and LIF\_D denote their dynamic energy.}
\label{breakdown}
\end{figure}
\subsubsection{Energy Breakdown} In Fig. \ref{breakdown}, we show the energy breakdown comparison between u-Ticket and the LTH baseline on ResNet-19 for the CIFAR10 dataset. The energy components are the dynamic and leakage energy of MAC operation, LIF operation, and MEM operation (reading of SRAM-based scratchpad). We observe that the leakage energy for both MAC and LIF operation is significantly reduced in u-Ticket due to the elimination of the idle cycles. Expectedly, the portion of the dynamic energy of MAC and LIF operation increases.
\begin{figure}[t]
\begin{center}
\def0.5{0.5}
\begin{tabular}{@{}c@{}c}
\includegraphics[width=0.52\linewidth]{figure/memory_accesses_new.pdf} &
\includegraphics[width=0.48\linewidth]{figure/total_e_break.pdf}
\\
{ (a)} & { (b)}
\end{tabular}
\caption{(a) Normalized DRAM and SRAM accesses comparison across different weight sparsity. (b) The component breakup of the total energy for LTH baseline with 95\% sparsity. Both results are shown for VGG-16 with CIFAR10.
}
\vspace*{-1em}
\label{fig:mem_comp}
\end{center}
\end{figure}
\subsubsection{System Level Study}
Finally, we study the behavior of the overall system of sparse SNNs. In Fig. \ref{fig:mem_comp} (a), we show how the total DRAM and SRAM accesses (normalized with respect to a dense SNN) decrease with increasing weight sparsity. Furthermore, we find that in the extremely high weight sparsity regime, the PE-level energy starts to take a significant portion of the total energy ($\sim$ 45\% on VGG-16 with CIFAR10). As a result, after applying u-Ticket to balance the PE workloads, we reduce approximately $19\%$ of the total energy at the system level, as shown in Fig. \ref{fig:mem_comp} (b).
\section{Conclusion}
In this work, we propose u-Ticket, a utilization-aware LTH-based pruning method that solves the workload imbalance problem in SNNs.
Unlike prior works, u-Ticket recovers the utilization during pruning, thus avoiding additional hardware to balance the workloads during deployment. Additionally, at iso-accuracy, u-Ticket improves PE utilization by up to 100\% compared to the standard LTH-based pruning method while maintaining a filter sparsity of 98\%. Moreover, u-Ticket reduces the running latency by up to 77\% and the energy cost by up to 64\% compared to the standard LTH baseline.
\section{Introduction}
\label{sec:Introduction}
After the discovery of the Higgs boson in 2012 by the ATLAS and CMS
experiments~\cite{Aad:2012tfa, Chatrchyan:2012xdj}, the experimental
Higgs effort has transitioned to a full-fledged program of Higgs
characterization and precision measurements of its couplings to
Standard Model (SM) particles. The direct observation of the Higgs to
vector bosons has been established at high
significance~\cite{Aad:2015gba, Khachatryan:2014jba,
Khachatryan:2016vau}, while decays to taus and bottom quarks have
yet to reach discovery significance and direct knowledge about the
couplings of the Higgs to first and second generation fermions is
utterly lacking.
The most straightforward information about light generation Yukawas
would come from direct decays of the Higgs. While these are certainly
viable possibilities for the charged leptons~\cite{Aad:2014xva,
Khachatryan:2014aep}, the inability to distinguish light
quark-initiated jets from each other renders this avenue a practical
impossibility, with the notable exception of charm tagging. A few
studies~\cite{Delaunay:2013pja, Perez:2015aoa, Perez:2015lra} have
investigated the prospects for identifying direct decays of Higgs to
charm jets, where bottom- and charm-jet tagging work in tandem to
disentangle enhanced bottom and charm Yukawa couplings.
Aside from direct decays of the Higgs to light quark jets, the other
possibilities for measuring light quark Yukawa couplings come from
charm--Higgs associated production~\cite{Brivio:2015fxa}, which also
requires a careful calibration of charm jet tagging efficiencies and a
precise determination of Higgs and associated jet backgrounds. The
practical applicability of this technique is not well established,
however, since a systematic treatment of Higgs and non-Higgs
backgrounds is still absent.
An enhanced light quark Yukawa can also lead to significant effects in
rare Higgs decays to quark--anti-quark mesons and vector
bosons~\cite{Isidori:2013cla, Bodwin:2013gca, Kagan:2014ila,
Koenig:2015pha}. The impressive control of theoretical uncertainty
in these calculations and the corresponding proof of principle
searches for such rare decays from $Z$ and Higgs
bosons~\cite{Aad:2015sda, Khachatryan:2015lga, Aaboud:2016rug} make it
an interesting channel to pursue. In these channels, though,
interpreting a deviation from the SM expectation would require
knowledge of the Higgs vertices in the so-called indirect
contributions. A deviation in the rate for $h \to J/\Psi \gamma$, for
example, could be attributed to a nonstandard effective coupling of
the Higgs to two photons as well as the charm Yukawa coupling. Hence,
the realistic sensitivity of these rare Higgs decays to nonstandard
light quark Yukawas suffers not only from the small expected SM rates,
but also because the indirectness of the probe necessitates a
combination with other Higgs measurements.
Nevertheless, the power of combined fits to Higgs signal strengths
cannot be discounted as an important tool in constraining nonstandard
Yukawa couplings~\cite{Perez:2015aoa, Kagan:2014ila, Meng:2012uj}.
Such combined fits, however, are handicapped by the inability to
determine the total width of the Higgs and thus require
model-dependent assumptions in order to extract Higgs
couplings~\cite{Dawson:2013bba}. For example, the possibility of
exotic production modes of the Higgs boson contaminating the Higgs
dataset~\cite{Yu:2014mda} would introduce new physics parameters
outside of the coupling deviation framework, spoiling the entire
applicability of the $\kappa$-framework.
We see that many of the proposed tests of non-standard Yukawa
couplings have varied difficulties in experimental applicability or
theoretical interpretation. While direct decay tests are best and
subject to the least theoretical bias, the only potentially viable
channel is the $h \to c\bar{c}$ decay. Production tests, like
measuring $hc + h\bar{c}$ production, are fraught with many
backgrounds and experimental challenges such as charm tagging.
Indirect tests, whether via Higgs rare decays to quantum
chromodynamics (QCD) mesons and vectors or combined coupling fits to
Higgs data, are most robust when conducted as consistency tests of the
SM.
In the spirit of offering new channels for probing the Standard Model
Yukawa couplings, we motivate the charge asymmetry in vector boson
associated Higgs production at the LHC. As a proton-proton machine,
the LHC handily favors $W^+ h$ production over $W^- h$ production,
mainly through the Higgsstrahlung process $q q' \to W^{\pm*} \to W^\pm
h$. At the 14~TeV LHC, for example, with $m_H = 125.09$~GeV,
$\sigma(W^+ h) / \sigma(W^- h) = 1.56$~\cite{YellowReport,
Heinemeyer:2013tqa}. We point out, however, that this inclusive
charge asymmetry is dramatically changed if the light SM quarks have
large Yukawa couplings. Concomitant effects from large light quark
Yukawa couplings, such as $q\bar{q}$ $s$-channel Higgs production and
a rapid increase in the total Higgs width, provide additional channels
for indirectly constraining enhanced quark Yukawas.
In~\secref{Yukawas}, we provide a theory motivation and background on
Yukawa coupling deviations. In~\secref{charge}, we discuss the charge
asymmetry of $pp \to W^\pm h$ production in the SM and the
modifications induced by anomalous light quark Yukawa couplings. We
then present a collider analysis for same-sign leptons targeting the
$W^\pm h$ charge asymmetry measurement in~\secref{collider},
demonstrating that the charge asymmetry can be measured at the LHC to
subpercent accuracy. We proceed to discuss other phenomenological
consequences of enhanced light quark Yukawa couplings and their
constraints in~\secref{signalstrength}. We conclude
in~\secref{conclusions}.
\section{Yukawa deviations}
\label{sec:Yukawas}
The question of fermion mass generation is a central aspect of the
structure of the Standard Model. A nonstandard Yukawa coupling in the
SM Lagrangian leads to unitarity violation for $f \bar{f} \to VV$
scattering amplitudes. In the Higgs post-discovery phase, and in the
absence of direct knowledge of the Yukawa coupling for a given SM
fermion $f$, we can calculate a unitarity bound from $f \bar{f} \to
W^+ W^-$ scattering~\cite{Appelquist:1987cf} by requiring the partial
amplitude satisfies unitarity, $|a_0| \leq 1/2$. The scale of
unitarity violation is then given by
\begin{equation}
E_f \simeq \dfrac{8 \pi v^2 \xi}{|m_f - y_f v|} \ ,
\label{eqn:unitarity}
\end{equation}
where $v = 246$~GeV is the Higgs vev, $\xi = 1/\sqrt{3}$ for quarks
and $\xi = 1$ for charged leptons. This unitarity violation is a
general feature in theories with chiral fermion masses arising from
spontaneous symmetry breaking if the fermion mass is mismatched with
its Yukawa coupling. A stronger bound on $E_f$ can be found by
studying $f \bar{f}$ scattering to arbitrary numbers of longitudinal
modes of electroweak bosons~\cite{Dicus:2005ku}.
Resolving the mass-Yukawa coupling mismatch necessarily requires
either new sources of $SU(2)_L$ breaking beyond the Higgs vacuum
expectation value (vev) or new matter fermions which mix with the SM
fermions. Such completions would add new diagrams to the partial wave
amplitude calculated above in precisely the necessary manner to remove
the $\sqrt{s}$ growth in the amplitude.
We note that regardless of the source of the new sources of Yukawa
deviations, the unitarity bound can be far beyond the reach of the
LHC. For example, light quarks with $\mathcal{O}(1)$ Yukawa couplings
(which requires fine-tuning of SM and new physics Lagrangian
parameters to reproduce the physical light quark masses) motivate $E_f
\sim 3.6$~TeV as the scale of unitarity breakdown. Although such a
fine-tuned light quark mass is aethestically unappealing, such a
mismatch between the quark mass and the Higgs Yukawa coupling cannot
be discounted from collider searches for heavy fermions, seeing that
limits on vector-like top partners reach only the $1$~TeV
scale~\cite{Aad:2015kqa, Khachatryan:2015oba}.
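The quoted $3.6$~TeV scale is a direct evaluation of \eqnref{unitarity}; a quick numerical check, with the up-quark mass and an $\mathcal{O}(1)$ Yukawa assumed purely for illustration:

```python
import math

v = 246.0  # Higgs vev in GeV

def unitarity_scale(m_f, y_f, quark=True):
    # E_f = 8*pi*v^2*xi / |m_f - y_f*v|, with xi = 1/sqrt(3) for quarks
    # and xi = 1 for charged leptons, as in the text.
    xi = 1.0 / math.sqrt(3.0) if quark else 1.0
    return 8.0 * math.pi * v**2 * xi / abs(m_f - y_f * v)

# O(1) Yukawa for a light quark; the quark mass is negligible next to y_f*v.
E = unitarity_scale(m_f=0.0013, y_f=1.0)  # in GeV, ~3.6e3, i.e. ~3.6 TeV
```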
The unitarity bound and inadequacy of the ad-hoc renormalizable
Lagrangian can be simultaneously cast into more familiar language by
appealing to dimension-6 effective operators for Higgs physics. Here,
the SM provides the usual dimension-4 couplings that preserve the
mass-coupling relation expected in SM physics, but the fermion masses
and their Yukawa couplings get additional contributions from
dimension-6 operators. We have
\begin{eqnarray}
\mathcal{L} &\supset& y_u \bar{Q}_L \tilde{H} u_R + y_u' \frac{H^\dagger H}{\Lambda^2} \bar{Q} \tilde{H} u_R \nonumber \\
&+& y_d \bar{Q}_L H d_R + y_d' \frac{H^\dagger H}{\Lambda^2} \bar{Q} H d_R + \text{ h.c.} \ ,
\label{eqn:Lagrangian}
\end{eqnarray}
where $y_u$, $y_u'$, $y_d$ and $y_d'$ are $3\times 3$ matrices in the
flavor space of $Q_L$, $u_R$, and $d_R$. The flavor rotations of $Q_L
= (u_L \ d_L)^T$, $u_R$, and $d_R$ are then used to ensure the mass
matrices,
\begin{equation}
m_f = \frac{y_f v}{\sqrt{2}} + \frac{y_f' v^3}{2 \sqrt{2} \Lambda^2} \ ,
\end{equation}
are diagonal, with $f$ denoting up-type or down-type quarks, and we
have expanded $H = \frac{1}{\sqrt{2}} (h + v)$ about its vev.
Importantly, these flavor rotations do not guarantee in general that
the Yukawa matrices
\begin{equation}
\dfrac{y_{f, \text{ eff}}}{\sqrt{2}} = \frac{y_f}{\sqrt{2}} + \frac{3y_f' v^2}{2 \sqrt{2} \Lambda^2} = \frac{m_f}{v} + \frac{2 y_f' v^2}{2 \sqrt{2} \Lambda^2} \ ,
\label{eqn:Yukawa}
\end{equation}
are diagonal. Simultaneous diagonalization of $m_f$ and $y_f'$ is not
guaranteed unless they are aligned, and hence without additional
assumptions, the Yukawa terms in dimension-6 Higgs effective theory
are expected to introduce flavor-changing Higgs couplings. Moreover,
phases in $y_f'$ are not guaranteed to vanish, so we also expect $CP$
violation in Higgs couplings (the overall phase in each Yukawa matrix
is not observable). Bounds on both flavor-changing Higgs couplings
and $CP$-violating couplings can be obtained from studying meson
mixing~\cite{Dery:2014kxa, Benitez-Guzman:2015ana} and electron and
neutron dipole moment constraints~\cite{Brod:2013cka}.
Nevertheless, a large, enhanced diagonal coupling for fermions is
readily achieved from~\eqnref{Yukawa}. Note that for $y_u'$, $y_d'
\sim \text{ diag}(\mathcal{O}(1))$ and $\Lambda \sim
\mathcal{O}(1~\text{TeV})$, we obtain Yukawa enhancements $\kappa$ of
$\mathcal{O}(10^3-10^4)$ for first generation quarks,
$\mathcal{O}(10^2)$ for second generation quarks, and
$\mathcal{O}(10^2-10^{0})$ for third generation quarks, precisely
reflecting the universality of the dimension-6 Higgs $H^\dagger H /
\Lambda^2$ operator compared to the hierarchical structure of the SM
Yukawa matrix.
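These orders of magnitude follow from~\eqnref{Yukawa}; the sketch below evaluates $\kappa_f = 1 + y_f' v^3 / (\sqrt{2}\,\Lambda^2 m_f)$ with $y_f' = 1$ and $\Lambda = 1$~TeV (illustrative choices), using the running masses at $\mu = 125$~GeV quoted later in the text:

```python
import math

v, Lam = 246.0, 1000.0  # GeV; Lambda = 1 TeV is an illustrative choice
# MS-bar masses at mu = 125 GeV (GeV), as extracted later in the text:
masses = {"u": 0.00131, "d": 0.00273, "s": 0.054, "c": 0.634, "b": 2.79}

def kappa(m_f, y_prime=1.0):
    # y_eff/sqrt(2) = m_f/v + y'*v^2/(sqrt(2)*Lambda^2)
    # => kappa = y_eff/y_SM = 1 + y'*v^3/(sqrt(2)*Lambda^2*m_f)
    return 1.0 + y_prime * v**3 / (math.sqrt(2.0) * Lam**2 * m_f)

enhancements = {q: kappa(m) for q, m in masses.items()}
# first generation ~ 10^3-10^4, strange ~ 10^2, charm and bottom ~ 10^0-10^2
```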
\section{$W^+ h$ vs. $W^- h$ charge asymmetry}
\label{sec:charge}
In the Standard Model, inclusive $W^\pm h$ production exhibits a
charge asymmetry of 21.8\% at the $\sqrt{s} = 14$~TeV
LHC~\cite{YellowReport, Heinemeyer:2013tqa}. This charge asymmetry
directly results from the inequality of the LHC $pp$ parton
distribution functions (PDFs) under charge conjugation. The tree
level diagrams for $W^\pm h$ production are shown
in~\figref{diagrams}, and in the SM, the Higgsstrahlung diagrams are
completely dominant compared to the Yukawa-mediated diagrams. As a
result, the mismatch between $u \bar{d}$ vs.~$\bar{u} d$ PDFs at the
LHC drives the bulk of the charge asymmetry, which is ameliorated by
the more symmetric $c \bar{s}$ vs.~$\bar{c} s$ PDFs. The
Cabibbo-suppressed contributions from $u \bar{s}$ vs.~$\bar{u} s$ and
$c \bar{d}$ vs.~$\bar{c} d$ PDFs also enhance and dilute,
respectively, the charge asymmetry.
\begin{figure}[tb!]
\begin{center}
\includegraphics[scale=0.35, angle=0]{wpHiggs_Higgsstrahlung.eps}
\includegraphics[scale=0.35, angle=0]{wmHiggs_Higgsstrahlung.eps} \\
\vspace{0.4cm}
\includegraphics[scale=0.40, angle=0]{wpHiggs_tchannel.eps}
\includegraphics[scale=0.40, angle=0]{wmHiggs_tchannel.eps}
\caption{Leading order $W^+ h$ (left column) and $W^- h$ (right
column) production diagrams, showing the Higgsstrahlung process (top
row) and Yukawa-mediated contributions (bottom two rows).}
\label{fig:diagrams}
\end{center}
\end{figure}
Enhanced light quark Yukawa couplings cause the inclusive $W^\pm h$
charge asymmetry to deviate significantly from the SM expectation.
For very large Yukawa enhancements, we can neglect the Higgsstrahlung
diagrams in~\figref{diagrams} and focus on the Yukawa-mediated
diagrams. If the charm Yukawa dominates the other couplings, then the
$c \bar{s}$ vs.~$\bar{c} s$ PDFs symmetrize $W^\pm h$ production, and
the overall charge asymmetry even turns negative from the residual $c
\bar{d}$ vs.~$\bar{c} d$ PDFs. Similarly, an enhanced strange Yukawa
drives the balanced $c \bar{s}$ vs.~$\bar{c} s$ PDFs to dominate
$W^\pm h$ production, while the Cabibbo-suppressed $u \bar{s}$
vs.~$\bar{u} s$ initial states still retain a positive asymmetry.
Finally, large down and up quark Yukawas actually enhance the positive
charge asymmetry beyond the SM expectation, since the ameliorating
effects from second generation quarks in the proton PDFs are weakened.
We adopt the usual $\kappa$ notation to describe rescalings of the
Higgs Yukawa couplings to the first and second generation quarks,
$y_{f, \text{ eff}} = \kappa_f y_{f, \text{ SM}}$ for $f$ = $d$, $u$,
$s$, or $c$. Throughout this work, we will only consider one Yukawa
deviation at a time and will comment briefly in the conclusions about
simultaneous deviations in multiple Yukawa couplings. For
convenience, we also use the $\bar{\kappa}_f$ normalization, which
rescales $\kappa_f$ into units of $y_b^{\text{SM}}$ evaluated at $\mu =
125$~GeV:
\begin{equation}
\bar{\kappa}_f \equiv \dfrac{m_f (\mu = 125~\text{GeV})}{m_b (\mu =
125~\text{GeV})} \kappa_f \ .
\label{eqn:kappabar}
\end{equation}
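A short numerical illustration of this rescaling, using the $\mu = 125$~GeV masses extracted below (the helper name is ours, chosen for illustration):

```python
# Rescale kappa_f into units of the SM bottom Yukawa at mu = 125 GeV,
# using the running masses quoted in the text (GeV).
m_at_125 = {"d": 0.00273, "u": 0.00131, "s": 0.054, "c": 0.634}
m_b_at_125 = 2.79

def kappa_bar(flavor, kappa_f):
    return m_at_125[flavor] / m_b_at_125 * kappa_f

# A bottom-strength strange Yukawa (kappa_bar_s = 1) corresponds to
# kappa_s = m_b/m_s, roughly 52.
```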
In~\figref{inclusivecharge}, we show the inclusive charge asymmetry
\begin{equation}
A = \dfrac{\sigma(W^+ h) - \sigma (W^- h)}{\sigma (W^+
h) + \sigma (W^- h)} \ ,
\end{equation}
for the 14~TeV LHC as a function of $\bar{\kappa}_f$ for individually
enhanced Yukawa couplings, $f = d$, $u$, $s$, and $c$. These results
were generated using MadGraph v2.4.3~\cite{Alwall:2014hca} where the
Yukawa couplings were implemented via a
FeynRules~\cite{Alloul:2013bka} model implementing automatic
next-to-leading order (NLO) quantum chromodynamics (QCD) corrections
at 1-loop from NLOCT v1.0~\cite{Degrande:2014vpa} interfaced with
the NNPDF2.3 NLO~\cite{Ball:2010de} PDF set. Yukawa couplings
were renormalized using the boundary values from the Particle Data
Group~\cite{Agashe:2014kda} and run to the Higgs mass with
RunDec~\cite{Chetyrkin:2000yt}. The boundary values are $m_d =
4.8$~MeV, $m_u = 2.3$~MeV, $m_s = 0.95$~GeV at $\mu = 2$~GeV, and $m_c
= 1.275$~GeV at $\mu = m_c$. We used a two-step procedure in the
renormalization group running to account for the change in the
$\alpha_s$ behavior at $b$-mass scale, $m_b = 4.18$~GeV at $\mu =
m_b$. The extracted SM quark masses at $\mu = 125$~GeV are $m_d =
2.73$~MeV, $m_u = 1.31$~MeV, $m_s = 54$~MeV, $m_c = 634$~MeV, and $m_b
= 2.79$~GeV, which are used in~\eqnref{kappabar} to rescale $\kappa_f$
to $\bar{\kappa}_f$. The Higgs coupling to $W$ bosons was fixed to
the SM value for this scan.
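As a consistency check of the quoted SM numbers, the inclusive cross-section ratio $\sigma(W^+ h)/\sigma(W^- h) = 1.56$ cited in the introduction reproduces the $21.8\%$ asymmetry directly:

```python
# The inclusive charge asymmetry expressed through the W+h / W-h ratio:
# A = (r - 1) / (r + 1) with r = sigma(W+h) / sigma(W-h).
def charge_asymmetry(ratio):
    return (ratio - 1.0) / (ratio + 1.0)

A_SM = charge_asymmetry(1.56)  # 0.21875, i.e. the quoted 21.8%
```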
While QCD theory uncertainties are formally expected to cancel out in
a charge asymmetry, since QCD interactions respect charge
conservation, the factorization of the $W^\pm h$ partonic hard process
from the parent protons spoils this expectation and hence scale and
PDF uncertainties will not generally cancel. We show the $1\sigma$
scale uncertainty for the whole range of $\bar{\kappa}_f$
in~\figref{inclusivecharge} as a shaded band. We also evaluated the
PDF uncertainty using a leading order calculation interfaced with the
leading order NNPDF2.3 and CTEQ6L~\cite{Nadolsky:2008zw} PDF sets.
The two PDF sets lead to an $\approx 1\%$ disagreement in the
asymptotic values of the charge asymmetry for very large individual
$\kappa_f$.
We remark that the statistical precision on the exclusive charge
asymmetry, which we propose to measure in~\secref{collider}, is
expected to be at the subpercent level, which we expect will improve
the overall status of PDF determinations at the
LHC~\cite{Rojo:2015acz}, regardless of the sensitivity to light quark
Yukawa couplings. Moreover, $W^\pm h$ measurements complement $W^\pm
Z$ and $W^\pm + $ jets measurements, and improved measurements of the
charge asymmetry in these separate channels will confirm or refute
whether $W^\pm h$ production is dominated by the light quarks as
expected in the SM.
\begin{figure}[tb!]
\begin{center}
\includegraphics[scale=0.60, angle=0]{AsymmetryNLO.eps}
\caption{Inclusive charge asymmetry $A = (\sigma(W^+ h) - \sigma(W^-
h)) / (\sigma(W^+ h) + \sigma(W^- h))$ at NLO QCD for the $\sqrt{s}
= 14$~TeV LHC as a function of individual Yukawa rescaling factors
$\bar{\kappa}_f$ for $f = u$ (red), $d$ (green), $s$ (blue), and $c$
(purple). Shaded bands correspond to scale uncertainties at $1\sigma$
from individual $\sigma(W^+ h)$ and $\sigma(W^- h)$ production,
which are conservatively taken to be fully uncorrelated. The gray
region shows the bound from the direct Higgs width measurement,
$\Gamma_H < 1.7$~GeV~\cite{Khachatryan:2014jba}, which excludes
$\bar{\kappa}_f > 25$ for each light quark flavor and is discussed
in~\secref{signalstrength}. The expected statistical error from
this measurement using 3 ab$^{-1}$ of LHC data is also shown.}
\label{fig:inclusivecharge}
\end{center}
\end{figure}
Measuring the asymmetry at the collider requires tagging the leptonic
decay of the $W$ boson and using a Higgs decay final state that
simultaneously tempers the background and retains sufficient
statistics to enable subpercent level accuracy. In this vein, very
clean Higgs decays, such as $h \to Z Z^* \to 4\ell$ or $h \to \gamma
\gamma$ are inadequate for this purpose because the expected SM rates
for $\sigma(W^\pm h) \times $ Br$(W^\pm \to \ell^\pm \nu) \times $
Br$(h \to 4\ell)$ or Br$(h \to \gamma \gamma)$ are not statistically
large. On the other hand, the largest SM Higgs decay channel, $h \to
b \bar{b}$, must contend with both the charge-symmetric semi-leptonic
$t \bar{t}$ background and the charge-asymmetric $W^\pm + $ jets
background: therefore, extracting the $W^\pm h$ charge asymmetry from
this Higgs final state will be challenging. An interesting decay is
$h \to \tau^+ \tau^-$, where improvements in hadronic and leptonic
$\tau$ decays have led to important evidence for the Higgs decays to
taus~\cite{Khachatryan:2016vau}. The efficacy of these Higgs
resonance reconstruction methods in the presence of an additional lepton
and neutrino, however, has not been demonstrated.
We instead explore a new Higgs process, $W^\pm h \to (\ell^\pm \nu)
(\ell^\pm \nu j j)$, taking advantage of the semi-leptonic decay of
the Higgs via $WW^*$. This process has a number of features
that make it attractive for measuring the $W^\pm h$ charge asymmetry.
First, this same-sign lepton final state inherits the same charge
asymmetry as the inclusive $W^\pm h$ process. Second, the leading
non-Higgs background processes for same-sign leptons are all
electroweak processes, in contrast to the $h \to b\bar{b}$ decay
discussed before. Finally, although the Higgs resonance is not
immediately reconstructible in this decay channel, we have a number of
kinematic handles to isolate the Higgs contribution to this final
state, which make it eminently suitable to extract the charge
asymmetry.
\section{Collider analysis: Same-sign leptons from associated $W^\pm h$
production}
\label{sec:collider}
Having motivated the possibility and importance of direct tests for
light quark Yukawa couplings via their effects in the charge asymmetry
of $W^\pm h$ production, we now present a search for $W^\pm h \to
\ell^\pm \ell^\pm \slashed{E}_T$ + 1 or 2 jets, with $\ell = e$ or
$\mu$, which can be a benchmark process for measuring the charge
asymmetry. We emphasize that the charge asymmetry measured in an
exclusive Higgs decya mode is at best considered a consistency test of
the Standard Model, since large Yukawa deviations in light quark
couplings will dilute the SM Higgs branching fractions, which we
address in~\secref{signalstrength}. Nevertheless, the charge
asymmetry of $W^\pm h$ production is a prediction of the Standard
Model that can be affected by deviations in light quark Yukawa
couplings.
The primary backgrounds for the $\ell^\pm \ell^\pm \slashed{E}_T$ + 1
or 2 jets signature are $W^\pm W^\pm jj$, $W^\pm Z$, with $Z \to
\ell^+ \ell^-$ and a lost lepton, and $W^+ W^-$ with charge
mis-identification. Note that all of these diboson backgrounds are
electroweak processes, giving the benefit that $W^\pm h$ signal rates
are roughly comparable to the background rates. On the other hand,
these backgrounds also have their own charge asymmetries, but these
can be probed via complementary hadronic channels, inverting selection
cuts, or data-driven techniques.
Other backgrounds we do not consider are fully leptonic $t\bar{t}$,
which we discard because it requires charge mis-identification and
would be killed by $b$-jet vetoes. The single vector boson
backgrounds, $W + $ jets and $Z + $ jets, are neglected because they
need a jet faking a lepton or in the case of the $Z$ with charge
mis-identification, would still reconstruct the $Z$ peak. We do not
consider hard bremsstrahlung with subsequent photon conversion, and we
neglect jet-faking-lepton rates, which eliminates QCD backgrounds.
Signal and background samples are generated for $\sqrt{s} = 14$~TeV
LHC using MadGraph 5 v2.2.1~\cite{Alwall:2014hca} at leading order in
QCD. Signal bosons are decayed on-shell via $W^\pm \to \ell^\pm \nu$
and $h \to \ell^\pm \nu j j$, where the lepton charges are chosen to
be the same, and $\ell = e$ or $\mu$. Backgrounds must pass the
preselection requirements of jet $p_T > 30$~GeV, lepton $p_T >
10$~GeV, and $\Delta R_{jj} > 0.2$. In the background samples, $\tau$
leptons are included in the boson decays, since softer leptonic decays
from $\tau$s can contaminate the signal region. We perform MLM
matching~\cite{Mangano:2001xp, Mangano:2002ea} for the $W^\pm Z$ and
$W^+ W^-$ backgrounds up to 1 jet, with the matching scale set to
30~GeV. Events are passed to Pythia v6.4~\cite{Sjostrand:2006za} for
showering and hadronization and then simulated using a mock detector
simulation based on ATLAS and CMS performance measurements using
electrons~\cite{Aad:2011mk}, muons~\cite{ATLAS:2011hga},
jets~\cite{ATLAS:2011qda}, and
$\slashed{E}_T$~\cite{Chatrchyan:2011tn}. We adopt an electron charge
mis-identification rate of 0.16\% for $0 < |\eta_e| < 1.479$ and 0.3\%
for $1.479 < |\eta_e| < 3$ and neglect muon charge
mis-identification~\cite{CMS-DP-2015-035}.
We calculate and apply flat NLO QCD $K$-factors using MCFM
v7.0~\cite{Campbell:1999ah, Campbell:2011bn, Campbell:2015qma} and
find $K = 1.71$ for $W^+ Z$, $K = 1.74$ for $W^- Z$, and $K = 1.55$
for $W^+ W^-$. The NLO QCD corrections to the $W^\pm W^\pm jj$
background have been calculated in Refs.~\cite{Jager:2009xx,
Melia:2010bm, Melia:2011gk}, from which we adopt a flat $K = 1.5$
factor. After preselection, $K$-factors, and specified leptonic
branching fractions, our background rates are 113~fb for $W^\pm W^\pm
jj$, 630~fb for $W^+ Z$, 440~fb for $W^- Z$, and 8.80~pb for $W^+ W^-$.
To enhance the $W^\pm h$ contribution to the final state, we select
exactly two same-sign leptons with $p_T > 10$~GeV, $|\eta| < 2.5$. We
then select either one or two jets with $p_T > 20$~GeV, $|\eta| <
2.5$, where jets are clustered using the anti-$k_T$
algorithm~\cite{Cacciari:2008gp} with $R = 0.4$ from FastJet
v3.1~\cite{Cacciari:2011ma}. We allow events with only one jet
because the second jet from the Higgs decay is too soft or merges with
the first jet a significant fraction of the time. Two-jet events are
required to be consistent with a hadronic $W$ candidate, $60$~GeV $<
m_{jj} < 100$~GeV. Since the subleading lepton typically arises from
the Higgs semileptonic decay, we require $m_{T, \text{ subleading }
\ell, jj} < 200$~GeV for two jet events. These cuts are summarized
in~\tableref{cutflow}.
\begin{table}[tbh]
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
& {\bf SM $W^\pm h$} & {\bf $W^\pm W^\pm j j$}
& {\bf $W^+ Z$} & {\bf $W^- Z$} & {\bf $W^+ W^-$} \\
Cross section, cut, survival efficiency
& 6.5 fb + 4.2 fb & 113 fb & 630 fb & 440 fb & 8.80 pb \\
\hline
Exactly two leptons, $p_T > 10$ GeV &
53.4\% & 32.6\% & 32.2\% & 31.9\% & 46.3\% \\
Same-charge leptons &
53.1\% & 31.7\% & 6.6\% & 6.6\% & 0.087\% \\
Either one or two jets, $p_T > 25$ GeV &
34.2\% & 22.5\% & 3.3\% & 3.4\% & 0.044\% \\
60 GeV $< m_{jj} < 100$ GeV &
28.1\% & 11.7\% & 2.6\% & 2.6\% & 0.029\% \\
$m_{T,\text{ subleading } \ell jj} < 200$ GeV &
25.1\% & 4.9\% & 2.1\% & 2.2\% & 0.022\% \\
Number of events &
496 + 312 & 1070 + 604 & 3960 + 11 & 10 + 2860 & 270 + 303 \\
\hline
Statistical significance, 300 fb$^{-1}$, $S / \sqrt{S+B}$ &
\multicolumn{5}{c|}{6.5$\sigma$, 4.9$\sigma \Rightarrow 8.1\sigma$} \\
\hline
\end{tabular}
\caption{Cut flow for same-sign leptons from $W^\pm h$ production,
where we denote the $++$ and $--$ contributions to the total number
of events separately.}
\label{table:cutflow}
\end{table}
Normalizing the signal to the SM expectation~\cite{YellowReport,
Heinemeyer:2013tqa}, $\sigma(W^+ h) \times $ Br$(W^+ \to \ell^+ \nu)
\times$ Br$(h \to \ell^+ \nu jj) = 6.5$~fb, $\sigma(W^- h) \times $
Br$(W^- \to \ell^- \bar{\nu}) \times$ Br$(h \to \ell^- \nu jj) =
4.2$~fb, where $\ell = e$ or $\mu$ only, we have a combined
statistical significance of $S / \sqrt{S + B} = 8.12\sigma$ from 300
fb$^{-1}$ of $14$~TeV LHC luminosity, and the individual $++$ and $--$
sign combinations are expected to reach $6.5\sigma$ and $4.9\sigma$,
respectively. Hence, this mode should provide practical discovery
sensitivity to $W^\pm h$ production compared to the null hypothesis.
Although this mode does not admit a resonant reconstruction of the
Higgs candidate, the presence of the two same-charge leptons with
manageable background rates makes it a uniquely robust analysis for
studying the $W^\pm h$ charge asymmetry.
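The quoted significances can be reproduced directly from the event counts listed in~\tableref{cutflow}; a minimal cross-check in Python (the counts are those tabulated at 300 fb$^{-1}$):

```python
from math import sqrt

# Event counts at 300 fb^-1 from the cut-flow table, split as (++, --).
signal = (496, 312)
backgrounds = {
    "WWjj": (1070, 604),
    "W+Z": (3960, 11),
    "W-Z": (10, 2860),
    "W+W-": (270, 303),
}

def significance(s, b):
    """Naive statistical significance S / sqrt(S + B)."""
    return s / sqrt(s + b)

s_pp, s_mm = signal
b_pp = sum(v[0] for v in backgrounds.values())
b_mm = sum(v[1] for v in backgrounds.values())

sig_pp = significance(s_pp, b_pp)                   # ~6.5 sigma (++ channel)
sig_mm = significance(s_mm, b_mm)                   # ~4.9 sigma (-- channel)
sig_comb = significance(s_pp + s_mm, b_pp + b_mm)   # ~8.1 sigma combined
```

Small deviations from the quoted values can arise from rounding of the tabulated counts.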
After our cuts, the $W^\pm h$ signal asymmetry is 22.8\%, while the
total charge asymmetry from background contamination is 16.8\%. A
more careful study of systematic effects, subleading backgrounds, and
further reduction of the diboson backgrounds in this channel is
certainly warranted but beyond the scope of this work. Optimized cuts
on the hadronic $W^{(*)}$ candidate from the Higgs signal would help
minimize the dominant charge-asymmetric $W^\pm Z$ background and
improve the signal to background discrimination. Moreover, binning
the charge asymmetry according to the leading lepton pseudorapidity
can help test enhanced Yukawa couplings via different admixtures of
underlying PDF contributions. We note, however, that this is a
relatively mild effect because the leading lepton does not always
originate from the associated parent $W$ and, in the case that one
Yukawa coupling is enhanced at a time, the signal $W^\pm h$ process is
still produced from combinations of valence and sea quark PDFs because
of the non-negligible Cabibbo angle.
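The signal and background asymmetries quoted above follow from the post-cut counts in~\tableref{cutflow}; a short check in Python (small differences with respect to the quoted numbers can arise from rounding of the tabulated counts):

```python
def charge_asymmetry(n_plus, n_minus):
    """A = (N++ - N--) / (N++ + N--)."""
    return (n_plus - n_minus) / (n_plus + n_minus)

# Post-cut event counts at 300 fb^-1 from the cut-flow table.
a_sig = charge_asymmetry(496, 312)                      # ~22.8% for W+-h
a_bkg = charge_asymmetry(1070 + 3960 + 10 + 270,        # summed ++ backgrounds
                         604 + 11 + 2860 + 303)         # ~16.8% for backgrounds
```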
We expect that future studies of additional reconstructable decay modes
of the Higgs, such as $h \to b \bar{b}$, $h \to \ell^+ \ell^- \nu \nu$
(via $Z Z^*$ or $W W^*$), $h \to \tau^+ \tau^-$, and $h \to \gamma
\gamma$, will also contribute to the overall sensitivity of measuring
the $W^\pm h$ charge asymmetry. Each of these modes requires,
however, a dedicated discussion of the charge asymmetries in their
dominant backgrounds, which is left for future work. We expect
that decay channels giving comparable discovery significance of the
$W^\pm h$ associated production mode will add further improvements to
an overall global fit of PDFs, if $W^\pm h$ production is assumed to
be SM-like, and simultaneously, be instrumental in testing for
nonstandard light quark Yukawa couplings.
Extrapolating to $3$ ab$^{-1}$, we find that the charge asymmetry of
the $W^\pm h$ process can be tested with a statistical precision of
$\approx 0.4\%$, which would be sensitive to higher order theory
uncertainties, including PDF errors, and experimental systematic
uncertainties, which we have neglected in this treatment. We note
that the statistical precision will be comparable to the QCD scale
uncertainty in the theory calculation and the expected
$\mathcal{O}(few)\%$ PDF uncertainty already with $300$ fb$^{-1}$ of
LHC luminosity using the current cuts. Rigorous optimization of this
analysis focusing on improving the signal to background ratio,
however, will avoid this sensitivity saturation to larger
luminosities. Moreover, improved measurements of charge asymmetries
in $W + $ jets and $W^\pm Z$ processes~\cite{Rojo:2015acz} will
further reduce the light quark PDF uncertainties, while measurements
of $W^\pm c$ rates with charm-tagging will significantly improve the
determination of $s$, $\bar{s}$ PDFs. Overall, the charge asymmetry
in the $W^\pm h$ channel complements the charge asymmetry measurements
in other $W^\pm$ production modes, and in the event of a discrepancy,
provides a direct diagnostic tool to test for enhanced Higgs
couplings to light quarks.
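The quoted statistical precision can be estimated from simple counting statistics, $\delta A \simeq \sqrt{(1-A^2)/N}$. The sketch below assumes the post-cut counts of~\tableref{cutflow} scale linearly from 300 fb$^{-1}$ to 3 ab$^{-1}$; since it neglects background subtraction and systematic effects, it slightly underestimates the quoted $\approx 0.4\%$:

```python
from math import sqrt

def asym_stat_error(n_plus, n_minus):
    """Binomial statistical uncertainty on A = (N+ - N-)/(N+ + N-)."""
    n = n_plus + n_minus
    a = (n_plus - n_minus) / n
    return sqrt((1.0 - a**2) / n)

# Counts at 300 fb^-1 (signal + background, from the cut-flow table),
# scaled by x10 to the 3 ab^-1 high-luminosity dataset.
scale = 10
n_pp = scale * (496 + 1070 + 3960 + 10 + 270)
n_mm = scale * (312 + 604 + 11 + 2860 + 303)

delta_a = asym_stat_error(n_pp, n_mm)   # ~0.3%, same order as the quoted ~0.4%
```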
We remark that for non-standard Yukawa couplings, the kinematic
distributions for $W^\pm h$ production are expected to change,
resulting in small differences in the quoted efficiencies. For
example, with $\kappa_d = 1000$ ($\kappa_u = 1000$) the final $W^\pm
h$ signal efficiency decreases to 24.8\% (24.5\%) compared to the SM
benchmark efficiency of 25.1\%.
\section{Phenomenology of Quark Yukawa couplings and current constraints}
\label{sec:signalstrength}
The set of Higgs measurements from the LHC and the Tevatron provide a
broad but patchwork picture of Higgs couplings constraints. We
emphasize that a direct measurement of Higgs couplings at the LHC is
not currently feasible since the total width of the Higgs is unknown,
and thus interpretation of Higgs measurements requires model assumptions
about the underlying Lagrangian dictating the Higgs couplings and
possible new light degrees of freedom. For example, the
$\kappa$-framework for studying Higgs coupling deviations is invalid
when new exotic modes for Higgs production are
accessible~\cite{Yu:2014mda}, which cause changes in signal efficiency
that are not captured by simple coupling rescalings.
\subsection{Total width constraints}
\label{sec:total}
The only direct test for enhanced light quark Yukawa couplings from
the LHC is the constraint from the direct measurement of the total
Higgs width. From the 7+8~TeV combined analyses using the $\gamma
\gamma$ and $4\ell$ channels, ATLAS reported a Higgs total width
$\Gamma_H$ constraint of 2.6~GeV at 95\% CL~\cite{Aad:2014aba} and CMS
reported a tighter bound of 1.7~GeV~\cite{Khachatryan:2014jba}. With
the latest 13~TeV data, CMS observed a bound of 3.9~GeV (expected
2.7~GeV) in the $4\ell$ channel~\cite{CMS:2016ilx} compared to a bound
of 3.4~GeV (expected 2.8~GeV) with the Run I
dataset~\cite{Chatrchyan:2013mxa}, indicating that lineshape
measurements of the Higgs have already saturated the resolution
expected from the LHC. We remark that the next-generation $e^+ e^-$
Higgs factory machines~\cite{Baer:2013cma, Gomez-Ceballos:2013zzn,
CEPC-SPPCStudyGroup:2015csa} will inaugurate the true precision era
of Higgs measurements by virtue of being able to tag Higgs-candidate
events via the recoil mass method, which can determine the SM Higgs
width with 2--5\% precision~\cite{Dawson:2013bba}. Since light
quarks are kinematically accessible decay modes of the 125~GeV Higgs,
however, the on-shell partial width of the Higgs to light quarks via
enhanced Yukawa couplings grows without bound for large $\bar{\kappa}$.
We can thus use the $\Gamma_H < 1.7$~GeV constraint from
CMS~\cite{Khachatryan:2014jba} to bound the individual light quark
Yukawa couplings:
\begin{equation}
\kappa_d < 27500, \quad \kappa_u < 57400, \quad \kappa_s < 1300, \quad
\kappa_c < 120 \ ,
\end{equation}
using the renormalized quark masses calculated from
RunDec~\cite{Chetyrkin:2000yt}. These translate to
\begin{equation}
\bar{\kappa}_f \lesssim 25 \ ,
\end{equation}
for each of the first or second generation light quarks, $f = d$, $u$,
$s$, or $c$. These bounds are indicated in the gray region
of~\figref{inclusivecharge}.
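These bounds can be approximately reproduced from the leading-order partial width $\Gamma(h \to q\bar{q}) = 3 \kappa_q^2 \bar{m}_q^2 m_H / (8\pi v^2)$ supplemented by the first-order QCD correction. The sketch below uses assumed $\overline{\text{MS}}$ quark masses at $\mu = m_H$ (approximate stand-ins for the RunDec values used in the text) and recovers the quoted bounds to within roughly 10\%:

```python
from math import pi, sqrt

V_EW = 246.22        # Higgs vev in GeV
M_H = 125.0          # Higgs mass in GeV
ALPHA_S = 0.113      # approximate alpha_s(m_H)
GAMMA_MAX = 1.7      # CMS direct total-width bound in GeV

# Approximate MSbar quark masses at mu = m_H, in GeV (assumed values).
m_q = {"d": 2.7e-3, "u": 1.3e-3, "s": 0.055, "c": 0.62}

def gamma_qq(mass, kappa=1.0):
    """LO Gamma(h -> q qbar) with the first-order QCD correction factor."""
    lo = 3.0 * M_H * mass**2 / (8.0 * pi * V_EW**2)
    return kappa**2 * lo * (1.0 + 5.67 * ALPHA_S / pi)

# Largest kappa_q such that Gamma(h -> q qbar) alone saturates the width bound.
kappa_max = {q: sqrt(GAMMA_MAX / gamma_qq(m)) for q, m in m_q.items()}
```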
If we recast the latest indirect measurements of the Higgs width
$\Gamma_H < 41$~MeV~\cite{CMS:2016ilx}, obtained from ratios of
Higgs-mediated events in $gg \to ZZ \to 4\ell$ production in off-shell
vs.~on-shell Higgs regions~\cite{Kauer:2012hd, Caola:2013yja,
Campbell:2013una}, we find $\bar{\kappa}_f \lesssim 4$. This bound
depends, however, on model assumptions about the behavior of Higgs
couplings in the off- and on-shell regions, controlled theory
uncertainties in the NLO QCD corrections to the interference between
the $gg \to ZZ$ box diagram and the Higgs amplitude, and fixing all
other Higgs partial widths to their SM values. Referring
to~\figref{inclusivecharge}, this current bound still permits a
percent-level deviation in the inclusive charge asymmetry, which we
expect is measurable with the full dataset of the LHC. In our view,
the indirect width measurement of the Higgs and the charge asymmetry
measurement are equally valid as consistency tests of the Standard
Model Higgs, and we strongly advocate for the charge asymmetry test in
future LHC Higgs analyses.
\subsection{Inclusive charge asymmetry}
\label{sec:inclusiveasym}
At the fully inclusive level, the Higgs Yukawa couplings can be tested
via the proposed charge asymmetry measurement. While more stringent
constraints on the light quark Yukawa couplings can be obtained from
global fits combining all Higgs data, these global fits suffer from
the requirement of a theoretical model dependence, most commonly the
$\kappa$ framework.
We point out, however, that absent deviations in light quark Yukawa
couplings the fully inclusive charge asymmetry also provides a
model-independent measurement of the Higgs coupling to $W$ bosons.
Fully inclusive Higgs production processes are not normally considered
at hadronic colliders because of the inability to ascertain the Higgs
contribution independent of the Higgs decay mode. This is analogous
to the recoil mass method advocated for $e^+ e^-$ Higgs factories,
which allows a fully inclusive rate measurement sensitive to the $hZZ$
coupling. At the moment, though, there is no practical proposal for
measuring such an inclusive variable in any Higgs process and all
Higgs data stems from analyses for specific Higgs decays, and so the
intriguing possibility of a fully inclusive Higgs measurement to
extract a Higgs production coupling remains remote.
\subsection{Exclusive Higgs measurements and current constraints}
\label{sec:exclusive}
In~\eqnref{Lagrangian}, we only introduced new physics operators that
modified the mass generation and Yukawa couplings of the SM quarks,
leaving the Higgs-vector couplings untouched. As a result, enhanced
Yukawa couplings lead to increased rates for $\sigma (q q' \to W^\pm
h)$ and $\sigma (q \bar{q} \to h)$ production, but the effective
signal strengths $\mu_{Wh}$ and $\mu_{gg}$ of exclusive Higgs decays
to a particular $X$ final state are depleted according to
\begin{eqnarray}
\mu_{Wh} (h \to X) &=& \dfrac{ \left(\sigma_{Wh}^{\text{NP}} \right)}{
\left(\sigma_{Wh}^{\text{SM}} \right)} \times \dfrac{ \Gamma (h \to
X)^{\text{NP}} / \Gamma_H^{\text{NP}}}{\Gamma (h \to
X)^{\text{SM}} / \Gamma_H^{\text{SM}} } \ , \\
\mu_{gg} (h \to X) &=& \dfrac{ \left(\sigma_{gg}^{\text{NP}} +
\sigma_{qq}^{\text{NP}} \right)}{ \left(\sigma_{gg}^{\text{SM}}
\right)} \times \dfrac{ \Gamma (h \to X)^{\text{NP}} /
\Gamma_H^{\text{NP}}}{\Gamma (h \to X)^{\text{SM}} /
\Gamma_H^{\text{SM}} } \ ,
\label{eqn:signalstrength}
\end{eqnarray}
where we have included $s$-channel $q\bar{q}$ Higgs production in the
overall gluon fusion rate. We remark that the gluon fusion and
$q\bar{q}$ annihilation production modes can be possibly disentangled
at the LHC by studying Higgs candidate
kinematics~\cite{Bishara:2016jga, Soreq:2016rae, Bonner:2016sdg},
while the $q\bar{q}$ decay can also possibly be probed at $e^+ e^-$
Higgs factories~\cite{Gao:2016jcm}.
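The structure of the $\mu_{gg}$ expression in~\eqnref{signalstrength} can be illustrated with a minimal numerical sketch. Here the SM branching fraction Br$(h \to c\bar{c}) \approx 2.9\%$ is an assumed input, the small SM $c\bar{c} \to h$ production rate is neglected, and all other partial widths are held at their SM values:

```python
def mu_gg(kappa, br_qq_sm, prod_ratio=0.0):
    """Signal strength: kappa^2-scaled q qbar production relative to gluon
    fusion, times the branching-fraction dilution from the enlarged total
    width (all other partial widths held at SM values)."""
    production = 1.0 + kappa**2 * prod_ratio      # prod_ratio = sigma_qq^SM / sigma_gg^SM
    width_dilution = 1.0 + (kappa**2 - 1.0) * br_qq_sm
    return production / width_dilution

# Example: an enhanced charm Yukawa, neglecting the tiny SM c cbar -> h rate
# and taking Br(h -> c cbar)_SM ~ 2.9% (assumed input).
mu_cc = mu_gg(kappa=5.0, br_qq_sm=0.029)   # ~0.6, at the edge of the 40% criterion
```

For $\kappa_c = 5$ the width dilution alone drives $\mu_{gg}$ to roughly 0.6, illustrating why this value sits at the boundary of the 40\% consistency requirement.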
Solely turning on large Yukawa couplings for light quarks is hence
strongly constrained by combined coupling fits using current Higgs
data, since the increased production rates from the Yukawa-mediated
processes are not enough to counterbalance the rate loss in measured
Higgs modes such as $h \to 4 \ell$ and $h \to \gamma \gamma$. For
example, if we require that $\mu_{gg} (h \to 4\ell)$ is within 40\% of
the SM signal strength, consistent with the latest 13~TeV Higgs
measurement results~\cite{CMS:2016ilx} and only allow one light quark
Yukawa coupling to deviate at a time, then we derive the following
constraints:
\begin{equation}
\kappa_d < 1270, \quad \kappa_u < 2860, \quad \kappa_s < 53, \quad
\kappa_c < 5 \ ,
\label{eqn:naive}
\end{equation}
which can be converted to
\begin{equation}
\bar{\kappa}_d < 1.24, \quad \bar{\kappa}_u < 1.34, \quad
\bar{\kappa}_s < 1.03, \quad \bar{\kappa}_c < 1.14 \ ,
\end{equation}
where we have fixed $\sigma_{gg} = 48.58$ pb~\cite{Anastasiou:2014vaa,
Anastasiou:2016cez} using $m_H = 125$~GeV for both the SM and NP
rates and only considered the additional contribution from $q\bar{q}$
annihilation. These ad-hoc constraints are only presented to
demonstrate the naive sensitivity to light quark Yukawa couplings from
a 1-parameter test, where all other SM couplings are held fixed. We
note that the intrinsic contribution from light quarks affecting gluon
fusion is suppressed by the loop function dependent on the quark
masses. Moreover, new colored particles in the gluon fusion loop
(see, e.g., Ref.~\cite{Kumar:2012ww} and references therein) can add
to the $s$-channel $q\bar{q}$ Higgs production channel to compensate
for the drop in the $h \to 4 \ell$ branching fraction. In principle,
an enhanced coupling of the Higgs boson to electroweak vectors can
also relieve the bounds above, although concrete possibilities are
limited~\cite{Logan:2015xpa}. A global analysis performed in
Ref.~\cite{Kagan:2014ila}, allowing all Higgs couplings to vary, has
derived the constraints $\bar{\kappa}_d < 1.4$, $\bar{\kappa}_u <
1.3$, $\bar{\kappa}_s < 1.4$, and $\bar{\kappa}_c < 1.4$.
We note that the Tevatron also provides constraints on enhanced light
quark Yukawa couplings given the nature of the machine as a
proton--anti-proton collider. The primary search channel at the
Tevatron sensitive to $s$-channel Higgs production was the $WW^*$
decay mode~\cite{Aaltonen:2013ioz}, which constrained $\sigma (gg
\rightarrow H) \times$ Br$(H \to WW^*)$ at $m_H = 125$~GeV to be less
than 0.77~pb. If $\sigma (gg \to H)$ and Br$(H \to WW^*)$ are held
fixed, then this constrains the extra production from $\sigma
(q\bar{q} \to H)$ at a level roughly a factor of 2-10 weaker than the
naive estimate in~\eqnref{naive}, with the strongest constraints for
$\kappa_d$ and $\kappa_u$; again, this is an inconsistent treatment of
the bounds unless new physics is introduced to keep Br$(H \to W^+
W^-)$ fixed. In a similar manner, double Higgs production rates are
also increased, but their impact at the LHC is already excluded in a
model independent fashion from the total Higgs width measurement
discussed earlier.
Finally, probing enhanced quark Yukawa couplings using the exclusive
charge asymmetry measurement discussed in~\secref{collider} also
requires an increased $h \to WW^*$ partial width in order to
maintain a signal rate comparable to the SM expectation.
Nevertheless, the measurement of the charge asymmetry provides an
important consistency test of the SM Higgs boson. Moreover, the
$0.4\%$ statistical precision afforded by the proposed $W^\pm h \to
\ell^\pm \ell^\pm jj + \slashed{E}_T$ measurement establishes a new
channel to constrain and evaluate parton distribution functions and
their uncertainties if light quark Yukawa deviations are absent.
\section{Conclusions}
\label{sec:conclusions}
In this work, we have explored the prospects for measuring light quark
Yukawa couplings at the LHC via the charge asymmetry of $W^\pm h$
production. From the limited set of new physics operators considered,
the net effect of enhanced light quark Yukawa couplings was to rapidly
increase the total Higgs width, which can be tested in a
model-independent fashion at the LHC in the high resolution $\gamma
\gamma$ and $4\ell$ final states. Enhanced light quark Yukawa
couplings consistent with the direct Higgs width constraint predict
inclusive charge asymmetries that deviate significantly from the SM
expectation.
We hence motivated the possible measurement of the $W^\pm h$ charge
asymmetry in the exclusive mode $W^\pm h \to \ell^\pm \ell^\pm
\slashed{E}_T$ + 1 or 2 jets, which is a clean same-sign dilepton
final state that inherits the same charge asymmetry as the original
Higgs production process. After accounting for the main backgrounds
from electroweak diboson production, we estimate that the individual
$++$ and $--$ final states reach statistical significances of
$6.5\sigma$ and $4.9\sigma$, respectively, with 300 fb$^{-1}$ of
14~TeV LHC data. Even though the Higgs
boson is not fully reconstructed in this decay, the clean same-sign
dilepton signature can be readily extrapolated to the expected $3$
ab$^{-1}$ high luminosity run, enabling a statistical precision on the
exclusive charge asymmetry of 0.4\%. If the measured asymmetry
deviates from the SM expectation, then a likely interpretation would
be an enhanced light quark Yukawa coupling counterbalanced by additional
new physics effects that preserve the rough consistency of current Higgs
data with the SM expectation. A future deviation would favor enhanced down
and up quark Yukawas if the observed charge asymmetry exceeds the SM
expectation, while strange and charm quark Yukawas would be
responsible if the charge asymmetry were smaller.
The $W^\pm h$ charge asymmetry hence provides an interesting and new
consistency test for Higgs measurements. We conclude by remarking
that although we focused on the prospects for testing light quark
Yukawa coupling deviations using the charge asymmetry, this
measurement also probes the Higgs coupling to $W^\pm$ bosons directly,
which adds a new ingredient in combined coupling fits for testing
custodial symmetry.
\section*{Acknowledgments}
\label{sec:acknowledgments}
The author is grateful to Wolfgang Altmannshofer, Fady Bishara,
Joachim Brod, Maikel de Vries, Stefan Kallweit, Joachim Kopp, Andreas
von Manteuffel, Gilad Perez, Yotam Soreq, and Nhan Tran, for useful
discussions. This research is supported by the Cluster of Excellence
Precision Physics, Fundamental Interactions and Structure of Matter
(PRISMA-EXC 1098), by the ERC Advanced Grant EFT4LHC of the European
Research Council, and by the Mainz Institute for Theoretical Physics.
The author is grateful to the Universit\`{a} di Napoli Federico II and
INFN for its hospitality and its partial support during the completion
of this work. The author would also like to acknowledge the
hospitality of the respective theory groups from the IBS CTPU in
Korea, the Technische Universit\"{a}t Dortmund, the Technion, and the
Tel Aviv University, where parts of this work were completed, as well
as thank the organizers and participants of the Higgs Effective Field
Theories 2016 (HEFT2016) in Chicago for their stimulating comments and
discussion.
\bibliographystyle{apsrev4-1}
\section{Introduction}
The fact that some number of neutron star (NS) binaries merge had been widely anticipated in the scientific community before GW170817. This expectation was mostly based on observations of double NS systems containing at least one pulsar. Precise pulsar timing allows the determination of various orbital parameters, which led to the conclusion that a fraction of NS binaries would collide as a consequence of gravitational-wave (GW) emission within less than a Hubble time. Only a minority of scientists expected the observation of a NS merger in the year 2017, even after several detections of black-hole binaries had succeeded in the previous years\footnote{This was at least the perception of the author of this article.}. Many researchers considered a NS merger much closer than 100~Mpc to be very unlikely. However, Nature was very kind and placed a NS merger at about 40~Mpc from Earth right during the second LIGO/Virgo observing run~\cite{Abbott2017}. The analysis of the GW signal revealed the binary parameters of the system. Specifically, the so-called chirp mass~$\mathcal{M}=(M_1 M_2)^{3/5}(M_1+M_2)^{-1/5}$ was very accurately measured, while the binary mass ratio $q=M_1/M_2$ was only roughly determined to be in the range between 0.7 and 1. Combining this information implies that GW170817 was very likely a merger of two NSs unless some exotic mechanism could produce stellar black holes with masses smaller than the maximum mass of NSs. The total mass of GW170817 was found to be $M_\mathrm{tot}=2.74^{+0.04}_{-0.01}~M_\odot$, consistent with typical masses observed in NS binaries~\cite{Lattimer2012}.
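A short sketch makes the mass bookkeeping explicit, assuming the quoted $M_\mathrm{tot} = 2.74~M_\odot$ and $0.7 \leq q \leq 1$ (the chirp-mass value here serves only as a consistency cross-check, not a fit):

```python
def component_masses(m_tot, q):
    """Split a total binary mass into components for mass ratio q = M1/M2 <= 1."""
    m2 = m_tot / (1.0 + q)      # heavier component
    return q * m2, m2           # (lighter, heavier)

def chirp_mass(m1, m2):
    """Chirp mass M_chirp = (M1 M2)^(3/5) (M1 + M2)^(-1/5)."""
    return (m1 * m2)**0.6 / (m1 + m2)**0.2

# GW170817: M_tot ~ 2.74 Msun, q between ~0.7 and 1.
m1_lo, m2_lo = component_masses(2.74, 0.7)   # ~1.13 and ~1.61 Msun
m1_eq, m2_eq = component_masses(2.74, 1.0)   # 1.37 Msun each
```

Both components remain well below the maximum NS mass of roughly $2~M_\odot$ over the full allowed mass-ratio range, consistent with the interpretation of GW170817 as a binary NS merger.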
The localization of the event through the GW signal also made it possible to find an electromagnetic counterpart in the optical/infrared about half a day after the merger (electromagnetic radiation was observed throughout the full electromagnetic spectrum from radio to gamma rays~\cite{Abbott2017b}). The electromagnetic emission at optical/infrared wavelengths evolved on the time scale of days and was observable for more than a week, e.g.~\cite{Abbott2017b,Villar2017}. The importance of these measurements lies in the fact that they provide the best evidence to date that the ejecta of NS mergers undergo the rapid neutron capture process (r-process) to form heavy elements. This conclusion is based on the interpretation of the light curve, which is compatible with matter being heated by the radioactive decays in the aftermath of the r-process. Although estimates of ejecta properties like their mass and outflow velocities depend on the underlying modeling (see~\cite{Cote2018} for a compilation of different results), the follow-up observations of GW170817 combined with approximate rate estimates indicate that NS mergers play an important role for the enrichment of the Universe by r-process elements. This is a remarkable development considering that only in the last $\sim 10$ years NS mergers have received increasing attention as potential sources of heavy r-process elements~\cite{Metzger2010,Roberts2011,Goriely2011}, with some earlier works starting in the 1970s~\cite{Lattimer1974,Eichler1989,Ruffert1997,Freiburghaus1999,Rosswog1999}.
Another important development after GW170817 is the set of observational constraints on the incompletely known equation of state (EoS) of high-density matter, which uniquely determines stellar parameters of NSs like their radii and their tidal deformability, i.e. the response to an external tidal field. Two types of constraints have been presented. 1) Considering solely the GW signal, the measurement of finite-size effects during the late inspiral phase constrains the tidal deformability from above. While the precise limits depend somewhat on assumptions made for the analysis, the GW signal establishes a robust upper bound on NS radii of about 13.5~km for typical NS masses and thus excludes very stiff nuclear matter. 2) A second type of constraint makes use of the additional information which is provided by the electromagnetic signature of GW170817. Based on the multi-messenger observations of GW170817 a variety of such studies has been put forward~\cite{Margalit2017,Bauswein2017,Shibata2017,Rezzolla2018,Radice2018,Ruiz2018,Coughlin2018,Radice2019,Koeppel2019,Kiuchi2019}.
Some of these EoS constraints are more tentative, while others are more robust and include a proper error analysis like the one described below. Interestingly, the multi-messenger interpretation of GW170817 discussed in Sect.~\ref{sec:main} provides a lower bound on NS radii and is thus complementary to the limits given by finite-size effects during the inspiral. Finally, both types of constraints have led to a number of studies discussing which specific EoS models are compatible with the new observations, e.g.~\cite{Fattoyev2018}. We emphasize that because of the limited space we cannot mention all constraints and studies in the context of GW170817 and we can only provide an incomplete list of the relevant literature.
All these different considerations may only be a first indication of the potential of future multi-messenger observations which can be expected in the next years.
\section{EoS constraints from GW170817}\label{sec:main}
Here we describe how the multi-messenger observations of GW170817 provide a lower limit on NS radii and the nuclear EoS with a minimum of assumptions. More details of this idea can be found in~\cite{Bauswein2017}. The constraint is based on three main arguments: (a) the assumption that GW170817 did not result in a direct gravitational collapse of the merger remnant, (b) an empirical relation for the threshold binary mass for direct black-hole formation, which depends on NS radii and the maximum mass of nonrotating NSs, and (c) causality, which implies certain constraints on NS properties. We discuss some details before combining the arguments to derive NS radius constraints.
\subsection{No direct collapse in GW170817}
Different groups observed the electromagnetic counterpart of GW170817 in the optical and infrared, which evolved on the time scale of days (e.g.~\cite{Villar2017}). The light curve is compatible with ejecta from a NS merger, which is heated by radioactive decays during the r-process. Fitting the observed luminosity to theoretical emission models allows one to infer the ejecta masses and other properties of the outflow~\cite{Metzger2017a}. There is general agreement between the different models that the emission is best explained by the ejection of a few hundredths of a solar mass of matter undergoing the r-process. There is no detailed information about the composition of the ejecta available (but there may be evidence of spectral features~\cite{Smartt2017}). The emission points to different components of the ejecta, which is consistent with theoretical expectations that there are dynamical ejecta getting unbound within the first milliseconds after merging, followed by a slower component which emerges on somewhat longer time scales by viscous, nuclear and neutrino processes. Although details remain model-dependent, the overall agreement with theoretical predictions provides strong evidence that the r-process took place in the outflow of GW170817 and that a few $0.01~M_\odot$ became gravitationally unbound in this event.
\begin{figure}
\includegraphics[width=0.8\columnwidth]{f3a-eps-converted-to.pdf}
\caption{Dynamical ejecta mass as function of the radius $R_{1.35}$ of a 1.35~$M_\odot$ NS for different EoS models. All data points refer to merger simulations with two equal-mass NSs of 1.35~$M_\odot$, thus a system with a total mass comparable to that of GW170817. The open circles indicate simulations which lead to a prompt collapse of the merger remnant. Red symbols stand for fully temperature-dependent EoS models, whereas black and blue symbols mark data from simulations with an approximate treatment of thermal effects. See~\cite{Bauswein2013a} for details. Figure taken from~\cite{Bauswein2013a}. See~\cite{Bauswein2013a} for a similar plot for 1.2-1.5~$M_\odot$ mergers, which generally yield higher ejecta masses.}
\label{fig:mejr135}
\end{figure}
This amount of ejecta is at the high end of what is expected from hydrodynamical simulations~\cite{Hotokezaka2013,Bauswein2013a,Metzger2014,Just2015}, keeping in mind that one cannot easily distinguish dynamical and secular ejecta, which can be comparable in mass (see e.g. the compilation by~\cite{Wu2016}). One should also appreciate that determining the dynamical or secular ejecta mass in simulations comes with an uncertainty of at least a few tens of per cent. From simulations it is known that the ejecta mass depends on the binary masses and the incompletely known high-density EoS~\cite{Hotokezaka2013,Bauswein2013a} (see Figs.~\ref{fig:mejr135} and~\ref{fig:mtotq}). Figure~\ref{fig:mejr135} shows the dynamical ejecta mass for 1.35-1.35~$M_\odot$ mergers for many different model EoSs. The different models are characterized by the radius of a 1.35~$M_\odot$ NS described by the different EoSs, where soft EoSs yield small radii and stiff nuclear matter results in larger radii. It is interesting to note that the amount of dynamical ejecta critically depends on the immediate outcome of the merger. If the merger leads to a prompt gravitational collapse on a dynamical time scale, the ejecta mass is significantly reduced compared to the case where a NS merger remnant forms which survives for at least a few milliseconds or longer (see circles in Fig.~\ref{fig:mejr135} and Fig.~\ref{fig:mtotq}). Also asymmetric binaries with a mass ratio different from unity lead to relatively small ejecta masses if the remnant undergoes a prompt collapse. Fig.~\ref{fig:mtotq} shows in more detail how binary parameters affect the amount of dynamical ejecta. Mass ejection is generally enhanced for asymmetric systems because tidal ejection of matter is stronger. The dynamical ejecta is reduced for low-mass systems, likely because the merger proceeds in a less violent way compared to binaries with higher total binary mass which do not undergo a prompt collapse.
\begin{figure}
\includegraphics[width=0.8\columnwidth]{f7b-eps-converted-to.pdf}
\caption{Dynamical ejecta mass (color-coded in solar masses) as function of the binary mass ratio $q=M_1/M_2$ and the total binary mass $M_\mathrm{tot}=M_1+M_2$ for the DD2 model EoS~\cite{Hempel2012}. The open circles indicate simulations which lead to a prompt collapse of the merger remnant. Crosses mark simulations resulting in a NS remnant. Figure taken from~\cite{Bauswein2013a}.}
\label{fig:mtotq}
\end{figure}
For a given EoS the type of merger remnant (black hole or NS) critically depends on the total binary mass. If the total binary mass $M_\mathrm{tot}$ exceeds a certain threshold mass $M_\mathrm{thres}$, the remnant cannot be stabilized against the gravitational collapse and directly forms a black hole. The mass ratio of the binary for a fixed total mass has only a very weak impact, which has been shown for a limited set of simulations~\cite{Bauswein2013,Bauswein2017}. Obviously, the threshold binary mass $M_\mathrm{thres}$ for prompt black-hole formation depends sensitively on the EoS because the exact properties of high-density matter like its stiffness determine how much mass can be supported against the gravitational attraction. We discuss this point in more detail below. The reduction of the dynamical ejecta by a prompt collapse is generically found for all tested EoSs.
In essence, the relatively high ejecta mass inferred from GW170817 indicates that there was no direct black-hole formation in this event. We will use this conclusion as a working hypothesis to derive NS radius constraints. Although there is no final proof for this assumption, it appears reasonable, as other arguments also point towards no prompt black-hole formation. For instance, Ref.~\cite{Ruiz2017} showed in general-relativistic magneto-hydrodynamical simulations that jet formation only takes place if the NS remnant undergoes a delayed gravitational collapse, while prompt black-hole formation may not lead to a beamed relativistic outflow. Given the observational indications for a relativistic jet in GW170817, this may provide another piece of evidence for no direct collapse of the merger remnant in GW170817.
The main conclusion from the discussion above is that it is reasonable to assume that there was no direct collapse in GW170817. Therefore, the measured total binary mass $M_\mathrm{tot}^\mathrm{measured}=2.74^{+0.04}_{-0.01}~M_\odot$ is smaller than the threshold binary mass $M_\mathrm{thres}$ for prompt black-hole formation.
\subsection{Empirical relation for $M_\mathrm{thres}$}
By means of a large set of hydrodynamical merger simulations (see~\cite{Bauswein2013}) it is possible to infer an approximate empirical relation describing the EoS dependence of the threshold binary mass for prompt black-hole formation. To this end, we performed simulations for different total binary masses and determined the outcome. In this way one can identify $M_\mathrm{thres}$ for every model EoS and relate $M_\mathrm{thres}$ to certain properties of the EoS, e.g. stellar parameters of nonrotating NSs. It is found that generally $M_\mathrm{thres}$ increases with the maximum mass $M_\mathrm{max}$ of nonrotating NSs and with radii of nonrotating NSs. For instance, the threshold binary mass can be described to good accuracy by
\begin{equation}\label{eq:mthrrmax}
M_\mathrm{thres}=\left(-3.38\frac{G\,M_\mathrm{max}}{c^2\,R_\mathrm{max}} +2.43 \right)\,M_\mathrm{max}
\end{equation}
or by
\begin{equation}\label{eq:mthrr16}
M_\mathrm{thres}=\left(-3.606\frac{G\,M_\mathrm{max}}{c^2\,R_{1.6}} +2.38 \right)\,M_\mathrm{max}
\end{equation}
with $R_\mathrm{max}$ being the radius of the maximum-mass configuration and $R_{1.6}$ being the radius of a 1.6~$M_\odot$ NS ($G$ and $c$ are the gravitational constant and the speed of light, respectively). These equations represent empirical relations based on hydrodynamical models for a representative set of EoSs, which should be accurate to within a few per cent, with uncertainties resulting from the finite spacing of merger configurations in the space of total binary masses and from the underlying physical and numerical model. Note the generally good agreement with the recent results from~\cite{Koeppel2019} for a smaller set of EoSs using grid-based simulations in full General Relativity. These general dependencies have also been corroborated by semi-analytical models based on stellar equilibrium configurations~\cite{Bauswein2017a}.
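As a quick numerical illustration of how these empirical relations are used (the stellar parameters below are invented round numbers of the right order of magnitude, not values of a specific EoS model), one can evaluate both relations and check that they agree to within a few per cent:

```python
# Evaluate the empirical threshold-mass relations for an illustrative EoS
# with M_max = 2.0 Msun, R_max = 11 km and R_1.6 = 12 km (invented values).
G = 6.674e-11          # gravitational constant [m^3 kg^-1 s^-2]
c = 2.998e8            # speed of light [m/s]
MSUN = 1.989e30        # solar mass [kg]
RSUN_KM = G * MSUN / c**2 / 1e3   # G*Msun/c^2 expressed in km (~1.48 km)

m_max, r_max, r_16 = 2.0, 11.0, 12.0   # [Msun], [km], [km]

# First relation: M_thres from the compactness G*M_max/(c^2 R_max)
mthres_rmax = (-3.38 * m_max * RSUN_KM / r_max + 2.43) * m_max
# Second relation: M_thres from G*M_max/(c^2 R_1.6)
mthres_r16 = (-3.606 * m_max * RSUN_KM / r_16 + 2.38) * m_max
```

For these parameters both relations give a threshold mass of about 3~$M_\odot$, comfortably above the measured total binary mass of GW170817.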
\begin{figure}
\includegraphics[width=0.8\columnwidth]{MRcausal.pdf}
\caption{Example for a mass-radius relation of an artificially modified EoS at higher densities. The original model is shown by the black curve. This model EoS is modified by assuming that the EoS becomes maximally stiff beyond the central density of a 1.6~$M_\odot$ NS, i.e. that the speed of sound equals the speed of light. This artificial modification alters the mass-radius relation beyond a mass of 1.6~$M_\odot$ and leads to the red dashed curve. The resulting maximum mass (red cross) is approximately the highest possible maximum mass that an EoS with the given radius $R_{1.6}$ at 1.6~$M_\odot$ (red dot) could have.}
\label{fig:causalsfho}
\end{figure}
\begin{figure}
\includegraphics[width=0.8\columnwidth]{MmaxR16_forpro.pdf}
\caption{Upper limit $M_\mathrm{max}^\mathrm{up}$ on the maximum mass of nonrotating NSs as function of the radius $R_{1.6}$ of a NS with 1.6~$M_\odot$. Data points are generated by modifying the high-density part of a large sample of microphysical EoS models as illustrated by the example in Fig.~\ref{fig:causalsfho}. The mass of the red cross and the radius of the red dot in Fig.~\ref{fig:causalsfho} constitute an individual point in this diagram. The solid line provides a safe upper bound, which is employed for NS radius constraints.}
\label{fig:mmaxr16}
\end{figure}
\subsection{Constraints from causality}
It is well known that causality places constraints on stellar properties of NSs by requiring that the speed of sound $c_s$ of an EoS cannot exceed the speed of light $c$. This limits the stiffness of the EoS, and leads to $M_\mathrm{max}<\frac{1}{2.82}\frac{c^2}{G}R_\mathrm{max}$, see e.g.~\cite{Lattimer2016} for an extended discussion. Similarly, one can also find an upper limit for $M_\mathrm{max}$ for a given radius $R_{1.6}$. We consider a sample of 23 EoSs and artificially modify those EoSs in the high-density regime. Above the central density of a 1.6~$M_\odot$ NS we replace the original EoS by the maximally stiff EoS, i.e. an EoS with $c_s=\sqrt{\frac{dp}{de}}=c$. Solving the stellar structure equations for these modified EoSs one finds the highest maximum mass $M_\mathrm{max}^\mathrm{up}$ which is compatible with a given $R_{1.6}$.
An example is given in Fig.~\ref{fig:causalsfho}. The solid black curve shows the mass-radius relation for the original SFHO EoS~\cite{Steiner2013}. The red dashed line displays the stellar configurations for the modified EoS with the maximally possible stiffening beyond the central density of a 1.6~$M_\odot$ NS. In this particular example $M_\mathrm{max}$ is increased by about 0.2~$M_\odot$ if the EoS was maximally stiff beyond the central density of a 1.6~$M_\odot$ NS.
Repeating this procedure of modifying the high-density regime for a large number of possible EoSs yields Fig.~\ref{fig:mmaxr16}. The figure shows $M_\mathrm{max}^\mathrm{up}$, which is the maximum mass of the artificially stiffened EoS, as function of $R_{1.6}$. For the SFHO EoS $M_\mathrm{max}^\mathrm{up}$ corresponds to the red cross in Fig.~\ref{fig:causalsfho}, while $R_{1.6}$ is given by the radius of the red point. The solid line is given by
\begin{equation}\label{eq:causal16}
M_\mathrm{max}^\mathrm{up}=\frac{1}{3.1}\frac{c^2}{G}R_{1.6}
\end{equation}
and provides a robust upper limit on $M_\mathrm{max}$ for a given $R_{1.6}$. We expect only minor changes if for the same $R_{1.6}$ the EoS was modified at lower densities.
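A minimal sketch of how this causal bound translates a radius into a mass limit (the value $R_{1.6}=12$~km is only an example, and the physical constants are standard values):

```python
G = 6.674e-11; c = 2.998e8; MSUN = 1.989e30
RSUN_KM = G * MSUN / c**2 / 1e3   # G*Msun/c^2 in km

def mmax_up(r16_km):
    """Causal upper limit on M_max [Msun] for a given R_1.6 [km]."""
    return r16_km / (3.1 * RSUN_KM)

m_up_12 = mmax_up(12.0)   # ~2.62 Msun for R_1.6 = 12 km
```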
\subsection{Radius constraints}
We can now combine the arguments of (a), (b) and (c) to derive lower limits for $R_\mathrm{max}$ and $R_{1.6}$. We find
\begin{eqnarray}
M_\mathrm{tot}^\mathrm{measured}&<&\left(-3.38\frac{G\,M_\mathrm{max}}{c^2\,R_\mathrm{max}} +2.43 \right)\,M_\mathrm{max}\\
&<&\left( -\frac{3.38}{2.82} +2.43\right) \frac{1}{2.82}\frac{c^2}{G}R_\mathrm{max}.
\end{eqnarray}
In the first line we assume that the measured total binary mass $M_\mathrm{tot}^\mathrm{measured}$ of GW170817 is smaller than the threshold mass for prompt black-hole formation $M_\mathrm{thres}$, which is given by the empirical relation Eq.~(\ref{eq:mthrrmax}). Then we insert the causality constraint to eliminate $M_\mathrm{max}$ and find
\begin{equation}\label{eq:radmax}
R_\mathrm{max}> 2.29\,\frac{G}{c^2} \,M_\mathrm{tot}^\mathrm{measured}.
\end{equation}
Similarly, by using Eq.~(\ref{eq:mthrr16}) and Eq.~(\ref{eq:causal16}), one obtains
\begin{equation}\label{eq:rad16}
R_{1.6}> 2.55\,\frac{G}{c^2} \,M_\mathrm{tot}^\mathrm{measured}.
\end{equation}
It is very conservative to employ the limits imposed by causality because the true EoS is unlikely to be that stiff. Also, one may assume that the merger remnant survived for at least 10~ms before collapsing to a black hole. This effectively means that the binary mass of GW170817 is at least $\Delta M\approx 0.1~M_\odot$ smaller than $M_\mathrm{thres}$, which can be included in Eqs.~(\ref{eq:radmax}) and~(\ref{eq:rad16}). This finally yields $R_\mathrm{max}> 2.29\,\frac{G}{c^2} \,(M_\mathrm{tot}^\mathrm{measured}+\Delta M)=9.6$~km and $R_{1.6}> 2.55\,\frac{G}{c^2} \,(M_\mathrm{tot}^\mathrm{measured}+\Delta M)=10.7$~km. A detailed discussion of errors can be found in~\cite{Bauswein2017}. See also~\cite{Koeppel2019} for radius constraints resulting from a very similar line of arguments. Figure~\ref{fig:RM} compares our constraints with a large set of EoSs available in the literature. Based on these arguments some of the very soft EoSs can be ruled out, which complements the upper limits on NS radii arising from the measurement of finite-size effects during the late inspiral phase.
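The numbers quoted above follow from elementary arithmetic, which can be reproduced as follows (a verification sketch written for this exposition, using standard values of the physical constants):

```python
G = 6.674e-11; c = 2.998e8; MSUN = 1.989e30
RSUN_KM = G * MSUN / c**2 / 1e3   # G*Msun/c^2 in km

# Coefficients obtained by combining the empirical relations with causality:
coef_rmax = 1.0 / ((2.43 - 3.38 / 2.82) / 2.82)   # the 2.29 in the R_max bound
coef_r16 = 1.0 / ((2.38 - 3.606 / 3.1) / 3.1)     # the 2.55 in the R_1.6 bound

m_tot = 2.74 + 0.1   # measured total mass plus Delta M [Msun]
rmax_min = coef_rmax * RSUN_KM * m_tot   # lower limit on R_max, ~9.6 km
r16_min = coef_r16 * RSUN_KM * m_tot     # lower limit on R_1.6, ~10.7 km
```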
\begin{figure}
\includegraphics[width=0.8\columnwidth]{f2.pdf}
\caption{Mass-radius relations of different EoS models available in the literature. The red areas provide a very conservative lower bound on NS radii. The cyan area displays a more realistic constraint, which excludes some very soft EoSs~\cite{Bauswein2017}. The thin horizontal lines indicate the most massive accurately measured NS mass~\cite{Antoniadis2013}. The thick dashed curve marks the causality limit. Figure taken from~\cite{Bauswein2017}.}
\label{fig:RM}
\end{figure}
\subsection{Discussion}
We stress a few additional aspects. First, any new future measurement can be employed to strengthen these radius constraints if evidence for no direct black-hole formation is found. For higher binary masses the lower limits on NS radii increase, as can be easily seen from Eqs.~(\ref{eq:radmax}) and~(\ref{eq:rad16}), which can be directly applied to any new measurement. Second, if evidence for a prompt collapse of the merger remnant is found in some future event, the measured binary mass exceeds $M_\mathrm{thres}$. Following a similar line of arguments, one can then derive an upper limit on NS radii and $M_\mathrm{max}$ (see~\cite{Bauswein2017} for details). Third, these limits on NS radii rely only on the mass measurement and the interpretation of the electromagnetic signal but not on the extraction of finite-size effects from the GW signal. This implies that also future events with a low signal-to-noise ratio can be employed to improve these constraints if the identification of an electromagnetic counterpart allows one to infer the merger outcome. We anticipate that as more events are observed, our understanding of which of those mergers led to a prompt collapse and which did not will increase and allow a robust distinction. Fourth, the errors of the radius limits depend only on the errors of the mass measurement and the accuracy of the empirical relations for $M_\mathrm{thres}$, which can be easily propagated through the derivation (see~\cite{Bauswein2017}). Fifth, we emphasize that the use of the causality argument is very conservative since the speed of sound of the true EoS will likely be smaller than the speed of light. Also, the empirical relations Eqs.~(\ref{eq:mthrrmax}) and~(\ref{eq:mthrr16}) are derived for equal-mass mergers. Asymmetric mergers lead to threshold masses which are equal to or slightly smaller than those of equal-mass systems. Hence, adopting the empirical relations for symmetric mergers is conservative as well.
For a comparison to the constraints obtained from finite-size effects during the inspiral, it is helpful to convert the above radius constraints to limits on the tidal deformability. The tidal deformability is defined by $\Lambda=\frac{2}{3}k_2\left(\frac{c^2\,R}{G\,M}\right)^5$ with the tidal Love number $k_2$, the NS mass $M$ and the NS radius $R$ (see~\cite{Hinderer2008}). The combined tidal deformability $\tilde{\Lambda}=\frac{16}{13}\frac{(M_1+12 M_2) M_1^4\Lambda_1 + (M_2+12 M_1) M_2^4\Lambda_2 }{(M_1+M_2)^5}$ with the masses $M_{1/2}$ and tidal deformabilities $\Lambda_{1/2}$ of the individual stars is the quantity which is directly inferred from the GW signal~\cite{Abbott2017}. For equal-mass binaries the combined tidal deformability equals the tidal deformability of the individual stars. For not too asymmetric systems $\tilde{\Lambda}$ is very close to the combined tidal deformability of the equal-mass system with the same chirp mass~$\mathcal{M}=(M_1 M_2)^{3/5}(M_1+M_2)^{-1/5}$. $\tilde{\Lambda}$ was found to be smaller than $\sim 800$ with the precise value depending on priors and other assumptions~\cite{Abbott2017,TheLIGOScientificCollaboration2018,De2018}.
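The definitions above are straightforward to implement; in particular, for an equal-mass binary $\tilde{\Lambda}$ reduces exactly to the deformability of the individual stars, as the following sketch confirms (the value $\Lambda=300$ is an arbitrary illustration):

```python
def lam_tilde(m1, m2, l1, l2):
    """Combined tidal deformability as defined in the text."""
    return 16.0 / 13.0 * ((m1 + 12.0 * m2) * m1**4 * l1
                          + (m2 + 12.0 * m1) * m2**4 * l2) / (m1 + m2)**5

def chirp_mass(m1, m2):
    """Chirp mass M_c = (m1*m2)^(3/5) * (m1+m2)^(-1/5)."""
    return (m1 * m2)**0.6 * (m1 + m2)**(-0.2)

lt_equal = lam_tilde(1.37, 1.37, 300.0, 300.0)   # reduces to 300 (up to rounding)
mc = chirp_mass(1.37, 1.37)                      # ~1.19 Msun
```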
Figure~\ref{fig:lam} displays the tidal deformability of a 1.37~$M_\odot$ NS as function of $R_{1.6}$ for a large set of candidate EoSs available in the literature. In this plot we relate stellar parameters of nonrotating stars of different masses. The relatively tight relation allows an approximate conversion of the constraint on $R_{1.6}$ to a limit on the tidal deformability $\Lambda_{1.37}$. Note that for an accurate conversion a full coverage of all possible EoSs would be required, which is beyond the scope of this work. The constraint $R_{1.6}>10.7$~km derived above corresponds to $\Lambda_{1.37}>210$ as indicated by the dashed curve in Fig.~\ref{fig:lam}.
\begin{figure}
\includegraphics[width=0.8\columnwidth]{lam137r16.pdf}
\caption{Tidal deformability $\Lambda_{1.37}$ of a 1.37~$M_\odot$ NS as function of the radius $R_{1.6}$ of nonrotating NSs with a mass of 1.6~$M_\odot$ (crosses). The solid curve is a quadratic least-squares fit to the data. The red dashed curve indicates how the radius constraint of $R_{1.6}>10.7$~km discussed in the main text is converted to a limit on the tidal deformability.}
\label{fig:lam}
\end{figure}
The lower limit of $\Lambda_{1.37}\gtrsim210$ appears weaker than $\Lambda_{1.37}\gtrsim400$ from~\cite{Radice2018} (revised to $\Lambda_{1.37}\gtrsim300$ in~\cite{Radice2019}), who followed a similar argumentation and observation, namely that early black-hole formation would result in a dimmer electromagnetic counterpart than that of GW170817. The different limits on $\Lambda$ are explained by the fact that Refs.~\cite{Radice2018,Radice2019} consider only 4 different EoS models to determine the boundary between dim and bright electromagnetic counterparts. Hence, in~\cite{Radice2018,Radice2019} the limit on $\Lambda_{1.37}$ is only coarsely resolved and in fact leads to an overestimation of the lower bound on $\Lambda_{1.37}$ (see e.g. the simulation for a 1.37-1.37~$M_\odot$ merger with the SLy4 EoS with $\Lambda_{1.37}\approx 338$ in~\cite{Dietrich2017a} leading to significant mass ejection and thus a presumably bright electromagnetic counterpart compatible with GW170817, or, similarly, a bright counterpart may be expected for a 1.4-1.4~$M_\odot$ merger with the APR4 EoS with $\Lambda_{1.37}\approx 281$~\cite{Hotokezaka2013} having a total mass comparable to that of GW170817). See also the discussion in~\cite{Kiuchi2019}. We thus conclude that the current data provide a lower limit of only $\Lambda\gtrsim 210$, which is found if the full EoS dependence is considered.
We also refer to the studies in~\cite{Wang2018a} and~\cite{Coughlin2018}, which find lower limits of $\Lambda_{1.37}\gtrsim309$ and $\Lambda_{1.37}\gtrsim279$, respectively. The comparison to these limits is not straightforward as those works involve a more sophisticated interpretation of the multi-messenger observation of GW170817.
\section{Strategy for future multi-messenger observations}
Observing time, in particular at large telescopes, is a limited resource. It is therefore important to define an observing strategy for future searches of electromagnetic counterparts after a trigger by a GW event indicates a NS merger. This strategy should specify for which events follow-up observations promise the highest scientific gain, since an extensive search for the electromagnetic counterpart may not be possible for every future trigger.
We thus discuss which potential multi-messenger events are the most promising for constraints such as the one presented above. This means we focus on the determination of the threshold binary mass $M_\mathrm{thres}$ for prompt black-hole formation because measuring $M_\mathrm{thres}$ provides crucial information about the properties of high-density matter. It can be employed to constrain NS radii (see Sect.~\ref{sec:main}) and to determine the unknown maximum mass $M_\mathrm{max}$ of NSs\footnote{If the radius of a 1.6~$M_\odot$ NS is known to some accuracy, the empirical relation for $M_\mathrm{thres}(M_\mathrm{max};R_{1.6})$ expressed by Eq.~(\ref{eq:mthrr16}) can be inverted to yield the maximum mass $M_\mathrm{max}$ of nonrotating NSs~\cite{Bauswein2013}.} if radii are known with some precision~\cite{Bauswein2013}. Moreover, the collapse behavior, i.e. the distinction between direct and delayed collapse of the remnant, may be crucial for the development of a relativistic outflow~\cite{Ruiz2017}. Hence, knowledge of $M_\mathrm{thres}$ may be important to interpret future simultaneous observations of short gamma-ray bursts and GWs. If $M_\mathrm{thres}$ is known, the binary mass measured through the GW inspiral phase reveals whether or not the merger resulted in a direct gravitational collapse.
We therefore propose here an observing strategy to determine $M_\mathrm{thres}$ through multi-messenger observations with a minimum number of follow-up searches. We assume that generally mergers with $M_\mathrm{tot}>M_\mathrm{thres}$ can be distinguished from systems with $M_\mathrm{tot}<M_\mathrm{thres}$ by their electromagnetic emission in the optical/infrared. Simulations suggest that direct black-hole formation leads to dim electromagnetic counterparts, while no collapse or a delayed collapse results in brighter optical emission.
Considering these results, there is good evidence that the total mass of GW170817 is smaller than $M_\mathrm{thres}$, hence it provides a lower limit on $M_\mathrm{thres}$ (see Sect.~\ref{sec:main}). We thus argue that an event with a total binary mass larger than that of GW170817 may be more rewarding in comparison to a trigger with a lower total binary mass. A high-mass merger may provide an upper limit on $M_\mathrm{thres}$ (in case of a relatively dim electromagnetic counterpart) or strengthen the current lower limit on $M_\mathrm{thres}$ (in case of a relatively bright electromagnetic counterpart) and thus imply a stronger bound on NS radii from below.
Note that a prompt collapse event with a potentially dimmer optical emission may be particularly challenging to observe. Thus, a trigger indicating a binary mass potentially above $M_\mathrm{thres}$ may motivate an intense search with dedicated instruments and strategies to uncover particularly dim events. A merger resulting in a delayed/no collapse, i.e. with $M_\mathrm{tot}<2.74~M_\odot$, may be easier to detect.
We also emphasize that the empirical relations for $M_\mathrm{thres}$ (Eq.~(\ref{eq:mthrr16})) and the upper limit on $M_\mathrm{max}$ for a given $R_{1.6}$ (Eq.~(\ref{eq:causal16})) in combination with the current upper limit on NS radii suggest that mergers with $M_\mathrm{tot}$ exceeding 3.73~$M_\odot$ are in any case expected to result in a prompt collapse\footnote{In detail, the current upper limit on the threshold binary mass for prompt collapse is derived as follows. $M_\mathrm{thres}$ increases with $R_{1.6}$ and with $M_\mathrm{max}$ (in the relevant regime). For a fixed $M_\mathrm{max}$ the upper bound on $R_{1.6}\lesssim R_{1.6}^\mathrm{up}=14$~km implies an upper limit on $M_\mathrm{thres}$ (Eq.~(\ref{eq:mthrr16})). Furthermore, $R_{1.6}\lesssim 14$~km yields an upper limit $M_\mathrm{max}^\mathrm{up}(R_{1.6})$ on $M_\mathrm{max}$ through the causality constraint (Eq.~(\ref{eq:causal16}), see Fig.~\ref{fig:mmaxr16}). Combining these arguments yields an upper limit on the threshold mass of
\begin{equation}
M_\mathrm{thres}<\left(-3.606\frac{G\,M^\mathrm{up}_\mathrm{max}}{c^2\,R^\mathrm{up}_{1.6}} +2.38 \right)\,M^\mathrm{up}_\mathrm{max} =\left(\frac{-3.606}{3.1} +2.38 \right)\frac{c^2 \, R_{1.6}^\mathrm{up}}{3.1\,G}=3.73~M_\odot,
\end{equation}
where we first use Eq.~(\ref{eq:causal16}) and then insert $R_{1.6}^\mathrm{up}=14$~km, which is the limit provided by the measurement of the tidal deformability in GW170817.
}. Hence, a trigger with a binary mass above 3.73~$M_\odot$ may point to a less promising target. On the one hand the outcome may be anticipated and does not provide new constraints on the nuclear EoS, on the other hand the detection may be more challenging since the electromagnetic emission may be relatively dim. We thus suggest to first focus resources on events with a binary mass between 2.74~$M_\odot$ and 3.73~$M_\odot$. The upper limit may be further reduced by future measurements providing stronger upper bounds on NS radii.
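The bound of 3.73~$M_\odot$ quoted in the footnote can be reproduced with a few lines of arithmetic (a sanity check written for this exposition, using standard values of the constants):

```python
G = 6.674e-11; c = 2.998e8; MSUN = 1.989e30
RSUN_KM = G * MSUN / c**2 / 1e3   # G*Msun/c^2 in km

r16_up = 14.0                                  # current upper radius limit [km]
mmax_up = r16_up / (3.1 * RSUN_KM)             # causal upper limit on M_max [Msun]
mthres_up = (-3.606 / 3.1 + 2.38) * mmax_up    # upper limit on M_thres, ~3.73 Msun
```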
After an upper limit on $M_\mathrm{thres}$ has been established by a future observation, the most rewarding systems will be those with a binary mass (or chirp mass) in between the lower and upper limits on $M_\mathrm{thres}$. Following this strategy, more precise estimates of the threshold binary mass will be obtained. Comparing the different future electromagnetic observations will also clarify whether or not the collapse behavior has indeed a strong impact on the properties of the optical emission as suggested by simulations.
\section{Perspective for Nuclear (Astro)physics}
The relatively early detection of a NS merger\footnote{Early detection is meant in the sense that the first measurement succeeded before the current GW instruments reached their design sensitivity.} indicates that many more events will be observed as the sensitivity of the current generation of GW detectors increases. Future measurements similar to that of GW170817 can be employed to strengthen the constraints on NS radii and the maximum mass of NSs following the procedure described above if the observation of an electromagnetic counterpart allows the inference of the immediate merger outcome. See~\cite{Bauswein2017} for a discussion of hypothetical future cases.
\begin{figure}
\includegraphics[width=0.8\columnwidth]{rhomax-fpeak_135135_wrhoiniforproceedings.pdf}
\caption{Different rest-mass density regimes probed during the inspiral and the postmerger phase as function of the dominant postmerger GW frequency $f_\mathrm{peak}$ for 1.35-1.35~$M_\odot$ mergers for different EoS models. The triangles display the maximum density during the inspiral. The crosses and asterisks show the highest density which is reached during the early postmerger evolution. Figure adopted from~\cite{Bauswein2019}.}
\label{fig:fpeakrho}
\end{figure}
\begin{figure}
\includegraphics[width=0.8\columnwidth]{fpeak-lam135_135135-2.pdf}
\caption{Dominant postmerger GW frequency $f_\mathrm{peak}$ as function of the tidal deformability $\Lambda_{1.35}$ in 1.35-1.35~$M_\odot$ mergers for different EoS models. The black symbols refer to purely baryonic EoSs. The resulting correlation between $f_\mathrm{peak}$ and $\Lambda_{1.35}$ is well described by a least-squares fit (black curve) with relatively small deviations between the fit and the underlying data (black symbols). The maximum deviation is indicated by the gray band. An EoS with a strong first-order phase transition to deconfined quark matter (green symbol) occurs as a clear outlier. Such a feature would thus provide strong evidence for the occurrence of the hadron-quark phase transition in NSs. Hyperonic EoS models are displayed by asterisks. Figure taken from~\cite{Bauswein2019}.}
\label{fig:fpeaklam}
\end{figure}
The increase of the detector sensitivity during the next observing runs also implies that the signal-to-noise ratio of an event at a distance similar to that of GW170817 will be higher. This will yield significantly improved constraints on the tidal deformability from the inspiral phase and will thus further constrain the EoS in the density regime which is probed by the inspiraling stars.
In addition, as the instruments approach design sensitivity the detection of GWs from the postmerger phase comes into reach for cases which do not result in a prompt gravitational collapse of the remnant. Measuring gravitational radiation from the postmerger remnant is complementary to the inference of finite-size effects during the inspiral phase in a twofold sense. First, measuring the dominant oscillation frequency of the postmerger NS remnant will provide an independent measurement of NS radii~\cite{Bauswein2012,Bauswein2012a,Hotokezaka2013a,Takami2014,Bernuzzi2015,Bauswein2015,Bauswein2016}. The complementarity concerns the underlying physical model for the interpretation of the data and the employed data analysis methods, e.g.~\cite{Chatziioannou2017}. Second, the postmerger remnant involves higher densities than the progenitor stars. This can be seen in Fig.~\ref{fig:fpeakrho}, where the triangles refer to the highest density during the inspiral phase, while the crosses and asterisks mark the maximum density of the postmerger stage for 1.35-1.35~$M_\odot$ mergers. Simulations with different EoSs yield different postmerger GW frequencies $f_\mathrm{peak}$, which characterize the stiffness of the EoSs. Soft EoSs yield high frequencies and lead to a particularly strong increase of the density in the postmerger phase. It is obvious that GWs from the postmerger stage probe a different regime of the nuclear EoSs. In particular, the detection of postmerger GW emission offers the possibility to learn about the presence of a hadron-quark phase transition in compact stars~\cite{Bauswein2019,Most2019}. Figure~\ref{fig:fpeaklam} reveals that the presence of a strong first-order phase transition results in a significant increase of the dominant postmerger GW frequency $f_\mathrm{peak}$ for a given tidal deformability, which is measured from the inspiral phase.
See e.g.~\cite{Paschalidis2018,Drago2018,Han2019} for other studies of quark stars in the context of binary mergers.
Finally, we emphasize that observational evidence for the presence or absence of postmerger GW emission in combination with the binary mass measurements from the inspiral phase can be employed to directly determine the threshold binary mass for prompt black-hole formation. As discussed above (see Eqs.~(\ref{eq:mthrrmax}) and~(\ref{eq:mthrr16})) this information can be used to obtain the maximum mass of nonrotating NSs~\cite{Bauswein2013}, which would thus provide very important constraints on the very high density regime of the nuclear EoS.
{\it Acknowledgments: The author acknowledges support by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement No. 759253, the Klaus-Tschira Foundation, and the Sonderforschungsbereich SFB 881 ``The Milky Way System'' (subproject A10) of the German Research Foundation (DFG). I thank the organizers of the workshop ``Nuclear astrophysics in the new era of multi-messenger astronomy''.}
\section*{References}
\input{elsarticle-template.bbl}
\end{document}
Let $d\in\N$. We consider a semigroup of probability measures $(p_t)_{t\geq0}$ given by
$$
\int_{\Rd} e^{i\langle\xi,x\rangle} p_t(\ud x)
=e^{-t\psi(\xi)}\,,\qquad \xi \in\Rd\,,
$$
where $\psi$ is a symbol defined by
\begin{align*}
\psi (\xi) = \int_{\Rd}\left( 1-e^{i \langle\xi,z\rangle}\right)\nu (\ud z),\quad \xi\in\Rd,
\end{align*}
and $\nu(\ud z)$ is a Borel measure satisfying
$$
\nu(\{0\})=0\,,
\qquad\quad
\int_{\Rd} (1\land |z|) \, \nu(\ud z)<\infty \,.
$$
Let $\{P_t\}_{t\geq0}$ be the convolution semigroup of operators on $\calC_0(\Rd)$ defined by $(p_t)_{t\geq 0}$ and let $\calL$ denote its infinitesimal generator, which for $f\in C_c^2(\Rd)$ is given by the formula
\begin{align}
\calL f (x) &= \int_{\Rd} \left(f(x+z) - f(x)\right) \nu (\ud z). \label{L}
\end{align}
Let $\Omega$ be a non-empty, open subset of $\Rd$ such that its Lebesgue measure $|\Omega|$ is finite. We consider the following quantity associated with the semigroup $(p_t)_{t\geq0}$,
\begin{align*}
H _{\Omega} (t) = \int_{\Omega}\int_{\Omega-x}p_t( \ud y)\ud x,
\end{align*}
which we will call \textit{heat content}.
We note that the function $u(t,x) = \int_{\Omega-x}p_t(\ud y)$ is the weak solution of the initial value problem
\begin{align*}
\frac{\partial}{\partial t}u(t,x) &= \mathcal{L}\, u(t,x),\quad t>0,\, x\in \Rd, \\
u(0,x) &= \mathds{1}_{\Omega}(x).
\end{align*}
Therefore, the quantity $H_\Omega (t)$ can be interpreted as the amount of \textit{heat} contained in $\Omega$ at time $t$, provided that the initial temperature equals one in $\Omega$ and zero in $\Omega^c$.
Our main goal is to study the
asymptotic expansion
of $H_{\Omega}(t)$
for small $t$.
We observe that
$$
H_{\Omega}(t) = |\Omega| - H(t),
$$
where
$$
H(t) = \int_{\Omega} \int_{\Omega^c-x} p_t(\ud y) \ud x,
$$
and hence it suffices to work with the function $H(t)$. One of the main results of \cite{MR3563041} states that, for small $t$,
$$
H_{\Omega}(t) = |\Omega| -t\Per_{\nu}(\Omega)+o(t),$$
where
$\Per_{\nu}(\Omega)$ is the nonlocal perimeter related to the measure $\nu$, defined as
\begin{align}\label{X_perimeter}
\Per_{\nu}(\Omega)= \int_{\Omega}\int_{\Omega ^c-x}\nu (\ud y)\, \ud x .
\end{align}
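The first-order expansion can be checked by hand in a simple compound Poisson example (our illustrative choice, not taken from \cite{MR3563041}): let $d=1$, $\nu=\frac{1}{2}(\delta_1+\delta_{-1})$ and $\Omega=(0,1)$. Since for every $x\in(0,1)$ both atoms $\pm1$ of $\nu$ lie in $\Omega^c-x$, we get $\Per_\nu(\Omega)=1$; moreover, the associated process $X_t$ takes integer values and the only integer in $\Omega-x$ is $0$, so $H_\Omega(t)=\mathbb{P}(X_t=0)=e^{-t}I_0(t)$ with the modified Bessel function $I_0$. A numerical sketch:

```python
import math

def heat_content(t, kmax=60):
    """H_Omega(t) = P(X_t = 0) = e^{-t} I_0(t) for Omega = (0,1) and
    nu = (delta_1 + delta_{-1})/2: here X_t = N1_t - N2_t with independent
    Poisson(t/2) processes, so P(X_t = 0) = e^{-t} sum_k (t/2)^{2k}/(k!)^2."""
    s = sum((t / 2.0)**(2 * k) / math.factorial(k)**2 for k in range(kmax))
    return math.exp(-t) * s

t = 0.01
h = heat_content(t)
per_nu = 1.0                          # Per_nu((0,1)) for this jump measure
error = abs(h - (1.0 - per_nu * t))   # deviation from |Omega| - t*Per_nu(Omega)
```

The deviation from $|\Omega|-t\,\Per_\nu(\Omega)$ is of order $t^2$ (in fact $\tfrac{3}{4}t^2$ to leading order), consistent with the expansion.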
For instance, if $\nu$ is the $\al$-stable L{\'e}vy measure with $\al \in (0,1)$, denoted by
$\nu^{(\al)}(\ud z) = \mathcal{A}_{d,-\alpha}|z|^{-d-\al} \, \ud z$,
where
$$
\mathcal{A}_{d,-\alpha} = \frac{2^{\alpha}\Gamma\left(\frac{d+\alpha}{2}\right)}{\pi^{d/2} \left|\Gamma\left(-\frac{\alpha}{2}\right)\right|},
$$
then $\Per_{\nu^{(\al)}}(\Omega) = \mathcal{A}_{d,-\al} \Per_{(\al)}(\Omega)$, with
$\Per_{(\al)}(\Omega)$ being the well-known $\al$-perimeter \cite{MR2675483}, given for $0<\al <1$ by
\begin{align*}
\Per_{(\al)} (\Omega)= \int_{\Omega}\int_{\Omega ^c} \frac{\ud y\, \ud x}{|x-y|^{d+\al}}.
\end{align*}
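For $d=1$ and $\Omega=(0,1)$ the $\al$-perimeter is explicit: the inner integral equals $\left(x^{-\al}+(1-x)^{-\al}\right)/\al$, which integrates to $\Per_{(\al)}((0,1))=\frac{2}{\al(1-\al)}$. A short numerical cross-check of this closed form (a sketch written for this exposition):

```python
def alpha_perimeter_unit_interval(alpha, n=200_000):
    """Midpoint-rule approximation of Per_alpha((0,1)).

    The inner integral over (0,1)^c is done analytically:
    int_{y<0} (x - y)^(-1-a) dy + int_{y>1} (y - x)^(-1-a) dy
        = (x**-a + (1 - x)**-a) / a   for 0 < a < 1.
    """
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h                      # midpoint of the i-th cell
        total += (x**-alpha + (1.0 - x)**-alpha) / alpha
    return total * h

approx = alpha_perimeter_unit_interval(0.5)
exact = 2.0 / (0.5 * (1.0 - 0.5))   # closed form 2/(alpha*(1-alpha)) = 8
```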
In the present paper, we shall establish the next terms of the asymptotic expansion of the heat content related to convolution semigroups. Such a result is new even for the fractional Laplacian $(-\Delta)^{\alpha/2}$ (in our setting we consider $\alpha\in(0,1)$). For instance, if $1/\alpha$ is a natural number, we prove the following expansion of the heat content for the fractional Laplacian:
$$H_\Omega(t)=|\Omega|+ \sum_{n=1}^{1/\alpha-1} \frac{(-1)^{n}}{n!} t^n \Per_{\nu^{(n\alpha)}} (\Omega) +
\frac{(-1)^{1/\alpha}}{(1/\alpha-1)!\pi}t^{1/\alpha}\log(1/t) \Per(\Omega) + o(t^{1/\al} \log(1/t)) ,$$
where $\Per$ is the classical perimeter of the set, see \eqref{Perimeter_def}.
A natural question arises: will the next term be the mean curvature or its non-local counterpart?
The key observation needed to obtain the asymptotic expansion is that the heat content can be expressed as the action of the semigroup on the {\it covariance function} of a set. We give more general results concerning the asymptotic expansion of $P_tf$ for functions $f$ belonging to a H{\"o}lder space. Our standing assumption is the {\it weak upper scaling} of the
symbol $\psi^*$ or, equivalently, a certain scaling property of the {\it concentration function} of a L\'evy measure, see Theorem \ref{thm:thm1}. For a class of convolution semigroups, for instance for the semigroup associated to $\log(1-\Delta)$, we get the full expansion, see Theorem \ref{thm:thm2}. We apply these results to obtain expansions of the heat content, stated in Corollaries \ref{thm:cor2} and \ref{cor3}.
Using the asymptotic expansion of the heat kernel of the fractional Laplacian, we give a more explicit asymptotic expansion in the case of $\al$-stable semigroups, see Theorems \ref{thm3} and \ref{thm4}.
Heat content related to the Gaussian semigroup ($\calL = \frac{1}{2} \Delta$) of a set at time $t$ was defined by van den Berg \cite{MR3116054} by means of the heat semigroup. Van den Berg and Gilkey \cite{MR1262245} proved that the heat content, regarded as a function of the variable $t$, has an asymptotic expansion as $t$ tends to $0$. The first three terms of the expansion involve the volume of the set, its perimeter and its mean curvature. The short-time behavior of the heat semigroup in connection
with the geometry of sets of finite perimeter was also studied by Angiuli, Massari and Miranda \cite{MR3019137}. The concept of heat content was extended to the nonlocal setting of $\al$-stable semigroups in 2016 by Acu\~{n}a Valverde \cite{MR3606559}, who described the small-time asymptotic behavior of the nonlocal heat content in this case. In the one-dimensional case, the number of terms of the expansion depends on the parameter $\al$, while in the multidimensional case only the first two terms of the expansion are obtained. The same author found the first three terms of the asymptotic expansion for the Poisson heat content over the unit ball \cite{MR4224349} and over convex bodies \cite{MR4158754}.
In 2017, Cygan and Grzywny \cite{MR3563041} introduced the notion of a nonlocal heat content related to general probabilistic convolution semigroups and generalized the aforementioned results of Acu\~{n}a Valverde. Later, they proved similar results for the generalized heat content related to convolution semigroups \cite{MR3859849}. Maz\'{o}n, Rossi and Toledo \cite{MR3930619} found the full asymptotic expansion of the heat content for nonlocal diffusions with nonsingular kernels. Recently, in a more general setting, the heat
content related to the fractional Laplacian in Carnot groups was studied by Ferrari, Miranda, Pallara, Pinamonti, and Sire \cite{MR3732178}.
\section{Preliminaries}
\subsection{Convolution semigroups}\label{sec21}
For $f: \Rd \to \R$, let
$$
P_t f (x) = \int_{\Rd} f(x+y) \, p_t(\ud y), \quad \, t \geq 0\,, \, x \in \Rd\,.
$$
The generator $\calL$ of the semigroup $\{P_t\}_{t\geq 0}$ is defined as
$$
\mathcal{L}f(x) = \lim_{t \to 0^+} \frac{P_t f(x) - f(x)}{t},
$$
for functions for which the above limit exists.
We denote by $\calC_0(\Rd)$ the space of continuous functions $f: \Rd \to \R$ vanishing at infinity. For $\beta\in(0,1]$ we define
$$
\vertiii{f}_{\beta} := \sup_{|x-y|\leq 1} \frac{|f(x)-f(y)|}{|x-y|^{\beta}}.
$$
We will consider the H\"older space
$$
\calC_0^{\beta} = \left\{ f \in \calC_0(\Rd): \|f\|_{\beta}:= \vertiii{f}_{\beta} + \|f\|_{\infty} < \infty \right\}.
$$
By \cite[Theorem 3.2]{MR3926121}, for a fixed $\beta \in (0,1]$, if $\int_{|y|<1} |y|^{\beta} \nu (\ud y) < \infty$, then for $f \in \calC_0^{\beta}$,
\begin{equation}\label{eq:LH}
\calL f(x) = \int_{\Rd} (f(x+y)-f(x)) \, \nu (\ud y).
\end{equation}
The real part of the symbol $\psi$
equals
$
{\rm Re}[\psi(\xi)]=\int_{\Rd}\big( 1-\cos \left<\xi,z\right> \big) \, \nu(\ud z)
$. We will consider its radial, continuous and non-decreasing majorant
defined by
$$
\psi^*(r)=\sup_{|\xi|\leq r} {\rm Re}[\psi(\xi)],\qquad r>0\,.
$$
For $r>0$ we define the {\it concentration function}
$$
h(r)= \int_{\Rd}\left(1\wedge\frac{|x|^2}{r^2}\right)\nu(\ud x)\,.
$$
By \cite[Lemma~4]{MR3225805}, for all $r>0$,
\begin{align}\label{ineq:comp_TJ}
\frac{1}{8(1+2d)} h(1/r)\leq \psi^*(r) \leq 2 h(1/r)\,.
\end{align}
Hence $h$ is a more tractable version
of $\psi^*$. By \cite[Lemma 2.1]{MR4140542},
\begin{equation}\label{eq:h_estimate}
\int_{|z|\geq r } \nu(\ud z)\leq h(r) \quad \textnormal{for all} \,\, r>0.
\end{equation}
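For instance, for the $\al$-stable L\'evy measure $\nu^{(\al)}$ the concentration function can be computed explicitly: substituting $x=rz$ and using the scaling $\nu^{(\al)}(r\,\ud z)=r^{-\al}\,\nu^{(\al)}(\ud z)$, we get
$$
h(r) = r^{-\al}\, h(1), \qquad h(1) = \mathcal{A}_{d,-\al}\,\omega_{d-1}\left(\frac{1}{2-\al}+\frac{1}{\al}\right),
$$
where $\omega_{d-1}=\sigma(\mathbb{S}^{d-1})$. This is consistent with \eqref{ineq:comp_TJ}, since $\psi^*(r)=r^{\al}$ in this case.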
By \cite[Lemma 2.7]{MR4140542}, if $f\colon [0,\infty)\to [0,\infty)$ is differentiable, $f(0)=0$, $f'\geq 0$ and $f'\in L^1_{loc}([0,\infty))$, then for all $r>0$,
\begin{equation}\label{eq:lem26}
\int_{|z|<r} f(|z|) \,\nu(\ud z) =\int_0^r f'(s) \nu (|x|\geq s)\, \ud s - f(r) \nu(|x|\geq r).
\end{equation}
Let $\theta_0 \geq 0$ and $\phi : (\theta_0, \infty) \to [0,\infty]$. We say that
$\phi$ satisfies {the} {\it weak upper scaling condition} (at infinity) if there are numbers
$\al \in \R$,
and $\uC \in [1,\infty)$ such that
\begin{equation}\label{eq:USC}
\phi(\lambda\theta)\le
\uC \lambda^{\al} \phi(\theta) \quad \mbox{for}\quad \lambda\ge 1, \quad\theta
>\theta_0.
\end{equation}
In short, we write $\phi\in \WUSC{\al}{\theta_0}{\uC}$. This condition will be our standing assumption on the symbol $\psi^*$ throughout the paper.
The following auxiliary result is a consequence of \eqref{ineq:comp_TJ}, \eqref{eq:h_estimate} and \eqref{eq:lem26}.
\begin{lemma}\label{lem:equiv}
Let $\al \in(0,2]$, $\uC \in [1,\infty)$ and $\theta_0 \in[0,\infty)$. Consider
\begin{enumerate}
\item[\Aa] $\psi^* \in \WUSC{\al}{\theta_0}{\uC}$
\item[\Ab] There is $C>0$ such that for all $\lambda \leq 1$ and $r<1/\theta_0$,
$$
h(\lambda r)\leq C \lambda^{-\al}h(r).
$$
\end{enumerate}
Then, $\Aa$ implies $\Ab$ with $C = c_d \uC$, where $c_d=16(1+2d)$, while $\Ab$ gives $\Aa$ with $\uC=c_d C$. If additionally $\theta_0 \in [0,1)$, then $\Aa$ and $\Ab$ each imply
\begin{enumerate}
\item[\Ac] For all $\eps>0$,
$$
\int_{|y|<1} |y|^{\al+\eps} \nu(\ud y) < \infty.
$$
\end{enumerate}
\end{lemma}
\begin{proof}
We will show that \Aa \, implies \Ab. Using \eqref{ineq:comp_TJ}, \eqref{eq:USC} and again \eqref{ineq:comp_TJ}, we obtain
\begin{align*}
h(\lambda r) \leq 8(1+2d) \psi^*((\lambda r)^{-1}) & \leq 8(1+2d) \uC \lambda^{-\al} \psi^*(r^{-1}) \\
&\leq 16(1+2d) \uC \lambda^{-\al} h(r).
\end{align*}
The converse implication can be proved analogously. It remains to show that \Aa \, implies \Ac. By \eqref{eq:lem26} with $f(s)= s^{\al+\eps}$ and $r=1$,
\begin{align*}
\int_{|y|<1} |y|^{\al + \eps} \, \nu (\ud y) &= (\al+\eps) \int_0^1 s^{\al+\eps-1} \nu(|x|\geq s) \, \ud s - \nu(|x|\geq 1) \\
&\leq (\al+\eps) \int_0^1 s^{\al+\eps-1} h(s) \, \ud s \\
&\leq (\al+\eps) \, C h(1) \int_0^1 s^{\eps-1} \, \ud s
= \frac{C(\al+\eps)}{\eps} h(1) <\infty.
\end{align*}
\end{proof}
\subsection{Heat content}
Following \cite[Section 3.3]{MR1857292}, for any measurable set $\Omega \subset \Rd$ we define its perimeter $\Per(\Omega)$ as
\begin{align}\label{Perimeter_def}
\Per(\Omega) = \sup \left\{ \int_{\Rd}\ind_{\Omega}(x)\mathrm{div}\, \phi (x)\, \ud x:\, \phi \in C_c^1(\Rd,\Rd),\, \norm{\phi}_{\infty}\leq 1 \right\}.
\end{align}
We say that $\Omega$ is of finite perimeter if $\Per(\Omega)<\infty$.
We mention that, by \cite[Proposition 3.62]{MR1857292}, for any open $\Omega$ with Lipschitz boundary $\partial \Omega$ and finite Hausdorff measure $\sigma (\partial \Omega)$ we have
\begin{align*}
\Per(\Omega) = \sigma (\partial \Omega).
\end{align*}
For any $\Omega \subset\Rd$ with finite Lebesgue measure $|\Omega|$, we define the covariance function $g_\Omega$ of $\Omega$ as follows
\begin{align}\label{g_omega_defn}
g_\Omega (y)=|\Omega\cap (\Omega + y)|=\int_{\Rd}\,\ind_{\Omega}(x)\,\ind_{\Omega}(x-y) \ud x,\quad y\in \Rd.
\end{align}
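For instance, for an interval $\Omega=(a,b)\subset\R$ with $L=b-a$, one readily checks that
$$
g_{(a,b)}(y) = (L-|y|)\,\ind_{[0,L)}(|y|), \qquad y\in\R,
$$
so that $g_{\Omega}(0)=|\Omega|=L$ and $g_{\Omega}(0)-g_{\Omega}(y)=|y|\wedge L$.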
We recall some important properties of the function $\g$.
Let $\Omega \subset \Rd$ have finite measure. Then, by \cite[Proposition 2, Theorem 13 and Theorem 14]{MR2816305},
$\g$ is symmetric, nonnegative, bounded from above by $|\Omega|$,
and
$g_{\Omega} \in \calC_0(\Rd)$.
Moreover, if $\Per(\Omega)<\infty$, then
$g_\Omega$ is Lipschitz with
\begin{equation}\label{prop_iv}
2\norm{g_\Omega}_{\mathrm{Lip}} \leq \Per(\Omega).
\end{equation}
Moreover, for every $u \in \mathbb{S}^{d-1}$ the limit $\lim_{r\to 0^+}\frac{\g(0)-\g(ru) }{r}$ exists, is finite and
\begin{equation}\label{prop_v}
\Per(\Omega) = \frac{\Gamma((d+1)/2)}{\pi^{(d-1)/2}} \int_{\mathbb{S}^{d-1}}\lim_{r\to 0^+}\frac{ \g(0)- \g(ru)}{r} \sigma (\ud u).
\end{equation}
In particular, there is a constant $C=C(\Omega)>0$ such that
\begin{align}
0\leq g_\Omega (0)- g_\Omega (y) \leq C (1\wedge |y|).\label{g_Omega_bound}
\end{align}
By Cygan and Grzywny \cite[Lemma 3]{MR3563041}, the function $H(t)$ can be written as
\begin{equation}\label{H_formula}
H(t) = \int _{\Rd}\left( g_\Omega (0) -\g (y)\right) p_t(\ud y),
\end{equation}
and by \cite[Proof of Lemma 1]{MR3563041},
\begin{equation}\label{perX}
\Per_{\nu} (\Omega) = \int_{\Rd} \left(\g(0)-\g(y)\right) \nu(\ud y).
\end{equation}
By \cite[Theorem 3]{MR3563041}, if $\Omega \subset \Rd$ is an open set such that $|\Omega| < \infty$ and $\Per(\Omega)<\infty$ (i.e. $\mathds{1}_{\Omega} \in \operatorname{BV}(\Rd)$), then
\begin{align}\label{eq222}
t^{-1}H(t) = t^{-1}(\g(0) - P_t\g (0)),
\end{align}
which converges to $ -\calL \g(0) = \Per _{\nu}(\Omega)$ as $t$ tends to $0$.
\section{Main results and proofs}
\subsection{Convolution semigroups for nonlocal operators on $\Rd$}
\begin{lemma}\label{thm:lem2}
Assume that $\psi^* \in \WUSC{\al}{1}{\uC}$ for some $\al \in (0,1)$.
If $f \in \calC_0^{\beta}$ for some $\beta \in (\al,1]$, then $\calL f \in \calC_0^{\beta-\al}$ and $\|\calL f\|_{\beta-\al} \leq C_1 (1-\al/\beta)^{-1} h(1) \|f\|_{\beta}$, where $C_1=C_1(c_d, \overline{C})$.
In particular, if $\beta \in (2\al, 1]$, then $\calL f \in \mathcal{D}(\calL)$.
\end{lemma}
\begin{proof}
By Lemma \ref{lem:equiv}, for all $\lambda \leq 1$ and $r \leq 1$,
\begin{equation}\label{eq:h_scaling}
h(\lambda r)\leq c \lambda^{-\al}h(r),
\end{equation}
where $c= c_d\uC$, and for all $\eps>0$,
\begin{equation}
\int_{|y|<1} |y|^{\al+\eps} \nu(\ud y) < \infty.
\end{equation}
First, we will deal with $\vertiii{\calL f}_{\beta -\al }$. Consider $|x-y|\leq 1$. By \eqref{eq:LH},
\begin{align}
|\calL f(x)-\calL f(y)| &= \left|\int_{\Rd} \left(f(x+z)-f(x)-(f(y+z)-f(y))\right) \nu(\ud z) \right| \nonumber\\
& \leq \int_{\Rd} \left|f(x+z)-f(x)-f(y+z)+f(y)\right| \nu(\ud z). \label{eq:int}
\end{align}
We split the integral above as follows
$$
\int_{\Rd} = \int_{|z| \leq |x-y|} + \int_{|z| > |x-y|} =: \mathrm{I}_1 + \mathrm{I}_2.
$$
We will first deal with $\mathrm{I}_1$. Denote $L= \vertiii{f}_{\beta}$. We have
\begin{equation}\label{eq:Holder}
|f(x)-f(y)|\leq L |x-y|^{\beta}.
\end{equation}
By
\eqref{eq:Holder}, Fubini theorem, \eqref{eq:h_estimate} and \eqref{eq:h_scaling} we have
\begin{align*}
\mathrm{I}_1
&\leq \int_{|z| \leq |x-y|} \left(|f(x+z)-f(x)|+|f(y+z)-f(y)|\right) \nu(\ud z) \leq 2 L \int_{|z| \leq |x-y|} |z|^{\beta} \nu(\ud z) \\
&= 2L \int_{|z| \leq |x-y|} \int_0^{|z|^{\beta}} \ud s \nu(\ud z) = 2L\int_0^{|x-y|^{\beta}} \int_{s^{1/\beta} \leq |z| \leq |x-y|} \nu(\ud z) \ud s \leq 2L\int_0^{|x-y|^{\beta}} \int_{|z| \geq s^{1/\beta}} \nu(\ud z) \ud s \\
&\leq 2L\int_0^{|x-y|^{\beta}} h(s^{1/\beta}) \ud s \leq 2Lch(1) \int_0^{|x-y|^{\beta}} s^{-\al/\beta} \ud s = 2Lch(1) (1-\al/\beta)^{-1} |x-y|^{\beta-\al}.
\end{align*}
Now we will estimate $\mathrm{I}_2$. Using again
\eqref{eq:Holder}, \eqref{eq:h_estimate} and \eqref{eq:h_scaling} we get
\begin{align*}
\mathrm{I}_2
&\leq \int_{|z| > |x-y|} \left(|f(x+z)-f(y+z)| + |f(x)-f(y)|\right) \nu(\ud z) \\
&\leq 2L |x-y|^{\beta} \int_{|z| > |x-y|} \nu(\ud z) \leq 2L |x-y|^{\beta} h(|x-y|) \leq 2L c h(1) |x-y|^{\beta-\al}.
\end{align*}
Hence
\begin{equation}\label{eq:fest1}
\vertiii{\calL f}_{\beta -\al } \leq \left(1+ (1-\al/\beta)^{-1}\right) 2ch(1) \vertiii{f}_{\beta}.
\end{equation}
Furthermore,
\begin{align*}
|\calL f(x)| \leq \int_{\Rd} |f(x+y) -f(x)| \, \nu (\ud y).
\end{align*}
We split the integral above as follows
$$
\int_{\Rd} = \int_{|y|<1} + \int_{|y|\geq1} =: \mathrm{I}_3 + \mathrm{I}_4.
$$
Proceeding as in the case of $\mathrm{I}_1$, we obtain
\begin{align*}
\mathrm{I}_3 \leq \vertiii{f}_{\beta} \int_{|y|<1} |y|^{\beta} \nu (\ud y) \leq \vertiii{f}_{\beta} \frac{c}{1-\al/\beta} h(1).
\end{align*}
Next,
\begin{align*}
\mathrm{I}_4 \leq 2 \|f\|_{\infty} \int_{|y|\geq1} \nu (\ud y) \leq 2\|f\|_{\infty} h(1).
\end{align*}
Therefore
\begin{align}
\|\calL f\|_{\infty} \leq \frac{c}{1-\al/\beta} h(1) \vertiii{f}_{\beta} + 2h(1) \|f\|_{\infty} &\leq \left(\frac{c}{1-\al/\beta} +2\right) h(1) \|f\|_{\beta} \nonumber\\
&\leq \frac{3c}{1-\al/\beta} h(1) \|f\|_{\beta} \label{eq:lfe1}.
\end{align}
By \eqref{eq:fest1} and \eqref{eq:lfe1},
\begin{align}
\|\calL f\|_{\beta-\al} &\leq 2h(1) \|f\|_{\infty} + \left(2+ \frac{3}{1-\al/\beta}\right) ch(1) \vertiii{f}_{\beta} \nonumber\\
&\leq \left(2+ \left(2+\frac{3}{1-\al/\beta}\right) c\right) h(1) \|f\|_{\beta} \leq \frac{7c}{1-\al/\beta} h(1) \|f\|_{\beta} \label{eq:lfe2}.
\end{align}
The proof is complete.
\end{proof}
\begin{corollary}\label{thm:cor1}
Assume that $\psi^* \in \WUSC{\al}{1}{\uC}$ for some $\al \in (0,1)$.
If $f \in \calC_0^{\beta}$ for some $\beta \in (n\al,1]$, then $\calL^k f \in \calC_0^{\beta-k\al}$ for $k \in \{1, \ldots, n\}$ and $\calL^k f \in \mathcal{D}(\calL)$ for $k \in \{1, \dots, n-1\}$.
\end{corollary}
It is well-known that for $f \in \mathcal{D}(\calL)$, the map $t \mapsto P_tf$ is differentiable on $[0,\infty)$ and $\frac{\ud}{\ud t} P_tf = \calL P_t f = P_t \calL f$, see e.g. Pazy \cite[Theorem 1.2.4 c)]{MR710486}. Therefore, if $\calL^k f \in \mathcal{D}(\calL)$ for $k \in \{1, \dots, n-1\}$, then $\frac{\ud^n}{\ud t^n} P_tf = \calL^{n} P_t f = P_t \calL^{n} f$.
To apply this result, we will use the fact that for $t_0>0$, $P_{t_0}(\calC^{\beta}_0) \subset \calC^{\beta}_0$. Indeed, with $L = \vertiii{f}_{\beta}$,
\begin{align*}
|P_{t_0}f(x)-P_{t_0}f(y)| &\leq \int_{\Rd} \left|f(x+z)-f(y+z)\right| p_{t_0} (\ud z) \\
&\leq L |x-y|^{\beta} \int_{\Rd} p_{t_0}(\ud z) = L |x-y|^{\beta}.
\end{align*}
\begin{theorem}\label{thm:thm1}
Assume that $\psi^* \in \WUSC{\al}{1}{\uC}$ for some $\al \in (0,1)$.
If $f \in \calC_0^{\beta}$ for some $\beta \in (n\al,1]$, then
$$
\lim_{t \to 0^+} t^{-n} \, \left(P_tf - \sum_{k=0}^{n-1} \frac{t^k}{k!} \calL^k f \right) = \frac{1}{n!} \calL^n f.
$$
\end{theorem}
\begin{proof}
By Corollary \ref{thm:cor1}, $\calL^k f \in \mathcal{D}(\calL)$ for $k \in \{1, \dots, n-1\}$, hence $P_t f$ is $n$ times differentiable. By Taylor's theorem applied to $t \mapsto P_tf$,
$$
P_t f = \sum_{k=0}^{n-1} \frac{t^k}{k!} \calL^k f + \frac{t^n}{n!} P_{\theta_0} \calL^n f
$$
for some $\theta_0 \in (0,t)$. The assertion follows from the strong continuity of $(P_t)_{t\geq0}$ at $t=0$, applied to $\calL^n f$.
\end{proof}
Theorem \ref{thm:thm1}, Lemma \ref{lem:equiv} and \eqref{prop_iv} give the following result.
\begin{corollary}\label{thm:cor2}
Assume that
there exists $C>0$ such that for all $\lambda\leq 1$ and $r<1$, $h(\lambda r) \leq C \lambda^{-\alpha} h(r)$
for some $\al \in (0,1)$.
Let $n\geq 2$. If $n\al<1$, then
\begin{align*}
\lim_{t \to 0^+} t^{-n} \left(H(t) - t\Per_{\nu}(\Omega) + \sum_{k=2}^{n-1}\frac{t^k}{k!} \calL^k \g(0)\right) = -\frac{1}{n!} \calL^n \g(0).
\end{align*}
\end{corollary}
\begin{example}\label{ex_stable}
If $\nu^{(\al)}$ is an $\al$-stable L\'evy measure, $\al \in (0,1)$, then the H\"older space $\calC_0^{\beta}$ is contained in the domain of $\calL =-(-\Delta)^{\al/2}$ for any $\beta \in (\al,1]$, and we have
\begin{equation*}
\calL f(x) = \int_{\Rd\setminus\{0\}} (f(x+y)-f(x)) \, \nu^{(\al)}(\ud y), \qquad f \in \calC_0^{\beta},\; x \in \Rd.
\end{equation*}
The associated semigroup $(p_t)_{t\geq0}$ is the $\al$-stable semigroup in $\Rd$, determined by $\psi(\xi) = |\xi|^{\al}$. We have $\psi(r \xi) = r^{\al}\psi(\xi)$, so in particular $\psi^* \in \WUSC{\al}{0}{1}$. If $f \in \calC_0^{\beta}$ for some $\beta \in (n\al,1]$, then by Corollary \ref{thm:cor1}, $\calL^k f \in \mathcal{D}(\calL)$ for $k \in \{1, \dots, n-1\}$ and $\calL^k f \in \calC_0^{\beta-k\al}$ for $k \in \{1, \ldots, n\}$, and by Theorem \ref{thm:thm1},
\begin{equation}\label{eq:stable}
\lim_{t \to 0^+} t^{-n} \, \left(-P_tf + \sum_{k=0}^{n-1} \frac{(-1)^{k-1} t^k}{k!} \left(-(-\Delta)^{k\al/2}\right) f \right) = \frac{(-1)^{n}}{n!} \left(-(-\Delta)^{n\al/2}\right)f,
\end{equation}
since $\left((-\Delta)^{\al/2}\right)^n =(-\Delta)^{n\al/2}$ for $n\al<2$, see \cite[(1.1.12)]{MR0350027}.
By Corollary \ref{thm:cor2},
\begin{align*}
\lim_{t \to 0^+} t^{-n} \left(H(t) - \sum_{k=1}^{n-1}\frac{(-1)^{k-1} t^k}{k!} \Per_{\nu^{(k\al)}} (\Omega)\right) = \frac{(-1)^{n-1}}{n!} \Per_{\nu^{(n\al)}} (\Omega).
\end{align*}
\end{example}
\begin{theorem}\label{thm:thm2}
Assume that for all $\al \in (0,1)$, $\psi^* \in \WUSC{\al}{1}{C \al^{-1}}$ for some $C>0$.
If $f \in \calC_0^{\beta}$ for some $\beta \in (0,1]$, then there exists $t_0>0$ such that for all $t\in(0,t_0)$,
$$
P_t f = \sum_{k=0}^{\infty} \frac{t^k}{k!} \calL^k f
$$
in $\calC_0(\Rd)$.
\end{theorem}
\begin{proof}
Without loss of generality, we can assume $h(1)=1$.
For any $N \in \mathbb{N}$, let $\alpha = \alpha(N) = \beta/(2N)$. For $n \in \{1, 2, \ldots, N-1\}$, let $\beta_n = \beta - n\alpha$.
Since $N\alpha = \beta/2 < \beta$, by the proof of Theorem \ref{thm:thm1},
\begin{align*}
P_t f (x) = \sum_{n=0}^{N-1} \frac{t^n \calL^n f (x)}{n!} + \tilde{R}_N,
\end{align*}
where
$$
\tilde{R}_N = \frac{t^N P_{\theta_0} \calL^N f(x)}{N!}.
$$
By \eqref{eq:lfe1}, for any $N \in \mathbb{N}$,
\begin{equation}\label{eq:lnf}
\|\calL^N f\|_{\infty}
\leq \frac{7 C/\alpha}{1-\al/\beta_{N-1}} \|\calL^{N-1} f\|_{\beta_{N-1}} .
\end{equation}
By \eqref{eq:lfe2}, for $k \in \{1, 2, \ldots N-1\}$,
\begin{equation}\label{eq:lfb}
\|\calL f\|_{\beta_k}
\leq \frac{7C/\alpha}{1-\al/\beta_{k-1}} \|f\|_{\beta_{k-1}}.
\end{equation}
Using \eqref{eq:lnf} and applying \eqref{eq:lfb} $N$ times we get
\begin{align}
\|\calL^N f \|_{\infty} \leq \frac{7C/\al}{1-\al/\beta_{N-1}}
\|\calL^{N-1} f\|_{\beta_{N-1}}
&\leq \left(\prod_{k=0}^{N-1} \frac{7C/\al}{1-\al/\beta_k}
\right) \|f\|_{\beta} \nonumber\\
&\leq (7C)^N \left(\frac{1/\al}{1-2\al/\beta}\right)^N \|f\|_{\beta} \nonumber\\
&
= (7C)^N \left(\frac{2N/\beta}{1-1/N}\right)^N
\|f\|_{\beta} \nonumber \\
&
=(14C/\beta)^N N^N \left(1-\frac{1}{N}\right)^{-N}
\|f\|_{\beta} \nonumber\\
&\leq 4\,(14C/\beta)^N N^N \|f\|_{\beta}\label{eq:lnfe},
\end{align}
where we used that $\beta_k \geq \beta/2$ for $k \leq N-1$, so that $1-\al/\beta_k \geq 1-2\al/\beta = 1-1/N$, and, in the last step, that $(1-1/N)^{-N} \leq 4$ for $N \geq 2$.
By the contractivity of $P_{\theta_0}$, \eqref{eq:lnfe} and Stirling's formula $N! \geq \sqrt{2\pi N}(N/e)^N$,
\begin{align*}
|\tilde{R}_N| \leq \frac{t^N \| P_{\theta_0} \calL^N f \|_{\infty}}{N!} \leq \frac{t^N\|\calL^N f \|_{\infty}}{N!}
\leq \frac{4(14C/\beta)^N N^N t^N}{\sqrt{2\pi N} (N/e)^N} \|f\|_{\beta} = \frac{4(14eCt/\beta)^N}{\sqrt{2\pi N}} \|f\|_{\beta},
\end{align*}
which tends to $0$ as $N \to \infty$, provided that $t < t_0 := \beta/(14eC)$. The proof is complete.
\end{proof}
\begin{example}
Let $\psi(\xi)=\log(1+|\xi|^2)$, i.e. $\calL =- \log(1 - \Delta)$.
Let $\alpha\in(0,1]$. Since, for $\lambda\geq 1$ and $x\geq 1$,
\begin{align*}
\log(1+\lambda x) &\leq \log(\lambda(1+x))
= \frac{1}{\alpha} \log(\lambda^{\alpha}(1+x)^{\alpha})
\leq \frac{1}{\alpha} \log(\lambda^{\alpha} (1+x)),
\end{align*}
and
$$
\frac{\log(\lambda^{\alpha} (1+x))}{ \log(1+x)}\leq \frac{ \lambda^{\alpha}}{ \log 2},
$$
we have $\log(1+\cdot) \in \WUSC{\al}{1}{2/\al}$. Hence $\psi \in \WUSC{\al}{1}{4/\al}$,
that is $\psi$ satisfies the assumptions of Theorem \ref{thm:thm2}.
\end{example}
\begin{corollary}\label{cor3}
Assume that there exists $C>0$ such that for all $\al \in (0,1)$, $\lambda \leq 1$ and $r<1$, $h(\lambda r) \leq C \al^{-1} \lambda^{-\al} h(r)$.
Then, there exists $t_0>0$ such that for all $t\in(0,t_0)$,
\begin{align*}
H(t) = t\Per_{\nu}(\Omega) - \sum_{k=2}^{\infty}\frac{t^k}{k!} \calL^k \g(0).
\end{align*}
\end{corollary}
\begin{example}
Let $\nu$ be a finite measure on $\Rd$ and let $(p_t)_{t\geq0}$ be determined by
$$
\psi(\xi) = \int_{\Rd} (1-e^{i \langle \xi, z \rangle}) \, \nu(\ud z).
$$
In this case
\begin{align*}
\calL f (x)
&= \int_{\Rd} \left(f(x+y)-f(x)\right) \nu(\ud y).
\end{align*}
The generator $\calL$ can be expressed as a convolution operator
$$
\calL f = \left(\nu - \nu(\Rd) \delta_0 \right) * f,
$$
therefore
$$
\calL^n f = (\nu - \nu(\Rd) \delta_0)^{*n} * f = \sum_{i=0}^{n} (-1)^{n-i} \binom{n}{i} \nu(\Rd)^{n-i} \nu^{*i} * f,
$$
where for $k\in\mathbb{N}$, $\mu^{*k}$ denotes the $k$-fold iteration of the convolution of measure $\mu$ with itself, i.e. $\mu^{*0} = \delta_0$ and $\mu^{*k} = \mu^{*(k-1)} * \mu$ for $k\geq1$.
It is well-known that, since $\nu$ is finite and $\calL$ is bounded, we have
$$
P_t = e^{t\calL}.
$$
Therefore
\begin{align*}
P_t = \sum_{n=0}^{\infty} \frac{t^n}{n!} \calL^n &= \sum_{n=0}^{\infty} \sum_{i=0}^{n} (-1)^{n-i} \frac{t^n}{i!(n-i)!}\nu(\Rd)^{n-i} \nu^{*i}
\\
&=\sum_{i=0}^{\infty} \sum_{n=i}^{\infty} (-1)^{n-i} \frac{t^n}{i!(n-i)!}\nu(\Rd)^{n-i} \nu^{*i} \\
&= \sum_{j=0}^{\infty} \frac{(-t\nu(\Rd))^{j}}{j!} \sum_{i=0}^{\infty} \frac{t^i}{i!} \nu^{*i} =
e^{-t\nu(\Rd)} \exp(t\nu),
\end{align*}
where $\exp(\nu) = \sum_{n=0}^{\infty} \frac{1}{n!} \nu^{*n}$. This expansion follows also from Theorem \ref{thm:thm2}. Applying this result to $f=\g$, we extend \cite[Theorem 1.2]{MR3641640}, which holds for compactly supported probability measures with radial density, to general finite measures.
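As a simple instance, let $\nu=\lambda\delta_a$ with $\lambda>0$ and $a\in\Rd$. Then $\nu^{*i}=\lambda^i\delta_{ia}$ and the identity $P_t=e^{-t\nu(\Rd)}\exp(t\nu)$ reads
$$
P_tf(x) = e^{-\lambda t}\sum_{i=0}^{\infty}\frac{(\lambda t)^i}{i!}\, f(x+ia),
$$
that is, $(p_t)_{t\geq0}$ is the compound Poisson semigroup with intensity $\lambda$ and deterministic jumps of size $a$.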
\end{example}
\subsection{Heat content for the fractional Laplacian on $\Rd$}
Let $(p_t)_{t\geq 0}$ be the $\al$-stable semigroup in $\Rd$, $\al \in (0,2)$. We recall that in this case $\psi(\xi) = |\xi|^{\al}$ and the corresponding L\'{e}vy measure $\nu$ is the $\al$-stable L\'{e}vy measure $\nu^{(\al)}$.
In this case the concentration function equals $h(r) = c r^{-\al}$ for some $c>0$.
Let
$$
a_n := \frac{1}{\pi^{1+d/2}}\frac{(-1)^{n-1}}{n!} 2^{n\al} \Gamma\left(\frac{n\al}{2}+1\right) \Gamma\Big(\frac{n\al+d}{2}\Big) \sin\left(\frac{\pi n \al}{2}\right).
$$
For $\frac{n\al}{2} \notin \mathbb{N}$,
$$
a_n = \frac{(-1)^{n-1}}{n!} \mathcal{A}_{d,-n\al}.
$$
By Hiraba \cite[Remark 2.b)]{MR1287843}, for $\al<1$ and $x \in \Rd\setminus\{0\}$,
\begin{equation}\label{eq:Hiraba}
p_1(x) = \sum_{n=1}^{\infty} a_n |x|^{-n\al-d}.
\end{equation}
The following two results extend \cite[Theorem 1.2]{MR3606559}. Note that they provide a more detailed expansion than the one resulting from Corollary \ref{thm:cor2}; compare with Example \ref{ex_stable}.
\begin{theorem}\label{thm3}
Let $\al \in (0,1)$ be such that $1/\al \notin \mathbb{N}$ and let $\Omega \subset \Rd$ be an open set of finite Lebesgue measure and perimeter. Then
\begin{align*}
\lim_{t \to 0^+} t^{-1/\al}& \left( H(t) - \sum_{n=1}^{\left[\frac{1}{\al}\right]} \frac{(-1)^{n-1}}{n!} t^n \Per_{\nu^{(n\al)}} (\Omega) \right)\\
& \qquad\qquad\qquad=\frac{\pi^{\frac{d-1}{2}}}{\Gamma\left(\frac{d+1}{2}\right)} \Per(\Omega) \left(\int_0^1 r^d p_1(re_d) \, \ud r -\sum_{n=1}^{\infty} \frac{a_n}{1-n\al} \right).
\end{align*}
\end{theorem}
\begin{proof}
Without loss of generality, we can assume $\mathrm{diam}(\Omega)=1$. For $1/\al \notin \N$, $\left[1/\al\right] = \lceil1/\al\rceil -1$, and we will use this formula in order to avoid repeating similar calculations in the next proof. By \eqref{H_formula}, \eqref{perX} and the scaling property of $p_t$,
\begin{align*}
H(t) - \sum_{n=1}^{\lceil\frac{1}{\al}\rceil -1} &\frac{(-1)^{n-1}}{n!} t^n \Per_{\nu^{(n\al)}} (\Omega) \\
&= \int_{\Rd} \left(\g(0)-\g(x)\right) \left(p_t(x) - \sum_{n=1}^{\lceil\frac{1}{\al}\rceil -1} \frac{(-1)^{n-1}}{n!} t^n \mathcal{A}_{d,-n\al} |x|^{-d-n\al} \right) \ud x \\
&=\int_{\Rd} \left(\g(0)-\g(t^{1/\al}x)\right) \left(p_1(x) - \sum_{n=1}^{\lceil\frac{1}{\al}\rceil -1} \frac{(-1)^{n-1}}{n!} \mathcal{A}_{d,-n\al} |x|^{-d-n\al} \right) \ud x.
\end{align*}
We split the above integral into
\begin{equation}\label{int_split}
\int_{|x| \leq 1} +\int_{1 <|x| \leq t^{-1/\al}} + \int_{|x| > t^{-1/\al}} =: \mathrm{I}_1+\mathrm{I}_2+\mathrm{I}_3.
\end{equation}
We have
\begin{align*}
\mathrm{I}_1 &= \int_{|x| \leq 1} \left(\g(0)-\g(t^{1/\al}x)\right) \Bigg(p_1(x) - \sum_{n=1}^{\lceil\frac{1}{\al}\rceil -1} \frac{(-1)^{n-1}}{n!} \mathcal{A}_{d,-n\al} |x|^{-d-n\al} \Bigg) \ud x \\
&= t^{1/\al}\int_{0}^{1} \int_{\mathbb{S}^{d-1}} r^d \, \frac{\g(0)-\g(t^{1/\al}r u)}{t^{1/\al}r} \Bigg(p_1(re_d) - \sum_{n=1}^{\lceil\frac{1}{\al}\rceil -1} \frac{(-1)^{n-1}}{n!} \mathcal{A}_{d,-n\al} r^{-d-n\al} \Bigg) \sigma(\ud u) \, \ud r.
\end{align*}
By \eqref{prop_iv}, \eqref{prop_v} and the Dominated Convergence Theorem,
$$
\lim_{t \to 0^+} t^{-1/\al} \mathrm{I}_1 = \frac{\pi^{\frac{d-1}{2}}}{\Gamma\left(\frac{d+1}{2}\right)} \Per(\Omega) \left( \int_0^1 r^d p_1(re_d) \ud r - \sum_{n=1}^{\lceil\frac{1}{\al}\rceil -1} \frac{(-1)^{n-1}}{n!} \frac{\mathcal{A}_{d,-n\al}}{1-n\al} \right) .
$$
Next,
\begin{align*}
\frac{\mathrm{I}_3}{\g(0)} &= \int_{|x| > t^{-1/\al}} \left(p_1(x) - \sum_{n=1}^{\lceil\frac{1}{\al}\rceil -1} \frac{(-1)^{n-1}}{n!} \mathcal{A}_{d,-n\al} |x|^{-d-n\al} \right) \ud x \\
& = \int_{|x| > t^{-1/\al}} \sum_{n=\lceil\frac{1}{\al}\rceil}^{\infty} a_n |x|^{-n\al-d} \, \ud x\\
&= \sum_{n=\lceil\frac{1}{\al}\rceil}^{\infty} a_n \int_{|x| > t^{-1/\al}}|x|^{-n\al-d} \, \ud x
= \omega_{d-1} \sum_{n=\lceil\frac{1}{\al}\rceil}^{\infty} \frac{a_n}{n\al} t^{n}.
\end{align*}
We have
$$
|\mathrm{I}_3| \leq \g(0)\omega_{d-1} \sum_{n=\lceil\frac{1}{\al}\rceil}^{\infty} \frac{|a_n|}{n\al} t^{n}= O(t^{\lceil\frac{1}{\al}\rceil})
$$
for $t<1$,
thus
$$
\lim_{t\to 0^+} t^{-1/\al} \mathrm{I}_3 = 0.
$$
We have
\begin{align*}
\mathrm{I}_2 &= \int_{1< |x| < t^{-1/\alpha}} \left(\g(0)-\g(t^{1/\al}x)\right) \Bigg(p_1(x) - \sum_{n=1}^{\lceil\frac{1}{\al}\rceil -1} \frac{(-1)^{n-1}}{n!} \mathcal{A}_{d,-n\al} |x|^{-d-n\al} \Bigg) \ud x \\
&= t^{1/\al}\int_{1}^{t^{-1/\alpha}} \int_{\mathbb{S}^{d-1}} r^d \, \frac{\g(0)-\g(t^{1/\al}r u)}{t^{1/\al}r} \Bigg(p_1(re_d) - \sum_{n=1}^{\lceil\frac{1}{\al}\rceil -1} \frac{(-1)^{n-1}}{n!} \mathcal{A}_{d,-n\al} r^{-d-n\al} \Bigg) \sigma(\ud u) \, \ud r \\
&= t^{1/\al}\int_{1}^{t^{-1/\alpha}} \int_{\mathbb{S}^{d-1}} \, \frac{\g(0)-\g(t^{1/\al}r u)}{t^{1/\al}r} \sum_{n=\lceil\frac{1}{\al}\rceil}^{\infty} a_n \, \sigma(\ud u) \, r^{-n\al}\, \ud r.
\end{align*}
By \eqref{prop_iv},
\begin{align*}
|\mathrm{I}_2| &\leq \frac{\Per(\Omega)}{2} t^{1/\al} \sum_{n=\lceil\frac{1}{\al}\rceil}^{\infty} |a_n| \int_{1<|x|\leq t^{-1/\al}} |x|^{1-n\al-d} \, \ud x \\
&\leq \frac{\Per(\Omega)}{2} t^{1/\al} \sum_{n=\lceil\frac{1}{\al}\rceil}^{\infty} |a_n| \int_{|x|>1} |x|^{1-n\al-d} \, \ud x \\
&=\frac{\Per(\Omega)}{2} \omega_{d-1} t^{1/\al} \sum_{n=\lceil\frac{1}{\al}\rceil}^{\infty} \frac{|a_n|}{n\al-1},
\end{align*}
hence, by Fubini's theorem,
$$
\mathrm{I}_2 = \sum_{n=\lceil\frac{1}{\al}\rceil}^{\infty} a_n \int_{1 <|x| \leq t^{-1/\al}} \left(\g(0)-\g(t^{1/\al}x)\right) |x|^{-n\al-d} \, \ud x
$$
and
$$
\lim_{t \to 0^+} t^{-1/\al} \mathrm{I}_2 = \sum_{n=\lceil\frac{1}{\al}\rceil}^{\infty} a_n \lim_{t \to 0^+} t^{-1/\al} \int_{1 <|x| \leq t^{-1/\al}} \left(\g(0)-\g(t^{1/\al}x)\right) |x|^{-n\al-d} \, \ud x.
$$
We get
\begin{align*}
t^{-1/\al} \mathrm{I}_2= \sum_{n=\lceil\frac{1}{\al}\rceil}^{\infty} a_n \int_1^{t^{-1/\al} } \calM_{\Omega}(t,r) r^{-n\al}\, \ud r,
\end{align*}
where
\begin{align}\label{M_om}
\calM_{\Omega}(t,r) = \int_{\mathbb{S}^{d-1}} \frac{g_\Omega (0) - g_\Omega \left(rt^{1/\al}u\right)}{rt^{1/\al}}\,
\sigma (\ud u).
\end{align}
We claim that
\begin{align}\label{claim-new}
\lim_{t \to 0^+} \int_1^{t^{-1/\al}} \calM_{\Omega}(t,r) r^{-n\al}\, \ud r = \frac{\pi^{(d-1)/2}}{\Gamma\left((d+1)/2\right)}\Per (\Omega) \frac{1}{n\al-1}.
\end{align}
Indeed, by \eqref{prop_iv} and \eqref{prop_v},
\begin{align}\label{M_om1}
0\leq \calM_{\Omega}(t,r)\leq \frac{1}{2} \Per(\Omega)\, \sigma (\mathbb{S}^{d-1})
\end{align}
and, for any $r>0$,
\begin{align}\label{M_om2}
\lim_{t\to 0^+}\calM_{\Omega}(t,r) = \frac{\pi^{(d-1)/2}}{\Gamma\left((d+1)/2\right)}\Per (\Omega).
\end{align}
Moreover,
$$
\int_1^{t^{-1/\al}} r^{-n\al}\, \ud r \leq \int_1^{\infty} r^{-n\al}\, \ud r = \frac{1}{n\al-1}
$$
and hence \eqref{claim-new} follows by the Dominated Convergence Theorem. Combining the limits of $\mathrm{I}_1$, $\mathrm{I}_2$ and $\mathrm{I}_3$ yields the assertion.
\end{proof}
\begin{theorem}\label{thm4}
Let $\al \in (0, 1)$ be such that $1/\al \in \mathbb{N}$ and let $\Omega \subset \Rd$ be an open set of finite Lebesgue measure and perimeter.
Then
\begin{align*}
\lim_{t \to 0^+} (t^{1/\al}\log(1/t))^{-1} \left( H(t) - \sum_{n=1}^{1/\al-1} \frac{(-1)^{n-1}}{n!} t^n \Per_{\nu^{(n\al)}} (\Omega) \right)
& = \frac{(-1)^{1/\alpha-1}}{(1/\alpha-1)!\pi}\Per(\Omega).
\end{align*}
\end{theorem}
\begin{proof}
By the proof of Theorem \ref{thm3},
\begin{align*}
H(t) - \sum_{n=1}^{1/\al-1} &\frac{(-1)^{n-1}}{n!} t^n \Per_{\nu^{(n\al)}} (\Omega) = \mathrm{I}_1+\mathrm{I}_2+\mathrm{I}_3,
\end{align*}
where
\begin{align*}
\mathrm{I}_1 = t^{1/\al}\int_{0}^{1} \int_{\mathbb{S}^{d-1}} r^d \, \frac{\g(0)-\g(t^{1/\al}r u)}{t^{1/\al}r} \left(p_1(re_d) - \sum_{n=1}^{1/\al-1} \frac{(-1)^{n-1}}{n!} \mathcal{A}_{d,-n\al} r^{-d-n\al} \right) \sigma(\ud u) \, \ud r,
\end{align*}
\begin{align*}
\mathrm{I}_2&= \int_{1 <|x| \leq t^{-1/\al}} \left(\g(0)-\g(t^{1/\al}x)\right) \sum_{n=1/\al}^{\infty} a_n |x|^{-n\al-d} \, \ud x,
\end{align*}
and
\begin{align*}
\mathrm{I}_3 = g_{\Omega}(0)\, \omega_{d-1} \sum_{n=1/\al}^{\infty} \frac{a_n}{n\al} t^{n}.
\end{align*}
By \eqref{prop_iv}, \eqref{prop_v} and the Dominated Convergence Theorem,
$$
\lim_{t \to 0^+} \left(t^{1/\al} \log(1/t)\right)^{-1} \mathrm{I}_1 = 0.
$$
Next,
$$
\lim_{t\to 0^+} (t^{1/\al} \log(1/t))^{-1} \mathrm{I}_3 = 0.
$$
By \eqref{prop_iv},
\begin{align*}
|\mathrm{I}_2| &\leq \frac{\Per(\Omega)}{2} t^{1/\al} \sum_{n=1/\al}^{\infty} |a_n| \int_{1<|x|\leq t^{-1/\al}} |x|^{1-n\al-d} \, \ud x \\
&= \frac{\Per(\Omega)}{2}\, \omega_{d-1} \left(\sum_{n=1/\al+1}^{\infty} |a_n|\, t^{1/\al}\, \frac{1-t^{n-1/\al}}{n\al-1} + \frac{|a_{1/\al}|}{\al}\, t^{1/\al} \log(1/t)\right) \\
&\leq \frac{\Per(\Omega)}{2}\, \omega_{d-1} \left(\sum_{n=1/\al+1}^{\infty} \frac{|a_n|}{n\al-1}\, t^{1/\al} + \frac{|a_{1/\al}|}{\al}\, t^{1/\al} \log(1/t)\right).
\end{align*}
Therefore
$$
\mathrm{I}_2 = \sum_{n=1/\al}^{\infty} a_n \int_{1 <|x| \leq t^{-1/\al}} \left(\g(0)-\g(t^{1/\al}x)\right) |x|^{-n\al-d} \, \ud x.
$$
We have
\begin{align*}
&(t^{1/\al} \log(1/t))^{-1} \int_{1 <|x| \leq t^{-1/\al}} \left(\g(0)-\g(t^{1/\al}x)\right) |x|^{-n\al-d} \, \ud x \\
&= \log(1/t)^{-1}\int_1^{t^{-1/\al}} \int_{\mathbb{S}^{d-1}} \frac{\g(0)-\g(t^{1/\al}ru)}{t^{1/\al} r} \sigma(\ud u) r^{-n\al} \ud r \\
&= \log(1/t)^{-1}\int_1^{t^{-1/\al}} \calM_{\Omega}(t,r) r^{-n\al} \ud r.
\end{align*}
We claim that
\begin{align}\label{eq:lim_log1}
\lim_{t\to 0^+} \, \log(1/t)^{-1} \int_1^{t^{-1/\al} }\!\!
\calM_{\Omega}(t,r) r^{-1} \, \ud r= \frac{\pi^{\frac{d-1}{2}}}{\al \Gamma\left(\frac{d+1}{2}\right)} \Per(\Omega),
\end{align}
where $\calM_{\Omega}(t,r)$ is defined in \eqref{M_om}.
Indeed,
by substitution,
\begin{align*}
\log(1/t)^{-1} \int_1^{t^{-1/\al}} \calM_{\Omega} (t,r)r^{-1} \, \ud r &= \log(1/t)^{-1} \int_0^{ \log(1/t)/\al} \calM_{\Omega} (t,e^r) \, \ud r
\\
&=
\int_0^{1/\al} \calM_{\Omega} (t,t^{-r}) \, \ud r,
\end{align*}
and by \eqref{prop_iv}, \eqref{prop_v} and the Dominated Convergence Theorem,
\begin{align*}
\lim_{t \to 0^+} \int_0^{1/\al} \calM_{\Omega} (t,t^{-r}) \, \ud r
&= \int_0^{1/\al} \int_{\mathbb{S}^{d-1}} \lim_{t \to 0^+} \frac{\g(0)-\g(t^{1/\al-r}u)}{t^{1/\al-r}} \sigma (\ud u) \, \ud r \\
&= \frac{\pi^{(d-1)/2}}{\al \Gamma((d+1)/2)} \Per(\Omega).
\end{align*}
We claim that for $n \geq 1/\al+1$ we have
\begin{align}\label{eq:lim_log2}
\lim_{t \to 0^+} \log(1/t)^{-1}\int_1^{t^{-1/\al}} \calM_{\Omega}(t,r) r^{-n\al} \ud r = 0.
\end{align}
Indeed, by \eqref{prop_iv} and \eqref{prop_v}, the function $\calM_{\Omega}$ satisfies the bound \eqref{M_om1} and the pointwise limit \eqref{M_om2}.
Moreover,
$$
\int_1^{t^{-1/\al}} \log(1/t)^{-1} r^{-n\al}\, \ud r \leq \int_1^{\infty} r^{-n\al}\, \ud r = \frac{1}{n\al-1}
$$
for $t<1/e$, and hence \eqref{eq:lim_log2} follows by the Dominated Convergence Theorem. Now \eqref{eq:lim_log1} and \eqref{eq:lim_log2} yield the assertion of the theorem.
\end{proof}
\subsection{Heat content for general stable operators on $\R$}
Let $\al \in (0,1) \cup (1,2)$, $\beta \in [-1,1]$ and $\gamma>0$. We consider a convolution semigroup $(p_t)_{t\geq0}$ on $\R$ with the characteristic exponent
$$
\psi(\xi) = \gamma|\xi|^{\al} \left(1-i\beta \operatorname{tg}\left(\frac{\pi\al}{2}\right) \sgn(\xi)\right), \quad \xi \in \R \,.
$$
The corresponding L\'evy measure on $\R$ is given by
$$
\nu(\ud x) = \frac{c_+ \mathds{1}_{x\geq0} + c_- \mathds{1}_{x<0}}{|x|^{1+\al}} \, \ud x,
$$
where
$$
c_+ = -\frac{1+\beta}{2\Gamma(-\al)\cos(\frac{\pi\al}{2})}\quad \mathrm{and} \quad c_- = -\frac{1-\beta}{2\Gamma(-\al)\cos(\frac{\pi\al}{2})}.
$$
Let $\Omega=(a,b) \subset \R$. We have $\g(x) = (b-a-|x|)\mathbf{1}_{[0,b-a)} (|x|)$. For $\alpha \in (0,1)$, $\Per_{\nu}(\Omega) = (c_++c_-) \al^{-1}(1-\al)^{-1} (b-a)^{1-\al}$.
We can fix the parameter $\gamma$ without loss of generality. Assume that $\gamma= \cos\left(\frac{\pi\beta\al}{2}\right)$ if $\al<1$ and $\gamma= \cos\left(\pi\beta\frac{2-\al}{2}\right)$ if $\al>1$.
Let
$$
b_n := \frac{(-1)^{n-1}}{\pi^{3/2}}\frac{\Gamma(n\al+1)}{n!} \sin\left(\pi n\al \rho\right),
$$
\vspace{0.3em}
where $\rho = \frac{1+\beta}{2}$ if $\al<1$, and $\rho = \frac{1-\beta(2-\al)/\al}{2}$ if $\al>1$.
By \cite[(2.5.1)]{MR854867}, for $\al < 1$,
\begin{equation}\label{eq:zolot1}
p_1(x) = \sum_{n=1}^{\infty} b_n x^{-n\al-1}
\end{equation}
as $x \to \infty$. By \cite[(2.5.4)]{MR854867}, for $\al>1$ and $\beta\neq-1$, and any $N \in \mathbb{N}$,
\begin{equation}\label{eq:zolot2}
p_1(x) = \sum_{n=1}^{N} b_n x^{-n\al-1} + O (x^{-(N+1)\al-1})
\end{equation}
as $x \to \infty$.
Let
$$
d_n := \frac{ (-1)^{n-1}}{\pi^{3/2}}
\frac{2\Gamma(n\al+1)}{n!} \sin\left(\frac{\pi n\al}{2}\right) \cos\left(\frac{\pi n \al \beta}{2}\right).
$$
These constants appear in the following proposition, which complements \cite[Theorem 1.1]{MR3606559}. We generalize the results for $\al<1$ to the non-symmetric case. The last result is new even in the symmetric case, since previously only the first two terms of the expansion were known.
\begin{proposition}
Let $\Omega=(a,b)$ and $|\Omega| = b-a$.
\begin{enumerate}
\item Let $0<\al<1$ and $0<t<\min \{|\Omega|^{\alpha}, e^{-1} \}$.
\begin{enumerate}
\item If $1/\al \notin \mathbb{N}$, then there is a constant $C_{\alpha}$ independent of $\Omega$ such that
\begin{align*}
H(t) &=\frac{2}{\pi} \sum_{n=1}^{\left[\frac{1}{\alpha}\right]}(-1)^{n-1} \frac{\Gamma(n\alpha)}{(1-n\alpha)n!}
\sin\left(\frac{\pi n \al}{2}\right) \cos\left(\frac{\pi n \alpha \beta}{2}\right) |\Omega|^{1-n\alpha} t^{n}+\,\,C_{\alpha}\,t^{1/\alpha} +R_{\alpha}(t),
\end{align*}
where
$C_{\alpha} = \int_0^1 \int_{w}^{\infty} \left(p_1(x)+p_1(-x)\right) \ud x \ud w - \sum_{n=1}^{\infty} \frac{d_n}{n\al(1-n\al)}$ and $|R_{\alpha}(t)| \leq c \,t^{\left[\frac{1}{\alpha} \right]+1}$.
\item If $\alpha=1/N$ for some $N \in \mathbb{N}$, then there is a constant $C_{N}(\Omega)$ such that
\begin{align*}
H(t)&= \frac{2}{\pi} \sum_{n=1}^{N-1} (-1)^{n-1}
\frac{\Gamma(n/N)}{(1-n/N)n!}
\sin\left( \frac{\pi n}{2N}\right) \cos\left(\frac{\pi n \beta}{2N}\right)
|\Omega|^{1-n/N} t^{n}
\\&\quad+(-1)^{N-1}\,\frac{2}{\pi (N-1)!}\,
\cos \left(\frac{\pi \beta}{2}\right)
t^{N} \ln\left(\frac{1}{t}\right) + C_{N}(\Omega)\,t^{N}
+R_{1/N}(t),
\end{align*}
where $C_N(\Omega) = \int_0^1 \int_{w}^{\infty} \left(p_1(x)+p_1(-x)\right) \ud x \ud w\, + \, d_N \ln(|\Omega|) - \sum_{n\neq N} \frac{d_n}{n/N(1-n/N)}$, $|R_{1/N}(t)|\leq c\, t^{N+1}$.
\end{enumerate}
\item If $1<\al<2$, $|\beta| \neq 1$,
then, for any $N \in \mathbb{N}$,
\begin{align*}
H(t) &= t^{1/\al} \int_{\R} |x|p_1(x) \ud x + \frac{2}{\pi} \sum_{n=1}^{N} (-1)^{n-1} \frac{\Gamma(n\al+1)}{n!} \sin\left(\frac{\pi n \al}{2}\right) \cos\left(\frac{\pi n
\left(\beta\frac{2-\al}{\al}\right)}{2}\right)
\frac{1}{n\al(1-n\al)} |\Omega|^{1-n\al} t^n \\
&\quad+ R_{N}(t)
\end{align*}
as $t \to 0^+$, where $|R_N(t)| \leq c t^{N+1}$.
\end{enumerate}
\end{proposition}
Note that by \cite[Proposition 1.4]{Hardin}
$$
\int_{\R} |x|p_1(x) \ud x = \frac{2}{\pi} \Gamma\left(1-\frac{1}{\al}\right) \mathrm{Re} \left(1+i\beta\operatorname{tg}\left(\frac{\pi \al}{2} \right)\right)^{1/\al}.
$$
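As a sanity check (our remark, using only the displayed formula, not taken from \cite{Hardin}): for $\beta=0$, so that $\gamma=1$ and $\psi(\xi)=|\xi|^{\al}$, the right-hand side reduces to
$$
\int_{\R} |x|p_1(x) \,\ud x = \frac{2}{\pi} \Gamma\left(1-\frac{1}{\al}\right),
$$
and formally at the boundary case $\al=2$, where $p_1$ is the density of the normal distribution with variance $2$, both sides equal $2/\sqrt{\pi}$.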
\begin{proof}
(i) can be proved analogously to \cite[Theorem 1.1]{MR3606559}, using \eqref{eq:zolot1}. We will prove (ii). We have
\begin{align*}
H(t) &= \int_{\R} \left(|\Omega| \wedge |x|\right) p_t(x) \, \ud x = \int_{\R} \left(|\Omega| \wedge t^{1/\al}|x|\right) p_1(x) \, \ud x \\
&= t^{1/\al} \int_{|x|<|\Omega|t^{-1/\al}} |x|p_1(x) \, \ud x + \int_{|x|\geq |\Omega|t^{-1/\al}} \left(|\Omega|-t^{1/\al}|x|\right) p_1(x) \, \ud x\\
& = t^{1/\al} \, \int_{\R} |x|p_1(x) \ud x +
\int_{|x|\geq |\Omega|t^{-1/\al}} \left(|\Omega|-t^{1/\al}|x|\right) p_1(x) \, \ud x.
\end{align*}
By \eqref{eq:zolot2}, for any $N \in \mathbb{N}$,
$$
p_1(x) = \sum_{n=1}^{N} b_n x^{-n\al-1} + O(x^{-(N+1)\al-1}),
$$
as $x \to \infty$.
Therefore
\begin{align*}
&\int_{|\Omega|t^{-1/\al}}^{\infty} \left(|\Omega|-t^{1/\al}x\right) p_1(x) \, \ud x \\
&= \int_{|\Omega|t^{-1/\al}}^{\infty} \left(\sum_{n=1}^{N} b_n x^{-n\al-1} + O (x^{-(N+1)\al-1})\right) \left(|\Omega|-t^{1/\al}x
\right) \ud x \\
&= \sum_{n=1}^{N} \frac{b_n}{n\al(1-n\al)} |\Omega|^{1-n\al} t^n + \int_{|\Omega|t^{-1/\al}}^{\infty} O (x^{-(N+1)\al-1}) \left(|\Omega|-t^{1/\al}x\right) \ud x.
\end{align*}
We also have
\begin{align*}
\left|\int_{|\Omega|t^{-1/\al}}^{\infty} O (x^{-(N+1)\al-1}) \left(|\Omega|-t^{1/\al}x\right) \ud x\right| &\leq C \int_{|\Omega|t^{-1/\al}}^{\infty} \left(t^{1/\al}x-|\Omega|\right) x^{-(N+1)\al-1} \, \ud x \\
& = C \frac{1}{(N+1)\al((N+1)\al-1)} |\Omega|^{1-(N+1)\al} \, t^{N+1}.
\end{align*}
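Here the last equality follows by spelling out the elementary integral: with $a = |\Omega|t^{-1/\al}$,
$$
\int_{a}^{\infty} \left(t^{1/\al}x-|\Omega|\right) x^{-(N+1)\al-1} \, \ud x
= \frac{t^{1/\al}\, a^{1-(N+1)\al}}{(N+1)\al-1} - \frac{|\Omega|\, a^{-(N+1)\al}}{(N+1)\al}
= \frac{|\Omega|^{1-(N+1)\al}\, t^{N+1}}{(N+1)\al\left((N+1)\al-1\right)}.
$$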
The calculations for $\int_{-\infty}^{-|\Omega|t^{-1/\al}} \left(|\Omega|+t^{1/\al}x\right) p_1(x) \, \ud x$ are analogous since $p_1(-x)$ corresponds to $p_1(x)$ with parameters $(\al, -\beta, \gamma)$.
\end{proof}
\bibliographystyle{abbrv}
\section{Introduction}
Financial applications in the risk industry are underpinned by large-scale stochastic simulations which are data-, memory- and compute-intensive \cite{simulation-1}. These simulations are run on a weekly, monthly or quarterly basis on production systems such as commodity clusters to generate risk metrics including Probable Maximum Loss (PML) \cite{pml1} and Tail Value-at-Risk (TVaR) \cite{tvar2} due to catastrophic events such as earthquakes, hurricanes and floods. The results obtained from these simulations are then interpreted by actuaries for key decision making and for planning a financial year.
The simulations that run on a routine basis are sufficient if the risk metrics do not have to be updated before their next run. Consider a simulation that takes into account the fluctuation of one parameter, for example, currency, on a weekly basis. The simulation is performed on a weekly basis to update the risk metrics. However, this is not the case in real-time scenarios where risk metrics will need to be obtained on an ad hoc basis before the next routine run. For example, consider real-time online pricing when an underwriter needs to close a deal with a client over the telephone.
Production systems are not the right type of infrastructure for simulating on an ad hoc basis since, firstly, they are optimised to run routine simulations and accommodate data of known sizes, and secondly, their resources are over-committed and at best fully utilised, with no scope to satisfy the data, memory and computational requirements of ad hoc simulations. Consequently, even if ad hoc simulations are performed on production clusters they tend to be slow. One solution to this problem would be to use dedicated systems for ad hoc simulations. However, this is not always possible since there is an additional investment on top of the maintenance costs of production clusters. An alternative solution to reduce the cost of investment is to use hardware accelerators as coprocessors in heterogeneous clusters \cite{hetcluster-1, hetcluster-2}. Though computation can be accelerated to suit ad hoc simulations, the memory and data requirements cannot always be satisfied. Hardware accelerators have limited memory and thereby cannot handle large data in memory.
Cloud computing infrastructure has the potential to address the above challenges. Maintenance costs can be eliminated and resources can be scaled on demand, which can satisfy the requirements of ad hoc risk simulations \cite{cloud-1, cloud-3}. In this paper, the research question, ``Are clouds ready to accelerate ad hoc financial simulations?'' is explored. One application, namely Aggregate Risk Analysis, widely employed in the risk industry, is developed and deployed on the cloud. Parallel techniques to accelerate the analysis and techniques to efficiently accommodate data and handle memory on cloud VMs are investigated. The experimental studies on the cloud indicate that the application achieves up to a 60x acceleration on VMs with hardware accelerators, although with poor cost efficiency owing to the computational time wasted per dollar spent. Nevertheless, the cloud can accommodate financial simulations.
The remainder of this paper is organised as follows. Section \ref{relatedwork} considers research related to risk applications. Section \ref{aggregateriskanalysis} presents Aggregate Risk Analysis. Section \ref{experimentalstudies} presents an experimental study of sequential and parallel implementations on cloud VMs. This paper concludes in Section \ref{conclusion} by considering future work.
\section{Related Work}
\label{relatedwork}
The domain of computational finance and risk addresses problems related to achieving fast computations, surmounting challenges of data management and efficiently handling memory of computing systems. Therefore, this domain is dependent on the advances in high-performance computing. Research on financial applications on production-based computing systems has progressed from small scale clusters \cite{smallcluster1, smallcluster2} to large supercomputers \cite{supercomputer1, supercomputer2}, and the typical problem addressed is achieving fast computations. These applications are hosted either on in-house clusters or on supercomputing infrastructures to which the owners of the application have access.
A number of financial applications are being migrated from small clusters to be hosted on multi-core processors and many-core coprocessors, which are available at low cost \cite{budgetplatform1}. Examples include research related to financial applications exploiting the Cell BE processor \cite{cellbe-1} \cite{cellbe-2}, FPGAs \cite{fpga1} \cite{fpga2} and GPUs \cite{gpu1, gpu2, gpu3, gpu5}. In all the above research, the need for speeding up financial applications is presented and speed-up is achieved. However, ad hoc analytics in financial risk is important, and is now possible with the availability of scalable on-demand VMs provided by cloud computing and the development of big data techniques. Given that the cloud is a potential high-performance computing platform to address big data problems, it is now ripe to explore risk applications in the cloud context \cite{onlinerisk}. There is limited research exploring the feasibility of accelerating and accommodating financial simulations for ad hoc analysis on the cloud. The research reported in this paper is motivated towards this end.
\section{Aggregate Risk Analysis on the Cloud}
\label{aggregateriskanalysis}
Financial applications are underpinned by large-scale simulations which are data-, memory- and compute-intensive. One such simulation is a Monte Carlo-like simulation performed on a portfolio of risk held by a reinsurer, referred to as Aggregate Risk Analysis \cite{s1, s2, s4}. This simulation provides actuaries and decision makers with millions of alternate views of catastrophic events, such as earthquakes, that can occur and the order in which they can occur in a year for portfolio risk management and real-time pricing. Millions of trials are simulated, with each trial comprising a set of possible future earthquake events, and the probable loss for each trial is estimated.
Although Aggregate Risk Analysis is an embarrassingly parallel problem, there are significant challenges in achieving efficient parallelism. One major challenge is sharing large input data between the processing cores constrained by limited memory bandwidth. Further, the challenge of accommodating input data in limited memory hardware is constrained by the complex memory architecture of accelerating hardware such as GPUs.
Large and small sized data along with metadata are required for performing Aggregate Risk Analysis. The large data required is the Year Event Table, denoted as $YET$, which contains the occurrence of earthquake events for a year. The YET is obtained from a catalogue of possible future earthquakes that is generated using earthquake models. The frequency and the physical characteristics of the potential earthquake, and the damage the earthquake will cause, are estimated by the hazard and vulnerability modules respectively of the earthquake model. The YET provides a million distinct views of potential earthquakes that can occur in a year.
Each record in a YET is called a Trial, denoted as $T_i$, which represents a possible sequence of event occurrences for any given year. The sequence of events is defined by a set of tuples containing the ID of an event and the time-stamp of its occurrence in a trial, $T_i = \{(E_{i, 1}, t_{i, 1}), \dots, (E_{i, k}, t_{i, k})\}$. The set is ordered by ascending time-stamp values. A typical YET may comprise one million trials, and each trial may have one thousand event time-stamp pairs. The YET is represented as
\begin{equation*}
\begin{array}{{l c l}}
YET & = & \{ T_i = \{(E_{i, 1}, t_{i, 1}), \dots, (E_{i, k}, t_{i, k})\} \},
\end{array}
\end{equation*}
\begin{center}
where $i = 1, 2, \dots$ and $k = 1, 2, \dots, 800, \dots, 1500$.
\end{center}
The small data required for Aggregate Risk Analysis is the Event Loss Tables, denoted as $ELT$, which represent the collection of specific events and their corresponding losses. Each record in an ELT is denoted using Event-Loss pairs $EL_{i} = \{E_{i}, l_{i}\}$ and a set of financial terms associated with the ELT $\mathcal{FT}_{1} =(\mathcal{FT}_{1_{1}}, \mathcal{FT}_{1_{2}}, \dots)$. A typical Aggregate Risk Analysis may comprise ten thousand ELTs, each containing tens of thousands of event losses with exceptions even up to a few million event losses. An ELT is represented as
\begin{equation*}
ELT=\left\{
\begin{array}{l c l}
EL_{i} & = & \{E_{i}, l_{i}\},\\
\mathcal{FT}_{1} & = & (\mathcal{FT}_{1_{1}}, \mathcal{FT}_{1_{2}}, \dots)
\end{array}\right\}
\end{equation*}
\begin{center}
where $i = 1, 2, \dots, 10,000, \dots, 30,000$.
\end{center}
The metadata is defined as a Portfolio, denoted as $PF$, which contains a group of Programs, denoted as $P$ represented as $PF = \{P_{1}, P_{2}, \cdots, P_{n}\}$, with $n = 1, 2, \dots, 10$. Each Program in turn covers a set of Layers, denoted as $L$, that cover a collection of ELTs under a set of financial terms of the Layer. A single layer $L_i$ is composed of the set of ELTs $\mathcal{E} = \{ELT_1, ELT_2, \dots, ELT_j\}$, and two set of Layer Terms, denoted as $\mathcal{FT}_{2} = (\mathcal{FT}_{2_{1}}, \mathcal{FT}_{2_{2}})$ and $\mathcal{FT}_{3} = (\mathcal{FT}_{3_{1}}, \mathcal{FT}_{3_{2}})$.
A typical Layer covers approximately three to thirty individual ELTs. The Layer can be represented as
\begin{equation*}
L=\left\{
\begin{array}{l c l}
\mathcal{E} & = & \{ELT_1, ELT_2, \dots, ELT_j\}, \\
\mathcal{FT}_{2} & = & (\mathcal{FT}_{2_{1}}, \mathcal{FT}_{2_{2}}),\\
\mathcal{FT}_{3} & = & (\mathcal{FT}_{3_{1}}, \mathcal{FT}_{3_{2}})
\end{array}\right\}
\end{equation*}
\begin{center}
where $j = 1, 2, 3, \dots, 30$.
\end{center}
The algorithm for aggregate analysis (line no. 1-17 in Algorithm \ref{algorithm1}) has two stages. In the first stage, referred to in this paper as the preprocessing stage, the input data, $YET$, $ELT$ and $PF$, is loaded into local memory.
\begin{algorithm}
\caption{Aggregate Risk Analysis}
\label{algorithm1}
\SetAlgoLined
\DontPrintSemicolon
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\BlankLine
\Input{$YET$, $ELT$, $PF$}
\Output{$YLT$}
\BlankLine
\For{each Program, $P$, in $PF$}{
\For{each Layer, $L$, in $P$}{
\For{each Trial, $T$, in $YET$}{
\For{each Event, $E$, in $T$}{
\For{each $ELT$ covered by $L$}{
Lookup $E$ in the $ELT$ and find corresponding loss, $l_{E}$\;
Apply Financial Terms $\mathcal{FT}_{1}$ to $l_{E}$\;
$l_{T} \leftarrow$ $l_{T}$ + $l_{E}$\;
}
Apply Financial Terms $\mathcal{FT}_{2}$ to $l_{T}$\;
Apply Financial Terms $\mathcal{FT}_{3}$ to $l_{T}$\;
}
}
}
}
Populate $YLT$ using $l_{T}$\;
\BlankLine
\end{algorithm}
In the second stage, the four step simulation executed for each Layer and for each trial in the YET is performed as shown below and the resulting Year Loss Table ($YLT$) is produced.
In the first step (line no. 6), for each event of a trial, the corresponding event loss in the set of ELTs associated with the Layer is determined. In the second step (line nos. 7-9), secondary uncertainty is applied to each loss value of the Event-Loss pair extracted from an ELT, and a set of contractual financial terms is applied to the benefit of the layer. For this, the losses for a specific event, net of the financial terms $\mathcal{FT}_{1}$, are accumulated across all ELTs into a single event loss (line no. 9). In the third step (line no. 11), the event loss for each event occurrence in the trial, combined across all ELTs associated with the layer, is subject to occurrence terms. In the fourth step (line no. 12), the aggregate terms are applied.
The financial terms $\mathcal{FT}_{2}$ and $\mathcal{FT}_{3}$ applied on the loss values combined across all ELTs associated with the layer are Occurrence and Aggregate terms. Two occurrence terms, namely (i) Occurrence Retention, denoted as $\mathcal{FT}_{2_{1}}$, which is the retention or deductible of the insured for an individual occurrence loss, and (ii) Occurrence Limit, denoted as $\mathcal{FT}_{2_{2}}$, which is the limit or coverage the insurer will pay for occurrence losses in excess of the retention are applied. Occurrence terms are applicable to individual event occurrences independent of any other occurrences in the trial. The occurrence terms capture specific contractual properties of ``eXcess of Loss'' \cite{excessofloss-1} treaties as they apply to individual event occurrences only. The event losses net of occurrence terms are then accumulated into a single aggregate loss for the given trial. The occurrence terms are applied as $l_{T} = \min ( \max ( l_{T} - \mathcal{FT}_{2_{1}}, 0 ), \mathcal{FT}_{2_{2}})$.
Two aggregate terms, namely (i) Aggregate Retention, denoted as $\mathcal{FT}_{3_{1}}$, which is the retention or deductible of the insured for an annual cumulative loss, and (ii) Aggregate Limit, denoted as $\mathcal{FT}_{3_{2}}$, which is the limit or coverage the insurer will pay for annual cumulative losses in excess of the aggregate retention are applied. Aggregate terms are applied to the trial's aggregate loss for a layer. Unlike occurrence terms, aggregate terms are applied to the cumulative sum of occurrence losses within a trial and thus the result depends on the sequence of prior events in the trial. This behaviour captures contractual properties as they apply to multiple event occurrences. The aggregate loss net of the aggregate terms is referred to as the trial loss or the year loss. The aggregate terms are applied as $l_{T} = \min ( \max ( l_{T} - \mathcal{FT}_{3_{1}}, 0 ), \mathcal{FT}_{3_{2}})$.
The analysis generates loss values associated with each trial of the YET which populates the Year Loss Table (YLT). Important risk metrics such as the Probable Maximum Loss (PML) and the Tail Value-at-Risk (TVaR) which are used for both internal risk management and reporting to financial regulators and rating agencies can be derived from the output of the analysis. Furthermore, these metrics flow into a final stage of the risk analytics pipeline, namely Enterprise Risk Management, where liability, asset, and other forms of risks are combined and correlated to generate an enterprise wide view of risk. Additional functions can be used to generate reports that will aid actuaries and decision makers, for example, reports presenting Return Period Losses (RPL). The simulations can also be extended for providing vital information required for disaster recovery and management.
\section{Experimental Studies}
\label{experimentalstudies}
In this section, the cloud computing platform investigated for implementing sequential and parallel Aggregate Risk Analysis is presented. The data structures chosen for representing the input and intermediate data to address memory bottlenecks and optimisations to improve the performance of the analysis are considered. An empirical study of the analysis both sequentially and in parallel on (single and multiple core) CPU and (single and multiple) GPU VMs on the cloud are presented.
\subsection{Platform}
Aggregate risk analysis was performed on public VMs available from the Amazon Elastic Compute Cloud (EC2)\footnote{\url{http://aws.amazon.com/ec2/}} and on private VMs. Table \ref{table1} shows the underlying hardware and specifications of the VMs used for this research.
The VMs offered by Amazon are referred to as instances. Five types of instances offered by Amazon, namely (i) general purpose, (ii) memory optimised, (iii) cluster compute, (iv) storage optimised, and (v) GPU instances, are used for the analysis. Only instances with at least 15 GB of RAM are used (this is a requirement for the analysis).
The general purpose instances employed are the \texttt{m1} and \texttt{m3} instances, namely \texttt{m1.xlarge}, \texttt{m3.xlarge} and \texttt{m3.2xlarge}. The \texttt{m1} instance is a VM of Intel(R) Xeon(R) CPU E5-2650 and \texttt{m3} instances are VMs abstracted over Intel(R) Xeon(R) CPU E5-2670. Each virtual CPU (vCPU) of the \texttt{m3} instances is a hardware hyperthread on the underlying processor.
The memory optimised instances employed are the \texttt{m2} and \texttt{cr1} instances, namely \texttt{m2.xlarge}, \texttt{m2.2xlarge}, \texttt{m2.4xlarge} and \texttt{cr1.8xlarge}. The \texttt{m2} instances are VMs of Intel(R) Xeon(R) CPU E5-2665 and the \texttt{cr1} instance abstracts Intel(R) Xeon(R) CPU E5-2670. Each virtual CPU (vCPU) of the \texttt{cr1} instance is a hardware hyperthread on the underlying processor.
The compute optimised instances employed are the \texttt{cc1} and \texttt{cc2} instances, namely \texttt{cc1.4xlarge} and \texttt{cc2.8xlarge}. Both \texttt{cc1} and \texttt{cc2} instances are abstractions of Intel(R) Xeon(R) CPU X5570. Each virtual CPU (vCPU) of the \texttt{cc2} instance is a hardware hyperthread on the underlying processor.
The storage optimised instances employed are the \texttt{hi1} and \texttt{hs1} instances, namely \texttt{hi1.4xlarge} and \texttt{hs1.8xlarge}. The \texttt{hi1} instance abstracts Intel(R) Xeon(R) CPU E5620 and \texttt{hs1} is a VM over the Intel(R) Xeon(R) CPU E5-2650.
The GPU instance employed is \texttt{cg1.4xlarge} backed by two Intel(R) Xeon(R) CPU X5570 and two NVIDIA Tesla M2050 GPUs. Each GPU consists of 448 processor cores and 3 GB of total global memory, yielding 2.625 GB of user available memory and a memory bandwidth of 148.4 GB/sec.
Two private VMs \texttt{vm1} and \texttt{vm2} are employed. \texttt{vm1} is backed by Intel(R) Xeon(R) E5-2620 which is relatively old hardware when compared to the underlying processor for Amazon instances. \texttt{vm2} is backed by Intel(R) Xeon(R) E5-2665 similar to the underlying CPU on the \texttt{m2} instances. \texttt{vm2} is also supported by the same GPU on the Amazon GPU instance. The vCPUs of both VMs are hardware hyperthreads on the underlying processor and the Xen hypervisor is used.
The Ubuntu 13.10 cloud image is used on all VMs. Sequential and parallel versions of Aggregate Risk Analysis were implemented. C++ was used for the sequential implementation, OpenMP was used in the parallel implementation on multiple core instances, and CUDA was used for the sequential and parallel implementations on GPU instances. On the CPU instances, both versions were compiled using the GNU Compiler Collection g++ 4.6 and optimised using `\texttt{-O3}'; the parallel implementation required \texttt{-fopenmp} during compilation for including the OpenMP directive. The NVIDIA CUDA Compiler (nvcc 5.0) was used for compiling on the GPU instances. Message Passing Interface (MPI) was employed and added nearly 10\% to the execution times and was also found to lower the efficiency of the VMs; hence those results are not presented. However, for a multiple GPU instance a combination of MPI to communicate between the instances, OpenMP for exploiting parallelism of the virtual cores on each instance and CUDA programming for exploiting the GPU were employed.
\begin{table}
\begin{center}
\begin{tabular}{p{1.5cm} p{0.8cm} p{0.8cm} p{2.4cm} p{0.8cm} }
\hline
\textbf{Instance Type} & \textbf{No. of Virtual CPUs (VCPU)} & \textbf{Memory (GiB)} & \textbf{Processor Type} & \textbf{Clock Speed (GHz)}\\
\hline
\multicolumn{5}{c}{\emph{Amazon cloud VMs}}\\
\hline
\texttt{m1.xlarge} & 4 & 15.0 & Intel Xeon E5-2650 & 2.00 \\
\hline
\texttt{m2.xlarge} & 2 & 17.1 & Intel Xeon E5-2665 & 2.40 \\
\texttt{m2.2xlarge} & 4 & 34.2 & Intel Xeon E5-2665 & 2.40 \\
\texttt{m2.4xlarge} & 8 & 68.4 & Intel Xeon E5-2665 & 2.40 \\
\hline
\texttt{m3.xlarge} & 4 & 15.0 & Intel Xeon E5-2670 & 2.60 \\
\texttt{m3.2xlarge} & 8 & 30.0 & Intel Xeon E5-2670 & 2.60 \\
\hline
\texttt{cr1.8xlarge} & 32 & 244.0 & Intel Xeon E5-2670 & 2.60\\
\hline
\texttt{cc1.4xlarge} & 16 & 23.0 & Intel Xeon X5570 & 2.93\\
\texttt{cc2.8xlarge} & 32 & 60.5 & Intel Xeon X5570 & 2.93\\
\hline
\texttt{hi1.4xlarge} & 16 & 60.5 & Intel Xeon E5620 & 2.40\\
\texttt{hs1.8xlarge} & 16 & 117.0 & Intel Xeon E5-2650 & 2.00\\
\hline
\multirow{2}{*}{\texttt{cg1.4xlarge}} & 16 & 22.5 & Intel Xeon X5570 & 2.93 \\
& 448 & 3.0 & NVIDIA Tesla M2050 & 0.575 \\
\hline
\multicolumn{5}{c}{\emph{Private VMs}}\\
\hline
\texttt{vm1} & 12 & 128.0 & Intel Xeon E5-2620 & 2.00 \\
\hline
\multirow{2}{*}{\texttt{vm2}} & 16 & 64.0 & Intel Xeon E5-2665 & 2.40 \\
& 448 & 3.0 & NVIDIA Tesla M2050 & 0.575 \\
\hline
\end{tabular}
\caption{VMs employed for Aggregate Risk Analysis}
\label{table1}
\end{center}
\end{table}
\subsection{Implementation on the Cloud}
In all the implementations each trial in the YET is executed using a single thread. All data required for the analysis is available as an Amazon Elastic Block Storage (EBS)\footnote{\url{http://aws.amazon.com/ebs/}} volume which is attached onto an instance. Nearly fifteen hours of continuous transfer were required to move the data onto the EBS volume.
\begin{figure}[t]
\centering
\subfloat[Time taken on VMs]{\label{graphset0-1}\includegraphics[width=0.49\textwidth]{graphset0-1a.png}} \\%\hspace{12pt}
\subfloat[Time taken when number of Trials are varied on the fastest public and private VMs]{\label{graphset0-2}\includegraphics[width=0.49\textwidth]{graphset0-2a.png}}
\caption{Sequential performance of Aggregate Risk Analysis on VMs}
\label{graphset0}
\end{figure}
One important factor for obtaining good performance in the Aggregate Risk Analysis algorithm is the selection of a data structure for representing ELTs. The ELTs function as dictionaries with key-value pairs requiring fast random key lookup. The sparse ELTs covered by a Layer were represented using direct access tables. Although fast lookups are obtained, this comes at the expense of high memory utilisation. If a YET consists of 1,000,000 events and an ELT consists of 10,000 event-loss pairs, then the direct access table would contain 1,000,000 event entries of which 990,000 events would have zero loss values. Considering that one Layer would cover fifteen ELTs in a typical analysis, 15 million event-loss pairs need to be generated in the memory of the instance, of which 14,850,000 events have zero loss values.
Nevertheless a direct access table was employed in all implementations. Alternate compact representations were not chosen for the following reasons: (i) A search operation is required to find an event-loss pair in a compact representation. Sequential and binary search require $O(n)$ and $O(\log n)$ memory accesses respectively to locate an event-loss pair. Even if a constant-time space-efficient hashing scheme requiring a constant number of memory accesses is adopted, there is considerable implementation and run-time performance complexity. This overhead would be high on GPUs with complex memory hierarchies consisting of global and shared memories. (ii) To perform aggregate analysis on a YET of one million trials, each trial comprising one thousand events, and for a layer covering fifteen ELTs, there are fifteen billion event-loss pairs. Direct access tables, although they require large memory space, allow for the least number of memory accesses, as each lookup in an ELT requires only one memory access per search operation.
Two data structure implementations of the ELTs were considered. In the first implementation, each ELT is an independent table, and therefore, in a read cycle, each thread independently looks up its events from the ELTs. All threads within a block access the same ELT. By contrast, in the second implementation, the ELTs are combined as a single table. Consequently, the threads then use the shared memory to load entire rows of the combined ELTs at a time. The latter implementation performs poorly compared to the former since for the threads to collectively load from the combined ELT each thread must first write which event it requires. This results in additional memory overheads.
On the CPU instances offering multiple virtual cores the entire data required for the analysis is processed in memory. The GPU implementation uses the GPU's global memory to store all of the required data structures. The parallel implementation on the GPU requires a high number of memory transactions, which leads to inefficient performance on the GPU platform. To surmount this challenge shared memory can be utilised over global memory.
The algorithm is optimised in the following four ways. Firstly, by chunking, which refers to processing a block of events of fixed size (or chunk size) for the efficient use of shared memory.
Chunking is more beneficial in the GPU implementation than in the CPU implementations. In the case of the GPU implementation looking up events in a trial and applying financial terms to losses at the Event and Layer level are chunked. Further, the financial and layer terms are stored in the streaming multi-processor's constant memory.
If the intermediate losses are represented in global memory, then while applying the financial terms at the Event and Layer level would require the global memory to be accessed and updated adding considerable memory overheads. The memory overhead is minimised by chunking when (i) the financial terms are applied, and (ii) reading events in a trial from the YET. Chunking reduces the number of global memory update and global read operations. Moreover, the benefits of data striding can also be used to improve speed-up.
Secondly, the implementations are optimised by loop unrolling, which refers to the replication by the compiler of blocks of code included within for loops so as to reduce the number of iterations performed by the for loop. This is done using the pragma directive.
Thirdly, the implementations on the CPU and GPU are optimised by using single precision operations when possible. Read operations are faster using float variables as they are only half the size of double variables, and the performance of single precision operations tends to be approximately twice that of double precision operations.
Fourthly, in the case of the GPU, a further optimisation can be achieved by migrating data from both shared and global memory to registers, which have the lowest latency of all the forms of memory available in the GPU architecture.
\subsection{Empirical Analysis}
The results obtained from the experimental studies are presented in this section. All data required for the analysis is stored as an EBS volume and attached onto the instances considered in Table \ref{table1}. Figure \ref{graphset0} to Figure \ref{graphset3} are results obtained on CPU instances; the multi-core architecture of the instances are exploited in the parallel implementation. Figure \ref{graph4} and Figure \ref{graphset5} are results obtained on the GPU instance; both single and multiple GPUs are exploited in the parallel implementation. In all experiments, the analysis uses as input a YET comprising one million trials, with each trial consisting of one thousand catastrophic events, and one Portfolio with one Program comprising one Layer covering sixteen ELTs. The input parameters are realistic and were chosen based on industry-wide practices.
\begin{figure*}
\centering
\subfloat[Two virtual core public instance]{\label{graphset1-1}\includegraphics[width=0.33\textwidth]{graphset1-1.png}} \hfill
\subfloat[Four virtual core public instances]{\label{graphset1-2}\includegraphics[width=0.33\textwidth]{graphset1-2.png}} \hfill
\subfloat[Eight virtual core public instances]{\label{graphset1-3}\includegraphics[width=0.33\textwidth]{graphset1-3.png}} \\
\subfloat[Sixteen virtual core public instances]{\label{graphset1-4}\includegraphics[width=0.33\textwidth]{graphset1-4.png}} \hfill
\subfloat[Thirty two virtual core public instances]{\label{graphset1-5}\includegraphics[width=0.33\textwidth]{graphset1-5.png}} \hfill
\subfloat[Private VMs]{\label{graphset1-6}\includegraphics[width=0.33\textwidth]{graphset1-6a.png}}
\caption{Parallel execution (single thread per virtual core) of Aggregate Risk Analysis on public (a) to (e) and private (f) VMs}
\label{graphset1}
\end{figure*}
\subsubsection{Results from CPU instances}
Figure \ref{graphset0-1} shows the time taken for performing the aggregate risk analysis sequentially on all instances shown in Table \ref{table1}. Among the general purpose instances, \texttt{m1} is the slowest for performing the analysis, requiring 565 seconds; the \texttt{m3} instance is over 37\% faster than the \texttt{m1} instance. The memory optimised instance \texttt{cr1} is the fastest for performing the analysis, requiring 295 seconds, which is 37\% faster than the memory optimised \texttt{m2} instances. The difference in the performance obtained on storage optimised instances is just over 20\%. Cluster instances \texttt{cc1} and \texttt{cc2} perform comparably to the \texttt{cg1} instance. The fastest sequential CPU performance on the cloud requires less than five minutes, which is nearly 50\% faster than the slowest sequential performance on the cloud. Private VMs \texttt{vm1} and \texttt{vm2} have surprisingly good performance. \texttt{vm1} takes only 340 seconds, which is nearly 40\% faster than the \texttt{m1} instances. The \texttt{vm2} VM completes the analysis in 288 seconds, which is over 2\% faster than the best performance on Amazon. Figure \ref{graphset0-2} shows the increase in the total time taken for the fastest sequential analysis on the cloud when the number of trials is varied between two hundred thousand and one million.
The parallel implementation of the analysis on the CPU requires multiple threads to be executed on the instance which can be done in two ways. Firstly, by executing a single thread per virtual core, and secondly, by executing multiple threads per core.
Figure \ref{graphset1} shows the graphs obtained from the parallel implementation of the analysis when one thread is executed per virtual core on the instance. The graphs are organised based on the number of virtual cores on the instance. The instance with two virtual cores obtains nearly 96\% efficiency when two threads are employed (Figure \ref{graphset1-1}). Instances with four virtual cores obtain up to 87.5\% efficiency (Figure \ref{graphset1-2}). The two instances with eight virtual cores have an average efficiency of over 70\% (Figure \ref{graphset1-3}).
The storage optimised, \texttt{cc1} and \texttt{cg1} instances, each with sixteen cores, exhibit very little speedup and efficiency beyond eight cores (Figure \ref{graphset1-4}). Surprisingly, the expected hardware acceleration is not obtained. Beyond eight cores, it would seem that the virtualised hardware on the sixteen core VMs does not benefit the analysis. Another reason is that as the number of cores is increased, the memory bandwidth is not increased proportionally, which is a limiting factor.
Similarly, in the case of the thirty two core instances, no acceleration is obtained beyond sixteen cores (Figure \ref{graphset1-5}). The fastest parallel execution on the CPU is obtained on the cluster compute instance \texttt{cc2.8xlarge}, taking 27 seconds with a speedup of nearly 11x over the fastest sequential implementation. The performance of \texttt{cr1.8xlarge} is second to the \texttt{cc2} instance, requiring 40 seconds when multiple threads are employed, even though it performs the sequential analysis the fastest.
The private VMs again outperform the public instances (Figure \ref{graphset1-6}). \texttt{vm1} takes 44 seconds achieving a speedup of 7.5x over its sequential performance and \texttt{vm2} takes 22 seconds achieving a speedup of 13x over its sequential performance.
Figure \ref{graphset2} shows the graphs obtained from the parallel implementation of the analysis when multiple threads are executed on the instances. In all cases, multiple threads per Amazon core do not provide any acceleration for the analysis. Increasing the number of threads per core results in an increase in the communication cost between threads. The private VMs \texttt{vm1} and \texttt{vm2} achieve a speedup of 9\% and 5\% respectively when multiple threads are employed per virtual core.
\begin{figure*}
\centering
\subfloat[Two virtual core public instance]{\label{graphset2-1}\includegraphics[width=0.33\textwidth]{graphset2-1.png}} \hfill
\subfloat[Four virtual core public instances]{\label{graphset2-2}\includegraphics[width=0.33\textwidth]{graphset2-2.png}} \hfill
\subfloat[Eight virtual core public instances]{\label{graphset2-3}\includegraphics[width=0.33\textwidth]{graphset2-3.png}} \\
\subfloat[Sixteen virtual core public instances]{\label{graphset2-4}\includegraphics[width=0.33\textwidth]{graphset2-4.png}} \hfill
\subfloat[Thirty two virtual core public instances]{\label{graphset2-5}\includegraphics[width=0.33\textwidth]{graphset2-5.png}} \hfill
\subfloat[Private VMs]{\label{graphset2-6}\includegraphics[width=0.33\textwidth]{graphset2-6a.png}}
\caption{Parallel execution (multiple threads per virtual core) of Aggregate Risk Analysis on public (a) to (e) and private (f) VMs}
\label{graphset2}
\end{figure*}
Figure \ref{graphset3-1} shows the best time taken for performing parallel aggregate risk analysis on all instances shown in Table \ref{table1}. Among the general purpose instances, although the \texttt{m1} instance is the slowest for the sequential analysis, for the parallel analysis the \texttt{m2.xlarge} is the slowest, requiring 240 seconds. Virtual core acceleration is achieved on the \texttt{m1} instance, which is over 1.5x faster than \texttt{m2.xlarge}. The \texttt{m3.2xlarge} is nearly 2 times faster than \texttt{m3.xlarge} and \texttt{m1.xlarge}. The cluster instance \texttt{cc2} followed by \texttt{cr1} are the fastest, requiring 27 seconds and 40 seconds respectively. Hence, up to a 21x speedup is obtained for the parallel analysis by exploiting the multi-core architecture over the sequential analysis, and up to a 9x speedup over the slowest parallel analysis. Again, private VMs outperform public instances. The best performance of \texttt{vm2} is 21 seconds using multiple threads, which is 22\% faster than the best performance achieved by the \texttt{cc2} instance. Similarly, \texttt{vm1} takes 38 seconds for the analysis on multiple threads, which is 5\% faster than the second best public performance, by the \texttt{cr1} instances.
Figure \ref{graphset3-2} shows the increase in the total time taken for the fastest parallel analysis on the cloud when the number of trials is varied between two hundred thousand and one million.
\begin{figure}[t]
\centering
\subfloat[Time taken on VMs]{\label{graphset3-1}\includegraphics[width=0.49\textwidth]{graphset3-1a.png}}\\
\subfloat[Time taken when number of Trials are varied on the fastest public and private VMs]{\label{graphset3-2}\includegraphics[width=0.49\textwidth]{graphset3-2a.png}} \\
\caption{Parallel execution of Aggregate Risk Analysis on public and private VMs}
\label{graphset3}
\end{figure}
\subsubsection{Results from GPU instance}
Single and multiple GPU instances (\texttt{cg1.4xlarge}) are considered for risk analysis on the cloud. CUDA provides an abstraction over the streaming multi-processors of the GPU, referred to as a CUDA block. Unlike the parallel implementations on the CPU instances, an additional parameter that needs to be considered in the GPU implementations is the number of threads executed per CUDA block. To represent 1,000,000 trials of the analysis on the GPU instance, consider that each trial is executed on one thread. If 128 threads are executed per block there will be approximately 7813 blocks, which need to be executed on 14 streaming multi-processors; each streaming multi-processor will therefore need to execute approximately 558 blocks. All threads executed on a streaming multi-processor share fixed allocations of shared and constant memory. Therefore, there is a trade-off for optimal performance; each thread can access larger amounts of shared and constant memory if there are fewer threads per block, but then global memory will need to be accessed more often, resulting in increased global memory overheads.
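The block counts quoted above follow from simple arithmetic; a quick sketch of the calculation, using the numbers in the text (one thread per trial, 128 threads per CUDA block, 14 streaming multi-processors):

```python
import math

# Back-of-the-envelope check of the block counts quoted above: one thread
# per trial, 128 threads per CUDA block, 14 streaming multi-processors.
trials = 1_000_000
threads_per_block = 128
num_sms = 14

blocks = math.ceil(trials / threads_per_block)   # 7813 blocks in total
blocks_per_sm = blocks // num_sms                # ~558 blocks per multi-processor

print(blocks, blocks_per_sm)
```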
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{graph4a.png}
\caption{Execution of Aggregate Risk Analysis on a single GPU on public and private VMs}
\label{graph4}
\end{figure}
Figure \ref{graph4} shows the time taken for parallel risk analysis on a single GPU instance when the number of threads per CUDA block is varied between 1 and 512. At least 16 threads per block are required to achieve performance comparable to the best parallel implementation on the CPU, which is noted on the \texttt{cc2.8xlarge} instance. To exploit the full potential of hardware acceleration offered by GPUs, at least 64 threads are required. An improvement in performance is observed up to 256 threads per block, beyond which performance starts to diminish. The best time for performing the analysis on a single GPU is 19.93 seconds, which is around 15x faster than the best sequential performance on the CPU, nearly 1.4x faster than the best multiple core performance on the CPU, and over 6x faster than the sequential performance on a GPU. On the private VM, \texttt{vm2} takes only 16.86 seconds, which is nearly 16\% faster than the GPU on the public instance.
The performance of the analysis on multiple GPU instances is shown in Figure \ref{graphset5-1}. In the multiple GPU implementation the workload for the analysis is decomposed and distributed over the instances that employ GPUs; each CPU thread schedules the workload on the GPUs. The time taken for the analysis on four Amazon GPU instances is 5.024 seconds, which is approximately 3.97 times faster than employing a single GPU, with close to 97\% efficiency. Compared to the sequential implementation on a single GPU, a speedup of over 24x is achieved when multiple GPU instances are used. On the other hand, \texttt{vm2} takes only 4.238 seconds, which is 16\% faster than the multiple GPUs on the public instance.
Figure \ref{graphset5-2} shows the performance of the analysis on four GPUs when the number of threads per block is varied from 16 to 64. Experiments could not be pursued beyond 64 threads per block due to the limitation on the block size imposed by shared memory. The best performance of 5.024 seconds on the public VM and 4.2375 seconds on the private VM is achieved when the number of threads per block is 32; the block size is then the same as the warp size of the GPU, so an entire block of threads can be swapped out when high latency operations occur. Increasing the number of threads per block does not improve performance since shared memory overflows.
\begin{figure}
\centering
\subfloat[Performance of multiple GPUs]{\label{graphset5-1}\includegraphics[width=0.49\textwidth]{graphset5-1a.png}} \hspace{12pt}
\subfloat[Performance on four GPUs for varying threads per block]{\label{graphset5-2}\includegraphics[width=0.49\textwidth]{graphset5-2a.png}} \\
\caption{Aggregate Risk Analysis on public and private VMs with multiple GPUs}
\label{graphset5}
\end{figure}
\subsection{Summary}
The experimental studies indicate that the public clouds are a suitable platform for accommodating risk simulations. The data, memory and computational requirements can be met on the cloud VMs. Risk simulation can be accelerated on the public cloud, although the simulations do not scale well over the virtual cores of the instances; for example, on the thirty two core instances no acceleration is achieved beyond sixteen cores. This results in wasted computational time per dollar spent on the simulation. Hence, maximum performance cannot be achieved and the potential of the public cloud is not fully exploited. Nevertheless, a 60x speedup is achieved on public instances over a baseline implementation. Interestingly, the private VMs are faster than the public instances: the sequential CPU implementation, the parallel CPU implementation and the parallel GPU implementation on private VMs are up to 40\%, 22\% and 16\% faster, respectively, than the best performance achieved on public instances.
\section{Conclusions}
\label{conclusion}
The cloud is a potential platform for performing ad hoc risk simulations important to financial applications. Scalable and on-demand resources of the cloud are attractive for running ad hoc simulations and for meeting their data, memory and computational requirements. The research reported in this paper was motivated by the question of whether clouds are ready to accelerate financial simulations, which was investigated experimentally. A typical application employed in the financial industry, namely Aggregate Risk Analysis, was developed and deployed on a variety of cloud VMs. The implementation exploited parallelism to accelerate the application and efficient management of data and memory to accommodate it on the cloud. The experimental results indicate that the application can be accommodated on the cloud and that an acceleration of up to 60x over a baseline implementation can be obtained with hardware accelerators on the cloud. Nevertheless, the poor efficiency of the acceleration highlights the inability to harness the full potential of all available compute cores, resulting in wasted computational performance. It is noted that the private VMs perform better than the public VMs.
Migrating financial applications onto the cloud is viable since the cloud provides a suitable platform to accommodate the computational, data and memory demands of ad hoc simulations. This is of significant benefit to the financial industry as well as its associated industries since the scalability and availability of resources on an on-demand basis reduce maintenance costs. However, while acceleration was achieved for the simulation, in our experience it could not be run most efficiently on the public cloud since there was wasted computational time for every dollar spent.
\section{Introduction}
The integrate and fire (IF) model is well known in computational
neuroscience as a simplified model of a neuron \cite{Gerstner:2002p4013, Rieke:1997} and is typically used to study the dynamics
of large populations. The model consists of a leaky integrator followed by a
comparator. The leak corresponds to a gradual loss of the value of the
integral.
More recently, the IF model has also been considered as a sampler
\cite{Chen:0p2586,Wei:2005p2180,Lazar:2005p3764,Lazar:2003p1620}, where the
sampler output is tuned to the variation of the integral of the signal. This
feature can be exploited when sampling neural recordings, for which relevant
information is localized in small intervals where the signal has a high
amplitude \cite{singh09}.
The block diagram of the sampler is presented in Figure \ref{fig:IF_diagram}.
At every instant $s$, the continuous input $x(t)$ is integrated against an
averaging function $u_{k,s}(t)$ and the result is compared to a positive
and negative threshold. When either of these is reached, a pulse is created
at time $t_{k}=s$ representing the threshold value (positive or negative),
the value of the integrator is then reset and the process repeats.
\begin{figure}[ht]
\begin{center}
\includegraphics[angle = -90, width=0.4\textwidth]{BIF_diag.eps}
\caption{Block diagram for the BIF model.}
\label{fig:IF_diagram}
\end{center}
\end{figure}
The output is a nonuniformly spaced pulse train, where each of the pulses is
either 1 or -1. The averaging function $u_{k,s}(t)$ is defined by
$e^{\alpha(t-s)}\mathcal{X}_{[t_{k}, s]}$, where $\mathcal{X}_{I}$ is the
characteristic function of $I$ and $\alpha > 0$ is a constant that models the
leakage of the integrator due to practical implementations. The precise
firing condition determining the pulses is:
\begin{equation}
\label{uk}
\pm \theta = \int_{t_{k}}^{t_{k+1}} f(t)
e^{\alpha(t-t_{k+1})} dt =: \inner{f}{u_k}.
\end{equation}
The simplicity of the sampler translates into an efficient hardware
implementation which saves both power and area when compared to conventional
analog-to-digital converters (ADC) \cite{Chen:0p2586}. These constraints are
severe in the case of wireless brain machine interfaces
\cite{Sanchez:2008}, for which the entire system has to be embedded inside
the subject. Hence, the IF sampler allows us to move the complexity of the
design into the reconstruction algorithm while providing a simple front end
at the sampling stage.
The problem of reconstructing a signal from the IF output should be
distinguished from the study of the dynamics of a population of neurons,
when some stochastic assumption is made on the firing parameters
\cite{lazar09}. In this article we study the deterministic reconstruction of
a bandlimited signal from the integrate and fire output. Part of the
challenge stems from the fact that the sampling map that associates a
function with its samples is non-linear. Indeed, we see from Equation
\eqref{uk} that the magnitude of the samples is always $\theta$. Moreover,
exact reconstruction for the IF sampler is impossible since the output of
the sampler does not completely determine the signal (see Example
\ref{no_fire} below).
In this article, we will show that it is however possible to approximately
reconstruct a bandlimited signal in $L^\infty$ norm with an error comparable
to the threshold $\theta$. Moreover, we give a concrete reconstruction
procedure which is of course non-linear but, nevertheless, easy to implement.
Since in many situations the IF sampler is so much more convenient
to implement than conventional analog-to-digital converters, the loss of
accuracy in the reconstruction is a very reasonable trade-off
\cite{Chen:0p2586}, especially if the final analysis of the reconstructed
data tolerates some small error \cite{singh09}.
The methods considered so far \cite{Lazar:2003p1620} reconstruct the signal
$f$ from the system of equations $\inner{f}{u_k} = \pm \theta$ (cf. Equation
\eqref{uk}), thus treating the reconstruction as a (linear) average sampling
problem (see \cite{fegr94, al02, suzh02-4, sc03}). These approaches impose
density restrictions on the set of sampling functions $\sett{u_k}_k$
(cf. Equation \eqref{uk}.)
Since these sampling functions depend on the signal, the density constraints
on them are somehow unnatural.
The key for the reconstruction method that we develop lies in the
observation that the information derived from the IF output is much richer
than the mere system of equations $\inner{f}{u_k} = \pm \theta$. It also
contains the information
that no proper subinterval $[t_k, t']$ of $[t_k,t_{k+1}]$ satisfies Equation
\eqref{uk}. We will exploit this extra information to give an approximate
reconstruction procedure for a general bandlimited function. Since the
sampling process starts at a certain instant $t_0$, an
additional assumption on the size of $f$ before $t_0$ is required in order to
fully reconstruct $f$. Roughly speaking, the assumption means that the
sampling scheme would not have produced any pulse before $t_0$.
In Section \ref{sec_problem} we formally describe the output of the IF
sampling scheme. This output depends on an initial time $t_0$ when the
process is started and two parameters: the threshold $\theta$ and the
constant $\alpha>0$ modeling the leakage on the sampler. In Section
\ref{sec_output} we show that the IF output is always a finite sequence.
Section \ref{sec_rec} gives the approximate reconstruction procedure and
Section \ref{sec_num} presents some numerical experiments.
\section{The integrate and fire sampling problem}
\label{sec_problem}
We now define precisely the integrate and fire sampling scheme. Throughout
the article we will assume the following.
\begin{assum}
\label{assum_1}
A bandlimited function $f \in \ensuremath{PW_\Omega}$ and numbers $t_0 \in {\mathbb R}$, $\alpha,
\theta>0$ are given.
\end{assum}
Here, $\ensuremath{PW_\Omega}$ is the Paley-Wiener space
\[
\ensuremath{PW_\Omega} := \set{f \in
L^2({\mathbb R})}{\supp(\hat{f}) \subseteq [-\Omega,\Omega]},
\]
of (complex-valued) bandlimited functions
and $\hat{f}(w) := \int_{\mathbb R} f(x) e^{-2\pi i w x} dx$ is the Fourier
transform of $f$. We call $t_0$ the \emph{initial time}, $\alpha$ the
\emph{firing
parameter}
and $\theta$ the \emph{threshold}. Using these parameters we formally define
the output of the sampler. We first define recursively a finite or countable
sequence $t_0 < \ldots < t_j \ldots$ called \emph{the time instants}. Suppose
that the instants $t_0 < \ldots < t_j$ have already been defined and consider
the function $F_j:[t_j,+\infty) \to \bC$ given by
\[
F_j(t) := \int_{t_j}^t f(x) e^{\alpha(x-t)} dx.
\]
Observe that $F_j$ is continuous and $F_j(t_j)=0$. If
$\abs{F_j(t)} < \theta$, for all $t \geq t_j$, then the process stops.
If $\abs{F_j(t)} \geq \theta$, for some $t \geq t_j$, by the continuity of
$F_j$, we can define $t_{j+1}$
as the minimum number satisfying the equation
\begin{equation}
\label{fire_tj}
\abs{\int_{t_j}^{t_{j+1}} f(x) e^{\alpha(x-{t_{j+1}})} dx} = \theta.
\end{equation}
Clearly, in this case $t_{j+1} > t_j$.
We have defined a finite or countable sequence of points $t_0 <
\ldots < t_j \ldots$. We will prove in Proposition \ref{samples_interval}
that this sequence is in fact finite. Denote the time instants by $\sett{t_0, \ldots, t_n}$ and define the \emph{samples} $\sett{q_1, \ldots, q_n}$ by,
\begin{equation}
\label{qj}
q_j := \int_{t_{j-1}}^{t_j} f(x) e^{\alpha(x-t_j)} dx,
\qquad (1 \leq j \leq n).
\end{equation}
Observe that, by the definition of the time instants, $\abs{q_j} = \theta$.
The output of the sampler is formally given by the time instants $\sett{t_0,
\ldots, t_n}$ and the numbers
$\sett{q_1, \ldots, q_n}$. We say that this output has been produced by the
\emph{integrate and fire}.
The succeeding results apply generally to complex-valued functions, but in the case of the application that motivated this sampling scheme, the
signal is taken to be real-valued, and
the output of the sampler is encoded as a train of
impulses, where only the sign of the samples $q_j$ is stored.
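For concreteness, the sampling process just defined can be simulated in discrete time. The following sketch is ours, not part of the formal scheme: it approximates the integral in Equation \eqref{fire_tj} by a left Riemann sum with step \texttt{dt}, and all names are hypothetical.

```python
import math

# Discrete-time sketch of the IF sampling scheme (our simulation, not the
# formal definition): the firing integral is approximated by a left
# Riemann sum with step dt.
def if_sample(f, t0, t_end, theta, alpha, dt=1e-4):
    """Return the time instants t_0,...,t_n and the samples q_1,...,q_n."""
    instants, samples = [t0], []
    acc, t = 0.0, t0
    while t < t_end:
        # leaky integration: the previous value decays by e^{-alpha*dt}
        acc = acc * math.exp(-alpha * dt) + f(t) * dt
        t += dt
        if abs(acc) >= theta:          # firing condition |F_j(t)| >= theta
            instants.append(t)
            samples.append(theta if acc > 0 else -theta)
            acc = 0.0                  # reset the integrator
    return instants, samples

# A constant signal fires at (roughly) equally spaced instants.
ts, qs = if_sample(lambda t: 1.0, 0.0, 1.0, theta=0.1, alpha=1.0)
```

For a constant positive signal all samples equal $+\theta$ and the firing instants are roughly equally spaced, as expected from Equation \eqref{fire_tj}.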
\section{Some remarks on the IF output}
First we note that bandlimited functions are not completely determined by
the output of the IF sampler.
\begin{example}
\label{no_fire}
There are non-zero bandlimited signals that will never
produce an output from the sampler. Take for
instance $f_{\theta}(x)=\frac{\theta\,\sin^2(\pi\,x)}{2\pi^2\,x^2}$.
Since
$\hat{f_{\theta}}(\omega)=\frac{\theta}{2}\max\{1-|\omega|,0\}$, $f_{\theta}$
is bandlimited. We have, for any $t_0\in\mathbb{R}$ and all $t\geq t_0$,
\[
\left|\int_{t_0}^{t}f_{\theta}(x)\,e^{\alpha(x-t)}dx\right|
\leq\int_{t_0}^{t}\left|\frac{\theta\,\sin^2(\pi\,x)}{2\pi^2\,x^2}\right|dx
\leq\frac{\theta}{2}\int_{\mathbb{R}}\frac{\sin^2(\pi\,x)}{\pi^2\,x^2}\,dx
=\frac{\theta}{2}<\theta.
\]
\end{example}
\label{sec_output}
We now prove that the set of time instants produced by the IF sampler is
indeed finite and give some
bounds on its distribution. To this end we introduce some auxiliary functions
that will be used throughout the remainder of the article.
Consider the function $g: {\mathbb R} \to {\mathbb R}$ given by $g(x) := e^{-\alpha
x}\chi_{[0, \infty)}(x)$ and define,
\begin{align}
\label{deff_v}
v(t) := (f*g)(t) := \int_{-\infty}^t f(x) e^{\alpha(x-t)} dx.
\end{align}
Since $g \in L^1({\mathbb R})$, $v \in \ensuremath{PW_\Omega}$. In the Fourier domain, $v$ and $f$ are
related by
\begin{align}
\label{fv_fourier}
\hat{f}(w) = \left(2 \pi i w + \alpha\right) \hat{v}(w).
\end{align}
In the time domain, this can be expressed as
\begin{align}
\label{fv_time}
f(t) = v'(t) + \alpha v(t).
\end{align}
\begin{obs}
\label{v_is_c0}
The function $v$ is continuous and $v(t) \longrightarrow 0$, when $t
\longrightarrow \pm \infty$.
\end{obs}
\begin{proof}
We have already observed that $v \in \ensuremath{PW_\Omega}$. Since $\hat{v} \in L^2$ and
$\supp(\hat{v}) \subseteq [-\Omega, \Omega]$, we have that $\hat{v} \in
L^1$ and the conclusion follows from the Riemann-Lebesgue Lemma.
\end{proof}
The following straightforward equation relates $v$ to the integrate and fire
process.
\begin{equation}
\label{v_if}
\int_s^t f(x) e^{\alpha(x-t)} dx = v(t) - e^{\alpha(s-t)} v(s),
\quad s \leq t.
\end{equation}
We can now prove that the output of the IF process is finite.
\begin{prop}
\label{samples_interval}
Under Assumption \ref{assum_1}, the following holds.
\begin{itemize}
\item[(a)] The set of time instants produced by the integrate and fire
scheme is a finite set
$\sett{t_0, \ldots, t_n}$.
\item[(b)] The number of time instants $t_j$ in a given finite interval
$[a,b]$ is bounded by
\[
\frac{\norm{f}_2}{\theta} (b-a)^{1/2} +1.
\]
\item[(c)] If $f$ is integrable, the total number of time instants is
bounded by
\[
\frac{\norm{f}_1}{\theta}+1.
\]
\end{itemize}
\end{prop}
\begin{proof}
We first prove (b) and (c). Let $[a,b]$ be an interval and let $\sett{t_j,
\ldots, t_{j+m-1}}$ be $m$ consecutive time instants contained in $[a,b]$. If
$m \leq 1$ the bound is trivial, so assume that $m \geq 2$. For each $0 \leq
k \leq m-2$, using Equation \eqref{fire_tj}
we have,
\begin{align*}
\theta &= \abs{\int_{t_{j+k}}^{t_{j+k+1}} f(x) e^{\alpha(x-t_{j+k+1})} dx}
\\
&\leq \int_{t_{j+k}}^{t_{j+k+1}} \abs{f(x)} dx.
\end{align*}
Summing over the $m-1$ intervals determined by the points $\sett{t_j, \ldots,
t_{j+m-1}}$ we have,
\begin{align}
\label{estimated_number_of_points}
(m-1) \theta &\leq \int_a^b \abs{f(x)} dx.
\end{align}
Letting $a=-\infty$ and $b=+\infty$ yields (c). For (b), H\"older's
inequality gives,
\[
(m-1) \theta \leq \norm{f}_2 (b-a)^{1/2},
\]
and the conclusion follows.
Now we prove (a). Assume on the contrary that the IF process goes on forever
producing an infinite set of instants $\sett{t_j: j \geq 0}$. Given $s>t_0$,
by part (b), only a finite number of instants $t_j$ belong to $[t_0, s]$.
Therefore $t_n \rightarrow +\infty$, as $n \rightarrow +\infty$. Using
Equations \eqref{v_if} and \eqref{fire_tj} it follows that,
\begin{align*}
\theta &= \abs{\int_{t_j}^{t_{j+1}} f(x) e^{\alpha(x-t_{j+1})} dx}
\\
&= \abs{v(t_{j+1}) - e^{\alpha(t_j-t_{j+1})} v(t_j)}
\leq \abs{v(t_{j+1})} + \abs{v(t_j)}.
\end{align*}
This contradicts Observation \ref{v_is_c0}.
\end{proof}
\section{The reconstruction}
\label{sec_rec}
We now address the problem of approximately reconstructing a bandlimited
function from the integrate and fire output. Since the samples are taken in
the half-line $[t_0,+\infty)$ we will make some assumption about the size of
$f$ before the initial instant. Roughly speaking, the assumption means that the integrate and fire process would not have produced any sample in the
interval $(-\infty,t_0]$.
\begin{assum}
\label{assum_2}
The function defined in Equation \eqref{deff_v} satisfies,
\[
\abs{v(t)} \leq \theta,
\mbox{ for all $t \leq t_0$.}
\]
\end{assum}
Note that by Observation \ref{v_is_c0}, any $t_0 \ll 0$ satisfies this
assumption. To approximately reconstruct $f$ we will first approximately
reconstruct $v$ from the integrate and fire output and then derive
information about $f$ by means of Equation \eqref{fv_fourier}. We will use
the structure of the IF process to produce a number of approximate samples
for $v$.
First we argue that, from the output of the IF process, we have enough
information to approximate $v$ on the time instants $\sett{t_0, \ldots,
t_n}$.
Rewriting Equation \eqref{qj} in terms of $v$
(cf. Equation \eqref{v_if}) we have,
\begin{equation}
\label{rec_vtj}
v(t_{j+1}) = e^{\alpha(t_j-t_{j+1})}v(t_j) + q_{j+1},
\quad (0 \leq j \leq n-1).
\end{equation}
Since the value $v(t_0)$ may not be exactly known we cannot determine from
this recurrence relation all the values $v(t_j)$. However, we can construct
an approximation to these values. Let $w_0 := 0$ and define recursively,
\begin{equation}
\label{rec_wj}
w_{j+1} = e^{\alpha(t_j-t_{j+1})}w_j + q_{j+1},
\quad (0 \leq j \leq n-1).
\end{equation}
Observe that Assumption \ref{assum_2} implies that $\abs{w_0-v(t_0)} \leq
\theta$. Subtracting Equation \eqref{rec_wj} from Equation \eqref{rec_vtj}
gives $\abs{v(t_{j+1})-w_{j+1}} = e^{\alpha(t_j-t_{j+1})}\abs{v(t_j)-w_j}$;
since $e^{\alpha(t_j-t_{j+1})}<1$, the initial error is never amplified and
we get,
\begin{equation}
\label{estimate_w}
\abs{w_j - v(t_j)} \leq \theta,
\quad (0 \leq j \leq n).
\end{equation}
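The recursion in Equation \eqref{rec_wj} is cheap to compute directly from the sampler output; a minimal sketch (function and variable names are ours):

```python
import math

# The recursion w_{j+1} = e^{alpha (t_j - t_{j+1})} w_j + q_{j+1} with
# w_0 = 0, computed purely from the sampler output.
def approximate_v_at_instants(instants, samples, alpha):
    """Given instants t_0,...,t_n and samples q_1,...,q_n, return w_0,...,w_n."""
    w = [0.0]
    for j, q in enumerate(samples):
        decay = math.exp(alpha * (instants[j] - instants[j + 1]))
        w.append(decay * w[j] + q)
    return w

# Toy output: unit-spaced instants with alternating samples +/- theta.
w = approximate_v_at_instants([0.0, 1.0, 2.0], [0.1, -0.1], alpha=1.0)
```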
Consequently, using only the output of the IF sampling scheme, we have
constructed a set of values $\sett{w_0, \ldots, w_n}$ that approximates $v$
on
the instants $\sett{t_0, \ldots, t_n}$. The second step is to approximate $v$
on
an arbitrary point of ${\mathbb R}$.
To this end observe that, according to the definition of $t_j$
as the minimum number satisfying Equation \eqref{fire_tj}, we have that,
\begin{equation*}
\abs{\int_{t_j}^t f(x) e^{\alpha(x-t)} dx} \leq \theta,
\quad
\mbox{for all $t \in [t_j,t_{j+1}]$.}
\end{equation*}
Rewriting this inequality in terms of $v$ (cf. Equation \eqref{v_if}) gives,
\begin{equation}
\label{band_almost}
\biggl | v(t) - e^{\alpha(t_j-t)}v(t_j) \biggr | \leq \theta,
\quad
\mbox{for all $t \in [t_j,t_{j+1}]$.}
\end{equation}
Combining this last inequality with \eqref{estimate_w} yields,
\begin{equation}
\label{band}
\biggl | v(t) - e^{\alpha(t_j-t)} w_j \biggr | \leq 2 \theta,
\quad
\mbox{for all $t \in [t_j, t_{j+1}]$.}
\end{equation}
We now show that this inequality allows us to approximate $v$ anywhere on the
line.
\begin{claim}
\label{claim_approx_v}
Given an arbitrary time instant $t \in {\mathbb R}$, choose $x \in \Rdst$ in the
following way:
\begin{itemize}
\item[(a)] if $t < t_0$, let $x:=0$,
\item[(b)] if $t$ belongs to some (unique) interval $[t_j,t_{j+1})$, let
$x := e^{\alpha(t_j-t)} w_j$,
\item[(c)] if $t \geq t_n$, let $x:= e^{\alpha(t_n-t)} w_n$.
\end{itemize}
Then, $\abs{v(t)-x} \leq 2 \theta$.
\end{claim}
\begin{rem}
Observe that the procedure to obtain $x$ from $t$ depends only on the output
of the IF process.
\end{rem}
\begin{proof}
For case (a), the conclusion follows from Assumption \ref{assum_2}. For case
(b), the conclusion follows from Inequality \eqref{band}. For case (c), the
fact that the fire condition is never satisfied after $t_n$ gives,
\begin{equation}
\biggl | v(t) - e^{\alpha(t_n-t)}v(t_n) \biggr | \leq \theta.
\end{equation}
Combining this estimate with Inequality \eqref{estimate_w}, the conclusion
follows.
\end{proof}
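The two-step procedure above — the recurrence \eqref{rec_wj} followed by the case analysis of Claim \ref{claim_approx_v} — can be sketched in code. This is an illustrative implementation, not part of the original algorithm description; it assumes the IF output is available as the firing instants $t_j$ together with the corresponding increments $q_j$ of Equation \eqref{qj}:

```python
import math

def decode_w(t, q, alpha):
    """Recurrence (rec_wj): w_0 = 0, w_{j+1} = exp(alpha*(t_j - t_{j+1}))*w_j + q_{j+1}.
    t = firing instants t_0..t_n, q = increments q_0..q_n (q[0] is unused)."""
    w = [0.0]
    for j in range(len(t) - 1):
        w.append(math.exp(alpha * (t[j] - t[j + 1])) * w[j] + q[j + 1])
    return w

def approx_v(tau, t, w, alpha):
    """Claim 1 estimator: returns x with |v(tau) - x| <= 2*theta."""
    if tau < t[0]:
        return 0.0                                   # case (a)
    j = max(i for i in range(len(t)) if t[i] <= tau)  # last pulse at or before tau
    return math.exp(alpha * (t[j] - tau)) * w[j]      # cases (b) and (c)
```

By Claim \ref{claim_approx_v}, the returned value differs from $v(\tau)$ by at most $2\theta$ under Assumption \ref{assum_2}.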
We will now choose a window function.
\begin{assum}
\label{assum_3}
A Schwartz class function $\psi$ such that
\begin{itemize}
\item $\hat{\psi} \equiv 1$ on $[-\Omega,\Omega]$, and,
\item $\hat{\psi}$ is compactly supported,
\end{itemize}
has been chosen.
\end{assum}
Since $v \in \ensuremath{PW_\Omega}$, the classic oversampling trick for bandlimited functions
(see for example \cite{fe92-3} or \cite{fegr94}) implies that there exists a
number $0<\beta<(2 \Omega)^{-1}$ such that
\begin{equation}
\label{expansion_v}
v = \sum_{k\in{\mathbb Z}} v(\beta k) \psi(\cdot-\beta k).
\end{equation}
Using the procedure described in Claim \ref{claim_approx_v}, we produce a set
$\sett{s_k}_{k \in {\mathbb Z}}$
such that
\begin{equation}
\label{v_approx}
\abs{v(\beta k) - s_k} \leq 2 \theta,
\quad \mbox{for all $k\in{\mathbb Z}$}.
\end{equation}
Let $\varphi$ be the function defined by,
\begin{equation}
\label{deff_phi}
\hat{\varphi}(w) = \left(2 \pi i w + \alpha\right) \hat{\psi}(w).
\end{equation}
It follows that $\varphi$ is also a Schwartz function. Moreover, using
Equation
\eqref{fv_fourier} we have that,
\begin{equation}
\label{expansion_f}
f = \sum_{k\in{\mathbb Z}} v(\beta k) \varphi(\cdot-\beta k).
\end{equation}
Observe that, since $v \in \ensuremath{PW_\Omega}$, the sequence $\sett{v(\beta k)}_k \in
\ell^2$, so the series in Equation \eqref{expansion_f} converges in $L^2$ and
uniformly (in fact, it converges in the Wiener amalgam norm $W(C_0, L^2)$;
see for example \cite{fe92-3}, \cite{fegr94} and \cite{algr01}).
Now we can define the approximation of $f$ constructed from the IF samples.
Let,
\begin{equation}
\label{approx_f}
\tilde{f} := \sum_{k\in{\mathbb Z}} s_k \varphi(\cdot-\beta k).
\end{equation}
Since, by Inequality \eqref{v_approx}, the sequence $\sett{s_k}_k$ is bounded
and $\varphi$ is a Schwartz function, Equation \eqref{approx_f} defines a
bounded function and the series converges uniformly (see \cite{fe92-3} or
\cite{algr01}).
The reconstruction algorithm therefore consists of computing the approximate
samples $\sett{s_k}_k$ following Claim \ref{claim_approx_v} and then combining
them with the kernel $\varphi$, which can be pre-computed.
We now give a precise error bound for the reconstruction.
\begin{theo}
\label{main_th}
Under Assumptions 1, 2 and 3, the function defined by Equation
\eqref{approx_f}
satisfies,
\[
\norm{f-\tilde{f}}_\infty \leq C \theta,
\]
for some constant C that only depends on $\Omega$ and the window function
chosen in Assumption \ref{assum_3}.
\end{theo}
\begin{proof}
According to Equations \eqref{expansion_f}, \eqref{approx_f} and Inequality \eqref{v_approx},
\begin{align*}
\norm{f-\tilde{f}}_\infty &\leq
\supess \sum_{k\in{\mathbb Z}} \abs{v(\beta k)-s_k} \abs{\varphi(\cdot-\beta k)}
\\
&\leq 2\theta \supess \sum_{k\in{\mathbb Z}} \abs{\varphi(\cdot-\beta k)}.
\end{align*}
It suffices to define $C:= 2 \sup \sum_{k\in{\mathbb Z}} \abs{\varphi(\cdot-\beta
k)}$.
Since $\varphi$ is a Schwartz function, $C<+\infty$ (see for example \cite{fe83}).
\end{proof}
\begin{rem}
We currently do not know what choice of the window function $\psi$ minimizes
the constant in the theorem. A more detailed study of the choice of the
window function should not only consider the size of that constant but also
the rate of convergence of the series in Equation \eqref{approx_f}.
\end{rem}
\section{Numerical experiments}
\label{sec_num}
We study the behavior of the reconstruction algorithm under variations in the threshold and the oversampling period for a specific choice of reconstruction kernel $\psi$.
The test signal $f$ is real valued and of finite length, produced as a
linear combination of five `sinc' kernels ($\sin(\pi x)/(\pi x)$)
at a 1\,Hz frequency, with random locations and weights. The amplitude of
the input has been normalized to 1. Although the theory covers signals defined on the whole real line, our simulations are limited by the practical implementation of the sampler and of the algorithms; the effects of truncation and quantization are not considered here.
This signal is encoded by the IF sampler with $\alpha =1$ and recovered using the procedure described in Section \ref{sec_rec}.
The reconstruction kernel $\psi$ is a raised cosine, defined by,
\begin{equation}
\psi(t) =
\sinc(t/T_s) \cos(\pi\gamma
t/T_s)\biggl ( 1-\frac{4\gamma^2t^2}{{T_s}^2}\biggr )^{-1},
\end{equation}
where $\gamma = 0.5$ and $T_s=0.25$ are determined by the maximum input frequency $\Omega$ and the desired oversampling period $\beta$ (cf. Equation
\eqref{expansion_v}). Figure \ref{fig:raisedCos} shows the raised cosine
$\psi$ in the time and frequency domains.
Observe that the spectrum of $\psi$ is constant for frequencies less than the
input bandwidth and then decays smoothly towards zero.
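For reference, the raised-cosine kernel above can be evaluated as follows. This is our own sketch; the limiting value $(\pi/4)\,\mathrm{sinc}(1/(2\gamma))$ at the removable singularity $t=\pm T_s/(2\gamma)$ follows from a short Taylor expansion of the numerator and denominator:

```python
import math

def sinc(x):
    """Unnormalized-argument sinc: sin(pi x)/(pi x)."""
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def raised_cosine(t, Ts=0.25, gamma=0.5):
    """Raised-cosine kernel psi used in the experiments (gamma = 0.5, Ts = 0.25)."""
    u = t / Ts
    denom = 1.0 - 4.0 * gamma * gamma * u * u
    if abs(denom) < 1e-10:
        # removable singularity at t = +/- Ts/(2*gamma)
        return (math.pi / 4.0) * sinc(1.0 / (2.0 * gamma))
    return sinc(u) * math.cos(math.pi * gamma * u) / denom
```

For $\gamma=0.5$ the singular points coincide with the zeros of $\mathrm{sinc}$, so the limiting value is $0$ there.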
The corresponding kernel $\varphi$ (cf. Equation \eqref{deff_phi}) is shown in Figure~\ref{fig:raisedCosfilt}.
\begin{figure}[ht]
\centering
\subfigure[Kernel $\psi$.]{
\includegraphics[width=0.45\textwidth]{./raisedCosine.eps}
\label{fig:raisedCos}
}
\subfigure[Kernel $\varphi$.]{
\includegraphics[width=0.45 \textwidth]{./filteredRaisedcos.eps} \label{fig:raisedCosfilt}
}
\caption{Reconstruction kernels.}
\end{figure}
Using $\varphi$ we recover $\tilde{f}$ (cf. Equation \eqref{approx_f}), an approximation of $f$, as shown in Figure \ref{fig:timeRec_decLat}.
As expected, the error decreases in regions with a high density of samples. This behavior is evident from Figure \ref{fig:approxV}: in dense regions the uniform sampling instants are likely to fall close to pulse locations, where $v(t)$ is accurately estimated. On the other hand, between pulses that are far apart the approximation follows an exponential decay from its last estimated value, which is not the natural trend of the signal.
Figure \ref{fig:approxV} shows $v(t)$ (solid line), the approximate samples $\sett{s_k}_k$ of $v$ on the lattice $\beta {\mathbb Z}$, constructed using the procedure described in Claim \ref{claim_approx_v}, and the envelope $v(t) \pm \theta$ within which these samples are known to lie (dashed line).
The current reconstruction algorithm uses the approximate samples of $v(t)$ at the pulse locations to define the piecewise exponential bound and to estimate the reconstruction coefficients on the uniform lattice. The numerical experiments suggest that the algorithm could be improved by also using the estimated values of $v(t)$ at the pulse locations themselves, although this implies reconstruction on a nonuniform grid.
\begin{figure}[hb]
\begin{center}
\includegraphics[width=0.7\textwidth]{./timeRec_Interp.eps}
\caption{Reconstruction of $f(t)$ from the impulse train.}
\label{fig:timeRec_decLat}
\end{center}
\end{figure}
\begin{figure}[hb]
\begin{center}
\includegraphics[width=0.7\textwidth]{./approxV.eps}
\caption{Reconstruction of $v(t)$ with $\beta = 1/4$, $\theta = 0.05$}
\label{fig:approxV}
\end{center}
\end{figure}
In both cases, error bounds similar to the one in Theorem \ref{main_th} can be derived. The variation of the error with the threshold (and hence with the pulse rate) is shown in Figure \ref{fig:error}.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.7\textwidth]{./errorvsthreshold.eps}
\caption{Variation of the error $\|f - \tilde{f}\|_{\infty}$ with the
threshold and pulse rate (dotted line), for $\beta = 0.25$.}
\label{fig:error}
\end{center}
\end{figure}
The error also depends on the choice of generator and on the oversampling period $\beta$, as seen in Figure~\ref{fig:errorvsBeta}. The relationship between the kernels and the optimal oversampling period is not yet evident.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.7\textwidth]{./errorvsbetaATthreshold.eps}
\caption{Variation of the error $\|f - \tilde{f}\|_{\infty}$ with the oversampling period, for different thresholds.}
\label{fig:errorvsBeta}
\end{center}
\end{figure}
\section{Acknowledgements}
The second and fourth authors were supported by NINDS (Grant Number: NS053561). The third author was partially supported by the following grants:
PICT06-00177, CONICET PIP 112-200801-00398 and UBACyT X149. The third and fourth authors' visit to the Numerical Harmonic Analysis Group (NuHAG) of the University of Vienna was funded by the European Marie Curie Excellence Grant EUCETIFA FP6-517154.
\clearpage
\bibliographystyle{abbrv}
\section{Introduction}
Water is characterized by well-known thermodynamic and kinetic liquid-state anomalies;
for example, the rise in density on isobaric heating (density anomaly) and
the increase in molecular mobility on isothermal compression (diffusivity anomaly)
\cite{ms98,pgd03}.
Since the anomalies of bulk water are connected with its behaviour as
a solvent in chemical and biological systems, an understanding of the structural
origins of such anomalous behaviour has attracted considerable attention\cite{dtvh05,kfs08}.
While the anomalies of water were initially presumed to be uniquely connected
to the hydrogen-bonded network of water\cite{bf33}, there is now evidence that
a number of liquids display water-like liquid-state anomalies, such as Te~\cite{Th76}, Ga, Bi~\cite{LosAlamos}, S~\cite{Sa67,Ke83}, Ge$_{15}$Te$_{85}$~\cite{Ts91},
silica~\cite{An00,Ru06b,Sh02,Po97}, silicon~\cite{Ta02} and BeF$_2$~\cite{An00,scc06,asc07,ac07,ac09pre,agc09}. The generic relationships between structure, entropy and mobility underlying
this diverse set of liquids with water-like anomalies can be understood in terms of the behaviour of the excess entropy ($S_{ex}$), defined as the difference between the entropy ($S$) of the
liquid and the corresponding ideal gas at the same density and temperature~\cite{scc06,assac10,Ol06a,oscb10,met061,met062,etm06,ybs08}. A necessary condition for the fluid to show
water-like thermodynamic and transport anomalies is an excess entropy anomaly, corresponding to a rise in excess entropy on isothermal compression. The structural basis for the
excess entropy anomaly is the existence of distinct forms of local order or length scales in the low- and high-density regimes; competition between the two types of local order results in a rise in excess entropy at intermediate densities.
In addition to the singularity-free scenario for water-like thermodynamic and kinetic anomalies, it has been conjectured that the anomalies of water are due to the presence of a
second liquid-liquid critical point, corresponding to the onset of a line of first-order phase transitions between high- and low-density phases of water
\cite{pses92}. The relationship between the liquid-liquid critical point and water-like anomalies can be addressed by considering minimal models of liquids with isotropic, pair-additive
interactions that give rise to water-like anomalies, as well as liquid-liquid critical points~\cite{fmsbs01,ssbs98}. Such models demonstrate that the presence of two repulsive
length scales in the pair interaction is necessary in order to give rise to liquid-state anomalies. If, in addition to two repulsive length scales, the pair interaction has an attractive
component, the fluid can show a liquid-liquid critical point (LLCP), in addition to the liquid-gas critical point. While two length scales in the pair interaction appears to be a necessary
condition for seeing both the LLCP and water-like anomalies, it is possible to design isotropic potentials with two length scales where appropriate variation of parameters can result in
shifting either the LLCP or the water-like anomalies into the metastable or unstable regime~\cite{Ba09,Ol06a}.
In this paper, we study a family of liquids with continuous, core-softened pair interactions consisting of a hard core, a short-range shoulder and an attractive well at a larger
separation~\cite{Ba09}. Since the potentials share a common functional form consisting of a sum of one Lennard-Jones and four Gaussian terms, we refer to them as the family of
multi-Gaussian water-like liquids. By suitably varying the parameters, the outer attractive well can be left unchanged, while the shoulder can be progressively altered from being purely
repulsive to a deep attractive well. As the shoulder shifts from being purely repulsive to more attractive, the anomalous regime in the pressure-temperature plane shrinks and disappears
while the LLCP shifts to higher temperatures and lower pressures. The connection with atomistic models is made by ensuring that the limiting case of the double minimum potential that has
no anomalies corresponds to an isotropic potential that reproduces the oxygen-oxygen radial distribution function of ST4 water~\cite{hs93}. Using this set of anomalous fluids, we address a
number of questions related to the features of the pair interaction, in addition to the two length scales, that control the temperature-pressure regime of the water-like liquid state
anomalies versus the liquid-liquid critical point. Both the liquid-liquid critical point and the water-like anomalies require a change in the nature of local order in the liquid with
density, and therefore two length scales in the case of core-softened fluids. The liquid-liquid critical point, however, depends on the energetic bias towards segregation of the two length
scales with decreasing temperature. In contrast, the water-like liquid state anomalies require an excess entropy anomaly, involving a continuous transformation of
the liquid from low- to high-density through a range of quasi-binary states reflecting the competition between two length scales in the intermediate regime~\cite{Ol06a}.
In order to understand the relationship between the interaction potential, the water-like liquid state anomalies and the liquid-liquid critical point, it is therefore
necessary to consider the temperature-dependent stabilization of the low- and high-density length scales as well as the density-dependent changes in the entropy of the
system.
In order to understand how the change in interaction potential within the multi-Gaussian water models affects the thermodynamic and kinetic water-like anomalies, it is necessary to map out
the excess entropy anomaly for the different model fluids. We use the pair correlation entropy as a simple structural estimator of the excess entropy, defining it for a one-component
fluid of structureless particles as:
\begin{equation}
\frac{S_2}{Nk_B } = -2\pi\rho \int_0^\infty
\{g(r)\ln g(r) -\left[ g(r)-1\right] \} r^2 dr
\label{s2}
\end{equation}
where $g(r)$ is the radial distribution function. It is typically the dominant contribution to the excess entropy of a fluid expressed as a multi-particle correlation expansion of the form:
\begin{equation}
S_{ex}=S-S_{id}=S_2 +S_3 + \dots
\end{equation}
where $S_n$ is the entropy contribution due to $n$-particle spatial correlations~\cite{hsg52,ng58,hjr71,dcw87,be89}. The excess entropy and mobility anomalies are linked by excess entropy
scaling relations of the form
\begin{equation}
X^*=A\exp (\alpha (S_{ex}/Nk_B))\; ,
\end{equation}
where $X^*$ are dimensionless transport properties with either macroscopic
(Rosenfeld) or microscopic (Dzugutov) reduction
parameters and the scaling parameters, $\alpha$ and $A$, depend on the functional form
of the underlying interactions~\cite{yr771,yr772,yr99,md96}.
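As an illustration (our sketch, not from the paper), $S_2/Nk_B$ of Eq.~(\ref{s2}) can be evaluated from a tabulated $g(r)$ by straightforward quadrature. Note that the integrand $g\ln g-(g-1)$ tends to $1$ as $g\to 0$, vanishes for $g\equiv 1$ (so the ideal gas gives $S_2=0$), and is non-negative everywhere, so any structure in $g(r)$ makes $S_2$ negative:

```python
import math

def pair_entropy(r, g, rho):
    """Pair-correlation entropy S2/(N k_B) of Eq. (s2):
    -2*pi*rho * int { g ln g - (g - 1) } r^2 dr,
    by trapezoidal integration over the tabulated grid r."""
    integrand = []
    for ri, gi in zip(r, g):
        # limit of g*ln(g) - g + 1 as g -> 0 is 1 (hard-core region)
        term = (gi * math.log(gi) - (gi - 1.0)) if gi > 0.0 else 1.0
        integrand.append(term * ri * ri)
    total = sum(0.5 * (integrand[i] + integrand[i + 1]) * (r[i + 1] - r[i])
                for i in range(len(r) - 1))
    return -2.0 * math.pi * rho * total
```

In an anomalous fluid the quantity of interest is the sign of $\left(\partial s_2/\partial\rho\right)_T$, obtained by evaluating this integral along an isotherm.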
As an additional means to relate the interaction potential to the liquid-state
properties, we characterize the potential energy surface (PES) of the multi-Gaussian
family of water-like liquids using instantaneous normal mode analysis. In the Instantaneous Normal Mode (INM) approach, the key quantity
is the ensemble-averaged curvature distribution of the PES sampled by the system. For a system of $N$ particles, the mass-weighted Hessian associated with
each instantaneous configuration is diagonalized to yield $3N$ normal mode eigenvalues and eigenvectors and the ensemble-average of this
distribution is referred to as the INM spectrum. The INM spectrum of a liquid will have a substantial fraction of unstable modes with
negative eigenvalues that simulations suggest is strongly correlated with the diffusivity~\cite{sc01a,sc021,sc022,cfsos94,lssss00,rm97}. Random energy models of liquids also suggest
that for supercooled liquids there will be a connection between the fraction of imaginary modes, the diffusivity and the configurational entropy~\cite{tk00,kck02}. Our previous work on
INM analysis of a core-softened water-like fluid demonstrated that the instantaneous normal mode spectra carry significant information on the dynamical consequences of the interplay between
length scales characteristic of anomalous fluids~\cite{oscb10}. We have therefore performed an INM analysis of the multi-Gaussian water-like fluids to understand the relationship between
the interaction potential, anomalies and the liquid-liquid critical point.
The paper is organized as follows. The computational details of the simulations and the equation of state data for the multi-Gaussian family of water models are summarized in Section II.
Section III contains the results and the conclusions in Section IV.
\section{The model}
\label{sec:model}
\subsection{The potential}
\label{sec:potential}
The multi-Gaussian family of water-like fluids is defined by pair-additive,
continuous, core-softened interactions with the functional form
\begin{equation}
U(r) = \epsilon \left[ \left( \frac{\sigma}{r} \right)^{a} -
\left( \frac{\sigma}{r} \right)^{b} \right] + \sum_{j=1}^{4}h_{j}
\exp \left[ -\left( \frac{r-c_{j}}{w_{j}} \right)^{2} \right] \;\;.
\label{eq:potential}
\end{equation}
\begin{figure}[h]
\begin{centering}
\includegraphics[clip=true,width=8cm]{figures/potential.eps}
\par
\end{centering}
\caption{Interaction potential obtained by changing parameters $h_1$ in Eq.~(\ref{eq:potential}). The potential and the distances are in dimensionless
units $U^*=U/\gamma$ and $r^*=r/r_0$.}\label{fig:potential}
\end{figure}
The first term is a Lennard-Jones-like potential and the second is a sum of four Gaussians, each of width $w_j$ centered at $c_j$.
The potential and the distances are given in dimensionless units, $U^*=U/\gamma$ and $r^*=r/r_0$, where $\gamma$ is the energy scale and
$r_0$ is the length scale, chosen so that the closest approach between particles is about $r^*=1$, i.e., so that the second length scale associated with the
repulsive shoulder remains the same. Here we use $\epsilon/\gamma=0.02$ and $\sigma/r_0=1.47$. Modifying $h_1$ in Eq.~(\ref{eq:potential}) changes the depth
of the hard-core well, as illustrated in Fig.~\ref{fig:potential}, while keeping the shape and location of the attractive well fixed. We report results for
four values of $h_1$, expressed as multiples of a reference value $h_1^{ref}$, as shown in Table~\ref{table1}. The values of
$a$, $b$, $\{c_j, w_j\}$ with $j=1, \dots, 4$ and $h_1^{ref}$ are the same for all four cases. Table~\ref{table2} gives the parameter values in $\AA$ and $kcal/mol$,
consistent with reproducing the oxygen-oxygen radial distribution function of ST4 water using case D~\cite{HG93}.
\begin{center}
\begin{table}
\caption{Parameters $h_1$ for potentials A, B, C and D.}
\centering{}
\begin{tabular}{|c|c|}
\hline
Potential & Value of $h_1$ \tabularnewline \hline \hline
$A$ & $ 0.25\, h_1^{ref}$ \tabularnewline \hline
$B$ & $ 0.50\, h_1^{ref}$ \tabularnewline \hline
$C$ & $ 0.75\, h_1^{ref}$ \tabularnewline \hline
$D$ & $ 1.00\, h_1^{ref}$ \tabularnewline \hline
\end{tabular}
\label{table1}
\end{table}
\end{center}
\begin{center}
\begin{table}
\caption{Parameters for potentials A, B, C and D in units
of $\AA$ and of $kcal/mol$.}
\centering{}
\begin{tabular}{|c|c|c|c|}
\hline
Parameter & Value & Parameter & Value \tabularnewline \hline \hline
$a$ & \ $9.056$ \ & $w_{1}$ & \hspace{0.2cm} $0.253$ \tabularnewline \hline
$b$ & $4.044$ & $w_{2}$ & \hspace{0.2cm} $1.767$ \tabularnewline \hline
$\epsilon$ & $0.006$ & $w_{3}$ & \hspace{0.2cm} $2.363$ \tabularnewline \hline
$\sigma$ & $4.218$ & $w_{4}$ & \hspace{0.2cm} $0.614$ \tabularnewline \hline
$c_{1}$ & $2.849$ & $h_{1}^{ref}$ & $-1.137$ \tabularnewline \hline
$c_{2}$ & $1.514$ & $h_{2}$ & \hspace{0.2cm} $3.626$ \tabularnewline \hline
$c_{3}$ & $4.569$ & $h_{3}$ & $-0.451$ \tabularnewline \hline
$c_{4}$ & $5.518$ & $h_{4}$ & \hspace{0.2cm} $0.230$ \tabularnewline \hline
\end{tabular}
\label{table2}
\end{table}
\end{center}
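Putting Eq.~(\ref{eq:potential}) and Table~\ref{table2}
together, the four potentials can be evaluated directly. The sketch below
(ours) works in the $\AA$ and $kcal/mol$ units of Table~\ref{table2}; cases
A--C are obtained by rescaling $h_1^{ref}$ according to Table~\ref{table1}:

```python
import math

# Table 2 parameters (Angstrom / kcal/mol units); case D uses h1 = h1_ref
A_EXP, B_EXP = 9.056, 4.044
EPS, SIG = 0.006, 4.218
C = [2.849, 1.514, 4.569, 5.518]
W = [0.253, 1.767, 2.363, 0.614]
H_REF = [-1.137, 3.626, -0.451, 0.230]

def U(r, h1_scale=1.0):
    """Multi-Gaussian pair potential, Eq. (eq:potential).
    h1_scale = 0.25, 0.50, 0.75, 1.00 selects cases A-D (Table 1)."""
    h = [H_REF[0] * h1_scale] + H_REF[1:]
    lj = EPS * ((SIG / r) ** A_EXP - (SIG / r) ** B_EXP)
    gauss = sum(hj * math.exp(-((r - cj) / wj) ** 2)
                for hj, cj, wj in zip(h, C, W))
    return lj + gauss
```

Scanning $U(r,\,h_1\mathrm{\_scale})$ over $r$ reproduces the progression of Fig.~\ref{fig:potential}: the shoulder region around $r=c_1$ deepens as the scale factor grows from case A to case D, while the outer attractive well is unchanged.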
\subsection{The simulation details}
\label{sec:simulation}
The properties of the system were obtained from $NVT$ molecular
dynamics using a Nos\'e-Hoover heat bath
with coupling parameter $Q = 2$. The system consists of
500 particles
in a cubic box with periodic boundary conditions,
interacting through the intermolecular potential described above. All physical
quantities
are expressed in reduced units, defined as
\begin{eqnarray}
t^* &=& \frac{t(m/\gamma)^{1/2}}{r_0}\nonumber \\
T^* &=& \frac{k_{B}T}{\gamma }\nonumber \\
p^*& =& \frac{pr_0}{\gamma }\nonumber \\
\rho^{*} &=& \rho r_0^3 \nonumber \\
D^* &=& \frac{D m}{\gamma r_0^2} \;\;.
\end{eqnarray}
Standard periodic boundary conditions together with a
predictor-corrector
algorithm were used to integrate the equations of motion with a
time step $\Delta t^{*}=0.002$ and a potential cutoff radius $r_{c}^{*}=3.5$.
The initial configuration was set in either the solid or the liquid state and, in both cases, the
equilibrium state was reached after $t_{eq}^{*}=1000$
(i.e., $500\,000$ steps, since $\Delta t^{*}=0.002$). From this time
on the physical
quantities were stored at intervals of $\Delta t_R^* = 1$ during
$t_R^* = 1000$. The system is uncorrelated after $t_d^*= 10$, as judged from the velocity
autocorrelation function, so $50$
decorrelated samples were used to compute the averages of the physical quantities.
At each state point, 100 configurations were sampled and used to
construct the instantaneous normal mode spectra and associated quantities.
We repeated the calculation for some state points using 500 configurations
and found no significant difference.
\subsection{Instantaneous Normal Modes Analysis}
The
potential energy of configuration ${\bf r}$ near ${\bf r_0}$
can be written as a Taylor expansion, truncated at second order, of the form:
\begin{eqnarray}
U({\bf r})=U({\bf r_0})-{\bf F} \bullet {\bf z}+\frac{1}{2} {\bf z^T}
\bullet {\bf H} \bullet {\bf z}
\label{eq:expansion}
\end{eqnarray}
where ${\bf z_i}=\sqrt{m_i}({\bf r_i}-{\bf r_{0,i}})$ are the mass-weighted
displacement coordinates of particle $i$.
The first and second derivatives of $U({\bf r})$ with respect to the vector
${\bf z}$ are the force and the Hessian matrix, denoted by
${\bf F}$ and ${\bf H}$ respectively.
The eigenvalues of the Hessian ${\bf H}$ are $(\{ \omega_i^2\}, i=1,3N)$
representing the squares of normal mode frequencies, and ${\bf W}({\bf r})$
are the corresponding eigenvectors. In a stable solid, ${\bf r_0}$ can be
conveniently taken as the global minimum of the potential energy
surface $U(R)$, which implies that ${\bf F}=0$ and ${\bf H}$ has
only positive eigenvalues corresponding to oscillatory modes. The INM approach
for liquids interprets ${\bf r}$ as the configuration at time $t$
relative to the configuration ${\bf r_0}$ at time $t_0$. Since typical
configurations, ${\bf r_0}$ are extremely unlikely to be local minima,
therefore ${\bf F}\neq 0$ and
${\bf H}$ will have negative eigenvalues. The negative eigenvalue modes are
those which sample negative curvature regions of the PES,
including barrier crossing modes.
The ensemble-averaged INM spectrum, $f(\omega)$, is
defined as
\begin{eqnarray}
f(\omega)= \left\langle \frac{1}{3N} \sum_{i=1}^{3N}
\delta(\omega-\omega_i)\right\rangle.
\label{eq:rho}
\end{eqnarray}
Quantities that are convenient for characterizing the instantaneous normal
mode spectrum are: (i) the fraction of imaginary frequencies ($F_{i}$), defined as
\begin{eqnarray}
F_{i}=\int_{im}^{} f({\omega}) d\omega
\label{eq:Fi}
\end{eqnarray}
and (ii) the Einstein frequency ($\omega_E$), given by
\begin{eqnarray}
\omega_E^2&=&\int_{}^{}\omega^2 f(\omega) d\omega \nonumber \\
&=& \frac{\langle Tr {\bf H} \rangle}{m(3N-3)}
\label{eq:omegaE}
\end{eqnarray}
where the integral is performed over the entire range of frequencies, real as well as imaginary.
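For a single configuration these two quantities reduce to simple statistics of the Hessian eigenvalues. The sketch below is our own; it follows the usual INM convention of counting negative eigenvalues as imaginary modes, divides by $3N$ rather than $3N-3$, and omits the ensemble average:

```python
import math

def inm_summary(eigvals):
    """Fraction of imaginary modes F_i (Eq. Fi) and Einstein frequency
    omega_E (Eq. omegaE) from the eigenvalues {omega_i^2} of the
    mass-weighted Hessian of one instantaneous configuration."""
    n = len(eigvals)
    f_im = sum(1 for lam in eigvals if lam < 0.0) / n
    mean_lam = sum(eigvals) / n     # = Tr H / (3N) in mass-weighted units
    omega_e = math.sqrt(mean_lam)   # well defined when the trace is positive
    return f_im, omega_e
```

In a production analysis both quantities would be averaged over the sampled configurations (100 per state point in the simulations reported here).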
\section{Results}
\label{results}
\subsection{Phase Diagram}
Figure~\ref{fig:PT} illustrates the pressure-temperature phase diagram for the four cases of the potential~\cite{Ba09}.
Due to the presence of the attractive interaction, all four cases have a liquid-gas transition with an associated critical point that
is not shown here. In addition, all four model liquids studied here have a liquid-liquid critical point. Cases A, B and C have water-like
density and diffusional anomalies. The solid bold lines represent the locus of temperatures of maximum density (TMD) for different isobars.
State points enclosed by the TMD locus represent the regime of density anomaly within which $(\partial\rho /\partial T)_P > 0$.
The maximum temperature along the TMD locus, denoted by $T^{max}_{TMD}$, is the threshold temperature for onset of the density anomaly.
The dot-dashed lines are the temperatures of maximum and minimum diffusivity along different isotherms~\cite{Ba09}.
This overall change in the nature of the liquid-state phase diagram for the four multi-Gaussian liquids is summarized in Figure~\ref{fig:PT-CP}.
Clearly, as the second length scale shifts from an inflexion point on the repulsive shoulder to a well with progressively increasing depth and curvature, the region
of liquid state anomalies shrinks and disappears. The figure illustrates how the pressure and temperature associated
with the liquid-gas and liquid-liquid critical points vary with the potentials $A$, $B$, $C$ and $D$. The same graph also shows that as the shoulder becomes deeper, the maximum temperature of the
TMD locus, which marks the onset temperature for thermodynamically anomalous behaviour, approaches the liquid-liquid critical point.
Since the thermodynamic and mobility anomalies of water are
correlated, we first focus on understanding the thermodynamic
condition for the presence of density anomaly. This may be stated
as:
\begin{equation}
\frac{\partial S}{\partial \rho}= -\frac{V^2\alpha}{N K_T}>0
\label{eq:ds/drho}
\end{equation}
where $\alpha$ is the thermal expansion coefficient and $K_T$ is the isothermal compressibility.
For the system to have a large anomalous region, the ratio $\alpha/K_T$ should therefore be large
and negative. Near the liquid-liquid critical point both the compressibility $K_T$ and the thermal expansion coefficient $\alpha$ diverge;
however, the compressibility diverges with a larger exponent, driving the ratio to zero. In this case the
condition given by Eq.~(\ref{eq:ds/drho}) cannot be fulfilled. This suggests that
near the liquid-liquid critical point the system prefers to phase separate into high- and low-density liquids, rather than show a smooth entropy anomaly.
\begin{figure}[ht]
\begin{centering}
\begin{tabular}{cc}
\includegraphics[clip=true,width=6cm]{figures/PT25.eps} & \includegraphics[clip=true,width=6cm]{figures/PT50.eps} \tabularnewline
\includegraphics[clip=true,width=6cm]{figures/PT75.eps} & \includegraphics[clip=true,width=6cm]{figures/PT100.eps} \tabularnewline
\end{tabular}
\par
\end{centering}
\caption{Pressure-temperature phase diagram for cases $A$, $B$, $C$ and
$D$. The
thin solid
lines are the isochores $0.30<\rho^*<0.65$. The liquid-liquid critical
point is
the dot, the locus of temperatures of maximum density is the solid thick line
and the locus of
diffusion extrema is the dot-dashed line.}
\label{fig:PT}
\end{figure}
\begin{figure}[h]
\begin{centering}
\begin{tabular}{cc}
\includegraphics[clip=true,width=7cm]{figures/PT-CP.eps}
\end{tabular}
\par
\end{centering}
\caption{Pressure versus temperature locations of the critical points for the potentials $A-D$.}
\label{fig:PT-CP}
\end{figure}
\subsection{Excess Entropy and Pair Correlations}
As discussed in the previous section, the density anomaly
corresponds to a set of state points for which
$(\partial S /\partial\rho )_T > 0$. The total
entropy is a sum of the ideal ($S_{id}$) and excess ($S_{ex}$)
contributions. Since $S_{id}$ decreases monotonically with
increasing density, a density anomaly must imply
the presence of an excess entropy
anomaly, $(\partial S_{ex} /\partial\rho )_T > 0$.
Errington \emph{et al.} have further shown that the
strength of the excess entropy anomaly required to give rise to
density anomaly is given by the condition
$\Sigma_{\mathrm{ex}}= \left(\partial (S_{\mathrm{ex}}/Nk_B) /\partial
\ln \rho\right)_T>1$~\cite{Er06}. By approximating the excess entropy by the
two-body correlation contribution $S_2$ [see Eq.~(\ref{s2})],
we relate the structural information in the radial distribution
function of the fluid to its thermodynamic behaviour.
Figure~\ref{fig:s2-sex} illustrates the $s_{2}^*(\rho)=S_2/Nk_B$ versus $\rho^*$ for
various temperatures and for the potentials $A,B,C$ and $D$. For cases
$A,B$ and $C$, at low temperatures, there is a rise in excess entropy
on isothermal compression characteristic of water-like liquids~\cite{scc06,ac09pre,agc09,Mi06b,oscb10} that
contrasts with the behaviour of simple liquids where free volume arguments
are sufficient to justify a monotonic decrease in entropy on
isothermal compression. For case $D$, no anomaly is observed in the
pair entropy
even at low temperatures. The progressive attenuation of the anomalies
on going from case $A$ to case $D$, is illustrated in
Figure~\ref{fig:s2-sex-T-all}
which compares the behavior of the pair entropy versus density
for all studied potentials at a given temperature, $T^*=0.9$.
This graph together with Figure~\ref{fig:PT-CP}
indicates that as the maximum temperature at the
TMD line approaches the liquid-liquid critical
temperature, the pair entropy curve becomes
flatter and the anomalous behavior disappears.
The origin of the pair entropy anomaly in fluids with
two length scales can be explained in terms of a
competition between the two length scales at intermediate densities. A single length scale dominates in the low- and high-density limits, while at intermediate densities, where
both length scales are present, the liquid can be regarded as a quasi-binary system with a mixing entropy. The radial distribution functions shown in our previous study clearly demonstrate
the presence of two length scales. They also show that with increasing temperature, the shorter length scale peak of $g(r)$ becomes more prominent in cases $A$, $B$ and $C$. In
contrast, in case $D$, both length scales associated with the first and second peaks of $g(r)$ broaden with increasing temperature, as a consequence of which
no anomaly emerges with decreasing temperature.
The crucial question to ask in the multi-Gaussian family of
water models is why, despite the presence of two length
scales at intermediate densities, the pair entropy anomaly
is progressively lost as the shoulder goes from being an inflexion point to a minimum with about twice the depth
of the outer attractive well. Clearly the rise in entropy
with isothermal compression due to mixing of two length scales
is counteracted by additional effects. To understand this
we note that the entropy of a one-dimensional harmonic oscillator of frequency $\omega$ is given by:
\begin{equation}
\frac{s_\omega}{Nk_B} = 1 -\ln (\beta\hbar\omega ) \; .
\end{equation}
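This curvature argument can be made concrete with a short numerical sketch. The Python snippet below evaluates the formula above; the two values of $\beta\hbar\omega$ are hypothetical, chosen only to mimic a stiff shoulder minimum versus a soft attractive minimum at the same temperature.

```python
import math

def oscillator_entropy(beta_hbar_omega):
    """Entropy per oscillator in units of k_B: s/(N k_B) = 1 - ln(beta*hbar*omega)."""
    return 1.0 - math.log(beta_hbar_omega)

# Hypothetical values: a stiff (high-curvature) shoulder minimum versus
# a soft (low-curvature) attractive minimum at the same temperature.
s_shoulder = oscillator_entropy(2.0)
s_attractive = oscillator_entropy(0.5)
assert s_shoulder < s_attractive  # the stiffer well carries less vibrational entropy
```

The stiffer well always carries the lower vibrational entropy, which is the effect that counteracts the mixing entropy of the two length scales.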
The increasing curvature of the short-range minimum,
relative to the attractive minimum, implies that a pair of particles trapped in the
shoulder minimum will have lower vibrational entropy than one trapped in the
broad, shallow attractive minimum. As a consequence, at intermediate
densities, the rise in entropy due to the presence of two length scales is
partially offset by the reduced vibrational entropy of pairs located in the
short-range minimum.
As the shoulder minimum becomes deeper, the second effect becomes
more important
and the excess entropy anomaly disappears. In systems such as the two-scale
linear ramp, such curvature-dependent effects will be absent.
It is also interesting to consider the shifting of the liquid-liquid critical
point to lower pressures and higher temperatures. For a temperature-driven
phase separation into low-density liquid (LDL) and high-density liquid (HDL), increasing
energetic stabilization of one length scale relative to the other is
required. In case A, this is clearly due to the outer attractive well
with a depth of about $\Delta U^* \approx 0.3$ and
$T_c^* \approx 0.3$. In case D, this is due to the shorter
length scale with a well depth of about $\Delta U^*\approx 1.00$ and
$T_c^* \approx 0.8$. For a shallow shoulder, the high
density liquid is stabilized under high pressure. Within the HDL phase
particles are occupying the shoulder scale. The density
anomalous region, characterized by having particles in the two scales,
occurs in the pressure range of the
low density liquid phase. As the shoulder scale
becomes deeper, less pressure is needed to form the high density
liquid. The LDL phase occupies a smaller pressure range
and therefore the density anomalous region shrinks. For a
very deep shoulder well as in case D, the HDL requires no
pressure to be formed, the LDL is at negative pressures
and the anomalous density regime disappears.
\begin{figure}[htbp]
\centering
\includegraphics[clip=true,scale=0.32]{figures/s2.vs.rho-25.eps}
\includegraphics[clip=true,scale=0.32]{figures/s2.vs.rho-50.eps}
\includegraphics[clip=true,scale=0.32]{figures/s2.vs.rho-75.eps}
\includegraphics[clip=true,scale=0.32]{figures/s2.vs.rho-100.eps}
\caption{Pair entropy versus density for the cases
$A$, $B$, $C$ and $D$ for various temperatures.}
\label{fig:s2-sex}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[clip=true,scale=0.5]{figures/s2.vs.rho-T.eps}
\caption{ Pair entropy versus density for the cases
$A$, $B$, $C$ and $D$ at $T^*=0.9$. }
\label{fig:s2-sex-T-all}
\end{figure}
\subsection{Diffusion and Rosenfeld Reduction Parameter}
\begin{figure}[htbp]
\centering
\includegraphics[clip=true,scale=0.3]{figures/DR.vs.s2-25-cut.eps}
\includegraphics[clip=true,scale=0.3]{figures/DR.vs.s2-50-cut.eps}
\includegraphics[clip=true,scale=0.3]{figures/DR.vs.s2-75-cut.eps}
\includegraphics[clip=true,scale=0.3]{figures/DR.vs.s2-100-cut.eps}
\caption{Diffusion in Rosenfeld units as a function of $-s_{2}^*$ for the cases $A, B, C$ and $D$. }
\label{fig:D-DR}
\end{figure}
Previously we have shown that the diffusion coefficient in the cases $A,B$ and $C$ decreases with
decreasing density over a certain range of densities~\cite{Ba09}. The region
in the pressure temperature phase diagram limited by the maxima and minima of
the diffusion coefficient is illustrated as dot-dashed lines in Fig.~\ref{fig:PT}. In the case
$D$, the diffusion coefficient increases with decreasing density, as in normal liquids.
It is interesting to notice that the same behavior is also observed in the pair entropy
suggesting that the anomalies present in these two quantities might be related. In order
to check this hypothesis we now consider the scaling relationship between the
diffusivity and the pair entropy. Using the Rosenfeld macroscopic reduction
parameters for the length as $\rho^{-1/3}$ and the thermal velocity as $(k_BT/m)^{1/2}$,
the dimensionless diffusivity is defined as
\begin{equation}
D_R \equiv D\frac{\rho^{1/3}}{(k_BT/m)^{1/2}}\; .
\label{eq:D*sexc}
\end{equation}
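In reduced simulation units with $k_B=m=1$, the reduction above is a one-line computation. The Python sketch below is purely illustrative and the state-point values are hypothetical, not taken from the simulations reported here.

```python
def rosenfeld_diffusivity(D, rho, T):
    """Reduced diffusivity D_R = D * rho^(1/3) / (k_B*T/m)^(1/2),
    in simulation units with k_B = m = 1."""
    return D * rho ** (1.0 / 3.0) / T ** 0.5

# Hypothetical state point, for illustration only:
D_R = rosenfeld_diffusivity(D=0.05, rho=0.5, T=0.9)
```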
The scaling of the reduced diffusivity, $D_R$ with pair entropy, $s_2^*$ is illustrated
in Fig.~\ref{fig:D-DR}. Previous results for core-softened fluids~\cite{assac10} suggest
that $\Delta S= S_{ex} -S_2$ tends to be density dependent in anomalous fluids, resulting in
a stronger isochore dependence when $\ln(D_R)$ is plotted against $S_2$ rather than against
$S_{ex}$. In the present study, we have computed only $S_2$ and therefore Fig.~\ref{fig:D-DR}
shows scaling with respect to $S_2$. For case $A$, the collapse of data from all the state
points on a single line is quite good. As we progress from case $A$ to case $D$, the
isochore dependence of the scaling parameters becomes more pronounced, suggesting that
the density dependence of $\Delta S$ increases on going from
case $A$ to case $D$.
\subsection{The Instantaneous Normal Mode Spectrum}
The variation in anomalous behaviour in the multi-Gaussian family of
water-like liquids studied here suggests that in addition to
length scales, we need to look at other features of the pair potential
e.g. its first and second derivatives. Instantaneous normal mode analysis
provides a way to summarize information on the curvature distribution of
the potential energy landscape. In Figure~\ref{fig:f-omega}, we show the INM spectra of
liquids bound by the four potentials (A, B, C and D) at a common state
point of $\rho^*=0.50$ and temperature $T^*=0.8$. The crucial
features are as follows:
\begin{itemize}
\item A low-frequency split peak in the real branch centered at about $\omega =10$, that
does not vary significantly between the four cases and must reflect modes associated with the outer attractive well;
\item A high-frequency peak in the real branch, centered at approximately 30, 35, 40 and 50 for cases $A$, $B$, $C$ and $D$ respectively, which must correspond to
motion in neighborhood of the shoulder length scale. As the curvature of the short-range minimum
increases, this feature shifts to higher frequencies and becomes more prominent;
\item The imaginary branch reflects regions of negative curvature in the neighborhood
of barriers and inflexion points. Case $A$, where there is no barrier in the pair interaction but only an inflexion point, has a single peak like a
simple liquid. However, this peak is broad because of the core-softened repulsive wall and the
fraction of imaginary modes is large. As the barrier between the short and long length scales becomes more pronounced in the pair interaction, the
second peak in the imaginary branch becomes more prominent.
\end{itemize}
Thus the real branch is dominated by vibrational modes associated
with motion in the attractive and shoulder length scales while the
imaginary modes branch is dominated by negative curvature modes
associated with transitions between the shoulder and attractive wells. For the
case $A$, the real branch has three peaks related to the
three basins: the shoulder scale, the attractive scale and a second
attractive scale located at further distance in Fig.~\ref{fig:potential}.
It has just one imaginary peak, indicating that
transitions between the two length scales do not require local barrier crossing.
For the cases $B$ and $C$ the imaginary branch has two peaks
suggestive of modes connecting between the shoulder
scale, attractive scale and second attractive scales.
The highest-frequency peak in the real branch
lies at a higher frequency in case $C$ than in case $A$ and is related to the shoulder scale.
For case $D$, the shoulder is deep and so the mode related to the shoulder scale has a very
high frequency. The imaginary branch has two distinct peaks whose modes do not connect the
shoulder scale with the other scales, and therefore no anomalies are expected.
The above discussion suggests that INM spectra carry fairly detailed information on the dynamics of transitions between the two length
scales. Two features that provide a compact signature of the INM spectra are the Einstein frequency and the fraction of imaginary modes.
Isotherms of the Einstein frequency as a function of density for all four cases show a monotonic increase with density and
do not show any significant signatures of the water-like anomalies. The fraction of imaginary modes, in contrast, correlates
strongly with the anomalous behaviour of the pair entropy and the diffusivity. Figure~\ref{fig:Fi.vs.rho} shows the $F_i$ curves versus
density for various isotherms of all four multi-Gaussian model fluids studied here. The parallel behaviour of the $s_2(\rho )$ and $F_i(\rho )$ curves at
corresponding isotherms is immediately obvious, though the $F_i(\rho)$ curves show stronger non-monotonic behaviour
than the $s_2(\rho)$ curves. This can be seen most clearly for a high-temperature isotherm.
\begin{figure}[htbp]
\centering
\includegraphics[clip=true,scale=0.35]{figures/f-omega-25.eps}
\includegraphics[clip=true,scale=0.35]{figures/f-omega-50.eps}
\includegraphics[clip=true,scale=0.35]{figures/f-omega-75.eps}
\includegraphics[clip=true,scale=0.35]{figures/f-omega-100.eps}
\caption{Normal modes versus frequency for the four studied cases. The density is fixed, $\rho^*=0.50$ in all the cases, and the temperature is varied.} \label{fig:f-omega}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[clip=true,scale=0.32]{figures/fi-vs-rho-25.eps}
\includegraphics[clip=true,scale=0.32]{figures/fi-vs-rho-50.eps}
\includegraphics[clip=true,scale=0.32]{figures/fi-vs-rho-75.eps}
\includegraphics[clip=true,scale=0.32]{figures/fi-vs-rho-100.eps}
\caption{Fraction of imaginary modes versus density for fixed temperatures, $T^* = 0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.90$ and $1.10 $ from bottom to
top for cases $A, B$ and $C$. For case $D$ the isotherms start at $T^*=0.50$.}
\label{fig:Fi.vs.rho}
\end{figure}
\section{Conclusions}
This paper examines the relationship between water-like anomalies and the
liquid-liquid critical point in a family of model fluids with
multi-Gaussian, core-softened pair interactions. The pair interaction
in this family of liquids is composed of a sum of Lennard-Jones and Gaussian
terms, in such a manner that the longer length scale associated with
a shallow, attractive well is kept constant while the shorter length scale
associated with the repulsive shoulder changes from an inflexion point to
a minimum of progressively increasing depth. The maximum depth of the shoulder
length scale is chosen so that the resulting potential reproduces the
oxygen-oxygen radial distribution function of the ST4 model of water.
As the energetic stabilization of the shoulder length scale increases, the
liquid-liquid critical point shifts to higher temperatures and lower
pressures. Simultaneously, the temperature for onset of the density
anomaly decreases and the region of liquid state anomalies in the
pressure-temperature plane diminishes. The condition for the presence
of anomalies is inconsistent with divergences near a critical point,
so that in the limiting case of maximum shoulder well depth, the anomalies
disappear.
To understand our results for the phase diagram and liquid-state anomalies
of the multi-Gaussian family of water-like fluids, it is important to
note that, in addition to the presence of two length scales, it is necessary
to consider the energetic and entropic effects as determined by local
minima and curvatures of the pair interaction. As the shoulder depth increases,
the pressure required to form the high density liquid decreases and the
temperature up to which the high-density liquid is stable increases. This explains
the shift of the liquid-liquid critical point to much lower pressures and
higher temperatures. To understand the entropic effects associated with the
changes in the interaction potential, we computed the pair correlation
entropy and demonstrated the attenuation of the excess entropy anomaly as the shoulder
length scale changed from an inflexion point to a deep minimum. In conjunction
with Rosenfeld-scaling of transport properties, this is consistent with
the progressive loss of water-like thermodynamic, structural and transport anomalies.
The excess entropy anomaly in two-scale, isotropic fluids is due to a
rise in entropy as a result of competition between two length scales at intermediate
densities. In the case of continuous potentials, the vibrational entropy
associated with the two length scales becomes important. To index the overall
curvature distribution in the liquid, we have used instantaneous normal mode
analysis and shown that the fraction of imaginary frequency modes correlates
well with the anomalous behaviour of the diffusivity and the pair correlation
entropy. A detailed analysis of the INM spectrum shows that as the shoulder
well increases in depth, there is a simultaneous rise in the positive curvature
associated with the shoulder minimum as well as the negative curvature of the
barrier separating the
shoulder minimum from the attractive minimum. Consequently, the
vibrational entropy associated with pairs of particles separated
by the shoulder distance decreases, relative to that of pairs trapped
in the outer attractive well. Therefore the mixing entropy
due to the presence of two length scales is counteracted by the
changes in vibrational entropy associated with the two length scales.
A general conclusion that emerges from this study is that even though the ratio between the two length scales is important
for locating the temperature range of the anomalies~\cite{Ya07}, additional energetic and entropic effects associated with local
minima and curvatures of the pair interaction can play an important role. The liquid-liquid phase separation depends on the relative energies
associated with the two length scales whereas the water-like anomalies depend upon a continuous rise in entropy as a function of isothermal
compression. A number of recent studies of core-softened fluids illustrate this conclusion. For example, energetic and entropic effects play a very different
role in the discrete and continuous versions of the shouldered well potential~\cite{Ol06a}.
In the discrete case, the enthalpic implications do not change significantly and the liquid-liquid critical point is not significantly different in the two systems. In contrast, the
continuous potential allows for a smooth transformation through a
range of quasi-binary states from low- to high-density and shows water-like
anomalies. A more recent study of core-softened
fluids~\cite{Si10} shows that increasing the depth of the attractive well,
while leaving the shoulder feature constant,
results in disappearance of the anomalies while shifting the liquid-liquid
critical point to lower pressures and higher temperatures.
\bibliographystyle{aip}
\section{Introduction and main results}\label{sec:i}
Let $d\in\NN$. For $x,y\in\Rd$ and $t>0$ we consider the Gaussian kernel
$$g(t,x,y)=g(t,y-x)=(4\pi t)^{-d/2} e^{-|y-x|^2/(4t)}.$$
It is the fundamental solution of the heat equation $\partial_t=\Delta$.
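As a quick numerical sanity check (a sketch, not part of any argument below), the kernel can be implemented directly and the semigroup identity $\int g(s,x,z)g(t-s,z,y)\,dz=g(t,x,y)$ verified by quadrature in dimension $d=1$:

```python
import math

def g(t, x, y, d=1):
    """Gaussian kernel g(t,x,y) = (4*pi*t)^(-d/2) * exp(-|y-x|^2/(4t)),
    written here for scalar arguments x, y (d = 1)."""
    return (4.0 * math.pi * t) ** (-d / 2.0) * math.exp(-((y - x) ** 2) / (4.0 * t))

def semigroup_residual(s, t, x, y, half_width=30.0, n=6000):
    """|int g(s,x,z) g(t-s,z,y) dz - g(t,x,y)|, the z-integral taken over
    [-half_width, half_width] by the midpoint rule."""
    h = 2.0 * half_width / n
    conv = sum(g(s, x, -half_width + (k + 0.5) * h)
               * g(t - s, -half_width + (k + 0.5) * h, y)
               for k in range(n)) * h
    return abs(conv - g(t, x, y))
```

The residual is at the level of the quadrature error, reflecting the Chapman--Kolmogorov property of the heat kernel.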
For a
function $V\colon\Rd\to\RR$
we let $G$
be
the fundamental solution of $\partial_t=\Delta+V$, determined by the following Duhamel or perturbation formula for $t>0$, $x,y\in \Rd$,
\[
G(t,x,y)=g(t,x,y)+\int_0^t \int_\Rd G(s,x,z)V(z)g(t-s,z,y)dzds.
\]
We aim at the {\it sharp global Gaussian bounds} of $G$, which mean that there are numbers $0< c_1\le 1 \le c_2$ such that
\begin{align}\label{est:sharp_uni}
c_1 \leq \frac{G(t,x,y)}{g(t,x,y)}\leq c_2\,, \quad t>0,\ x,y\in\Rd.
\end{align}
Clearly, \eqref{est:sharp_uni} implies the plain global Gaussian bounds, which only require numbers $0<\varepsilon_1, c_1 \leq 1\leq \varepsilon_2, c_2 <\infty$ such that for all $t>0$ and $x,y\in\Rd$,
\begin{align}\label{est:gaus}
c_1\, (4\pi t)^{-d/2} e^{-\frac{|y-x|^2}{4t\varepsilon_1}} \leq G(t,x,y)\leq c_2\, (4\pi t)^{-d/2} e^{-\frac{|y-x|^2}{4t\varepsilon_2}}.
\end{align}
To characterize \eqref{est:sharp_uni} we let
\begin{align*}
S(V,t,x,y)=\int_0^t \int_{\Rd} \frac{g(s,x,z)g(t-s,z,y)}{g(t,x,y)}|V(z)|\,dzds\,, \quad t>0,\ x,y\in \Rd.
\end{align*}
This will often be abbreviated to $S(V,t)$, $S(V)$ or $S$, and we always assume that $V$ is Borel measurable.
Denote, as usual,
\begin{equation*}\label{def:sSbi}
\|S(V) \|_{\infty}=\sup_{t>0,\,x,y\in\Rd} S(V,t,x,y).
\end{equation*}
The results of Bogdan, Hansen and Jakubowski \cite{MR2457489} and Zhang \cite{MR1978999} give enough evidence in favor of using $S(V)$
in this and more general contexts.
We will say that $V$ has bounded potential for bridges globally in time, if $\|S(V) \|_{\infty}<\infty$,
in which case
we can largely resolve \eqref{est:sharp_uni} thanks to the following
folklore
result.
\begin{lem}\label{lem:gen_neg}
If $\eta:=\|S(V^+)\|_\infty<1$
and $S(V^-)$ is locally bounded,
then
\begin{align}\label{gen_est}
e^{-S(V^-,t,x,y)} \leq \frac{G(t,x,y)}{g(t,x,y)}\leq \frac{1}{1-\eta}, \qquad t>0, \ x,y\in \Rd \,.
\end{align}
If $V\leq 0$, then \eqref{est:sharp_uni} holds if and only if $\|S(V)\|_\infty<\infty$.
If $V\geq 0$, then \eqref{est:sharp_uni} implies $\|S(V)\|_\infty<\infty$.
\end{lem}
Here, as usual, $V^+=\max(0,V)$ and $V^-=\max(0,-V)$.
The last statement of the lemma easily follows from Duhamel formula.
The rest of the lemma is an excerpt from \cite[Lemma~1.1 and Lemma~1.2]{2015arXiv151107167B}, where it is proved based on
\cite{MR2457489, MR3000465}.
We note that $S(V)=\infty$
for every nontrivial $V$ in dimensions
$d = 1$ and $2$,
see, e.g., \cite[Lemma~1.3]{2015arXiv151107167B}, and so \eqref{est:sharp_uni} is impossible for nontrivial $V\ge 0$ and nontrivial $V\le 0$ in these dimensions.
To characterize the boundedness of
$S(V)$,
for $d\geq 3$ and $x,y\in \Rd$ we define
\begin{align}\label{Kernel}
\Kz (x,y)= \frac{e^{-
\left(|x||y|-x\cdot y \right)/2}}{|x|^{d-2}} \left(1+|x||y| \right)^{d/2-3/2}\,,
\end{align}
where $x\cdot y$ is the usual scalar product, and we let
\begin{eqnarray*}
\Kz (V,x,y)&=&\int_{\Rd} |V(z)|\Kz (z-x,y)\,dz\,.
\end{eqnarray*}
We also denote
$$\|V \|_{\Kz }= \|
\Kz (V)\|_\infty\,.$$
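For concreteness, the kernel \eqref{Kernel} can be implemented directly. The Python sketch below follows the displayed formula verbatim, with vectors passed as tuples; it is valid for $x\neq 0$ and $d\geq 3$.

```python
import math

def K0(x, y, d=4):
    """K_0(x,y) = exp(-(|x||y| - x.y)/2) * (1 + |x||y|)^((d-3)/2) / |x|^(d-2)."""
    ax = math.sqrt(sum(c * c for c in x))
    ay = math.sqrt(sum(c * c for c in y))
    dot = sum(u * v for u, v in zip(x, y))
    return (math.exp(-(ax * ay - dot) / 2.0)
            * (1.0 + ax * ay) ** ((d - 3) / 2.0) / ax ** (d - 2))
```

For $y=0$ the exponential and the power factor both equal one, so $\Kz(x,0)=|x|^{2-d}$, the Newtonian-type kernel up to a multiplicative constant.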
Here is our main result.
\begin{thm}\label{thm:J_0}
There are constants
$M_1$, $M_2$ depending only on $d$, such that
\begin{equation}\label{eq:cSK0}
M_1 \| V \|_{\Kz } \leq
\|S(V)\|_{\infty}
\leq M_2 \| V \|_{\Kz }\,.
\end{equation}
\end{thm}
Here by constants we mean positive numbers. The proof of Theorem~\ref{thm:J_0} is given in Section~\ref{sec:proofs}.
In view of \eqref{eq:cSK0} and of the second and the third statements of Lemma~\ref{lem:gen_neg}, the condition $\|V\|_{\Kz}<\infty$ may replace $\|S(V)\|_{\infty}<\infty$ in characterizing \eqref{est:sharp_uni}, which will be often used without mention.
Similarly, sufficient smallness of $\|V\|_{\Kz}$ yields \eqref{gen_est} in view of the first statement of Lemma~\ref{lem:gen_neg}.
\begin{cor}\label{cor:sGbbyK}
If $V\leq 0$, then \eqref{est:sharp_uni} holds if and only if $\Kz (V)$ is bounded.
\end{cor}
Compared
with
$S(V)$,
$\Kz (V)$ is easier to investigate, because $\Kz (V)$ has one argument fewer than $S(V)$. This
leads to considerable progress in analysis of \eqref{est:sharp_uni}, which we now present.
For $d\geq 3$ we let $C_d=\Gamma(d/2-1)/(4\pi^{d/2})$ and
\begin{align*}
-\Delta^{-1} V(x)
= \int_0^{\infty}\int_{\Rd}g(t,x,z) V(z)\,dzdt
=C_d\int_{\Rd} \frac{1}{|z-x|^{d-2}}V(z)\,dz\,.
\end{align*}
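For $d=3$ the time integral of the Gaussian kernel can be checked numerically against $C_3|z-x|^{2-d}$ with $C_3=1/(4\pi)$. The following Python quadrature is only a sanity check of the displayed identity.

```python
import math

def heat_time_integral(r, d=3, v_lo=-60.0, v_hi=60.0, n=20000):
    """int_0^inf (4*pi*t)^(-d/2) * exp(-r^2/(4t)) dt, computed with the
    substitution t = exp(v) and the midpoint rule."""
    h = (v_hi - v_lo) / n
    total = 0.0
    for k in range(n):
        t = math.exp(v_lo + (k + 0.5) * h)
        total += (4.0 * math.pi * t) ** (-d / 2.0) * math.exp(-r * r / (4.0 * t)) * t
    return total * h

# For d = 3 this should equal C_3 / r with C_3 = 1/(4*pi).
```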
For $d=3$ the formula for $\Kz $ simplifies and we easily obtain
\begin{align}\label{eq:d_3}
\|V\|_{\Kz }= C_{d}^{-1}\, \|\Delta^{-1} |V| \|_{\infty}\,,
\end{align}
thus
$\|\Delta^{-1} |V|\|_{\infty}$
resolves \eqref{est:sharp_uni} in the same way as $\|S(V)\|_{\infty}$.
For instance, if $d=3$ and $V\leq 0$, then
the sharp global Gaussian bounds \eqref{est:sharp_uni} are equivalent to the condition
$\|\Delta^{-1} V \|_{\infty}<\infty$.
This equivalence was first proved by Milman and Semenov \cite[Remark~(3) on p. 4]{MR1994762}.
The main focus of the present paper is on the case of $d\ge 4$. Let
$$\|V \|_{d/2}=\left(\int_\Rd |V(z)|^{d/2}dz\right)^{2/d}.$$
\begin{prop}\label{thm:D_Ld/2}
If $d\geq 4$, then
\begin{equation}\label{eq:KV}
C_d^{-1}\|\Delta^{-1}|V|\|_{\infty}\le
\|V\|_{\Kz } \leq 2^{(d-3)/2}\Big( C_d^{-1}\|\Delta^{-1}|V| \|_{\infty} + \kappa_d \|V \|_{d/2}\Big).
\end{equation}
\end{prop}
The result is an analogue of \cite[Corollary~1]{MR1642818}.
In Section~\ref{sec:proofs} we give the proof and specify the constant $\kappa_d$.
As a consequence,
$\|\Delta^{-1}V\|_{\infty}<\infty$ is necessary for
\eqref{est:sharp_uni} if $V\le 0$ and if $V\ge 0$, cf. Lemma~\ref{lem:gen_neg}.
On the other hand for every $d\geq 3$
there is $V\leq 0$ such that
$\|V\|_{\Kz}<\infty$, i.e., \eqref{est:sharp_uni} holds,
but
$V\notin L^1(\Rd)\cup \bigcup_{p>1}L^p_{loc}(\Rd)$, in particular $\|V \|_{d/2}=\infty$, see
\cite{2015arXiv151107167B}.
A long-standing open problem on \eqref{est:sharp_uni} for $V\le 0$ posed by Liskevich and Semenov \cite[p. 602]{MR1642818} reads as follows: ``The validity of the two-sided estimates for the case $d>3$ without the additional assumption $V\in L^{d/2}$ is an open question.''
The question is whether $\|V\|_{\Kz}$ and $\|\Delta^{-1}V\|_{\infty}$ are comparable for $d>3$.
It turns out that the answer is negative, as follows.
\begin{prop}\label{thm:dist}
Let $d\geq 4$.
For $\mathbf z=(z_1, z_2,\ldots,z_d)\in\Rd$ we write $\mathbf z=(z_1,\mathbf z_2)$,
where $\mathbf z_2=(z_2,\ldots,z_d)\in \RR^{d-1}$.
We define
\begin{eqnarray*}
A&=&\{(z_1,\mathbf z_2)\in \Rd : \ z_1>4, \ |\mathbf z_2|\leq \sqrt{z_1}\}, \quad \mbox{ and }\\
V(z_1,\mathbf z_2)&=&-\frac{1}{z_1} \ind_{A}(z_1,\mathbf z_2).
\end{eqnarray*}
Then $\| \Delta^{-1} V \|_{\infty}<\infty$ and
$\|V\|_{\Kz }=\infty$.
There even exists a function $V \le 0$ with compact support such that $\|\Delta^{-1}V\|_{\infty}<\infty$ but $\|V\|_{\Kz }=\infty$.
\end{prop}
Generally, for $d\ge 4$, neither finiteness nor smallness of $\|\Delta^{-1} V \|_{\infty}$
is sufficient for the comparability of $g$ and $G$, even for $V$ with fixed sign and compact support.
Here are a few more comments to relate our result to existing literature.
In \cite{MR2064932} Milman and Semenov denote
$e(V,0)=\|\Delta^{-1}|V|\|_{\infty}$ and introduce
$e_*(V,0)=\sup_{\alpha\in\Rd}\|V (-\Delta+2\alpha\cdot\nabla)^{-1}\|_{1\to 1}$ to describe \eqref{est:sharp_uni} -- see \cite[Theorem~1C]{MR2064932}.
The spatial anisotropy introduced by $\alpha\cdot\nabla$ plays a role similar to
that seen in the integral defining $S(V,t,x,y)$, and there are constants $c_1$, $c_2$ depending only on $d\ge 3$ such that
$$
c_1 \|V\|_{\Kz } \leq e_*(V,0)\leq c_2 \|V\|_{\Kz }\,.
$$
This is proved in \eqref{last_step} below. For $d=3$ we have $e(V,0)=e_*(V,0)$. In contrast, for $d\geq 4$, by Proposition~\ref{thm:dist} there is $V\leq 0$ such that $e(V,0)<\infty$ but $e_*(V,0)=\infty$.
For the last remark we restrict ourselves to $V\le 0$.
Then
the condition $
\|\Delta^{-1}V\|_{\infty}<\infty$ characterizes
the plain global Gaussian bounds \eqref{est:gaus}, see \cite{MR1456565}.
By \eqref{eq:d_3}, for $d=3$ (and $V\leq 0$)
the plain global Gaussian bounds \eqref{est:gaus} hold if and only if the sharp global Gaussian bounds \eqref{est:sharp_uni} hold.
In contrast, by Proposition~\ref{thm:dist}
for
$d\geq 4$ the property \eqref{est:gaus}
is weaker than \eqref{est:sharp_uni}.
The remainder of the paper is structured as follows.
In Section~\ref{sec:proofs} we prove
Theorem~\ref{thm:J_0},
Proposition~\ref{thm:D_Ld/2}
and
Proposition~\ref{thm:dist}.
Section~\ref{appendix}
gives auxiliary results,
in particular the following crucial estimate of an
inverse-Gaussian type integral.
\begin{thm}\label{thm:est2}
Let $c>0$, $\beta> 1$ and
\begin{align*}
f(a,b)&=\int_0^{\infty} u^{-\beta} e^{-c \left[ \sqrt{u}b - \frac{a}{\sqrt{u}} \right]^2}\,du\,,\qquad a,b>0\,.
\end{align*}
We have
$$
f(a,b)\overset{\beta,c}{\approx} \frac{(1+4ab)^{\beta-3/2}}{a^{2(\beta-1)}}\,.
$$
\end{thm}
Here $\overset{\beta,c}{\approx}$ means that the ratio of both sides is bounded above and below by constants depending only on $\beta$ and $c$.
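The statement can be probed numerically. The Python sketch below evaluates $f(a,b)$ by quadrature (after the substitution $u=e^v$) for the sample choice $\beta=2$, $c=1$, and checks that its ratio to the right-hand side stays within a fixed band at a few sample points; this is an illustration only, not part of the proof.

```python
import math

def f_num(a, b, beta=2.0, c=1.0, v_lo=-60.0, v_hi=60.0, n=20000):
    """f(a,b) = int_0^inf u^(-beta) exp(-c*(sqrt(u)*b - a/sqrt(u))^2) du,
    computed with u = exp(v) and the midpoint rule."""
    h = (v_hi - v_lo) / n
    total = 0.0
    for k in range(n):
        u = math.exp(v_lo + (k + 0.5) * h)
        total += u ** (1.0 - beta) * math.exp(-c * (math.sqrt(u) * b - a / math.sqrt(u)) ** 2)
    return total * h

def comparator(a, b, beta=2.0):
    """Right-hand side of the comparability: (1 + 4ab)^(beta - 3/2) / a^(2*(beta-1))."""
    return (1.0 + 4.0 * a * b) ** (beta - 1.5) / a ** (2.0 * (beta - 1.0))
```

For $\beta=2$, $c=1$ one has the closed form $f(a,b)=2(b/a)e^{2ab}K_1(2ab)$ in terms of the modified Bessel function, and the numerically observed ratios indeed stay close to $1$ for small $ab$ and close to $\sqrt{\pi}/2$ for large $ab$.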
\section{Proofs of main results}\label{sec:proofs}
For $t>0$, $x,y\in \Rd$, we consider
\begin{align}
N(V,t,x,y)&:=\int_0^{t/2}\int_{\Rd}\frac{e^{-|z-y+(\tau/t)(y-x)|^2/(4\tau)}}{\tau^{d/2}}|V(z)|dzd\tau \nonumber\\
&+\int_{t/2}^t\int_{\Rd} \frac{e^{-|z-y+(\tau/t)(y-x)|^2/(4(t-\tau))}}{(t-\tau)^{d/2}} |V(z)|dzd\tau= N(V,t,y,x) \,.\label{def:N}
\end{align}
Clearly, $S(V)=S(|V|)$ and $N(V)=N(|V|)$.
Because of the work of Zhang \cite{MR1978999},
$N$ is a convenient approximation of $S$.
Namely, by \cite[Lemma 3.1, Lemma 3.2]{MR1978999}, there are constants $m_1,m_2$ depending only on $d$ such that
\begin{align}
S(V,t,x,y)&\geq m_1\, N(V,t/2,x,y)\,,\qquad t>0, \ x,y\in \Rd \,,\tag{L} \label{L}\\
S(V,t,x,y)&\leq m_2\, N(V,t,x,y)\,,\qquad t>0, \ x,y\in \Rd \,.\tag{U} \label{U}
\end{align}
As seen in \cite{MR1978999}, the comparability even holds for the kernels of $S$ and $N$.
In this section we prove our main result, i.e., Theorem~\ref{thm:J_0}. We start by using $N(V,t)$, \eqref{U} and \eqref{L}, to estimate $S(V,t)$.
\begin{lem}\label{lem:upr}
Let $t>0$. We have
\begin{align*}
\int_0^{t/2}\int_{\Rd}\frac{e^{-|z-y+(\tau/t)(y-x)|^2/(4\tau)}}{\tau^{d/2}}|V(z)|\,dzd\tau \leq N(V,t)(x,y)\,,\quad x,y\in \Rd\,,
\end{align*}
and
\begin{align*}
\sup_{x,y}N(V,t)(x,y) \leq 2 \,\sup_{x,y} \int_0^{t/2}\int_{\Rd}\frac{e^{-|z-y+(\tau/t)(y-x)|^2/(4\tau)}}{\tau^{d/2}}|V(z)|\,dzd\tau\,.
\end{align*}
\end{lem}
\begin{proof}
The first inequality follows by the definition of $N(V,t)(x,y)$. For the proof of the second one we note that
\begin{align*}
\int_{t/2}^t\int_{\Rd} &\frac{e^{-|z-y+(\tau/t)(y-x)|^2/(4(t-\tau))}}{(t-\tau)^{d/2}} |V(z)|dzd\tau\\
&\qquad = \int_0^{t/2}\int_{\Rd} \frac{e^{-|z-x+(\tau/t)(x-y)|^2/(4\tau)}}{\tau^{d/2}} |V(z)|dzd\tau\,.
\end{align*}
\end{proof}
For $x,y\in\Rd$ we let
$$J(x,y)=\int_0^{\infty} \tau^{-d/2} e^{-\frac{|x-\tau y|^2}{4\tau}} d\tau\,.$$
In view of the discussion in Section~\ref{sec:i} we have
\begin{align*}
e_*(V,0)&=\sup_{\alpha \in \Rd}\|(-\Delta+2\alpha\cdot\nabla )^{-1}|V|\|_{\infty}\\
&=(4\pi)^{-d/2}\sup_{x,y\in\Rd} \int_{\Rd} J(z-x,y) |V(z)|\,dz\,.
\end{align*}
\begin{lem}\label{lem:J_0}
We have
\begin{align*}
\sup_{t>0,\,x,y\in\Rd} S(V,t,x,y) \geq m_1\sup_{x,y\in\Rd} \int_{\Rd} J(z-x,y)|V(z)|\,dz\,,
\end{align*}
and
\begin{align*}
\sup_{t>0,\,x,y\in\Rd} S(V,t,x,y)\leq 2\, m_2 \sup_{x,y\in\Rd} \int_{\Rd} J(z-x,y)|V(z)|\,dz\,.
\end{align*}
\end{lem}
\begin{proof}
By \eqref{L} and Lemma~\ref{lem:upr},
\begin{align*}
\sup_{t>0,\,x,y\in\Rd}& S(V,t,x,y)\geq m_1 \sup_{t>0,\,x,y\in\Rd} N(|V|,t/2)(x,y)\\
&\geq m_1 \sup_{t>0,\,x,y\in\Rd} \int_0^{t/4}\int_{\Rd}\frac{e^{-|z-y+(2\tau/t)(y-x)|^2/(4\tau)}}{\tau^{d/2}}|V(z)|\,dzd\tau\\
&= m_1\sup_{t>0,\,y,w\in\Rd} \int_0^{t/4}\int_{\Rd}\frac{e^{-|z-y+\tau w|^2/(4\tau)}}{\tau^{d/2}}|V(z)|\,dzd\tau\\
&= m_1 \sup_{y,w\in\Rd} \int_{\Rd} J(z-y,w) |V(z)|\,dz\,.
\end{align*}
By \eqref{U} and Lemma~\ref{lem:upr},
\begin{align*}
\sup_{t>0,\,x,y\in\Rd}& S(V,t,x,y)\leq m_2\sup_{t>0,\, x,y\in\Rd} N(V,t)(x,y)\\
&\leq 2\, m_2 \sup_{t>0,\, x,y\in\Rd} \int_0^{t/2}\int_{\Rd}\frac{e^{-|z-y+(\tau/t)(y-x)|^2/(4\tau)}}{\tau^{d/2}}|V(z)|\,dzd\tau\\
&\leq 2\, m_2 \sup_{t>0,\, y,w\in\Rd} \int_0^{t/2}\int_{\Rd}\frac{e^{-|z-y+\tau w|^2/(4\tau)}}{\tau^{d/2}}|V(z)|\,dzd\tau\\
&= 2 \, m_2 \sup_{y,w\in\Rd} \int_{\Rd} J (z-y,w) |V(z)|\,dz\,.
\end{align*}
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:J_0}]
We claim \eqref{eq:cSK0} holds
with $M_1>0$ that depends only on $d$, and
$M_2= m_2 2^{d} \int_0^{\infty} (1\vee r)^{d/2-3/2}\, r^{-1/2}\, e^{-r}\,dr$.
To this end, according to Lemma~\ref{lem:J_0}, we analyze
\begin{align*}
J(z-x,y)=
\int_0^{\infty} \tau^{-d/2} e^{-\frac{|z-x-\tau y|^2}{4\tau}} d\tau\,.
\end{align*}
Obviously, $J=\infty$ if $d=1$ or $d=2$. For $d\geq 3$ we observe that
$$
\frac{|z-x-\tau y|^2}{4\tau}=\frac1{4}\left[\frac{|z-x|}{\sqrt{\tau}}-\sqrt{\tau}|y| \right]^2+\frac1{2}\big(|z-x||y|-(z-x)\cdot y\big)\,,
$$
and thus
$$
J(z-x,y)= e^{-\frac1{2}\left(|z-x||y|-(z-x)\cdot y \right)}
\int_0^{\infty} \tau^{-d/2}
e^{-\frac1{4}\left[\sqrt{\tau}|y|-\frac{|z-x|}{\sqrt{\tau}} \right]^2} d\tau\,.
$$
Finally, by Theorem~\ref{thm:est2}
with $a=|z-x|/2$, $b=|y|/2$, $\beta=d/2$ and $c=1$,
\begin{align}\label{last_step}
J(z-x,y)\overset{d}{\approx}
\Kz (z-x,y)\,.
\end{align}
This also gives the explicit constants, as a consequence of Remark~\ref{rem:expl_final}. For instance we can take $M_2= 8m_2 \sqrt{\pi}$ if $d=3$.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{thm:D_Ld/2}]
The left hand side inequality follows from the identity
$\Kz (V)(x,0)= C_d^{-1} (-\Delta^{-1})|V|(x)$.
If $y=0$, then the upper bound trivially holds. For $y\neq 0$
we consider two domains of integration. We have
\begin{align*}
\int_{|z-x||y|\leq 1} &\Kz (z-x,y)|V(z)|\,dz
\leq 2^{(d-3)/2} \int_{|z-x||y|\leq 1} \frac{1}{|z-x|^{d-2}} |V(z)|dz\\
&\leq \frac{2^{(d-3)/2}}{C_d} |\!|\Delta^{-1}|V| |\!|_{\infty}\,.
\end{align*}
Furthermore, by a change of variables and H{\"o}lder inequality,
\begin{align*}
&\int_{|z-x||y|\geq 1} \Kz (z-x,y)|V(z)|dz\\
&\leq 2^{(d-3)/2} \!\!\! \int_{|z-x||y|\geq 1}
\frac{e^{-\frac1{2}\left(|z-x||y|-(z-x)\cdot y \right)}}{(|z-x||y|)^{(d-1)/2}} |y|^{d-2}|V(z)|dz
\leq 2^{(d-3)/2}\kappa_d |\!|V |\!|_{d/2}
\,,
\end{align*}
where
$$
\kappa_d=\left(
\int_{|w|>1}
\left(e^{-\frac1{2}(|w|-w\cdot 1 )}|w|^{-(d-1)/2}\right)^{d/(d-2)} dw \right)^{(d-2)/d} \,.
$$
The finiteness of $\kappa_d$
follows from Lemma~\ref{lem:const} below.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{thm:dist}]
We use the notation introduced in the formulation of the theorem.
First we prove that $\|V\|_{\Kz }=\infty$.
Let $\mathbf y=(1,\mathbf 0)\in \mathbb R^d$, $\mathbf x=0$.
Observe that for $\mathbf z\in A$ we have
$$ 0\leq |\mathbf z||\mathbf y|-\mathbf z\cdot \mathbf y=|\mathbf z|-z_1 = \frac{|\mathbf z_2|^2}{\sqrt{z_1^2+|\mathbf z_2|^2}+z_1}\leq \frac{z_1}{\sqrt{z_1^2+|\mathbf z_2|^2}+z_1}\leq 1 $$
and thus also
$ z_1\le |\mathbf z| \le 2z_1$. Then,
\begin{align*}
\|V\|_{\Kz }&\geq \int_{\mathbb R^d} e^{-\frac{1}{2}(|\mathbf z||\mathbf y|-\mathbf z\cdot \mathbf y)}|V(\mathbf z)|\frac{(1+|\mathbf z||\mathbf y|)^{\frac{d}{2}-\frac{3}{2}}}{|\mathbf z|^{d-2}}\, d\mathbf z
\geq c\int_A \frac{1}{z_1}\frac{z_1^{\frac{d}{2}-\frac{3}{2}}}{z_1^{d-2}}\, d\mathbf z\\
&=c\int_{4}^\infty \int_{|\mathbf z_2|<\sqrt{z_1}} z_1^{-1+2-d+\frac{d}{2}-\frac{3}{2}}\, d\mathbf z_2\, dz_1
=c_1\int_{4}^\infty z_1^{-1+2-d+\frac{d}{2}-\frac{3}{2}+\frac{1}{2}(d-1)}\, dz_1\\
&=c_1\int_{4}^\infty \frac{1}{z_1}\, dz_1=\infty.
\end{align*}
We now prove that $\|\Delta^{-1} |V| \|_{\infty}<\infty$.
By the symmetric rearrangement inequality (see \cite[Chapter~3]{MR1817225}) we have
\begin{align*}
\sup_{\mathbf x\in \mathbb R^d}\int_{\mathbb R^d} \frac{1}{|\mathbf z-\mathbf x|^{d-2}} |V(\mathbf z)|\, d\mathbf z
= \sup_{x_1\in \mathbb R} \int_4^{\infty} \int_{\RR^{d-1}} \frac{\ind_{|\mathbf z_2|<\sqrt{z_1}}}{[(z_1-x_1)^2+|\mathbf z_2|^2\,]^{(d-2)/2}}\frac1{z_1}d\mathbf z_2\,dz_1\,.
\end{align*}
It suffices then to consider $\mathbf x=(x_1,0,\ldots, 0)$ and
we only need to show that the following three integrals are uniformly bounded for $x_1\geq 4$.
The first integral is
\begin{align*}
I_1&=\int_{x_1+\sqrt{x_1}}^\infty \int_{|\mathbf z_2|<\sqrt{z_1}}
\frac{1}{|z_1-x_1|^{d-2}+|\mathbf z_2|^{d-2}} \frac{1}{z_1}\, d\mathbf z_2\, dz_1\\
& \leq \int_{x_1+\sqrt{x_1}}^\infty \int_{|\mathbf z_2|<\sqrt{z_1}}
\frac{1}{|z_1-x_1|^{d-2}} \frac{1}{z_1}\, d\mathbf z_2\, dz_1\\
&= c \int_{x_1+\sqrt{x_1}}^\infty
\frac{1}{|z_1-x_1|^{d-2}} \frac{1}{z_1} z_1^{\frac{1}{2}(d-1)}\, dz_1
= c \int_{\sqrt{x_1}}^\infty
\frac{1}{z_1^{d-2}} \ (z_1+x_1)^{\frac{d}{2}-\frac{3}{2}}\, dz_1\\
&\leq c' \int_{\sqrt{x_1}}^{x_1}
\frac{1}{z_1^{d-2}} \ x_1^{\frac{d}{2}-\frac{3}{2}}\, dz_1
+c' \int_{x_1}^\infty
\frac{1}{z_1^{d-2}} \ z_1^{\frac{d}{2}-\frac{3}{2}}\, dz_1 \leq c''<\infty.
\end{align*}
The second integral we consider is
\begin{align*}
I_2 & = \int_2^{x_1-\sqrt{x_1}} \int_{|\mathbf z_2|<\sqrt{z_1}}
\frac{1}{{|z_1-x_1|^{d-2}+|\mathbf z_2|^{d-2}}} \frac{1}{z_1}\, d\mathbf z_2\, dz_1\\
& \leq
\int_2^{x_1-\sqrt{x_1}} \int_{|\mathbf z_2|<\sqrt{z_1}}
\frac{1}{{|z_1-x_1|^{d-2}}} \frac{1}{z_1}\, d\mathbf z_2\, dz_1
=c \int_2^{x_1-\sqrt{x_1}}
\frac{1}{{(x_1-z_1)^{d-2}}} z_1^{\frac{d}{2} -\frac{3}{2}}\, dz_1\\
&= c \int_{\sqrt{x_1}}^{x_1-2} \frac{1}{w^{d-2}} (x_1-w)^{\frac{d}{2}-\frac{3}{2}}\, dw
\leq c \int_{\sqrt{x_1}}^{x_1-2} \frac{1}{w^{d-2}} x_1^{\frac{d}{2}-\frac{3}{2}}\, dw
\le c'<\infty.
\end{align*}
The remaining integral is
\begin{align*}
I_3
&=\int_{x_1-\sqrt{x_1}}^{x_1+\sqrt{x_1}} \int_{|\mathbf z_2|<\sqrt{z_1}}
\frac{1}{{|\mathbf z-\mathbf x|^{d-2}} }\frac{1}{z_1}\, d\mathbf z
\leq 2\int_{x_1-\sqrt{x_1}}^{x_1+\sqrt{x_1}} \int_{|\mathbf z_2|<2\sqrt{x_1}}
\frac{1}{{|\mathbf z-\mathbf x|^{d-2}}} \frac{1}{x_1}\, d\mathbf z\\
&\leq 2 \int_{B(\mathbf x,\, 3 \sqrt{x_1})}\frac{1}{{|\mathbf z-\mathbf x|^{d-2}}} \frac{1}{x_1}\, d\mathbf z
\leq c< \infty.
\end{align*}
To prove the second statement of Proposition~\ref{thm:dist}, for $s>0$ we let
$
{\it d}_s
f(x)=sf(\sqrt{s}x)$. Note that the dilatation does not change the norms:
$$
\| \Delta^{-1}({\it d}_s f) \|_{\infty}=\| \Delta^{-1} f \|_{\infty}\,, \qquad \|{\it d}_s f\|_{\Kz }=\|f\|_{\Kz } \,.
$$
Moreover, ${\rm supp} ({\it d}_s f) \subseteq B(0,r/\sqrt{s})$ if ${\rm supp} (f)\subseteq B(0,r)$, $r>0$.
Now, let $V\geq 0$ be a potential such that
$\|\Delta^{-1}V\|_{\infty}=C<\infty$ and $\|V\|_{\Kz }=\infty$.
Then for any $r>0$ we have $\|\Delta^{-1} (V\ind_{B_r})\|_{\infty}\leq C$ and $\|V\ind_{B_r}\|_{\Kz }\to \infty$ as $r\to \infty$.
For $n\in \NN$ we define $$V_n={\it d}_{r_n^2}(V\ind_{B_{r_n}}) \,,$$
where $r_n$ is chosen such that
$
\|V\ind_{B_{r_n}}\|_{\Kz }\geq 4^n
$.
Thus ${\rm supp}(V_n)\subseteq B(0,1)$.
Finally we consider $\tilde{V}=\sum_{n=1}^{\infty}V_n/2^n$. Then,
$$
\|\tilde{V}\|_{\Kz }\geq \|V_n\|_{\Kz }/2^n\geq 2^n \to \infty\,,\quad n \to \infty \,,
$$
and
$$
\|\Delta^{-1}\tilde{V}\|_{\infty}\leq \sum_{n=1}^{\infty} \|\Delta^{-1}V_n\|_{\infty}/2^n \leq C\,.
$$
\end{proof}
\section{Appendix}
\label{appendix}
In this section we collect
auxiliary calculations.
\begin{lem}\label{lem:h}
Let $\gamma>-1/2$. Then
$$
h(x)=\int_0^{\infty} \left(x+s^2 \right)^{\gamma} e^{-cs^2}ds\, \overset{\gamma,c}{\approx}\, \left(1+x\right)^{\gamma}\,,\qquad x\geq 0\,.
$$
\end{lem}
\begin{proof}
By putting $r=s^2$ we get
\begin{align*}
h(x)=\left(1+x\right)^{\gamma} \int_0^{\infty} \left( \frac{x+r}{1+x} \right)^{\gamma} r^{-1/2}\,\frac{e^{-cr}dr}{2}\,.
\end{align*}
Since for all $x,r\geq 0$ we have
\begin{align*}
1\vee r \geq \frac{x}{1+x} + \frac{r}{1+x} \geq \begin{cases} r/2\,, \quad {\rm for}\quad x\in(0,1)\,,\\ 1/2\,, \quad {\rm for} \quad x \geq 1\,,\end{cases}
\end{align*}
the last integral in the above is comparable with a positive constant depending only on $\gamma$ and $c$.
\end{proof}
\begin{rem}\label{rem:expl}
\rm
If $\gamma\geq 0$, then
$h(x) \leq \, C \left(1+x\right)^{\gamma}$, $x\geq 0$,
where $C=\frac1{2}\int_0^{\infty} (1\vee r)^{\gamma} r^{-1/2} e^{-cr}dr $.
\end{rem}
\begin{lem}\label{lem:Iapp}
Let $c>0$, $\beta> 1$ and
\begin{align*}
I_{\rm app}(a,b)=\int_0^{\infty}
\left(\frac{s+\sqrt{4ab+s^2}}{2a} \right)^{2(\beta-1)}
\frac{e^{-cs^2}\,ds}{\sqrt{4ab+s^2}}\,,\qquad a,b>0\,.
\end{align*}
Then
\begin{align*}
I_{\rm app}(a,b) \overset{\beta,c}{\approx} \frac{\left(1+ 4ab \right)^{\beta -3/2}}{a^{2(\beta-1)}}\,.
\end{align*}
\end{lem}
\begin{proof}
Observe that $0\leq s \leq \sqrt{4ab+s^2}$. Thus with $h(x)$ and $\gamma=\beta-3/2$ from Lemma~\ref{lem:h} we have
\begin{align}\label{expl_1}
2^{-2(\beta-1)} a^{-2(\beta-1)} \leq \frac{I_{\rm app}(a,b)}{h(4ab)}\leq a^{-2(\beta-1)}\,.
\end{align}
The assertion follows by Lemma~\ref{lem:h}.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:est2}]
By substitution $u=(a/b)r$ we obtain
$$
f(a,b)=(a/b)^{1-\beta}\int_0^{\infty} r^{-\beta+1} e^{-c ab\left[\sqrt{r}-\frac1{\sqrt{r}} \right]^2} \frac{dr}{r}\,.
$$
By change of variables from $r$ to $1/r$ we get
$$
f(a,b)=(a/b)^{1-\beta}\int_0^{\infty} r^{\beta-1} e^{-c ab\left[\sqrt{r}-\frac1{\sqrt{r}} \right]^2} \frac{dr}{r}\,.
$$
Finally, we let $ \sqrt{r} - 1/\sqrt{r}=s/\sqrt{ab}$, then $\left( \sqrt{r} -s/\sqrt{4ab} \right)^2= 1+s^2/(4ab)$. Note that $\sqrt{r}>s/\sqrt{4ab}$, hence $$r=\left( s/\sqrt{4ab}+\sqrt{1+s^2/(4ab)}\right)^2=\left( s+\sqrt{4ab+s^2} \right)^2 /(4ab)\,,$$
and
\begin{align*}
dr&= 2 \left( s+\sqrt{4ab+s^2} \right) \left( 1+s/\sqrt{4ab+s^2}\right)\,ds /(4ab)\\
&= 2 \left( s+\sqrt{4ab+s^2} \right)^2 \,ds/\left(4ab\sqrt{4ab+s^2}\right)\\
&= 2 r\, ds/\sqrt{4ab+s^2}\,.
\end{align*}
This gives
\begin{align*}
&f(a,b)=2 \int_{-\infty}^{\infty} \left(\frac{s+\sqrt{4ab+s^2}}{2a} \right)^{2(\beta-1)} \frac{e^{-cs^2}ds}{\sqrt{4ab+s^2}}\,.
\end{align*}
By splitting the last integral we have
\begin{align*}
f(a,b)=2 \int_0^{\infty} \left(\frac{s+\sqrt{4ab+s^2}}{2a} \right)^{2(\beta-1)} \frac{e^{-cs^2}ds}{\sqrt{4ab+s^2}}\\
+2 \int_0^{\infty} \left(\frac{-s+\sqrt{4ab+s^2}}{2a} \right)^{2(\beta-1)} \frac{e^{-cs^2}ds}{\sqrt{4ab+s^2}}\,.
\end{align*}
Since $\beta>1$ and $0\leq -s +\sqrt{4ab+s^2}\leq s+\sqrt{4ab+s^2}$, we have
\begin{align}\label{expl_2}
2\, I_{\rm app}(a,b)\leq f(a,b)\leq 4\, I_{\rm app}(a,b)\,.
\end{align}
The proof is completed by an application of Lemma \ref{lem:Iapp}.
\end{proof}
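The change of variables used in this proof can be verified numerically. The following Python sketch (the sample values of $a,b,s$ are arbitrary) checks that $r(s)=\left(s+\sqrt{4ab+s^2}\right)^2/(4ab)$ satisfies the defining relation $\sqrt{r}-1/\sqrt{r}=s/\sqrt{ab}$, and that $dr/ds=2r/\sqrt{4ab+s^2}$, the latter against a central finite difference.

```python
import math

def r_of_s(s, a, b):
    # r = (s + sqrt(4ab + s^2))^2 / (4ab), the root with sqrt(r) > s/sqrt(4ab)
    return (s + math.sqrt(4 * a * b + s * s)) ** 2 / (4 * a * b)

for a, b, s in [(0.5, 2.0, 0.3), (1.0, 1.0, -1.7), (3.0, 0.25, 4.0)]:
    r = r_of_s(s, a, b)
    # the defining relation sqrt(r) - 1/sqrt(r) = s / sqrt(ab)
    assert abs(math.sqrt(r) - 1 / math.sqrt(r) - s / math.sqrt(a * b)) < 1e-12
    # dr/ds = 2 r / sqrt(4ab + s^2), checked by a central difference
    h = 1e-6
    drds = (r_of_s(s + h, a, b) - r_of_s(s - h, a, b)) / (2 * h)
    assert abs(drds - 2 * r / math.sqrt(4 * a * b + s * s)) < 1e-5
```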
\begin{rem}\label{rem:expl_final}
\rm
Using \eqref{expl_2}, \eqref{expl_1} and Remark~\ref{rem:expl}
we get an explicit constant in the upper bound in Theorem~\ref{thm:est2} for $\beta\geq 3/2$:
$$
f(a,b)\leq C \,\frac{(1+4ab)^{\beta-3/2}}{a^{2(\beta-1)}}\,,
$$
where
$$
C=2 \int_0^{\infty} (1\vee r)^{\beta-3/2}\, r^{-1/2}\, e^{-cr}\,dr\,.
$$
In particular if $\beta=3/2$, then $C= \sqrt{4\pi/c}$.
\end{rem}
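Since $2\int_0^{\infty} r^{-1/2}e^{-cr}\,dr=4\int_0^{\infty}e^{-ct^2}\,dt$ after substituting $r=t^2$, the value $C=\sqrt{4\pi/c}$ for $\beta=3/2$ can also be confirmed by direct quadrature. A minimal Python check (the truncation point $T$ and step count are ad hoc; this is only an illustration):

```python
import math

def C_numeric(c, T=20.0, n=200000):
    # C = 2 * int_0^inf r^(-1/2) e^(-cr) dr; substituting r = t^2 removes the
    # integrable singularity at r = 0 and gives C = 4 * int_0^inf e^(-c t^2) dt,
    # approximated here by the trapezoid rule on [0, T]
    h = T / n
    s = 0.5 * (1.0 + math.exp(-c * T * T))
    for k in range(1, n):
        t = k * h
        s += math.exp(-c * t * t)
    return 4 * s * h

for c in (0.5, 1.0, 3.7):
    assert abs(C_numeric(c) - math.sqrt(4 * math.pi / c)) < 1e-6
```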
We now verify the finiteness of $\kappa_d$ from the statement of Proposition~\ref{thm:D_Ld/2}.
\begin{lem}\label{lem:const}
Let $d\geq 3$. Then,
\begin{align*}
\int_{\Rd\setminus B(0,1)} e^{-(|w|-w\cdot 1)}|w|^{-\beta}dw <\infty\quad \iff \quad \beta>(d+1)/2\,.
\end{align*}
\end{lem}
\begin{proof}
We always have
$$\int_{\{w\in \Rd\setminus B(0,1): \ w\cdot 1\le |w|\sqrt{2}/2\}} e^{-(|w|-w\cdot 1)}|w|^{-\beta}dw<\infty,$$
therefore we only need to characterize the finiteness of the complementary integral.
We will follow the usual notation for spherical coordinates in $\Rd$ \cite{MR1530579}. In particular, $w\cdot 1=r\cos \varphi_1$ and the Jacobian is $r^{d-1}\prod_{k=1}^{d-2} \sin^k(\varphi_{d-1-k})$. We denote $\varphi=\varphi_1$, and we consider
\begin{align*}
&\int_1^{\infty} \int_0^{\pi/4} e^{-r(1-\cos \varphi)} r^{-\beta+d-1}\sin^{d-2}\varphi \,d\varphi\, dr\\
&=\int_0^{\pi/4} h(\varphi) \frac{\sin^{d-2}\varphi }{(1-\cos \varphi)^{d-\beta}}\, d\varphi\,,
\end{align*}
where $h(\varphi)=\int_{1-\cos \varphi}^{\infty} e^{-s} s^{-\beta+d-1}ds$.
If $\beta=d$, then $h(\varphi)\approx 1+|\log(1-\cos\varphi)|$ and $\int_0^{\pi/4}h(\varphi)\sin^{d-2}\varphi d\varphi<\infty$, as needed.
If $\beta>d$, then $h(\varphi)\approx (1-\cos \varphi)^{d-\beta}$, and the integral is finite, too.
If $\beta<d$, then $h(\varphi)\approx 1$ and $\int_0^{\pi/4} \frac{\sin^{d-2}\varphi }{(1-\cos \varphi)^{d-\beta}} d\varphi\approx \int_0^{\pi/4} \varphi^{(d-2)-2(d-\beta)}d\varphi$, which converges if and only if $\beta>(d+1)/2$.
\end{proof}
\newpage
\bibliographystyle{abbrv}
\section{Introduction}
The origin of supersymmetry breaking remains an open question.
More important for phenomenological purposes is to know how the
breaking of supersymmetry is transmitted to the ordinary particles.
The most popular scenario arises in the context of supergravity.
In these theories supersymmetry is assumed to be broken
in some isolated hidden sector and transmitted to
the observable sector by gravity \cite{nilles}.
These models, however, suffer from certain drawbacks.
The
degeneracy of the scalar quarks needed to avoid large
flavor changing neutral currents (FCNC) is not usually guaranteed at
low energies.
Also, the breaking of
supersymmetry does not survive in the flat limit,
which leads to cosmological disasters
(the Polonyi problem \cite{poloni}).
In this letter we will consider
an alternative scenario.
It is well known that extra U(1) factors
normally appear in effective field
theories arising from strings.
One of these U(1) is usually anomalous.
The cancellation of its anomalies occurs
by the Green-Schwarz mechanism \cite{gs} and requires that
both hidden and observable fields transform non-trivially under
this U(1). Thus, this anomalous U(1)
seems to be a natural new candidate for
transmitting the supersymmetry breaking from the hidden
to the observable sector. Here we will study this possibility.
Since the U(1) is anomalous, ${\rm Tr}{\bf Q} \not= 0$,
a Fayet-Iliopoulos term of ${\cal O}(M^2_P)$ is
always generated \cite{witten}. This term
facilitates the
breaking of supersymmetry in the flat limit, avoiding
the Polonyi problem. The scale of
supersymmetry breaking
can be smaller than $M_P$ and can originate dynamically.
In the presence of
gravity,
realistic scalar and
gaugino masses are induced
in the observable sector.
We find that
the $D$-term contribution can be larger than the
gravity mediated $F$-term contribution, resulting in a
hierarchy of soft masses.
This is a
crucial difference with the conventional
hidden sector scenarios in supergravity models.
As we will show, our model can lead to a certain degree of
squark degeneracy and suppressed FCNC.
It also allows for an explanation of
the observed quark mass hierarchy
($m_{t,b}\gg m_{u,d},m_{c,s}$)
and predicts an inverse hierarchy for the squarks
($m^2_{\tilde u,\tilde d}
\simeq m^2_{\tilde c,\tilde s}\gg m^2_{\tilde t,\tilde b}$).
Anomalous U(1) symmetries have been considered before
to predict the weak mixing angle \cite{ibanez},
fermion \cite{u1} or sfermion \cite{scalar}
masses; in these previous analyses, however,
the anomalous U(1) does not play any role in the breaking
of supersymmetry.
\section{Supersymmetry Breaking with an Anomalous U(1)}
Let us consider
a pair of chiral superfields
$\phi^-$ and $\phi^+$ with charges equal to $ -1$ and $+ 1$ respectively
under a gauge U(1).
We will assume that
there are other positively charged fields $Q_i$
such that ${\rm Tr}{\bf Q} > 0$
and the U(1) is anomalous.
This results into the appearance
of a Fayet-Iliopoulos term $\xi={\cal O}(M^2_P{\rm Tr}{\bf Q})$
\cite{witten}.
In string theories the generated Fayet-Iliopoulos term
can be calculated and is given by \cite{fayetil}
\begin{equation}
\xi = {g^2{\rm Tr}{\bf Q} \over 192\pi^2}M^2_P\, .
\end{equation}
The
$D$-term contribution to the effective potential takes the form
\begin{equation}
{g^2 \over 2}D^2 = {g^2 \over 2}\left (\sum_i
q_i|Q_i|^2+|\phi^+|^2 - |\phi^-|^2+\xi\right)^2\, ,
\label{dtermcon}
\end{equation}
where $q_i$ is the U(1)-charge of the field $Q_i$.
If eq.~(\ref{dtermcon}) is the only term in the potential,
the vacuum expectation value (VEV) of $\phi^-$ adjusts
to compensate $\xi$, and supersymmetry will not be broken.
However, according to the old observation
by Fayet \cite{fayet},
this can lead to the spontaneous breakdown of
supersymmetry if the
$\phi^-$ field has a non-zero mass term in the superpotential:
\begin{equation}
W = m\phi^+\phi^-\, .
\end{equation}
We will show below that such a mass term can
in fact be generated dynamically.
For the moment, let us consider it
as a new input of the theory and look for its consequences.
Minimization of the potential
shows that the VEVs of the scalar components are
\begin{equation}
\langle\phi^+\rangle = 0,~~~\langle\phi^-\rangle^2
= \xi- {m^2 \over g^2}\, ,
\label{vacuums}
\end{equation}
and the VEVs of the $F$- and $D$-components are given by
\begin{equation}
\langle F_{\phi^+}\rangle = m\sqrt{\xi- {m^2 \over g^2}},~~~
\langle F_{\phi^-}\rangle = 0,~~~ \langle D\rangle =\frac{m^2}{g^2}\, .
\label{vacuum}
\end{equation}
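The vacuum (\ref{vacuums})--(\ref{vacuum}) can be checked directly on the scalar potential $V=m^2\left(|\phi^+|^2+|\phi^-|^2\right)+\frac{g^2}{2}\left(|\phi^+|^2-|\phi^-|^2+\xi\right)^2$, i.e.\ the $F$-terms of $W=m\phi^+\phi^-$ plus the $D$-term with the $Q_i$ set to zero. The following Python sketch (the numerical values of $m,g,\xi$ are arbitrary, subject to $\xi>m^2/g^2$) verifies stationarity, the quoted $F$- and $D$-values, and that the vacuum energy $m^2\xi-m^4/(2g^2)$ is positive, so that supersymmetry is indeed broken.

```python
import math

m, g, xi = 0.6, 1.3, 2.0          # arbitrary test values with xi > m^2/g^2
assert xi > m * m / (g * g)

def potential(x, y):
    # x = |phi^+|^2, y = |phi^-|^2; F-terms from W = m phi^+ phi^- plus the
    # U(1) D-term with Fayet-Iliopoulos constant xi (fields Q_i set to zero)
    return m * m * (x + y) + 0.5 * g * g * (x - y + xi) ** 2

x0, y0 = 0.0, xi - m * m / (g * g)   # claimed vacuum: <phi^+> = 0, <phi^->^2 = xi - m^2/g^2

# stationarity in the y-direction; x = |phi^+|^2 >= 0 is a boundary minimum
h = 1e-6
dVdy = (potential(x0, y0 + h) - potential(x0, y0 - h)) / (2 * h)
assert abs(dVdy) < 1e-6
assert potential(x0 + h, y0) > potential(x0, y0)

# vacuum values of D, F_{phi^+} and the (positive) vacuum energy
D = x0 - y0 + xi
assert abs(D - m * m / (g * g)) < 1e-12
F_plus = m * math.sqrt(y0)
assert abs(F_plus - m * math.sqrt(xi - m * m / (g * g))) < 1e-12
assert abs(potential(x0, y0) - (m * m * xi - m ** 4 / (2 * g * g))) < 1e-12
```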
The spectrum of the theory is the following: (1) The Goldstone boson
$Im\phi^-$ is eaten up by the gauge field that gets a mass
$g\sqrt{\xi - {m^2 \over g^2}}$ \cite{argu};
(2) its superpartner $Re\phi^-$
gets a mass $g\sqrt{\xi -{m^2 \over g^2}}$ from the $D$-term and
becomes a member of the massive gauge superfield;
(3) the complex scalar $\phi^+$ gets a squared-mass $2m^2$;
(4) one linear combination
of the chiral fermions and
the gaugino gets a Dirac mass $g\sqrt{\xi- {m^2 \over 2g^2}}$,
whereas the orthogonal
combination is the massless Goldstino.
Let us now embed this model in a supergravity theory.
It is easy to show that the
broken global supersymmetry cannot be restored by the supergravity
interactions.
This is because
an unbroken
supergravity with vanishing vacuum energy
implies $\langle W\rangle = 0$ and therefore
that all $\partial_\phi W$ and $D_A$ vanish too; this
contradicts the initial assumption that supersymmetry was broken
in the flat limit.
Under supergravity,
the VEVs of the fields will be shifted from eqs.~(\ref{vacuums})
and (\ref{vacuum}),
but the relation
\begin{equation}
{\langle F^2 \rangle \over \langle D \rangle} \sim\xi\, ,
\label{relation}
\end{equation}
will still hold.
\section{The Sparticle Spectrum}
In a supergravity theory the
supersymmetry
breaking is
communicated by gravity from the hidden sector ($\phi^+,\phi^-$)
to the observable
sector ($Q_i$).
The scalar masses receive contributions of order
\begin{equation}
m^2_Q\simeq \frac{\langle F_{\phi^+}\rangle^2}{M^2_P}
\simeq\frac{m^2\xi}{M^2_P}\simeq \varepsilon m^2\, ,
\label{fterm}
\end{equation}
where $\varepsilon\equiv \xi/M^2_P$, which in string theories takes the value
$\varepsilon=g^2{\rm Tr}{\bf Q}/192\pi^2$.
These contributions are,
in principle, non-universal, since they depend on the
K\"ahler potential \cite{nilles}.
The gaugino masses can arise from the operator
\begin{equation}
\int d^2\theta \frac{\phi^+\phi^-}{M^2_P}W_aW_a\, ,
\label{operator}
\end{equation}
where $W_a$ is the superfield that contains the gauge field strength
of the standard model SU$(a)$ group, $a=1,2,3$.
Thus, gaugino masses are given by
\begin{equation}
m_\lambda\simeq\frac{\langle F_{\phi^+}\phi^-\rangle}{M^2_P}\simeq
\varepsilon m\, .
\label{gaugi}
\end{equation}
Notice that the presence of the field $\phi^-$ with a VEV of order $M_P$
is crucial to give acceptable gaugino masses from
the operator eq.~(\ref{operator}).
The absence of this field in other models in which
supersymmetry is also broken in the
flat limit, leads to very light gauginos \cite{ads} (see
however ref.~\cite{nelson}).
In string theories the operator eq.~(\ref{operator}) can only be
induced at the one-loop level since only the dilaton couples to $W_aW_a$
at the tree level. Larger contributions to the gaugino masses, however,
can arise from integrating out heavy states as we will
show in the next section.
Since in our scenario $\langle D\rangle$ is different from zero,
extra contributions to the scalar masses arise from
the $D$-term for fields
that transform under the anomalous U(1).
{}From eqs.~(\ref{dtermcon}) and (\ref{vacuum}), these are given by
\begin{equation}
\Delta m^2_{Q_i}=q_i\, m^2\, .
\label{dterm}
\end{equation}
Notice that these contributions can be much larger than the $F$-term
contributions eq.~(\ref{fterm}) if $\varepsilon\ll 1$. Thus,
this scenario allows for
a hierarchy of soft masses:
\begin{equation}
\Delta m^2_Q>m^2_Q>m^2_\lambda\, .
\end{equation}
This is different from models
in which the U(1) does not play any role in the breaking of supersymmetry.
In those models the $D$-term contribution to
the scalar masses is always of the same order
as the $F$-term contribution \cite{scalar}.
The spectrum eqs.~(\ref{fterm}), (\ref{gaugi}) and
(\ref{dterm}) is a general feature
of this {\it hybrid} scenario where the breaking of supersymmetry
is transmitted by both gravity and U(1)-gauge interactions and is due to
the generic relation eq.~(\ref{relation}).
This allows for a solution to the
supersymmetric flavor problem, {\it i.e.}
the required degeneracy between the first and
second family squarks $\delta m^2_{Q}/m^2_Q\ll 1$.
If these two families of squarks transform non-trivially
under the U(1), they receive the universal contribution of eq.~(\ref{dterm})
which, for $\varepsilon\ll 1$, can be much larger
than the non-universal contribution eq.~(\ref{fterm}) and
therefore
\begin{equation}
\frac{\delta m^2_{Q}}{m^2_Q}\simeq{\varepsilon}\ll 1\, .
\end{equation}
Decreasing $\varepsilon$ increases not only the degeneracy of
the first two family squarks,
but also increases their soft masses with respect to the other ones and then
further suppresses the supersymmetric FCNC contributions.
Obviously, $\varepsilon$ cannot be much smaller than 1, otherwise
the gaugino masses obtained
from (\ref{gaugi}) are too small.
The best scenario that we envisage is to have the three
quark families transforming
under the U(1) as $\{1,1,0\}$ respectively \cite{pos}.
{}For reasonable values of
$\varepsilon=g^2{\rm Tr}{\bf Q}/192\pi^2\simeq 10^{-2}$, we get
for $m\simeq 5$ TeV:
\begin{equation}
m_\lambda\simeq 50\ {\rm GeV}\ ,\
m_{Q_3}\simeq 500\ {\rm GeV}\ ,\
m_{Q_{1,2}}\simeq 5\ {\rm TeV}\, .\label{splitting}
\end{equation}
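The numbers in eq.~(\ref{splitting}) follow from the scalings $m_\lambda\simeq\varepsilon m$, $m_{Q_3}\simeq\sqrt{\varepsilon}\,m$ and $m_{Q_{1,2}}\simeq m$ of eqs.~(\ref{gaugi}), (\ref{fterm}) and (\ref{dterm}). A trivial bookkeeping check in Python (units are GeV; only the order-of-magnitude estimates of the text are reproduced):

```python
import math

eps = 1e-2          # epsilon = g^2 Tr Q / (192 pi^2), taken ~ 10^-2 in the text
m = 5000.0          # GeV, i.e. m ~ 5 TeV

m_gaugino = eps * m              # eq. (gaugi):  m_lambda ~ eps * m
m_Q3 = math.sqrt(eps) * m        # eq. (fterm):  m_Q ~ sqrt(eps) * m
m_Q12 = m                        # eq. (dterm):  Delta m_Q ~ m for unit U(1) charge

assert abs(m_gaugino - 50.0) < 1e-9     # 50 GeV
assert abs(m_Q3 - 500.0) < 1e-9         # 500 GeV
assert m_Q12 == 5000.0                  # 5 TeV
```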
This is a spectrum very similar to that in ref.~\cite{pt}. The
FCNC are suppressed enough. Furthermore,
this scenario provides a solution to
the supersymmetric CP problem
\cite{edm}.
This is because the squarks of the first
family are so heavy that
their contribution to the electric dipole moment
of the neutron is small,
even if
the CP-violating phases are of ${\cal O}(1)$.
It is important to remark that
the large mass splitting eq.~(\ref{splitting})
does not lead to a naturalness
problem, since the first two families are almost decoupled from the Higgs
\cite{dg,pt}.
The above anomalous U(1) could also play a role in explaining
the fermion masses in the
same spirit as in ref.~\cite{u1}. Here, however,
we are constrained to have the
first two families with equal U(1) charges
(in order to avoid too large FCNC) \cite{pos}.
Although a complete model will not be attempted in this letter,
it is interesting to note that
if, as we mentioned above, the Higgs and the $3^{rd}$ family are
neutral under this U(1) but the $1^{st}$ and $2^{nd}$
ones are charged, a tree-level mass
is only allowed for the $3^{rd}$ family, explaining why the top and
bottom masses are much larger than the others. This scenario
relates the mass hierarchy of the quarks to that
in eq.~(\ref{splitting}) for the squarks.
It is worth pointing out that, contrary to most of the flavor models,
our scenario allows for gauging extra flavor symmetries,
since the universal contribution eq.~(\ref{dterm})
dominates over any other
non-universal $D$-term contribution.
\section{A Scenario of Dynamical Supersymmetry Breaking}
Up to now we have assumed that
$m\sim 1$ TeV is just a new scale in the model.
In this section we will show
that this scale can be generated dynamically.
We only need a gauge group that at some intermediate scale
$\Lambda$ becomes strongly interacting and leads to a field
condensation.
The simplest example is an SU(2) group with two doublets $\Phi$ and
$\bar\Phi$, neutral under the anomalous U(1).
At energies below the scale $\Lambda$, the low-energy effective
theory can be described in terms of the gauge-invariant quantity
$X\equiv \Phi\bar\Phi$ \cite{ads}.
The superpotential is given by
\begin{equation}
W=\lambda\frac{X}{M_P}\phi^+\phi^-+\frac{\Lambda^5}{X}\, ,
\end{equation}
where the first term has been assumed to be present
in the classical theory; the second term is generated
non-perturbatively by instantons \cite{ads}.
If no Fayet-Iliopoulos term is present in the theory,
the vacuum has a run-away behaviour,
$X\rightarrow\infty$ with $\phi^+,\phi^-\rightarrow 0$.
However, when the U(1) $D$-term of eq.~(\ref{dtermcon}) is considered,
the field $\phi^-$ is forced to get a VEV and drives $X$ to a
value around $\Lambda$. This generates the effective scale
$m=\lambda\langle X\rangle/M_P$ and the
breaking of supersymmetry.
The only difference with respect to the model
of sect.~2
is that $\phi^+$ now gets a VEV of order $\sqrt{\xi}$
and then $\langle F_{\phi^-}\rangle\sim m\sqrt{\xi}$.
A new contribution to the gaugino masses can now arise
from the operator
\begin{equation}
\frac{1}{16\pi^2}\int d^2\theta
\frac{\phi^-}{\sqrt{\xi}}W_aW_a\, ,
\label{axion}
\end{equation}
that can be induced if extra heavy matter fields
(transforming under the
standard model group) are present and get their masses from couplings
to $\phi^-$.
It can be shown that these couplings do not modify
the supersymmetry-broken vacuum.
Although the operator eq.~(\ref{axion}) is
suppressed by a one-loop factor, it is enhanced with respect to
the gravity-induced operators since $\sqrt{\xi}< M_P$.
Eq.~(\ref{axion}) generates a mass term for the gauginos
given by
\begin{equation}
m_\lambda\simeq\frac{1}{16\pi^2}\frac{\langle F_{\phi^-}\rangle}
{\sqrt{\xi}}
\simeq \frac{m}{16\pi^2}\, ,
\end{equation}
that can be as large as eq.~(\ref{gaugi}).
The simplicity of this dynamical model resides in the fact that
the strongly interacting gauge group is only needed for
generating the small scale $m$
and not for breaking the supersymmetry by itself
as in ref.~\cite{ads}.
Here it is the Fayet-Iliopoulos term
that plays the new and crucial role of triggering
the breaking of supersymmetry.
\section{The Polonyi Problem}
Perhaps the main cosmological difficulty of the
supergravity models with a conventional hidden sector
is the Polonyi problem \cite{poloni}. This arises
because models in which supersymmetry gets restored in the flat limit
predict light ${\cal O}(m_{3/2})$ scalar particles with
VEVs of ${\cal O}(M_P)$, with an extremely flat potential and $1/M_P$
suppressed interactions.
In the early universe these
fields are expected to sit far away from their
present (zero-energy) vacua. The reason is that in the early universe
(during inflation or in the heat bath) these flat directions
get large soft masses equal to $\alpha H^2$,
where $H$ is the Hubble parameter and
$\alpha$ is a number of order 1 that
depends on the details of the cosmological scenario \cite{hub}.
For particles with non-zero VEVs this leads, almost certainly,
to a classical displacement from the present vacuum at
early times ($\Delta \sim M_P$) and to the subsequent coherent
oscillations around the true minimum after inflation. The amplitude
and consequently the energy stored in the oscillations is determined by
the initial deviation and will overclose the universe if the
displacement is larger than $\sim 10^{-9}M_P$ \cite{poloni}.
For $\alpha > 0$ the displacement
is generically given by the value of the present VEV, whereas for
$\alpha < 0$ it can be much larger. Therefore, a light decoupled
scalar with a VEV larger than $10^{-9}M_P$ is problematic, whereas
scalars with smaller VEVs (at present) can be diluted by inflation.
Now it is clear why the Polonyi problem can be overcome in theories
with flat space supersymmetry breaking. Such theories
do not necessarily require scalars with large VEVs and vanishing mass
in the globally supersymmetric limit. In our models, the
field that gets a VEV of order $M_P$ is heavy; it is eaten up
by the massive U(1)-gauge superfield.
\section{Conclusions}
\noindent$\bullet$
We pointed out that an anomalous gauge U(1) symmetry is a natural
candidate for being the mediator and messenger of supersymmetry breaking.
It allows for simple models of
dynamical supersymmetry breaking in the flat limit.
\noindent$\bullet$ These models can be embedded in a supergravity
theory and generate realistic scalar and gaugino soft masses.
The supersymmetry breaking is communicated by gravity
and the gauge U(1). This {\it hybrid} scenario allows for
a solution to the supersymmetric flavor and CP problem.
The resulting phenomenology
is very different from that of the usual models with universal
soft masses \cite{pt}.
\noindent$\bullet$ Since supersymmetry is broken in the flat limit,
there is no Polonyi problem. All the hidden sector fields
are either very massive or get VEV below the Planck scale.
\vspace{.6cm}
It is a pleasure to thank Gian Giudice, Amit Giveon,
Luis Ib\'a\~nez, Fernando Quevedo and Misha Shifman
for very useful discussions.
{\it Note added}: After submitting this paper, we learned about a related
work by P. Bin\'etruy and E. Dudas, preprint hep-th/9607172.
We thank E. Dudas for comments.
\newpage
\section{Introduction and Main Results}
In this paper we continue our analysis, begun in \cite{PPWZ05}, of
a recent model describing the proliferation of prions. This model
has been introduced in Greer, Pujo-Menjouet and Webb \cite{GPW04},
based on the works of Masel, Jansen and Nowak \cite{MJN99}, Nowak,
Krakauer, Klug and May \cite{NKKM98} and others. For comprehensive
explanations and discussions of the model and the relevant
biochemical literature we refer to \cite{GPW04}. Here we only
give a very short description of the model.
Prions are proteins that are believed to be responsible for
certain diseases like BSE and the Creutzfeld-Jacob disease. There
are two basic forms of prions of interest here, the {\em Prion
Protein Cellular} $PrP^C$ and the {\em Prion Protein Scrapie}
$PrP^{Sc}$. The single molecule proteins $PrP^C$, also called {\em
monomers} in the sequel, are protease resistent proteins which
have a cell protective function and are produced by the body,
regularly. On the other hand, the infectious prion $PrP^{Sc}$ is a
string-like {\em polymer} formed of monomeric $PrP^C$. Above a
critical chain length $x_0>0$ the polymers are more stable than
the $PrP^C$, and they can grow to chains containing thousands of
monomers. $PrP^{Sc}$ has the ability to replicate by splitting, we
assume binary splitting here.
So there are three main processes which govern the dynamics of
prions in this model.
\begin{itemize}
\item growth in length by polymerization with rate $\tau>0$;
\item binary splitting with rate $\beta(x)>0$, a polymer of length $x>0$ splits into one of length $0<y<x$ and
one of length $x-y$ with probability $\kappa(y,x)$;
\item natural degradation with rate $\gamma>0$ for the monomers and with rate $\mu(x)$ for the polymers with length $x$.
\end{itemize}
The model proposed in \cite{NKKM98} further assumes that polymers
of length $0<x\leq x_0$ immediately decompose completely into
monomers. This reflects the assumption that $PrP^{Sc}$ polymers
are unbranched and form a simple $\alpha$-helix with $x_0$ monomer
units per turn. An $\alpha$-helix of length less than $x_0$ is
incomplete and thus is much less stable. Denoting the numbers of
monomers at time $t$ by $V(t)$ and the density of polymers by
$u(t,x)$, we obtain the following model equations.
\begin{eqnarray}
\label{pde}
&&\partial_tV(t) =\lambda -\gamma V(t) -\tau V(t)\int_{x_0}^\infty u(t,x)dx +2\int_0^{x_0}x\int_{x_0}^\infty\beta(y)\kappa(x,y)u(t,y)dydx\nonumber\\
&&\partial_t u(t,x)+\tau V(t)\partial_x u(t,x)+(\mu(x)+\beta(x))u(t,x)=2\int_x^\infty\beta(y)\kappa(x,y)u(t,y)dy\\
&& V(0)=V_0\geq 0, \quad u(t,x_0)=0, \quad u(0,x)=u_0(x),\nonumber
\end{eqnarray}
where $t\geq0$ and $x_0\leq x <\infty$. Here $\lambda>0$ is a
constant background source of monomers. Observe that the splitting
function $\kappa(y,x)$ should satisfy the following properties.
$$ \kappa(y,x)\geq0,\quad \kappa(y,x)=\kappa(x-y,x),\quad \int_0^x \kappa(y,x)dy =1,$$
for all $x\geq x_0$, $y\geq0$, and $\kappa(y,x)=0$ if $y>x$ or
$x\leq x_0$. Note that these conditions imply
$$ 2\int_0^x y\kappa(y,x)dy =x,\quad x>0.$$
In fact,
\begin{eqnarray*}
&& 2\int_0^x y\kappa(y,x)dy = \int_0^x y\kappa(y,x)dy + \int_0^x y\kappa(x-y,x)dy\\
&& =\int_0^x y\kappa(y,x)dy + \int_0^x (x-y)\kappa(y,x)dy=
x\int_0^x \kappa(y,x)dy=x.
\end{eqnarray*}
This implies that mass does not change via the splitting process,
and by a simple computation we obtain the following relation for
the total number of monomers in the system.
$$ \frac{d}{dt}[V(t)+ \int_{x_0}^\infty xu(t,x)dx] = \lambda-\gamma V(t) -\int_{x_0}^\infty x\mu(x) u(t,x)dx,\quad t\geq0.$$
In \cite{NKKM98} it is further assumed that splitting is
equi-distributed (polymer chains are equally likely to split at
all locations), and that the rate of splitting is proportional to
length. This reflects again the hypothesis that polymers form
$\alpha$-helices and are not folded in more complicated
configurations, which would make certain segments of the chain
less likely to split than others. Therefore, we make the further
assumptions
$$ \kappa(y,x) = 1/x\; \mbox{ if } x>x_0 \; \mbox{ and } 0<y<x,\quad \kappa(y,x)=0 \; \mbox{ elsewhere },$$
$\beta(x)=\beta x$ is linear, and $\mu(x)\equiv \mu$ constant.
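With the equi-distributed kernel just specified, the defining properties of $\kappa$ (symmetry, normalization, and the first-moment identity $2\int_0^x y\,\kappa(y,x)\,dy=x$) can be verified numerically. A short Python sketch (midpoint quadrature; the value of $x$ is arbitrary, and this is only an illustration):

```python
def kappa(y, x, x0=1.0):
    # equi-distributed splitting kernel: uniform in the cut point y
    return 1.0 / x if (x > x0 and 0.0 < y < x) else 0.0

def midpoint(f, a, b, n=10000):
    # composite midpoint rule for int_a^b f
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

x = 3.7
assert kappa(1.2, x) == kappa(x - 1.2, x)                            # symmetry y <-> x - y
assert abs(midpoint(lambda y: kappa(y, x), 0.0, x) - 1.0) < 1e-9     # normalization
assert abs(midpoint(lambda y: 2 * y * kappa(y, x), 0.0, x) - x) < 1e-9  # mass conservation
```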
Then the model contains only 6 parameters, and can even be reduced
to a system of 3 ordinary differential equations. In fact,
introduce the new functions
$$ U(t)=\int_{x_0}^\infty u(t,y)dy \quad \mbox{ and } \quad P(t)=\int_{x_0}^\infty yu(t,y)dy,$$
representing the total number of polymers, and the total number of
monomers in polymers at time $t$, respectively. Integrating the
equation for $u(t,x)$ over $[x_0,\infty)$ we get
\begin{eqnarray*}
\frac{d}{dt} U(t)&=&-\tau V(t)u(t,x)|_{x_0}^\infty-\mu U(t)-\beta P(t)+2\beta \int_{x_0}^\infty\int_x^\infty u(t,y)dydx\\
&=& -\mu U(t)-\beta P(t) +2\beta\int_{x_0}^\infty u(t,y)(y-x_0)dy\\
&=& -\mu U(t)-\beta P(t)+2\beta P(t)-2\beta x_0 U(t),
\end{eqnarray*}
hence
$$\dot{U}(t)= -(\mu+2\beta x_0) U(t)+\beta P(t).$$
Multiplying the equation for $u(t,x)$ by $x$, integration yields
\begin{eqnarray*}
\frac{d}{dt} P(t)&=&-\tau V(t)(xu(t,x)|_{x_0}^\infty-\int_{x_0}^\infty u(t,y)dy)\\
&&-\mu P(t)-\beta \int_{x_0}^\infty u(t,x)x^2dx+2\beta \int_{x_0}^\infty x\int_x^\infty u(t,y)dydx\\
&=& \tau V(t)U(t)-\mu P(t)-\beta \int_{x_0}^\infty u(t,x) x^2dx +\beta\int_{x_0}^\infty u(t,y)(y^2-x_0^2)dy\\
&=& \tau V(t)U(t) -\mu P(t) -\beta x_0^2 U(t),
\end{eqnarray*}
hence
$$\dot{P}(t)= \tau U(t)V(t)-\mu P(t)-\beta x_0^2U(t).$$
Thus we obtain the following closed model involving only ordinary
differential equations.
\begin{eqnarray}
\label{model} \dot{U}&=& \beta P -\mu U -2\beta x_0 U\nonumber\\
\dot{V} &=& \lambda -\gamma V -\tau UV +\beta x_0^2 U\\
\dot{P} &=& \tau UV -\mu P -\beta x_0^2 U\nonumber
\end{eqnarray}
with initial conditions
$$ U(0)=U_0\geq 0,\quad V(0)=V_0\geq 0, \quad P(0)=P_0\geq x_0 U_0.$$
This way the partial differential equation for the density
$u(t,x)$ decouples from the ordinary differential equations. Once
the solutions of \eqref{model} are known, one has to solve only a
linear partial integro-differential equation to obtain $u(t,x)$.
The system \eqref{model} is identical to the ``basic virus dynamics model''
that is discussed at length in \cite{MaNo02}.
Concerning the ode-system \eqref{model} we have the following
result from Pr\"uss, Pujo-Menjouet, Webb and Zacher \cite{PPWZ05}.
\begin{theorem} Suppose $x_0,\beta,\gamma,\lambda,\mu,\tau>0$ are given constants. Then the system
(\ref{model}) induces a global semiflow on the set
$K=\{(U,V,P)\in{\mathbb R}^3:\; U,V,P-x_0U\geq0\}$. There is precisely one
disease free equilibrium $(0,\lambda/\gamma,0)$ which is globally
exponentially stable if and only if
$\mu+x_0\beta>\sqrt{\lambda\beta\tau/\gamma}$, and asymptotically
stable in case of equality. On the other hand, if
$\mu+x_0\beta<\sqrt{\lambda\beta\tau/\gamma}$ there is the unique
disease equilibrium
$$\Big(\frac{\lambda\beta\tau-\gamma(\mu+\beta
x_0)^2}{\mu\tau(\mu+2\beta x_0)},\frac{(\mu+\beta
x_0)^2}{\beta\tau}, \frac{\lambda\beta\tau-\gamma(\mu+\beta
x_0)^2}{\beta\mu\tau}\Big)$$
which is globally exponentially
stable in $K\setminus[\{0\}\times{\mathbb R}_+\times\{0\}]$.
\end{theorem}
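The statement of Theorem 1.1 is easy to probe numerically. The following Python sketch (forward Euler; the parameter values are arbitrary and satisfy the disease condition $\mu+x_0\beta<\sqrt{\lambda\beta\tau/\gamma}$) checks that the displayed disease equilibrium annihilates the right-hand side of \eqref{model} and that a trajectory started elsewhere in $K$ converges to it.

```python
import math

# arbitrary parameters with mu + x0*beta < sqrt(lambda*beta*tau/gamma)
x0, beta, gamma, lam, mu, tau = 1.0, 1.0, 1.0, 10.0, 1.0, 1.0
assert mu + x0 * beta < math.sqrt(lam * beta * tau / gamma)

def rhs(U, V, P):
    dU = beta * P - mu * U - 2 * beta * x0 * U
    dV = lam - gamma * V - tau * U * V + beta * x0 ** 2 * U
    dP = tau * U * V - mu * P - beta * x0 ** 2 * U
    return dU, dV, dP

s = lam * beta * tau - gamma * (mu + beta * x0) ** 2
U_star = s / (mu * tau * (mu + 2 * beta * x0))
V_star = (mu + beta * x0) ** 2 / (beta * tau)
P_star = s / (beta * mu * tau)

# the disease equilibrium is a zero of the vector field
assert all(abs(f) < 1e-12 for f in rhs(U_star, V_star, P_star))

# forward Euler from a different point of K converges towards it
U, V, P = 1.0, 10.0, 2.0
dt = 1e-3
for _ in range(int(50.0 / dt)):
    dU, dV, dP = rhs(U, V, P)
    U, V, P = U + dt * dU, V + dt * dV, P + dt * dP
assert max(abs(U - U_star), abs(V - V_star), abs(P - P_star)) < 1e-3
```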
It is the purpose of this paper to study the full system
\eqref{pde} under the assumptions of equi-distributed splitting,
linear splitting rate, and constant rates of degradation.
Since $V(t)+\int_{x_0}^\infty xu(t,x)dx$ is the total number of
monomers in the system, which should be finite at any time, it
seems reasonable to study \eqref{pde} in the standard cone $Z_+:=
{\mathbb R}_+\times L_1^+((x_0,\infty);xdx)$ of the Banach space $Z:=
{\mathbb R}\times L_1((x_0,\infty);xdx)$. The following theorem summarizes
our results.
\begin{theorem}
Assume equi-distributed splitting with linear splitting rate
$\beta(x)=\beta x$ and constant degradation rates $\gamma$ and
$\mu(x)\equiv \mu$. Suppose $\lambda,\tau,\beta,\gamma,\mu,x_0>0$.
Then \eqref{pde} generates a global semiflow in the natural phase
space $Z_+$. Furthermore,
\\(i) \, if $\lambda\beta\tau/\gamma\leq (\mu+\beta x_0)^2$, then
the disease-free equilibrium $\bar{z}=(\lambda/\gamma,0)$ is
globally asymptotically stable in $Z_+$, and even exponentially
in the case of strict inequality;
\\(ii) \, if $\lambda\beta\tau/\gamma> (\mu+\beta x_0)^2$, then there is a
unique disease equilibrium $z_*=(V_*,u_*)$ which is globally
asymptotically stable in $Z_+\setminus({\mathbb R}_+\times\{0\})$. It is
given by
$$ V_*= \frac{(\mu+\beta x_0)^2}{\beta\tau},\quad u_*(x)= \frac{2\beta}{\mu\tau}
\frac{\lambda\beta\tau-\gamma(\mu+\beta x_0)^2}{(\mu+\beta
x_0)(\mu+2\beta x_0)}\Phi\big(\frac{\beta(x-x_0)}{\mu+\beta
x_0}\big),$$
where $\Phi(r)= (r+r^2/2)\exp(-(r+r^2/2))$.
\end{theorem}
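The equilibrium density $u_*$ of Theorem 1.2 is consistent with Theorem 1.1: its zeroth and first moments reproduce the equilibrium values $U_*$ and $P_*$. The following quadrature sketch checks this for illustrative parameters (not part of the proof):

```python
import math

# Moments of u_* from Theorem 1.2 should reproduce the equilibrium
# values U_*, P_* of Theorem 1.1 (parameters are illustrative).
x0, beta, gamma, lam, mu, tau = 1.0, 0.5, 1.0, 10.0, 0.5, 1.0
mu0 = mu + beta * x0
D = lam * beta * tau - gamma * mu0 ** 2
Ueq = D / (mu * tau * (mu + 2 * beta * x0))
Peq = D / (beta * mu * tau)

c = (2 * beta / (mu * tau)) * D / (mu0 * (mu + 2 * beta * x0))
def u_star(x):
    r = beta * (x - x0) / mu0
    w = r + r * r / 2
    return c * w * math.exp(-w)

h, n = 1e-3, 30000                     # trapezoid rule on [x0, x0 + 30]
M0 = M1 = 0.0
for i in range(n + 1):
    x = x0 + i * h
    wt = h if 0 < i < n else h / 2
    M0 += wt * u_star(x)
    M1 += wt * x * u_star(x)
```

The truncation at $x_0+30$ is harmless since $\Phi$ decays like $e^{-r^2/2}$.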
The remaining part of this paper deals with the proof of this
result. Recall that the function $\omega(t):= \tau V(t)$ can be
considered as known, by Theorem 1.1, and $\omega(t)\rightarrow
\omega_\infty$ exponentially, where either $\omega_\infty =
\lambda\tau/\gamma$ in the disease-free case or $\omega_\infty= (\mu+\beta
x_0)^2/\beta$ in the disease case. Hence we have to solve a linear
nonautonomous partial integro-differential equation of first
order. For this we shall use standard techniques from the theory
of $C_0$-semigroups and we refer to the monograph Arendt, Batty,
Hieber and Neubrander \cite{ABHN01} as a general reference for the
results employed below.
We proceed in four steps. First we study the autonomous case
where $\omega\equiv \omega_\infty$. In Section 2 we show that
there is a unique $C_0$-semigroup $T(t)=e^{-Lt}$ associated with
the pde-part of \eqref{pde} in $X=L_1((x_0,\infty);xdx)$, which is
positive and contractive, and even exponentially stable in the
disease-free case. The resolvent of $L$ is shown to be compact in
Section 3, hence $L$ has only point spectrum in the closed right
half-plane. In the disease case, we further show that 0 is the
only eigenvalue of $L$ on the imaginary axis, it is simple and so
the ergodic projection ${\mathcal P}$ onto the kernel $N(L)$ of $L$ along
the range $R(L)$ of $L$ exists and is rank one. We compute an
element $e\in N(L)$ which is positive. A result of Arendt, Batty,
Lubich and Phong \cite{ABHN01} then shows that $T(t)$ is strongly
ergodic, i.e. $\lim_{t\rightarrow\infty}T(t)={\mathcal P}$ strongly in $X$.
Well-posedness of the nonautonomous problem is proved in Section 4
by means of monotone convergence; there it is shown that the evolution
operator exists and is bounded. Moreover, bounds for $\partial_x
u(t,\cdot)$ in $X$ are derived. Finally, in Section 5 we put
together these results to prove Theorem 1.2.
While we assume throughout that $\beta(x) = \beta x, \, \mu(x) =
\mu$ (constant), and $y\kappa(x,y) = 1$ for $x < y ,\, y > x_0$,
$\kappa(x,y) = 0$ elsewhere, our methods extend to versions of
(1.1) where these assumptions do not hold. We do not carry out
these generalizations since it is not clear which of them would be
biologically reasonable. On the other hand, the equation discussed
in this paper
$$
\partial_t u(t,x) = -\tau V(t)\partial_x u(t,x) -(\mu +\beta x)u(t,x) +2\beta\int_x^\infty u(t,y)dy
$$
for $x > x_0, \, t> 0$, with initial and boundary data as in
(1.1), can be solved with an integral transformation followed by
the method of characteristics. Namely, define
$$v(t,x)= \int_x^\infty \int_y^\infty u(t,\xi) \, d\xi \, dy =
\int_x^\infty (\xi - x)u(t,\xi) \, d\xi, \quad \partial_x^2 v(t,x)
= u(t,x) \, .
$$
Then a computation shows that $v$ solves the first order partial
differential equation without integral term
$$
\partial_t v(t,x)=-\tau V(t) \partial_x v(t,x)-(\mu+\beta x)v(t,x)
$$
for $x > x_0, \, t>0$, with initial data $v(0,x)$ obtained by
integrating $u_0$ twice and boundary data $v(t,x_0) = P(t) - x_0
U(t)$. The equation for $v$ may be solved by the method of
characteristics, and $u$ is recovered from $\partial_x^2 v(t,x) =
u(t,x)$. The solution depends on the initial data in the region
$\{(x,t) \, | \, x > x_0 + \tau \int_0^tV(s) ds \, \}$ and on the
boundary data in the complement of this region. Since $V(t)$
always has a positive limit, it is evident that the contribution
from the initial data is swept out towards large $x$-values and
decays exponentially, in fact, at a rate like $e^{-\epsilon t^2}$
for some $\epsilon > 0$. If the disease-free state is stable, then
$\left( P(t), U(t) \right) \to (0,0)$ as $t \to \infty$, which
implies that the solution $u$ converges to zero also in the region
where it depends on the boundary data. In the case of a positive
disease equilibrium, $P(t) - x_0 U(t)$ has a positive limit as $t
\to \infty$, which determines the limiting equilibrium distribution
$u_*$ given in Theorem 1.2. This method breaks down if
$\beta(\cdot), \, \mu(\cdot)$, or $\kappa(\cdot,\cdot)$ have more
complicated forms, as the reader will readily confirm.
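The integral transformation can be checked on a concrete profile: for $u(\xi)=e^{-\xi}$ the double integral collapses to $v(x)=e^{-x}$, so $\partial_x^2 v=u$ is visible by inspection. A small quadrature sketch (the test function is chosen purely for convenience):

```python
import math

# For u(xi) = exp(-xi), the transform v(x) = int_x^inf (xi - x) u(xi) dxi
# equals exp(-x), so v'' = u holds exactly for this test profile.
def v_quad(x, cutoff=40.0, h=1e-3):
    n = int((cutoff - x) / h)
    s = 0.0
    for i in range(n + 1):
        xi = x + i * h
        wgt = h if 0 < i < n else h / 2   # trapezoid weights
        s += wgt * (xi - x) * math.exp(-xi)
    return s

vals = {x: v_quad(x) for x in (0.0, 0.5, 1.0, 2.0)}
```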
\section{The Linear Autonomous Problem}
\subsection{Functional Analytic Setting}
We consider the problem
\begin{eqnarray}
\label{pde0}
\partial_t u(t,x)+\omega \partial_x u(t,x)+(\mu+\beta x)u(t,x)= 2\beta\int_x^\infty u(t,y)dy,\\
u(0,x)= u_0(x),\quad u(t,x_0)=0, \quad t>0,\;x>x_0.\nonumber
\end{eqnarray}
Set $w(t,x)=u(t,x+x_0)$, $x\geq0$. Then this problem becomes the
following one on ${\mathbb R}_+$.
\begin{eqnarray}
\label{pde1}
\partial_t w(t,x)+\omega \partial_x w(t,x)+(\mu_0+\beta x)w(t,x)= 2\beta\int_x^\infty w(t,y)dy,\\
w(0,x)=g(x):=u_0(x+x_0),\quad w(t,0)=0, \quad
t>0,\;x>0.\nonumber
\end{eqnarray}
Here we have set $\mu_0=\mu+\beta x_0$, and $\omega$ plays the role
of $\tau V(t)$ as $t\to\infty$, i.e.\
$$\omega=\tau V(\infty)=\lambda\tau/\gamma$$ in the disease-free case or
$$\omega=\tau V(\infty)=(\mu+\beta x_0)^2/\beta=\mu_0^2/\beta$$
in the disease case.
We want to study (\ref{pde1}) in the basic space
$X=L_1({\mathbb R}_+;(a+x)dx)$, where we choose as the norm
$$ ||w||=a|w|_1+|xw|_1,$$
with $a>0$ to be determined later. We define two linear operators
in $X$ by means of
$$ Au(x)= \omega u^\prime(x)+(\mu_0+\beta x)u(x),\quad x\in{\mathbb R}_+,$$
with domain
$$D(A)=\{ u\in W^1_1({\mathbb R}_+)\cap X: \; x^2u\in L_1({\mathbb R}_+), x u^\prime(x)\in L_1({\mathbb R}_+),\; u(0)=0\},$$
and $$Bu(x)=2\beta\int_x^\infty u(y)dy,\quad D(B)=D(A).$$ Both
operators are well-defined and linear, $B$ will be considered as a
perturbation of $A$.
\subsection{$m$-Accretivity of $A$}
We have
\begin{eqnarray*}
\int_0^\infty Au \operatorname{sgn}{u} dx &=& \omega\int_0^\infty |u|^\prime dx +\mu_0|u|_1 +\beta|xu|_1\\
&=& \mu_0|u|_1 +\beta|xu|_1,
\end{eqnarray*}
and
\begin{eqnarray*}
\int_0^\infty Au \operatorname{sgn}{u} xdx &=& \omega\int_0^\infty |u|^\prime xdx +\mu_0|xu|_1 +\beta|x^2u|_1\\
&=& -\omega|u|_1+\mu_0|xu|_1 +\beta|x^2u|_1.
\end{eqnarray*}
Employing the bracket in $L_1$ this implies
$$[Au,u]_+\geq (a\mu_0-\omega)|u|_1+(a\beta +\mu_0)|xu|_1\geq \eta||u||,$$
for some $\eta>0$ provided $\mu_0>\omega/a$. Hence for such $a$,
$A$ is strictly accretive, in particular closable.
Next we compute the resolvent of $A$. The equation
$(\lambda+A)u=f$ is equivalent to solving the ode
\begin{equation}\label{help}
\lambda u(x)+ \omega u^\prime(x)+(\mu_0+\beta x)u(x)=f(x),\quad
x>0,
\end{equation}
with initial condition $u(0)=0$. Therefore we obtain
$$u(x)=\big((\lambda+A)^{-1} f\big)(x)=\frac{1}{\omega}\int_0^x
\exp\big(-[(\lambda+\mu_0)(x-y)/\omega+\beta(x^2-y^2)/2\omega]\big)
f(y)dy.$$
If $f\in L_1({\mathbb R}_+)$ then one easily obtains the estimate
$$ |u|_1\leq |f|_1/(\lambda+\mu_0).$$
If also $xf\in L_1({\mathbb R}_+)$ then
\begin{eqnarray*}
|x^2u(x)|&\leq& \frac{1}{\omega}\int_0^x e^{-(\lambda+\mu_0)(x-y)/\omega}(x^2-y^2)e^{-\beta(x^2-y^2)/2\omega}|f(y)|dy\\
&+& \frac{1}{\omega}\int_0^x ye^{-\beta(x-y)2y/2\omega}y|f(y)|dy,
\end{eqnarray*}
hence
$$|x^2u|_1\leq \frac{1}{\omega}\frac{\omega}{\lambda+\mu_0}\frac{2\omega}{\beta e}|f|_1
+\frac{1}{\omega}\frac{\omega}{\beta}|xf|_1.$$
This shows that
$x^2u\in L_1({\mathbb R}_+)$, hence $xu\in L_1({\mathbb R}_+)$, and then by
equation \eqref{help} also $u^\prime\in L_1({\mathbb R}_+)$ as well as $x
u^\prime\in L_1({\mathbb R}_+)$, i.e.\ $u\in D(A)$. This shows that $A$ is
$m$-accretive.
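The explicit resolvent formula can be verified numerically: the kernel factors as $E(x)/E(y)$ with $E(x)=\exp(-[(\lambda+\mu_0)x+\beta x^2/2]/\omega)$, so $u=(\lambda+A)^{-1}f$ is a cumulative quadrature, and the residual of $\lambda u+\omega u^\prime+(\mu_0+\beta x)u=f$ can be checked by central differences. A sketch with arbitrary test data (not part of the proof):

```python
import math

# Numerical check of u = (lambda + A)^{-1} f: the resolvent kernel
# factors as E(x)/E(y) with E(x) = exp(-((lam+mu0)*x + beta*x**2/2)/om).
lam, om, mu0, beta = 1.0, 1.0, 1.0, 1.0
f = lambda y: math.exp(-y)
E = lambda x: math.exp(-((lam + mu0) * x + beta * x * x / 2) / om)

h, n = 1e-3, 2000                       # grid on [0, 2]
xs = [i * h for i in range(n + 1)]
# cumulative trapezoid of f(y)/E(y), so that u(x) = E(x)/om * C(x)
C, u = [0.0], [0.0]
for i in range(1, n + 1):
    C.append(C[-1] + h * (f(xs[i-1]) / E(xs[i-1]) + f(xs[i]) / E(xs[i])) / 2)
    u.append(E(xs[i]) / om * C[-1])

# residual of  lam*u + om*u' + (mu0 + beta*x)*u = f  via central differences
res = []
for i in range(1, n):
    up = (u[i+1] - u[i-1]) / (2 * h)
    res.append(lam * u[i] + om * up + (mu0 + beta * xs[i]) * u[i] - f(xs[i]))
max_res = max(abs(r) for r in res)
```

The residual is small because, by the Leibniz rule, $\omega u^\prime=f-(\lambda+\mu_0+\beta x)u$ holds exactly for the integral formula.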
As a consequence we note that $-A$ generates a $C_0$-semigroup in
$X$ which is also positive and strictly contractive, hence
exponentially stable.
\subsection{Accretivity of $A-B$}
We have
$$|\int_x^\infty u(y)dy|_1\leq |xu|_1,\quad |x\int_x^\infty u(y)dy|_1\leq \frac{1}{2}|x^2u|_1,$$
and therefore
$$\int_0^\infty (Au-Bu)\operatorname{sgn}(u)dx\geq \mu_0|u|_1+\beta|xu|_1-2\beta|xu|_1,$$
as well as
$$\int_0^\infty(Au-Bu)\operatorname{sgn}(u) xdx\geq -\omega|u|_1+\mu_0|xu|_1.$$
This yields
$$[(A-B)u,u]_+\geq (\mu_0a-\omega)|u|_1+(\mu_0-\beta a)|xu|_1\geq0,$$
for all $u\in D(A)$, provided $\mu_0a\geq\omega$ and $\mu_0\geq
\beta a$. Such a choice of $a>0$ is possible if and only if the
condition $\omega/\mu_0\leq \mu_0/\beta$ is met, i.e.\ if and only
if
$$\omega\leq \mu_0^2/\beta$$
holds true. Now in the disease-free case we have
$\omega=\lambda\tau/\gamma$, while in the disease case
$\omega=\mu_0^2/\beta$, which forces $a=\mu_0/\beta$. Thus $A-B$ is
strictly accretive in the disease-free case, while it is merely
accretive in the disease case. In the first case, the decay rate
is easily estimated from below by
$\mu_0-\sqrt{\lambda\beta\tau/\gamma}$.
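The weight $a$ trades the two coefficients off against each other: maximizing $\min(\mu_0-\omega/a,\,\mu_0-\beta a)$ over $a>0$ gives the rate $\mu_0-\sqrt{\omega\beta}$, attained at $a=\sqrt{\omega/\beta}$. A small grid search confirms this (numbers illustrative):

```python
# Grid search for the best accretivity constant: maximize over a > 0 the
# quantity min(mu0 - om/a, mu0 - beta*a); the optimum is mu0 - sqrt(om*beta).
mu0, om, beta = 2.0, 1.5, 0.8           # disease-free regime: om < mu0**2/beta
best = max(min(mu0 - om / a, mu0 - beta * a)
           for a in (0.5 + i * 1e-4 for i in range(25001)))  # a in [0.5, 3]
exact = mu0 - (om * beta) ** 0.5
```

With $\omega=\lambda\tau/\gamma$ this reproduces the decay rate $\mu_0-\sqrt{\lambda\beta\tau/\gamma}$ quoted above.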
\subsection{Density of the Range of $A-B$}
Let $f\in L_1({\mathbb R}_+;(a+x)dx)$ be given and assume $f\geq0$. Set
$u_1=(1+A)^{-1}f$ and define the sequence $u_n$ inductively by
means of
$$ u_{n+1}=u_1+(1+A)^{-1} Bu_n.$$
Then $u_1\geq0$, and $u_{2}-u_1=(1+A)^{-1}Bu_1\geq0$, hence by
induction $u_{n+1}\geq u_n$ pointwise, since $B$ is positive. This
shows that the sequence of functions $u_n$ is nonnegative and
increasing pointwise. Moreover,
$$\omega u^\prime_n +(1+\mu_0+\beta x)u_n= f+2\beta\int_x^\infty u_{n-1}(y)dy\leq f+2\beta\int_x^\infty u_n(y)dy,$$
which implies
$$(1+\mu_0)|u_n|_1+\beta|xu_n|_1\leq |f|_1+2\beta|xu_n|_1,$$
and
$$ -\omega|u_n|_1+(1+\mu_0)|xu_n|_1+\beta|x^2u_n|_1\leq |xf|_1+\beta|x^2u_n|_1.$$
Choosing $a$ as above this yields an a priori bound for the
sequence $(u_n)$
$$||u_n||=a|u_n|_1+|xu_n|_1\leq C||f||,$$
and therefore we may conclude by the monotone convergence theorem
$u_n\rightarrow u_\infty$ as $n\rightarrow\infty$. If in addition
$x^2f\in L_1({\mathbb R}_+)$ then we obtain in a similar way boundedness of
$x^2u_n$ in $X$. This implies $(1+A-B)u_n=
f+B(u_{n-1}-u_n)\rightarrow f$ in $X$ as $n\rightarrow\infty$,
hence $u_\infty\in D(\overline{A-B})$ and
$u_\infty=(1+\overline{A-B})^{-1}f$. Since $L_1=L_1^+-L_1^+$ we
may conclude $R(1+\overline{A-B})=X$, i.e.\ the closure of $A-B$
is $m$-accretive.
\bigskip
\begin{remark}
The above proof shows that the resolvent of $\overline{A-B}$ is
positive, hence the semigroup generated by this operator is
positive as well.
\end{remark}
\subsection{ Irreducibility}
Suppose $f\in X$ is nonnegative and $u$ solves
$$\omega u^\prime +(\lambda +\mu_0+\beta x)u =f+2\beta\int_x^\infty u(y)dy,\quad x\geq0,$$
with initial value $u(0)=0$. If $f\not\equiv0$ then let
$x_1:=\inf\operatorname{supp} f$. We have
$$u(x)=\frac{1}{\omega}\int_0^x \exp\big(-[(\lambda+\mu_0)(x-y)/\omega+\beta(x^2-y^2)/2\omega]\big) [f(y)+Bu(y)]dy.$$
Since we already know $u(x)\geq0$, this formula implies $u(x)>0$
for all $x>x_1$. But then $\int_x^\infty u(y)dy>0$ for all
$x\geq0$, and so $u(x)>0$ for all $x>0$. This proves the
irreducibility of the semigroup generated by $\overline{A-B}$.
\subsection{$A-B$ is not Closed}
Unfortunately, the sum $A-B $ is not closed. We show this by the
following example.
\begin{example}
Set $u=\chi/x^3$ where $\chi$ denotes a cut-off function which is
$0$ on $[0,1]$ and $1$ on $[2,\infty)$. Then $u,u^\prime,xu\in
L_1({\mathbb R}_+)$, but $x^2u\not\in L_1({\mathbb R}_+)$, and $u(0)=0$. On the
other hand,
\begin{eqnarray*}
f(x)&:=& \omega u^\prime(x)+(\lambda +\mu_0+\beta x)u(x)-2\beta\int_x^\infty u(y)dy\\
&=& \omega\chi^\prime/x^3-3\omega\chi/x^4+(\lambda+\mu_0)\chi/x^3
+\beta \chi/x^2- 2\beta \int_x^\infty\chi(y)dy/y^3
\end{eqnarray*}
Since
\begin{eqnarray*}
&&\chi(x)/x^2-2\int_x^\infty\chi(y)dy/y^3 = \chi(x)/x^2+\chi(y)/y^2|_x^\infty-\int^\infty_x\chi^\prime(y)dy/y^2\\
&=&-\int_x^\infty \chi^\prime(y)dy/y^2,
\end{eqnarray*}
we obtain
$$f=\omega \chi^\prime(x)/x^3-3\omega\chi(x)/x^4+(\lambda+\mu_0)\chi(x)/x^3-\beta \int_x^\infty \chi^\prime(y)dy/y^2.$$
Obviously, $f$ as well as $xf$ belong to $L_1({\mathbb R}_+)$, so $A-B$
with domain $D(A)$ is not closed.
\end{example}
\subsection{Summary}
Let us summarize what we have shown so far.
\begin{theorem}
Suppose $\beta\omega\leq \mu_0^2$. Then problem (\ref{pde1}) is
well-posed in $X=L_1({\mathbb R}_+;(a+x)dx)$ and admits an associated
$C_0$-semigroup $T(t)=e^{-Lt}$ which is positive. If $a$ is chosen
from the interval $a\in[\omega/\mu_0, \mu_0/\beta]$ then $T(t)$ is
nonexpansive.
In the strictly disease-free case
$\omega=\lambda\tau/\gamma<\mu_0^2/\beta$, the semigroup $T(t)$ is
exponentially stable with type $\omega_0(T)\leq
-\mu_0+\sqrt{\lambda\beta\tau/\gamma}<0$.
\end{theorem}
\section{Asymptotic Behavior of the Autonomous Problem}
\subsection{Compactness}
Set $L=\overline{A-B}$. Since $L$ is m-accretive in
$X=L_1({\mathbb R}_+;(a+x)dx)$, the spectrum $\sigma(L)$ is contained in
the closed right halfplane. We want to show that the resolvent of
$L$ is compact. For this purpose we derive another representation
of $(\lambda+L)^{-1}$ for $\lambda>0$. Let $f\in X$ and set
$u=(\lambda+L)^{-1}f$. Then we obtain
$$u=(\lambda+A)^{-1}f + (\lambda+A)^{-1} B u,$$
and
\begin{eqnarray*}
(\lambda+A)^{-1}Bu&=& 2\beta(\lambda+A)^{-1}[\int_x^\infty u(y)dy]\\
&=& \frac{2\beta}{\omega} \int_0^x e^{-(\lambda +\mu_0)(x-y)/\omega}e^{-\beta(x^2-y^2)/2\omega}[\int_y^\infty u(r)dr]dy\\
&=& \frac{2\beta}{\omega} \int_x^\infty u(r)[\int_0^x e^{-(\lambda +\mu_0)(x-y)/\omega}e^{-\beta(x^2-y^2)/2\omega}dy]dr\\
&+& \frac{2\beta}{\omega} \int_0^x u(r) [\int_0^r e^{-(\lambda +\mu_0)(x-y)/\omega}e^{-\beta(x^2-y^2)/2\omega}dy]dr\\
&=& k_\lambda(x) \int_x^\infty u(r)dr + \omega(\lambda+A)^{-1}
[k_\lambda u],
\end{eqnarray*}
where
$$k_\lambda(x)= \frac{2\beta}{\omega}\int_0^x e^{-(\lambda +\mu_0)(x-y)/\omega}e^{-\beta(x^2-y^2)/2\omega}dy.$$
Note that
$$0\leq k_\lambda(x)\leq \frac{2\beta}{\omega}\int_0^x e^{-(\lambda +\mu_0)(x-y)/\omega}dy\leq \frac{2\beta}{\lambda+\mu_0},$$
i.e.\ $k_\lambda\in L_\infty({\mathbb R}_+)$. We thus have the identity
$$u(x)-k_\lambda(x)\int_x^\infty u(y)dy = (\lambda+A)^{-1}f(x)+ \omega(\lambda+A)^{-1}[k_\lambda u]=: g(x),$$
and $u(0)=0$. We may solve this equation for $u$ to the result
$$ u(x)= g(x)-k_\lambda(x)\int_0^x \exp\left(-\int_y^x k_\lambda(r)dr\right) g(y)dy + k_\lambda(x) \exp \left(-\int_0^x k_\lambda(s)ds\right)\langle q_\lambda, f\rangle,$$
where
$$\langle q_\lambda,f\rangle :=
\frac{1}{(\lambda+\mu_0)^2-\omega\beta}\Big((\lambda+\mu_0)\int_0^\infty
f(s)ds+\beta \int_0^\infty sf(s)ds\Big)\,.$$
This way we have the
representation
\begin{equation}
\label{repres} (\lambda+L)^{-1}f = (1-R_\lambda)(\lambda+A)^{-1}[
1+\omega k_\lambda (\lambda+L)^{-1}]f +k_\lambda(x)
\exp\left(-\int_0^x k_\lambda(s)ds \right)\langle q_\lambda, f\rangle,
\end{equation}
with
$$(R_\lambda g)(x) = k_\lambda(x)\int_0^x \exp \left(-\int_y^x k_\lambda(r)dr\right) g(y)dy.$$
Next $D(A)$ embeds compactly into $X$, hence $(\lambda+A)^{-1}$ is
compact. From boundedness of $k_\lambda$ we may then conclude that
$(\lambda+L)^{-1}$ is compact, as soon as we know that the
Volterra operator $R_\lambda$ is bounded in $X$.
To prove the latter we estimate as follows
\begin{eqnarray*}
||R_\lambda g||&=&\int_0^\infty (a+x) k_\lambda(x)|\int_0^x \exp\left(-\int_y^x k_\lambda(r)dr\right) g(y)dy|dx\\
&\leq& \int_0^\infty|g(y)|[\int_y^\infty (a+x)k_\lambda(x)\exp\left(-\int_y^x k_\lambda(r)dr\right)dx]dy\\
&=& \int_0^\infty |g(y)| [ (a+y) + \int_y^\infty \exp\left(-\int_y^x k_\lambda(r)dr\right)dx]dy\\
&\leq& C_\lambda\int_0^\infty |g(y)|(a+y)dy=C_\lambda||g||,
\end{eqnarray*}
where the final estimate is a consequence of the following lower bound for $k_\lambda$.
\begin{eqnarray*}
k_\lambda(x)&=& \frac{2\beta}{\omega}\int_0^x e^{-(\lambda +\mu_0)(x-y)/\omega}e^{-\beta(x^2-y^2)/2\omega}dy\\
&\geq & \frac{2\beta}{\omega}\int_0^x e^{-(\lambda +\mu_0)y/\omega}e^{-\beta xy/\omega}dy\\
&=& \frac{2\beta}{\lambda+\mu_0+\beta x} (1- e^{-(\lambda+\mu_0+\beta x)x/\omega})\\
&\geq & \frac{2\beta}{\lambda+\mu_0+\beta x}\cdot \frac{(\lambda+\mu_0+\beta x)x/\omega}{1+(\lambda+\mu_0+\beta x)x/\omega}\\
&=& \frac{2\beta x}{\omega +(\lambda+\mu_0+\beta x)x},
\end{eqnarray*}
by the elementary inequality $1-e^{-x} \geq x/(1+x)$. This implies
\begin{eqnarray*}
\int_y^x k_\lambda(r)dr &\geq& 2\beta\int_y^x rdr/(\omega +(\lambda+\mu_0+\beta r)r)\\
&=& \int_y^x \frac{2\beta r
+\lambda+\mu_0}{\omega+(\lambda+\mu_0)r +\beta r^2}dr
-(\lambda+\mu_0)\int_y^x \frac{dr}
{\omega+(\lambda+\mu_0)r +\beta r^2}\\
&\geq& \log \frac{\omega +(\lambda+\mu_0)x + \beta
x^2}{\omega+(\lambda+\mu_0)y +\beta y^2} - c_\lambda,
\end{eqnarray*}
since the second integral is bounded. This estimate finally yields
$$\int_y^\infty \exp\left(-\int_y^x k_\lambda(r)dr\right)dx\leq e^{c_\lambda}\int_y^\infty\frac{\omega +(\lambda+\mu_0)y + \beta y^2}
{\omega+(\lambda+\mu_0)x +\beta x^2}dx\leq C_\lambda(a+y).
$$
This completes the proof of compactness of the resolvent of $L$.
\subsection{Ergodicity}
Since the resolvent of $L$ is compact we know that the spectrum of
$L$ consists only of eigenvalues of finite multiplicity, which are
poles of the resolvent of $L$. By accretivity of $L$ we have the
inequality $|(\lambda+L)^{-1}|_{{\cal B}(X)}\leq 1/{\rm
Re}\lambda$, ${\rm Re} \lambda>0$, hence the resolvent can only
have poles of first order on the imaginary axis. This shows that
all eigenvalues on the imaginary axis are semisimple. Compactness
of the resolvent implies also that the range of $\lambda+L$ is
closed, for each $\lambda\in{\mathbb C}$. In particular, we have the direct
sum decomposition $X= N(L)\oplus R(L)$, i.e.\ ergodicity in the
sense of Abel.
Now we concentrate on the disease equilibrium which means
$a=\mu_0/\beta$ and $\omega=\mu_0^2/\beta$. A function $e(x)$
belongs to the kernel of $L$ if
$$ \omega e^\prime (x)+(\mu_0+\beta x) e(x)- 2\beta \int_x^\infty e(y)dy =0, \quad x>0,\; e(0)=0,$$
or equivalently
$$ e^{\prime\prime}(x)+\frac{\beta}{\mu_0}(1+\frac{\beta}{\mu_0}x) e^\prime(x)+ 3 \frac{\beta^2}{\mu_0^2} e(x)=0, \quad x>0,\; e(0)=0.$$
The scaling $e(x)= v(\beta x/\mu_0)$ reduces this problem to
$$ v^{\prime\prime}(z) +(1+z)v^\prime(z)+3 v(z)=0,\quad z>0,\; v(0)=0.$$
By the initial condition $v(0)=0$, this shows that the kernel of
$L$ is at most one-dimensional, and a simple computation yields
that
$$v(z)= (z+z^2/2) e^{-(z+z^2/2)}, \quad z>0,$$
is a solution. Therefore $N(L)={\rm span}\{ e\}$, with $e(x)=
(\beta/\mu_0)^2v(\beta x/\mu_0)$, and another simple computation
yields
$$ \int_0^\infty (a+x) e(x)dx =1.$$
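Both claims about the kernel function can be confirmed numerically: $v(z)=(z+z^2/2)e^{-(z+z^2/2)}$ satisfies the second-order ODE (checked here by central differences), and $\int_0^\infty(1+z)v(z)\,dz=1$, which is exactly the normalization $\int_0^\infty(a+x)e(x)\,dx=1$ after the substitution $z=\beta x/\mu_0$. A sketch:

```python
import math

# v(z) = (z + z^2/2) * exp(-(z + z^2/2)) should satisfy
# v'' + (1+z) v' + 3 v = 0, v(0) = 0, and int_0^inf (1+z) v dz = 1.
def v(z):
    w = z + z * z / 2
    return w * math.exp(-w)

h = 1e-4
ode_res = max(
    abs((v(z + h) - 2 * v(z) + v(z - h)) / h ** 2
        + (1 + z) * (v(z + h) - v(z - h)) / (2 * h) + 3 * v(z))
    for z in (0.5, 1.0, 2.0, 3.0))

# trapezoid rule for the normalization integral on [0, 12]
hq, n = 1e-3, 12000
norm = sum((hq if 0 < i < n else hq / 2) * (1 + i * hq) * v(i * hq)
           for i in range(n + 1))
```

The normalization is transparent analytically as well: with $w=z+z^2/2$ one has $\int_0^\infty(1+z)v\,dz=\int_0^\infty we^{-w}dw=1$.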
Since $L$ is Fredholm with index zero, the dual operator $L^*$
also has a one-dimensional kernel $N(L^*)$, which consists of the
constant functions. The ergodic projection ${\mathcal P}$ onto the kernel
of $L$ along the range of $L$ is then given by
\begin{equation}\label{projection}
{\mathcal P} u(x)= [\int_0^\infty(a+y) u(y)dy] e(x)= \langle e^*, u\rangle e(x), \quad
x>0.
\end{equation}
Suppose there are no other eigenvalues of $L$ on the imaginary
axis. Then $L^*$ also has no other eigenvalues on the imaginary
axis, and then by the theorem of Arendt, Batty, Lubich and Phong
we may conclude that
$$ e^{-Lt} u \rightarrow {\mathcal P} u \quad \mbox{ as } t\rightarrow\infty, \mbox{ for each } u\in X,$$
i.e.\ the semigroup generated by $-L$ is strongly ergodic.
We show now that there are in fact no eigenvalues other than 0 on
the imaginary axis. Suppose on the contrary that
$$ i\rho u(x) +\omega u^\prime(x) +(\mu_0+\beta x)u(x)= 2\beta\int_x^\infty u(y)dy, \quad x>0,\; u(0)=0,$$
$u\neq0$. Multiplying this equation with $\bar{u}/|u|$, taking
real parts, and integrating over ${\mathbb R}_+$ we obtain
\begin{equation}\label{i}
\mu_0|u|_1 +\beta|xu|_1 = 2\beta {\rm Re}\int_0^\infty
u(x)\int_0^x \bar{u}(y)/|u(y)|dydx\leq 2\beta |xu|_1,
\end{equation}
and similarly, multiplying with $x\bar{u}(x)/|u(x)|$ we get
\begin{equation}\label{ii}
-\omega|u|_1+ \mu_0|xu|_1+\beta |x^2u|_1= 2\beta {\rm Re}
\int_0^\infty u(x)\int_0^x y\bar{u}(y)/|u(y)|dydx\leq \beta
|x^2u|_1.
\end{equation}
Multiplying the first inequality with $a=\mu_0/\beta$ and adding
the second we arrive at a contradiction if at least one of the
inequalities (\ref{i}), (\ref{ii}) is strict. Hence we must have
$$ {\rm Re}\int_0^\infty u(x)\int_0^x \bar{u}(y)/|u(y)|dydx= |xu|_1,$$
which implies with $\arg u(x)=\theta(x)$
$$x\equiv{\rm Re}\int_0^x e^{i(\theta(x)-\theta(y))}dy= \frac{1}{2}\frac{d}{dx} |\int_0^x e^{i\theta(y)}dy|^2,$$
or equivalently
$$ |\int_0^x e^{i\theta(y)}dy|^2=x^2, \quad x>0.$$
But this is only possible if $\theta(y)$ is constant; w.l.o.g.\ we
may assume $\theta =0$, i.e.\ $u(x)$ is nonnegative, which in turn
yields $\rho=0$, since $u\neq0$ by assumption.
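The rigidity step is easy to visualize: for any nonconstant phase the modulus $|\int_0^x e^{i\theta(y)}dy|$ falls strictly below $x$. For instance $\theta(y)=y$ gives $|\int_0^x e^{iy}dy|=2|\sin(x/2)|<x$ for $x>0$, which a quick quadrature reproduces (purely illustrative):

```python
import cmath, math

# For the nonconstant phase theta(y) = y, the modulus of int_0^x e^{iy} dy
# equals 2*sin(x/2), which is strictly less than x for x > 0.
def F(x, n=10000):
    h = x / n
    s = sum((h if 0 < i < n else h / 2) * cmath.exp(1j * i * h)
            for i in range(n + 1))
    return abs(s)

x = 1.0
modulus = F(x)
exact = 2 * math.sin(x / 2)
```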
\bigskip
\subsection{Summary}
Let us summarize what we have shown in this section.
\begin{theorem}
Assume the disease case $\omega=\mu_0^2/\beta$, $a=\mu_0/\beta$.
Then the semigroup $T(t)=e^{-Lt}$ is strongly ergodic: it converges
strongly to the projection ${\mathcal P}$ onto the kernel $N(L)$ of $L$
along its range $R(L)$. The kernel is one-dimensional and spanned
by $e(x)= (\beta/\mu_0)^2\Phi(\beta x/\mu_0)$, where
$\Phi(z)=(z+z^2/2)e^{-(z+z^2/2)}$, and the projection ${\mathcal P}$ is
given by
$$ {\mathcal P} u(x)=[\int_0^\infty(a+y)u(y)dy]e(x)= \langle e^*, u\rangle e(x), \quad x>0,\; u\in X.$$
\end{theorem}
\noindent
{\em Remark.} We do not know whether the ergodicity is
exponential since it is not clear that the type of the semigroup
$e^{-Lt}$ restricted to $R(L)$ is negative.
\section{Well-posedness of the Non-Autonomous Evolution}
\subsection{The Trivial Evolution}
Let $\omega\in C({\mathbb R}_+)$ be positive, such that
$0<\omega_\infty=\lim_{t\rightarrow\infty} \omega(t)$ exists, and
assume $\omega(\cdot)-\omega_\infty\in L_1({\mathbb R}_+)$. Let
$$\omega_+=\max_{s\geq0} \omega(s)\quad \mbox{ and }\quad \omega_-=\min_{s\geq0} \omega(s),$$
and note that $\omega_+\geq \omega_->0$. We are particularly
interested in the cases $\omega_\infty=\lambda\tau/\gamma$, the
disease-free case, and $\omega_\infty= \mu_0^2/\beta$, the disease
case. We want to show that the nonautonomous problem is well-posed
in $X=L_1({\mathbb R}_+;(a+x)dx)$. We begin with the problem
\begin{eqnarray}
\label{trivial}
\partial_t u(t,x)+\omega(t)\partial_x u(t,x) +(\mu_0+\beta x) u(t,x)=0,\quad x>0, t>s\geq0\\
u(s,x)=g(x),\quad u(t,0)=0,\quad t>s\geq0, x>0.\nonumber
\end{eqnarray}
The method of characteristics easily yields the evolution operator
$U_0(t,s)$ for this problem. It is given by
\begin{eqnarray}
\label{evop}
[U_0(t,s)g](x)=u(t,x)= g(x-\int_s^t\omega(\tau)d\tau)e^{-\phi(t,s,x)},\\
\phi(t,s,x)=\mu_0(t-s)+
\beta(t-s)(x-\int_s^t\omega(\tau)d\tau)+\beta\int_s^t(t-\tau)\omega(\tau)d\tau,\nonumber
\end{eqnarray}
if we extend $g$ trivially to ${\mathbb R}$. We obviously have the estimate
$|U_0(t,s)|_{{\cal B}(X)}\leq e^{-\mu_0(t-s)}$, and $u(t,x)$ is a
strong solution in $X$ if the initial function $g$ belongs to $D$
defined by
$$ D:=\{g\in L_1({\mathbb R}_+):\;x^2g,g^\prime,xg^\prime\in L_1({\mathbb R}_+), g(0)=0\}.$$
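Formula \eqref{evop} can be spot-checked by finite differences. For constant $\omega$ the two time integrals are elementary, and the residual of the transport equation vanishes up to discretization error (all data below are illustrative):

```python
import math

# Spot check of the evolution operator formula for constant omega:
# K = om*(t-s) and int_s^t (t-tau)*om dtau = om*(t-s)**2/2.
om, mu0, beta, s = 2.0, 1.0, 0.5, 0.0
g = lambda y: math.exp(-(y - 3.0) ** 2) if y > 0 else 0.0

def u(t, x):
    K = om * (t - s)
    phi = (mu0 * (t - s) + beta * (t - s) * (x - K)
           + beta * om * (t - s) ** 2 / 2)
    return g(x - K) * math.exp(-phi)

# residual of  u_t + om * u_x + (mu0 + beta*x) * u = 0
h = 1e-4
t, x = 0.7, 4.0
ut = (u(t + h, x) - u(t - h, x)) / (2 * h)
ux = (u(t, x + h) - u(t, x - h)) / (2 * h)
residual = ut + om * ux + (mu0 + beta * x) * u(t, x)
```

The sample point is chosen away from the characteristic through $x=0$, where $u$ is smooth.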
We also need the solution of
\begin{eqnarray}
\label{trivial1}
\partial_t u(t,x)+\omega(t)\partial_x u(t,x) +(\mu_0+\beta x) u(t,x)=0,\quad x>0, t>s\geq0\nonumber\\
u(s,x)=0,\quad u(t,0)=h(t),\quad t>s\geq0, x>0.
\end{eqnarray}
Again the method of characteristics applies and yields with
$K(t,x) = \int_{\rho(t,x)}^t(r-\rho(t,x))\omega(r)dr$ the formula
$$ [V_0(t,s)h](x)=u(t,x)=h(\rho(t,x))e^{-[\mu_0(t-\rho(t,x))+\beta x(t-\rho(t,x))-\beta K(t,x)]},
$$
for $x<\int_s^t\omega(r)dr$, and zero elsewhere, where the
function $\rho(t,x)$ is defined by the equation
\begin{equation}
\label{kappa} x=\int_\rho^t \omega(r)dr;
\end{equation}
note that this equation has a unique solution $\rho(t,x)\in
(s,t)$, since $\omega(r)\geq \omega_- >0$ for all $r\geq0$, by
assumption, and $x<\int_s^t\omega(r)dr$. Observe that with
$K_0(t,s) =\int_s^t\omega(r)dr$ we have
\begin{eqnarray*}
&&\int_0^\infty (a+x)[V_0(t,s)h](x)dx\leq |h|_\infty\int_0^{K_0(t,s)}(a+x)e^{-\mu_0(t-\rho(t,x))}dx\\
&& \leq |h|_\infty \int_s^t (a+\int_\sigma^t\omega(r)dr)e^{-\mu_0(t-\sigma)} \omega(\sigma) d\sigma\\
&&\leq |h|_\infty \omega_+\int_0^{t-s}(a+\omega_+\sigma)e^{-\mu_0
\sigma}d\sigma\leq C|h|_\infty ,
\end{eqnarray*}
by the variable transformation $\sigma=\rho(t,x)$. Thus the part
coming from a nontrivial bounded boundary value $h$ is bounded in
$X$.
\subsection{Well-posedness for the Full Problem}
Let us now consider the full problem, i.e.
\begin{eqnarray}
\label{full}
\partial_t u(t,x)+\omega(t)\partial_x u(t,x) +(\mu_0+\beta x) u(t,x)=2\beta\int_x^\infty u(t,y)dy,\\
u(s,x)=g(x),\quad u(t,0)=0,\quad t>s\geq0, \; x>0.\nonumber
\end{eqnarray}
Since the standard cone in $X$ is reproducing, i.e.
$L_1=L_1^+-L_1^+$, we may restrict attention to nonnegative
initial functions $g$. We define the sequence $u_n$ inductively by
$$u_1(t):=U_0(t,s)g, \quad u_{n+1}(t)=u_1(t)+\int_s^t U_0(t,r)Bu_n(r)dr, \quad t\geq s\geq 0.$$
Since $U_0(t,s)$ is positive the functions $u_n$ are as well, and
$u_2(t)\geq u_1(t)$ since $B$ is positive. Inductively we obtain
with
$$u_{n+1}(t)-u_n(t)=\int_s^t U_0(t,r)B(u_n(r)-u_{n-1}(r))dr, \quad t\geq s\geq0,$$
that the functions $u_n$ are pointwise increasing w.r.t.\
$n\in{\mathbb N}$.
Suppose that $g\in D$. Then $u_n$ is a strong solution of
\begin{eqnarray*}
&&\partial_t u_n(t,x)+\omega(t)\partial_x u_n(t,x) +(\mu_0+\beta x) u_n(t,x)=2\beta\int_x^\infty u_{n-1}(t,y)dy\\
&&\qquad\qquad\leq 2\beta\int_x^\infty u_n(t,y)dy,\quad x>0, t>s\geq0\\
&&u_n(s,x)=g(x),\quad u_n(t,0)=0,\quad t>s\geq0, x>0,
\end{eqnarray*}
i.e.\ $u_n$ is a strong lower solution of (\ref{full}).
Multiplying the equation with $x^i$ and integrating over ${\mathbb R}_+$
this yields with $z_i(t)=|x^i u_n(t)|_1$
$$\partial_t z_0(t) +\mu_0 z_0(t)+\beta z_1(t)\leq 2\beta z_1(t),$$
for $i=0$, and for $i=1$
$$\partial_tz_1(t)-\omega(t) z_0(t)+\mu_0z_1(t)+\beta z_2(t)\leq \beta z_2(t).$$
Setting $z(t)=(z_0(t),z_1(t))^T$,
$b(t)=(0,(\omega(t)-\omega_\infty)z_0(t))^T$, and defining $G$ by
the $2\times2$-matrix with entries
$-\mu_0,\beta,\omega_\infty,-\mu_0$, this inequality becomes
$$\partial_tz(t)\leq Gz(t)+b(t), \quad t\geq s\geq0.$$
The eigenvalues of $G$ are given by $\lambda_\pm= -\mu_0 \pm
\sqrt{\beta\omega_\infty}$ which are both nonpositive if
$\beta\omega_\infty\leq \mu_0^2$, which is true in both, the
disease-free and the disease case. Since $e^{Gt}$ is positive we
may conclude
$$z(t)\leq e^{G(t-s)}z(s)+\int_s^t e^{G(t-r)}b(r)dr.$$
Boundedness of $e^{Gt}$ then implies an inequality of the form
$$|z(t)|\leq C + C\int_s^t |\omega(r)-\omega_\infty||z(r)|dr,\quad t\geq s\geq0,$$
which implies boundedness of $z(t)$ on $[s,\infty)$ since
$(\omega(\cdot)-\omega_\infty)\in L_1({\mathbb R}_+)$ by assumption. Note
that the constant $C$ depends only on the parameters
$\mu_0,\beta,\omega_\infty$ and on $||g||$.
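The comparison matrix $G$ has nonnegative off-diagonal entries (it is a Metzler matrix), which is why $e^{Gt}\geq0$ entrywise, and its eigenvalues are $-\mu_0\pm\sqrt{\beta\omega_\infty}$. Both facts can be confirmed by approximating $e^G$ as $(I+G/n)^n$ via repeated squaring (illustrative numbers):

```python
import math

# G = [[-mu0, beta], [om_inf, -mu0]]; approximate exp(G) by (I + G/n)^n
# with n = 2**20, using repeated squaring of a 2x2 matrix.
mu0, beta, om_inf = 1.0, 0.5, 1.5      # beta*om_inf = 0.75 < mu0**2

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

n = 2 ** 20
M = [[1 - mu0 / n, beta / n], [om_inf / n, 1 - mu0 / n]]
for _ in range(20):                     # M <- M^2, twenty times
    M = matmul(M, M)

# eigenpair check: v = (1, sqrt(om_inf/beta)) belongs to -mu0 + sqrt(beta*om_inf)
lam_plus = -mu0 + math.sqrt(beta * om_inf)
v = (1.0, math.sqrt(om_inf / beta))
Ev = (M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1])
```

Since $I+G/n$ is entrywise nonnegative for large $n$, so are all its powers, which is the positivity used in the comparison argument.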
Therefore the functions $u_n(t)$ are bounded in $X$ uniformly in
$t$ and $n$. By monotone convergence we may conclude
$u_n(t)\rightarrow u(t)$ in $X$ for each $t\geq s$. Since $B$ is
positive, $Bu_n\rightarrow Bu$ in $L_1({\mathbb R}_+)$ as well, and then
also
\begin{equation}
\label{mild}
u(t)=U_0(t,s)g+\int_s^t U_0(t,r)Bu(r)dr,\quad t\geq s\geq0,
\end{equation}
at least in $L_1({\mathbb R}_+)$. A density argument finally shows that
this conclusion is valid for all initial data $g\in X$.
\bigskip
\noindent
{\em Remark.} It is not clear that solutions of (\ref{mild}) are
unique. The reason for this is that $B$ is unbounded. Therefore
we need another definition of mild solution.
\bigskip
\noindent
{\bf Definition.}\,{\em Let $f\in L_{1,loc}({\mathbb R}_+;X)$. \\
(i)\, We call a function $u\in C({\mathbb R}_+;X)$ a strong solution of
\begin{eqnarray}
\label{full1}
\partial_t u(t,x)+\omega(t)\partial_x u(t,x) +(\mu_0+\beta x) u(t,x)=2\beta\int_x^\infty u(t,y)dy+f(t,x),\nonumber\\
u(s,x)=g(x),\quad u(t,0)=0,\quad t>s\geq0, x>0.
\end{eqnarray}
if $u\in C^1({\mathbb R}_+;X)\cap C({\mathbb R}_+;D)$ and (\ref{full1}) is valid pointwise.\\
(ii)\, We call a function $u\in C({\mathbb R}_+;X)$ a mild solution of
(\ref{full1}) if there are $f_n\in L_{1,loc}({\mathbb R}_+;X)$ and strong
solutions $u_n$ of (\ref{full1}) such that $u_n\rightarrow u$ and
$f_n\rightarrow f$ as $n\rightarrow\infty$, in $X$, uniformly on
compact intervals. }
\bigskip
\noindent Suppose that $g\in D$ has compact support. Then each
iteration $u_n(t)$ also has compact support, namely
$$\operatorname{supp} u_n(t)\subset \operatorname{supp} g + \omega_+[0,t],$$
for each $n\in{\mathbb N}$. Therefore each function $u_n(t)$ is a strong
solution of (\ref{full1}) with inhomogeneity
$f_n(t)=B(u_{n-1}(t)-u_n(t))$. This proves that the limit $u(t)$
is a mild solution. Approximation then shows that (\ref{full}) has
at least one mild solution, for each initial value $g\in X$.
Uniqueness of mild solutions can be obtained as follows. If $u$ is
a strong solution of (\ref{full1}) then the equation yields as
above the inequality
$$ \partial_t ||u(t)||\leq \omega_+ ||u(t)||+ ||f(t)||,\quad
t>0,$$
hence
$$ ||u(t)||\leq e^{\omega_+(t-s)}||g||+\int_s^t
e^{\omega_+(t-r)}||f(r)||dr.$$
By approximation this inequality is
also valid for mild solutions, hence $u\equiv 0$ in case $f\equiv
g=0$. Thus mild solutions are unique and of course they satisfy
the integral equation (\ref{mild}).
\subsection{Summary}
We have proved the following result about well-posedness of \eqref{full}.
\begin{theorem}
Suppose $\omega\in C({\mathbb R}_+)$ is a given strictly positive function,
such that $\omega_\infty=\lim_{t\rightarrow\infty} \omega(t)>0$
exists and $\omega(\cdot)-\omega_\infty\in L_1({\mathbb R}_+)$. Then
\eqref{full} is well-posed in the sense of the definition given
above. There exists a unique evolution operator $U(t,s)$ in $X$
generated by \eqref{full}, which is bounded in $X$, uniformly in
$0\leq s\leq t<\infty$, and positive. Moreover, \eqref{full} has
finite speed of propagation, with maximum speed not exceeding $
\omega_+= \sup_{t\geq0} \omega(t)$.
\end{theorem}
\subsection{Higher Order Bounds}
Consider an initial function $g\in C_0^\infty(0,\infty)$. Then
$u_1$ is smooth as well and has compact support for each $t\geq
s$. Then the same holds true for $u_2$, hence by induction for all
$u_n$. Setting $v_n=\partial_x u_n$ we have the following problem
for $v_n$.
\begin{eqnarray}
\label{derivative}
&&\partial_t v_n+\omega(t)\partial_x v_n +(\mu_0+\beta x) v_n= -\beta [u_n+2u_{n-1}],\\
&&v_n(s,x)=g^\prime(x),\quad v_n(t,0)=\psi_n(t),\quad t>s\geq0,
x>0\nonumber
\end{eqnarray}
where $\psi_n(t)= \frac{2\beta}{\omega(t)}|u_{n-1}(t)|_1$. This
implies
$$\partial_x u_n(t)=v_n(t) = U_0(t,s)g^\prime -\beta\int_s^t U_0(t,r)[u_n(r)+2u_{n-1}(r)]dr+w_n(t),\quad t\geq s\geq0,$$
with
$$w_n(t)=2\beta V_0(t,s)[|u_{n-1}(\cdot)|_1/\omega(\cdot)].$$
Uniform boundedness of $u_n$ in $X$ and exponential stability of
the evolution operator $U_0(t,s)$ in $X$ then imply boundedness
of $\partial_x u_n$ in $X$. Passing to the limit we get
$$\partial_x u(t) = U_0(t,s)g^\prime -3\beta\int_s^t U_0(t,r)u(r)dr+ w(t),\quad t\geq s\geq0,$$
where
$$w(t,x)=2\beta V_0(t,s)[|u(\cdot)|_1/\omega(\cdot)]. $$
This yields $\partial_xu\in C_b([s,\infty);X)$. The last identity
was proven for $g\in C_0^\infty(0,\infty)$, but it can be
extended to $g\in D$ by density.
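The boundedness and positivity statements above lend themselves to a quick numerical sanity check. The following sketch is a minimal explicit upwind discretization of \eqref{full} on a truncated domain; the choice of $\omega$, the parameter values, and the initial datum are hypothetical and serve only as an illustration.

```python
import numpy as np

# Explicit upwind scheme for u_t + omega(t) u_x + (mu0 + beta x) u = 2 beta int_x^inf u dy,
# with u(t,0) = 0, on a truncated domain [0, L]. All values below are hypothetical.
mu0, beta = 1.0, 0.5
omega = lambda t: 1.0 + np.exp(-t)          # omega(t) -> omega_inf = 1, integrable tail
L, J, T, K = 10.0, 400, 5.0, 4000
dx, dt = L / J, T / K                        # CFL: max(omega) * dt / dx = 0.1
x = np.linspace(0.0, L, J + 1)
u = np.exp(-(x - 3.0) ** 2)                  # smooth datum g, essentially compactly supported
u[0] = 0.0
for k in range(K):
    tail = dx * np.cumsum(u[::-1])[::-1]     # approximates int_x^L u(t, y) dy
    ux = np.zeros_like(u)
    ux[1:] = (u[1:] - u[:-1]) / dx           # upwind difference (omega > 0)
    u = u + dt * (-omega(k * dt) * ux - (mu0 + beta * x) * u + 2.0 * beta * tail)
    u[0] = 0.0                               # boundary condition u(t, 0) = 0
assert np.all(u >= -1e-10) and np.isfinite(u).all()
```

Under the stated CFL condition every coefficient in the update is nonnegative, so the scheme preserves nonnegativity, in line with the positivity of the evolution operator $U(t,s)$.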
\section{Convergence}
We are now ready to prove the main result on convergence. Let us
first look at the disease-free case. Then with $A(t)$, $B$,
defined as in section 2, and $L(t)=\overline{A(t)-B}$, we know
that $L(t)$ is strictly accretive for large times $t$ if the
parameter $a$ is chosen in the interval
$(\lambda\tau/\gamma\mu_0,\mu_0/\beta)$. This proves
exponential stability of the trivial solution in the disease-free
case, with decay rate at least
$\mu_0-\sqrt{\lambda\beta\tau/\gamma}$.
Suppose we have a solution $u$ of the nonautonomous problem in the
disease case such that $\partial_xu(t)$ is bounded in $X$. Then we
may write
\begin{eqnarray}
\label{full2} &&\partial_t u+\omega_\infty\partial_x u
+(\mu_0+\beta x) u-2\beta\int_x^\infty u(t,y)dy
=(\omega_\infty-\omega(t))\partial_xu,\\
&&u(0,x)=g(x),\quad u(t,0)=0,\quad t>0, x>0.\nonumber
\end{eqnarray}
Therefore we obtain the identity
$$u(t)=e^{-Lt}g +\int_0^t e^{-L(t-r)}(\omega_\infty-\omega(r))\partial_xu(r)dr,\quad t\geq0.$$
We know from Section 9 that $e^{-Lt}$ converges strongly in $X$ to
the ergodic projection ${\mathcal P}$. On the other hand, the scalar
function $\omega(\cdot)-\omega_\infty$ belongs to $L_1({\mathbb R}_+)$ by
assumption. This then implies
$$u(t)\rightarrow u_\infty\in R({\mathcal P}).$$
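For completeness, this limit can be justified by a standard splitting argument; the sketch below uses the boundedness of $\partial_xu(r)$ in $X$ established in the previous section. For the tail part,
$$\Big\|\int_{t/2}^t e^{-L(t-r)}(\omega_\infty-\omega(r))\partial_xu(r)dr\Big\|_X\leq C\sup_{r\geq0}\|\partial_xu(r)\|_X\int_{t/2}^\infty|\omega_\infty-\omega(r)|dr\rightarrow0,$$
while for each fixed $r$ we have $e^{-L(t-r)}\rightarrow{\mathcal P}$ strongly as $t\rightarrow\infty$, so dominated convergence yields
$$\int_0^{t/2} e^{-L(t-r)}(\omega_\infty-\omega(r))\partial_xu(r)dr\rightarrow{\mathcal P}\int_0^\infty(\omega_\infty-\omega(r))\partial_xu(r)dr,$$
and hence $u_\infty={\mathcal P}g+{\mathcal P}\int_0^\infty(\omega_\infty-\omega(r))\partial_xu(r)dr\in R({\mathcal P})$.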
Thus we have convergence in $X$ to a unique element for all
nonnegative solutions with initial values in $D$. Since the
evolution operator associated with (\ref{full}) is bounded in $X$,
this convergence extends to all initial values $u_0\in X$.
Returning now to the system \eqref{pde}, we may compute the limit
$u_\infty$. For this purpose recall that $U(t)=\int_{x_0}^\infty
u(t,x)dx\rightarrow U_\infty$ and $P(t)=\int_{x_0}^\infty
u(t,x)xdx\rightarrow P_\infty$. This implies
$$ u_\infty = \lim_{t\rightarrow\infty} {\mathcal P} u(t)= \lim_{t\rightarrow\infty} [aU(t)+P(t)-x_0U(t)]e= [\mu U_\infty/\beta+P_\infty]e.$$
Note that $u_\infty$ is independent of the initial values $V_0$
and $u_0$.
This completes the proof of Theorem 1.2.
\bibliographystyle{amsnum}
\section{Introduction}
\IEEEPARstart{R}{ecently}, the orthogonal time frequency space (OTFS) modulation has been proposed to achieve reliable communications in high-mobility scenarios \cite{Hadani2017},\cite{Raviteja2018},\cite{OAP}. OTFS provides both time and frequency diversity because each symbol is spread over the time and frequency domains through the two dimensional inverse symplectic finite Fourier transform (ISFFT) \cite{Hadani2017},\cite{Raviteja2018}. When the number of channel paths is small, the effective channel in the delay-Doppler (DD) domain is sparse, which allows efficient data detection using message passing techniques \cite{Raviteja2018}. A variety of OTFS detection methods have been proposed in the literature to harvest the time and frequency diversity promised by OTFS \cite{2020Iterative, Raviteja2019, OTFSHighDopplerChannel, performance2020, LiA2017, Raviteja2019Practical, zemen2017, otfslmmserecv}. However, all the detection methods assume perfect channel state information, which has to be estimated in practice.
In OTFS, the delay shifts and Doppler shifts are discretized in the DD domain. In general, a wideband system is able to provide sufficient delay resolution, so that fractional delay shifts do not need to be considered \cite{fundWC}. However, the Doppler resolution depends on the time duration of the OTFS block. To fulfill the low latency requirement in future wireless communications, the time duration of the OTFS block should be relatively small, hence fractional Doppler shifts have to be considered to avoid significant modelling errors due to the assumption of integer Doppler shifts \cite{Raviteja2018},\cite{pilotref}. Therefore factional Doppler shifts have to be considered in OTFS channel estimation.
A number of channel estimation methods have been proposed in the literature. In \cite{MIMOOTFSDETECTIONEST}, an entire OTFS block is used to accommodate pilot symbols for channel estimation, and the estimated channel is used for data detection in the subsequent OTFS block. This results in a considerable loss in spectrum efficiency and the detection performance may deteriorate due to the channel variation between two OTFS blocks.
To solve this problem, pilot and data symbols are placed in the same OTFS block in \cite{pilotref}, where guard interval is used to avoid interference between pilot and data symbols. With this scheme, channel estimation and data detection can be performed for the same OTFS block. In \cite{chestmassivemimo}, channel estimation in massive multiple input and multiple output (MIMO)-OTFS systems is studied, where the downlink time-varying massive MIMO channels are transformed to the delay-Doppler-angle domain, and the channel estimation is formulated as a sparse signal recovery problem. In \cite{uplinkotfsmassivemimo}, the uplink-aided downlink massive MIMO-OTFS channel estimation over the delay-Doppler-angle domain is studied. However, in these works, only integer Doppler shifts are considered, except \cite{pilotref}. In \cite{pilotref}, a single pilot symbol is used to facilitate the design of a low complexity thresholding method \textcolor{black}{to acquire the channel in the DD domain}. Although factional Doppler shifts are considered for OTFS with the bi-orthogonal waveform in \cite{pilotref}, it is not clear how to estimate the channel with fractional Doppler shifts for OTFS with a more practical waveform, e.g., the rectangular waveform. In addition, the method estimates the DD domain channel directly, and does not provide the estimates of the channel gains and fractional Doppler shifts.
In this work, we address the issue of OTFS channel estimation with particular attention to fractional Doppler shifts. As the OTFS channel is parameterized by the channel gains and fractional Doppler shifts, we estimate these parameters rather than the DD domain channel directly, mainly for two reasons. Firstly, this can lead to much better performance as the number of variables to be estimated is significantly smaller. Secondly, the estimates of Doppler shifts can be useful, e.g., for estimating the velocity of mobile users. In addition, both the bi-orthogonal waveform and the rectangular waveform are considered. We formulate the \textcolor{black}{OTFS} channel estimation as a structured signal recovery problem, which is solved using Bayesian inference. With a factor graph representation of the problem, a message passing algorithm is developed to estimate the channel gains and fractional Doppler shifts jointly. In contrast to \cite{pilotref}, we show that using a few pilot symbols rather than a single pilot symbol \textcolor{black}{is} preferable as the peak to average power ratio (PAPR) of OTFS signals can be significantly reduced. Our proposed algorithm can work with either a single pilot symbol or multiple pilot symbols, and significantly outperforms the state-of-the-art method. The Cramer-Rao lower bound (CRLB) for the \textcolor{black}{channel} estimation is derived to verify the performance of the proposed algorithm. The bit error rates (BER) of the OTFS system with perfect channel and estimated channel are also compared to demonstrate the effectiveness of the proposed algorithm.
\textcolor{black}{The main contributions of this work are summarized as follows:}
\begin{itemize}
\item \textcolor{black}{The OTFS DD domain channel is acquired by estimating the relevant parameters, i.e., the channel gains and fractional Doppler shifts. The estimation of the parameters is formulated as a novel structured sparse signal recovery problem with a Bayesian treatment.}
\item \textcolor{black}{Both the bi-orthogonal waveform and the rectangular waveform are considered in this work. To the best of our knowledge, the investigations on the fractional Doppler shift estimation are very limited in the literature. The work in \cite{pilotref}, where the DD domain channel is estimated directly, does not provide fractional Doppler estimate explicitly, and it is not applicable to the case of the rectangular waveform.}
\item \textcolor{black}{ Due to the uniqueness of the formulated structured sparse signal recovery problem, a dedicated message passing algorithm is proposed to efficiently solve the problem. The algorithm is able to work with either a single pilot symbol or multiple pilot symbols, which can significantly reduce the PAPR of the OTFS signal.}
\item \textcolor{black}{To evaluate the performance of the estimator, the CRLB is derived. Comparison results with existing methods are provided to demonstrate the advantages of the proposed method.}
\end{itemize}
For simplicity, this work is focused on a plain OTFS system, but the proposed algorithm can be extended for a more complex system, such as MIMO-OTFS.
The remainder of the paper is organized as follows. In Section \ref{sec:model}, we introduce OTFS modulation and demodulation, and OTFS input-output relation in the DD domain.
\textcolor{black}{Then} we formulate the channel estimation as a structured sparse signal recovery problem in Section \ref{sec:formulation}, and develop the message passing based algorithm in Section \ref{sec:algorithm}. The CLRB for channel estimation is derived in Section \ref{sec:crlb}. Simulation results are provided in Section \ref{sec:simulation}, followed by conclusions in Section \ref{sec:conclusion}.
\textit{Notations}- Boldface lower-case and upper-case letters denote column vectors and matrices, respectively. The superscripts $(\cdot)^T$ and $(\cdot)^*$ represent transpose and conjugate operations, respectively. We \textcolor{black}{use} $[\cdot]_M$ to denote the modulo-$M$ operation. The probability density function of a complex Gaussian variable with mean {$\hat x$} and variance $\nu_x$ is represented by $\mathcal{CN}(x;{\hat x},\nu_x)$. The Gamma distribution with scale $\epsilon$ and rate $\eta$ is represented as $Ga(x; \epsilon, \eta)$. The uniform distribution over the range $[a, b]$ is represented by $U[a,b]$. The relation $f(x)=cg(x)$ for some positive constant $c$ is written as $f(x)\propto g(x)$. The notation $\otimes$ represents the Kronecker product, and $\boldsymbol{a}\cdot\boldsymbol{b}$ and $\boldsymbol{a}\cdot/\boldsymbol{b}$ represent the element-wise product and division between vectors $\boldsymbol{a}$ and $\boldsymbol{b}$, respectively. We use $Tr(\boldsymbol{A})$ \textcolor{black}{and} $\boldsymbol{I}_N$ to denote the trace of $\boldsymbol{A}$ and \textcolor{black}{an} $N\times N$ identity matrix, respectively.
We use $|x|^2$ to denote the magnitude squared operation for $x$ and use $||\boldsymbol{x}||_2^2$ to denote the squared norm of vector $\boldsymbol{x}$. We use $q_j$ to denote the $j$th element of $\boldsymbol{q}$. The superscript $t$ of $\textbf{s}^t$ denotes the iteration index in an iterative algorithm. The notation $\mathbb{E}(x)$ is used to denote the expectation of the random variable $x$. \textcolor{black}{The notations $\mathcal{R}[a]$ and $\mathcal{I}[a]$ denote the real and imaginary parts of the complex variable $a$, respectively}.
\tikzstyle{block} = [draw, fill=white, rectangle,
minimum height=2.5em, minimum width=4em, align = center]
\begin{figure*}[htbp]
\centering
\begin{tikzpicture}[auto, node distance=1.7cm,>=latex']
\node[block, ](isfft){ISFFT};
\node[block, right = 1.4cm of isfft, text width=5em](heisenberg){Heisenberg \\ transform};
\node[block, right = 0.8cm of heisenberg, text width=5em](channel){Channel \\ $h(\tau,\nu)$};
\node[block, right = 0.8cm of channel, text width=5em](wigner){Wigner \\ transform};
\node[block, right = 1.5cm of wigner](sfft){SFFT};
\node[below = 0.02 of wigner, ](tf){Time-Frequency Domain};
\node[below = 0.6cm of sfft, xshift=-0.4cm](dd){Delay-Doppler Domain};
\node[left = 1.2cm of isfft](xkl){};
\draw[thick, ->, >=stealth] (xkl) -- node[above]{$x[k,l]$}(isfft);
\draw[thick, ->, >=stealth] (isfft) -- node[above]{$X[n,m]$}(heisenberg);
\draw[thick, ->, >=stealth] (heisenberg) -- node[above]{$s(t)$}(channel);
\draw[thick, ->, >=stealth] (channel) -- node[above]{$r(t)$}(wigner);
\draw[thick, ->, >=stealth] (wigner) -- node[above]{$Y[n,m]$}(sfft);
\node[right = 1.3cm of sfft](ykl){};
\draw[thick, ->, >=stealth] (sfft) -- node[above]{$y[k,l]$}(ykl);
\draw[very thick, dashed] ($(heisenberg)+(-2.3,1)$) rectangle ($(wigner)+(2.34,-1)$);
\draw[very thick, dashed] ($(isfft)+(-1.8,1.5)$) rectangle ($(sfft)+(1.8,-1.5)$);
\end{tikzpicture}
\caption{OTFS modulation and demodulation \cite{Raviteja2018}. }
\label{fig:model}
\end{figure*}
\section{OTFS System Model} \label{sec:model}
\subsection{System Model in the DD Domain}
As shown in Fig. \ref{fig:model}, the OTFS modulation and demodulation are implemented with 2D inverse SFFT (ISFFT) and SFFT at the transmitter and receiver, respectively \cite{Hadani2017},\cite{Monk2016OTFSO}.
A (coded) bit sequence is mapped to symbols $\{x[k,l], k = 0,\cdots,N-1, l =0,\cdots, M-1\}$ in the DD domain, where $x[k,l]\in\mathcal{A}=\{\alpha_1, \cdots, \alpha_{|\mathcal{A}|}\}$ with $|\mathcal{A}|$ being the cardinality of $\mathcal{A}$, and $l$ and $k$ denote the indices of the delay and Doppler shifts, respectively. As shown in Fig. \ref{fig:model}, ISFFT is performed to convert the symbols to signals in the time-frequency (TF) domain, i.e.,
\begin{eqnarray}
X_{tf}[n,m] = \frac{1}{\sqrt{MN}}\sum_{k=0}^{N-1}\sum_{l=0}^{M-1}x[k,l]e^{j2\pi(\frac{nk}{N}-\frac{ml}{M})}.
\end{eqnarray}
Then $\{X_{tf}[n,m]\}$ are converted to a continuous-time waveform $s(t)$ using the Heisenberg transform with a transmit waveform $g_{tx}(t)$, i.e.,
\begin{align}
s(t) = \sum_{n=0}^{N-1}\sum_{m=0}^{M-1}X_{tf}[n, m]g_{tx}(t-nT)e^{j2\pi m\Delta f(t-nT)},
\end{align}
where $\Delta f$ is the subcarrier spacing and $T = 1/\Delta f$.
The signal $s(t)$ is then transmitted through a time-varying channel and the received signal in the time domain can be expressed as
\begin{align}
r(t) = \int\int h(\tau,\nu)s(t-\tau)e^{j2\pi\nu(t-\tau)} d\tau d\nu,
\end{align}
where $h(\tau, \nu)$ is the channel impulse response in the (continuous) DD domain. The channel impulse response can be expressed as
\begin{align}
h(\tau, \nu) = \sum_{i=1}^P h_i\delta(\tau-\tau_i)\delta(\nu - \nu_i),
\end{align}
where $\delta(\cdot)$ is the Dirac delta function, $P$ is the number of resolvable propagation paths, and $h_i$, $\tau_i$ and $\nu_i$ represent the gain, delay shift and Doppler shift associated with the $i$th path, respectively. The delay and Doppler shift taps for the $i$th path are
\begin{eqnarray}
\tau_i=\frac{l_i}{M\Delta f}, \\
\nu_i=\frac{k_i+\kappa_i}{NT}, \label{eq:ddtap}
\end{eqnarray}
where $0 \leq l_i \leq l_{max}$ and $-k_{max} \leq k_i \leq k_{max}$ are the delay index and Doppler index of the $i$th path, $l_{max}$ and $k_{max}$ represent the largest indices of the delay taps and Doppler taps, respectively, and $\kappa_{i}\in[-0.5,0.5]$ is the fractional Doppler shift associated with the $i$th path.
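As a concrete numerical illustration of \eqref{eq:ddtap}, the split of a physical Doppler shift into an integer tap and a fractional part takes two lines; the block length, symbol duration and Doppler value below are hypothetical.

```python
import numpy as np

# Hypothetical numbers: N = 32 Doppler bins and T = 1/(15 kHz), so the
# Doppler resolution 1/(NT) is 468.75 Hz.
N, T = 32, 1 / 15e3
nu = 750.0                      # physical Doppler shift in Hz (hypothetical)
k = int(np.round(nu * N * T))   # nearest integer Doppler tap k_i
kappa = nu * N * T - k          # fractional part kappa_i in [-0.5, 0.5]
assert k == 2 and abs(kappa + 0.4) < 1e-9
```

A coarser Doppler resolution (i.e., a shorter OTFS block) makes $\kappa_i$ larger in magnitude, which is exactly the regime where ignoring fractional Doppler causes modelling errors.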
At the receiver side, a receive waveform $g_{rx}(t)$ is used to transform the received signal $r(t)$ to the TF domain i.e.,
\begin{align}
Y(t,f) = \int g_{rx}^{*}(t'-t)r(t')e^{-j2\pi f(t'-t)}dt',
\end{align}
which is then sampled at $t=nT$ and $f=m\Delta f$, yielding $Y[n, m]$.
Finally, an SFFT is applied to $\{Y[n, m]\}$ to obtain the signal $y[k,l]$ in the DD domain, i.e.,
\begin{align}
y[k,l] = \frac{1}{\sqrt{MN}} \sum_{n=0}^{N-1}\sum_{m=0}^{M-1}Y[n,m]e^{-j2\pi(\frac{nk}{N} - \frac{ml}{M})}.
\end{align}
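In implementation terms, the ISFFT/SFFT pair above amounts to FFTs along the two axes of the DD grid. The following sketch (grid sizes and QPSK symbols are arbitrary) checks that the pair round-trips:

```python
import numpy as np

def isfft(x_dd):
    # X[n,m] = (1/sqrt(MN)) sum_{k,l} x[k,l] e^{j2pi(nk/N - ml/M)}
    N, M = x_dd.shape
    return np.sqrt(N / M) * np.fft.ifft(np.fft.fft(x_dd, axis=1), axis=0)

def sfft(X_tf):
    # y[k,l] = (1/sqrt(MN)) sum_{n,m} Y[n,m] e^{-j2pi(nk/N - ml/M)}
    N, M = X_tf.shape
    return np.sqrt(M / N) * np.fft.fft(np.fft.ifft(X_tf, axis=1), axis=0)

rng = np.random.default_rng(0)
N, M = 32, 128
x = (rng.choice([-1.0, 1.0], (N, M)) + 1j * rng.choice([-1.0, 1.0], (N, M))) / np.sqrt(2)
assert np.allclose(sfft(isfft(x)), x)   # the transforms are unitary inverses
```

The $1/\sqrt{MN}$ normalization makes both transforms unitary, so symbol energy is preserved between the DD and TF domains.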
If the transmit waveform $g_{tx}(t)$ and receive waveform $g_{rx}(t)$ satisfy the bi-orthogonal property \cite{Hadani2017}, the channel input-output relationship in the DD domain can be expressed as \cite{Raviteja2018,Raviteja2019Practical}
\begin{eqnarray}
y[k,l]=\sum_{i=1}^P\sum_{q=-N_i}^{N_i}h_i\frac{1-e^{-j2\pi(-q-\kappa_i)}}{N-Ne^{-j\frac{2\pi}{N}(-q-\kappa_i)}}e^{-j2\pi \frac{l_i(k_i+\kappa_i)}{MN}} \nonumber \\
\times x\big[[k-k_i+q]_N,[l-l_i]_M\big]
+ \omega[k,l],\label{eq:IdealFracY}
\end{eqnarray}
where
$N_i\ll N$ is an integer, and $ \omega[k,l]$ denotes the Gaussian noise in the DD domain with mean 0 and variance $\gamma^{-1}$ (or precision $\gamma$). We can see that for each path, the transmitted signal is circularly shifted, and scaled by a channel gain. We stack $\{x[k,l]\}$ to form a vector $\boldsymbol{x}\in \mathbb{C}^{MN \times 1}$, where the $j$th element $x_j$ is $x[k,l]$ with $j = kM + l$. Similarly, a vector $\boldsymbol{y}\in \mathbb{C}^{MN \times 1}$ can also be constructed from $\{y[k,l]\}$. Then \eqref{eq:IdealFracY} can be rewritten in \textcolor{black}{a} vector form as
\begin{eqnarray}
\boldsymbol{y} = \boldsymbol{H}_{bi}\boldsymbol{x} + \boldsymbol{\omega}, \label{eq:idealVectorFracY}
\end{eqnarray}
where $\boldsymbol{\omega}$ is the corresponding noise vector, and $\boldsymbol{H}_{bi} \in \mathbb{C}^{MN \times MN}$ represents the effective channel in the DD domain, which can be expressed as
\begin{eqnarray}
\boldsymbol{H}_{bi} = \sum_{i=1}^{P}\sum_{q=-N_i}^{N_i}\boldsymbol{I}_N(-[q-k_i]_N)\otimes \Big[\boldsymbol{I}_M(l_i)h_i \nonumber \\ \times \left(\frac{1 - e^{-j2\pi(-q-\kappa_i)}}{N - Ne^{-j\frac{2\pi}{N}(-q-\kappa_i)}}\right)e^{-j2\pi\frac{l_i(k_i+\kappa_i)}{MN}}\Big], \label{eq:idealDDH}
\end{eqnarray}
where $\boldsymbol{I}_N(-[q-k_i]_N)$ denotes an $N\times N$ matrix obtained by circularly shifting the rows of an identity matrix by $-[q-k_i]_N$, and $\boldsymbol{I}_M(l_i)$ is obtained similarly.
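To make \eqref{eq:idealDDH} concrete, the sketch below assembles $\boldsymbol{H}_{bi}$ for a toy single-path channel. The row-shift sign convention used for $\boldsymbol{I}_N(\cdot)$ and all path parameters are illustrative assumptions, not the paper's simulation setup.

```python
import numpy as np

def shifted_eye(n, s):
    # identity matrix with rows circularly shifted by s, playing the role of I_n(s)
    return np.roll(np.eye(n), s, axis=0)

def f(q, kappa, N):
    # fractional-Doppler spreading coefficient from the input-output relation
    theta = -q - kappa
    return (1 - np.exp(-2j * np.pi * theta)) / (N - N * np.exp(-2j * np.pi * theta / N))

def build_H_bi(M, N, paths, Nhat):
    # paths: list of (h_i, l_i, k_i, kappa_i) tuples (toy values)
    H = np.zeros((M * N, M * N), dtype=complex)
    for h, l, k, kappa in paths:
        phase = np.exp(-2j * np.pi * l * (k + kappa) / (M * N))
        for q in range(-Nhat, Nhat + 1):
            H += np.kron(shifted_eye(N, -((q - k) % N)),
                         shifted_eye(M, l) * (h * f(q, kappa, N) * phase))
    return H

H = build_H_bi(M=8, N=8, paths=[(0.9 - 0.3j, 2, 1, 0.25)], Nhat=2)
assert H.shape == (64, 64) and np.isfinite(H).all()
```

Each path contributes $2\widehat{N}+1$ shifted diagonals, so every row of $\boldsymbol{H}_{bi}$ has only a few nonzero entries; this sparsity is what DD-domain detectors exploit.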
When the rectangular waveform is used for both $g_{tx}(t)$ and $g_{rx}(t)$, the received signal $y[k,l]$ in the DD domain can be expressed as \cite{Raviteja2018,Raviteja2019Practical}
\begin{eqnarray}
y[k,l]=\sum_{i=1}^P\sum_{q=-N_i}^{N_i}h_i e^{j2\pi\left(\frac{l-l_i}{M}\right)\left(\frac{k_i +\kappa_i}{N}\right)}\alpha_i(k,l,q)\nonumber \\
\times x[[k-k_i+q]_N, [l-l_i]_M] + \omega[k,l], \label{eq:RectFracY}
\end{eqnarray}
where
\begin{equation}
\alpha_i(k,l,q)=\begin{cases}
\frac{1}{N}\beta_i(q) & l_i\leq l < M \\
\frac{1}{N}(\beta_i(q)-1) e^{-j2\pi\frac{[k-k_i+q]_N}{N}} & 0\leq l < l_i
\end{cases}, \label{eq:rectalpha}
\end{equation}
and
\begin{eqnarray}
\beta_i(q)=\frac{e^{-j2\pi(-q-\kappa_i)}-1}{e^{-j\frac{2\pi}{N}(-q-\kappa_i)}-1}.
\end{eqnarray}
Similarly, we can also rewrite (\ref{eq:RectFracY}) in a vector form as
\begin{align}
\boldsymbol{y} = \boldsymbol{H}_{rect}\boldsymbol{x}+\boldsymbol{\omega},
\end{align}
and the channel matrix $\boldsymbol{H}_{rect} \in \mathbb{C}^{MN \times MN}$ can be written as
\begin{align}
\boldsymbol{H}_{rect}=& \sum_{i=1}^{P}\sum_{q=-N_i}^{N_i}\boldsymbol{I}_N(-[q-k_i]_N)\otimes \nonumber \\ &\left(\boldsymbol{\Lambda}\boldsymbol{I}_M(l_i)h_i e^{-j2\pi\frac{l_i(k_i+\kappa_i)}{MN}}\right) \cdot\boldsymbol{\Delta}^{-[q-k_i]_N},
\end{align}
where $\boldsymbol{\Lambda}\in \mathbb{C}^{M\times M}$ is a diagonal matrix and the $l$th diagonal element $\boldsymbol{\Lambda}_{ll}$ can be expressed as
\begin{figure}[htbp]
\centering
\includegraphics[width=0.95\columnwidth]{txpattern.pdf}
\caption{An OTFS block with pilot symbols inserted in the DD domain \cite{chestmassivemimo}.}
\label{fig:txpattern}
\end{figure}
\begin{align}
\boldsymbol{\Lambda}_{ll}=
\begin{cases}
e^{j2\pi\frac{l(k_i+\kappa_i)}{MN}}\beta_i(q)/N & l_i\leq l < M \\
e^{j2\pi\frac{l(k_i+\kappa_i)}{MN}}\left(\beta_i(q) - 1\right)/N & 0\leq l < l_i\\
\end{cases}.
\end{align}
Note that $\boldsymbol{\Delta}^{-[q-k_i]_N}$ is an $MN\times MN$ block matrix, which is obtained by circularly shifting the blocks in a matrix $\boldsymbol{\Delta}$ by $-[q-k_i]_N$. The matrix $\boldsymbol{\Delta}$ is a block diagonal matrix
\begin{align}
\boldsymbol{\Delta} = \left (\begin{array}{cccccc}
\boldsymbol{\Delta}_0 &\boldsymbol{0} & \cdots& \boldsymbol{0} \\
\boldsymbol{0} &\boldsymbol{\Delta}_1 & \cdots & \boldsymbol{0} \\
\vdots & \vdots &\ddots & \vdots\\
\boldsymbol{0} & \boldsymbol{0} &\cdots & \boldsymbol{\Delta}_{N-1}\\
\end{array}\right),
\end{align}
where $\boldsymbol{\Delta}_n = \boldsymbol{\Psi I}_M(l_i)$ and $\boldsymbol{\Psi}\in \mathbb{C}^{M\times M}$ is a diagonal matrix with the $m$th diagonal element ${\Psi}_{mm}$ given as
\begin{align}
{\Psi}_{mm}=
\begin{cases}
1 & l_i\leq m < M \\
e^{-j2\pi m/N} & 0\leq m < l_i\\
\end{cases}.
\end{align}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.9\columnwidth]{rxpattern.pdf}
\caption{Received OTFS signal block where the signals in the orange area are used for channel estimation \cite{chestmassivemimo}.}
\label{fig:rxpattern}
\end{figure}
\subsection{Data and Pilot Symbols Arrangement}
In this paper, we use the scheme in \cite{pilotref} and \cite{chestmassivemimo}, where pilot symbols are inserted to the DD plane with guard interval to avoid the interference between data and pilot symbols, \textcolor{black}{as} shown in Fig. \ref{fig:txpattern}. This allows that channel estimation and data detection are carried out for the same OTFS block.
As shown in Fig. \ref{fig:txpattern}, the size of the pilot symbol block is $M_p \times N_p$. To avoid the interference between pilot and data symbols, the size of the guard interval should be at least $2(k_{max} + \widehat{N})$ along the Doppler dimension and $l_{max}$ along the delay dimension, where we assume $\{N_i, i=1,2,\cdots, P\}$ take the same value $\widehat{N}$. \textcolor{black}{With this scheme, the overhead due to pilot symbols and guard interval is $(2l_{max}+M_p)(4k_{max}+4\widehat{N}+N_p)/MN$.}
The received signal in the DD domain is shown in Fig. \ref{fig:rxpattern}, where the signals in the orange area correspond to the pilot symbols, and \textcolor{black}{they} are used for channel estimation.
{In order to minimize the spectrum overhead due to pilot symbols, the use of a small number of pilot symbols is highly desirable. To guarantee the channel estimation quality, the power of the pilot symbols should be sufficiently high, which can be much larger than the power of data symbols. Fortunately, the operation of ISFFT at the transmitter can spread the \textcolor{black}{power of} pilot symbols, therefore \textcolor{black}{greatly} alleviating the PAPR problem \cite{pilotref}. In \cite{pilotref},
a single pilot symbol with a very high power is used \textcolor{black}{to facilitate the design of a low complexity channel estimation algorithm based on thresholding. However, the use of a few pilot symbols can be preferable because this can significantly reduce the PAPR at the cost of a small loss in spectrum efficiency.} In Table \ref{tab:papr}, we compare the average PAPR of OTFS signals in the time domain, where we assume that the rectangular waveform is used. The pilot symbols are arranged in a single column and placed along the delay dimension, i.e., $N_p=1$. We assume that the signal to noise ratio (SNR) of data symbols is 14 dB, and compare the average PAPR of the OTFS signals versus the number of pilot symbols and the SNR of pilot symbols (denoted by SNRp). It can be seen that when more pilot symbols are used, a lower PAPR is achieved. \textcolor{black}{Moreover, with a fixed power budget for pilots,} the use of multiple pilot symbols leads to considerable PAPR reduction. \textcolor{black}{As an example, the use of a single pilot symbol with SNRp = 50dB has the same power budget as the use of 10 pilot symbols with SNRp = 40dB. From Table \ref{tab:papr}, a significant PAPR reduction of about 8dB can be achieved. It is worth mentioning that, with the proposed method in this paper, almost the same estimation performance can be achieved for both cases (as shown later), making the use of multiple pilot symbols very attractive.}
\begin{table}[htb]
\centering
\renewcommand\arraystretch{1.4}
\caption{PAPR of OTFS signals in the time domain, where $M=128$, $N=32$ and the SNR of data symbol is 14 dB.}\label{tab:papr}
\begin{tabular}{|>{\centering}p{40pt}|>{\centering}p{100pt}|>{\centering \arraybackslash }p{70pt}|}
\hline
SNRp & \# of pilot symbols & PAPR \\
\hline
\multirow{2}*{40dB} & 1 & 12.5527 dB\\
\cline{2-3}
~ & 10 & 10.4095 dB\\
\hline
\multirow{2}*{50dB} & 1 & 18.7619 dB\\
\cline{2-3}
~ & 10 & 11.3900 dB\\
\hline
\end{tabular}
\end{table}
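The trend in Table \ref{tab:papr} can be reproduced qualitatively with a short simulation. The pilot placement, the power scaling, and the per-symbol IFFT used as the rectangular-waveform time-domain mapping are simplified assumptions, so the absolute numbers will not match the table exactly.

```python
import numpy as np

def papr_db(s):
    p = np.abs(s) ** 2
    return 10 * np.log10(p.max() / p.mean())

def otfs_time_signal(N, M, n_pilot, pilot_power_db, rng):
    # unit-power QPSK data; pilots in one Doppler bin, placed along the delay axis
    x = (rng.choice([-1.0, 1.0], (N, M)) + 1j * rng.choice([-1.0, 1.0], (N, M))) / np.sqrt(2)
    x[0, :n_pilot] = 10 ** (pilot_power_db / 20)
    X = np.sqrt(N / M) * np.fft.ifft(np.fft.fft(x, axis=1), axis=0)  # ISFFT
    return np.fft.ifft(X, axis=1).ravel()  # per-symbol IFFT (rectangular waveform)

rng = np.random.default_rng(0)
# equal total pilot power: one pilot 36 dB above data vs. ten pilots 26 dB above data
single = np.mean([papr_db(otfs_time_signal(32, 128, 1, 36.0, rng)) for _ in range(20)])
multi = np.mean([papr_db(otfs_time_signal(32, 128, 10, 26.0, rng)) for _ in range(20)])
assert single > multi  # spreading the pilot power lowers the PAPR
```

With equal total pilot power, the multi-pilot arrangement shows a PAPR several dB lower, consistent with the trend in the table.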
In this work, we design a flexible channel estimation method, which is able to work with a single or multiple pilot symbols. Moreover, the proposed method is able to estimate the channel gains and the fractional Doppler shifts, which are crucial to improving the estimation performance as demonstrated later.
\section{Structured Sparse Signal Recovery For OTFS Channel Estimation} \label{sec:formulation}
The OTFS channel can be estimated using the pilot symbols and the received signals in the orange area shown in Fig. \ref{fig:rxpattern}. It can be seen from (\ref{eq:IdealFracY}) and (\ref{eq:RectFracYSimplified}) that the OTFS channel is parameterized by the nonzero channel gains $\{h_i\}$, the fractional Doppler shifts $\{\kappa_i\}$ and their indices $\{l_i, k_i\}$, $i=1,\cdots,P$. These parameters will be estimated, based on which the OTFS channel in the DD domain can be constructed.
\subsection{Bi-Orthogonal Waveform} \label{section:BiWave}
{Model \eqref{eq:IdealFracY} involves a number of unknown variables, including $h_i$, $\kappa_i$, $l_i$ and $k_i$ for $i=1,\cdots,P$. In addition, $P$ is unknown as well. These make the estimation very difficult. To overcome this, we define two variables $t$ and $d$ to represent the indices of delay shifts and Doppler shifts, respectively. As only the pilot symbols need to be considered, we have $t \in [0, l_{max} ]$ and $d \in [-k_{max}, k_{max} ]$. Then \eqref{eq:IdealFracY} can be rewritten as
\begin{eqnarray}
y[k,l]=\sum_{t=0}^{l_{max}} \sum_{d=-k_{max}}^{k_{max}} h_{t,d}e^{-j2\pi \frac{t(d+\kappa_{d})}{MN}}\sum_{q=-\widehat{N}}^{\widehat{N}}f(q,\kappa_{d}) \nonumber \\
\times x\big[[k-d+q]_N,[l-t]_M\big]
+ \omega[k,l], \label{eq:IdealRecvFracY}
\end{eqnarray}
where
\begin{eqnarray}
f(q,\kappa_{d})=\frac{1-e^{-j2\pi(-q-\kappa_{d})}}{N-Ne^{-j\frac{2\pi}{N}(-q-\kappa_{d})}}. \label{eq:fqkappa}
\end{eqnarray}
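Although not used explicitly in the sequel, it is worth recording a closed form of \eqref{eq:fqkappa} (a routine simplification): with $\theta=-q-\kappa_d$,
$$f(q,\kappa_d)=e^{-j\pi\theta\frac{N-1}{N}}\frac{\sin(\pi\theta)}{N\sin(\pi\theta/N)},$$
so that $|f(q,\kappa_d)|=\frac{|\sin(\pi\kappa_d)|}{N|\sin(\pi(q+\kappa_d)/N)|}$ for integer $q$. The magnitude peaks at $q=0$ and decays as $|q|$ grows, which is why the inner sum can be truncated to $|q|\leq\widehat{N}$ with $\widehat{N}\ll N$.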
The received signals $\{y[k,l]\}$ in (\ref{eq:IdealRecvFracY}) corresponding to the pilot symbols are in the orange area in Fig. \ref{fig:rxpattern}.
Our aim is to find the non-zero elements in $\{h_{t,d}, \kappa_{d}, t \in [0, l_{max} ], d \in [-k_{max}, k_{max}] \}$ based on $\{y[k,l]\}$. Therefore, the estimation can be formulated as a sparse signal recovery problem.
To facilitate the design of the estimation algorithm, we rewrite \eqref{eq:IdealRecvFracY} in a matrix form as
\begin{eqnarray}
\boldsymbol{y} = \textcolor{black}{\boldsymbol{X}_{bi}}\boldsymbol{c}+\boldsymbol{\omega}, \label{eq:idealVectorY}
\end{eqnarray}
where $\boldsymbol{y} \in \mathbb{C}^{Z\times 1}$, $Z=(l_{max}+M_p)(N_p + 2k_{max}+2\widehat{N})$ is formed by stacking $\{y[k,l]\}$ as a vector, $\textcolor{black}{\boldsymbol{X}_{bi}} \in \mathbb{C}^{Z\times (l_{max}+1)(2k_{max}+1)(2\widehat{N}+1)}$ is constructed based on the pilot symbols, $\boldsymbol{\omega} \in \mathbb{C}^{Z\times 1}$ is obtained by stacking $\{\omega[k,l]\}$, and $\boldsymbol{c}\in \mathbb{C}^{(l_{max}+1)(2k_{max}+1)B\times 1}$ with $B=2\widehat{N}+1$ can be expressed as
\begin{align}
\boldsymbol{c} = [\boldsymbol{c}^T_{0,-k_{max}}, ~\boldsymbol{c}^T_{1,-k_{max}}, ~\cdots, ~\boldsymbol{c}^T_{t, d}, \cdots, \boldsymbol{c}^T_{l_{max},k_{max}}]^T, \label{eq:cvecdef}
\end{align}
where $\boldsymbol{c}_{t, d} \in \mathbb{C}^{B \times 1}$ is given as
\begin{eqnarray}
&&\boldsymbol{c}_{t,d} = h_{t,d}e^{-j2\pi\frac{(d+\kappa_d)t}{MN}}\cdot \nonumber \\
&& \ \ [f(-\widehat{N}, \kappa_d),f(-\widehat{N}+1, \kappa_d),\cdots, f(\widehat{N}, \kappa_d)]^T.
\end{eqnarray}
\textcolor{black}{We can see that the vector $\boldsymbol{c}$ has a structure}, i.e., $\boldsymbol{c}$ is block sparse, and for a non-zero block (subvector) $\boldsymbol{c}_{t,d}$, it is parameterized by only two parameters $h_{t,d}$ and $\kappa_d$.
We will recover the structured sparse vector $\boldsymbol{c}$ and obtain the estimates of the non-zero channel gains and fractional Doppler shifts.
\subsection{Rectangular Waveform}
To facilitate channel estimation, we place the pilot symbols to ensure that their delay index $l \geq l_{max}$, so that (\ref{eq:RectFracY}) is reduced to
\begin{eqnarray}
y[k,l]=\!\!\!\!\!\!\!\!&&\sum_{i=1}^P\sum_{q=-N_i}^{N_i}h_i\frac{1-e^{-j2\pi(-q-\kappa_i)}}{N-Ne^{-j\frac{2\pi}{N}(-q-\kappa_i)}}e^{j2\pi \frac{(l-l_i)(k_i+\kappa_i)}{MN}} \nonumber \\
&& ~~\times x\big[[k-k_i+q]_N,[l-l_i]_M\big] + \omega[k,l]. \label{eq:RectFracYSimplified}
\end{eqnarray}
Comparing (\ref{eq:RectFracYSimplified}) to (\ref{eq:IdealFracY}), we can find that their difference lies in that \textcolor{black}{each transmitted symbol
in (\ref{eq:RectFracYSimplified}) }is rotated by an additional phase $e^{j2\pi \frac{l(k_i+\kappa_i)}{MN}}$.
Similar to the case of bi-orthogonal waveform, model (\ref{eq:RectFracYSimplified}) can be rewritten in a matrix form
\begin{align}
\boldsymbol{y} = \textcolor{black}{\boldsymbol{X}_{rect}}\boldsymbol{c}+\boldsymbol{\omega}, \label{eq:rectVectorY}
\end{align}
where $\boldsymbol{c}$ is the structured sparse vector to be recovered, and the $(z,n)$th element of matrix $\boldsymbol{X}_{rect}$ is given as
\begin{align}
\textcolor{black}{{X}_{rect}^{z,n} = e^{j2\pi\frac{l_z(d_n+\kappa_n)}{MN}}{X}_{bi}^{z,n}} \label{eq:rectX},
\end{align}
where $l_z$ is the delay index corresponding to the $z$th element of $\boldsymbol{y}$, $d_n$ and $\kappa_n$ are the parameters $d$ and $\kappa_d$ of the $n$th element of vector $\boldsymbol{c}$ (refer to the definition of vector $\boldsymbol{c}$ in \eqref{eq:cvecdef}), \textcolor{black}{and ${X}_{bi}^{z,n}$ denotes the $(z,n)$th element of matrix $\boldsymbol{X}_{bi}$}. It is noted that when $z$ and $n$ are given,
$l_z$ and $d_n$ are known, but $\kappa_n$ is unknown. So it is different from the case of the bi-orthogonal waveform in (\ref{eq:idealVectorY}) that the matrix \textcolor{black}{$\boldsymbol{X}_{rect}$} depends on the fractional Doppler shifts, which are unknown. Later we will show that this can be solved by using an iterative estimation strategy.
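The dependence of $\boldsymbol{X}_{rect}$ on the fractional Doppler shifts in \eqref{eq:rectX} is a simple element-wise phase rotation, as the toy sketch below shows. The index vectors, matrix sizes, and the $\kappa_n$ values (which in the algorithm are replaced by their current iterative estimates) are placeholders.

```python
import numpy as np

M, N = 128, 32
Z, C = 6, 4                                   # toy numbers of observations / columns of c
rng = np.random.default_rng(1)
X_bi = rng.standard_normal((Z, C)) + 1j * rng.standard_normal((Z, C))
l_z = np.arange(Z)                            # delay index of each observation y_z
d_n = np.array([-1, 0, 0, 1])                 # Doppler index d of each column
kappa_n = np.array([0.2, 0.0, -0.3, 0.1])     # current fractional-Doppler estimates
X_rect = X_bi * np.exp(2j * np.pi * np.outer(l_z, d_n + kappa_n) / (M * N))
assert np.allclose(X_rect[:, 1], X_bi[:, 1])  # d = 0, kappa = 0: no rotation
```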
As discussed above, the channel estimation problem can be formulated as structured sparse signal recovery. Next, we will develop a Bayesian method to recover the block sparse vector $\boldsymbol{c}$ and obtain the estimates of $\{h_{t,d}\}$ and $\{\kappa_d\}$.
In particular, the factor graph techniques \cite{Kschischang2001} are used, based on which an efficient message passing algorithm is developed.
\section{Bayesian Approach and Message Passing Algorithm} \label{sec:algorithm}
In this section, we first focus on OTFS with the bi-orthogonal waveform, and then extend our discussion to the case of the rectangular waveform.
For the convenience of notation, we define $j = (l_{max}+1)(k_{max}+d)+t+1$. As $t \in [0, l_{max}]$ and $d \in [-k_{max}, k_{max}]$, there is a one-to-one map between the index $j$ and the index pair $(t,d)$. Then we define
\begin{align}
\boldsymbol{c}_j = \boldsymbol{c}_{t,d}, ~ h_j = h_{t,d}, ~ \kappa_j = \kappa_d,
\end{align}
which will be used hereafter.
Inspired by the sparse Bayesian learning \cite{Tipping2001Sparse}, we assume a Gaussian prior with mean 0 and precision $\lambda_j$ for $h_j$ to promote sparsity, i.e.,
\begin{align}
p(h_j|\lambda_j) = \mathcal{CN}(h_j; 0, \lambda_j^{-1}),
\end{align}
where the hyperparameter $\lambda_j$ is Gamma distributed, i.e., $p(\lambda_j)=Ga(\lambda_j;\epsilon_j, \eta_j)$. \textcolor{black}{The hyper-parameters $\epsilon_j$ and $\eta_j$ are respectively set to be 1 and 0, leading to a uniform or noninformative distribution for $\lambda_j$ \cite{cslaplace}.} As the noise precision $\gamma$ is normally unknown, it will also be estimated with an improper prior $p(\gamma)$\textcolor{black}{\cite{Tipping2001Sparse}}.
To facilitate the algorithm design, {we introduce an auxiliary vector
\textcolor{black}{
\begin{align}
\boldsymbol{g}_j \triangleq& [f(-\widehat{N}, \kappa_j),f(-\widehat{N}+1, \kappa_j),\cdots, f(\widehat{N}, \kappa_j)]^T e^{-j2\pi\frac{t(d+\kappa_j)}{MN}} \nonumber \\
\triangleq & [\Phi(-\widehat{N}, \kappa_j), \cdots, \Phi(q, \kappa_j), \cdots, \Phi(\widehat{N}, \kappa_j)]^T, \label{eq:nuj}
\end{align}
}
where
\begin{equation}
\Phi(q, \kappa_j)=f(q,\kappa_j)e^{-j2\pi\frac{t(d+\kappa_j)}{MN}}.
\end{equation}
Then we have
\begin{align}
\boldsymbol{c}_j = h_j \boldsymbol{g}_j.
\end{align}
Define $\boldsymbol{h} =[h_1,\cdots, h_J]^T$, $\boldsymbol{g}=[\boldsymbol{g}_1^T, \cdots, \boldsymbol{g}_J^T ]^T$, $\boldsymbol{\kappa}=[\kappa_1, \cdots, \kappa_J]^T$ and $\boldsymbol{\lambda}=[\lambda_1, \cdots, \lambda_J]^T$, where $J=(l_{max}+1)(2k_{max}+1)$. Then the joint conditional distribution of the unknown variables can be factorized as
\begin{align}
p&(\boldsymbol{c}, \boldsymbol{h}, \boldsymbol{g}, \boldsymbol{\kappa},\boldsymbol{\lambda}, \gamma |\boldsymbol{y}) \nonumber \\ \propto & ~p(\boldsymbol{y}|\boldsymbol{c},\gamma)p(\boldsymbol{c}|\boldsymbol{h},\boldsymbol{g})p(\boldsymbol{h}|\boldsymbol{\lambda})p(\boldsymbol{\lambda})p(\boldsymbol{g}|\boldsymbol{\kappa})p(\boldsymbol{\kappa})p(\gamma) \nonumber\\
=&~ p(\gamma) p(\boldsymbol{y}|\boldsymbol{c},\gamma){\prod}_{j,b}p(c_{jb}|{h}_j,g_{jb})p(h_j|\lambda_j)p(\lambda_j)\nonumber \\ &\qquad\qquad\times p(g_{jb}|\kappa_j)p(\kappa_j)\nonumber \\
\triangleq &~ f_{\gamma}(\gamma)f_{\boldsymbol{y}}(\boldsymbol{c},\gamma){\prod}_{j,b}f_{c_{jb}}(c_{jb},h_j,g_{jb})f_{h_j}(h_j, \lambda_j)f_{\lambda_j}(\lambda_j) \nonumber \\ & \qquad\qquad \times f_{g_{jb}}(g_{jb},\kappa_j)f_{\kappa_j}(\kappa_j),\label{eq:idealFactor}
\end{align}
\textcolor{black}{where $1\leq j \leq J$, $1 \leq b \leq B$, and $c_{jb}$ and $g_{jb}$ represent the $b$th elements of $\boldsymbol{c}_j$ and $\boldsymbol{g}_j$, respectively. The correspondence between the factors and the distributions (and their detailed forms) is listed in Table \ref{tab:factor}.} \textcolor{black}{We aim to obtain the approximate marginals of the parameters (and thereby their estimates) by performing approximate inference. In particular, we use factor graph techniques, where variational message passing (VMP) \cite{VAMP, MF2002} and belief propagation (BP) \cite{Kschischang2001} are combined to achieve efficient approximate inference. We note that variational inference has also been used for channel estimation, e.g., in \cite{Reviewer3R1} and \cite{Reviewer3R2}.}
\tikzstyle{factornode} = [draw, fill=white, circle, inner sep=1pt,minimum size=0.8cm]
\tikzstyle{funnode} = [draw, rectangle,fill=black!100, minimum size = 0.6cm]
\begin{figure}[htbp]
\centering
\begin{tikzpicture} [scale=0.84, transform shape]
\node (cj1)[factornode] at (0,0) {$c_{j1}$};
\node (cjb)[factornode, below = 0.34cm of cj1] {$c_{jb}$};
\node (cjB)[factornode, below = 0.34cm of cjb] {$c_{jB}$};
\node (vdot1)[above = 0.2cm of cj1] {$\boldsymbol{\vdots}$};
\node (vdot2)[below = 0.34cm of cjB] {$\boldsymbol{\vdots}$};
\node (cJ1)[factornode, below = 0.34cm of vdot2] {$c_{J1}$};
\node (cJb)[factornode, below = 0.34cm of cJ1] {$c_{Jb}$};
\node (cJB)[factornode, below = 0.34cm of cJb] {$c_{JB}$};
\node (fcj1)[right = 0.4cm of cj1, funnode] {};
\node (fcjb)[right = 0.4cm of cjb, funnode] {};
\node (fcjB)[right = 0.4cm of cjB, funnode] {};
\node (fcJ1)[right = 0.4cm of cJ1, funnode] {};
\node (fcJb)[right = 0.4cm of cJb, funnode] {};
\node (fcJB)[right = 0.4cm of cJB, funnode] {};
\node [above= -0.1cm of fcj1]{$f_{c_{j1}}$};
\node [above= -0.1cm of fcjb]{$f_{c_{jb}}$};
\node [above= -0.1cm of fcjB]{$f_{c_{jB}}$};
\node [above= -0.1cm of fcJ1]{$f_{c_{J1}}$};
\node [above= -0.1cm of fcJb]{$f_{c_{Jb}}$};
\node [above= -0.1cm of fcJB]{$f_{c_{JB}}$};
\node (gj1)[factornode, right = 1cm of fcj1] {$g_{j1}$};
\node (gjb)[factornode, below = 0.34cm of gj1] {$g_{jb}$};
\node (gjB)[factornode, below = 0.34cm of gjb] {$g_{jB}$};
\node (hj)[factornode, below = 0.34cm of gjB] {$h_j$};
\node (gJ1)[factornode, below = 0.34cm of hj] {$g_{J1}$};
\node (gJb)[factornode, below = 0.34cm of gJ1] {$g_{Jb}$};
\node (gJB)[factornode, below = 0.34cm of gJb] {$g_{JB}$};
\node (hJ)[factornode, below = 0.34cm of gJB] {$h_J$};
\node (fgj1)[funnode, right = 0.4cm of gj1] {};
\node (fgjb)[funnode, right = 0.4cm of gjb] {};
\node (fgjB)[funnode, right = 0.4cm of gjB] {};
\node (fhj)[funnode, right = 0.4cm of hj] {};
\node (fgJ1)[funnode, right = 0.4cm of gJ1] {};
\node (fgJb)[funnode, right = 0.4cm of gJb] {};
\node (fgJB)[funnode, right = 0.4cm of gJB] {};
\node (fhJ)[funnode, right = 0.4cm of hJ] {};
\node [above= -0.1cm of fgj1]{$f_{g_{j1}}$};
\node [above= -0.1cm of fgjb]{$f_{g_{jb}}$};
\node [above= -0.1cm of fgjB]{$f_{g_{jB}}$};
\node [above= -0.1cm of fhj]{$f_{h_j}$};
\node [above= -0.1cm of fgJ1]{$f_{g_{J1}}$};
\node [above= -0.1cm of fgJb]{$f_{g_{Jb}}$};
\node [above= -0.1cm of fgJB]{$f_{g_{JB}}$};
\node [above= -0.1cm of fhJ]{$f_{h_J}$};
\node (kappaj)[factornode, right = 0.4cm of fgjb] {$\kappa_j$};
\node (lambdaj)[factornode, right = 0.4cm of fhj] {$\lambda_j$};
\node (kappaJ)[factornode, right = 0.4cm of fgJb] {$\kappa_J$};
\node (lambdaJ)[factornode, right = 0.4cm of fhJ] {$\lambda_J$};
\node (fkappaj)[funnode, right = 0.2cm of kappaj] {};
\node (flambdaj)[funnode, right = 0.2cm of lambdaj] {};
\node (fkappaJ)[funnode, right = 0.2cm of kappaJ] {};
\node (flambdaJ)[funnode, right = 0.2cm of lambdaJ] {};
\node [above= -0.1cm of fkappaj]{$f_{\kappa_j}$};
\node [above= -0.1cm of flambdaj]{$f_{\lambda_j}$};
\node [above= -0.1cm of fkappaJ]{$f_{\kappa_J}$};
\node [above= -0.1cm of flambdaJ]{$f_{\lambda_J}$};
\node (fy)[rectangle,draw, left = 0.4cm of cj1, text height=8cm, minimum size = 0.6cm, yshift=-3.4cm, align=center] {};
\node (fytext)[left = -0.6cm of fy, align=center] {$f_{\boldsymbol{y}}$};
\node (gamma)[factornode, left = 0.3cm of fy] {$\gamma$};
\node (fgamma)[funnode, left = 0.3cm of gamma] {};
\node [above= -0.1cm of fgamma]{$f_{\gamma}$};
\node (part2) at (5.6, 0.8) {$\uppercase\expandafter{\romannumeral2}$};
\node (part1) at (-3.2, 0.8) {$\uppercase\expandafter{\romannumeral1}$};
\draw (cj1) -- (fcj1);
\draw (cjb) -- (fcjb);
\draw (cjB) -- (fcjB);
\draw (cJ1) -- (fcJ1);
\draw (cJb) -- (fcJb);
\draw (cJB) -- (fcJB);
\draw (gj1) -- (fcj1.east) -- (hj);
\draw (gjb) -- (fcjb.east) -- (hj);
\draw (gjB) -- (fcjB.east) -- (hj);
\draw (gJ1) -- (fcJ1.east) -- (hJ);
\draw (gJb) -- (fcJb.east) -- (hJ);
\draw (gJB) -- (fcJB.east) -- (hJ);
\draw (gj1) -- (fgj1.west) -- (fgj1.east) -- (kappaj);
\draw (gjb) -- (fgjb.west) -- (fgjb.east) -- (kappaj);
\draw (gjB) -- (fgjB.west) -- (fgjB.east) -- (kappaj);
\draw (hj) -- (fhj) -- (lambdaj) -- (flambdaj);
\draw (gJ1) -- (fgJ1.west)-- (fgJ1.east) -- (kappaJ);
\draw (gJb) -- (fgJb.west)-- (fgJb.east) -- (kappaJ);
\draw (gJB) -- (fgJB.west)-- (fgJB.east) -- (kappaJ);
\draw (hJ) -- (fhJ) -- (lambdaJ) -- (flambdaJ);
\draw (kappaj) -- (fkappaj);
\draw (kappaJ) -- (fkappaJ);
\draw (fgamma) -- (gamma);
\draw (gamma) -- ($(gamma) + (0.7,0)$);
\draw (cj1) -- ($(cj1) + (-0.82,0)$);
\draw (cjb) -- ($(cjb) + (-0.82,0)$);
\draw (cjB) -- ($(cjB) + (-0.82,0)$);
\draw (cJ1) -- ($(cJ1) + (-0.82,0)$);
\draw (cJb) -- ($(cJb) + (-0.82,0)$);
\draw (cJB) -- ($(cJB) + (-0.82,0)$);
\draw[thick, dashed, rounded corners] ($(0.6,1.4)$) rectangle ($(6.4,-8.6)$);
\draw[thick, dashed, rounded corners] ($(-0.6,1.4)$) rectangle ($(-3.6,-8.6)$);
\end{tikzpicture}
\caption{\textcolor{black}{ Factor graph representation of \eqref{eq:idealFactor}.}}
\label{fig:idealFactorGraph}
\end{figure}
\begin{table}[htb]
\color{black}
\centering
\renewcommand\arraystretch{1.2}
\caption{\textcolor{black}{Correspondence between the factors and distributions in (\ref{eq:idealFactor})}.}\label{tab:factor}
\begin{tabular}{>{\centering}p{30pt}>{\centering}p{60pt} >{\centering \arraybackslash }p{120pt}}
\hline
Factor & Distribution & Function \\
\hline
$f_{\gamma}$ & $p(\gamma)$ & $\gamma^{-1}$ \\
$f_{\boldsymbol{y}}$ & $p(\boldsymbol{y}|\boldsymbol{c},\gamma)$ & $\mathcal{CN}(\boldsymbol{y}; \boldsymbol{X}_{bi}\boldsymbol{c},\gamma^{-1}\boldsymbol{I}_Z)$ \\
$f_{c_{jb}}$ & $p(c_{jb}|h_j, g_{jb})$ & $\delta(c_{jb} - h_jg_{jb})$ \\
$f_{h_j}$ & $p(h_j|\lambda_j)$ & $\mathcal{CN}(h_j; 0, \lambda_j^{-1})$ \\
$f_{\lambda_j}$ & $p(\lambda_j)$ & $Ga(\lambda_j;\epsilon_j, \eta_j)$ \\
$f_{g_{jb}}$ & $p(g_{jb}|\kappa_j)$ & $\delta(g_{jb} - \Phi(-\widehat{N}+b-1,\kappa_j))$ \\
$f_{\kappa_j}$ & $p(\kappa_j)$ & $U[-0.5,0.5]$ \\
\hline
\end{tabular}
\end{table}
The factorization of (\ref{eq:idealFactor}) can be visualized by the factor graph shown in Fig. \ref{fig:idealFactorGraph}, where we partition the factor graph into two parts: Part \uppercase\expandafter{\romannumeral1} and Part \uppercase\expandafter{\romannumeral2}.
\textcolor{black}{To keep the graph clear, we only show the function nodes and variables nodes associated with the $j$th and $J$th blocks of $\boldsymbol{c}$}.
In the following, we derive the message computations for the forward (from left to right) and backward (from right to left) passing in both parts. We use $m_{A\rightarrow B}(x)$ to denote a message passed from a function node $A$ to a variable node $B$, which is a function of $x$, and use $n_{B\rightarrow A}(x)$ to denote a message passed from a variable node $B$ to a function node $A$\textcolor{black}{, which is also a function of $x$}. Meanwhile, the arrows above the mean and the variance of a Gaussian message indicate the direction of the message passing. In addition, we use $b(x)$ to denote the belief of a variable $x$. Note that, if a forward computation requires backward messages, the relevant messages from the previous iteration are used by default.
\subsection{Message Computations in Part \uppercase\expandafter{\romannumeral1}}
Assume that the belief of $\boldsymbol{c}$ is known, which turns out to be Gaussian, i.e., $b(\boldsymbol{c})\propto \mathcal{CN}(\boldsymbol{c};\boldsymbol{c}^p,\boldsymbol{V}_c^p)$, as given later in (\ref{eq:beliefcvar}) and (\ref{eq:beliefcmean}). {The VMP or mean field (MF)}
is used at the function node $f_{\boldsymbol{y}}(\boldsymbol{c},\gamma)$, and the message from $f_{\boldsymbol{y}}(\boldsymbol{c},\gamma)$ to the variable node $\gamma$ can be computed as
\begin{align}
m_{f_{\boldsymbol{y}}\rightarrow \gamma} (\gamma) &\propto \exp \int \ln f_{\boldsymbol{y}}(\boldsymbol{c},\gamma)b(\boldsymbol{c})d\boldsymbol{c} \nonumber\\
&\propto {\gamma^Z}\exp\{-\gamma[(\boldsymbol{y}-\boldsymbol{X}_{bi}\boldsymbol{c}^p)^H(\boldsymbol{y}-\boldsymbol{X}_{bi}\boldsymbol{c}^p) \nonumber \\ & \qquad \qquad + Tr\{\boldsymbol{X}_{bi}\boldsymbol{V_c}^p\boldsymbol{X}_{bi}^H\}]\}.
\end{align}
With the prior $f_{\gamma}(\gamma) \propto 1/\gamma$, the belief $b(\gamma)$ can be expressed as
\begin{align}
b(\gamma) \propto \gamma^{-1}m_{f_{\boldsymbol{y}}\rightarrow \gamma }(\gamma).
\end{align}
Then, with the MF rule and the belief $b(\gamma)$, we can compute the outgoing message $m_{f_{\boldsymbol{y}}\rightarrow {c}_{jb}}(c_{jb})$ as
\begin{align}
&m_{f_{\boldsymbol{y}}\rightarrow{c}_{jb}}(c_{jb}) \nonumber \\ &\propto \exp\int \ln f_{\boldsymbol{y}}(\boldsymbol{c},\gamma)b(\gamma)b(\boldsymbol{c}_{\sim c_{jb}})d\gamma d\boldsymbol{c}_{\sim c_{jb}},
\end{align}
where $\boldsymbol{c}_{\sim c_{jb}}$ denotes a vector obtained by removing $c_{jb}$ from $\boldsymbol{c}$.
However, computing this message directly involves high complexity. A more efficient way is to first compute the belief of $c_{jb}$ and then the outgoing extrinsic message. According to the MF rule, the message $m_{f_{\boldsymbol{y}}\rightarrow\boldsymbol{c}}(\boldsymbol{c})$ can be expressed as
\begin{align}
&m_{f_{\boldsymbol{y}}\rightarrow\boldsymbol{c}}(\boldsymbol{c})\nonumber \\ &\propto \exp \int \ln f_{\boldsymbol{y}}(\boldsymbol{c},\gamma)b(\gamma)d\gamma \nonumber\\
&\propto ~\mathcal{CN}\left(\boldsymbol{c}; (\boldsymbol{X}_{bi}^H\boldsymbol{X}_{bi})^{-1}\boldsymbol{X}_{bi}^H\boldsymbol{y}, \hat{\gamma}^{-1}(\boldsymbol{X}_{bi}^H\boldsymbol{X}_{bi})^{-1}\right),
\end{align}
where
\begin{align}
\hat{\gamma} &= \int \gamma b(\gamma) d\gamma \nonumber \\
&= \frac{Z}{(\boldsymbol{y} - \boldsymbol{X}_{bi}\boldsymbol{c}^p)^H(\boldsymbol{y} - \boldsymbol{X}_{bi}\boldsymbol{c}^p) + Tr\{\boldsymbol{X}_{bi}\boldsymbol{V}_c^p\boldsymbol{X}_{bi}^H\}}. \label{eq:noiseprecision}
\end{align}
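As an illustration, the update (\ref{eq:noiseprecision}) divides the number of observations by the expected residual energy under the belief of $\boldsymbol{c}$. A minimal sketch (with $\boldsymbol{X}$ standing in for $\boldsymbol{X}_{bi}$ and hypothetical inputs):

```python
import numpy as np

def noise_precision(y, X, c_mean, c_cov):
    """MF estimate of the noise precision, cf. (eq:noiseprecision):
    gamma_hat = Z / ( ||y - X c^p||^2 + Tr{ X V_c^p X^H } )."""
    Z = y.size
    r = y - X @ c_mean  # residual under the posterior mean of c
    return Z / (np.vdot(r, r).real + np.trace(X @ c_cov @ X.conj().T).real)
```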
As the incoming message $m_{f_{c_{jb}}\rightarrow c_{jb}}(c_{jb})\propto \mathcal{CN}(c_{jb};\overset{\scriptscriptstyle\leftarrow}{c_{jb}},\overset{\scriptscriptstyle\leftarrow}{\nu}_{c_{jb}})$, the belief of $\boldsymbol{c}$ is Gaussian with covariance matrix $\boldsymbol{V}_c^p$ and mean vector $\boldsymbol{c}^p$, which can be computed as
\begin{align}
\boldsymbol{V}_c^p =& (\boldsymbol{V}_c^{-1} +\hat{\gamma}\boldsymbol{X}_{bi}^H\boldsymbol{X}_{bi})^{-1},\label{eq:beliefcvar} \\
\boldsymbol{c}^p =& \boldsymbol{V}^p_c (\boldsymbol{V}_c^{-1}\overset{\scriptscriptstyle\leftarrow}{\boldsymbol{c}} + \hat{\gamma}\boldsymbol{X}_{bi}^H\boldsymbol{y}),\label{eq:beliefcmean}
\end{align}
{where $\boldsymbol{V}_{\boldsymbol{c}}$ is a diagonal matrix with the diagonal elements given by $\{\overset{\scriptscriptstyle\leftarrow}{\nu}_{c_{jb}}\}$ and $\overset{\scriptscriptstyle\leftarrow}{\boldsymbol{c}}$ is a column vector that consists of $\{\overset{\scriptscriptstyle\leftarrow}{c}_{jb}\}$. The computations of the mean $\overset{\scriptscriptstyle\leftarrow}{c}_{jb}$ and the variance $\overset{\scriptscriptstyle\leftarrow}{\nu}_{c_{jb}}$ are deferred to \eqref{eq:cpriormean} and \eqref{eq:cpriorvar}.}
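The computations (\ref{eq:beliefcvar}) and (\ref{eq:beliefcmean}) form a standard LMMSE-type update. A minimal sketch with hypothetical inputs (the matrix inverse is used directly for clarity, without any fast implementation):

```python
import numpy as np

def belief_c(y, X, c_back_mean, c_back_var, gamma_hat):
    """Gaussian belief of c, cf. (eq:beliefcvar)-(eq:beliefcmean): combine the
    diagonal Gaussian message from Part II (mean c_back_mean, variances
    c_back_var) with the observation model y = X c + noise of precision gamma_hat."""
    Vc_inv = np.diag(1.0 / np.asarray(c_back_var, dtype=float))
    Vp = np.linalg.inv(Vc_inv + gamma_hat * X.conj().T @ X)
    cp = Vp @ (Vc_inv @ c_back_mean + gamma_hat * X.conj().T @ y)
    return cp, Vp
```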
Hence the outgoing message $m_{f_{\boldsymbol{y}}\rightarrow {c}_{jb}}(c_{jb})$ is also Gaussian, and can be expressed as \cite{GuoAConcise}
\begin{align}
m_{f_{\boldsymbol{y}}\rightarrow {c}_{jb}}(c_{jb}) = \mathcal{CN}(c_{jb};\overset{\scriptscriptstyle\rightarrow}{c}_{jb},\overset{\scriptscriptstyle\rightarrow}{\nu}_{c_{jb}})
\end{align}
with
\begin{align}
\overset{\scriptscriptstyle\rightarrow}{\nu}_{c_{jb}} =& \left(\frac{1}{\nu_{c_{jb}}^p} - \frac{1}{\overset{\scriptscriptstyle\leftarrow}{\nu}_{c_{jb}}}\right)^{-1}, \label{eq:extrinsicvar}\\
\overset{\scriptscriptstyle\rightarrow}{c}_{jb} =& \overset{\scriptscriptstyle\rightarrow}{\nu}_{c_{jb}}\left(\frac{{c}_{jb}^p}{\nu_{c_{jb}}^p} - \frac{\overset{\scriptscriptstyle\leftarrow}{c}_{jb}}{\overset{\scriptscriptstyle\leftarrow}{\nu}_{c_{jb}}}\right),\label{eq:extrinsicmean}
\end{align}
where ${c}_{jb}^p$ is the $b$th element of the $j$th block of ${\boldsymbol{c}}^p$, and $\nu_{c_{jb}}^p$ is the $b$th element of the $j$th block of the vector that consists of the diagonal elements of $\boldsymbol{V}_{\boldsymbol{c}}^p$.
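The extrinsic computation in (\ref{eq:extrinsicvar}) and (\ref{eq:extrinsicmean}) is a division of Gaussians, i.e., the incoming message is removed from the belief. A minimal sketch for scalar messages:

```python
def extrinsic(post_mean, post_var, in_mean, in_var):
    """Extrinsic Gaussian message, cf. (eq:extrinsicvar)-(eq:extrinsicmean):
    'divide' the belief N(post_mean, post_var) by the incoming
    message N(in_mean, in_var)."""
    var = 1.0 / (1.0 / post_var - 1.0 / in_var)
    mean = var * (post_mean / post_var - in_mean / in_var)
    return mean, var
```

Multiplying the extrinsic message back with the incoming message recovers the belief, which is a convenient consistency check.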
\subsection{Message Computations in Part \uppercase\expandafter{\romannumeral2}}
\subsubsection{Forward Message Passing} With the incoming message from Part \uppercase\expandafter{\romannumeral1} $n_{c_{jb}\rightarrow f_{c_{jb}}} (c_{jb}) = m_{f_{\boldsymbol{y}}\rightarrow {c}_{jb}} (c_{jb}) = \mathcal{CN}(c_{jb};\overset{\scriptscriptstyle\rightarrow}{c}_{jb}, \overset{\scriptscriptstyle\rightarrow}{\nu}_{c_{jb}})$ and the factor $f_{c_{jb}}(c_{jb}, h_j,g_{jb})=\delta(c_{jb}-h_jg_{jb})$, we can obtain an intermediate function node $\tilde{f}_{c_{jb}}(h_j, g_{jb})$ according to BP \cite{Kschischang2001}, i.e.,
\begin{align}
\tilde{f}_{c_{jb}}(h_j, g_{jb}) =& \int f_{c_{jb}}(c_{jb}, h_j,g_{jb}) n_{c_{jb}\rightarrow f_{c_{jb}}}(c_{jb})d{c_{jb}} \nonumber \\
=& \mathcal{CN}(h_jg_{jb};\overset{\scriptscriptstyle\rightarrow}{c}_{jb}, \overset{\scriptscriptstyle\rightarrow}{\nu}_{c_{jb}}).
\end{align}
With the local function $\tilde{f}_{c_{jb}}(h_j, g_{jb})$, we can compute the message from $f_{c_{jb}}$ to $h_j$ by treating $g_{jb}$ as a constant, i.e.,
\begin{align}
m_{f_{c_{jb}}\rightarrow h_j} (h_j) = \mathcal{CN}(h_j; \overset{\scriptscriptstyle\rightarrow}{h}_{jb}, \overset{\scriptscriptstyle\rightarrow}{\nu}_{h_{jb}}),
\end{align}
where
\begin{align}
\overset{\scriptscriptstyle\rightarrow}{h}_{jb} =& \frac{\overset{\scriptscriptstyle\rightarrow}{c}_{jb}}{\hat{g}_{jb}}, \label{add1h} \\
\overset{\scriptscriptstyle\rightarrow}{\nu}_{h_{jb}} =& \frac{\overset{\scriptscriptstyle\rightarrow}{\nu}_{c_{jb}}}{|\hat{g}_{jb}|^2} \label{eq:left2h}
\end{align}
with $\hat{g}_{jb}$ being the mean of the Gaussian belief of $g_{jb}$, which is computed in (\ref{eq:vbeliefmean}).
\textcolor{black}{The product of the Gaussian messages $\{m_{f_{c_{jb}}\rightarrow h_j}(h_j), \forall b \}$ is still Gaussian \cite{GuoAConcise}, i.e.,}
\begin{align}
q_j(h_j) = \prod_b m_{f_{c_{jb}}\rightarrow h_j} (h_j)\propto \mathcal{CN}(h_j;\hat{q}_j, \nu_{q_j}),
\end{align}
where
\begin{align}
{\nu}_{q_j} =& \left(\sum_b\frac{1}{\overset{\scriptscriptstyle\rightarrow}{\nu}_{h_{jb}}}\right)^{-1}, \\
\hat{q}_j =& {\nu}_{q_j}\sum_b\frac{\overset{\scriptscriptstyle\rightarrow}{h}_{jb}}{\overset{\scriptscriptstyle\rightarrow}{\nu}_{h_{jb}}}.
\end{align}
With the message $m_{f_{h_j}\rightarrow h_j} (h_j) \propto \mathcal{CN}(h_j;0,\widehat{\lambda}_j^{-1})$, which is given later in (\ref{eq:fg2g}), the belief $b(h_j)$ of $h_j$ is obtained as
\begin{align}
b(h_j) = q_j(h_j)m_{f_{h_j}\rightarrow h_j}(h_j) \propto \mathcal{CN}(h_j;\hat{h}_j, \nu_{h_j}),
\end{align}
where
\begin{align}
\hat{h}_j =& \frac{\hat{q}_j}{1+{\nu}_{q_j}\widehat{\lambda}_j}, \label{eq:gbeliefmean} \\
\nu_{h_j} =& \left(\frac{1}{{\nu}_{q_j}} + \widehat{\lambda}_j\right)^{-1}. \label{eq:gbeliefvar}
\end{align}
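The belief of $h_j$ thus combines the per-observation Gaussian messages with the zero-mean Gaussian prior. A minimal sketch of (\ref{eq:gbeliefmean}) and (\ref{eq:gbeliefvar}), using real-valued scalars for simplicity:

```python
def belief_h(means, variances, lambda_hat):
    """Belief of h_j: product of the Gaussian messages from {f_{c_jb}} with the
    zero-mean prior CN(0, 1/lambda_hat), cf. (eq:gbeliefmean)-(eq:gbeliefvar)."""
    nu_q = 1.0 / sum(1.0 / v for v in variances)                # product variance
    q_hat = nu_q * sum(m / v for m, v in zip(means, variances))  # product mean
    h_hat = q_hat / (1.0 + nu_q * lambda_hat)
    nu_h = 1.0 / (1.0 / nu_q + lambda_hat)
    return h_hat, nu_h
```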
The message $m_{f_{h_j}\rightarrow \lambda_j}(\lambda_j)$ is then computed by using the MF rule, i.e.,
\begin{align}
m_{f_{h_j}\rightarrow \lambda_j}(\lambda_j) \propto& \exp\left\{\int \ln f_{h_j}(h_j, \lambda_j)b(h_j)d{h_j}\right\}\nonumber \\ \propto & \lambda_j\exp\{-\lambda_j(|\hat{h}_j|^2 + \nu_{h_j})\},
\end{align}
so the belief $b(\lambda_j)$ of the hyperparameter $\lambda_j$ is given as
\begin{align}
b(\lambda_j) =& m_{f_{h_j}\rightarrow \lambda_j}(\lambda_j)f_{\lambda_j}(\lambda_j) \nonumber \\ \propto& \lambda_j^{\epsilon_j} \exp\left\{-\lambda_j(\eta_j+|\hat{h}_j|^2 + \nu_{h_j})\right\}.
\end{align}
The message from $f_{c_{jb}}$ to $g_{jb}$ is computed with the MF rule, i.e.,
\begin{align}
m_{f_{c_{jb}}\rightarrow g_{jb}} (g_{jb}) \propto& \exp\left\{\int \ln\tilde{f}_{c_{jb}}(h_j, g_{jb})b(h_j)d{h_{j}}\right\} \nonumber \\
=& \mathcal{CN}(g_{jb}; \overset{\scriptscriptstyle\rightarrow}{g}_{jb}, \overset{\scriptscriptstyle\rightarrow}{\nu}_{g_{jb}}),
\end{align}
where
\begin{align}
\overset{\scriptscriptstyle\rightarrow}{g}_{jb} = & \frac{\overset{\scriptscriptstyle\rightarrow}{c}_{jb}\hat{h}_{j}^*}{|\hat{h}_j|^2 + \nu_{h_j}}, \label{eq:rightvjbmean} \\
\overset{\scriptscriptstyle\rightarrow}{\nu}_{g_{jb}} = & \frac{\overset{\scriptscriptstyle\rightarrow}{\nu}_{c_{jb}}}{|\hat{h}_j|^2 + \nu_{h_j}}. \label{eq:rightvjbvar}
\end{align}
It is noted that the local function node $f_{g_{jb}}$ involves a nonlinear function $\Phi(q,\kappa_j)$, which makes the message computation for $\kappa_j$ intractable. \textcolor{black}{Inspired by the extended Kalman filter \cite{extended}}, to solve this problem, $\Phi(q,\kappa_j)$ is linearized using the first-order Taylor expansion about the estimate of $\kappa_j$ in the last iteration, i.e.,
\begin{align}
\Phi(q,\kappa_j) \approx \Phi(q, \hat{\kappa}_j^{'}) + \Phi'(q, \hat{\kappa}_j^{'})(\kappa_j - \hat{\kappa}_j^{'}), \label{eq:taylor}
\end{align}
with
\begin{align}
\Phi'(q, \hat{\kappa}_j^{'}) =& \left(\frac{-j2\pi t}{MN}\right)\Phi(q, \hat{\kappa}_j^{'})+ \nonumber\\ & e^{-j2\pi\frac{t(d+\hat{\kappa}_j^{'})}{MN}} \frac{1}{N}\sum_{n=1}^{N-1}j\frac{2n\pi}{N}e^{j\frac{2n\pi}{N}(q + \hat{\kappa}_j^{'})},
\end{align}
where $\hat{\kappa}_j^{'}$ denotes the estimate of $\kappa_j$ in the last iteration.} {Then, with the BP rule and the approximation in (\ref{eq:taylor}), the message $m_{f_{g_{jb}}\rightarrow \kappa_j} (\kappa_j)$ can be expressed as
\textcolor{black}{
\begin{align}
m_{f_{g_{jb}}\rightarrow \kappa_j}(\kappa_j) =&
\int f_{g_{jb}}(g_{jb},\kappa_j)m_{f_{c_{jb}}\rightarrow g_{jb}}(g_{jb}) d{g_{jb}}. \label{eq:msgfgjb2kappa}
\end{align}
Note that $\kappa_j$ is a real valued variable. To ensure that the message $m_{f_{g_{jb}}\rightarrow \kappa_j} (\kappa_j)$ is real, we rewrite the function $f_{g_{jb}}(g_{jb},\kappa_j)$ as
\begin{align}
&f_{g_{jb}}(g_{jb},\kappa_j)\nonumber\\
&=\delta\left(\mathcal{R}[g_{jb}] - \mathcal{R}[\Phi(Q,\kappa_j)]\right)\delta\left(\mathcal{I}[g_{jb}] - \mathcal{I}[\Phi(Q,\kappa_j)]\right),
\end{align}
where $Q=-\widehat{N}+b-1$. Then the message $m_{f_{g_{jb}}\rightarrow \kappa_j}(\kappa_j)$ in (\ref{eq:msgfgjb2kappa}) can be obtained as
\begin{align}
m_{f_{g_{jb}}\rightarrow \kappa_j}(\kappa_j) \propto & \mathcal{N}(\kappa_j; \overset{\scriptscriptstyle\rightarrow}{\kappa}_{jb}^{\mathcal{R}},\overset{\scriptscriptstyle\rightarrow}{\nu}_{\kappa_{jb}}^{\mathcal{R}})\mathcal{N}(\kappa_j; \overset{\scriptscriptstyle\rightarrow}{\kappa}_{jb}^{\mathcal{I}},\overset{\scriptscriptstyle\rightarrow}{\nu}_{\kappa_{jb}}^{\mathcal{I}}) \nonumber \\
\propto & \mathcal{N} (\kappa_j; \overset{\scriptscriptstyle\rightarrow}{\kappa}_{jb},\overset{\scriptscriptstyle\rightarrow}{\nu}_{\kappa_{jb}}),
\end{align}
where
\begin{align}
\overset{\scriptscriptstyle\rightarrow}{\kappa}_{jb}^{\mathcal{R}} =& \frac{\mathcal{R}[\overset{\scriptscriptstyle\rightarrow}{g}_{jb}] - \mathcal{R}[\Phi(Q, \hat{\kappa}_j^{'})] + \mathcal{R}[\Phi'(Q, \hat{\kappa}_j^{'})]\hat{\kappa}_j^{'}}{\mathcal{R}[\Phi'(Q, \hat{\kappa}_j^{'})]} , \\
\overset{\scriptscriptstyle\rightarrow}{\nu}_{\kappa_{jb}}^{\mathcal{R}} =& \frac{\overset{\scriptscriptstyle\rightarrow}{\nu}_{g_{jb}}}{2|\mathcal{R}[\Phi'(Q, \hat{\kappa}_j^{'})]|^2},\\
\overset{\scriptscriptstyle\rightarrow}{\kappa}_{jb}^{\mathcal{I}} =& \frac{\mathcal{I}[\overset{\scriptscriptstyle\rightarrow}{g}_{jb}] - \mathcal{I}[\Phi(Q, \hat{\kappa}_j^{'})] + \mathcal{I}[\Phi'(Q, \hat{\kappa}_j^{'})]\hat{\kappa}_j^{'}}{\mathcal{I}[\Phi'(Q, \hat{\kappa}_j^{'})]} , \\
\overset{\scriptscriptstyle\rightarrow}{\nu}_{\kappa_{jb}}^{\mathcal{I}} =& \frac{\overset{\scriptscriptstyle\rightarrow}{\nu}_{g_{jb}}}{2|\mathcal{I}[\Phi'(Q, \hat{\kappa}_j^{'})]|^2},
\end{align}
and
\begin{align}
\overset{\scriptscriptstyle\rightarrow}{\nu}_{\kappa_{jb}} =& \left(\frac{1}{\overset{\scriptscriptstyle\rightarrow}{\nu}_{\kappa_{jb}}^{\mathcal{R}}} + \frac{1}{\overset{\scriptscriptstyle\rightarrow}{\nu}_{\kappa_{jb}}^{\mathcal{I}}}\right)^{-1},\label{eq:rightkappajbmean} \\
\overset{\scriptscriptstyle\rightarrow}{\kappa}_{jb} =& \overset{\scriptscriptstyle\rightarrow}{\nu}_{\kappa_{jb}}\left(\frac{\overset{\scriptscriptstyle\rightarrow}{\kappa}_{jb}^{\mathcal{R}}}{\overset{\scriptscriptstyle\rightarrow}{\nu}_{\kappa_{jb}}^{\mathcal{R}}} + \frac{\overset{\scriptscriptstyle\rightarrow}{\kappa}_{jb}^{\mathcal{I}}}{\overset{\scriptscriptstyle\rightarrow}{\nu}_{\kappa_{jb}}^{\mathcal{I}}}\right).\label{eq:rightkappajbvar}
\end{align}
}
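The linearization (\ref{eq:taylor}) can be verified numerically. The sketch below assumes $f(q,\kappa)=\frac{1}{N}\sum_{n=0}^{N-1}e^{j2\pi n(q+\kappa)/N}$, which is consistent with the expression for $\Phi'$ above (the constant $n=0$ term has zero derivative), and uses hypothetical values of $N$, $M$, $t$ and $d$:

```python
import numpy as np

# Hypothetical parameters for illustration only.
N, M, t, d = 16, 32, 2, 1

def f(q, kappa):
    # Assumed form of f(q, kappa); its derivative matches Phi' in the text.
    n = np.arange(N)
    return np.mean(np.exp(1j * 2 * np.pi * n * (q + kappa) / N))

def Phi(q, kappa):
    # Phi(q, kappa) = f(q, kappa) * exp(-j 2 pi t (d + kappa) / (M N))
    return f(q, kappa) * np.exp(-1j * 2 * np.pi * t * (d + kappa) / (M * N))

def dPhi(q, kappa):
    # First derivative of Phi with respect to kappa, as in the text.
    n = np.arange(1, N)
    df = np.sum(1j * (2 * np.pi * n / N) * np.exp(1j * 2 * np.pi * n * (q + kappa) / N)) / N
    return (-1j * 2 * np.pi * t / (M * N)) * Phi(q, kappa) \
        + np.exp(-1j * 2 * np.pi * t * (d + kappa) / (M * N)) * df
```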
The prior of $\kappa_j$ is a uniform distribution over $[-0.5, 0.5]$, i.e., $-0.5 \leq \kappa_j \leq 0.5$. To simplify the computation of the belief $b(\kappa_j)$, we carry out a clipping operation, i.e.,
\begin{equation}
b(\kappa_j)\propto\begin{cases}
\mathcal{N}(\kappa_j; -0.5, 0); & \hat\kappa_j \leq -0.5 \\
\mathcal{N}(\kappa_j; 0.5, 0); & \hat\kappa_j \geq 0.5 \\
\mathcal{N}(\kappa_j; \hat{\kappa}_j, \nu_{\kappa_j}); & otherwise \\
\end{cases} \label{eq:kappabelief}
\end{equation}
where
\begin{equation}
\nu_{\kappa_j} = \left(\sum_b\frac{1}{\overset{\scriptscriptstyle\rightarrow}{\nu}_{\kappa_{jb}}}\right)^{-1},
\end{equation}
\begin{equation}
\hat{\kappa}_j = \nu_{\kappa_j}\sum_b\frac{\overset{\scriptscriptstyle\rightarrow}{\kappa}_{jb}}{\overset{\scriptscriptstyle\rightarrow}{\nu}_{\kappa_{jb}}}.
\end{equation}
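The clipping in (\ref{eq:kappabelief}) can be sketched as follows; outside the support $[-0.5, 0.5]$ the belief collapses to a point mass at the boundary:

```python
def clip_kappa_belief(kappa_hat, nu_kappa):
    """Clip the Gaussian belief of kappa_j to the support [-0.5, 0.5],
    cf. (eq:kappabelief); a zero variance denotes a point mass."""
    if kappa_hat <= -0.5:
        return -0.5, 0.0
    if kappa_hat >= 0.5:
        return 0.5, 0.0
    return kappa_hat, nu_kappa
```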
\subsubsection{Backward Message Passing} We first compute the message $m_{f_{h_j}\rightarrow h_j}(h_j)$ from the function node $f_{h_j}$ to $h_j$ by using the MF rule as follows
\begin{align}
m_{f_{h_j}\rightarrow h_j}(h_j) \propto & \exp\left\{\int \ln f_{h_j}(h_j, \lambda_j)b(\lambda_j)d{\lambda_j}\right\} \nonumber \\
\propto & \mathcal{CN}(h_j; 0, \hat{\lambda}_j^{-1}), \label{eq:fg2g}
\end{align}
where
\textcolor{black}{
\begin{align}
\hat{\lambda}_j = \int \lambda_j b(\lambda_j)d\lambda_j = \frac{\epsilon_j+1}{\eta_j + |\hat{h}_j|^2+\nu_{h_j}}. \label{eq:lambdaj}
\end{align}
}
Then the message $n_{h_j\rightarrow f_{c_{jb}}}(h_j)$ from variable node $h_j$ to $f_{c_{jb}}$ is updated by the BP rule, i.e.,
\begin{align}
n_{h_j\rightarrow f_{c_{jb}}}(h_j) = \frac{b(h_j)}{m_{f_{c_{jb}} \rightarrow h_j}(h_j)} \propto \mathcal{CN}(h_j;\overset{\scriptscriptstyle\leftarrow}{h}_{jb}, \overset{\scriptscriptstyle\leftarrow}{\nu}_{h_{jb}}),
\end{align}
where
\begin{align}
\overset{\scriptscriptstyle\leftarrow}{\nu}_{h_{jb}} =& \left({\frac{1}{\nu_{h_j}}} - \frac{1}{\overset{\scriptscriptstyle\rightarrow}{\nu}_{h_{jb}}}\right)^{-1}, \label{add}
\end{align}
\begin{align}
\overset{\scriptscriptstyle\leftarrow}{h}_{jb} =& \overset{\scriptscriptstyle\leftarrow}{\nu}_{h_{jb}}\left(\frac{\hat{h}_j}{\nu_{h_j}} - \frac{\overset{\scriptscriptstyle\rightarrow}{h}_{jb}}{\overset{\scriptscriptstyle\rightarrow}{\nu}_{h_{jb}}}\right). \label{add1}
\end{align}
The backward message $n_{\kappa_j\rightarrow f_{g_{jb}}}(\kappa_j)$ is Gaussian with mean $\overset{\scriptscriptstyle\leftarrow}{\kappa}_{jb}$ and variance $\overset{\scriptscriptstyle\leftarrow}{\nu}_{\kappa_{jb}}$ and can be calculated as
\begin{align}
\overset{\scriptscriptstyle\leftarrow}{\nu}_{\kappa_{jb}} =& (1/\nu_{\kappa_j} - 1/\overset{\scriptscriptstyle\rightarrow}{\nu}_{\kappa_{jb}})^{-1}, \label{eq:leftkappajbvar}\\
\overset{\scriptscriptstyle\leftarrow}{\kappa}_{jb} =& \overset{\scriptscriptstyle\leftarrow}{\nu}_{\kappa_{jb}} (\hat{\kappa}_j/\nu_{\kappa_j} - \overset{\scriptscriptstyle\rightarrow}{\kappa}_{jb}/\overset{\scriptscriptstyle\rightarrow}{\nu}_{\kappa_{jb}}).\label{eq:leftkappajbmean}
\end{align}
For the cases where $\hat{\kappa}_j=0.5$ or $-0.5$ and $\nu_{\kappa_j}=0$ in (\ref{eq:kappabelief}), we set
\begin{align}
\overset{\scriptscriptstyle\leftarrow}{\nu}_{\kappa_{jb}} = \nu_{\kappa_j}, \quad \overset{\scriptscriptstyle\leftarrow}{\kappa}_{jb} = \hat{\kappa}_j.
\end{align}
Then the message $m_{f_{g_{jb}}\rightarrow g_{jb}}(g_{jb})$ is calculated by using the BP rule, i.e.,
\begin{align}
m_{f_{g_{jb}}\rightarrow g_{jb}} (g_{jb})=& \int {f_{g_{jb}}(g_{jb},\kappa_j)n_{\kappa_j\rightarrow f_{g_{jb}}}(\kappa_j)}d{\kappa_j} \nonumber \\ \propto & \mathcal{CN}(g_{jb};\overset{\scriptscriptstyle\leftarrow}{g}_{jb},\overset{\scriptscriptstyle\leftarrow}{\nu}_{g_{jb}}),
\end{align}
where
\begin{align}
\overset{\scriptscriptstyle\leftarrow}{g}_{jb} =& \Phi(Q, \hat{\kappa}_j^{'}) + \Phi'(Q, \hat{\kappa}_j^{'})(\overset{\scriptscriptstyle\leftarrow}{\kappa}_{jb} - \hat{\kappa}_j^{'}), \label{eq:leftvjbmean}\\
\overset{\scriptscriptstyle\leftarrow}{\nu}_{g_{jb}}=& \overset{\scriptscriptstyle\leftarrow}{\nu}_{\kappa_{jb}}|\Phi'(Q, \hat{\kappa}_j^{'})|^2. \label{eq:leftvjbvar}
\end{align}
Hence we can obtain the belief $b(g_{jb})\propto \mathcal{CN}(g_{jb};\hat{g}_{jb},\nu_{g_{jb}})$ with
\begin{equation}
\nu_{g_{jb}} = \left(\frac{1}{\overset{\scriptscriptstyle\leftarrow}{\nu}_{g_{jb}}} + \frac{1}{\overset{\scriptscriptstyle\rightarrow}{\nu}_{g_{jb}}}\right)^{-1}, \label{eq:vbeliefvar} \\
\end{equation}
\begin{equation}
\hat{g}_{jb} = \nu_{g_{jb}}\left(\frac{\overset{\scriptscriptstyle\leftarrow}{g}_{jb}}{\overset{\scriptscriptstyle\leftarrow}{\nu}_{g_{jb}}} + \frac{\overset{\scriptscriptstyle\rightarrow}{g}_{jb}}{\overset{\scriptscriptstyle\rightarrow}{\nu}_{g_{jb}}}\right). \label{eq:vbeliefmean}
\end{equation}
Finally, by combining the incoming message $n_{g_{jb}\rightarrow f_{c_{jb}}}(g_{jb}) = m_{f_{g_{jb}}\rightarrow g_{jb}}(g_{jb})$ and $n_{h_j\rightarrow f_{c_{jb}}}(h_j)\propto \mathcal{CN}(h_j;\overset{\scriptscriptstyle\leftarrow}{h}_{jb}, \overset{\scriptscriptstyle\leftarrow}{\nu}_{h_{jb}})$, the message
$m_{f_{c_{jb}}\rightarrow c_{jb}}(c_{jb})$ is Gaussian with mean $\overset{\scriptscriptstyle\leftarrow}{c}_{jb}$ and variance $\overset{\scriptscriptstyle\leftarrow}{\nu}_{c_{jb}}$, which are computed as \cite{BIUTAMP}
\begin{align}
\overset{\scriptscriptstyle\leftarrow}{c}_{jb}=&\overset{\scriptscriptstyle\leftarrow}{h}_{jb}\overset{\scriptscriptstyle\leftarrow}{g}_{jb}, \label{eq:cpriormean}\\
\overset{\scriptscriptstyle\leftarrow}{\nu}_{c_{jb}} =& |\overset{\scriptscriptstyle\leftarrow}{h}_{jb}|^2\overset{\scriptscriptstyle\leftarrow}{\nu}_{g_{jb}} + |\overset{\scriptscriptstyle\leftarrow}{g}_{jb}|^2\overset{\scriptscriptstyle\leftarrow}{\nu}_{h_{jb}} + \overset{\scriptscriptstyle\leftarrow}{\nu}_{h_{jb}}\overset{\scriptscriptstyle\leftarrow}{\nu}_{g_{jb}}. \label{eq:cpriorvar}
\end{align}
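These computations follow the standard moments of the product of two independent (complex) Gaussian variables, which can be checked against the exact second-moment identity $\mathrm{E}|hg|^2 - |\hat{h}\hat{g}|^2 = (|\hat{h}|^2+\nu_h)(|\hat{g}|^2+\nu_g)-|\hat{h}\hat{g}|^2$. A minimal sketch:

```python
def product_message(h_mean, h_var, g_mean, g_var):
    """Mean and variance of c = h*g for independent complex Gaussians:
    E[hg] = h_mean * g_mean,
    Var[hg] = |h_mean|^2 g_var + |g_mean|^2 h_var + h_var g_var."""
    mean = h_mean * g_mean
    var = abs(h_mean) ** 2 * g_var + abs(g_mean) ** 2 * h_var + h_var * g_var
    return mean, var
```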
The message passing algorithm is summarized in Algorithm \ref{algorithm:ideallmmse}. \textcolor{black}{The algorithm can be terminated when it reaches a maximum number of iterations or when the difference between the parameter estimates of two consecutive iterations falls below a threshold.}
\subsection{Extension to Rectangular Waveform}
The above derivation of Algorithm \ref{algorithm:ideallmmse} is for OTFS with the bi-orthogonal waveform. Thanks to the iterative nature of the algorithm, it can be readily extended to OTFS with the rectangular waveform. As shown in \eqref{eq:rectX}, the difference between $\boldsymbol{X}_{bi}$ and $\boldsymbol{X}_{rect}$ is that there is a Doppler shift-dependent factor for each of the elements in $\boldsymbol{X}_{rect}$, where the parameter $\kappa_n$ is unknown. Here we note that, when $z$ and $n$ are given, the values of $l_z$ and $d_n$ are known.
This problem can be solved \textcolor{black}{by taking advantage of} the iterative estimation strategy.
We can start the iterative process with the initialization $\kappa_n=0$ \textcolor{black}{and treat ${\boldsymbol{X}}_{rect}$ as a known matrix, with the Doppler shifts estimated in the last iteration plugged into (\ref{eq:rectX})} (here we abuse the notation ${\boldsymbol{X}}_{rect}$, which is actually an estimate of the true ${\boldsymbol{X}}_{rect}$ obtained with the estimates of $\kappa_n$).
Then the algorithm developed for OTFS with the bi-orthogonal waveform can be used to recover $\boldsymbol{c}$, i.e., $\kappa_d$ is estimated, so that $\kappa_n$ can be found based on the corresponding $\kappa_d$. Then, with the estimated $\kappa_n$, ${\boldsymbol{X}}_{rect}$ is updated for the next iteration. Hence, for OTFS with the rectangular waveform, we only need to add an extra step after step \ref{al1:kappaupdate} of Algorithm \ref{algorithm:ideallmmse} to update ${\boldsymbol{X}}_{rect}$.
\begin{algorithm}
\setstretch{1.25}
\caption{Message Passing Algorithm for OTFS Channel Estimation}
Initialize $\overset{\scriptscriptstyle\leftarrow}{c}_{jb} = 0$, $\overset{\scriptscriptstyle\leftarrow}{\nu}_{c_{jb}} = 1$, \textcolor{black}{$\hat{\lambda}_j = 1$} for all $j,b$; \textcolor{black}{$\hat{\gamma}=1$}; and $t=1$.
\\
\textbf{Repeat}
\begin{algorithmic}[1]
\STATE update $\boldsymbol{V}_c^p$ and $\boldsymbol{c}^p$ with (\ref{eq:beliefcvar}) and (\ref{eq:beliefcmean}); \\
\STATE update noise precision \textcolor{black}{$\hat{\gamma}$} with (\ref{eq:noiseprecision});\\
\STATE update $\overset{\scriptscriptstyle\rightarrow}{c}_{jb}$ and $\overset{\scriptscriptstyle\rightarrow}{\nu}_{c_{jb}}$ with (\ref{eq:extrinsicmean}) and (\ref{eq:extrinsicvar});\\
\STATE \textcolor{black}{update $\overset{\scriptscriptstyle\rightarrow}{h}_{jb}$ and $\overset{\scriptscriptstyle\rightarrow}{\nu}_{h_{jb}}$ with (\ref{add1h}) and (\ref{eq:left2h});} \\
\STATE update the belief $b(h_j)$ of $h_j$ with (\ref{eq:gbeliefmean}) and (\ref{eq:gbeliefvar});\\
\STATE update \textcolor{black}{$\hat{\lambda}_j$} with (\ref{eq:lambdaj});\\
\STATE update the belief $b(h_j)$ of $h_j$ with (\ref{eq:gbeliefmean}) and (\ref{eq:gbeliefvar});\\
\STATE update $\overset{\scriptscriptstyle\leftarrow}{h}_{jb}$ and $\overset{\scriptscriptstyle\leftarrow}{\nu}_{h_{jb}}$ with (\ref{add1}) and (\ref{add});\\
\STATE update $\overset{\scriptscriptstyle\rightarrow}{g}_{jb}$ and $\overset{\scriptscriptstyle\rightarrow}{\nu}_{g_{jb}}$ with (\ref{eq:rightvjbmean}) and (\ref{eq:rightvjbvar}); \\
\STATE update $\overset{\scriptscriptstyle\rightarrow}{\kappa}_{jb}$ and $\overset{\scriptscriptstyle\rightarrow}{\nu}_{\kappa_{jb}}$ with (\ref{eq:rightkappajbmean}) and (\ref{eq:rightkappajbvar}); \\
\STATE update the belief $b(\kappa_j)$ of $\kappa_j$ with (\ref{eq:kappabelief}); \label{al1:kappaupdate}\\
\STATE \textcolor{black}{update ${\boldsymbol{X}}_{rect}$ with the estimated Doppler shifts plugged in (\ref{eq:rectX}) in the case of the rectangular waveform;}\\
\STATE update $\overset{\scriptscriptstyle\leftarrow}{\nu}_{\kappa_{jb}}$ and $\overset{\scriptscriptstyle\leftarrow}{\kappa}_{jb}$ with (\ref{eq:leftkappajbvar}) and (\ref{eq:leftkappajbmean});\\
\STATE update $\overset{\scriptscriptstyle\leftarrow}{g}_{jb}$ and $\overset{\scriptscriptstyle\leftarrow}{\nu}_{g_{jb}}$ with (\ref{eq:leftvjbvar}) and (\ref{eq:leftvjbmean});\\
\STATE update the belief $b(g_{jb})$ of $g_{jb}$ with (\ref{eq:vbeliefvar}) and (\ref{eq:vbeliefmean});\\
\STATE update $\overset{\scriptscriptstyle\leftarrow}{c}_{jb}$ and $\overset{\scriptscriptstyle\leftarrow}{\nu}_{c_{jb}}$ with (\ref{eq:cpriormean}) and (\ref{eq:cpriorvar});\\
\STATE $t=t+1$.
\end{algorithmic}
\textbf{Until terminated}
\label{algorithm:ideallmmse}
\end{algorithm}
\section{Cramer-Rao Lower Bound} \label{sec:crlb}
To evaluate the performance of the proposed algorithm, we derive the CRLB for the channel gain and Doppler shift estimation. We assume that the locations of the nonzero elements in $\boldsymbol{c}$ are known to facilitate the derivation of the CRLB. Hence, the CRLB derived in this section is a loose performance bound for the estimation.
We first rewrite (\ref{eq:IdealFracY}) and (\ref{eq:RectFracYSimplified}) as
\begin{align}
y[k,l] = u[k,l] + \omega[k,l],
\end{align}
where
\begin{align}
u[k,l] = \sum_{i=1}^P\textcolor{black}{\sum_{q=-\hat{N}}^{\hat{N}}}& h_if(q,\kappa_i)e^{-j2\pi \frac{l_i(k_i+\kappa_i)}{MN}} \nonumber \\
&\times x([k-k_i+q]_N,[l-l_i]_M)
\end{align}
for the bi-orthogonal waveform, and
\begin{align}
u[k,l] = \sum_{i=1}^P\textcolor{black}{\sum_{q=-\hat{N}}^{\hat{N}}}&h_if(q,\kappa_i)e^{j2\pi \frac{(l-l_i)(k_i+\kappa_i)}{MN}} \nonumber \\
& \times x([k-k_i+q]_N,[l-l_i]_M)
\end{align}
for the rectangular waveform.
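As an illustrative numerical sketch of the bi-orthogonal model above: the filter $f(q,\kappa)$ is not restated in this section, so the code assumes $f(q,\kappa)=\frac{1}{N}\sum_{n=0}^{N-1}e^{j2\pi n(q+\kappa)/N}$, which is consistent with the derivative sum used in the CRLB expressions later in this section, but the paper's exact definition may differ.

```python
import numpy as np

def f(q, kappa, N):
    # Assumed Doppler spreading coefficient (see the hedge above):
    # (1/N) * sum_{n=0}^{N-1} exp(j*2*pi*n*(q+kappa)/N).
    n = np.arange(N)
    return np.sum(np.exp(1j * 2 * np.pi * n * (q + kappa) / N)) / N

def u_bi(k, l, x, paths, M, N, N_hat):
    """Noise-free received sample u[k, l] for the bi-orthogonal waveform.

    x     : N x M delay-Doppler symbol array, indexed x[k, l]
    paths : list of (h_i, k_i, l_i, kappa_i) tuples (illustrative layout)
    """
    total = 0.0 + 0.0j
    for (h, ki, li, kap) in paths:
        for q in range(-N_hat, N_hat + 1):
            phase = np.exp(-1j * 2 * np.pi * li * (ki + kap) / (M * N))
            total += h * f(q, kap, N) * phase * x[(k - ki + q) % N, (l - li) % M]
    return total
```

With a single path, zero delay/Doppler and $\kappa=0$, the assumed $f$ reduces to an indicator at $q=0$, so $u[k,l]=x[k,l]$.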
In order to derive the CRLB, we define $\boldsymbol{\theta} = [h_1,h_2,\cdots,h_P,\kappa_1,\kappa_2,\cdots,\kappa_P]^T$. According to \cite{1993Fundamentals}, the CRLB of the $j$th element in $\boldsymbol{\theta}$ is the $j$th diagonal element of the inverse of the Fisher information matrix, i.e.,
\begin{align}
\theta_j^{CRLB} = [\boldsymbol{I}^{-1}(\boldsymbol{\theta})]_{jj},
\end{align}
where the Fisher information matrix $\boldsymbol{I}(\boldsymbol{\theta})$ has a size of $2P \times 2P$ with the $(i,j)$th element given as
\begin{align}
[\boldsymbol{I}(\boldsymbol{\theta})]_{ij} = -\mathbb{E}\left[\frac{\partial^2 \ln p(\boldsymbol{y};\boldsymbol{\theta}) }{\partial {\theta}_i\partial {\theta}_j}\right]
\label{eq:fisherelement}
\end{align}
for $i=1,2,\cdots, 2P$, $j=1,2,\cdots, 2P$, and the expectation is taken with respect to $p(\boldsymbol{y};\boldsymbol{\theta})$. The logarithm of the likelihood function $\ln p(\boldsymbol{y};\boldsymbol{\theta})$ can be expressed as
\begin{align}
\ln p(\boldsymbol{y};\boldsymbol{\theta}) = -Z\ln(\pi\gamma^{-1}) - \gamma\sum_{z=0}^{Z-1}|y_z-u_z|^2,
\end{align}
where $y_z$ denotes the $z$th element in $\boldsymbol{y}$ and $u_z$ denotes the corresponding $u[k,l]$. Then, (\ref{eq:fisherelement}) can be rewritten as
\begin{align}
[\boldsymbol{I}(\boldsymbol{\theta})]_{ij} = \gamma\sum_{z=0}^{Z-1}\left[\frac{\partial u_z}{\partial {\theta}_i}\frac{\partial u_z^*}{\partial {\theta}_j} + \frac{\partial u_z^*}{\partial {\theta}_i}\frac{\partial u_z}{\partial {\theta}_j} \right].
\end{align}
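Numerically, the Fisher matrix can be assembled from the Jacobian of $u$ with respect to $\boldsymbol{\theta}$, since $\gamma\sum_z(\partial u_z/\partial\theta_i\,\partial u_z^*/\partial\theta_j + \mathrm{c.c.}) = 2\gamma\,\mathrm{Re}\{(\boldsymbol{J}^H\boldsymbol{J})_{ij}\}$ with $J_{z,i}=\partial u_z/\partial\theta_i$. A minimal sketch (function names are illustrative):

```python
import numpy as np

def fisher_matrix(J, gamma):
    """Fisher information for real parameters theta with complex mean u(theta).

    J     : Z x 2P complex Jacobian, J[z, i] = d u_z / d theta_i
    gamma : noise precision
    Returns the 2P x 2P real matrix I = 2 * gamma * Re{J^H J}.
    """
    return 2.0 * gamma * np.real(J.conj().T @ J)

def crlb(J, gamma):
    """Per-parameter CRLBs: diagonal of the inverse Fisher matrix."""
    return np.diag(np.linalg.inv(fisher_matrix(J, gamma)))
```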
For simplicity, we define $k' \triangleq [k-k_p+q]_N$ and $l' \triangleq [l-l_p]_M$.
When $1\leq p \leq P$ ($p$ represents $i$ or $j$), the derivative $\frac{\partial{u}_z}{\partial {\theta}_p}$ is about the channel gain $h_p$. For the bi-orthogonal waveform, we have
\begin{align}
\frac{\partial{u}_z}{\partial h_p} = \sum_{q=-\widehat{N}}^{\widehat{N}}&f(q,\kappa_p)e^{-j2\pi \frac{l_p(k_p+\kappa_p)}{MN}} x[k',l'],
\end{align}
while for the rectangular waveform we have
\begin{align}
\frac{\partial{u}_z}{\partial h_p} = \sum_{q=-\widehat{N}}^{\widehat{N}}&f(q,\kappa_p)e^{j2\pi \frac{(l-l_p)(k_p+\kappa_p)}{MN}} x[k',l'].
\end{align}
When $P < p' \leq 2P$ ($p'$ represents $i$ or $j$), the derivative $\frac{\partial{u}_z}{\partial {\theta}_{p'}}$ is about $\kappa_p$, where $p=p'-P$. For the bi-orthogonal waveform, we have
\begin{align}
\frac{\partial{u}_z}{\partial \kappa_p} =& \sum_{q=-\widehat{N}}^{\widehat{N}}h_p \biggl[\left(\frac{1}{N}\sum_{n=1}^{N-1}j\frac{2n\pi}{N}e^{j\frac{2n\pi}{N}(q + \kappa_p)}\right)e^{-j2\pi \frac{l_p(k_p+\kappa_p)}{MN}} \nonumber \\ +& f(q,\kappa_p)\left(e^{-j2\pi \frac{l_p(k_p+\kappa_p)}{MN}}\frac{-j2\pi l_p}{MN}\right)\biggr] x[k',l'],
\end{align}
while for the rectangular waveform
\begin{align}
\frac{\partial{u}_z}{\partial \kappa_p} =& \sum_{q=-\widehat{N}}^{\widehat{N}}h_p \biggl[\left(\frac{1}{N}\sum_{n=1}^{N-1}j\frac{2n\pi}{N}e^{j\frac{2n\pi}{N}(q + \kappa_p)}\right)e^{j2\pi \frac{(l-l_p)(k_p+\kappa_p)}{MN}} \nonumber \\ +& f(q,\kappa_p)\left(e^{j2\pi \frac{(l-l_p)(k_p+\kappa_p)}{MN}}\frac{j2\pi (l-l_p)}{MN}\right)\biggr] x[k',l'].
\end{align}
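As a sanity check on these expressions, the bracketed $\partial f/\partial\kappa$ sum can be compared against a finite difference. This sketch assumes $f(q,\kappa)=\frac{1}{N}\sum_{n=0}^{N-1}e^{j2\pi n(q+\kappa)/N}$, whose derivative matches the bracketed sum above; the paper's exact $f$ may differ.

```python
import numpy as np

def f(q, kappa, N):
    # Assumed Doppler coefficient: (1/N) * sum_{n=0}^{N-1} exp(j*2*pi*n*(q+kappa)/N).
    n = np.arange(N)
    return np.sum(np.exp(1j * 2 * np.pi * n * (q + kappa) / N)) / N

def dfdk(q, kappa, N):
    # Analytic derivative, matching the bracketed sum in the derivative expressions:
    # (1/N) * sum_{n=1}^{N-1} j*(2*pi*n/N) * exp(j*(2*pi*n/N)*(q+kappa)).
    n = np.arange(1, N)
    return np.sum(1j * (2 * np.pi * n / N)
                  * np.exp(1j * 2 * np.pi * n * (q + kappa) / N)) / N
```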
To evaluate the performance of the proposed algorithm in terms of the normalized mean squared error (NMSE), we use the average normalized CRLB for the estimation of channel gains and fractional Doppler shifts, which are defined as
\begin{align}
\hat{\boldsymbol{h}}_{bound} =& \frac{\sum_{j=1}^{P}{\theta}_j^{CRLB}}{||\boldsymbol{h}_{\boldsymbol{\theta}}||_2^2},
\end{align}
\begin{align}
\hat{\boldsymbol{\kappa}}_{bound} =& \frac{\sum_{j=P+1}^{2P}{\theta}_j^{CRLB}}{||\boldsymbol{\kappa}_{\boldsymbol{\theta}}||_2^2},
\end{align}
where $\boldsymbol{h}_{\boldsymbol{\theta}}=(h_1, h_2,\cdots, h_P)^T$ and $\boldsymbol{\kappa}_{\boldsymbol{\theta}}=(\kappa_1, \kappa_2,\cdots, \kappa_P)^T$.
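The two normalized bounds above are direct ratio computations over the CRLB vector; a minimal sketch (names are illustrative):

```python
import numpy as np

def normalized_crlb_bounds(crlb_diag, h_theta, kappa_theta):
    """Average normalized CRLBs for the gain and Doppler estimates.

    crlb_diag : length-2P vector of per-parameter CRLBs (gains first,
                then fractional Doppler shifts).
    """
    P = len(h_theta)
    h_bound = np.sum(crlb_diag[:P]) / np.linalg.norm(h_theta) ** 2
    k_bound = np.sum(crlb_diag[P:2 * P]) / np.linalg.norm(kappa_theta) ** 2
    return h_bound, k_bound
```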
\section{Simulation Results} \label{sec:simulation}
In this section, we evaluate the performance of the proposed message passing algorithm in terms of the NMSE of the estimated channel parameters $\hat{\boldsymbol{h}}$ and $\hat{\boldsymbol{\kappa}}$, and the reconstructed channel matrices $\widehat{\boldsymbol{H}}_{bi}$ and $\widehat{\boldsymbol{H}}_{rect}$. The NMSE is defined as
\textcolor{black}{
\begin{align}
NMSE(\boldsymbol{x})=\frac{\frac{1}{L}\sum_{l=1}^{L}||\hat{\boldsymbol{x}}_l - \boldsymbol{x}||_2^2}{||\boldsymbol{x}||_2^2},
\end{align}
where $\hat{\boldsymbol{x}}_l$ denotes the estimate of a variable $\boldsymbol{x}$ in the $l$th trial, and $L$ is the number of trials.}
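The NMSE definition averages the squared errors over trials before normalizing; a direct sketch:

```python
import numpy as np

def nmse(estimates, x_true):
    """NMSE over L trials: (1/L) * sum_l ||xhat_l - x||^2 / ||x||^2."""
    x_true = np.asarray(x_true)
    err = sum(np.linalg.norm(np.asarray(xh) - x_true) ** 2 for xh in estimates)
    return err / (len(estimates) * np.linalg.norm(x_true) ** 2)
```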
In addition, the bit error rate (BER) of data detection based on the reconstructed channel matrix is also evaluated. We set $M = 128$ and $N=32$, i.e., there are $32$ time slots and $128$ subcarriers in the TF domain. The carrier frequency is 3 GHz, the subcarrier spacing is 2 kHz, and quadrature phase shift keying (QPSK) is adopted for modulation. The
velocity of the mobile user is set to be $v = 120$~km/h, leading to a maximum Doppler frequency shift index $k_{max}=4$. We assume that the maximum delay index $l_{max} = 10$, the Doppler index of the $i$th path is uniformly distributed over $[-k_{max}, k_{max}]$, and the delay index is uniformly distributed over $[1, l_{max}]$, excluding the first path ($l_1=0$). As mentioned before, the fractional Doppler $\kappa_i$ has a uniform distribution over $[-0.5, 0.5]$. The channel path gains $\{h_j\}$ are independently drawn from the complex Gaussian distribution $\mathcal{N}(0, 1/P)$. Furthermore, we define the SNRs of pilot and data as
\textcolor{black}{
\begin{align}
{\rm SNRp}=&10\log_{10}\left(\frac{\mathcal{P}_p}{\gamma^{-1}}\right), \\
{\rm SNRd}=&10\log_{10}\left(\frac{\mathbb{E}|x_d|^2}{\gamma^{-1}}\right),
\end{align}
respectively, where $\mathcal{P}_p$ denotes the average power of pilot symbols.}
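Since $\gamma$ is a precision, $\mathcal{P}_p/\gamma^{-1}=\mathcal{P}_p\gamma$; a one-line helper makes the conversion explicit:

```python
import numpy as np

def snr_db(avg_power, gamma):
    """SNR in dB for average signal power and noise precision gamma
    (noise variance is 1/gamma): 10*log10(P / gamma^{-1}) = 10*log10(P*gamma)."""
    return 10.0 * np.log10(avg_power * gamma)
```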
\subsection{NMSE performance comparison}
We examine the NMSE performance for the estimation of ${\boldsymbol{h}}$, ${\boldsymbol{\kappa}}$ and ${\boldsymbol{H}}$ for both the bi-orthogonal and the rectangular waveforms. For comparison, we also include the threshold based channel estimation method in {\cite{pilotref}} for the case of the bi-orthogonal waveform (as it is not clear how to perform the channel estimation in the case of the rectangular waveform with fractional Doppler shifts). The CRLB derived in the previous section is also shown for reference.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{H_proposed_hongyi.pdf}
\caption{NMSE of $\widehat{\boldsymbol{H}}$ of the proposed algorithm and the threshold based method in \cite{pilotref}.} \label{fig:H_compare}
\end{figure}
\begin{table}[htb]
\centering
\caption{NMSE and PAPR comparisons: 1 pilot symbol versus 10 pilot symbols with the same power budget.}\label{tab:powerbudget} %
\begin{tabular}{@{}ccccc}
\midrule[1.4pt]
\multirow{2}*{SNRp} & \multicolumn{3}{c}{NMSE} & \multirow{2}*{PAPR}\\
\cmidrule(lr){2-4}
~ & $\hat{\boldsymbol{h}}$ & $\hat{\boldsymbol{\kappa}}$ & $\widehat{\boldsymbol{H}}$ & \\
\midrule[1.4pt]
50 dB, 1 pilot & -37.95 dB & -34.48 dB & -40.17 dB & 18.76 dB \\
\midrule
40 dB, 10 pilots & -36.95 dB & -32.10 dB & -39.19 dB & 10.41 dB\\
\midrule[1.4pt]
\end{tabular}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{bi_nmse_pilot10_P_SNRp.pdf}
\caption{NMSE comparison for the bi-orthogonal waveform with different $P$: (a) NMSE of $\hat{\boldsymbol{h}}$; (b) NMSE of $\hat{\boldsymbol{\kappa}}$.} \label{fig:bi_nmse_pilot10}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{rect_nmse_pilot10_P_SNRp.pdf}
\caption{NMSE comparison for the rectangular waveform with different $P$: (a) NMSE of $\hat{\boldsymbol{h}}$; (b) NMSE of $\hat{\boldsymbol{\kappa}}$.} \label{fig:rect_nmse_pilot10}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{bi_nmse_p6_pilots_SNRp.pdf}
\caption{NMSE comparison for the bi-orthogonal waveform with different number of pilot symbols: (a) NMSE of $\hat{\boldsymbol{h}}$; (b) NMSE of $\hat{\boldsymbol{\kappa}}$.} \label{fig:bi_nmse_p6}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{rect_nmse_p6_pilots_SNRp.pdf}
\caption{NMSE comparison for the rectangular waveform with different number of pilot symbols: (a) NMSE of $\hat{\boldsymbol{h}}$; (b) NMSE of $\hat{\boldsymbol{\kappa}}$.} \label{fig:rect_nmse_p6}
\end{figure}
In Fig. \ref{fig:H_compare}, we compare the NMSE performance of the reconstructed channel matrix $\widehat{\boldsymbol{H}}$ between our proposed algorithm and the threshold based method in \cite{pilotref}. Only a single pilot symbol is used, and the numbers of paths $P=6$ and $10$ are considered. Moreover, following the recommendation in \cite{pilotref}, the threshold is set to $3\sigma_p$ for the threshold based method, where $\sigma_p = (10^{{\rm SNRp}/10})^{-0.5}$. Note that the NMSE of $\hat{\boldsymbol{h}}$ and $\hat{\boldsymbol{\kappa}}$ cannot be compared as the threshold based method does not provide these estimates.
It can be seen that the proposed algorithm significantly outperforms the threshold based method in all the cases. The reason is that the threshold based method does not estimate the channel gains and fractional Doppler shifts, and it is equivalent to estimating the vector $\boldsymbol{c}$ without considering its structure. The results indicate that exploiting the structure of the vector $\boldsymbol{c}$ is crucial to improving the estimation performance.
{In Tab. \ref{tab:powerbudget}, we compare the PAPR and NMSE performance of $\hat{\boldsymbol{h}}$, $\hat{\boldsymbol{\kappa}}$ and $\widehat{\boldsymbol{H}}$ by using our proposed algorithm in two cases: a single pilot symbol and 10 pilot symbols. We assume the same power budget, so a single pilot symbol with SNRp = 50dB corresponds to 10 pilot symbols with SNRp = 40dB. From the table we can see that the NMSE performance of using 10 pilot symbols is very close to that of using a single pilot symbol, but the PAPR with 10 pilot symbols is only 10.41 dB, in contrast to 18.76 dB for the case of using a single pilot symbol.}
The NMSE performance of the algorithm for different SNRp and numbers of paths $P$ is shown in Fig. \ref{fig:bi_nmse_pilot10} and Fig. \ref{fig:rect_nmse_pilot10}, where the number of pilot symbols is set to be 10. The performance of the threshold based method is not included as it only works with a single pilot symbol.
It can be seen that a smaller $P$ leads to a better performance as the vector $\boldsymbol{c}$ is more sparse and the number of parameters to be estimated is smaller. Comparing them to the CRLB, we can see that the NMSE performance of the proposed algorithm is fairly good as the CRLB is a loose lower bound, which is derived with the assumption that the locations and the number of the paths are known. In Fig. \ref{fig:bi_nmse_p6} and Fig. \ref{fig:rect_nmse_p6}, we show the performance of the proposed method with different numbers of pilot symbols. As expected, with more pilot symbols, the proposed algorithm delivers better performance.
\subsection{BER performance comparison}
\begin{figure}[htbp]
\centering
\subfigure[$P=6$; SNRp = 35dB; 1 pilot symbol]{
\centering
\includegraphics[width=0.4\columnwidth]{ber_p6_snrp35_pilot1.pdf} \label{fig:ber_p6_snrp40_pilot1}
}
\subfigure[$P=6$; SNRp = 40dB; 1 pilot symbol]{
\centering
\includegraphics[width=0.4\columnwidth]{ber_p6_snrp40_pilot1.pdf} \label{fig:ber_p6_snrp45_pilot1}
}
\subfigure[$P=10$; SNRp = 40dB; 1 pilot symbol]{
\centering
\includegraphics[width=0.4\columnwidth]{ber_p10_snrp40_pilot1.pdf} \label{fig:ber_p10_snrp40_pilot1}
}
\subfigure[$P=10$; SNRp = 40dB; 10 pilot symbols]{
\centering
\includegraphics[width=0.4\columnwidth]{ber_p10_snrp40_pilot10.pdf} \label{fig:ber_p6_snrp40_pilot10}
}
\caption{BER performance comparison versus SNRp.} \label{fig:ber_comparison}
\end{figure}
We compare the BER performance of the OTFS system with the UTAMP based detector proposed in \cite{2020Iterative}. We assume that the bi-orthogonal waveform is used so that we can compare the performance of the system with the threshold based channel estimation method \cite{pilotref}. {As a benchmark, we also show the BER performance of the OTFS system with the perfect channel matrix. In addition, in order to demonstrate the significance of considering fractional Doppler shifts, we include the performance of the OTFS system with the assumption of integer Doppler shifts (although fractional Doppler shifts exist in the generation of the OTFS channels).} The results are shown in Fig. \ref{fig:ber_comparison}. It can be seen that, when the fractional Doppler shifts are ignored, the system performance is degraded severely due to the modelling errors. In addition, in the case of a single pilot symbol, the system with the proposed channel estimation algorithm delivers much better BER performance than that with the threshold based channel estimation method. \textcolor{black}{Meanwhile}, the performance of the system with our proposed channel estimation algorithm can be very close to that of the system with the perfect channel matrix. By comparing Fig. \ref{fig:ber_p6_snrp45_pilot1} \textcolor{black}{with} Fig. \ref{fig:ber_p10_snrp40_pilot1}, we can see that a larger $P$ leads to a better BER performance because more diversity gain can be achieved.
\section{Conclusions} \label{sec:conclusion}
In this paper, we have addressed the issue of the estimation of OTFS channels with fractional Doppler shifts, where both the bi-orthogonal waveform and the rectangular waveform are considered. The estimation is formulated as a sparse structured signal recovery problem \textcolor{black}{and a Bayesian treatment is investigated}. With a factor graph representation of the problem, we have derived a message passing based algorithm to estimate the channel gains and fractional Doppler shifts. The CRLB has also been derived to evaluate the estimation performance of the proposed algorithm. It has been shown that the proposed algorithm significantly outperforms the existing algorithm, and it is able to work with multiple pilot symbols to achieve significant PAPR reduction.
\section*{Acknowledgment}
The authors would like to thank Prof. Jinhong Yuan at the University of New South Wales for his valuable comments and suggestions.
\bibliographystyle{IEEEtran}
\section{Tilings}
\label{sec:tilings}
There are many different ways to define tilings of the discrete plane $\mathbb{Z}^2$. The historical definition as presented by Wang is that of unit square tiles with colored edges (nowadays called \emph{Wang tiles}). The domino problem is to decide whether one can arrange copies of a given set of such tiles on the plane so that the adjacent sides of two neighboring tiles have the same color.
Although the description of the problem with Wang tiles is extremely simple, it is not the easiest way to deal with tilings. A more modern approach consists in defining a tile set as a set of local constraints in the form of a finite set of forbidden patterns. A tiling of the plane according to such a tile set is a coloring of the plane such that no forbidden pattern appears. Both definitions are known to be equivalent.
\subsection{Patterns and Configurations}
\begin{definition}[Configuration]\label{def:configuration}
Given a finite set of symbols $\Sigma$, a $\Sigma$-\emph{configuration} is a mapping $\mathfrak{C}:\mathbb{Z}^2\rightarrow \Sigma$ that associates a symbol of $\Sigma$ to each element of $\mathbb{Z}^2$ (elements of the plane $\mathbb{Z}^2$ will be referred to as \emph{cells}).
\end{definition}
\begin{definition}[Pattern]\label{def:pattern}
Given a finite set of symbols $\Sigma$, a $\Sigma$-\emph{pattern} is a mapping $\mathcal{P}: D_\mathcal{P} \rightarrow \Sigma$ from a finite subset of cells $D_\mathcal{P}\subset \mathbb{Z}^2$ to $\Sigma$.
\end{definition}
\begin{definition}[Tile Set]\label{def:tile_set}
A \emph{tile set} is a couple $\tau=(\Sigma, \mathcal{F})$ where $\Sigma$ is a finite set of symbols and $\mathcal{F}$ is a finite set of patterns called \emph{forbidden patterns}.
\end{definition}
\begin{definition}[Tilings]\label{def:valid}
Given a finite set of symbols $\Sigma$, we say that a $\Sigma$-pattern $\mathcal{P}:D_\mathcal{P}\rightarrow \Sigma$ \emph{appears} in a $\Sigma$-configuration $\mathfrak{C}$ if there exists a vector $v\in\mathbb{Z}^2$ such that
\[
\forall x \in D_\mathcal{P}, \mathcal{P}(x) = \mathfrak{C}(x+v)
\]
A $\Sigma$-configuration $\mathfrak{C}$ is said to be \emph{valid} for a tile set $\tau=(\Sigma, \mathcal{F})$ if it contains none of the patterns in $\mathcal{F}$. A tiling of the plane by a tile set $\tau$ is a valid configuration for $\tau$.
\end{definition}
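Definition~\ref{def:valid} can be made concrete on a finite window of a configuration. The sketch below (with illustrative names) tests candidate translation vectors exhaustively, which is only meaningful on finite windows:

```python
def appears(pattern, config, shifts):
    """Does the pattern appear in the configuration at one of the candidate shifts?

    pattern : dict (x, y) -> symbol, with finite domain D_P
    config  : dict (x, y) -> symbol, a finite window of a configuration
    shifts  : iterable of candidate translation vectors v = (vx, vy)
    """
    for (vx, vy) in shifts:
        if all(config.get((x + vx, y + vy)) == s for (x, y), s in pattern.items()):
            return True
    return False

def is_valid_window(forbidden, config, shifts):
    """A window is valid for the tile set if no forbidden pattern appears."""
    return not any(appears(p, config, shifts) for p in forbidden)
```

For example, a checkerboard window is valid for the tile set forbidding two horizontally adjacent equal symbols.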
\subsection{Periodicity}
\begin{definition}[Periodicity]\label{def:periodicity}
A configuration $\mathfrak{C}$ is said to be \emph{periodic} if there exists a non-zero vector $v\in\mathbb{Z}^2$ such that $\mathfrak{C}$ is invariant by a translation of $v$ ($v$ is a vector of periodicity of $\mathfrak{C}$):
\[\forall x\in \mathbb{Z}^2, \mathfrak{C}(x) = \mathfrak{C}(x+v)\]
A configuration is said to be \emph{bi-periodic} if it has two independent vectors of periodicity.
\end{definition}
\begin{definition}[Aperiodicity]\label{def:aperiodicity}
A tile set is said to be \emph{aperiodic} if it admits at least one tiling of the plane but admits no periodic tiling.
\end{definition}
The two following propositions will be of use later. The first one is folklore and its proof will be omitted.
\begin{proposition}
\label{pro:bi-periodicity}
If a tile set admits a valid periodic tiling of the plane it admits a valid bi-periodic tiling of the plane.
\end{proposition}
\remark The contraposition of Proposition~\ref{pro:bi-periodicity} states that if a tile set cannot tile the plane bi-periodically it cannot tile it periodically either. We will use this in our construction of an aperiodic tile set as it is easier to prove that there exist no bi-periodic valid configuration.
\begin{proposition}\label{pro:vertical}
If a configuration is bi-periodic it has both a vertical and a horizontal vector of periodicity.
\end{proposition}
\proof
If $(x,y)$ and $(x', y')$ are two independent vectors of periodicity of a configuration, then $x'\cdot(x, y) - x\cdot(x',y') = (0, x'y - xy')$ and $y'\cdot(x,y)-y\cdot(x',y')=(xy'-x'y, 0)$ are also vectors of periodicity.
\qed
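The integer combinations used in the proof can be checked mechanically; a short sketch:

```python
def axis_periods(p1, p2):
    """Given two independent periods (x, y) and (x', y'), return a vertical and a
    horizontal period via the combinations used in the proof:
        x'*(x, y) - x*(x', y') = (0, x'y - xy')
        y'*(x, y) - y*(x', y') = (xy' - x'y, 0)
    """
    (x, y), (xp, yp) = p1, p2
    vertical = (xp * x - x * xp, xp * y - x * yp)    # first coordinate is 0
    horizontal = (yp * x - y * xp, yp * y - y * yp)  # second coordinate is 0
    return vertical, horizontal
```

For instance, with periods $(2,3)$ and $(1,5)$, the configuration $c(x,y)=(2x+y)\bmod 7$ is invariant under both, and also under the derived axis-aligned periods $(0,-7)$ and $(7,0)$.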
\section{Construction of an Aperiodic Tile Set}
\label{sec:the_main_construction}
\subsection{General Overview}
\label{sub:general_overview}
The aperiodic tile set that we are going to describe is based on the following simple observation: if a picture contains arbitrarily large squares such that none of these squares intersect each other, the picture cannot be periodic. Indeed, because the squares do not intersect, a translation vector that leaves the picture unchanged must be larger than the side of every square: if a square is translated by less than the length of its side, it intersects its original position.
What we will do now is design a set of local constraints that only accepts pictures on the discrete plane that contain arbitrarily large non-intersecting squares.
These ``pictures'' will contain lines of different sorts made of horizontal, vertical and diagonal segments. To be consistent with the previous definitions of configurations, tile sets and tilings we should be describing configurations as symbols on the cells of $\mathbb{Z}^2$. However it will be much easier to explain (and understand) the construction by describing geometrical shapes.
This means that when we will say something like ``blue lines are made of horizontal and vertical segments, have no extremities and cannot cross'' what this really means is that we have symbols representing blue lines going through cells vertically, horizontally and changing directions (for example entering from the top side and exiting from the right side). Once represented, it is easy to enforce the stated properties with a set of forbidden patterns. In this case the forbidden patterns are those where a blue line is interrupted because it exits a cell from one side but does not enter its neighbor from the corresponding side. The fact that blue lines cannot cross is simply enforced by having no symbol corresponding to a crossing on a cell (no symbol corresponds to a blue line going through a cell both vertically and horizontally).
Our construction will consist in two main types of lines that we will call ``blue lines'' and ``arms''. Blue lines will be made to draw non intersecting squares while arms will be used to control the size of squares and connect them together to build a structure that enables us to prove the existence of arbitrarily large squares.
\subsection{Blue Lines}
\label{sub:blue_lines}
Blue lines are made of vertical and horizontal segments. They have no extremities (only infinite or closed paths) and cannot cross or overlap. They are oriented (they have an \emph{inner} and an \emph{outer} side) and can only change their direction by turning towards the inside.
All of these rules are local conditions and can therefore be enforced by a tile set.
Because blue lines cannot cross and can only turn towards the inside, finite blue lines can only be rectangles. For the same reasons, infinite blue lines can only be of three kinds, each corresponding to a degenerate rectangle with some bi-infinite or semi-infinite sides (see Figure \ref{fig:blue_rectangles}).
\begin{figure}[htbp]
\includegraphics[width=6cm]{figures/blue_rectangles}
\caption{Possible blue paths. The infinite paths can have 0, 1 or 2 angles.}
\label{fig:blue_rectangles}
\end{figure}
\subsection{Blue Squares}
\label{sub:blue_squares}
We now want to make sure that only squares are valid. To do so, the usual method is to draw a diagonal line from the upper-left and lower-right angles of every blue rectangle. This diagonal line is oriented towards the inside of the rectangle and is not allowed to meet a blue line other than at the angles from which it starts. If the rectangle is a square, the two diagonal lines merge into one, but if it is not a square the diagonals will reach a side of the rectangle, which is forbidden.
This however only works if there are no smaller squares inside larger ones. Because we need some small blue squares to lie on the diagonal of larger ones, we will have to allow the diagonal line to ``go around'' a square, but only from one corner to the other as shown in Figure~\ref{fig:diagonals}.
\begin{figure}[htbp]
\includegraphics[height=7cm]{figures/diagonals}
\caption{Diagonal line used to ensure all closed blue paths are squares}
\label{fig:diagonals}
\end{figure}
We can show inductively that all closed blue paths are squares:
\begin{itemize}
\item if a path has no other blue path inside, the diagonal line goes straight from its upper-left to its lower-right corners, it is therefore a square;
\item if all blue paths inside a larger one are squares, the diagonal line only goes around squares and hence it remains on the true diagonal of the larger one, so the larger one is a square too.
\end{itemize}
\remark A small blue square inside a larger one can only be either perfectly aligned with the latter's diagonal or far enough from it so that it does not intersect it.
\subsection{Infinite Paths}
\label{sub:infinite_paths}
\begin{lemma}\label{lem:infinite}
The only possible infinite blue paths in a valid bi-periodic configuration are infinite straight lines (no angle).
\end{lemma}
\proof
According to the basic rules of blue lines, infinite blue paths can be of three different kinds (illustrated by Figure \ref{fig:blue_rectangles}): they can have zero, one or two angles.
However, because of the diagonal line that starts from the upper left and lower right angles of any blue line, infinite paths with two angles cannot be valid (see Figure \ref{fig:two_angles}).
\begin{figure}[htbp]
\includegraphics[height=5cm]{figures/two_angles}
\caption{Two-angled infinite paths cannot be valid.}
\label{fig:two_angles}
\end{figure}
Moreover, no valid bi-periodic configuration can contain an infinite path having only one angle. Indeed, such a configuration must be both horizontally and vertically periodic (Proposition~\ref{pro:vertical}) and any finite horizontal or vertical translation of the infinite angle would intersect it.
The only remaining case of infinite blue path is that of bi-infinite vertical or horizontal straight lines (with no angle).
\qed
\subsection{Arms}
\label{sub:arms}
Blue squares alone are not sufficient to ensure the aperiodicity of the tile set. What we will do now is organize them into groups in such a way that for every blue square of finite size we can prove the existence of a larger finite blue square. In order to group them, we extend vertical and horizontal lines from every corner of a blue square towards the exterior (see Figure~\ref{fig:arms}). These new lines are called \emph{arms}.
The basic properties of arms can be described by the following rules:
\begin{itemize}
\item Arms are horizontal or vertical continuous straight lines. They do not turn.
\item Arms and blue lines cannot overlap.
\item Arms are allowed to cross other perpendicular arms.
\item The extremities of an arm must be angles of blue paths (some extremities might not exist if the arm is semi or bi-infinite).
\item The orientations of two blue squares connected by an arm must match: an arm cannot connect the upper (resp. right) side of a square to the lower (resp. left) side of another.
\item There can be at most one point on an arm where it crosses a blue line.
\end{itemize}
The last rule is the key to most of the properties that we will need later. It might appear as a non-local constraint as it is formulated as a global condition on the arm but it can be enforced locally by orienting the arms from their extremities as shown in Figure \ref{fig:arms_oriented}: blue lines are only allowed to cross an arm where the two opposite orientations meet (which needs not be the middle of the arm).
\begin{figure}[htbp]
\centering
\includegraphics[height=7cm]{figures/arms}
\caption{Arms extend from every angle of a blue path towards the exterior. In the right part there are three errors: the middle one is an orientation error (down side connected to an up side) while the two others are arms that cross more than one blue line.}
\label{fig:arms}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[height=4cm]{figures/arms_oriented}
\caption{Orientation on the arms to enforce locally the fact that an arm can cross at most one blue line. The blue line can only cross where the orientations meet.}
\label{fig:arms_oriented}
\end{figure}
Two blue squares are said to be \emph{neighbors} if they are connected by an arm.
\subsection{Size Matching}
\label{sub:size_matching}
\begin{lemma}\label{lem:size}
In a valid bi-periodic configuration, if two blue squares are neighbors they are of equal size.
\end{lemma}
\proof
By contradiction, let us assume there exists a valid bi-periodic configuration having two connected squares of different size. Let us consider one of the smallest squares so connected to a larger square. In order to describe the situation, we will consider that the two squares are connected horizontally by an arm joining their lower sides (as shown in Figure~\ref{fig:size}).
\begin{figure}[htbp]
\centering
\includegraphics[width=12cm]{figures/size}
\caption{Why arms cannot connect squares of different sizes.}
\label{fig:size}
\end{figure}
There has to be an arm that starts from the upper side of the smaller square and goes towards the larger. Because this arm is not allowed to cross two blue lines, it cannot go entirely through the larger square. Thus it must be connected to the upper side of another blue square, either before entering the larger square or inside it (both cases are illustrated by Figure~\ref{fig:size}). In both cases, this third square at the other extremity of the arm must be smaller than the initial small square:
\begin{itemize}
\item if the third square is outside of the larger one, its vertical sides cannot cross the arm connecting the two initially considered squares for this would mean this arm is crossed by two blue lines;
\item if the third square is inside the larger one, it cannot cross the side of the larger square;
\end{itemize}
This contradicts the fact that the initial square was chosen as being one of the smallest squares connected to a square of different size.
\qed
\begin{lemma}\label{lem:neighbors}
In a valid bi-periodic configuration, every finite blue square has exactly four neighbors, one in each direction, and it is connected to each of its neighbors by two arms.
\end{lemma}
\proof Because all connected squares have the same size (Lemma~\ref{lem:size}), the facing sides of two squares connected by an arm are aligned, so if two squares are connected by one arm they are also connected by a second arm. Since every finite blue square has eight arms, it is connected to at most four neighbors.
Moreover there can be no semi-infinite horizontal or vertical line in a configuration that is both vertically and horizontally periodic (the line would have to be bi-infinite) hence every arm connected to a square is connected to another one. Every square must then have at least four neighbors, one in each direction.
\qed
\subsection{Groups}
\label{sub:groups}
It is now time to add communication between the different finite blue squares in order to organize them in groups in such a way that for each group of neighboring squares there exists a larger finite square associated with this group, in turn leading to the proof that there exist arbitrarily large finite squares.
To do this, we add two coordinates $(x, y)\in (\mathbb{Z}/3\mathbb{Z})^2$ to every blue line, and the arms only allow a connection that corresponds to a correct arrangement of squares: the right neighbor of a square $(x, y)$ must have coordinates $(x +1, y)$ and its up neighbor must have coordinates $(x, y+1)$ (all additions are performed modulo $3$).
To realize this with a tile set we use different sorts of blue lines for each possible set of coordinates (9 possibilities), and different sorts of arms depending on the coordinates of the squares they connect. Obviously we require that the coordinates of a blue line are constant along the line (which is a local condition) and that the coordinates of an arm (the coordinates of the squares it connects) are also constant along one arm.
The structure is then enforced by the arms at their extremities: a horizontal arm whose coordinates are $((x, y), (x+1, y))$ must be connected to a square of coordinates $(x,y)$ by its left extremity and to a square of coordinates $(x+1, y)$ by its right extremity, and similarly with vertical arms of coordinates $((x, y), (x, y+1))$.
\begin{lemma}\label{lem:oneone}
In a valid bi-periodic configuration, if there exists a blue square then there exists a blue square of the same size with coordinates $(1, 1)$.
\end{lemma}
\proof This is a straightforward consequence of Lemmas \ref{lem:size} and \ref{lem:neighbors} and the coordinates system. All squares have neighbors in all directions and all neighbors have the same size. Because the first (resp. second) coordinate is incremented by $1$ modulo $3$ each time we consider the right (resp. up) neighbor, we eventually find a square of coordinates $(1, 1)$.
\qed
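The modular arithmetic behind this proof can be checked mechanically. The sketch below (the helper name \texttt{steps\_to\_one\_one} is ours) verifies that from any coordinates in $(\mathbb{Z}/3\mathbb{Z})^2$, at most two moves to the right neighbor and two moves to the up neighbor reach a $(1,1)$ square:

```python
# Sanity check for the coordinate system of the groups construction:
# moving to the right neighbor sends (x, y) to ((x + 1) mod 3, y), and
# moving up sends it to (x, (y + 1) mod 3), so coordinates (1, 1) are
# reached from any start in at most two moves in each direction.

def steps_to_one_one(x, y):
    """Return the number of right and up moves needed to reach (1, 1)."""
    return (1 - x) % 3, (1 - y) % 3

for x in range(3):
    for y in range(3):
        right, up = steps_to_one_one(x, y)
        assert right <= 2 and up <= 2
        # simulate the walk and confirm it lands on (1, 1)
        assert ((x + right) % 3, (y + up) % 3) == (1, 1)
```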
The construction is now nearing its end. All we need to do is ensure that in any possible bi-periodic configuration, for every finite blue square there exists another blue square that is larger. The easiest way to prove that one square is larger than another is to have the larger one contain the other. Because for every square there is a $(1, 1)$ square of the same size, it is enough to make every $(1, 1)$ square lie inside a larger one.
To do so, we slightly change the arms connecting $(1, 1)$ squares to their neighbors. Instead of being allowed to cross at most one blue line, these arms are \emph{required} to cross exactly one blue line. Moreover the inner side of the crossing blue line must be towards the $(1,1)$ square. This is easy to do with local constraints by requiring a blue line to cross such an arm where the opposite orientations meet (as explained in sub-section \ref{sub:arms} and illustrated by Figure \ref{fig:arms_oriented}). We can now prove the following lemma:
\begin{lemma}\label{lem:large_squares}
In a valid bi-periodic configuration, every finite $(1,1)$ blue square is contained in a larger finite blue square.
\end{lemma}
\proof
Consider a finite $(1, 1)$ blue square. By Lemma \ref{lem:neighbors} it has both an up and a right neighbor. The arms that connect it to these neighbors are each crossed by a blue line, with its inside turned towards the $(1,1)$ square. The situation is illustrated in Figure \ref{fig:large_squares} (a). The two blue lines that cross the arms must be connected:
\begin{itemize}
\item if the vertical one turns before the position of the horizontal one, it will have to cross the arm that is already crossed by the horizontal portion of blue line (b), or turn once more and go a second time through the arm it has already crossed once;
\item if the vertical blue line goes further up than the position of the horizontal one, the horizontal one must turn before and cross one of the two arms that have already been crossed (c).
\end{itemize}
The two blue lines that cross the arms are two sides of the same blue path (d). This blue path contains the $(1,1)$ square and is therefore larger.
\qed
\begin{figure}[htbp]
\centering
\includegraphics[width = 10cm]{figures/large_squares}
\caption{The two blue lines that cross the right and top arms of a $(1,1)$ square are connected.}
\label{fig:large_squares}
\end{figure}
\subsection{Aperiodicity}
\label{sub:aperiodicity}
All we need to do now is make sure there is a blue square somewhere in any valid configuration. The simplest way to do this locally is to forbid large patterns that have no blue angle. In our specific case, patterns of size $2$ are sufficient, so we add this last rule to our tiling constraints: every $2\times 2$ pattern must contain a blue angle.
We can now prove the key proposition of the construction:
\begin{proposition}\label{pro:aperiodicity}
There exists no valid periodic configuration.
\end{proposition}
\proof
By Proposition \ref{pro:bi-periodicity}, we need only show that there exists no valid bi-periodic configuration. As a consequence of the last rule any valid configuration has a blue angle. By Lemma \ref{lem:infinite} this angle is part of a finite blue square. Finally, by Lemmas \ref{lem:oneone} and \ref{lem:large_squares} for every finite blue square in a valid bi-periodic configuration there exists a larger finite blue square. This means that any such configuration contains arbitrarily large non-intersecting blue squares, which contradicts its periodicity.
\qed
\subsection{Valid Configuration}
\label{sub:valid_configuration}
We still need to show that there exists at least one valid configuration, for the tile set would otherwise be of very limited interest. We will now show that the configuration illustrated by Figure~\ref{fig:configuration} is valid.
\begin{figure}[htbp]
\centering
\includegraphics[width = 15cm]{figures/configuration}
\caption{A valid configuration}
\label{fig:configuration}
\end{figure}
This configuration is very regular and has a simple structure. It contains squares of size $3^k$ for every $k\in\mathbb{N}$. For every $k$, the squares of size $3^k$ are arranged regularly, each being at a distance $2\cdot 3^k$ from its neighbors. They are then considered in groups of $3\times 3$, and there is a square of size $3^{k+1}$ that has the same center as the central square of size $3^k$ of each group. The central square in each $3\times 3$ group has coordinates $(1,1)$ while the others have the matching coordinates (the arms are not represented in the figure for clarity).
This configuration satisfies all the rules of the tile set:
\begin{itemize}
\item blue paths are all finite blue squares;
\item because squares are so regularly arranged, smaller squares that intersect the diagonal of a larger one are perfectly aligned with this diagonal;
\item arms connect squares of the same size, and every square has four neighbors;
\item the squares of size $3^{k+1}$ contain the squares of size $3^k$ that have coordinates $(1,1)$ but none of their neighbors so they cross all the required arms.
\end{itemize}
Two things must still be justified. The first is that the arrangement of squares that has been described can fill an infinite configuration and more precisely that there is always room for the larger squares without overlapping the previously existing lines. This can be proved by observing that between two consecutive columns (resp. rows) of squares of size $3^k$, if we ignore all the larger squares, there is an empty column (resp. row) of cells (cells on which there is no blue line). This property can be proved inductively since it is true for the squares of size $1$ and at each step, the squares of size $3^{k+1}$ occupy two out of three empty columns and rows, leaving exactly one empty column or row between neighboring squares. The construction can therefore be continued indefinitely.
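The empty-column property above can also be checked numerically. In the sketch below we adopt one concrete coordinatization (our choice, not imposed by the figure): squares of size $3^k$ are centered at columns divisible by $3^{k+1}$, so their vertical sides lie at columns congruent to $\pm(3^k-1)/2$ modulo $3^{k+1}$. The script confirms that distinct sizes never place a side on the same column, and that between two consecutive size-one columns at most one of the two intermediate columns carries a blue line:

```python
# Numerical sanity check of the "empty column" property of the valid
# configuration. Convention (ours): squares of size 3^k are centered at
# columns divisible by 3^(k+1), so their vertical sides lie at columns
# congruent to +/-(3^k - 1)/2 modulo 3^(k+1); size-1 squares occupy a
# single column, the multiples of 3.

def side_columns(k, n):
    """Columns in [0, n) carrying a vertical side of a size-3^k square."""
    period = 3 ** (k + 1)
    offset = (3 ** k - 1) // 2
    cols = set()
    for c in range(0, n + period, period):
        for s in (c - offset, c + offset):
            if 0 <= s < n:
                cols.add(s)
    return cols

K, N = 6, 3 ** 7
levels = [side_columns(k, N) for k in range(K + 1)]

# sides of squares of distinct sizes never share a column
for a in range(K + 1):
    for b in range(a + 1, K + 1):
        assert not (levels[a] & levels[b])

# between consecutive size-1 columns (multiples of 3), at most one of
# the two intermediate columns is occupied by a larger square's side
occupied = set().union(*levels[1:])
for j in range(0, N - 3, 3):
    assert len({j + 1, j + 2} & occupied) <= 1
```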
Lastly we must verify that in the valid configuration no arm is crossed by more than one blue line. This fact is closely related to the previous point: the sides of squares of length $3^k$ lie on rows and columns that were not crossed by smaller squares (the empty rows and columns previously discussed). No smaller square can therefore cross arms connecting two squares of side $3^k$. Finally, between two neighboring squares there is exactly one empty column or row on which a larger square could have its side and hence at most one blue line can cross an arm.
\section{Conclusion}
\label{sec:conclusion}
What we have described is a set of local rules (all rules concern neighboring cells and can be described with $2\times 2$ forbidden patterns) that admits infinite valid configurations, none of which are periodic. Although the local rules remain simple and the number of geometric structures used is quite limited (blue lines, arms and diagonals), the number of symbols necessary to represent them on the cells is very large. Because each cell can contain different combinations of lines, and because these lines must be distinguished according to the information they hold (orientation, coordinates in a group of squares, number of blue lines crossed by an arm, etc.), tens of thousands of different symbols are used.
In order to keep the construction as simple as possible we have only proved that the tile set is aperiodic; as it stands, this is not sufficient to prove the undecidability of the domino problem. The construction can be strengthened, however, by forcing blue squares of size one to be regularly arranged as they are in the configuration described in Subsection \ref{sub:valid_configuration}. It is then possible to show inductively that the larger squares are also regularly arranged by observing the empty columns and rows. By doing so one can then embed partial space-time diagrams of a Turing machine in the free space of each blue square, as is done in Robinson's construction (see Figure \ref{fig:turing}). If the halting state of the Turing machine is not included in the tile set, large valid space-time diagrams of the Turing machine cannot appear in a tiling. The resulting tile set can hence tile the plane if and only if the Turing machine does not halt, which proves the undecidability of the domino problem.
\begin{figure}[htbp]
\centering
\includegraphics[width=10cm]{figures/turing}
\caption{Computation area in a square of size 27. Only the intersections correspond to cells of the space-time diagram of the Turing machine, the horizontal and vertical lines are used to transmit the data. In this example the square can compute $8\times 8$ cells of the space-time diagram.}
\label{fig:turing}
\end{figure}
The structure of the valid tilings can also be easily altered. Groups could be made larger and their inner structure more complex. For instance it would be possible to mimic the behavior of recursive geometric constructions such as the space-filling curves of Peano \cite{Peano1890} or Hilbert \cite{Hilbert1891} and enforce their structure with a tile set.
\bibliographystyle{plain}
\section{Introduction}
In the first half of this paper, we discuss the dual complex of Fano pairs.
A dual complex is a combinatorial object which expresses how the components of $\Delta ^{=1}$ intersect for a dlt pair $(X, \Delta)$.
In \cite{dFKX17}, the authors study the dual complex of a dlt modification of a pair $(X, \Delta)$,
and show that the dual complex is independent of the choice of a dlt modification (minimal dlt blow-up) up to PL homeomorphism.
In \cite{KX16} and \cite{Mau}, the dual complexes of log canonical dlt pairs $(X, \Delta)$ with $K_X + \Delta \sim _{\mathbb{Q}} 0$ are studied.
Our main theorem is the contractibility of the dual complexes of weak Fano pairs (see \cite[22]{KX16} for a similar result).
\begin{thm}[{$=$ Theorem \ref{thm:nefbig0}}]\label{thm:main1}
Let $(X, \Delta)$ be a projective pair over an algebraically closed field of characteristic zero.
Assume that $-(K_X + \Delta)$ is nef and big.
Then for any dlt blow-up $g: (Y, \Delta _Y) \to (X, \Delta)$,
the dual complex $\mathcal{D}(\Delta _Y ^{\ge 1})$ is contractible,
where we define $\Delta _Y$ by $K_Y + \Delta _{Y} = g^* (K_X + \Delta)$.
\end{thm}
\noindent
In this theorem, the coefficients of $\Delta$ may be larger than one, in contrast to the setting in \cite{dFKX17}.
We prove that the dual complex is independent of the choice of a dlt blow-up
(which is possibly not minimal) up to homotopy equivalence in our setting
(Proposition \ref{prop:indep}, see also Remark \ref{rmk:independence}).
The proof of Proposition \ref{prop:indep} depends on the weak factorization theorem \cite{AKMW02} and does not work
in positive characteristic even in dimension three.
Hence, we get the following weaker theorem in positive characteristic.
\begin{thm}[{$=$ Theorem \ref{thm:nefbig}}]\label{thm:main1'}
Let $(X, \Delta)$ be a three-dimensional projective pair over an algebraically closed field of characteristic larger than five.
Assume that $-(K_X + \Delta)$ is nef and big.
Then, there exists a dlt blow-up $g: (Y, \Delta _Y) \to (X, \Delta)$ such that
the dual complex $\mathcal{D}(\Delta _Y ^{\ge 1})$ is contractible,
where we define $\Delta _Y$ by $K_Y + \Delta _{Y} = g^* (K_X + \Delta)$.
\end{thm}
In the latter half of this paper, we discuss an application of the above result on the dual complex
to the study on Witt vector cohomology in positive characteristic.
In \cite{Esn03}, it is shown that the vanishing $H^i(X, W \mathcal{O}_{X, \mathbb{Q}}) = 0$ holds for $i > 0$
when $X$ is a geometrically connected smooth Fano variety defined over an algebraically closed field $k$.
This vanishing theorem is striking because, due to the lack of the Kodaira vanishing theorem in positive characteristic,
it is not known whether $H^i(X, \mathcal{O}_{X}) = 0$ holds or not.
In \cite{GNT}, Esnault's result is generalized to klt pairs of dimension $3$ in characteristic $p > 5$.
In \cite{NT}, the result in \cite{GNT} is generalized as a vanishing theorem of Nadel type (see Theorem \ref{thm:WNV}).
The following main theorem of this paper is another generalization of the result in \cite{GNT}
(we note that the result in \cite{GNT} is Theorem \ref{thm:main2} with the additional restriction that
the pair $(X, \Delta)$ is klt).
\begin{thm}[{$=$ Theorem \ref{thm:WAFV}}]\label{thm:main2}
Let $k$ be a perfect field of characteristic $p > 5$.
Let $(X, \Delta)$ be a three-dimensional projective $\mathbb{Q}$-factorial log canonical pair over $k$ with $-(K_X + \Delta)$ ample.
Then $H^i(X,W \mathcal{O}_{X, \mathbb{Q}}) = 0$ holds for $i > 0$.
\end{thm}
\noindent
By the vanishing theorem of Nadel type (Theorem \ref{thm:WNV}), the proof of Theorem \ref{thm:main2} is reduced to
the topological study of the non-klt locus of the pair $(X, \Delta)$ (Proposition \ref{prop:normality} and Proposition \ref{prop:tree}).
For the proof of Proposition \ref{prop:normality} and Proposition \ref{prop:tree},
we use the result on the dual complex (Theorem \ref{thm:main1'}) to obtain the topological information.
An important application of Witt vector cohomology is the rational point formula
on varieties defined over a finite field (cf.\ \cite{Esn03, GNT, NT}).
One of the motivations of the papers \cite{Esn03, GNT, NT} is to generalize the Ax-Katz theorem \cite{Ax64, Kat71},
which states that any hypersurface $H \subset \mathbb{P}^n$ of degree $d \le n$ defined over $\mathbb{F}_q$
has a rational point.
Theorem \ref{thm:main2} suggests that the Ax-Katz theorem might be generalized to singular ambient spaces,
and we actually obtain the following theorem in dimension three.
\begin{thm}[{$=$ Theorem \ref{thm:RPF}}]\label{thm:main3}
Let $k$ be a finite field of characteristic $p > 5$.
Let $(X, \Delta)$ be a geometrically connected three-dimensional projective $\mathbb{Q}$-factorial log canonical pair over $k$ with $-(K_X + \Delta)$ ample.
Then the number of the $k$-rational points on the non-klt locus of $(X, \Delta)$ satisfies
\[
\# \mathrm{Nklt}(X, \Delta) (k) \equiv 1 \mod {|k|}.
\]
In particular, there exists a $k$-rational point on $\mathrm{Nklt}(X, \Delta)$.
\end{thm}
\noindent
If $X$ is klt and $\Delta$ is a reduced divisor, then
$\mathrm{Nklt}(X, \Delta) = \mathrm{Supp} (\Delta)$ holds and Theorem \ref{thm:main3} claims that
there exists a $k$-rational point on $\mathrm{Supp} (\Delta)$.
This formulation can be seen as a generalization of the Ax-Katz theorem from the viewpoint of birational geometry.
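As a sanity check on the shape of this congruence (our illustration, not part of the proof), the classical point count of projective space already has this form: $\#\mathbb{P}^n(\mathbb{F}_q) = 1 + q + \dots + q^n \equiv 1 \pmod q$.

```python
# The congruence in the theorem above has the same shape as the classical
# count of rational points on projective space over a finite field:
# #P^n(F_q) = 1 + q + ... + q^n, which is congruent to 1 modulo q.

def projective_space_points(n, q):
    """Number of F_q-rational points of projective n-space (q a prime power)."""
    return sum(q ** i for i in range(n + 1))

for q in (7, 11, 13):          # characteristics > 5, as in the theorem
    for n in range(1, 4):
        assert projective_space_points(n, q) % q == 1
```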
\begin{ackn}
We would like to thank Professors Yoshinori Gongyo and Hiromu Tanaka for helpful discussions, and Mirko Mauri for useful comments and suggestions.
The author is partially supported by the Grant-in-Aid for Young Scientists
(KAKENHI No.\ 18K13384).
\end{ackn}
\section{Preliminaries}\label{section:prelimi}
\subsection{Notation}\label{subsection:notation}
\begin{itemize}
\item We basically follow the notations and the terminologies in \cite{Har77} and \cite{Kol13}.
\item
For a field $k$, we say that $X$ is a \textit{variety over} $k$ if
$X$ is an integral separated scheme of finite type over $k$.
\item
A \textit{sub log pair} $(X, \Delta)$ over a field $k$ consists of a normal variety $X$ over $k$ and
an $\mathbb{R}$-divisor $\Delta$ such that $K_X + \Delta$ is $\mathbb{R}$-Cartier.
A sub log pair is called a \textit{log pair} if $\Delta$ is effective.
Note that the coefficient of $\Delta$ may be larger than one in this definition.
\item
Let $\Delta = \sum r_i D_i$ be an $\mathbb{R}$-divisor where $D_i$ are distinct prime divisors.
We define
$\Delta ^{\ge 1} := \sum_{r_i \ge 1} r_i D_i$ and
$\Delta ^{\wedge 1} := \sum r_i ' D_i$ where $r_i ' := \min \{ r_i, 1 \}$.
We also define $\Delta ^{> 1}, \Delta ^{< 1}$, and $\Delta ^{= 1}$ similarly.
\item Let $X$ be a variety over $k$.
We denote by $(\star)$ the condition that $k$ and $X$ satisfy one of the following conditions:
\begin{itemize}
\item[(i)] $\mathrm{ch} (k) = 0$, or
\item[(ii)] $\mathrm{ch} (k) > 5$ and $\dim X = 3$.
\end{itemize}
This condition is necessary for running certain MMPs appearing in this paper \cite{BCHM10, HX15, Bir16, BW17, Wal18, HNT}.
\end{itemize}
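As a concrete illustration of the divisor notation above (the example is ours):

```latex
% For $\Delta = 2D_1 + \frac{1}{2}D_2 + D_3$ with $D_1, D_2, D_3$ distinct
% prime divisors, the operations above give:
\[
\Delta^{\ge 1} = 2D_1 + D_3, \quad
\Delta^{\wedge 1} = D_1 + \tfrac{1}{2}D_2 + D_3, \quad
\Delta^{> 1} = 2D_1, \quad
\Delta^{= 1} = D_3, \quad
\Delta^{< 1} = \tfrac{1}{2}D_2.
\]
```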
\subsection{Results on minimal model program}
In this subsection we review results on the minimal model program; throughout, $k$ is an algebraically closed field.
First, we review the definition of singularities of log pairs.
In this paper, we treat the following definitions only under the condition $(\star)$,
because we do not know whether the definitions behave well when $\dim X > 3$ and $\mathrm{ch} (k) > 0$.
\begin{defi}
\begin{enumerate}
\item
Let $(X, \Delta)$ be a log pair over $k$.
For a proper birational $k$-morphism $f: X' \to X$ from a normal variety $X'$
and a prime divisor $E$ on $X'$, the \textit{log discrepancy} of $(X, \Delta)$
at $E$ is defined as
\[
a_E (X, \Delta) := 1 + \mathrm{coeff}_E (K_{X'} - f^* (K_X + \Delta)).
\]
\item
A log pair $(X, \Delta)$ is called \textit{klt} (resp.\ \textit{log canonical}) if
$a_E (X, \Delta) > 0$ (resp. $\ge 0$) for any prime divisor $E$ over $X$.
\item
A log pair $(X, \Delta)$ is called \textit{dlt} if the coefficients of $\Delta$ are at most one and
there exists a log resolution $g: Y \to X$ of the pair $(X, \Delta)$ such that
$a_E (X, \Delta) > 0$ holds for any $g$-exceptional prime divisor $E$ on $Y$.
\item
Let $(X, \Delta)$ be a dlt pair and let $\Delta ^{=1} = \sum _{i \in I} E_i$ be the irreducible decomposition.
For any non-empty subset $J \subset I$,
a connected component of $\bigcap _{i \in J} E_i$ is called a \textit{stratum} of $\Delta ^{=1}$.
\end{enumerate}
\end{defi}
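As a standard first example of these definitions (ours, not taken from the text): let $X$ be a smooth surface, $\Delta = 0$, and let $f: X' \to X$ be the blow-up of a closed point, with exceptional divisor $E$. Then $K_{X'} = f^* K_X + E$, so

```latex
\[
a_E(X, 0) = 1 + \mathrm{coeff}_E\left( K_{X'} - f^* K_X \right) = 1 + 1 = 2 > 0,
\]
```

consistent with the fact that smooth varieties are klt.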
\begin{rmk}
The above definition (3) is equivalent to the definition in \cite[Definition 2.37]{KM98}:
\begin{itemize}
\item The coefficients of $\Delta$ are at most one.
Moreover, there exists an open subset $U \subset X$ such that $(U, \Delta |_U)$ is log smooth and no non-klt center of
$(X, \Delta)$ is contained in $X \setminus U$.
\end{itemize}
This equivalence is shown in \cite{Sza94} in characteristic zero.
The equivalence is also true in positive characteristic (in dimension three)
since Szab\'{o}'s resolution lemma (\cite[Lemma 2.3.19]{Fuj17}) also holds by \cite[Proposition 4.1]{CP08}
(see also \cite[Proposition 2.3.20]{Fuj17} and \cite[2.4, 2.5]{Bir16}).
Hence, even with our definition, being dlt is preserved under the MMP (\cite[Corollary 3.44]{KM98}).
\end{rmk}
The following proposition is necessary for defining the dual complexes of dlt pairs.
\begin{prop}[{\cite[Theorem 4.16]{Kol13}, \cite[Section 3.9]{Fuj07}, \cite[Proposition 1]{DH16}}]\label{prop:dltstrata}
Let $(X, \Delta)$ be a $\mathbb{Q}$-factorial dlt pair over $k$ and
$\Delta ^{=1} = \sum _{i \in I} E_i$ be the irreducible decomposition.
We assume the condition $(\star)$ (defined in Subsection \ref{subsection:notation}).
Then the following hold.
\begin{enumerate}
\item Let $J \subset I$ be a subset. If $\bigcap _{i \in J} E_i \not= \emptyset$,
then each connected component of $\bigcap _{i \in J} E_i$ is normal (hence irreducible) and
has codimension $\# J$.
\item Let $J \subset I$ be a subset, and let $j \in J$.
Then each connected component of $\bigcap _{i \in J} E_i$
is contained in the unique connected component of $\bigcap _{i \in J \setminus \{ j \}} E_i$.
\end{enumerate}
\end{prop}
\begin{proof}
See \cite[Theorem 4.16]{Kol13}. The assertion that each connected component of $\bigcap _{i \in J} E_i$ is irreducible
is not explicitly written in \cite[Theorem 4.16]{Kol13}.
However, it follows from the fact that the intersection of any two log canonical centers is also a union of log canonical centers
(cf.\ \cite[Theorem 9.1]{Fuj11}, \cite[Lemma 1]{DH16}).
Assertion (2) is trivial.
\end{proof}
In this paper, we will use the terminology ``dlt blow-up" in the following sense.
\begin{defi}
Let $(X, \Delta)$ be a log pair over $k$ and let $g:Y \to X$ be a projective birational $k$-morphism.
We call $g$ a \textit{dlt blow-up} of $(X, \Delta)$ if the following conditions hold:
\begin{itemize}
\item[(1)] $a_E(X, \Delta) \le 0$ holds for any $g$-exceptional prime divisor $E$.
\item[(2)] $(Y, \Delta _Y ^{\wedge 1})$ is a $\mathbb{Q}$-factorial dlt pair, where
$\Delta _Y$ is the $\mathbb{R}$-divisor defined by $K_Y + \Delta _Y = g^*(K_X + \Delta)$.
\end{itemize}
\end{defi}
\begin{thm}\label{thm:dltmodif}
Let $(X, \Delta)$ be a log pair over $k$ with the condition $(\star)$.
Then a dlt blow-up of $(X, \Delta)$ exists.
Further, we can take a dlt blow-up $g: V \to X$ with the following additional condition:
\begin{itemize}
\item[(3)] $g^{-1}(\mathrm{Nklt} (X, \Delta)) = \mathrm{Nklt} (V, \Delta _V)$ holds.
\end{itemize}
\end{thm}
\begin{proof}
By Step 2 in the proof of \cite[Proposition 3.5]{HNT},
in order to show the existence of a dlt blow-up with the condition (3),
it is sufficient to show the existence of a usual dlt blow-up (that is, one satisfying only conditions (1) and (2)).
For the existence of a dlt blow-up, the same proof as in \cite[Theorem 10.4]{Fuj11} works as follows.
Let $f: Y \to X$ be a log resolution of $(X, \Delta)$.
Let $F = \sum F_i$ be the sum of the $f$-exceptional divisors $F_i$ with $a_{F_i} (X, \Delta) \le 0$, and
let $G = \sum G_i$ be the sum of the $f$-exceptional divisors $G_i$ with $a_{G_i} (X, \Delta) > 0$.
Let $\widetilde{\Delta}$ be the strict transform of $\Delta$ on $Y$.
We may assume that there exists an effective $\mathbb{R}$-divisor $H$ on $Y$ such that
$\mathrm{Supp}\, H = \mathrm{Supp}\, (F+G)$ and that $-H$ is $f$-ample.
We set an $\mathbb{R}$-divisor $\Omega$ as
\[
\Omega = \widetilde{\Delta}^{\wedge 1} + F + (1 - \epsilon) G - \delta H
\]
for sufficiently small $\epsilon, \delta > 0$.
Since $-H$ is $f$-ample, there exists an effective ample $\mathbb{R}$-divisor $A$ on $Y$ such that
$- \delta H \sim _{/X,\, \mathbb{R}} A$.
We set
\[
\overline{\Omega} = \widetilde{\Delta}^{\wedge 1} + F + (1 - \epsilon) G + A.
\]
We may assume that $(Y, \overline{\Omega})$ is dlt.
Note that $K_Y + \Omega \sim _{/X,\, \mathbb{R}} K_Y + \overline{\Omega}$.
Since $A$ is ample, there exists an $\mathbb{R}$-divisor $\overline{\Omega}'$ such that
$\overline{\Omega}' \sim _{\mathbb{R}} \overline{\Omega}$ and $(Y, \overline{\Omega}')$ is klt.
Hence, we may run a $(K_Y + \Omega)$-MMP over $X$ and it terminates.
Let $Y'$ be the end result and let $h: Y' \to X$ be the induced morphism.
We shall show that $h: Y' \to X$ is a dlt blow-up of $(X, \Delta)$.
We have
\begin{align*}
K_Y + \Omega
& \sim _{/X,\, \mathbb{R}} K_Y + \Omega - f^*(K_X + \Delta) \\
&= - (\widetilde{\Delta} - \widetilde{\Delta}^{\wedge 1}) + \sum a_i F_i + \sum b_i G_i - \epsilon G - \delta H,
\end{align*}
where $a_i = a_{F_i} (X, \Delta) \le 0$ and $b_i = a_{G_i} (X, \Delta) >0$.
Since $b_i > 0$ and $\epsilon$ and $\delta$ are sufficiently small,
it follows that
\[
\mathrm{coeff}_{G_j} \left( \sum a_i F_i + \sum b_i G_i - \epsilon G - \delta H \right) > 0
\]
for each $G_j$.
Hence by the negativity lemma, all the divisors $G_i$'s are contracted in this MMP.
Therefore $a_E (X, \Delta) \le 0$ holds for any $h$-exceptional prime divisor $E$.
Since the $(K_Y + \Omega)$-MMP is also a $(K_Y + \overline{\Omega})$-MMP,
the pair $(Y', \overline{\Omega} _{Y'})$ is still dlt where $\overline{\Omega} _{Y'}$ is the push forward of $\overline{\Omega}$.
Define $\Delta _{Y'}$ by $K_{Y'} + \Delta _{Y'} = h^* (K_X + \Delta)$.
Then $\Delta _{Y'} ^{\wedge 1}$ is the push forward of $\widetilde{\Delta}^{\wedge 1} + F$ on $Y'$,
and hence $0 \le \Delta _{Y'} ^{\wedge 1} \le \overline{\Omega} _{Y'}$ holds.
Therefore $(Y', \Delta _{Y'} ^{\wedge 1})$ is also dlt.
We have proved that $h$ is a dlt blow-up of $(X, \Delta)$.
\end{proof}
\begin{rmk}\label{rmk:dltbup}
\begin{enumerate}
\item When $X$ is $\mathbb{Q}$-factorial, any dlt blow-up of $(X, \Delta)$ satisfies condition (3) in
Theorem \ref{thm:dltmodif}.
\item
If $V \to X$ is a log resolution of $(X, \Delta)$,
then we can construct (by the proof above) a dlt blow-up $Y \to X$ of $(X, \Delta)$ such that
the induced birational map $Y \dasharrow V$ does not contract any divisor on $Y$.
\end{enumerate}
\end{rmk}
\subsection{Dual complexes}\label{subsection:dual_cpx}
In this subsection, we explain how to define a CW complex from a dlt pair, and we also prove an invariance property (Proposition \ref{prop:indep}).
First, we briefly review the notion of $\Delta$-complexes following \cite{Hat02}.
\begin{defi}
\begin{enumerate}
\item Let $X = \cup _{\varphi _{\alpha}} F_{\alpha}$ be a CW complex with the attaching maps $\varphi _{\alpha} : F_{\alpha} \to X$.
We call $X$ a \textit{$\Delta$-complex} when each cell $F_{\alpha}$ is a simplex and
the restriction of $\varphi _{\alpha}$ to each face of $F_{\alpha}$ is equal to
the attaching map $\varphi _{\beta} : F_{\beta} \to X$ for some $\beta$.
\item
A $\Delta$-complex $X$ is called \textit{regular} if the attaching maps are injective,
or equivalently, if every $d$-cell in $X$ has $d+1$ distinct vertices.
\item
A regular $\Delta$-complex $X$ is called a \textit{simplicial complex}
if the intersection of any two cells in $X$ is a face of both cells, or equivalently,
any $k+1$ vertices in $X$ are incident to at most one $k$-cell.
\end{enumerate}
\end{defi}
For a dlt pair $(X, \Delta)$, we define the dual complex $\mathcal{D}(\Delta ^{=1})$.
\begin{defi}
\begin{enumerate}
\item Let $(X, \Delta)$ be a $\mathbb{Q}$-factorial dlt pair over $k$ with the condition $(\star)$ and
let $\Delta ^{=1} = \sum _{i \in I} E_i$ be the irreducible decomposition.
Then the \textit{dual complex} $\mathcal{D}(\Delta ^{=1})$ is a CW complex obtained as follows.
The vertices of $\mathcal{D}(\Delta ^{=1})$ are the set of $\{ E_i \} _{i \in I}$.
To each $k$-codimensional stratum $S$ of $\Delta ^{=1}$ we associate a $k$-dimensional cell.
The attaching map is uniquely defined by Proposition \ref{prop:dltstrata} (2).
\item Let $(X, \Delta)$ be a $\mathbb{Q}$-factorial pair over $k$ with the condition $(\star)$.
Suppose that $(X, \Delta^{\wedge 1})$ is dlt. Then we define
$\mathcal{D} (\Delta ^{\ge 1}) = \mathcal{D} ((\Delta^{\wedge 1}) ^{= 1})$.
\end{enumerate}
\end{defi}
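A minimal example may help fix ideas (the example is ours):

```latex
For instance, let $X = \mathbb{P}^2$ and let $\Delta = L_1 + L_2$ be the sum of
two distinct lines. Then $(X, \Delta)$ is dlt, the two components of
$\Delta^{=1}$ meet in a single point, and $\mathcal{D}(\Delta^{=1})$ is an
interval (two vertices joined by one edge), which is contractible; note that
$-(K_X + \Delta) = \mathcal{O}_{\mathbb{P}^2}(1)$ is ample, as in
Theorem~\ref{thm:main1}. By contrast, for $\Delta$ the sum of three lines in
general position, $K_X + \Delta \sim 0$ and $\mathcal{D}(\Delta^{=1})$ is a
circle (three vertices and three edges).
```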
\begin{prop}\label{prop:regular}
Let $(X, \Delta)$ be a $\mathbb{Q}$-factorial dlt pair over $k$ with the condition $(\star)$.
Then the dual complex $\mathcal{D}(\Delta ^{=1})$ is a regular $\Delta$-complex.
\end{prop}
\begin{proof}
The assertion follows from Proposition \ref{prop:dltstrata} (1). See \cite{dFKX17} for more details.
\end{proof}
The following theorem from \cite{dFKX17} says that the dual complex is preserved under a certain MMP up to simple-homotopy equivalence.
\begin{thm}[{\cite[Proposition 19]{dFKX17}}]\label{thm:inv_mmp}
Let $(X, \Delta)$ be a $\mathbb{Q}$-factorial dlt pair over $k$ with condition $(\star)$ and
let $f:X \dasharrow Y$ be a divisorial contraction or
flip corresponding to a $(K_X + \Delta)$-negative extremal ray $R$.
Assume that there is a prime divisor $D_0 \subset \Delta ^{=1}$ such that $D_0 \cdot R >0$.
Then $\mathcal{D}(\Delta ^{=1})$ collapses to $\mathcal{D}(\Delta _Y ^{=1})$ where we set $\Delta _Y := f_* \Delta$.
In particular $\mathcal{D}(\Delta ^{= 1})$ and $\mathcal{D}(\Delta _Y ^{=1})$ are simple-homotopy equivalent.
\end{thm}
\begin{lem}\label{lem:szabo}
Let $(X, \Delta)$ be a $\mathbb{Q}$-factorial pair over $k$ with the condition $(\star)$.
Suppose that $(X, \Delta ^{\wedge 1})$ is dlt.
Let $f: Y \to X$ be a log resolution of $(X, \Delta ^{\wedge 1})$ such that
$a_E (X, \Delta ^{\wedge 1}) > 0$ holds for any $f$-exceptional divisor $E$.
Let $\Delta _Y$ be the (not necessarily effective) $\mathbb{R}$-divisor defined by $K_Y + \Delta _Y = f^*(K_X + \Delta)$.
Then the dual complexes $\mathcal{D}(\Delta ^{\ge 1})$ and $\mathcal{D}(\Delta _Y ^{\ge 1})$ are simple-homotopy equivalent.
\end{lem}
\begin{proof}
Let $F = \sum F_i$ be the sum of the $f$-exceptional divisors $F_i$ with $a_{F_i} (X, \Delta) \le 0$, and
let $G = \sum G_i$ be the sum of the $f$-exceptional divisors $G_i$ with $a_{G_i} (X, \Delta) > 0$.
Let $\widetilde{\Delta}$ be the strict transform of $\Delta$ on $Y$.
Let $H$ be an effective $\mathbb{R}$-divisor on $Y$ such that
$\mathrm{Supp}\, H = \mathrm{Supp}\, (F+G)$ and that $-H$ is $f$-ample.
We set an $\mathbb{R}$-divisor $\Omega$ as
\[
\Omega = \widetilde{\Delta}^{\wedge 1} + F + (1 - \epsilon) G - \delta H
\]
for sufficiently small $\epsilon, \delta > 0$.
Since $-H$ is $f$-ample, there exists an effective ample $\mathbb{R}$-divisor $A$ on $Y$ such that
$- \delta H \sim _{/X,\, \mathbb{R}} A$.
We set
\[
\overline{\Omega} = \widetilde{\Delta}^{\wedge 1} + F + (1 - \epsilon) G + A.
\]
We may assume that $(Y, \overline{\Omega})$ is dlt.
Note that $K_Y + \Omega \sim _{/X,\, \mathbb{R}} K_Y + \overline{\Omega}$ and $\mathcal{D}(\overline{\Omega} ^{=1}) = \mathcal{D}(\Delta _Y ^{\ge 1})$.
Since $A$ is ample, there exists an $\mathbb{R}$-divisor $\overline{\Omega}'$ such that
$\overline{\Omega}' \sim _{\mathbb{R}} \overline{\Omega}$ and $(Y, \overline{\Omega}')$ is klt.
Hence, we may run a $(K_Y + \Omega)$-MMP over $X$ and it terminates.
First, we prove that this MMP ends with $X$.
\begin{align*}
K_Y + \Omega
& \sim _{/X,\, \mathbb{R}} K_Y + \Omega - f^*(K_X + \Delta^{\wedge 1}) \\
&= \sum a_i F_i + \sum b_i G_i - \epsilon G - \delta H,
\end{align*}
where $a_i = a_{F_i} (X, \Delta ^{\wedge 1})$ and $b_i = a_{G_i} (X, \Delta ^{\wedge 1})$.
Since $a_i, b_i > 0$ and $\epsilon$ and $\delta$ are sufficiently small,
it follows that
\[
\mathrm{coeff}_E \left( \sum a_i F_i + \sum b_i G_i - \epsilon G - \delta H \right) > 0
\]
for any $f$-exceptional divisor $E$.
Hence by the negativity lemma, all the divisors $F_i$'s and $G_i$'s are contracted in this MMP.
Since $X$ is $\mathbb{Q}$-factorial, this MMP ends with $X$.
Let $Y_j \dasharrow Y_{j+1}$ be the step of the $(K_Y + \Omega)$-MMP over $X$, and
let $R$ be the corresponding extremal ray, and $Y_j \to Z_j$ be its contraction.
For a divisor $D$ on $Y$, we denote $D_{Y_j}$ the strict transform of $D$ on $Y_j$.
We also denote $g: Y_j \to X$ the induced morphism.
Then,
\begin{align*}
& K_{Y_j} + \Omega _{Y_j} \\
\sim & _{/X,\, \mathbb{R}} K_{Y_j} + \Omega _{Y_j} - g^*(K_X + \Delta) \\
= & - \left( \widetilde{\Delta}_{Y_j} - (\widetilde{\Delta} _{Y_j}) ^{\wedge 1} \right) +
\sum a' _i F_{i, Y_j} + \sum b' _i G_{i, Y_j} - \epsilon G_{Y_j} - \delta H_{Y_j},
\end{align*}
where $a' _i = a_{F_i} (X, \Delta)$ and $b' _i = a_{G_i} (X, \Delta)$.
Since $a' _i \le 0$, it follows that
\[
\mathrm{coeff}_{F_{i, Y_j}} \left( \sum a' _i F_{i, Y_j} + \sum b' _i G_{i, Y_j} - \epsilon G_{Y_j} - \delta H_{Y_j} \right) < 0.
\]
Since $b' _i > 0$ and $\epsilon$ and $\delta$ are sufficiently small, it follows that
\[
\mathrm{coeff}_{G_{i, Y_j}} \left( \sum a' _i F_{i, Y_j} + \sum b' _i G_{i, Y_j} - \epsilon G_{Y_j} - \delta H_{Y_j} \right) > 0.
\]
Since $(K_{Y_j} + \Omega_{Y_j}) \cdot R < 0$, at least one of the following conditions holds:
\begin{enumerate}
\item $D \cdot R > 0$ holds for some component $D \subset \mathrm{Supp} (\widetilde{\Delta} _{Y_j} ^{>1})$.
\item $F_{i, Y_j} \cdot R > 0$ for some $i$.
\item $G_{i, Y_j} \cdot R < 0$ for some $i$.
\end{enumerate}
Here, we have $K_{Y_j} + \Omega _{Y_j} \sim _{/X,\, \mathbb{R}} K_{Y_j} + \overline{\Omega} _{Y_j}$, and
each component of $\overline{\Omega} _{Y_j} ^{=1}$ is either one of the $F_{i, Y_j}$'s or
a component of $(\widetilde{\Delta} _{Y_j}) ^{\ge 1}$.
Hence in the case (1) or (2), by Theorem \ref{thm:inv_mmp},
the dual complexes $\mathcal{D}((\overline{\Omega} _{Y_j}) ^{=1})$ and
$\mathcal{D}((\overline{\Omega} _{Y_{j+1}}) ^{=1})$ are simple-homotopy equivalent.
In the case (3), the exceptional locus $L$ of $Y_j \to Z_j$ is contained in $\mathrm{Supp} (G_{i, Y_j})$.
Since $(Y_j, \overline{\Omega} _{Y_j})$ is dlt, no stratum of $(\overline{\Omega} _{Y_j}) ^{=1}$
is contained in $\mathrm{Supp} (G_{i, Y_j})$, and hence none is contained in $L$.
Hence in the case (3), by \cite[Lemma 16]{dFKX17}, it follows that
$\mathcal{D}((\overline{\Omega} _{Y_j}) ^{=1}) = \mathcal{D}((\overline{\Omega} _{Y_{j+1}}) ^{=1})$.
Hence by induction on $j$, the dual complexes $\mathcal{D}(\overline{\Omega} ^{=1})$ and
$\mathcal{D}((\overline{\Omega} _X) ^{=1})$ are simple-homotopy equivalent.
Since $\mathcal{D}(\overline{\Omega} ^{=1}) = \mathcal{D}(\Delta _Y ^{\ge 1})$ and
$\mathcal{D}((\overline{\Omega} _X) ^{=1}) = \mathcal{D}(\Delta ^{\ge 1})$,
it follows that $\mathcal{D}(\Delta _Y ^{\ge 1})$ and $\mathcal{D}(\Delta ^{\ge 1})$ are simple-homotopy equivalent.
\end{proof}
\begin{lem}\label{lem:bup}
Let $(X, \Delta)$ be a sub log pair over $k$ with the condition $(\star)$ such that $(X, \mathrm{Supp}\ \Delta)$ is log smooth.
Let $Z$ be a smooth irreducible subvariety of $X$ which has only simple normal
crossing with $\mathrm{Supp}\, \Delta$.
Let $f : Y \to X$ be the blow up along $Z$, and
let $\Delta _Y$ be the $\mathbb{R}$-divisor defined by $K_Y + \Delta _Y = f^*(K_X + \Delta)$.
Then the dual complexes $\mathcal{D}(\Delta ^{\ge 1})$ and $\mathcal{D}(\Delta _Y ^{\ge 1})$ are simple-homotopy equivalent.
\end{lem}
\begin{proof}
Set $F = (\Delta ^{\ge 1})^{\wedge 1}$ and $F_Y = (\Delta_Y ^{\ge 1})^{\wedge 1}$.
Let $E$ be the $f$-exceptional divisor and $\widetilde{F}$ be the strict transform of $F$ on $Y$.
We distinguish the following cases:
\begin{enumerate}
\item $Z$ is a stratum of $F$.
\item $Z \subset \mathrm{Supp}\, F$ but $Z$ is not a stratum of $F$.
\item $Z \not \subset \mathrm{Supp}\, F$ (in particular, $Z$ is not a stratum of $F$).
\end{enumerate}
Suppose (1). Then $F_Y = \widetilde{F} + E$ holds.
On the other hand, $\mathcal{D}(\widetilde{F} + E)$ and $\mathcal{D}(F)$ are PL homeomorphic to each other by \cite[9]{dFKX17}.
Suppose (2). Then $F_Y = \widetilde{F}$ or $F_Y = \widetilde{F} + E$ holds.
On the other hand, $\mathcal{D}(\widetilde{F}) = \mathcal{D}(F)$ holds and
$\mathcal{D}(\widetilde{F} + E)$ and $\mathcal{D}(F)$ are simple-homotopy equivalent by \cite[9]{dFKX17}.
Suppose (3). Then $F_Y = \widetilde{F}$ holds and $\mathcal{D}(\widetilde{F}) = \mathcal{D}(F)$.
\end{proof}
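As an editorial sanity check of case (1) (a toy example, not from the source): let $X$ be a smooth surface and let $\Delta = D_1 + D_2$ be the sum of two smooth curves meeting transversally at a single point $z$, so that $F = D_1 + D_2$ and $Z = \{z\}$ is a stratum of $F$. For the blow up $f: Y \to X$ at $z$ with exceptional divisor $E$, we have $f^* D_i = \widetilde{D}_i + E$ and $K_Y = f^* K_X + E$, hence
\[
K_Y + \widetilde{D}_1 + \widetilde{D}_2 + E = f^*(K_X + D_1 + D_2)
\]
and $F_Y = \widetilde{D}_1 + \widetilde{D}_2 + E$. The dual complex $\mathcal{D}(F)$ is a single edge, while $\mathcal{D}(F_Y)$ is the same edge subdivided at the new vertex $E$ (note $\widetilde{D}_1 \cap \widetilde{D}_2 = \emptyset$); the two complexes are PL homeomorphic, as case (1) predicts.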
\begin{prop}\label{prop:indep}
Let $(X, \Delta)$ be a pair over $k$ with condition (i) in $(\star)$ and
let $f_1: Y_1 \to X$ and $f_2: Y_2 \to X$ be two dlt blow-ups of $(X, \Delta)$.
Define $\mathbb{R}$-divisors $\Delta _{Y_i}$ by
$K_{Y_i} + \Delta _{Y_i} = f_i ^* (K_X + \Delta)$.
Then the dual complexes $\mathcal{D}(\Delta _{Y_1} ^{\ge 1})$ and
$\mathcal{D}(\Delta _{Y_2} ^{\ge 1})$ are simple-homotopy equivalent.
\end{prop}
\begin{proof}
The same proof as in \cite[Proposition 11]{dFKX17} works.
For the reader's convenience, we give a sketch of the proof.
By definition of dlt pairs, we can take a log resolution $g_i: W_i \to Y_i$ of $(Y_i, \Delta _{Y_i})$ such that
$a_E (Y_i, \Delta _{Y_i}^{\wedge 1}) > 0$ holds for any $g_i$-exceptional divisor $E$.
Define $\mathbb{R}$-divisors $\Delta _{W_i}$ on $W_i$ by $K_{W_i} + \Delta _{W_i} = g_i ^* (K_{Y_i} + \Delta _{Y_i})$.
By Lemma \ref{lem:szabo}, the dual complexes $\mathcal{D} (\Delta _{Y_i} ^{\ge 1})$ and
$\mathcal{D} (\Delta _{W_i} ^{\ge 1})$ are simple-homotopy equivalent.
By the weak factorization theorem \cite{AKMW02}, the pairs $(W_1, \Delta _{W_1})$ and
$(W_2, \Delta _{W_2})$ can be connected by a sequence of blow-ups as in Lemma \ref{lem:bup}.
Hence $\mathcal{D}(\Delta _{W_1} ^{\ge 1})$ and $\mathcal{D}(\Delta _{W_2} ^{\ge 1})$ are simple-homotopy equivalent.
\end{proof}
\begin{rmk}\label{rmk:independence}
\begin{enumerate}
\item
In the proof of this proposition, the weak factorization theorem \cite{AKMW02} is used.
So, the proof does not work in positive characteristic even in dimension three.
\item If $(X, \Delta)$ is log canonical in this proposition, then
$\mathcal{D}(\Delta _{Y_1} ^{\ge 1})$ and
$\mathcal{D}(\Delta _{Y_2} ^{\ge 1})$ are PL homeomorphic to each other (\cite[Proposition 11]{dFKX17}).
\end{enumerate}
\end{rmk}
\subsection{Results on Witt vector cohomology}
For the definition of Witt vector cohomology and its basic properties, we refer to \cite{GNT} and \cite{CR12}.
The following vanishing theorem of Nadel type will be used in this paper.
\begin{thm}[{\cite[Theorem 4.10]{NT}}]\label{thm:WNV}
Let $(X, \Delta)$ be a projective log pair over a perfect field $k$ with condition (ii) in $(\star)$.
Then
\[
H^i(X, WI_{\mathrm{Nklt}(X, \Delta), \mathbb{Q}}) = 0
\]
holds for $i >0$, where $\mathrm{Nklt}(X, \Delta)$ denotes the reduced closed subscheme of $X$ consisting of the non-klt points of $(X, \Delta)$
and $I_{\mathrm{Nklt}(X, \Delta)}$ is the coherent ideal sheaf on $X$ corresponding to $\mathrm{Nklt}(X, \Delta)$.
\end{thm}
\section{Dual complex of weak Fano varieties}
\subsection{Dual complex of dlt pairs with a Mori fiber space structure}
\begin{lem}\label{lem:MFS2}
Let $(X, \Delta)$ be a $\mathbb{Q}$-factorial dlt pair over $k$ with condition $(\star)$ and
let $f: X \to Z$ be a projective surjective $k$-morphism to a quasi-projective $k$-scheme $Z$ such that
\begin{enumerate}
\item
$\dim X >\dim Z$,
\item
$f$ has connected fibers,
\item
$-(K_X+\Delta)$ is $f$-ample, and
\item
there exists an irreducible component $D_0$ of $\Delta ^{=1}$
such that $D_0$ is $f$-ample.
\end{enumerate}
Then $\mathcal{D}(\Delta ^{=1})$ is contractible.
\end{lem}
\begin{proof}
Let $g:D_0 \to Z$ be the induced morphism.
Let $\Delta ^{=1} = \sum_{i=0}^m D_i$ be the irreducible decomposition.
What we want to show is the following:
\begin{itemize}
\item[(1)] Any stratum $S$ of $\sum_{i=1}^m D_i$ intersects $D_0$, and
\item[(2)] $S \cap D_0$ is connected.
\end{itemize}
These conditions imply that the dual complex of $\sum_{i=0}^m D_i$ is the cone of the dual complex of
$\sum_{i=1}^m D_i$ with the vertex $D_0$.
Therefore $\mathcal{D}(\Delta ^{=1})$ is contractible.
We show (1) and (2) by induction on $\dim X$.
First we prove the following claim.
\begin{claim}\label{claim:majiwaru}
For any $i \in \{1, \ldots, m\}$,
it holds that
\begin{enumerate}
\item[(a)] $\dim f(D_i) < \dim D_i$, and
\item[(b)] $f(D_i) =f(D_i \cap D_0)$ holds, in particular $D_i \cap D_0 \not= \emptyset$.
\item[(c)] $D_i \cap D_0$ is connected and irreducible.
\end{enumerate}
\end{claim}
\begin{proof}
We prove (a).
Since the assertion is clear if $\dim Z \leq \dim X - 2$,
we may assume that $\dim Z = \dim X - 1$.
Suppose that $f(D_i)=Z$ holds for some $i \in \{1, \dots, m\}$.
For a general fiber $F$ of $f$, $\dim F = 1$ holds and $(D_i \cup D_0) \cap F$ is not connected.
This contradicts the Koll{\'a}r-Shokurov connectedness lemma (cf.\ \cite[Theorem 1.2]{NT}).
Therefore, $D_i$ does not dominate $Z$ for any $i \in \{1, \dots, m\}$.
We prove (b).
Let $x \in f(D_i)$ be a closed point.
Since $\dim f(D_i) < \dim D_i$,
there exists a curve $C$ on $X$ contained in $D_i \cap f^{-1}(x)$.
Since $D_0$ is ample over $Z$,
the curve $C$, which is contracted by $f$, intersects $D_0$.
This implies $x \in f(D_i \cap D_0)$.
Thus we get $f(D_i) = f(D_i \cap D_0)$.
We prove (c). Suppose that $D_i \cap D_0$ is not connected.
Let $S$ be a connected component of $D_i \cap D_0$ which satisfies $f(S) = f(D_i \cap D_0)$.
Let $G$ be another connected component of $D_i \cap D_0$.
Then for any closed point $x \in f(G)$, $D_i \cap D_0$ is not connected over $x$.
However, this contradicts the Koll{\'a}r-Shokurov connectedness lemma.
Therefore $D_i \cap D_0$ is connected.
Then the irreducibility follows from Proposition \ref{prop:dltstrata} (1).
\end{proof}
Let $S$ be a stratum of $\sum_{i=1}^m D_i$.
Then $S$ is a connected component of $\bigcap _{i \in I} D_i$ for some
$I \subset \{ 1, \ldots ,m \}$.
We may assume that $1 \in I$ possibly changing the indices.
Let $C:= f(D_1)$, and
let $D_1 \xrightarrow{f'} C' \xrightarrow{s} C$ be the Stein factorization of $D_1 \to C$.
Let $\Delta _{D_1}$ be the effective $\mathbb{R}$-divisor on $D_1$ defined by adjunction
$(K_X+\Delta)|_{D_1}=K_{D_1}+\Delta _{D_1}$.
Then the following properties hold.
\begin{enumerate}
\item[(d)] $(D_1, \Delta _{D_1})$ is dlt.
\item[(e)] $-(K_{D_1} + \Delta _{D_1})$ is $f'$-ample.
\item[(f)] $- D_0|_{D_1}$ is $f'$-ample.
\end{enumerate}
We prove (1) and (2) by induction on $\# I$.
If $\# I = 1$, then (1) and (2) follow from Claim \ref{claim:majiwaru}.
Suppose $\# I \ge 2$. Then $S$ is also a stratum of $\Delta _{D_1} ^{=1}$.
Hence (1) and (2) hold by induction on the dimension.
\end{proof}
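A minimal editorial illustration of Lemma \ref{lem:MFS2} (assuming $\mathrm{char}\, k = 0$ for simplicity): take $X = \mathbb{P}^2$, $\Delta = L_0 + L_1$ the sum of two distinct lines, and $f: X \to Z = \mathrm{Spec}\, k$. Then $(X, \Delta)$ is dlt, $-(K_X + \Delta)$ is linearly equivalent to a line and hence $f$-ample, and $D_0 = L_0$ is $f$-ample. The dual complex $\mathcal{D}(\Delta^{=1})$ consists of the two vertices $L_0$, $L_1$ and the edge corresponding to the point $L_0 \cap L_1$; it is the cone over $\mathcal{D}(L_1)$ with vertex $L_0$, hence contractible.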
\begin{lem}\label{lem:MFS}
Let $(X, \Delta)$ be a $\mathbb{Q}$-factorial dlt pair over $k$ with the condition $(\star)$ and
let $f: X \to Z$ be a $(K_X+\Delta)$-Mori fiber space to a quasi-projective $k$-variety $Z$.
Suppose that $f(\mathrm{Supp}\, \Delta ^{=1})=Z$.
Then $\mathcal{D}(\Delta ^{=1})$ is contractible.
\end{lem}
\begin{proof}
Since $f(\mathrm{Supp}\, \Delta ^{=1})=Z$,
some irreducible component $D_0$ of $\Delta ^{=1}$ satisfies $f(D_0)=Z$.
Since $\rho(X/Z)=1$, it follows that $D_0$ is $f$-ample.
Hence the assertion follows from Lemma \ref{lem:MFS2}.
\end{proof}
\subsection{Dual complex of weak Fano varieties}
\begin{thm}\label{thm:scc2}
Let $(X, \Omega)$ be a $\mathbb{Q}$-factorial projective log pair.
Assume that
\begin{itemize}
\item[(1)] $(X, \Omega ^{\wedge 1})$ is dlt but not klt.
\item[(2)] $K_X + \Omega \sim _{\mathbb{R}} 0$.
\item[(3)] $\mathrm{Supp}\, \Omega ^{>1} = \mathrm{Supp}\, \Omega ^{\ge 1}$.
\end{itemize}
Then, the dual complex $\mathcal{D}(\Omega ^{\ge 1})$ is contractible.
\end{thm}
\begin{proof}
Since $(X, \Omega ^{\wedge 1})$ is not klt, it follows that
\[
\mathrm{Supp}\, \Omega ^{>1} = \mathrm{Supp}\, \Omega ^{\ge 1} \not = \emptyset.
\]
Therefore, $K_X+\Omega^{\wedge 1} \sim _{\mathbb{R}} - (\Omega-\Omega^{\wedge 1})$ is not pseudo-effective.
Further, by (3), $\left( X, \Omega^{\wedge 1} - \epsilon (\Omega-\Omega^{\wedge 1}) \right)$
is klt for some small $\epsilon >0$.
Hence we may run a $(K_X+\Omega^{\wedge 1})$-MMP, which terminates with a Mori fiber space $f: X_{\ell} \to Z$:
\[
X=:X_0 \dashrightarrow X_1 \dashrightarrow \cdots \dashrightarrow X_{\ell}.
\]
Let $\Omega_{i}$ be the push-forward of $\Omega$ on $X_{i}$.
Then $\Omega _{i}$ also satisfies the conditions (1)--(3).
Further, since
$\Omega _{\ell} -\Omega _{\ell} ^{\wedge 1} \sim _{\mathbb{R}}
-(K_{X_{\ell}}+\Omega _{\ell} ^{\wedge 1})$
is ample over $Z$, it follows that
$f(\mathrm{Supp}\, \Omega _{\ell} ^{\ge 1}) = Z$.
Hence, by Lemma \ref{lem:MFS}, the dual complex
$\mathcal{D} ( (\Omega _{\ell} ^{\wedge 1}) ^{=1})$ is contractible.
Let $R_i$ be the extremal ray of $\overline{\mathrm{NE}} (X_i)$ corresponding to the step of
the MMP $X_i \dashrightarrow X_{i+1}$.
Since
$- (\Omega _{i} -\Omega _{i} ^{\wedge 1}) \cdot R_i < 0$,
it follows that some component $D_i$ of
$\mathrm{Supp} (\Omega _{i} ^{> 1})$\ ($= \mathrm{Supp} \left( (\Omega _i ^{\wedge 1}) ^{=1} \right)$) satisfies
$D_i \cdot R_i > 0$.
By Theorem \ref{thm:inv_mmp}, $\mathcal{D} ( (\Omega _{i} ^{\wedge 1}) ^{=1})$ and
$\mathcal{D} ( (\Omega _{i+1} ^{\wedge 1}) ^{=1})$ are simple-homotopy equivalent.
Hence, $\mathcal{D}(\Omega ^{\ge 1}) = \mathcal{D} ( (\Omega ^{\wedge 1}) ^{=1})$ is contractible.
\end{proof}
\begin{prop}\label{prop:ample}
Let $(X, \Delta)$ be a $\mathbb{Q}$-factorial projective pair over $k$ with the condition $(\star)$.
Suppose that $-(K_X + \Delta)$ is ample.
Then for any dlt blow-up $g: (Y, \Delta _Y) \to (X, \Delta)$,
the dual complex $\mathcal{D}(\Delta _Y ^{\ge 1})$ is contractible,
where we define $\Delta _Y$ by $K_Y + \Delta _{Y} = g^* (K_X + \Delta)$.
\end{prop}
\begin{proof}
By Theorem \ref{thm:scc2},
it suffices to find an effective $\mathbb{R}$-divisor $\Omega _Y$ on $Y$ such that
\begin{enumerate}
\item[(1)] $(Y, \Omega _Y ^{\wedge 1})$ is dlt,
\item[(2)] $K_Y + \Omega _Y \sim _{\mathbb{R}} 0$,
\item[(3)] $\mathrm{Supp} (\Omega _Y^{>1}) = \mathrm{Supp} (\Omega _Y ^{\ge 1})$, and
\item[(4)] $\mathrm{Supp} (\Omega _Y ^{\ge 1}) = \mathrm{Supp} (\Delta _Y ^{\ge 1})$.
\end{enumerate}
Since $X$ is $\mathbb{Q}$-factorial,
there exists an effective $\mathbb{R}$-divisor $F$ on $Y$ such that $-F$ is $g$-ample and
$\mathrm{Supp}\, F = \mathrm{Excep} (g)$.
Since $-(K_Y + \Delta _Y)$ is the pullback of an ample $\mathbb{R}$-divisor
$-(K_X + \Delta)$ on $X$,
it follows that $-(K_Y + \Delta _Y)- \epsilon F$ is ample for any sufficiently small $\epsilon > 0$.
Note that $\mathrm{Supp}\, F \subset \mathrm{Supp} (\Delta _Y ^{\ge 1})$.
Thus, we can find an effective $\mathbb{R}$-divisor $B$ on $Y$
such that
\begin{itemize}
\item $B \ge \epsilon F$,
\item $-(K_Y + \Delta _Y) - B$ is still ample, and
\item $\mathrm{Supp}\, B = \mathrm{Supp} (\Delta _Y ^{\ge 1})$.
\end{itemize}
Then there exists an effective $\mathbb{R}$-divisor $A$ on $Y$
such that
\begin{itemize}
\item $A \sim_{\mathbb{R}} -(K_Y + \Delta _Y)- B$, and
\item $(Y, \Delta _Y ^{\wedge 1} + 2A)$ is dlt (cf.\ \cite[Lemma 2.8]{NT} in positive characteristic case).
\end{itemize}
In particular, it follows that
\begin{itemize}
\item $(Y, \Delta _Y ^{\wedge 1} + A)$ is dlt, and
\item $(\Delta _Y + A) ^{\ge 1} = \Delta _Y ^{\ge 1}$.
\end{itemize}
Set $\Omega _Y := \Delta _Y + A + B$.
Then (2) holds. Since
\begin{itemize}
\item $\Omega _Y ^{\wedge 1} = (\Delta _Y + A) ^{\wedge 1} = \Delta _Y ^{\wedge 1} + A$,
\end{itemize}
(1) also holds. The conditions (3) and (4) hold by the choice of $A$ and $B$.
\end{proof}
\begin{lem}\label{lem:Qnefbig}
Let $(X, \Delta)$ be a $\mathbb{Q}$-factorial projective pair over $k$ with the condition $(\star)$.
Assume that $-(K_X + \Delta)$ is nef and big.
Then there exists a dlt blow-up $g: (Y, \Delta _Y) \to (X, \Delta)$
such that the dual complex $\mathcal{D}(\Delta _Y ^{\ge 1})$ is contractible,
where we define $\Delta _Y$ by $K_Y + \Delta _{Y} = g^* (K_X + \Delta)$.
\end{lem}
\begin{proof}
Since $-(K_X + \Delta)$ is nef and big,
there exists an effective $\mathbb{R}$-divisor $E$ such that
$-(K_X + \Delta) - \epsilon E$ is ample for any real number $\epsilon$
satisfying $0<\epsilon \le 1$.
Let $h: W \to X$ be a log resolution of $(X, \Delta + E)$.
For sufficiently small $\epsilon >0$, we can assume that
\begin{itemize}
\item[(1)] For any $h$-exceptional prime divisor $F$ with $a_F (X, \Delta) > 0$,
it still holds that $a_F(X, \Delta + \epsilon E) > 0$.
\end{itemize}
Let $g: Y \to X$ be a dlt blow-up of $(X, \Delta + \epsilon E)$
such that the birational map $Y \dasharrow W$ does not contract any divisor on $Y$ (Remark \ref{rmk:dltbup} (2)).
By Proposition \ref{prop:ample},
the dual complex $\mathcal{D}((\Delta _Y + \epsilon g^* E) ^{\ge 1})$ is contractible.
Hence it is sufficient to check the following three conditions (the conditions (2) and (3) below mean that $g$ is also a dlt blow-up of $(X, \Delta)$):
\begin{itemize}
\item[(2)] $a_F(X, \Delta) \le 0$ for any $g$-exceptional divisor $F$.
\item[(3)] $(Y, \Delta _Y ^{\wedge 1})$ is dlt.
\item[(4)] $\mathrm{Supp} \left( (\Delta _Y + \epsilon g^* E) ^{\ge 1} \right) = \mathrm{Supp} (\Delta _Y ^{\ge 1})$.
\end{itemize}
Since any $g$-exceptional divisor is $h$-exceptional, the conditions (2) and (4) follow from (1).
By (2), it follows that $\Delta _Y \ge 0$ and hence (3) is obvious.
\end{proof}
\begin{thm}\label{thm:nefbig}
Let $(X, \Delta)$ be a projective pair over $k$ with the condition $(\star)$.
Assume that $-(K_X + \Delta)$ is nef and big.
Then there exists a dlt blow-up $g: (Y, \Delta _Y) \to (X, \Delta)$
such that the dual complex $\mathcal{D}(\Delta _Y ^{\ge 1})$ is contractible,
where we define $\Delta _Y$ by $K_Y + \Delta _{Y} = g^* (K_X + \Delta)$.
Further, we can take such $g$ with the additional condition (3) in Theorem \ref{thm:dltmodif}.
\end{thm}
\begin{proof}
Let $h: (X', \Delta _{X'}) \to (X, \Delta)$ be a dlt blow-up of $(X, \Delta)$
with the condition (3) in Theorem \ref{thm:dltmodif}.
Then, $X'$ is $\mathbb{Q}$-factorial and $-(K_{X'} + \Delta _{X'})$ is still nef and big.
By Lemma \ref{lem:Qnefbig},
there exists a dlt blow-up $(X'', \Delta _{X''}) \to (X', \Delta _{X'})$ such that
the dual complex $\mathcal{D}(\Delta _{X''} ^{\ge 1})$ is contractible.
Since the composition $X'' \to X$ is again a dlt blow-up of $(X, \Delta)$
with condition (3) in Theorem \ref{thm:dltmodif}, we complete the proof.
\end{proof}
In characteristic zero, we can conclude the following stronger statement.
\begin{thm}\label{thm:nefbig0}
Let $(X, \Delta)$ be a projective pair over $k$ with the condition (i) in $(\star)$.
Assume that $-(K_X + \Delta)$ is nef and big.
Then for any dlt blow-up $g: (Y, \Delta _Y) \to (X, \Delta)$,
the dual complex $\mathcal{D}(\Delta _Y ^{\ge 1})$ is contractible,
where we define $\Delta _Y$ by $K_Y + \Delta _{Y} = g^* (K_X + \Delta)$.
\end{thm}
\begin{proof}
The assertion follows from Theorem \ref{thm:nefbig} and Proposition \ref{prop:indep}.
\end{proof}
\section{Vanishing theorem on log canonical Fano varieties}
In this section, we prove a vanishing theorem of Ambro--Fujino type for the Witt vector cohomology of Fano varieties
(Theorem \ref{thm:WAFV}).
\subsection{Non-klt locus of log Fano three-folds}
In this subsection,
we prove Lemma \ref{lem:rationality}, Proposition \ref{prop:normality}, and Proposition \ref{prop:tree}
which will be used in the proof of Theorem \ref{thm:WAFV}.
\begin{lem}\label{lem:rationality}
Let $k$ be an algebraically closed field of characteristic $p > 5$.
Let $(X, \Delta)$ be a three-dimensional projective log canonical pair over $k$ with $-(K_X + \Delta)$ ample.
Suppose that $\mathrm{Nklt}(X, \Delta)$ is of pure dimension one.
Then each irreducible component of $\mathrm{Nklt}(X, \Delta)$ is a rational curve.
\end{lem}
\begin{proof}
Let $C_0$ be an irreducible component of $\mathrm{Nklt}(X, \Delta)$.
Let $f: Y \to X$ be a dlt blow-up of $(X, \Delta)$ (Theorem \ref{thm:dltmodif}).
We define $\Delta _Y$ by $K_Y + \Delta _Y = f^* (K_X + \Delta)$.
Let $E \subset \mathrm{Supp}\, \Delta _Y ^{=1}$ be a component which dominates $C_0$.
We define $\Delta _E$ by $K_{E} + \Delta _E = (K_Y + \Delta _Y)|_{E}$.
Then $K_E + \Delta _E$ is not nef since $-(K_X + \Delta)$ is ample.
By the cone theorem for surfaces (cf.\ \cite[Proposition 3.15]{Tan14}),
there exists a $(K_E + \Delta _E)$-negative rational curve $B$.
Since $(K_E + \Delta _E) \cdot B < 0$ and
$-(K_E + \Delta _E)$ is the pullback of a divisor on $C_0$,
it follows that $B$ dominates $C_0$, which proves the rationality of $C_0$.
\end{proof}
We prove two properties (Proposition \ref{prop:G1}, Proposition \ref{prop:G2}) on the dual complex
$\mathcal{D}(\Delta _{Y} ^{\ge 1})$ of a dlt blow-up.
They will be used in the proof of Proposition \ref{prop:normality}.
First, we introduce some notations.
When we write that $G$ is a $\Delta$-complex, we regard $G$ as the set of its simplices.
Hence, when we write $S \in G$, $S$ is a simplex of $G$.
\begin{defi}
\begin{enumerate}
\item For a $\Delta$-complex $G$, we denote by $|G|$ the topological space of $G$. Hence, $|G| = \bigcup _{S \in G} S$ as a set.
\item For a $\Delta$-complex $G$ and its subset $G' \subset G$, we define the \textit{star} $\mathrm{st}(G', G)$ by
\[
\mathrm{st}(G', G) = \{ S \in G \mid \text{$S \cap S' \not = \emptyset$ for some $S' \in G'$} \}.
\]
\item For a $\Delta$-complex $G$, and $S, S' \in G$, we write $S < S'$ when $S$ is a face of $S'$.
\end{enumerate}
\end{defi}
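To fix the conventions, here is an editorial toy computation: let $G$ be the boundary of a triangle, with vertices $v_1, v_2, v_3$ and edges $e_{12}, e_{13}, e_{23}$, and let $G' = \{ v_1 \}$. Then
\[
\mathrm{st}(G', G) = \{ v_1, e_{12}, e_{13} \},
\]
since $e_{23} \cap v_1 = \emptyset$, while $v_1 < e_{12}$ and $v_1 < e_{13}$ in the notation of (3).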
\begin{prop}\label{prop:G1}
Let $k$ be an algebraically closed field of characteristic $p > 5$.
Let $(X, \Delta)$ be a three-dimensional projective log canonical pair over $k$ with $-(K_X + \Delta)$ nef and big.
Suppose that $C := \mathrm{Nklt}(X, \Delta)$ is of pure dimension one.
Let $f: (Y, \Delta_Y) \to (X, \Delta)$ be a dlt blow-up such that
$\mathrm{Supp}\, \Delta _Y ^{\ge 1} = f^{-1}(C)$ and that $G := \mathcal{D}(\Delta _Y ^{\ge 1})$ is contractible.
Let $C_0$ be an irreducible component of $C = \mathrm{Nklt}(X, \Delta)$.
Let $G'$ be the subcomplex of $G$ which consists of the strata of $\Delta _Y ^{\ge 1}$ which dominate $C_0$.
Let $U := |G| \setminus |G'|$ be the open subset of $|G|$, and
let $V \subset |G|$ be a sufficiently small open neighborhood of $|G'|$.
Then the following hold.
\begin{itemize}
\item[(1)] $G'$ is a connected subcomplex of $G$ of dimension at most one.
\item[(2)] For each connected component $U'$ of $U$, it follows that $V \cap U'$ is also connected.
\end{itemize}
\end{prop}
\begin{proof}
We prove (1). Only the connectedness of $G'$ is non-trivial.
Let $q \in C_0$ be a general closed point. Since $q$ is general, we may assume that
\begin{itemize}
\item for a stratum $S$ of $\Delta _Y ^{\ge 1}$, $q \in f(S)$ holds if and only if $f(S) = C_0$ holds.
\end{itemize}
Then the connectedness of the fiber $f^{-1}(q)$ implies the connectedness of $G'$.
For (2), consider the Mayer--Vietoris sequence in reduced homology
\[
\widetilde{H}_1(|G| , \mathbb{Q}) \to \widetilde{H}_0(U \cap V, \mathbb{Q}) \to
\widetilde{H}_0(U, \mathbb{Q}) \oplus \widetilde{H}_0(V, \mathbb{Q}) \to
\widetilde{H}_0(|G| , \mathbb{Q}) \to 0.
\]
Here, $\widetilde{H}_1 (|G|, \mathbb{Q}) = 0$ holds because $|G|$ is contractible, and
$\widetilde{H}_0(|G| , \mathbb{Q}) = \widetilde{H}_0(V, \mathbb{Q}) = 0$ holds because $|G|$ and $V$ are connected.
Therefore, it follows that $\widetilde{H}_0(U \cap V, \mathbb{Q}) \cong \widetilde{H}_0(U, \mathbb{Q})$,
which proves the claim.
\end{proof}
\begin{prop}\label{prop:G2}
Let $(X, \Delta), (Y, \Delta _Y), C, C_0, G, G', U, V$ be as in Proposition \ref{prop:G1}.
Let $U'$ be a connected component of $U$.
Consider a pair $(B, S)$ with the following conditions:
\begin{itemize}
\item[(a)] $B \in \mathrm{st}(G', G) \setminus G'$ is an edge, and $S \in G'$ is a vertex.
\item[(b)] $S < B$ and $B \cap U' \not = \emptyset$.
\end{itemize}
Then for any two pairs $(B, S)$ and $(B', S')$ satisfying the conditions (a) and (b) above,
there exists a sequence of pairs
\[
(B, S) = (B_0, S_0), (B_1, S_1), \ldots , (B_k, S_k)=(B', S')
\]
with $k \ge 0$ and the following conditions:
\begin{itemize}
\item[(c)] Each $(B_j, S_j)$ satisfies the conditions (a) and (b).
\item[(d)] For each $0 \le i \le k-1$,
the pairs $(B_i, S_i), (B_{i+1}, S_{i+1})$ satisfy one of the following conditions:
\begin{itemize}
\item[(d-1)] $S_i = S_{i+1}$ holds, and $B_i, B_{i+1} < F$ for some $2$-simplex $F \in G$.
\item[(d-2)] $S_i \not = S_{i+1}$ holds, and $S_i, S_{i+1} < E$ for some edge $E \in G'$.
Further, $B_i, B_{i+1}, E < F$ for some $2$-simplex $F \in G$.
\end{itemize}
\end{itemize}
\end{prop}
\vspace{3mm}
\begin{center}
\begin{footnotesize}
\input{graph/gguv.tex}
\end{footnotesize}
\end{center}
\vspace{5mm}
\begin{proof}
Note that
$U' \cap V$ is a connected component of $U \cap V$ by Proposition \ref{prop:G1} and
that $U \cap V \subset \bigcup _{A \in \, \mathrm{st}\, (G', G) \setminus G'} \mathrm{int}\, A$.
We also note that giving a pair $(B, S)$ with conditions (a) and (b) is equivalent to
giving $B ^{\circ}$ in the following set.
\begin{align*}
\Gamma :=
\left\{ B^{\circ} \ \middle |
\begin{array}{l}
\text{$B^{\circ}$ is a connected component of $B \cap (U' \cap V)$} \\
\text{for some edge $B \in \mathrm{st}(G', G) \setminus G'$}
\end{array}\right \}
\end{align*}
Indeed, for a pair $(B, S)$ with conditions (a) and (b), there exists a unique connected component of $B \cap (U' \cap V)$ which is around $S$.
Conversely, for $B^{\circ}$ in the set above, the corresponding $B$ and $S$ are uniquely determined.
Let $B^{\circ}$ (resp.\ ${B'} ^{\circ}$) be the connected component of $B \cap (U' \cap V)$
(resp.\ $B' \cap (U' \cap V)$) which is around $S$ (resp.\ $S'$).
Since $B^{\circ}, {B'} ^{\circ} \subset U' \cap V$ and $U' \cap V$ is connected,
we can take the following sequence
\[
B^{(0)} := B^{\circ}, F^{(0)}, B^{(1)}, F^{(1)}, \ldots , B^{(k-1)}, F^{(k-1)}, B^{(k)}:= {B'} ^{\circ}
\]
with the following conditions:
\begin{itemize}
\item Each $B^{(i)}$ is a connected component of $B_i \cap (U' \cap V)$ for some edge $B_i \in \mathrm{st}(G', G) \setminus G'$
(equivalently $B^{(i)} \in \Gamma$).
\item Each $F^{(i)}$ is a connected component of $F_i \cap (U' \cap V)$ for some $2$-simplex $F_i \in \mathrm{st}(G', G) \setminus G'$.
\item $B^{(i)}, B^{(i+1)} \subset F^{(i)}$
\end{itemize}
Possibly passing to a subsequence, we may also assume that
\begin{itemize}
\item $B^{(i)} \not = B^{(i+1)}$ for each $0 \le i \le k-1$.
\end{itemize}
We denote by $S_i$ the unique vertex of $B_i$ which is around $B^{(i)}$. Obviously, $S_i$ is a vertex of $G'$.
For each $i$, there are two possibilities.
\begin{itemize}
\item[(e-1)] The connected component of $F_i \cap |G'|$ which is around $F^{(i)}$ is zero dimensional.
\item[(e-2)] The connected component of $F_i \cap |G'|$ which is around $F^{(i)}$ is one dimensional.
\end{itemize}
\vspace{3mm}
\begin{center}
\begin{footnotesize}
\input{graph/e1e2.tex}
\end{footnotesize}
\end{center}
\vspace{5mm}
Suppose $i$ satisfies (e-1).
Then $S_i = S_{i+1}$ and $S_i$ is the common vertex of $B_i$ and $B_{i+1}$.
Therefore $(B_i, S_i)$ and $(B_{i+1}, S_{i+1})$ satisfy (d-1).
Suppose $i$ satisfies (e-2).
Then there exists an edge $E_i \in G'$ such that $E_i, B_i, B_{i+1} < F_i$.
Then $S_i$ (resp.\ $S_{i+1}$) is the common vertex of $E_i$ and $B_i$ (resp.\ $E_i$ and $B_{i+1}$).
Then $(B_i, S_i)$ and $(B_{i+1}, S_{i+1})$ satisfy (d-2).
\end{proof}
\begin{prop}\label{prop:normality}
Let $k$ be an algebraically closed field of characteristic $p > 5$.
Let $(X, \Delta)$ be a three-dimensional projective log pair over $k$ with $-(K_X + \Delta)$ nef and big.
Suppose that $\mathrm{Nklt}(X, \Delta)$ is of pure dimension one.
Then for each irreducible component $C_0$ of $\mathrm{Nklt}(X, \Delta)$,
its normalization $\overline{C_0} \to C_0$ is a universal homeomorphism.
\end{prop}
\begin{proof}
Set $C := \mathrm{Nklt}(X, \Delta)$.
By contradiction, suppose that $p \in C_0$ is a singular point such that
the normalization $\overline{C_0} \to C_0$ is not a universal homeomorphism around $p$.
Let $p^{(1)}, \ldots , p^{(m)}$ be the points of the inverse image of $p$.
By the assumption, $m \ge 2$.
Let $f: (Y, \Delta_Y) \to (X, \Delta)$ be a dlt blow-up such that
$\mathrm{Supp}\, \Delta _Y ^{\ge 1} = f^{-1}(C)$ (Theorem \ref{thm:dltmodif}).
Let $G = \mathcal{D}(\Delta _Y ^{\ge 1})$.
We may assume that $G$ is contractible by Theorem \ref{thm:nefbig}.
We define $\{ S_i \}_{i \in I}$ and $\{ T_j \}_{j \in J}$ as follows:
\begin{itemize}
\item Let $\{ S_i \}_{i \in I}$ be the set of the irreducible components of $\Delta _Y ^{\ge 1}$ which dominate $C_0$.
\item Let $\{ T_j \}_{j \in J}$ be the set of the irreducible components $T_j$ of $\Delta _Y ^{\ge 1}$ which do not dominate $C_0$ but $p \in f(T_j)$.
\end{itemize}
For each $S \in \{S_i \}_{i \in I}$, since $S$ is normal and dominates $C_0$,
$S \to C_0$ factors through $S \to \overline{C_0} \to C_0$.
We denote by $S_{p^{(k)}}$ the fiber of $S \to \overline{C_0}$ over $p^{(k)}$.
Obviously, we have
\begin{itemize}
\item[(1)] $S_{p^{(k)}} \cap S_{p^{(j)}} = \emptyset$ holds for each $k \not = j$.
\end{itemize}
On the other hand, $f^{-1}(p)$ is connected.
Since $f^{-1}(p) \cap T \not = \emptyset$ for each $T \in \{ T_j \}_{j \in J}$,
\[
\left( \bigcup_{i \in I,\ k} S_{i, p^{(k)}} \right) \cup \left( \bigcup _{j \in J} T_j \right)
=
f^{-1}(p) \cup \bigcup _{j \in J} T_j
\]
is also connected.
Hence, by (1), possibly changing the indices of $p^{(1)}, \ldots , p^{(m)}$, we can conclude the following:
\begin{itemize}
\item[(2)] There exist $S, S' \in \{ S_i \} _{i \in I}$ and a sequence $T_1, \ldots , T_{\ell} \in \{ T_j \}_{j \in J}$
with $\ell \ge 0$
such that
\[
S_{p^{(1)}} \cap T_1 \not = \emptyset,\ T_1 \cap T_2 \not = \emptyset, \ \ldots ,\ T_{\ell - 1} \cap T_{\ell} \not = \emptyset,\
T_{\ell} \cap S'_{p^{(2)}} \not = \emptyset.
\]
\end{itemize}
We shall rephrase (2) into a combinatorial condition (STEP 1) and derive a contradiction (STEPS 2 and 3).
For STEP 1, we introduce some notation, which is the same as in Proposition \ref{prop:G1}.
Let $G'$ be the subcomplex of $G$ which consists of the strata of $\Delta _Y ^{\ge 1}$ which dominate $C_0$.
Let $U := |G| \setminus |G'|$ be the open subset of $|G|$, and
let $V$ be a sufficiently small open neighborhood of $|G'|$.
\vspace{2mm}
\noindent
\textbf{STEP 1. }\
In this step, we prove the following statement from the condition (2).
\begin{itemize}
\item[(3)] There exist a connected component $U'$ of $U$,
and pairs $(B, S)$ and $(B', S')$ with the conditions (a) and (b) in Proposition \ref{prop:G2}
such that $S_{p^{(1)}} \cap B \not= \emptyset$ and $S' _{p^{(2)}} \cap B' \not = \emptyset$ hold.
Here we allow $B = B'$ and $S=S'$.
\end{itemize}
Suppose $\ell = 0$ in (2), that is, $S_{p^{(1)}} \cap S' _{p^{(2)}} \not = \emptyset$.
Then, we can take a one dimensional stratum $B \subset S \cap S'$ such that
$S_{p^{(1)}} \cap S' _{p^{(2)}} \cap B \not= \emptyset$ holds.
If $B$ dominates $C_0$, then $B \to C_0$ also factors through $\overline{C_0}$ since $B$ is normal.
We define $B_{p^{(1)}}$ and $B_{p^{(2)}}$ as the fibers of $B \to \overline{C_0}$ over $p^{(1)}$ and $p^{(2)}$.
Then $S_{p^{(1)}} \cap B = B_{p^{(1)}}$ and
$S' _{p^{(2)}} \cap B = B_{p^{(2)}}$ hold, and they contradict the fact that $B_{p^{(1)}} \cap B_{p^{(2)}} = \emptyset$.
Therefore, $B$ does not dominate $C_0$, that is, $B \not \in G'$.
Hence the condition (3) holds when we set $B' = B$.
Suppose $\ell \ge 1$ in (2). Then
we can take a one dimensional stratum $B \subset S \cap T_1$ (resp.\ $B' \subset T_{\ell} \cap S'$) such that
$S_{p^{(1)}} \cap B \not = \emptyset$ (resp.\ $S' _{p^{(2)}} \cap B' \not = \emptyset$).
Since $T_1$ and $T_{\ell}$ do not dominate $C_0$, neither do $B$ and $B'$, hence $B, B' \not \in G'$.
Moreover, since $T_1 \cup \cdots \cup T_{\ell}$ is connected, $B$ and $B'$ intersect the same connected component $U'$ of $U$.
We have proved (3). Here, we note that the condition $S_{p^{(1)}} \cap B \not= \emptyset$
(resp.\ $S' _{p^{(2)}} \cap B' \not = \emptyset$) in (3) actually implies that
$B \subset S_{p^{(1)}}$ (resp.\ $B' \subset S' _{p^{(2)}}$).
Indeed, this follows from the facts that $B \subset S$ (resp.\ $B' \subset S'$) and
that $S$ (resp.\ $S'$) dominates $C_0$, but $B$ (resp.\ $B'$) does not dominate $C_0$.
Therefore we conclude the following.
\begin{itemize}
\item[(4)] There exist a connected component $U'$ of $U$,
and pairs $(B, S)$ and $(B', S')$ with the conditions (a) and (b) in Proposition \ref{prop:G2} such that
$B \subset S_{p^{(1)}}$ and $B' \subset S' _{p^{(2)}}$ hold.
Here we allow $B = B'$ and $S=S'$.
\end{itemize}
\vspace{2mm}
\noindent
\textbf{STEP 2.}\
Let $(B, S), (B', S')$ be pairs which satisfy (a) and (b) in Proposition \ref{prop:G2}.
Suppose that one of the following conditions holds:
\begin{itemize}
\item[(d-1)] $S = S'$ holds, and $B, B' < F$ for some $2$-simplex $F \in G$.
\item[(d-2)] $S \not = S'$ holds, and $S, S' < E$ for some edge $E \in G'$.
Further, $B, B', E < F$ for some $2$-simplex $F \in G$.
\end{itemize}
In this step, we prove that the condition $B \subset S_{p^{(1)}}$ implies $B' \subset S' _{p^{(1)}}$.
\begin{center}
\begin{footnotesize}
\input{graph/d1d2.tex}
\end{footnotesize}
\end{center}
\vspace{5mm}
Suppose (d-1). $S=S'$ in this case. Since $B, B' < F$ for some $2$-simplex $F \in G$,
it follows that $B \cap B' \not= \emptyset$.
Since $B \subset S_{p^{(1)}}$, it holds that $B' \cap S_{p^{(1)}} \not = \emptyset$.
Then $B' \subset S_{p^{(1)}}$ holds because of
the facts that $B' \subset S$ and
that $S$ dominates $C_0$, but $B'$ does not dominate $C_0$.
Suppose (d-2).
Since $B, B', E < F$ for some $2$-simplex $F \in G$,
$B \cap B' \cap E \not = \emptyset$ holds.
Since $B \subset S_{p^{(1)}}$,
it follows that
\[
S_{p^{(1)}} \cap B' \cap E \not = \emptyset.
\]
Here we note that $E$ dominates $C_0$ and hence $E \to C_0$ factors through $\overline{C_0}$ because $E$ is normal.
We denote by $E_{p^{(1)}}$ the fiber of $E \to \overline{C_0}$ over $p^{(1)}$.
Since $E \subset S, S'$, it holds that
\[
S_{p^{(1)}} \cap E = E_{p^{(1)}} \subset S' _{p^{(1)}}.
\]
Hence we obtain that $B' \cap S' _{p^{(1)}} \not = \emptyset$.
It implies that $B' \subset S' _{p^{(1)}}$ for the same reason as before.
\vspace{2mm}
\noindent
\textbf{STEP 3.}\ In this step, we assume the condition (4) in STEP 1 and derive a contradiction.
Let $(B, S), (B', S')$ be the pairs in (4) in STEP 1.
Then $(B, S), (B', S')$ satisfy the conditions (a) and (b) in Proposition \ref{prop:G2}.
Hence by Proposition \ref{prop:G2},
we can take a sequence $(B, S) = (B_0, S_0), (B_1, S_1), \ldots , (B_k, S_k)=(B', S')$ as in Proposition \ref{prop:G2}.
By STEP 2 and the assumption that $B \subset S_{p^{(1)}}$,
it follows by induction that $B_i \subset S_{i, p^{(1)}}$ for each $0 \le i \le k$.
Therefore $B' \subset S'_{p^{(1)}}$ holds and it contradicts the assumption that $B' \subset S' _{p^{(2)}}$
and the fact that $S'_{p^{(1)}} \cap S'_{p^{(2)}} = \emptyset$.
\end{proof}
In order to state Proposition \ref{prop:tree}, we introduce some notation.
\begin{defi}\label{def:tree}
Let $C$ be a scheme of finite type of pure dimension one over an algebraically closed field $k$.
Let $C = C_1 \cup C_2 \cup \cdots \cup C_{\ell}$ be the irreducible decomposition.
We define whether $C$ \textit{forms a tree} or not
by induction on the number $\ell$ of the irreducible components.
Any irreducible curve \textit{forms a tree}.
We say that the union of irreducible curves $C = C_1 \cup C_2 \cup \cdots \cup C_{\ell}$ \textit{forms a tree}
if there exists $i$ such that $C' = \bigcup _{j \not = i} C_j$ forms a tree and
$\# (C' \cap C_i) = 1$.
\end{defi}
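The inductive condition above is easy to check mechanically. The following sketch is illustrative only and not part of the text: we model each irreducible component by the set of intersection points it shares with the other components (a simplification of the scheme-theoretic setting), and the function name \texttt{forms\_tree} is our own.

```python
def forms_tree(components):
    """Check the tree condition of the definition above.

    components: list of frozensets; each entry models an irreducible
    component by the labels of the intersection points lying on it.
    An irreducible curve (a single component) forms a tree.
    """
    if len(components) == 1:
        return True
    for i, ci in enumerate(components):
        rest = [c for j, c in enumerate(components) if j != i]
        # points of C_i that also lie on C' = union of the other components
        shared = ci & frozenset().union(*rest)
        if len(shared) == 1 and forms_tree(rest):
            return True
    return False
```

For instance, a chain of three curves meeting pairwise in distinct single points forms a tree, while three curves meeting in a triangle do not.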
Next, we prepare some notation from combinatorics.
\begin{defi}
For a sequence $(a_1, \ldots , a_n)$ of length $n$, we define the operations (a), (b) as follows.
Here we define $a_{n+1} := a_1$ and $a_{n+2} := a_2$ by convention.
\begin{itemize}
\item[(a)] If $a_i = a_{i+1}$ for some $1 \le i \le n$, we remove $a_i$ and
get a new sequence $(a_1, \ldots , a_{i-1}, a_{i+1}, \ldots , a_n)$ of length $n-1$.
\item[(b)] If $a_i = a_{i+2}$ for some $1 \le i \le n$, then we remove $a_i$ and $a_{i+1}$, and
get a new sequence $(a_1, \ldots , a_{i-1}, a_i = a_{i+2} ,a_{i+3}, \ldots , a_n)$ of length $n-2$.
\end{itemize}
For a sequence $(a_1, \ldots , a_n)$ of length $n$, applying the operation (a) (resp.\ the operations (a) and (b)) repeatedly,
we get a sequence $(b_1, \ldots , b_m)$ with the condition that $b_i \not = b_{i+1}$ for each $i$
(resp. the condition that $b_i \not = b_{i+1}$ and $b_i \not = b_{i+2}$).
We call such $(b_1, \ldots , b_m)$ \textit{the (a)-reduction} (resp. \textit{(a,b)-reduction}) of
$(a_1, \ldots , a_n)$.
We say that a sequence $(a_1, \ldots, a_n)$ has the \textit{trivial (a)-reduction}
(resp.\ \textit{trivial (a,b)-reduction}) if the (a)-reduction (resp.\ the (a,b)-reduction) has length $0$.
We note here that the (a)-reduction of a sequence of length $1$ has length $0$ by definition.
\end{defi}
\begin{defi}\label{defi:curve_seq}
Let $C = C_1 \cup C_2 \cup \cdots \cup C_{\ell}$ be in Definition \ref{def:tree}.
We say that a sequence $(a_1, \ldots, a_n)$ is a \textit{cycle sequence} if the following conditions (1) and (2) hold.
\begin{itemize}
\item[(1)] Each $a_i$ is an irreducible component of $C$ or a closed point on $C$.
\item[(2)] The (a)-reduction $(b_1, \ldots , b_m)$ of $(a_1, \ldots , a_n)$ satisfies the following conditions:
\begin{itemize}
\item[(2-1)] $\dim b_i \not = \dim b_{i+1}$ for each $1 \le i \le m$. We set $b_{m+1} = b_1$ by convention.
\item[(2-2)] If $\dim b_i = 0$, then $b_i \in b_{i-1} \cap b_{i+1}$. We set $b_0 = b_m$ by convention.
\end{itemize}
\end{itemize}
Moreover, we say that a cycle sequence $(a_1, \ldots , a_n)$ is \textit{trivial}
if it has the trivial (a,b)-reduction.
\end{defi}
\begin{ex}
Consider curves $C=C_1 \cup \cdots \cup C_5$ in the following figure.
\vspace{3mm}
\begin{center}
\begin{footnotesize}
\input{graph/exreduction.tex}
\end{footnotesize}
\end{center}
\vspace{5mm}
\noindent
Set sequences $Q_1, Q_2$ as follows:
\begin{align*}
Q_1&=(P_1, C_1, P_2, C_2, P_3, C_3, P_3, C_4, P_4, C_5, P_5, C_3, P_3, C_2, P_2, C_1), \\
Q_2&=(P_1, C_1, P_2, C_2, P_3, C_3, P_3, C_4, P_4, C_5, P_5, C_5, P_4, C_4, P_3, C_2, P_2, C_1).
\end{align*}
Their (a)-reductions are the sequences themselves.
These two sequences satisfy the conditions (2-1) and (2-2), hence they are cycle sequences.
$Q_1$ is not trivial, but $Q_2$ is trivial.
Indeed, the (a,b)-reduction of $Q_1$ is $(C_3, P_3, C_4, P_4, C_5, P_5)$.
\end{ex}
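The reductions in this example can be checked mechanically. The sketch below is illustrative only and the function names are ours; it applies the first available operation at each step, with cyclic indexing following the convention $a_{n+1}=a_1$, $a_{n+2}=a_2$. Since the definition does not prescribe an order of application, the output is well defined only up to such choices; for $Q_1$ this sketch reproduces the stated (a,b)-reduction up to a cyclic rotation, and for $Q_2$ it yields the empty sequence.

```python
def step(seq):
    """Apply the first available operation (a) or (b) to a cyclic sequence.

    Returns the shortened list, or None if neither operation applies.
    """
    n = len(seq)
    if n == 0:
        return None
    # operation (a): if a_i = a_{i+1}, remove a_i
    for i in range(n):
        if seq[i] == seq[(i + 1) % n]:
            return seq[:i] + seq[i + 1:]
    # operation (b): if a_i = a_{i+2}, remove a_i and a_{i+1}
    for i in range(n):
        if seq[i] == seq[(i + 2) % n]:
            kill = {i, (i + 1) % n}
            return [x for j, x in enumerate(seq) if j not in kill]
    return None

def reduce_ab(seq):
    """(a,b)-reduction: apply operations (a) and (b) while possible."""
    seq = list(seq)
    while (nxt := step(seq)) is not None:
        seq = nxt
    return seq
```

Note that a sequence of length one reduces to the empty sequence under this convention, in agreement with the remark at the end of the definition.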
A typical example of cycle sequences we will see in the proof of Proposition \ref{prop:tree} is as follows.
\begin{ex}\label{ex:edge_loop}
Let $k$ be an algebraically closed field of characteristic $p > 5$.
Let $(X, \Delta)$ be a three-dimensional projective log pair over $k$ with $-(K_X + \Delta)$ nef and big.
Suppose that $\mathrm{Nklt}(X, \Delta)$ is of pure dimension one.
Let $f: (Y, \Delta_Y) \to (X, \Delta)$ be a dlt blow-up,
and let $G := \mathcal{D} (\Delta _Y ^{\ge 1})$ be the dual complex of $\Delta _Y ^{\ge 1}$.
Then $G$ is a regular $\Delta$-complex by Proposition \ref{prop:regular}.
For an edge $C$ in $G$, its two vertices $S$ and $S'$ are distinct because $G$ is regular.
We denote by $C(S,S')$ the oriented $1$-cell corresponding to $C$ with initial point $S$ and final point $S'$.
When we write
\[
P: S_1 \overset{C_1}{\longrightarrow} S_2 \overset{C_2}{\longrightarrow} \cdots
\overset{C_{n-1}}{\longrightarrow} S_n \overset{C_n}{\longrightarrow} S_{n+1},
\]
we assume the following condition.
\begin{itemize}
\item For each $1 \le i \le n$, $S_i \not = S_{i+1}$ holds and $S_i$ and $S_{i+1}$ are the two vertices of $C_i$.
\end{itemize}
We denote by $P$ the edge path obtained by joining the oriented $1$-cell $C_1(S_1, S_2), \ldots, C_n(S_n, S_{n+1})$.
The edge path $P$ is called an \textit{edge loop} when $S_1 = S_{n+1}$.
Let $P$ as above be an edge loop in $G$.
Then $(f(S_1), f(C_1), \ldots , f(S_n), f(C_n))$ is a cycle sequence because $f(S_i) \supset f(C_i) \subset f(S_{i+1})$ holds for each $i$, where $S_{n+1} = S_1$.
We say that the sequence $(f(S_1), f(C_1), \ldots , f(S_n), f(C_n))$ is
the \textit{image} of the edge loop $P$ for simplicity.
\end{ex}
\begin{lem}\label{lem:cycle_seq}
Let $C = C_1 \cup C_2 \cup \cdots \cup C_{\ell}$ be in Definition \ref{def:tree}.
Suppose that $C$ is connected but does not form a tree.
Then there exists a non-trivial cycle sequence $(a_1, \ldots , a_n)$ such that $a_i$'s are distinct.
\end{lem}
\begin{proof}
Since $C$ does not form a tree, there exist irreducible components $B_1, \ldots, B_k$ of $C$ with the condition
\begin{itemize}
\item[(3)] $\# \left( B_i \cap (\bigcup _{j \not = i} B_j) \right) \ge 2$ for each $1 \le i \le k$.
\end{itemize}
We set $b_1 = B_1$ and take a point $b_2$ in $B_1 \cap (\bigcup _{j \not = 1} B_j)$.
Inductively, we set $b_{2i+1}$ and $b_{2i+2}$ for $i \ge 1$ as follows:
\begin{itemize}
\item We take an arbitrary $B_m$ among $\{ B_1, \ldots , B_k \} \setminus \{ b_{2i-1} \}$
such that $b_{2i} \in b_{2i-1} \cap B_m$.
\item We take an arbitrary point $p$ in
$\left( B_{m} \cap (\bigcup _{j \not = m} B_j) \right) \setminus \{ b_{2i} \}$.
\item We set $b_{2i+1} = B_m$ and $b_{2i+2} = p$.
\end{itemize}
We can repeat this process by the condition (3).
Since $\{ B_1, \ldots , B_k \}$ is a finite set,
there exist $m_1, m_2$ with $m_2 \ge m_1 + 4$ such that
\begin{itemize}
\item $b_{m_1} = b_{m_2}$ but $b_{m_1}, \ldots , b_{m_2 -1}$ are distinct.
\end{itemize}
Then the sequence $(b_{m_1}, \ldots , b_{m_2 -1})$
satisfies the conditions (1) and (2) in Definition \ref{defi:curve_seq}.
Since $b_{m_1}, \ldots , b_{m_2 -1}$ are distinct, the sequence has a non-trivial (a,b)-reduction.
\end{proof}
\begin{prop}\label{prop:tree}
Let $k$ be an algebraically closed field of characteristic $p > 5$.
Let $(X, \Delta)$ be a three-dimensional projective log pair over $k$ with $-(K_X + \Delta)$ nef and big.
Suppose that $\mathrm{Nklt}(X, \Delta)$ is of pure dimension one.
Then $\mathrm{Nklt}(X, \Delta)$ forms a tree.
\end{prop}
\begin{proof}
Note that $C := \mathrm{Nklt}(X, \Delta)$ is connected by \cite[Theorem 1.2]{NT}.
Suppose that $C = \mathrm{Nklt}(X, \Delta)$ does not form a tree.
Let $C = C_1 \cup \cdots \cup C_{\ell}$ be the irreducible decomposition.
\vspace{2mm}
\noindent
\textbf{STEP 1. }\
Let $f: (Y, \Delta_Y) \to (X, \Delta)$ be a dlt blow-up such that $\mathrm{Supp}\, \Delta _Y ^{\ge 1} = f^{-1}(C)$.
Let $G := \mathcal{D} (\Delta _Y ^{\ge 1})$ be the dual complex.
We may assume that $G$ is contractible by Theorem \ref{thm:nefbig}.
The assertion in this step is that there exists an edge loop (see Example \ref{ex:edge_loop} for the notation)
\[
S_1 \overset{C_1}{\longrightarrow} S_2 \overset{C_2}{\longrightarrow} \cdots
\overset{C_{n-1}}{\longrightarrow} S_n \overset{C_n}{\longrightarrow} S_1
\]
in $G$ such that
\begin{itemize}
\item its image $(f(S_1), f(C_1), \ldots , f(S_n), f(C_n))$ is a non-trivial cycle sequence.
\end{itemize}
By Lemma \ref{lem:cycle_seq}, there exists a non-trivial cycle sequence $(a_1, \ldots , a_n)$ such that $a_i$'s are distinct.
Since $a_i$'s are distinct, the (a)-reduction of this sequence is itself.
Therefore, this sequence itself satisfies the conditions (2-1) and (2-2) in Definition \ref{defi:curve_seq}.
We may assume that $\dim a_1 = 0$.
Suppose that $i$ is odd.
\vspace{2mm}
\begin{center}
\begin{footnotesize}
\input{graph/oddi.tex}
\end{footnotesize}
\end{center}
\vspace{2mm}
\noindent
Then, we can take an edge path $P_i$:
\[
P_i: S^{(i)}_1 \overset{C^{(i)}_1}{\longrightarrow} S^{(i)}_2 \overset{C^{(i)}_2}{\longrightarrow} \cdots
\overset{C^{(i)}_{m_i -1}}{\longrightarrow} S^{(i)}_{m_i}
\]
in $G$ with the following conditions:
\begin{itemize}
\item[(4)] $f(S^{(i)}_1) = a_{i-1}$ and $f(S^{(i)} _{m_i}) = a_{i+1}$.
\item[(5)] $a_i \in f(S^{(i)}_j)$ but $f(S^{(i)}_j) \not = a_{i-1}, a_{i+1}$ for $2 \le j \le m_i -1$.
\end{itemize}
Such $P_i$ can be taken by the connectedness of
the subcomplex of $G$ which consists of the simplices corresponding to the strata $S$ of
$\Delta _Y ^{\ge 1}$ satisfying $a_i \in f(S)$.
Suppose that $i$ is even.
\vspace{2mm}
\begin{center}
\begin{footnotesize}
\input{graph/eveni.tex}
\end{footnotesize}
\end{center}
\vspace{3mm}
\noindent
Then, we can take an edge path $P_i$:
\[
P_i: S^{(i)}_1 \overset{C^{(i)}_1}{\longrightarrow} S^{(i)}_2 \overset{C^{(i)}_2}{\longrightarrow} \cdots
\overset{C^{(i)}_{m_i -1}}{\longrightarrow} S^{(i)}_{m_i}
\]
in $G$ with the following conditions:
\begin{itemize}
\item[(6)] $S_1 ^{(i)} = S_{m_{i-1}} ^{(i-1)}$ and
$S_{m_i} ^{(i)} = S_1 ^{(i+1)}$ (here, $S_{m_{i-1}} ^{(i-1)}$ and $S_1 ^{(i+1)}$ were already taken).
\item[(7)] $f(S^{(i)}_j) = a_i$ for $1 \le j \le m_i$ and
$f(C^{(i)}_j) = a_i$ for $1 \le j \le m_i - 1$.
\end{itemize}
Such $P_i$ can be taken by the connectedness of
the subcomplex of $G$ which consists of the simplices corresponding to the strata $S$ of
$\Delta _Y ^{\ge 1}$ satisfying $a_i = f(S)$ (Proposition \ref{prop:G1} (1)).
Connecting the edge paths $P_1, P_2, \ldots , P_n$, we get an edge loop $P$,
which is possibly not simple.
We prove that the image of $P$ is a non-trivial cycle sequence.
Note that the image of any edge loop in $G$ is a cycle sequence (Example \ref{ex:edge_loop}).
Therefore, it is sufficient to show that the (a,b)-reduction of the image of $P$ is non-trivial.
For an even number $i$, it follows that
\[
f(S^{(i)} _1) = f(C^{(i)} _1) = \cdots =
f(C^{(i)} _{m_i -1}) = f(S^{(i)} _{m_i}) = a_{i}
\]
by the condition (7).
Hence, applying the operation (a) to the image of $P$
\[
\left( \ldots, f(C_{m_{i-1} - 1}^{(i-1)}) , f(S_{m_{i-1}}^{(i-1)}) = f(S_1 ^{(i)}),
\ldots , f(S_{m_i} ^{(i)}) = f(S_{1} ^{(i+1)}), f(C_{1} ^{(i+1)} ) , \ldots \right),
\]
it can be reduced to
\[
\left( \ldots, f(C_{m_{i-1} - 1}^{(i-1)}) , f(S_{m_{i-1}}^{(i-1)}) = a_i = f(S_{1} ^{(i+1)}),
f(C_{1} ^{(i+1)} ) , \ldots \right).
\]
For an odd number $i$, we set
\[
b_1 ^{(i)} = f(S^{(i)} _1), \quad b_2^{(i)} = f(C^{(i)} _1), \quad \ldots , \quad
b_{2(m_i-1)}^{(i)} = f(C^{(i)} _ {m_i -1}), \quad b^{(i)} _{2m_i -1} = f(S^{(i)} _{m_i}).
\]
Then, they satisfy the following conditions:
\begin{itemize}
\item $b_1 ^{(i)} = a_{i-1}$ and $b^{(i)} _{2m_i -1} = a_{i+1}$ by the condition (4).
\item $a_i \in b_j ^{(i)}$ but $b_j ^{(i)} \not = a_{i-1}, a_{i+1}$ for $2 \le j \le 2m_i -2$ by the condition (5).
\item If $b_j ^{(i)} \not = b_{j+1} ^{(i)}$, then either $b_j ^{(i)} = a_i$ or $b_{j+1} ^{(i)} = a_i$
(This is because $a_i$ is a point and
we have an inclusion either $a_i \in b_j ^{(i)} \subset b_{j+1} ^{(i)}$ or $a_i \in b_{j+1} ^{(i)} \subset b_{j} ^{(i)}$).
\end{itemize}
Hence, applying the operation (a) to the image of $P$
\[
\left( \ldots , f(C^{(i-1)} _{m_{i-1} -1}),
b_1^{(i)}, \ldots, b_{2m_i -1} ^{(i)}, f(C^{(i+1)} _1) \ldots \right),
\]
it can be reduced to
\[
\left( \ldots , f(C^{(i-1)} _{m_{i-1} -1} ),
a_{i-1}, a_i, c_1, a_i, \cdots, a_i, c_{n_i} ,a_i, a_{i+1}, f(C^{(i+1)} _1), \ldots \right),
\]
for some $c_1, \ldots, c_{n_i}$.
Applying the operation (b), it can be reduced to
\[
\left( \ldots , f(C^{(i-1)} _{m_{i-1} -1}),
f(S_{m_i}^{(i-1)})= a_{i-1}, a_i, a_{i+1}= f(S_{1}^{(i+1)}), f(C^{(i+1)} _1), \ldots \right).
\]
Therefore, the (a,b)-reduction of the image of $P$ is $(a_1, a_2, \ldots , a_n)$,
which is non-trivial since $a_i$'s are distinct.
We have proved that the image of $P$ is a non-trivial cycle sequence.
\vspace{2mm}
\noindent
\textbf{STEP 2. }\
Let $Q$ be an edge loop
\[
Q: S'_1 \overset{C'_1}{\longrightarrow} S'_2 \overset{C'_2}{\longrightarrow} \cdots
\overset{C'_{n-1}}{\longrightarrow} S'_n \overset{C'_n}{\longrightarrow} S'_1
\]
in $G$.
Suppose that there exist $i$ and a $2$-simplex $F$ in $G$ such that $C' _i \not = C' _{i+1}$ and $C' _i, C' _{i+1} < F$.
Let $C' < F$ be the edge which is different from $C' _i$ and $C' _{i+1}$.
\vspace{1mm}
\begin{center}
\begin{footnotesize}
\input{graph/facered.tex}
\end{footnotesize}
\end{center}
\vspace{3mm}
\noindent
Then we have a new edge loop $Q'$:
\[
Q': S'_1 \overset{C'_1}{\longrightarrow} S'_2 \overset{C'_2}{\longrightarrow} \cdots
\overset{C'_{i-1}}{\longrightarrow} S'_i \overset{C'}{\longrightarrow} S'_{i+2} \overset{C'_{i+2}}{\longrightarrow}
\cdots
\overset{C'_{n-1}}{\longrightarrow} S'_n \overset{C'_n}{\longrightarrow} S'_1.
\]
\noindent
We claim in this step that
\begin{itemize}
\item the image of $Q$ and the image of $Q'$ have the same (a,b)-reduction.
\end{itemize}
The image of $Q$ is
\[
R: \left( f(S' _1), f(C' _1), \ldots , f(S' _i), f(C' _i), f(S' _{i+1}), f(C' _{i+1}), f(S' _{i+2}), \ldots , f(C'_n) \right),
\]
and the image of $Q'$ is
\[
R': \left( f(S' _1), f(C' _1), \ldots , f(S' _i), f(C'), f(S' _{i+2}), \ldots , f(C'_n) \right).
\]
We have four cases.
\begin{itemize}
\item[(i)] $f(S' _i) = f(S' _{i+1}) = f(S' _{i+2})$.
\item[(ii)] $f(S'_i) = f(S' _{i+2}) \not = f(S'_{i+1})$.
\item[(iii)] $f(S'_i) = f(S' _{i+1}) \not = f(S' _{i+2})$ or $f(S'_i) \not = f(S' _{i+1}) = f(S' _{i+2})$.
\item[(iv)] $f(S' _i), f(S' _{i+1}), f(S' _{i+2})$ are distinct.
\end{itemize}
Suppose (i).
Applying the operation (b) twice to $R$, and once to $R'$, we get the same sequence
\[
\left( f(S' _1), f(C' _1), \ldots , f(C'_{i-1}),f(S'_i) = f(S'_{i+2}) ,f(C'_{i+2}), \ldots , f(C'_n) \right).
\]
Suppose (ii). Since $f(C'_i) \subset f(S' _i) \cap f(S' _{i+1})$ and $f(S' _i) \not = f(S' _{i+1})$,
it follows that $\dim f(C'_i) = 0$. For the same reason, it follows that $\dim f(C'_{i+1}) = 0$.
Since $f(F) \subset f(C' _i) \cap f(C' _{i+1})$, it follows that $f(C' _i) = f(F) = f(C' _{i+1})$.
Applying the operation (b) twice to $R$, and once to $R'$, we get the same sequence
\[
\left( f(S' _1), f(C' _1), \ldots , f(C'_{i-1}),f(S'_i)=f(S'_{i+2}),f(C'_{i+2}), \ldots , f(C'_n) \right).
\]
Suppose (iii).
We may assume that $f(S'_i) = f(S' _{i+1}) \not = f(S' _{i+2})$.
For the same reason as in the case (ii), $f(C' _{i+1}) = f(F) = f(C')$.
Applying the operation (b) once to $R$, we get the sequence $R'$
\[
\left( f(S' _1), f(C' _1), \ldots , f(C'_{i-1}),f(S'_i)=f(S'_{i+1}),f(C' _{i+1})=f(C'),
f(S' _{i+2}), f(C'_{i+2}), \ldots , f(C'_n) \right).
\]
Suppose (iv). In this case, $f(C' _i) = f(C' _{i+1}) = f(C')$.
Applying the operation (b) once to $R$, we get the sequence $R'$
\[
\left( f(S' _1), f(C' _1), \ldots , f(C'_{i-1}),f(S'_i), f(C' _i) = f(C' _{i+1}) = f(C'), f(S' _{i+2}),
f(C'_{i+2}), \ldots , f(C'_n) \right).
\]
In any case, $R$ and $R'$ have the same (a,b)-reduction.
\vspace{2mm}
\noindent
\textbf{STEP 3. }\
Let $Q$ be an edge loop
\[
Q: S'_1 \overset{C'_1}{\longrightarrow} S'_2 \overset{C'_2}{\longrightarrow} \cdots
\overset{C'_{i-1}}{\longrightarrow} S' _i \overset{C'_i}{\longrightarrow} S'_{i+1} \overset{C'_{i+1}}{\longrightarrow} S'_{i+2}
\overset{C'_{i+2}}{\longrightarrow} \cdots
\overset{C'_{n-1}}{\longrightarrow} S'_n \overset{C'_n}{\longrightarrow} S'_1
\]
in $G$.
Suppose that there exists $i$ such that $C' _i = C' _{i+1}$.
Then $S'_i = S'_{i+2}$ holds.
\vspace{2mm}
\begin{center}
\begin{footnotesize}
\input{graph/edgered.tex}
\end{footnotesize}
\end{center}
\vspace{4mm}
\noindent
Then we have a new edge loop $Q'$:
\[
Q': S'_1 \overset{C'_1}{\longrightarrow} S'_2 \overset{C'_2}{\longrightarrow} \cdots
\overset{C'_{i-1}}{\longrightarrow} S'_i \overset{C'_{i+2}}{\longrightarrow} S'_{i+3} \overset{C'_{i+3}}{\longrightarrow}
\cdots
\overset{C'_{n-1}}{\longrightarrow} S'_n \overset{C'_n}{\longrightarrow} S'_1.
\]
We claim in this step that
\begin{itemize}
\item the image of $Q$ and the image of $Q'$ have the same (a,b)-reduction.
\end{itemize}
The image of $Q$ is
\[
R: \left( f(S' _1), f(C' _1), \ldots , f(S' _i), f(C' _i), f(S' _{i+1}), f(C' _{i+1}), f(S' _{i+2}),f(C' _{i+2}), \ldots , f(C'_n) \right),
\]
and the image of $Q'$ is
\[
R': \left( f(S' _1), f(C' _1), \ldots , f(S' _i), f(C'_{i+2}), \ldots , f(C'_n) \right).
\]
Since $C' _i = C' _{i+1}$ and $S'_i = S'_{i+2}$,
applying the operation (b) twice to $R$, we get $R'$.
Hence $R$ and $R'$ have the same (a,b)-reduction.
\vspace{2mm}
\noindent
\textbf{STEP 4. }\
By STEP 1,
there exists an edge loop $Q$
\[
Q: S_1 \overset{C_1}{\longrightarrow} S_2 \overset{C_2}{\longrightarrow} \cdots
\overset{C_{n-1}}{\longrightarrow} S_n \overset{C_n}{\longrightarrow} S_1
\]
in $G$ such that
\begin{itemize}
\item its image $(f(S_1), f(C_1), \ldots , f(S_n), f(C_n))$ is a non-trivial cycle sequence.
\end{itemize}
Since $G$ is simply connected, applying the operations in STEP 2 and STEP 3 (and their inverses) repeatedly,
we get a trivial path
\[
Q': S'
\]
for some vertex $S'$ \cite[Theorem 3.4.1]{Geo08}.
The (a,b)-reduction of the image of $Q$ is not trivial but that of $Q'$ is trivial.
This contradicts STEP 2 and STEP 3.
\end{proof}
\subsection{Vanishing theorem of Witt vector cohomology of Ambro-Fujino type}
\begin{thm}\label{thm:WAFV}
Let $k$ be a perfect field of characteristic $p > 5$.
Let $(X, \Delta)$ be a three-dimensional projective $\mathbb{Q}$-factorial log canonical pair over $k$ with $-(K_X + \Delta)$ ample.
Then $H^i(X,W \mathcal{O}_{X, \mathbb{Q}}) = 0$ holds for $i > 0$.
\end{thm}
\begin{proof}
By replacing $\Delta$ with a slightly smaller divisor, we may assume that $\dim \mathrm{Nklt}(X, \Delta) \le 1$.
By the exact sequence
\[
0 \to WI_{\mathrm{Nklt}(X, \Delta), \mathbb{Q}} \to W \mathcal{O}_{X, \mathbb{Q}} \to
W \mathcal{O}_{\mathrm{Nklt}(X, \Delta), \mathbb{Q}} \to 0,
\]
and the Nadel type vanishing $H^i(X, WI_{\mathrm{Nklt}(X, \Delta), \mathbb{Q}}) = 0$ for $i >0$
(Theorem \ref{thm:WNV}),
it is sufficient to show that
\[
H^1 (\mathrm{Nklt}(X, \Delta), W \mathcal{O}_{\mathrm{Nklt}(X, \Delta), \mathbb{Q}}) = 0.
\]
Here, we may assume that $k$ is algebraically closed by \cite[Lemma 2.15]{NT}.
Since $\dim \mathrm{Nklt}(X, \Delta) \le 1$ and $\mathrm{Nklt}(X, \Delta)$ is connected (\cite[Theorem 1.2]{NT}),
we may assume that $\mathrm{Nklt}(X, \Delta)$ is a union of curves.
Let $C := \mathrm{Nklt}(X, \Delta) = C_1 \cup C_2 \cup \cdots \cup C_l$ be the irreducible decomposition.
By Lemma \ref{lem:rationality}, Proposition \ref{prop:normality}, and Proposition \ref{prop:tree},
the curve $C$ satisfies the following conditions.
\begin{enumerate}
\item[(1)] Each $C_i$ is a rational curve.
\item[(2)] The normalization of each $C_i$ is a universal homeomorphism.
\item[(3)] $C = C_1 \cup \ldots \cup C_l$ forms a tree (see Definition \ref{def:tree}).
\end{enumerate}
Then $H^1 (C_i, W \mathcal{O}_{C_i, \mathbb{Q}}) = 0$ follows from (1) and (2) (cf.\ \cite[Lemma 2.21, 2.22]{GNT}).
Hence the desired vanishing $H^1 (C, W \mathcal{O}_{C, \mathbb{Q}}) = 0$ follows from (3).
\end{proof}
\subsection{Rational point formula}
As an application of Theorem \ref{thm:WAFV}, we obtain the following rational point formula.
\begin{thm}\label{thm:RPF}
Let $k$ be a finite field of characteristic $p > 5$.
Let $(X, \Delta)$ be a geometrically connected three-dimensional projective $\mathbb{Q}$-factorial log canonical pair over $k$ with $-(K_X + \Delta)$ ample.
Then the number of the $k$-rational points on the non-klt locus of $(X, \Delta)$ satisfies
\[
\# \mathrm{Nklt}(X, \Delta) (k) \equiv 1 \pmod {|k|}.
\]
In particular, there exists a $k$-rational point on $\mathrm{Nklt}(X, \Delta)$.
\end{thm}
\begin{proof}
Let $Z = \mathrm{Nklt}(X, \Delta)$ and let $I_Z$ be the corresponding coherent ideal sheaf.
By Theorem \ref{thm:WNV} and Theorem \ref{thm:WAFV},
\[
H^i(X, WI_{Z, \mathbb{Q}}) = 0,\ \text{and}\ H^i(X,W \mathcal{O}_{X, \mathbb{Q}}) = 0
\]
hold for $i > 0$.
By the exact sequence
\[
0 \to WI_{Z, \mathbb{Q}} \to W \mathcal{O}_{X, \mathbb{Q}} \to
W \mathcal{O}_{Z, \mathbb{Q}} \to 0,
\]
$H^i(Z, W \mathcal{O}_{Z, \mathbb{Q}}) = 0$ holds for $i >0$.
By \cite[Proposition 6.9 (i)]{BBE07}, it follows that $\# Z (k) \equiv 1 \pmod {|k|}$.
\end{proof}
\begin{bibdiv}
\begin{biblist*}
\bib{AKMW02}{article}{
author={Abramovich, Dan},
author={Karu, Kalle},
author={Matsuki, Kenji},
author={W\l odarczyk, Jaros\l aw},
title={Torification and factorization of birational maps},
journal={J. Amer. Math. Soc.},
volume={15},
date={2002},
number={3},
pages={531--572},
}
\bib{Ax64}{article}{
author={Ax, James},
title={Zeroes of polynomials over finite fields},
journal={Amer. J. Math.},
volume={86},
date={1964},
pages={255--261},
}
\bib{BBE07}{article}{
author={Berthelot, Pierre},
author={Bloch, Spencer},
author={Esnault, H{\'e}l{\`e}ne},
title={On Witt vector cohomology for singular varieties},
journal={Compos. Math.},
volume={143},
date={2007},
number={2},
pages={363--392},
}
\bib{Bir16}{article}{
author={Birkar, Caucher},
title={Existence of flips and minimal models for 3-folds in char $p$},
language={English, with English and French summaries},
journal={Ann. Sci. \'Ec. Norm. Sup\'er. (4)},
volume={49},
date={2016},
number={1},
pages={169--212},
}
\bib{BCHM10}{article}{
author={Birkar, Caucher},
author={Cascini, Paolo},
author={Hacon, Christopher D.},
author={McKernan, James},
title={Existence of minimal models for varieties of log general type},
journal={J. Amer. Math. Soc.},
volume={23},
date={2010},
number={2},
pages={405--468},
}
\bib{BW17}{article}{
author={Birkar, Caucher},
author={Waldron, Joe},
title={Existence of Mori fibre spaces for 3-folds in ${\rm char}\,p$},
journal={Adv. Math.},
volume={313},
date={2017},
pages={62--101},
}
\bib{CR12}{article}{
author={Chatzistamatiou, Andre},
author={R{\"u}lling, Kay},
title={Hodge-Witt cohomology and Witt-rational singularities},
journal={Doc. Math.},
volume={17},
date={2012},
pages={663--781},
}
\bib{CP08}{article}{
author={Cossart, Vincent},
author={Piltant, Olivier},
title={Resolution of singularities of threefolds in positive
characteristic. I. Reduction to local uniformization on Artin-Schreier
and purely inseparable coverings},
journal={J. Algebra},
volume={320},
date={2008},
number={3},
pages={1051--1082},
}
\bib{DH16}{article}{
author={Das, Omprokash},
author={Hacon, Christopher D.},
title={On the adjunction formula for 3-folds in characteristic $p>5$},
journal={Math. Z.},
volume={284},
date={2016},
number={1-2},
pages={255--269},
}
\bib{dFKX17}{article}{
author={de Fernex, Tommaso},
author={Koll\'ar, J\'anos},
author={Xu, Chenyang},
title={The dual complex of singularities},
conference={
title={Higher dimensional algebraic geometry---in honour of Professor
Yujiro Kawamata's sixtieth birthday},
},
book={
series={Adv. Stud. Pure Math.},
volume={74},
publisher={Math. Soc. Japan, Tokyo},
},
date={2017},
pages={103--129},
}
\bib{Esn03}{article}{
author={Esnault, H{\'e}l{\`e}ne},
title={Varieties over a finite field with trivial Chow group of 0-cycles
have a rational point},
journal={Invent. Math.},
volume={151},
date={2003},
number={1},
pages={187--191},
}
\bib{Fuj07}{article}{
author={Fujino, Osamu},
title={What is log terminal?},
conference={
title={Flips for 3-folds and 4-folds},
},
book={
series={Oxford Lecture Ser. Math. Appl.},
volume={35},
publisher={Oxford Univ. Press, Oxford},
},
date={2007},
pages={49--62},
}
\bib{Fuj11}{article}{
author={Fujino, Osamu},
title={Fundamental theorems for the log minimal model program},
journal={Publ. Res. Inst. Math. Sci.},
volume={47},
date={2011},
number={3},
pages={727--789},
}
\bib{Fuj17}{book}{
author={Fujino, Osamu},
title={Foundations of the minimal model program},
series={MSJ Memoirs},
volume={35},
publisher={Mathematical Society of Japan},
date={2017},
}
\bib{Geo08}{book}{
author={Geoghegan, Ross},
title={Topological methods in group theory},
series={Graduate Texts in Mathematics},
volume={243},
publisher={Springer, New York},
date={2008},
}
\bib{GNT}{article}{
author={Gongyo, Yoshinori},
author={Nakamura, Yusuke},
author={Tanaka, Hiromu},
title={Rational points on log Fano threefolds over a finite field},
journal={to appear in J. Eur. Math. Soc.},
eprint={arXiv:1512.05003v3},
}
\bib{HX15}{article}{
author={Hacon, Christopher D.},
author={Xu, Chenyang},
title={On the three dimensional minimal model program in positive
characteristic},
journal={J. Amer. Math. Soc.},
volume={28},
date={2015},
number={3},
pages={711--744},
}
\bib{Har77}{book}{
author={Hartshorne, Robin},
title={Algebraic geometry},
note={Graduate Texts in Mathematics, No. 52},
publisher={Springer-Verlag, New York-Heidelberg},
date={1977},
}
\bib{HNT}{article}{
author={Hashizume, Kenta},
author={Nakamura, Yusuke},
author={Tanaka, Hiromu},
title={Minimal model program for log canonical threefolds in positive characteristic},
journal={to appear in Math. Res. Lett.},
eprint={arXiv:1711.10706v1},
}
\bib{Hat02}{book}{
author={Hatcher, Allen},
title={Algebraic topology},
publisher={Cambridge University Press, Cambridge},
date={2002},
}
\bib{Kat71}{article}{
author={Katz, Nicholas M.},
title={On a theorem of Ax},
journal={Amer. J. Math.},
volume={93},
date={1971},
pages={485--499},
}
\bib{Kol13}{book}{
author={Koll{\'a}r, J{\'a}nos},
title={Singularities of the minimal model program},
series={Cambridge Tracts in Mathematics},
volume={200},
note={With a collaboration of S\'andor Kov\'acs},
publisher={Cambridge University Press, Cambridge},
date={2013},
}
\bib{KM98}{book}{
author={Koll{\'a}r, J{\'a}nos},
author={Mori, Shigefumi},
title={Birational geometry of algebraic varieties},
series={Cambridge Tracts in Mathematics},
volume={134},
publisher={Cambridge University Press, Cambridge},
date={1998},
}
\bib{KX16}{article}{
author={Koll\'{a}r, J\'{a}nos},
author={Xu, Chenyang},
title={The dual complex of Calabi-Yau pairs},
journal={Invent. Math.},
volume={205},
date={2016},
number={3},
pages={527--557},
}
\bib{Mau}{article}{
author={Mauri, Mirko},
title={The dual complex of log Calabi-Yau pairs on Mori fibre spaces},
eprint={arXiv:1808.03706v1},
}
\bib{NT}{article}{
author={Nakamura, Yusuke},
author={Tanaka, Hiromu},
title={A Witt Nadel vanishing theorem for threefolds},
eprint={arXiv:1712.07358v1},
}
\bib{Sza94}{article}{
author={Szab\'{o}, Endre},
title={Divisorial log terminal singularities},
journal={J. Math. Sci. Univ. Tokyo},
volume={1},
date={1994},
number={3},
pages={631--639},
}
\bib{Tan14}{article}{
author={Tanaka, Hiromu},
title={Minimal models and abundance for positive characteristic log
surfaces},
journal={Nagoya Math. J.},
volume={216},
date={2014},
pages={1--70},
}
\bib{Wal18}{article}{
author={Waldron, Joe},
title={The LMMP for log canonical 3-folds in characteristic $p>5$},
journal={Nagoya Math. J.},
volume={230},
date={2018},
pages={48--71},
}
\end{biblist*}
\end{bibdiv}
\end{document}
The selection of electrons from the input cosmic ray particle flux was performed using the Monte Carlo simulations of the showers generated by the primary electrons in the ATIC apparatus and in the residual atmosphere with the FLUKA system \citep{FLUKA}. To distinguish the electrons, some special quantities are constructed that describe the shape of the shower in the apparatus in the longitudinal and transverse directions in such a way that these quantities take sharply different values for ``typical'' electrons and for ``typical'' protons\footnote{The nuclei with $z\ge2$ are easily rejected by the use of the charge detector.}. We call these quantities ``electron filters''. In contrast to \citet{CHANG2008,CHANG-NATURE2008}, where only one filter was used, we constructed five different filters to provide a cross-check of the results and an evaluation of the methodological reliability. The results below refer to one filter, called Chi, but are confirmed by the other filters as well. The basic parameters for the construction of the Chi-filter are the relative energy deposits in the layers of the calorimeter, $C_l=E_l/E$ (where $l=0,1,2,\dots$ is the number of the layer from the top to the bottom, and $E$ is the total energy deposit in the calorimeter) and the root mean square widths of the shower in the layers $R_l$. The value of the filter Chi is given by the formula
$$\mathrm{Chi}=\sqrt{\frac{1}{8}\left[\sum_{l=0}^3[(R_l-\bar R_l)/\sigma^R_l]^2 + \sum_{l=4}^7[(F_l-\bar F_l)/\sigma^F_l]^2 \right]},$$
where $F_l=R_l\sqrt{C_l}$. The mean values and dispersions of quantities $R_l$ and $F_l$ are calculated for the incident electrons by the simulations. The distribution of Chi-values for singly charged particles after a preliminary selection of events by energy deposits in the layers of the calorimeter is shown in Fig.~\ref{FilterChiAndBack}. The preliminary energy condition consists of a number of cuts for the relative energy deposits $C_l$. These cuts are the lower limits for $C_0,\dots,C_6$ and the upper limits for $C_7,C_8,C_9$. They are obtained by the simulations in which about 98\% of the electrons passed the selection with the applied cuts, while about 30\% of the protons in ATIC-2 and 70\% in ATIC-4 were rejected by these cuts. The energy cuts are based on the difference of the averaged longitudinal development of the shower in the calorimeter for electrons and protons. For the final selection of electrons, we used such cut values for Chi that 25--30\% of the primary electrons were rejected and lost together with the rejected proton background (the optimal cut values were different for the ATIC-2 and ATIC-4 flights; the efficiency of the selection was estimated by the simulations). These losses are taken into account when evaluating the absolute primary flux of the cosmic ray electrons.
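For concreteness, the Chi value of a single event can be computed directly from the formula above. The following sketch is ours (the function name and argument layout are not from the ATIC software); the layer means and dispersions would be tabulated from the electron simulations described in the text.

```python
import math

def chi_filter(R, C, R_mean, R_sigma, F_mean, F_sigma):
    """Chi value for one event, following the formula in the text.

    R[l] : r.m.s. shower width in calorimeter layer l (l = 0..7)
    C[l] : relative energy deposit E_l / E in layer l
    The layer means and dispersions (R_mean, R_sigma, F_mean, F_sigma)
    are assumed to be tabulated from simulations of incident electrons.
    """
    # layers 0..3 use the widths R_l directly
    s = sum(((R[l] - R_mean[l]) / R_sigma[l]) ** 2 for l in range(4))
    # layers 4..7 use F_l = R_l * sqrt(C_l)
    s += sum(((R[l] * math.sqrt(C[l]) - F_mean[l]) / F_sigma[l]) ** 2
             for l in range(4, 8))
    return math.sqrt(s / 8.0)
```

An event whose shower profile matches the electron averages gives $\mathrm{Chi}=0$, and a one-standard-deviation departure in every layer gives $\mathrm{Chi}=1$, so electrons cluster at small Chi values while the proton background populates the tail.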
\section{Fine structure in the electron spectrum}
\label{FINE}
The calorimeter of the ATIC spectrometer is practically thick for electrons (18 radiation lengths for ATIC-2 and 22.5 radiation lengths for ATIC-4), therefore the incident energy of an electron can be easily determined from the total energy deposit in the calorimeter. The test measurements on the electron beam at CERN \citep{GANEL2005} and the simulations have shown that the ATIC spectrometer has a very high energy resolution for electrons. The resolution is a slowly varying function of energy and, in terms of the half-width of the line at half maximum, is less than 3\% at energies of 200--600~GeV. No unfolding procedure is needed to obtain the high-resolution spectrum with such a narrow apparatus function, since the width of the apparatus function is less than or comparable to the width of the energy bins for all binnings used. Only an energy-dependent scaling factor of about 1.1, evaluated in the simulations, is used to obtain the primary energy of the electrons from the energy deposited in the calorimeter. The high energy resolution enables us to investigate the electron spectrum for the presence of a structure on the scale of 0.1--0.2 decade in energy.
It is essential that, to detect a structure on the scale of 0.1--0.2 decade in energy, it is not necessary to investigate the ``absolute'' electron spectrum obtained after the subtraction of the proton background (see Fig.~\ref{FilterChiAndBack}) and after accounting for the electron scattering in the residual atmosphere. Neither the background nor the scattering of electrons in the atmosphere can lead to a short-period structure in the electron spectrum. We have confirmed this explicitly by simulations: it was shown that the atmospheric correction produces only an amplitude scaling factor that is a slowly varying, almost linear function of the logarithm of energy (see Sec.~\ref{ABS} for further details). On the other hand, the protons which can mimic electrons ($\mathrm{Chi} \sim 1$) and are responsible for the proton background have a very wide energy apparatus function (about 50\% of the mean deposited energy), and such an apparatus function smears all possible structures (if they are present) in the initial flux of protons. Consequently, the proton background should form a spectrum without prominent features.
The spectra of electrons as measured in the ATIC-2 and ATIC-4 experiments, without atmospheric corrections and the subtraction of the proton background, are shown in Fig.~\ref{Fine} in the energy range of 30--900~GeV with a step of 0.035 decades in energy. It is easy to see a structure in the range of 200--600~GeV, which is well reproduced in both experiments. Hereinafter, we call this phenomenon the `fine structure'. The total statistics per energy bin of the spectrum are shown in Fig.~\ref{N-A2A4-Fine}.
The statistical significance of the observed fine structure is determined by two different factors: firstly, by the statistical significance of the presence of non-random structure with the usual $\chi^2$-criterion for the total ATIC-2 + ATIC-4 spectrum; and secondly, by the statistical significance of the correlation (similarity) of structures of the spectra measured separately in ATIC-2 and ATIC-4. The evaluation was performed for the region of the spectrum from 200~GeV to 800~GeV. The formulae for the calculation of the $\chi^2$-value and a correlation-like function $C$ are respectively:
$$
\chi^2 = \sum_i\left(\frac{y_i - \bar y_i}{\sigma^y_i}\right)^2;\quad
C = \sum_i \frac{x_i-\bar x_i}{\sigma^x_i}\,\frac{y_i-\bar y_i}{\sigma^y_i},
$$
where $x_i,y_i$ are integer numbers representing the statistics in bin number $i$; $\bar x_i,\bar y_i$ are the `smoothed' values of the spectra in the same bin; and $\sigma^x_i = \sqrt{\bar x_i},\sigma^y_i = \sqrt{\bar y_i}$ are the Poisson dispersions for the smoothed spectra; $y_i$ denotes the intensity of the sum of the ATIC-2 and ATIC-4 spectra in the formula for $\chi^2$, while $x_i,y_i$ denote the intensities of the ATIC-2 and ATIC-4 spectra separately in the formula for $C$. The probabilities $P_{\chi^2} = P(\chi^2_\mathrm{rnd}>\chi^2_\mathrm{exp})$ and $P_C=P(C_\mathrm{rnd}>C_\mathrm{exp})$ that the values of $\chi^2$ and $C$ for random spectra exceed the experimental values $\chi^2_\mathrm{exp}$ and $C_\mathrm{exp}$ were calculated. The random spectra are spectra with Poisson statistics relative to the average values determined by the smoothed spectra. The quantities $1-P_{\chi^2}$ and $1-P_C$ are the statistical significances of the fine structure calculated in two different ways. All probabilities are calculated with Monte Carlo simulations and are checked approximately against standard table values where possible.
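The Monte Carlo estimate of $P_{\chi^2}$ and $P_C$ described above can be sketched as follows. This is a toy illustration under the stated null hypothesis (Poisson fluctuations around the smoothed spectra); the function name and the bin contents in the test are hypothetical, not the experiment's.

```python
import numpy as np

def significance(y_obs_a, y_obs_b, y_smooth_a, y_smooth_b, n_mc=20000, seed=0):
    """Monte Carlo P_chi2 and P_C for two spectra (e.g. ATIC-2, ATIC-4).

    y_obs_*    : observed counts per bin in each experiment
    y_smooth_* : smoothed ('null') spectra for the same bins
    Returns (P_chi2, P_C): probabilities that random Poisson spectra
    exceed the experimental chi^2 and correlation C.
    """
    rng = np.random.default_rng(seed)
    a = np.asarray(y_smooth_a, float)
    b = np.asarray(y_smooth_b, float)
    oa = np.asarray(y_obs_a, float)
    ob = np.asarray(y_obs_b, float)
    sa, sb = np.sqrt(a), np.sqrt(b)          # Poisson dispersions
    tot_smooth = a + b
    # experimental statistics (chi^2 uses the summed spectrum)
    chi2_exp = np.sum((oa + ob - tot_smooth) ** 2 / tot_smooth)
    C_exp = np.sum((oa - a) / sa * (ob - b) / sb)
    # null distribution from Poisson-fluctuated smoothed spectra
    ra = rng.poisson(a, size=(n_mc, a.size))
    rb = rng.poisson(b, size=(n_mc, b.size))
    chi2_rnd = np.sum((ra + rb - tot_smooth) ** 2 / tot_smooth, axis=1)
    C_rnd = np.sum((ra - a) / sa * (rb - b) / sb, axis=1)
    return float(np.mean(chi2_rnd > chi2_exp)), float(np.mean(C_rnd > C_exp))
```

A correlated excess present in both spectra drives both $P_{\chi^2}$ and $P_C$ towards zero, i.e. towards high significance.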
The calculation of $P_{\chi^2}$ and $P_C$ is by no means simple. One difficulty of the approach is that there is some irreducible arbitrariness in the construction of a `smoothed' spectrum. Therefore, one of the main objectives was to investigate the stability of the estimate against the arbitrariness of the smoothing procedure. To obtain the smoothed spectra we approximated the primary spectra in a wide range of energies (from 35~GeV to 1500~GeV) by a cubic spline. To evaluate the stability of the statistical significance against the arbitrariness of the smoothing procedure, we studied smoothing procedures with a very wide range of steps of averaging during the construction of the spline (spline steps). We found that there exists a wide plateau in the dependence of the estimated statistical significance on the spline step. The estimated statistical significance of the fine structure is almost constant for spline steps from 0.12 to 0.24 decades of energy. Such a plateau is possible due to the very high amplitude of the fine structure of the spectrum. The details of any `reasonable' smoothing procedure are therefore not actually important. Our final results are obtained by averaging over the spline steps 0.12, 0.15, 0.18, 0.21, and 0.24.
The other difficulty is that the estimate depends strongly on the specific energy binning used in building the spectrum. The significance for each spline step (see above) is therefore evaluated by averaging over a number of different binnings with bin widths from 0.015 to 0.035 decades of energy. The errors of the final estimates are calculated as standard deviations representing the fluctuations of the estimates with different binnings and different `splinings'. The contribution to the error from the instability related to the bin size is dominant.
In this way we found that the statistical significance for the correlation is $1-P_C=(99.69^{+0.25}_{-1.32})\%$, and for the $\chi^2$-criterion $1-P_{\chi^2}=(99.68^{+0.27}_{-1.69})\%$. The high statistical significance practically rules out a random origin of the observed fine structure. We would like to note that the present estimates of the statistical significance should be considered as preliminary ones. It is desirable to develop a `binningless' technique to avoid the relatively large statistical errors of the estimates caused by their fluctuations with the binning. This work is currently in progress.
Several tests were also performed to exclude possible methodological causes of the observed structures. The statistics of the proton background in the range of filter values near the electron peak but free of the electron signal ($2<\mathrm{Chi}<3.5$, see Fig.~\ref{FilterChiAndBack}) were investigated: no signs of a possible structure were found. The different electron filters were studied: all the filters produced a very similar structure. The spectra for different solid angles and different time periods of the experiments were compared: the fine structure was reproduced in all cases. Thus, no evidence was found that the observed fine structure might be caused by a methodological effect.
If the existence of the fine structure in the electron spectrum is confirmed by independent experiments, then its most likely source will be nearby supernova remnants and/or pulsars, not the annihilation or decay of dark matter particles. The idea that the cosmic ray electron spectrum in the TeV energy range may have features related to a few nearby sources like pulsars is forty years old \citep{SHEN1970}. The possibility that a deviation from a smooth power-like behavior in the spectrum is caused by individual nearby sources and can be observable in this energy range was proposed in \citet{NISHIMURA1980}, in relation to feasible experiments. Later on this issue was considered from various viewpoints repeatedly (see, for example, \citet{POHL1998,EW2002,PROFUMO2008}). Specifically, a structure very similar to the one observed in the ATIC experiment is predicted in \citet{MALYSHEV2009}, where it is especially emphasized that such a structure might be used as a signature to distinguish between the annihilation or decay of dark matter particles and other sources of electrons, such as nearby pulsars. Dark matter cannot be the source of a specific structure represented by several narrow peaks \citep{MALYSHEV2009}. The reason is simple and deep: sources like pulsars can be treated as instantaneous, while sources like dark matter halos or subhalos (clumps) should be considered as permanent in time. The high-energy wing of the spectrum of an instantaneous source can, in principle, create a very sharp peak in the electron spectrum due to the cooling of electrons \citep{MALYSHEV2009}, therefore a number of sources can produce several peaks, as observed in the ATIC experiment. 
On the contrary, permanent sources like dark matter clumps would mix such peaks with different energies, related to different moments of the time of emission of electrons, and produce wide distributions in the energy spectrum \citep{KUHLEN2009} which have little in common with the fine structure observed by ATIC. Even if some `multi-peak' structure were produced by dark matter clumps, this structure would be very smooth and smeared compared to the observed ATIC fine structure (see \citet[FIG.3]{BRUN2009}, \citet[FIG.9]{CLINE2010}).
\section{The electron spectrum after proton background subtraction and atmospheric correction}
\label{ABS}
The spectrum shown in Fig.~\ref{Fine} does not provide a basis for a comparison with the results of other experiments, since it does not give the correct absolute intensity of the electron flux. To obtain the correct absolute intensity of the electron spectrum, the proton background must be subtracted, and the electron spectrum must be corrected for the scattering in the residual atmosphere (we neglect the secondary atmospheric electron background from the hadronic component of cosmic rays, since it was shown to be small at the residual atmosphere thickness of about 5~g/cm$^2$ typical for the ATIC flights \citep{NISHIMURA1980}).
The procedure of subtracting the proton background based on simulations of proton cascades in the ATIC apparatus leads to an unstable result: inevitable small errors in the simulations lead to large errors in the electron spectrum \citep{PANOV2009}. In this paper, we implemented a method that is independent of the simulations. The method is based on an approximation of the experimental distribution of Chi-values for protons (see Fig.~\ref{FilterChiAndBack}) with simple functions and on the interpolation of these functions to the coordinate origin. We examined functions from two different three-parameter families, $y(x) = A x^2 \exp[-(x/\sigma)^\alpha]$ and $y(x) = Ax/[1+(x/\sigma)^\alpha]$, which produce a reasonable approximation of the proton peak but result in somewhat different extrapolations of the proton background under the electron peak. The scale of the possible systematic error related to the background subtraction is estimated by comparing the results for the different types of functions. This method is not rigorous, of course, and should be considered as a qualitative estimate. The scale of the proton background is about 5\% of the total amplitude of the measured spectrum near 40~GeV and about 40\% near 700~GeV. The total estimated corridor of possible systematic errors varies from ${}^{+15\%}_{-16\%}$ at an energy of 40~GeV to ${}^{+57\%}_{-46\%}$ at 700~GeV, and is related mainly to the uncertainty in the detection efficiency at low energies, while at high energies it is related to the errors in the subtraction of backgrounds. The systematic errors cannot lead to an essential distortion of the shape of the spectrum.
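The fit-and-extrapolate step can be sketched as below. This is a minimal illustration under assumed fit ranges and synthetic data; the actual histograms, ranges, and starting values are those of the experiment, not the ones shown here.

```python
import numpy as np
from scipy.optimize import curve_fit

# the two trial three-parameter shapes for the proton Chi distribution
def shape1(x, A, sigma, alpha):
    return A * x**2 * np.exp(-(x / sigma) ** alpha)

def shape2(x, A, sigma, alpha):
    return A * x / (1.0 + (x / sigma) ** alpha)

def extrapolated_background(shape, chi, counts, fit_lo=2.0, sig_hi=1.5,
                            p0=(1e3, 3.0, 2.0)):
    """Fit `shape` to the proton peak at chi > fit_lo (free of electron
    signal) and return its extrapolated sum under the electron peak
    (chi < sig_hi).  Ranges and p0 are illustrative; comparing shape1
    vs shape2 gauges the extrapolation systematics."""
    m = chi > fit_lo
    popt, _ = curve_fit(shape, chi[m], counts[m], p0=p0, maxfev=20000)
    return float(shape(chi[chi < sig_hi], *popt).sum())
```

Running the procedure with both shapes and differencing the two extrapolations gives the systematic-error estimate described in the text.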
The atmospheric correction is calculated on the basis of simulations of the scattering of primary electrons in the atmosphere using the FLUKA system \citep{FLUKA}. As suggested in our paper \citet{PANOV2009}, one does not need to correct the energy at the top of the ATIC apparatus to obtain the energy at the top of the atmosphere, since the scattering angles of the secondary gamma quanta are very small and the energy of an electron is recorded in the calorimeter together with the energies of almost all secondary gamma quanta. However, these gamma quanta may distort the shape of the cascade in the apparatus, which leads to some additional inefficiency of the filtration of the electrons. This was taken into account. The correction factor calculated from the simulations is approximated by a linear function of the logarithm of energy and varies from 1.42 at 30~GeV to 1.26 at 700~GeV, making the spectrum steeper. The absolute electron spectrum measured in the present paper is shown in Fig.~\ref{ToAbs}, along with the ATIC electron spectrum of \citet{CHANG-NATURE2008} and the results of the space spectrometer Fermi/LAT \citep{FERMILAT2009}.
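The linear-in-log(E) correction factor quoted above amounts to a two-point interpolation; the anchor values (1.42 at 30~GeV, 1.26 at 700~GeV) are from the text, while the helper itself is an illustrative sketch.

```python
import numpy as np

def atm_correction(E_GeV, E1=30.0, c1=1.42, E2=700.0, c2=1.26):
    """Atmospheric correction factor, linear in log10(E) between the
    two anchor points quoted in the text."""
    t = (np.log10(E_GeV) - np.log10(E1)) / (np.log10(E2) - np.log10(E1))
    return c1 + t * (c2 - c1)
```

Since the factor decreases with energy, multiplying the measured spectrum by it steepens the spectrum, as stated above.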
\conclusions
Our results confirm the existence of the ATIC excess, but this excess is resolved into a fine structure. For energies below 200~GeV, our spectrum is identical in form to the spectrum of Fermi/LAT. The fine structure measured by ATIC above 200~GeV can be smoothed by using energy bins as wide as those in the Fermi/LAT spectrum. Moreover, the ATIC spectrum is related to the part of the southern sky with declination between $-45^{\mathrm o}$ and $-90^{\mathrm o}$, while the Fermi/LAT spectrum integrates the flux over the whole sky. Taking into account the possible anisotropy of the electron energy spectrum, it would be more correct to compare the ATIC spectrum with a spectrum measured by Fermi/LAT for the same part of the sky and with narrower energy bins.
\begin{acknowledgements}
The work is supported by grant of RFBR 08-02-00238.
\end{acknowledgements}
\section{Introduction}\label{intro}
\noindent
The standard cosmological model, based on the idea that the energy budget of the universe is currently dominated by a tiny cosmological constant $\Lambda$ plus
Cold Dark Matter ($\Lambda$CDM), predicts that the initial seeds for galaxy formation are halos with relatively low masses of the order of $10^6 M_\odot$.
The initial James Webb Space Telescope (JWST) imaging
via the Cosmic Evolution Early Release Science (CEERS) survey has recently reported a population of surprisingly massive galaxy candidates at redshift $z\gtrsim 8$ with stellar masses of the order of $10^9 M_\odot$ \cite{2022arXiv220712338A,2022arXiv220712474F,2022arXiv220801612H,2022arXiv220802794N,Yan:2022sxd}.
Even though a spectroscopic follow-up will be necessary to confirm the observation, which is based on photometry only, the early formation of massive galaxies reported by the JWST is hardly reconcilable with the standard $\Lambda$CDM expectations, which would require an implausibly high star formation efficiency (SFE), even larger than the cosmic baryon mass budget in collapsed structures.
A useful quantity to assess the viability of the $\Lambda$CDM model is the stellar mass density $\rho_*(>M_*)$
predicted above a given mass scale $M_*$.
The stellar mass is related to the
average baryon mass within each halo through
the SFE, which we define as $\epsilon$, by the relation
\begin{equation}
M_* = \epsilon
(\Omega_\text{\tiny b}/\Omega_\text{\tiny M}) M
= \epsilon f_\text{\tiny b} M,
\end{equation}
with $f_\text{\tiny b} = 0.156$ the baryon fraction as measured by Planck \cite{Planck:2018vyg}.
In the following, and in order to be on the conservative side, we will
identify the stellar mass with the baryon mass contained within a given halo, that means fixing the SFE to $\epsilon =1$. This conservative choice maximises the stellar mass predicted by a given scenario.
The comoving cumulative stellar mass density
contained within galaxies above a certain stellar mass $M_{\star}$ reads
\begin{align}
\rho_*(>M_*,z)
&
=\epsilon f_{\rm b}
\int_{M_* / (\epsilon f_\text{\tiny b})}^{\infty}
\frac{{\mathrm{d}} n(M,z)}{{\mathrm{d}} M} M {\mathrm{d}} M\ , \label{esmd}
\end{align}
where $n(M)$ is the CDM halo mass function.
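Eq.~(\ref{esmd}) can be evaluated numerically as sketched below, for any user-supplied halo mass function $\mathrm{d}n/\mathrm{d}M$; the grid limits and the power-law mass function used in testing are illustrative only.

```python
import numpy as np

def _trapz(y, x):
    # simple trapezoidal rule (avoids NumPy-version differences)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def stellar_mass_density(M_star, dndM, eps=1.0, f_b=0.156,
                         M_max=1e14, n=400):
    """rho_*(>M_star) = eps * f_b * int_{M_star/(eps f_b)} (dn/dM) M dM.

    dndM : callable halo mass function in Mpc^-3 Msun^-1
    eps  : star formation efficiency, f_b : baryon fraction
    """
    M = np.logspace(np.log10(M_star / (eps * f_b)), np.log10(M_max), n)
    return eps * f_b * _trapz(dndM(M) * M, M)
```

Note how the lower integration limit is the halo mass corresponding to the requested stellar mass, $M_*/(\epsilon f_\text{\tiny b})$, exactly as in Eq.~(\ref{esmd}).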
Recently, based on 14 galaxy candidates with masses in the range $\sim 10^{9}\divisionsymbol 10^{11}\ M_{\odot}$ at $7<z<11$ identified in the JWST CEERS program, Ref.~\cite{2022arXiv220712446L} derived the cumulative stellar mass density at $z=8$ and 10 for $M_{\star}\gtrsim 10^{10}\ M_{\odot}$. They found
at $z\simeq 10$
\begin{align}
&\rho_*(>10^{10} M_\odot)\simeq 1.3^{+1.1}_{-0.6}\cdot 10^6 M_\odot\,{\rm Mpc}^{-3},
\nonumber\\
&\rho_*(>10^{10.5} M_\odot)\simeq 9^{+11}_{-6}\cdot 10^5 M_\odot\,{\rm Mpc}^{-3}.
\end{align}
These values are larger than the $\Lambda$CDM predictions by a factor $\sim 50$, even allowing maximum efficiency $\epsilon=1$, or invoking extreme value statistics \cite{Lovell:2022bhx}.
While several extensions of the $\Lambda$CDM scenario have already been put forward in the recent literature
\cite{Menci:2022wia,Liu:2022bvr,Gong:2022qjx}, they all appeal to new ingredients in the late time evolution of the universe. The goal of this paper is to discuss a possible solution which invokes a change in the initial conditions of the cosmological perturbations giving rise to the DM halos, that is, non-Gaussianity (NG) \cite{Bartolo:2004if}. Indeed, a possible source of NG could be primordial in origin, being specific to a particular mechanism for the generation of the cosmological perturbations. It is known that NG in the initial conditions may change the abundance of DM halos, especially in the high mass range of the halo mass function. As such, primordial NG may in principle provide a boost in forming highly massive and bright galaxies. In the following, we characterize the nature of NG, specifying which properties NG has to possess to be in agreement with the JWST data.
The paper is organized as follows. In Sec.~II we discuss how one can model the Gaussian and NG halo mass functions, by also checking their validity with dedicated N-body simulations.
In Sec.~III we compare models with various NG signatures to the JWST data, while our conclusions are offered in Sec.~IV.
\section{Halo mass function}
\subsection{Gaussian}
\noindent
We describe the Gaussian differential halo abundance as
\begin{equation}
{{\mathrm{d}} n\over {\mathrm{d}} M} =
F(\nu){\overline{\rho}_\text{\tiny M}\over M^2}
{{\mathrm{d}} \ln\sigma^{-1}\over {\mathrm{d}} \ln M},
\label{dndm}
\end{equation}
where $\overline{\rho}_\text{\tiny M}$
is the background average matter density and $\nu=\delta_c/\sigma(M,z)$, with
$\delta_c = 1.686$ the critical linear overdensity for collapse and
$\sigma(M,z)$ the variance of the smoothed linear density field.
The smoothing scale $R$ is related to the halo mass through the relation
$R=\left ( {3M}/{4\pi\overline{\rho}_\text{\tiny M}}\right )^{1/3}$.
Linear density fields evolve with time according to the linear growth factor $D(z)$ and we assume a CDM form for the linear power spectrum.
The variance of linear density perturbations smoothed on scale $R$ is therefore computed as
\begin{equation}
\sigma^2
=\langle \delta^2 \rangle
=
\int
{{\mathrm{d}}^3 k\over (2 \pi)^3}
W^2(kR)
\mathcal M^2(k,z)
P_\zeta(k)
\label{vari},
\end{equation}
where $P_\zeta(k)$ is the linear comoving curvature power spectrum,
$W(kR)$ is the Fourier transform of a top-hat spherical window function
\begin{equation}
W(kR)=3 \left ( {{\sin(kR)\over (kR)^3}-{\cos(kR)\over (kR)^2}}\right ) ,
\end{equation}
and we defined
\begin{equation}
\mathcal M(k,z) = \frac{2}{5} \frac{k^2 T(k)D(z)}{\Omega_\text{\tiny M} H_0^2},
\end{equation}
in terms of the linear transfer function $T(k)$, the matter abundance $\Omega_\text{\tiny M}$ and present day Hubble rate $H_0,$
following the standard conventions in the literature.
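The variance integral of Eq.~(\ref{vari}) can be sketched numerically as follows; here $k$ and the combined matter power spectrum $\mathcal M^2(k,z) P_\zeta(k)$ are assumed to be given as tabulated arrays, and the test uses an artificial power law chosen so that the integral is analytic.

```python
import numpy as np

def _trapz(y, x):
    # simple trapezoidal rule (avoids NumPy-version differences)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def sigma2(R, k, P_m):
    """sigma^2(R): variance of the density field smoothed with a
    spherical top-hat of radius R.

    k, P_m tabulate the linear *matter* power spectrum, i.e. the
    combination M^2(k,z) P_zeta(k).  The integral is done in ln k.
    """
    x = k * R
    W = 3.0 * (np.sin(x) / x**3 - np.cos(x) / x**2)  # top-hat window
    return _trapz(k**3 * P_m * W**2 / (2.0 * np.pi**2), np.log(k))
```

For $P_m = 2\pi^2/k^3$ the integrand reduces to $W^2(kR)$, so for $kR \ll 1$ the result is just the number of e-folds in $k$, which is what the test checks.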
\subsection{Non-Gaussianity}
\noindent
The presence of NGs in the initial conditions alters the abundance of dark matter halos. Several ways of modeling this effect have been proposed in the past (see e.g. \cite{Biagetti:2019bnp} for a recent review, and references therein).
The general approach is based on the Edgeworth expansions of the Probability Distribution Function (PDF) of the matter density field, or of the level excursion probability of overcoming a threshold for collapse \cite{Matarrese:2000iz,LoVerde:2007ri,Desjacques:2009jb}. In the limit of weak enough NG, the expansion is usually truncated to the leading term, which is generated by the three-point function of the primordial field.
As a result, the exponential tail of the mass function \eqref{dndm} is modified by a non-vanishing skewness and one can correct the Gaussian halo mass function with a multiplicative
factor,
\begin{equation}
{{\mathrm{d}} n_\text{\tiny NG} \over {\mathrm{d}} M}=
{{\mathrm{d}} n\over {\mathrm{d}} M}
\times C_\text{\tiny NG}(M),
\end{equation}
which we take to be the one proposed by \cite{Desjacques:2009jb},
\begin{equation}
C_\text{\tiny NG}(M) =
\left[
\frac{\hat \delta_c^2}{6 \Delta} \frac{{\mathrm{d}} S_3}{{\mathrm{d}} \ln \sigma} + \Delta
\right]
\times
\exp \left( \frac{S_3 \hat \delta_c^3}{ 6 \sigma^2} \right).
\end{equation}
Here we introduced
$\hat \delta_c = 0.949 \times \delta_c $, $\Delta \equiv \sqrt{ 1 - \hat \delta_c S_3 / 3}$ and
the skewness $S_3$ can be computed by integrating the matter bispectrum
\begin{align}\label{S3skew}
S_3
\equiv
\frac{\langle \delta^3\rangle }{\sigma^4 }
=
\frac{1}{\sigma^4}
\int
\left (
\prod_{i=1}^3
{{\mathrm{d}}^3 k_i\over (2 \pi)^3}
\right )
\mathcal{B}(\mathbf{k}_1,\mathbf{k}_2,\mathbf{k}_3),
\end{align}
which in turn is sourced by the primordial curvature bispectrum $B_\zeta$ through
\begin{equation}
\mathcal{B}(\mathbf{k}_1,\mathbf{k}_2,\mathbf{k}_3)=
\mathcal M(k_1,z)
\mathcal M(k_2,z)
\mathcal M(k_3,z)
\times B_\zeta(k_1,k_2,k_3).
\end{equation}
The specific type of NG that sources the change in the halo mass function is fully specified by $B_\zeta$ in a model-dependent way. In this work, we focus on the so-called \emph{local}-type NG, which includes the class of models where local interactions among fields take place on superhorizon scales (see \cite{Bartolo:2004if} for a review).
For these models, the primordial bispectrum takes the simple, factorizable, form of
\begin{equation}
B_\zeta(k_1, k_2, k_3) = \frac{6}{5} f_\text{\tiny NL} [ P_\zeta(k_2)P_\zeta(k_3) + {\rm perm}],
\label{eq:local}
\end{equation}
where $f_\text{\tiny NL}$ parametrizes the amplitude of the bispectrum and $P_\zeta$ is the primordial curvature power spectrum.\footnote{Assuming a constant $f_\text{\tiny NL}$, one can show by directly integrating Eq.~\eqref{S3skew} that an accurate fit of the skewness as a function of both scale and redshift is given by (see e.g. \cite{Chongchitnan:2010xz})
\begin{equation}
S_3 (M,z)=
\frac{1.8\times 10^{-4} f_\text{\tiny NL} }{\sigma^{0.838}(M,z)D^{0.162}(z)},
\end{equation}
that we adopt in the remainder of this work when dealing with a constant $f_\text{\tiny NL}$. We have checked that the fit is accurate even up to redshifts $z \simeq 10$.}
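The NG correction factor and the skewness fit above can be sketched together as follows; for the constant-$f_\text{\tiny NL}$ fit, $S_3 \propto \sigma^{-0.838}$ implies ${\mathrm{d}} S_3/{\mathrm{d}}\ln\sigma = -0.838\,S_3$, which is the derivative passed in the test. The function names are illustrative.

```python
import numpy as np

DELTA_C = 1.686
DHAT = 0.949 * DELTA_C  # rescaled collapse threshold used in C_NG

def S3_fit(sigma, D, fNL):
    """Fit to the skewness for constant local fNL (formula in the text)."""
    return 1.8e-4 * fNL / (sigma**0.838 * D**0.162)

def C_NG(sigma, S3, dS3_dlnsigma):
    """Multiplicative NG correction to the halo mass function
    (the Desjacques et al. form quoted in the text)."""
    Delta = np.sqrt(1.0 - DHAT * S3 / 3.0)
    prefactor = (DHAT**2 / (6.0 * Delta)) * dS3_dlnsigma + Delta
    return prefactor * np.exp(S3 * DHAT**3 / (6.0 * sigma**2))
```

As expected, $C_\text{\tiny NG} \to 1$ for $S_3 \to 0$, while a positive skewness enhances the abundance of rare (small-$\sigma$) halos through the exponential factor.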
While in the most popular version of the local NG $f_\text{\tiny NL}$ is scale-independent, in our comparison with JWST data we are going to test extensions that allow $f_\text{\tiny NL}$ to run with scale.
This generalization is well-motivated for several models of interactions taking place during inflation \cite{Chen:2005fe,Khoury:2008wj,Byrnes:2010ft,Riotto:2010nh,Huang:2010cy,Huang:2010es,Byrnes:2011gh} and its implications have been thoroughly investigated in CMB observations and for galaxy surveys observing at low redshift \cite{LoVerde:2007ri,Sefusatti:2009xu,Becker:2010hx,Giannantonio:2011ya,Becker:2012yr,Agullo:2012cs,Biagetti:2013sr}.
The corresponding bispectrum in this scale dependent
model is
\begin{equation}
B(k_1, k_2, k_3) = \frac{6}{5} [f_\text{\tiny NL}(k_1) P_\zeta(k_2)P_\zeta(k_3) + {\rm perm}],
\label{eq:fnlk_bispec}
\end{equation}
where different functional forms for $f_\text{\tiny NL}(k)$ we adopt are specified in the next section.
\subsection{Testing high-redshift halo mass functions with N-body simulations}
\noindent
Previous literature has thoroughly compared theoretical predictions of the halo mass function both for Gaussian \cite{Jenkins:2000bv,VIRGO:2001szp,Warren:2005ey,Reed:2003sq,Reed:2006rw,Lukic:2007fc,Cohn:2007xu,Tinker:2008ff,Tinker:2010my,Despali:2015yla} and NG \cite{Moscardini:1990zh,Weinberg:1991qe,Matarrese:1991sj,Park:1991mh,Gooding:1991ys,Borgani:1993nz,Dalal:2007cu,Pillepich:2008ka,Achitouv:2011sq,Achitouv:2013oea,Stahl:2022did} initial conditions to simulations. However, most results are at low redshifts, $z \lesssim 2$, and none of them includes comparisons at higher redshift with NG initial conditions. Hence, it is important to validate the predictions at the redshifts relevant for the galaxies observed by JWST that are discussed in this paper.
To perform our analysis, we use a subset of the \textsc{Eos}\xspace \textsc{Dataset}\xspace \cite{Matteorepo},
that includes simulations with Gaussian as well as NG initial conditions. The initial particle displacement is generated at $z_{in}=99$ using \texttt{2LPTic} \cite{Scoccimarro:1997gr}, and its extended version \cite{Scoccimarro:2011pz} for local NG initial conditions, with $f_{\rm NL}=500$ as the value of the non-linearity parameter. The linear power spectrum given as input is computed using CLASS \cite{Blas:2011rf} and assumes a flat $\Lambda$CDM cosmology with $n_s=0.967$, $\sigma_8=0.85$, $h=0.7$, $\Omega_m=0.3$ and $\Omega_b=0.045$. The public code \texttt{Gadget2} \cite{Springel:2005mi} is used to evolve $512^3$ particles in a cubic box of $64$ Mpc$/h$ per side, which provides enough resolution to resolve dark matter halos down to $M \sim 10^{10} M_\odot$. We run $30$ different realizations for both the Gaussian and NG initial conditions.
We identify halos in each simulation using the code \texttt{Rockstar} \cite{Behroozi:2011ju}, with a lower cutoff of a minimum of $100$ particles per halo, resulting in a minimum halo mass of $M_{\rm min} \simeq 2.3\times 10^{10} M_\odot$. The algorithm used is Friends-of-Friends (FoF) with a
linking length $\lambda = 0.28$ at redshifts $z=8$ and $10$, and it estimates the halo mass with a Spherical Overdensity approach, with overdensity $\Delta = 200\, \bar \rho_{\rm \tiny M}$.
As already shown in \cite{Biagetti:2016ywx} on a similar set of halos at redshifts $z=0$, $1$ and $2$, the Tinker fit \cite{Tinker:2010my} provides a good agreement with the halo mass function measured on the simulations. Hence, in what follows, we adopt the Tinker halo mass function parametrized as
\begin{align}
& F_\text{\tiny Tinker} = 0.368
\left [
1+
\left ( \beta\nu \right ) ^{-2\phi}
\right ]
\nu^{2\eta+1}e^{-\gamma\nu^2/2},
\nonumber \\
& \qquad \beta = 0.589(1+z)^{0.2}, \phi=-0.729(1+z)^{-0.08},
\nonumber\\
& \qquad \eta = -0.243(1+z)^{0.27}, \gamma = 0.864(1+z)^{-0.01},
\end{align}
where $\nu$ is computed using the same linear power spectrum provided as input to the simulations.
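The Tinker multiplicity function with these redshift-dependent parameters translates directly into code (a sketch; $\nu$ is assumed to be supplied from a separate $\sigma(M,z)$ computation):

```python
import numpy as np

def F_tinker(nu, z):
    """Tinker multiplicity function F(nu) with the redshift-dependent
    parameters quoted in the text."""
    beta = 0.589 * (1.0 + z) ** 0.2
    phi = -0.729 * (1.0 + z) ** (-0.08)
    eta = -0.243 * (1.0 + z) ** 0.27
    gamma = 0.864 * (1.0 + z) ** (-0.01)
    return (0.368 * (1.0 + (beta * nu) ** (-2.0 * phi))
            * nu ** (2.0 * eta + 1.0)
            * np.exp(-gamma * nu**2 / 2.0))
```

The Gaussian exponential cutoff $e^{-\gamma\nu^2/2}$ makes the abundance of rare, high-$\nu$ halos fall off steeply, which is why the NG correction factor can have a large effect precisely in the mass range probed by JWST.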
\begin{figure}[t!]
\centering
\includegraphics[width=0.5\textwidth]{Plots/dndlnm_G_NB.pdf}
\caption{
Halo mass distribution at redshift $z = 8$ and $z = 10$
assuming either Gaussian or non-Gaussian ($f_\text{\tiny NL} = 500$) curvature perturbations and compared to N-body simulations (see text).
The bands around the simulation data points indicate the standard error on the mean.
}
\label{fig:dndm}
\end{figure}
In Fig.~\ref{fig:dndm}, we show the halo mass function at various redshifts in the absence of NGs and assuming NG initial conditions of the local type with $f_{\rm NL}=500$. Given the good agreement shown both for Gaussian and NG initial conditions even at $z=8$ and $10$, we are confident that our theoretical predictions for the halo mass function are realistic within the approximations made.
\section{The JWST data and non-Gaussianities}\label{sec:JWSTandNGs}
\noindent
Based on the results of the previous section,
we compute the (co-moving) cumulative stellar mass density contained within galaxies above a certain stellar mass $M_{\star}$ integrating Eq.~(\ref{esmd})
including the presence of local NG. For these computations we use a value of $\sigma_8=0.815$, which is closer to the current best-fit model quoted in the latest Planck results \cite{Planck:2018vyg}. All other cosmological parameters are taken to be the same as the simulated data presented in the previous section.
In Fig.~\ref{fig:Labbe} we show the comparison between the JWST observations from Ref.~\cite{2022arXiv220712446L} and the heavy halo star density for different values of $f_\text{\tiny NL}$ in the case in which $f_\text{\tiny NL}$ is constant.
Large NGs can easily reduce the tension with observations at redshift $z \approx 10$,
but do not help in explaining the mild evolution between the two redshift bins.
Such large NG are however ruled out by CMB anisotropy data \cite{Planck:2019kim} and eBOSS clustering data \cite{Castorina:2019wmr}, which constrain local-type NG to be of order $|f_\text{\tiny NL}| \lesssim 10$ and $|f_\text{\tiny NL}| \lesssim 26$ at $95\%$ confidence level, respectively.
On the other hand, one should take into account the fact that these constraints are valid at relatively large scales, $k_{\rm constraints} \lesssim 0.3\,h/$Mpc, while the relevant scale for these massive galaxies at redshifts $z=8$ and $10$ is $k \sim 1/R \gtrsim 1.5\,h/$Mpc, where we choose $R$ to be the Lagrangian radius corresponding to the halo masses of $M\sim 10^{11} M_\odot$ considered in our analysis (i.e. stellar masses around $M_*\sim10^{10} M_\odot$). Around these small scales, Ref.~\cite{Sabti:2020ser} has put constraints using UV galaxy luminosity functions from the Hubble Space Telescope (HST) of about $f_\text{\tiny NL} \lesssim 500$ at $95\%$ confidence level, but assuming that NGs are switched on
already at scales $k_{\rm cut} \sim 0.15 \, h/$Mpc. With increasing $k_{\rm cut}$, the constraints loosen considerably \cite{Sabti:2020ser}.
We therefore consider the possibility that the $f_\text{\tiny NL}$ parameter runs enough with scale to evade current constraints, and simultaneously explain the JWST high-redshift galaxies.
\begin{figure}[t!]
\centering
\includegraphics[width=0.49\textwidth]{Plots/RhoMstar_Local_z8.pdf}
\includegraphics[width=0.49\textwidth]{Plots/RhoMstar_Local_z10.pdf}
\caption{{\textit{Left:}}
Co-moving cumulative stellar mass density within galaxies with
stellar mass above $M_*$
at redshift $z = 10$.
The black bars indicate 1 and 2 $\sigma$ range inferred from the JWST observations \cite{2022arXiv220712446L}, where the latter is extrapolated assuming a Gaussian distribution.
The same convention is used in the following figures.
{\textit{Right:}}
Same as left for $z = 8$.
}
\label{fig:Labbe}
\end{figure}
A first case is the so-called running NG \cite{Sefusatti:2009xu} for which
\begin{equation}\label{eq:runNG}
{\it i)}\qquad
f_\text{\tiny NL}(k) = f_\text{\tiny NL}^0
\left ( \frac{k}{k_\text{\tiny max}}\right ) ^{n_{f_\text{\tiny NL}}},
\end{equation}
where, depending on the running, we consider
\begin{align}
f_\text{\tiny NL}^0 = 17 \quad \text{for}& \quad n_{f_\text{\tiny NL}} = 1,
\nonumber \\
f_\text{\tiny NL}^0 = 8.3 \quad \text{for}& \quad n_{f_\text{\tiny NL}} = 2,
\nonumber \\
f_\text{\tiny NL}^0 = 5.6 \quad \text{for}& \quad n_{f_\text{\tiny NL}} = 2.4.
\end{align}
The corresponding stellar mass densities are plotted in Fig.~\ref{fig:Labberunning} (left panel) and compared to the JWST data.
We observe that a sufficiently large $n_{f_\text{\tiny NL}} \gtrsim 2$
may help in reducing the tension,
but gives rise to a halo mass function that is too steep and tilted towards small halo masses, which can hardly reach the largest datapoint at redshift $z = 10$ while being compatible with the others.
In Fig.~\ref{fig:S3running} we plot the corresponding skewness $S_3$, where we have chosen
$k_\text{\tiny max} \simeq k_\text{\tiny constraints}$
to be the smallest scale constrained by LSS observations (see Ref.~\cite{Sabti:2020ser}).
The amplitude of $f_\text{\tiny NL}(k_\text{\tiny max})$ has been fixed
such that $S_3$ saturates the current bound from the LSS.\footnote{Note that the model of Eq.~\eqref{eq:runNG}, besides being too steep to explain the JWST observations, also produces a very large skewness at small scales (cf.~Fig.~\ref{fig:S3running}). Such large values would in any case be in contrast with the truncation made on the Edgeworth expansion, where the effects of the kurtosis, etc., are neglected in the calculation of the halo mass function.}
\begin{figure*}[t!]
\centering
\includegraphics[width=0.99\textwidth]{Plots/plt3.pdf}
\caption{
Stellar mass density above $M_*$
as shown in Fig.~\ref{fig:Labbe}
assuming different NG models.
{\textit{Left:}}
The running-NG model.
\textit{Center:}
Constant NG with a sharp cut
at the scale corresponding to halo masses $M_\text{\tiny cut}$.
In the lower panel, the transition to negligible values of $S_3$ brings the predictions towards the Gaussian case.
{\textit{Right:}}
NG bump.}
\label{fig:Labberunning}
\end{figure*}
A possible solution to this problem is to take an $f_\text{\tiny NL}(k)$ that is constant above some scale $k_\text{\tiny cut}$ (corresponding to a halo mass $M_\text{\tiny cut}$) and vanishing at smaller momenta, that is (see e.g. \cite{Sabti:2020ser})
\begin{align}
{\it ii)}\qquad
B_\zeta(k_1,k_2,k_3) = \frac{6}{5}
f_\text{\tiny NL}^0 P_\zeta(k_2)P_\zeta(k_3)
\left[\prod_{i=1}^3
\Theta(k_i - k_\text{\tiny cut})
+ {\rm perm.}\right].
\end{align}
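The effect of the sharp cut can be illustrated with a minimal sketch (the toy power spectrum and all numbers below are assumptions for illustration; we interpret the bracket as cutting all three momenta, with the permutations acting on the power-spectrum products):

```python
def theta(x):
    """Heaviside step function."""
    return 1.0 if x > 0 else 0.0

def p_zeta(k, A=2.1e-9, ns=0.96, k_pivot=0.05):
    """Toy nearly scale-invariant spectrum standing in for P_zeta (assumed convention P ~ k^{n_s-4})."""
    return A * (k / k_pivot) ** (ns - 4.0)

def b_zeta_cut(k1, k2, k3, f_nl0, k_cut):
    """Local-shape bispectrum switched on only when all momenta exceed k_cut."""
    cut = theta(k1 - k_cut) * theta(k2 - k_cut) * theta(k3 - k_cut)
    perms = (p_zeta(k2) * p_zeta(k3)
             + p_zeta(k1) * p_zeta(k3)
             + p_zeta(k1) * p_zeta(k2))
    return 1.2 * f_nl0 * cut * perms   # 6/5 = 1.2

# vanishes if any leg is below the cut, nonzero otherwise
print(b_zeta_cut(0.1, 1.0, 1.0, 450.0, 0.5))   # 0.0
print(b_zeta_cut(1.0, 1.0, 1.0, 450.0, 0.5) > 0.0)
```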
Such a scale dependent NG can be obtained in inflationary models where besides the inflaton field there is another spectator field which experiences a transition from massless to massive at a scale $\approx k_\text{\tiny cut}$ \cite{Riotto:2010nh}.
The corresponding result is shown in Fig.~\ref{fig:Labberunning} (central panel), where
we plot the stellar mass density above $M_*$ assuming different values of $f_\text{\tiny NL}^0$
and the scale where NGs are switched off corresponding to
$M_\text{\tiny cut}= 10^{12} M_\odot$.
This indicates that, in order to reach the JWST observations, large NGs are needed at least starting from masses below $\approx 10^{12} M_\odot$.
The resulting shape of $S_3$ obtained in this scenario is shown in Fig.~\ref{fig:S3running}. Note that these large values of $f_\text{\tiny NL}$ are still allowed by constraints quoted by \cite{Sabti:2020ser}, which sensitively depend on $k_{\rm cut}$.
Finally, we consider a model in which the NG correction is localised within a bump at scales close to the one observed by Ref.~\cite{2022arXiv220712446L}.
We assume the functional form
\begin{equation}
{\it iii)}\qquad
f_\text{\tiny NL}(k) =
\frac{f_\text{\tiny NL}^0 }{\sqrt{2 \pi} w}
\exp\left [ -\frac{\log^2 \left ( k/k_0\right )}{2 w^2}\right ]
\end{equation}
and fix the central scale to be $k_0 = 1.4 h/{\rm Mpc}$
which corresponds to the masses detected by \cite{2022arXiv220712446L} at redshift $z=10$.
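A quick numerical sketch of this log-normal bump (the normalisation $f_\text{\tiny NL}^0$ and width $w$ below are illustrative assumptions, not the values used in the figures):

```python
import math

def f_nl_bump(k, f_nl0, k0, w):
    """Log-normal bump in f_NL(k), centred at k0 with logarithmic width w."""
    return (f_nl0 / (math.sqrt(2.0 * math.pi) * w)
            * math.exp(-math.log(k / k0) ** 2 / (2.0 * w ** 2)))

k0 = 1.4               # h/Mpc, central scale quoted in the text
f0, w = 400.0, 0.5     # illustrative normalisation and width (assumptions)

peak = f_nl_bump(k0, f0, k0, w)         # maximum of the bump, at k = k0
tail = f_nl_bump(10.0 * k0, f0, k0, w)  # strongly suppressed away from k0
print(peak, tail)
```

The prefactor $1/(\sqrt{2\pi}\,w)$ means that narrowing the bump raises its peak value for fixed $f_\text{\tiny NL}^0$, which is why a narrow bump can produce large NG at $k_0$ while staying unconstrained at other scales.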
In Fig.~\ref{fig:Labberunning} we show the corresponding stellar mass density above $M_*$ with varying assumptions on
the width of the bump $w$, while the resulting skewness is plotted in Fig.~\ref{fig:S3running}.
We see that a large normalisation $f_\text{\tiny NL}^0$
and a relatively narrow width may help to reduce the tension between the JWST observations and the cosmological model.
\begin{figure}[t!]
\centering
\includegraphics[width=0.5\textwidth]{Plots/S3_running.pdf}
\caption{Skewness $S_3$ as a function of scale $R$ (or, equivalently, halo mass at $z=10$, indicated on top).
The gray region corresponds to values of $S_3$ obtained using excluded values of $f_\text{\tiny NL}$ due to constraints from \cite{Castorina:2019wmr}, assuming local-type NG. We use the limiting value of $f_\text{\tiny NL}=26$ at $95\%$ confidence level using a $k_{\rm max} = 0.3\,h/$Mpc for a conservative assumption on the response of quasars to NG.
We indicate $S_3$ obtained with the various models considered in this work with the same color used in Fig.~\ref{fig:Labberunning} and labelled in the inset. }
\label{fig:S3running}
\end{figure}
\section{Conclusions}\label{sec:conclusions}
\noindent
In this paper we have investigated whether changing the initial conditions of the cosmological perturbations by adding some amount of NG helps in boosting the formation of massive and bright galaxies, as recently reported by JWST.
We tested our modelling of the NG correction to the halo mass function against N-body simulations and checked whether NG scenarios compatible with current large-scale and low-redshift observations may help explain the recent data.
Our findings indicate that a large and strongly scale dependent NG (which switches on at small scales) is needed to alleviate the tension between the cosmological model and the observations.
We have modelled the halo mass distribution with the Tinker model \cite{Tinker:2010my}, which was used in the model validation against N-body simulations.
Notice, however, that different choices have been adopted in the literature (e.g. Sheth-Tormen \cite{Sheth:2001dp}) that lead to a larger HMF tail and, consequently, to smaller values of $f_\text{\tiny NL}$ required to alleviate the tension. We have verified this intuition by repeating our analysis with the Sheth-Tormen mass function, for which the values of $f_\text{\tiny NL}$ needed to alleviate the tension are about a factor of two smaller.
We notice once again that the small evolution of the halo mass function between redshift $8$ and $10$ reported in Ref.~\cite{2022arXiv220712446L} poses a threat to our explanation as it is not easily captured in the models we tested and would need a rather artificial redshift dependence of the theoretical prediction. Such a caveat appears to be valid for most of the solutions recently proposed in the literature.
It should be noted that our analysis does not include a complete assessment of parameter degeneracies within the $\Lambda$CDM model. In particular, $\sigma_8$, which parametrizes the amplitude of matter fluctuations, is known to also provide an enhancement on the tail of the HMF. Leaving all other cosmological parameters fixed and setting $f_\text{\tiny NL}=0$, we have verified that to explain the observed galaxies would require values of $\sigma_8\gtrsim 0.9$ which are significantly excluded by Planck \cite{Planck:2018vyg}.
We are aware that there are several uncertainties related to the JWST measurements that might solve the tension with respect to the $\Lambda$CDM independently from NG. Current measurements rely on identifying high redshift candidates using photometric template fitting which however are not tested at such high redshifts \cite{2022arXiv220807879S}. Another possibly large uncertainty is added in the estimation of $M_*$. For it the Chabrier Initial Mass Function is typically adopted \cite{Chabrier:2003ki}, which however is tested at much lower masses and redshifts. Furthermore, the effect of a large scatter in the star formation \cite{2022arXiv220812826M} as well as the impact of dust attenuation \cite{2022arXiv220906840Z} may introduce further contamination in the mass estimation. The spectroscopic follow-up and further testing on the astrophysical uncertainties will soon shed more light on the issue.
\begin{acknowledgments}
\noindent
The authors would like to thank Pierluigi Monaco for very useful comments on the draft and for discussions on the possible sources of uncertainty on the JWST measurements. M.B. acknowledges support from the NWO project “Cosmic origins from simulated universes” for the computing time allocated to run a subset of the Eos simulations
on \textsc{Snellius}, a supercomputer that is part of the Dutch National Computing Facilities.
G.~F. acknowledges financial support provided under the European
Union's H2020 ERC, Starting Grant agreement no.~DarkGRA--757480 and under the MIUR PRIN programme, and support from the Amaldi Research Center funded by the MIUR program ``Dipartimento di Eccellenza" (CUP:~B81I18001170001). This work was supported by the EU Horizon 2020 Research and Innovation Programme under the Marie Sklodowska-Curie Grant Agreement No. 101007855. A.R. acknowledges financial support provided by the Boninchi Foundation.
\end{acknowledgments}
\section{Introduction}
Single quantum emitters coupled to optical cavities constitute a
platform that allows conversion of quantum states of one physical
system to those of another in an efficient and reversible manner.
Such coupled cavity quantum electrodynamic (cQED) systems have
been the subject of intense studies as one of the building blocks
for scalable quantum information processing and long distance
quantum communication \cite{Kimble-internet2008, rempe_entangle_2012}. Over the years,
these systems have evolved from conventional atomic systems
\cite{Kimble94} to include those based on solid state emitters, such as
quantum dots \cite{article:yoshio04, andrei_njp}, nitrogen vacancy
(NV) centers \cite{andrei_NV} and recently transmon qubits in the
context of circuit QED \cite{article:dispersiveQED}. In
particular, a cQED system consisting of a semiconductor
self-assembled quantum dot (QD) coupled to photonic crystal
nanocavity was used to demonstrate generation of non-classical
states of light \cite{AF_natphys,arka_tunneling,blockade_imamog},
electro-optic modulation \cite{andrei_eom}, and ultrafast all
optical switching
\cite{arka_switching,edo_switching,atac_switching}. Although significant progress has been made in QD-based cQED, most of the
reported works use a neutral QD, which effectively acts as a
two-level quantum emitter with an optical frequency transition
from the ground state to the single exciton state. While such
two-level system could in principle be used as a qubit
\cite{finley_spin,waks_cnot}, the short life-time of the exciton state ($<1$
ns) makes it unsuitable for practical applications.
On the other hand, the spin states of a charged QD, into which a
single electron or a hole was introduced, have been shown to
possess coherence times in the microseconds range
\cite{article:press08, article_Erik_Spin}. The use of ultrafast
optical techniques with charged QDs provides the possibility of
performing a very high number of spin manipulations within the
spin coherence time and opens avenues for their use as qubits for
quantum information applications. However, the efficiency of spin
initialization \cite{article:xu07} and manipulation achieved so
far is not high. To attain the efficiency necessary for practical
applications, one needs to enhance the light-matter interaction.
This can be achieved by embedding the charged QD in a cavity. Several groups have so far demonstrated deterministic charging
of a single QD within a photonic crystal cavity \cite{imamoglu_spin}, and magnetic field tuning of a single QD strongly
coupled to a photonic crystal cavity \cite{waks_cnot}. Recently, manipulation of a QD spin in a photonic
crystal cavity was reported by Carter et al.
\cite{article_Gammon}. However, the configuration used in
their approach does not permit getting full advantage of the
photonic interface. We also note that a lot of effort has been directed towards spin-photon interfaces based on
NV centers in diamond \cite{lukin_entanglement}, but embedding them inside cavities has so far proven difficult.
\begin{figure}[b]
\centering
\includegraphics[width=2.8in]{Spin_schematic.pdf}
\caption{(color online) (a) Scanning electron microscope image of
a bimodal photonic-crystal nanocavity fabricated in a GaAs
membrane with embedded quantum dots. The scalebar is $500$ nm.
(b) The schematics and the optical transitions of a four-level
system arising when a magnetic field is applied in the Voigt
configuration (see plot (a)) to a QD charged with a single
electron. The cavity mode $a$ has H polarization (see plot (a))
and can couple to transitions $|1\rangle \rightarrow |4\rangle$
and $|2\rangle \rightarrow |3\rangle$, while the cavity mode $b$
has V polarization and can drive the transitions $|1\rangle
\rightarrow |3\rangle$ and $|2\rangle \rightarrow |4\rangle$.}
\label{Spin_schematic}
\end{figure}
In this paper, we theoretically analyze a system consisting of a
QD spin coupled to a nano-cavity, with realistic system parameters. While
one might naively think that coupling a QD spin to a cavity is a
simple extension of coupling a neutral QD to a cavity, a closer
look quickly reveals that it is not so. Specifically, the QD spin
states become significantly perturbed due to presence of the
cavity, and the spin initialization or control becomes impossible when the
cavity is brought on resonance with QD transitions, in an attempt to enhance them. In this
work, we show how this can be overcome and that
successful QD spin initialization and manipulation can be achieved
in a properly chosen configuration based on a bimodal nanocavity (Fig.
\ref{Spin_schematic}a). Bimodal photonic crystal nanocavities were previously proposed for nonclassical light generation and near-degenerate bimodal cavities were also demonstrated \cite{calzone_immamoglu,AM_bimodal}.
We analyze a large parameter space and find an optimal range of
detunings between QD transitions, cavity modes, and the
driving laser for which a high fidelity of spin initialization
can be achieved. Presence of a cavity also increases the speed of
spin initialization, by enhancing the rates of the coupled
transitions, bringing it beyond the GHz range, which is not achievable in bulk
semiconductor \cite{article:press08}. Finally, we describe the spin manipulation in such a system. Here, we find that coherent population transfer is realized only by applying a short optical pulse that is far detuned from the QD-cavity system.
Previously, several research articles looked theoretically into
the problem of spin initialization and manipulation in a cavity
\cite{snellart_spin,AM_TP_spin,article:ima99,Rossi_spin}. However, in those studies, it was
assumed that the effect of the cavity is mainly to enhance the
local electric field of a laser and
that the QD is driven by a classical field. This
assumption breaks down for a QD embedded in a high-Q photonic
cavity and therefore quantization of the cavity field and a
careful choice of driving terms are important for a realistic
treatment of the coupled system. A single electron spin confined
to the QD can be in two different states of the same energy: spin
up or spin down, where we define the spin-state along the
optical (QD growth, i.e., $z$) axis. When a magnetic field is applied perpendicularly to
the optical axis (also known as the Voigt geometry), the spin-up
and spin-down states split by an amount $\Delta_e=g_e \mu_B B$ (Fig. \ref{Spin_schematic}b).
These states can also be thought of as spin-up and spin-down states along the $x$-axis,
and we will use this notation for the rest of the paper.
The excited state of the QD, called the trion state, also splits.
The excited state splitting is due to the hole spin and is given by
$\Delta_h=g_h \mu_B B$. Here $g_e$
and $g_h$ are respectively, the Lande g-factor for electron and
hole; $\mu_B$ is the Bohr magnetron; $B$ is the applied magnetic
field; and in this work we neglect any diamagnetic shift of the
QD. The lossless dynamics of the system consisting of QD spins
coupled to a cavity with two modes of perpendicular polarizations
(labeled $H$ and $V$ in Fig. \ref{Spin_schematic}b) and driven by
a laser, can be described by the Hamiltonian
\begin{equation}
\label{eqn:H_res_0}
\mathcal{H}=\mathcal{H}_o+\mathcal{H}_{int}+\mathcal{H}_d
\end{equation}
where in the rotating-wave approximation (and with $\hbar=1$)
\begin{eqnarray}
\nonumber
\mathcal{H}_o&=& -\frac{\Delta_e}{2}|1\rangle\langle1|+\frac{\Delta_e}{2}|2\rangle\langle2|
+(\omega_o-\frac{\Delta_h}{2})|3\rangle\langle3|\\
&&+(\omega_o+\frac{\Delta_h}{2})|4\rangle\langle4|+\omega_{a} a^\dag a + \omega_{b} b^\dag b\\
\mathcal{H}_{int}&=&g_aa^\dag (\sigma_{14}+\sigma_{23})+ig_bb^\dag (\sigma_{24}+\sigma_{13})+h.c.
\end{eqnarray}
Here, $g_a$ and $g_b$ describe the coupling strengths between the
cavity modes and the QD transitions, $a$ and $b$ are the photon
annihilation operators of the two cavity modes with frequencies
$\omega_a$ and $\omega_b$, respectively, $\sigma_{ij}=\vert
i\rangle \langle j\vert$, and $\omega_o$ is the frequency of the
QD's optical transitions in the absence of the magnetic field (see
Fig. \ref{Spin_schematic} b). The driving part of the Hamiltonian,
$\mathcal{H}_d$, changes depending on whether the applied laser
field drives the QD or the cavity.
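Before specifying the driving terms, the diagonal part $\mathcal{H}_o$ can be sanity-checked numerically: its zero-photon block is a $4\times4$ diagonal matrix. The g-factors below are assumed values, chosen only so that at $B=5$ T the splittings come out near the $\Delta_e/2\pi \approx 28$ GHz and $\Delta_h/2\pi \approx 14$ GHz used later in the text:

```python
import numpy as np

# assumed parameters: mu_B/h ~ 13.996 GHz/T; g-factors are illustrative, not measured values
mu_B = 13.996            # GHz / T
g_e, g_h = 0.40, 0.20    # assumed electron and hole g-factors
B = 5.0                  # Tesla
omega_o = 325e3          # GHz (~920 nm QD transition); illustrative

delta_e = g_e * mu_B * B   # ~ 28 GHz ground-state (electron) splitting
delta_h = g_h * mu_B * B   # ~ 14 GHz trion (hole) splitting

# zero-photon block of H_o in the basis {|1>, |2>, |3>, |4>}
H0 = np.diag([-delta_e / 2, delta_e / 2,
              omega_o - delta_h / 2, omega_o + delta_h / 2])

print(H0[1, 1] - H0[0, 0])   # = delta_e
print(H0[3, 3] - H0[2, 2])   # = delta_h
```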
We consider the general form of
the driving Hamiltonian $(\mathcal{H}_d=\mathcal{H}_d^{cav}+\mathcal{H}_{d}^{QD})$, which describes a single laser with
controllable polarization capable of driving both the cavity
modes ($\mathcal{H}_d^{cav}$) or the QD directly ($\mathcal{H}_d^{QD}$):
\begin{equation}
\mathcal{H}_d^{cav}=\mathcal{E}_ae^{i\omega_l t}a + \mathcal{E}_be^{i\omega_l t}b+h.c.
\end{equation}
where $\mathcal{E}_{a,b}$ are the rates with which the applied
laser field excites each of the cavity modes, and
\begin{equation}
\mathcal{H}_d^{QD}=\Omega_h e^{i\omega_l t}(\sigma_{13}+\sigma_{24}) + \Omega_v e^{i\omega_l t}(\sigma_{23}+\sigma_{14})+h.c.
\end{equation}
Here, $\Omega_h$ and $\Omega_v$ are the Rabi frequencies of the laser for the horizontal and the vertical QD transitions, respectively.
Depending on the driving conditions, one of the two terms in $\mathcal{H}_d$ can be dominant. For example, if a laser can couple to a cavity resonance, we assume that $\mathcal{H}_d^{cav}$ is dominant, and neglect $\mathcal{H}_d^{QD}$. In other words, the QD is always driven via a cavity mode. This condition is assumed for spin initialization. On the other hand, if the laser cannot couple well to a cavity resonance, but the QD transitions are instead driven directly, $\mathcal{H}_d^{QD}$ dominates. This happens when the laser detuning from the QD transitions is smaller than from the cavity resonances \cite{majumdar_QD_splitting}, or when the laser is applied from a spatial direction in which it does not couple to the cavity modes. In this paper, we show that for coherent spin manipulation, it is necessary to drive the QD directly, and not via a cavity mode. We can transform the Hamiltonian into the rotating frame by using
$\mathcal{H}_{r}=T^\dag\mathcal{H}T+i\frac{\partial
T^\dag}{\partial t}T$ where $T=e^{-i\omega_lt(a^\dag
a+b^\dag b+|3\rangle\langle3|+|4\rangle\langle4|)}$.
The losses in the system are incorporated by solving the master equation of the density matrix $\rho$ of the coupled QD-cavity system:
$\frac{d\rho}{dt}=-i[\mathcal{H}_{r},\rho]+\sum_j \mathcal{L}(c_j)$
where $\mathcal{L}(c_j)=2c_j \rho c_j^{\dag}-c_j^{\dag} c_j \rho-\rho c_j^{\dag} c_j$ is the Lindblad term for the
collapse operator $c_j$. In this case, we have six different loss
channels, and hence six collapse operators (one for each cavity
and each QD transition): $c_j\in \{ \sqrt{\kappa} a, \sqrt{\kappa} b,
\sqrt{\gamma_{41}}\sigma_{41},
\sqrt{\gamma_{42}}\sigma_{42},\sqrt{\gamma_{31}}\sigma_{31},
\sqrt{\gamma_{32}}\sigma_{32}\}$.
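The action of these Lindblad terms can be sketched on a single decay channel; the $2c\rho c^\dag$ normalisation follows the convention above, and the key property (trace preservation of $d\rho/dt$) holds for any collapse operator. All numbers below are illustrative:

```python
import numpy as np

def lindblad(c, rho):
    """Dissipator L(c) rho = 2 c rho c^dag - c^dag c rho - rho c^dag c (text's convention)."""
    cd = c.conj().T
    return 2.0 * c @ rho @ cd - cd @ c @ rho - rho @ cd @ c

# single decay channel sqrt(gamma) * sigma_minus on a two-level system
gamma = 1.0
sigma_minus = np.array([[0.0, 1.0], [0.0, 0.0]])   # |g><e|
c = np.sqrt(gamma) * sigma_minus

rho = np.array([[0.3, 0.1], [0.1, 0.7]], dtype=complex)  # some density matrix
L = lindblad(c, rho)

print(np.trace(L))    # ~ 0: the dissipator preserves Tr(rho)
print(L[0, 0].real)   # > 0: population flows into the ground state
```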
\begin{figure}
\centering
\includegraphics[width=3.5in]{Fig_PL_charac.pdf}
\caption{(color online) Photoluminescence spectra of the coupled
QD spin-cavity system as a function of the magnetic field: (a)
for a QD-cavity system with $\kappa/2\pi=5$ GHz and $g/2\pi=20$
GHz. (b) Similar PL spectra for $g/2\pi=\kappa/2\pi=20$
GHz. (c),(d) Energy splitting between all the QD transitions as a
function of the magnetic field without (c) and with (d) a cavity,
in the absence of losses ($\kappa=0$).} \label{Fig_PL_charac}
\end{figure}
We now use this model to theoretically investigate the spectrum of
the coupled charged QD-cavity system probed under
photoluminescence (PL) as is commonly done experimentally (to
reveal eigenstates of the system). We perform this analysis for
two different cavity decay rates: a readily achievable cavity decay rate, $\kappa/2\pi=20$ GHz, and a better-than-state-of-the-art (but achievable with cavity fabrication improvements) value, $\kappa/2\pi=5$ GHz. The dot-cavity interaction strength is $g/2\pi=20$ GHz for both cases (corresponding to experimentally achievable conditions \cite{article:majumdar09}). The dipole decay rates are $\gamma_{41}/2\pi=\gamma_{42}/2\pi=\gamma_{31}/2\pi=\gamma_{32}/2\pi=1$ GHz. When the system is characterized through PL, an above
band laser pumps the semiconductor to generate electron hole
pairs. These carriers recombine in the QD, which subsequently emits a photon. This is an
incoherent way of probing the system, and is modeled by adding
Lindblad terms $P(\mathcal{L}(a^\dag)+\mathcal{L}(b^\dag))$, which
signify incoherent pumping of the cavity modes at a rate $P$. A low value of $P/2\pi \sim 0.1$ is used to allow the use of a small Fock-state basis $(N=4)$.
We calculate the power spectral density (PSD) $S(\omega)$ of the
system given by
\begin{equation}
S(\omega)=\int_{-\infty}^{\infty} \left( \langle a^\dag(t) a \rangle + \langle b^\dag(t) b \rangle \right) e^{-i\omega t}\, dt
\end{equation}
where $\omega$ is the spectrometer frequency, and in the rotating frame $\Delta=\omega-\omega_o$.
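As a consistency check of this expression, assume the two-time correlation of a single lossy mode decays exponentially, $\langle a^\dag(t)a\rangle \propto e^{-\kappa |t|}e^{i\omega_c t}$ (an assumed form, standard for a single damped mode); a discretized evaluation of the integral then yields a Lorentzian peaked at the mode frequency:

```python
import numpy as np

kappa, omega_c = 1.0, 5.0          # decay rate and mode frequency (arbitrary units)
t = np.linspace(-40.0, 40.0, 80001)
g = np.exp(-kappa * np.abs(t)) * np.exp(1j * omega_c * t)  # assumed correlation

omegas = np.linspace(0.0, 10.0, 501)
dt = t[1] - t[0]
# Riemann-sum approximation of S(omega) = int g(t) e^{-i omega t} dt
S = np.array([np.real(np.sum(g * np.exp(-1j * w * t)) * dt) for w in omegas])

w_peak = omegas[np.argmax(S)]
print(w_peak)     # ~ omega_c: the PSD peaks at the mode frequency
print(S.max())    # ~ 2/kappa: Lorentzian peak value 2*kappa/kappa^2
```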
The numerically simulated PSD as a function of increasing magnetic
field is shown in Fig. \ref{Fig_PL_charac} for two different
values of the cavity decay rates $\kappa/2\pi=5$ GHz and
$\kappa/2\pi=20$ GHz. Peaks in
the PSD correspond to eigen-states of the coupled system. For a system with low $\kappa$, we observe six peaks
(for $B>0$). However, with increasing cavity decay rates, the
energies of the eigenstates become degenerate and such structure
disappears. To understand the origin of the six peaks, we note
that with one quantum of energy present in the system, the bare
states of the coupled charged QD-bimodal cavity system are
$|1,0,1>$, $|0,1,1>$, $|1,0,2>$, $|0,1,2>$, $|0,0,3>$ and
$|0,0,4>$, where the first number denotes the number of photons
present in the mode $a$, second number is the number of photons in
mode $b$ and the last one is the populated charged QD state. If
$g/\kappa$ is sufficiently large, these bare states couple and
give rise to six dressed states that we observe in the PL spectrum
in Fig. \ref{Fig_PL_charac} (a). For additional intuitive
understanding, we diagonalize the Hamiltonian when only one photon
is present in the system and plot the eigen-values as a function
of the magnetic field. In absence of the cavity, we see four
transitions from the QD (Fig. \ref{Fig_PL_charac}c). In presence
of the two cavity modes, however, a hybridization between the
cavity modes and the QD transitions occurs, which results in six
observable transitions (Fig. \ref{Fig_PL_charac}d). We note that
these six transitions are also present in the system with larger
cavity losses (Fig. \ref{Fig_PL_charac} b), but they cannot be
as clearly resolved as in Fig. \ref{Fig_PL_charac} a because of
their overlap.
\begin{figure}
\centering
\includegraphics[width=3.5in]{Figure_init_detuning.pdf}
\caption{(color online) The fidelity and speed of spin
initialization for a realistic QD-cavity system
($g/2\pi=\kappa/2\pi=20$ GHz at a magnetic field of $5$ T). (a)
Initialization fidelity $\vert \rho_{11}-\rho_{22} \vert$ as a
function of the cavity mode $b$ frequency
$\Delta_{b}=\omega_{b}-\omega_o$ and the pump laser wavelength
$\Delta_l=\omega_l-\omega_o$. The white point marks the situation
where we get optimal spin initialization fidelity (the situation
shown in the inset of (b): the pump laser is tuned to the QD transition $\vert 1\rangle
\rightarrow \vert 4 \rangle$, and the cavity mode $b$ is tuned to
the transition $\vert 2 \rangle \rightarrow \vert 4 \rangle$. Here double arrows denote cavity fields and the single arrows
denote the laser). (b) The spin initialization as a function of time for different cavity detunings $\Delta_{c2}$, with the laser fixed at the transition frequency $\omega_{14}$ ($\Delta_l \sim 2\kappa$). $\Delta_{c2}$ is changed from $0$ to $2\kappa$. We
note that the spin-initialization time is only around $40/\kappa
\sim 300$ ps.} \label{Figure_init_detuning}
\end{figure}
Next, we proceed to study the spin initialization
in such a QD-cavity system. We analyze the speed and fidelity of
spin initialization as a function of laser and cavity detuning. We
consider the state of the art parameter set for these simulations
($\kappa/2\pi=20$ GHz and $g/2\pi=20$ GHz) and assume a magnetic
field of $5$ T, resulting in $\Delta_e/2\pi \approx 28$ GHz and
$\Delta_h/2\pi \approx 14$ GHz. We also assume that the QD spin
starts in a mixed state with equal spin-up and spin-down (i.e., states $|1\rangle$ and $|2\rangle$)
population: $\rho_{11}=\rho_{22}=1/2$. We pump the
system with an H polarized laser (which couples to mode $a$),
while the other (V-polarized) cavity mode $b$ is not driven. In
other words, we use the laser to drive outer (H) transitions in Fig.
\ref{Spin_schematic}b via cavity mode $a$, but the inner (V)
polarized QD transitions are coupled to the vacuum field of the cavity
mode $b$. For such a $\Lambda$-system of the QD, this resembles the vacuum
induced transparency (VIT) configuration recently proposed and
experimentally studied in atomic physics
\cite{vuletic_vit}. Fig. \ref{Figure_init_detuning}(a) plots
$\vert \rho_{11}-\rho_{22} \vert$ as a function of the pump laser
wavelength and the cavity mode $b$ frequency $\omega_{b}$. The
cavity mode $a$ frequency $\omega_{a}$ is kept fixed at
$\omega_o$ (QD transition in absence of $B$-field, see Fig.\ref{Spin_schematic}), so the laser is not
necessarily on resonance with this mode. We find that a high
fidelity spin initialization is achieved when the pump laser is
tuned to the QD transition $\vert 1\rangle \rightarrow \vert 4
\rangle$, and the cavity mode $b$ (which is in vacuum state) is
tuned to the transition $\vert 2 \rangle \rightarrow \vert 4
\rangle$ (see inset of Fig. \ref{Figure_init_detuning}(b)). The laser can
potentially couple to the transition $\vert 2 \rangle \rightarrow
\vert 3\rangle$ and the cavity mode $b$ to the transition $\vert
1\rangle \rightarrow \vert 3\rangle$. However, due to detunings of
the laser and the cavity mode $b$, the QD is efficiently optically
pumped only via $\vert 1 \rangle \rightarrow \vert 4
\rangle\rightarrow \vert 2\rangle$ route, leading to all the spin
population in the QD state $\vert 2 \rangle$. In a similar fashion
we can also use the path $\vert 2 \rangle \rightarrow \vert
3\rangle \rightarrow \vert 1\rangle$, with a different set of
detunings to achieve initialization in state $\vert 1\rangle$.
Fig.\ref{Figure_init_detuning}a also reveals that a high spin
initialization fidelity can also be achieved when the cavity is
far detuned \cite{article_Gammon}, but this is a trivial case equivalent to the
situation of a QD uncoupled to the cavity (a QD in bulk
semiconductor), eliminating the cavity's beneficial role.
Next, we focus on the temporal dynamics of the spin-population
with varying detuning $\omega_{b}$ of the cavity mode $b$. We
observe that the spin-initialization is faster when the cavity is
resonant to the $|2\rangle \rightarrow |4\rangle$ transition
(Fig.\ref{Figure_init_detuning}(b)). The initialization speed of
several GHz is achieved in this case, but even faster
initialization speed can be achieved by pumping the system with a
stronger laser (while keeping a resonant cavity on the other
transition in the $\Lambda$ system). The simulations were performed using the open source python package QuTip \cite{qutip}.
Finally we analyze how the spin-manipulation can be performed in
such a system. The coupled QD-cavity system is driven by a laser pulse with different detunings,
and the spin-population is monitored as a function of the pulse
amplitude. Initially all the population is in the state $|1\rangle$.
We assume that the cavity is at the undressed QD frequency $\omega_o$, and we drive the QD directly
(i.e., the driving conditions are such that the laser is spatially or spectrally decoupled from cavity modes) \cite{supplementary}. The system is excited with a short pulse (pulse width $5$ ps), and the spin
populations in states $|1\rangle$ ($\rho_{11}$) and $|2\rangle$ ($\rho_{22}$) are monitored over time \cite{supplementary}.
The spin population difference ($\rho_{11}-\rho_{22}$) in the steady state is plotted as a function of the pulse amplitude for three different pulse detunings $\Delta$ from the cavity resonances (Fig. \ref{Figure_spin_manipulate} a). Rabi oscillations are observed between the spin up ($|1\rangle$) and spin
down ($|2\rangle$) states as the pulse amplitude is changed.
To check whether the process is coherent, we calculate the trace of the density matrix $\rho^{(1,2)}$ of the subspace consisting of the spin up and down states, and plot $Tr[(\rho^{(1,2)})^2]$ in Fig. \ref{Figure_spin_manipulate}b. $Tr[(\rho^{(1,2)})^2]$ is unity for a pure state. Therefore, if the spin manipulation process is coherent, we expect this value to be near unity. Our results in Fig. \ref{Figure_spin_manipulate}b indicate that the process is coherent only at a large detuning $\Delta$. Additionally, the process is coherent only if the QD is driven directly (and not via the cavity) \cite{supplementary}. Therefore, coherent spin manipulation in the proposed system is possible, but only by applying an optical pulse that is far detuned from the cavity and drives the QD directly.
\begin{figure}
\centering
\includegraphics[width=3.25in]{Figure_spin_manipulate.pdf}
\caption{(color online) (a) The steady-state spin population ($\rho_{11}-\rho_{22}$) as a function of
the laser Rabi frequency ($\Omega_h=\Omega_v$). All population is initially in state $|1\rangle$, and a $5$
ps laser pulse excites the system. Three different pulse detunings are used. Rabi oscillations between two spin states are
observed. (b) $Tr[(\rho^{(1,2)})^2]$ as a function of the laser Rabi frequency. The process becomes coherent ($Tr[(\rho^{(1,2)})^2]\sim 1$) when the detuning of the pulse from the QD-cavity system is large. }\label{Figure_spin_manipulate}
\end{figure}
In summary, we have presented a proposal for efficient
initialization and manipulation of a single QD spin coupled to a
bimodal optical cavity. Our numerical analysis (with full field
quantization and with realistic system parameters) confirms that
nearly $100$\% spin initialization fidelity is achievable with
speed beyond GHz, as well as spin manipulation, benefiting from a
cavity not only as a photonic interface but also to speed up the
spin control.
The authors acknowledge financial support provided by the Air Force Office of Scientific Research, MURI Center for Multi-functional light-matter interfaces based on atoms and solids. K.G.L. acknowledges support by the Swiss National Science Foundation and A.R. is also supported by a Stanford Graduate Fellowship.
\section{Introduction}
\label{sec:Intro}
Five-dimensional SCFTs deformed by relevant operators often admit a low energy description in terms of 5d $\mathcal{N}=1$ SYM gauge theories with matter. The corresponding massive parameters are interpreted in the gauge theory as the Yang-Mills couplings $t = g_{\rm YM}^{-2}$ for the simple factors in the gauge group and the masses of the matter hypermultiplets. The SCFTs are then viewed as the limit of infinite coupling $g_{\rm YM} \to \infty$ ($t \to 0$) and massless matter of the gauge theories. This was described first in the seminal paper of Seiberg \cite{Seiberg:1996bd} for $SU(2)$ gauge theories with $N_f \le 7$ flavor hypermultiplets, where it was argued from their string theory engineering that the SCFTs have enhanced $E_{N_f+1}$ global symmetry. Many more SCFTs have been constructed from 5d $\mathcal{N}=1$ quiver gauge theories and related to brane systems and geometric engineering in string theory (see \cite{Douglas:1996xp,Morrison:1996xf,Intriligator:1997pq,Aharony:1997ju,Aharony:1997bh} for some early papers).\footnote{Recently there were some attempts to classify low rank 5d $\mathcal{N}=1$ SCFTs based on their Coulomb branch or their engineering in M-theory \cite{Jefferson:2017ahm,Jefferson:2018irk}. See also \cite{Chang:2017cdx} for an analysis of low rank 5d SCFTs based on numerical bootstrap techniques.}
It can happen that different massive deformations of a SCFT lead to different gauge theory descriptions. Typically a deformation with parameter $t>0$ or $t<0$ can lead to two different gauge theories with couplings $g_{\rm YM}^{-2} \sim |t|$. Thus deformations in different ``chambers" of the parameter space may be described by different gauge theories.\footnote{However this is not a generic phenomenon. Often some regions of parameter space simply do not admit a gauge theory description.} This can be phrased as a duality between the gauge theories which are obtained from deformations of the same SCFT.
One such duality is realized by S-duality in the type IIB brane setup realizing the 5d theories and we will thus call it S-duality. An important class of S-dual theories are $SU(N)^{M-1}$ linear quivers and their dual $SU(M)^{N-1}$ linear quivers. They can be realized as the low-energy theories of webs of 5-branes in type IIB string theory as described in \cite{Aharony:1997ju,Aharony:1997bh} and the action of S-duality exchanges the brane webs of the dual theories.
This duality can be tested by computing observables in the dual theories and matching them with appropriate identification of parameters. This assumes that the gauge theory observables in question can be analytically continued in the deformation parameters to the full parameter space of the SCFT.
Such tests have been performed with exact results from topological strings \cite{Bao:2011rc,Mitev:2014jza} and from supersymmetric localization at the level of the partition function (or supersymmetric index) of the gauge theories \cite{Bergman:2013aca}.
An important challenge is to understand how S-duality acts on loop operators. In this paper we answer this question for half-BPS Wilson loop operators,
\begin{equation}
W = \textrm{Tr}_{\mathcal{R}} \, \mathcal{P} \exp\big(\int i A + \sigma\, dx \big) \,,
\end{equation}
where $A$ is the one-form gauge potential, $\sigma$ the adjoint scalar in the vector multiplet and $\mathcal{R}$ is a representation of the gauge group.
We find that S-duality acts as an automorphism on the space of Wilson loops, namely that Wilson loops are mapped to Wilson loops. This differs from 4d S-duality where Wilson loops are mapped to 't Hooft loops (or in general to dyonic loops), and from 3d mirror symmetry where they are mapped to vortex loops \cite{Assel:2015oxa}.
Our findings are guided by the type IIB brane realization of half-BPS loop operator insertions. We relate Wilson loops to configurations with specific arrays of strings stretched between D5 branes and auxiliary D3 branes. Through standard brane manipulations we identify these configurations across S-duality and deduce a prediction for the S-duality map between Wilson loops.
We consider two classes of theories, which are those with the lowest gauge algebra rank. In Sections \ref{sec:SU2} and \ref{sec:SU2Nf} we consider $SU(2)$ theories with $N_f \le 4$ flavor hypermultiplets. These theories are self-dual under S-duality. For instance the pure $SU(2)$ theory is dual to another pure $SU(2)$ theory with the gauge couplings related by $t = - \widetilde t$ (namely the region of negative $t$ is described by the dual $SU(2)_{\widetilde t=-t}$ theory).
In Section \ref{sec:SU3dualities} we consider examples of $SU(3)$ theories and their $SU(2)\times SU(2)$ quiver duals.
An important prediction of the brane analysis is that there is a privileged basis of Wilson loops in which to express the S-duality map: these are Wilson loops in tensor products of fundamental and anti-fundamental representations for each gauge node.\footnote{For higher rank theories, the relevant representations are tensor products of anti-symmetric representations for each gauge node.} They are naturally realized in the brane setup. For these loops we predict a one-to-one S-duality map. For Wilson loops in other representations (which are linear combinations of loops in the privileged basis), each individual loop is mapped to a linear combination of loops in the dual theory. We therefore focus on Wilson loops of the former kind.
To test the proposed duality map we compute the exact VEVs of the Wilson loops wrapping the circle at the origin of the 5d Omega background $S^1\times \mathbb{R}^4_{\epsilon_1,\epsilon_2}$, or `half-index' in the presence of Wilson loop insertions, using supersymmetric localization.
This happens to be a challenging computation because the modifications of the instanton corrections (in particular to the moduli space of singular instantons at the origin) in the presence of a Wilson loop are not yet completely understood (to our knowledge). To side-step this problem we follow the approach advocated in \cite{Kim:2016qqs} (see also \cite{Agarwal:2018tso}) and compute instead the VEVs of certain $\mathcal{N}=(0,4)$ SQM loop operators, which are roughly speaking generating functions for some Wilson loops. They are defined by an array of 1d fermions with a subgroup of the flavor symmetry gauged with 5d fields in an $\mathcal{N}=(0,4)$ supersymmetry preserving manner (they preserve the same supersymmetry as the half-BPS Wilson loops). The relevant SQM loops are those sourced by stacks of $n$ D3 branes placed in the brane web.\footnote{These SQM loop operators are also known under the name of fundamental (for $n = 1$ D3) or higher (for $n > 1$) qq-characters in the language of \cite{Nekrasov:2015wsu,Bourgine:2016vsq,Kimura:2015rgi,Bourgine:2017jsi}, although the relation to Wilson loops is not discussed in those works.} The Wilson loop VEVs can then be identified as certain residues of the SQM loop VEVs in the SQM flavor fugacities. This property is inferred from string considerations.
The virtue of the SQM loops is that one can use their brane realization as a guide to find the appropriate modification of the ADHM quiver quantum mechanics computing instanton corrections. Our results confirm the validity of the procedure by correctly reproducing the classical contributions to the Wilson loop VEVs and by confirming the conjectured S-duality map.
We find however a somewhat surprising feature: Wilson loops in the appropriate tensor product representations do not transform exactly into their dual Wilson loop, but rather come with an extra multiplicative factor which can be interpreted as a background Wilson loop. We say that they transform covariantly under S-duality. Let us summarize our results:
\begin{itemize}
\item For the self-S-dual $SU(2)$ theories with $N_f\le 4$ flavors we consider Wilson loops in the representations $\mathbf{2}^{\otimes n}$ and find that they transform under S-duality as
\begin{equation}
S. W_{\mathbf{2}^{\otimes n}}(t,m_k) = Y^{-n} \, \widetilde W_{\mathbf{2}^{\otimes n}} (\widetilde t, \widetilde m_k) \,,
\label{introStransfoSU2}
\end{equation}
with $t,m_k, \widetilde t, \widetilde m_k$ the gauge coupling and mass parameters in the dual theories respectively (see Section \ref{sec:SU2Nf} and Appendix \ref{app:Nf34} for the precise maps), and
$Y= e^{\frac t2 + \frac 14 \sum_{k=1}^{N_f} (-1)^k m_k} = \widetilde Y^{-1}$. We also find that at each order in the appropriate expansion parameter the contributions to the Wilson loop VEVs are organized into characters of the $E_{N_f+1}$ symmetry, confirming the symmetry enhancement. S-duality is then a transformation in the Weyl group of $E_{N_f+1}$ \cite{Mitev:2014jza}. The parameter $Y$ can be understood as a charge one background Wilson loop for a $U(1)$ subgroup of $E_{N_f+1}$. We strongly believe that these results hold for $N_f=5,6,7$ (but we were not able to test it).
\item For $SU(3)$ theories we consider Wilson loops in representations $\mathbf{3}^{\otimes n_1} \otimes \overline{\mathbf 3}{}^{\otimes n_2}$ and in the dual $SU(2)\times SU(2)$ quiver Wilson loops in representations $(\mathbf{2}^{\otimes n_1},\mathbf{2}^{\otimes n_2})$. We find the exact map
\begin{equation}
S.W_{\mathbf{3}^{\otimes n_1} \otimes \overline{\mathbf 3}{}^{\otimes n_2}} = Y_1^{-n_1} Y_2^{-n_2} \widetilde W_{(\mathbf{2}^{\otimes n_1},\mathbf{2}^{\otimes n_2})} \,,
\label{introStransfoSU3}
\end{equation}
with background Wilson loops $Y_1,Y_2$ which are given, for instance in the duality relating the $SU(3)$ $N_f=2$ theory to the $SU(2)\times SU(2)$ quiver without fundamental hypermultiplets (see Section \ref{ssec:SU3Nf2duality}), by $Y_1 = e^{-\frac{2\widetilde t_1+ \widetilde t_2}{3}} = e^{\frac t2 + \frac{m_1+m_2}{4}}$ and $Y_2= e^{-\frac{\widetilde t_1+ 2\widetilde t_2}{3}}= e^{\frac t2 - \frac{m_1+m_2}{4}}$.
\end{itemize}
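As a simple consistency check of these transformations (a heuristic sketch, using only the relations quoted above), note that applying the duality twice and using $Y \widetilde Y = 1$ gives back the original loop,
\begin{equation}
S^2 . W_{\mathbf{2}^{\otimes n}} = \big( Y \widetilde Y \big)^{-n} \, W_{\mathbf{2}^{\otimes n}} = W_{\mathbf{2}^{\otimes n}} \,,
\end{equation}
so the background Wilson loop factors are compatible with $S$ acting as an involution on this family of loops. Similarly, the two expressions quoted for $Y_1,Y_2$ in the $SU(3)$ $N_f=2$ example are compatible only if $\widetilde t_1 + \widetilde t_2 = -t$, since $Y_1 Y_2 = e^{-(\widetilde t_1 + \widetilde t_2)} = e^{t}$.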
These results are based on exact computations up to 2 or 3-instanton corrections and for the Wilson loops in the lowest rank representations, namely with $n=1,2$ (sometimes $n=3$) and $n_1+n_2 \le 2$, which is as far as we could reasonably go technically (using Mathematica).
We conjecture a generalization of the S-duality map of Wilson loops with the relation \eqref{SmapGen} for the duality relating $SU(M)$ SQCD theories to $SU(2)^{M-1}$ quivers.
Before moving to the bulk of the discussion it is worth mentioning some related work. Analogous dualities of 5d $\mathcal{N}=1$ theories for the $SU(N)$ theory with $N_f$ flavors and Chern-Simons level $N- \frac{N_f}{2}$ were studied in \cite{Gaiotto:2015una} (with generalization to quiver theories). In that case the theories are self-dual with a map of massive parameters which reverses the sign of the (squared) gauge coupling. The paper describes duality interface theories for this duality, but also studies the action of the duality on Wilson loop operators. This involves a dressing factor in the form of a background Wilson loop as well.\footnote{The map proposed in \cite{Gaiotto:2015una} (Equation (4.2)) is not quite analogous to what we find for S-duality, because it acts covariantly on Wilson loops in irreducible representations, instead of tensor product representations. We observe that the proposal does not seem consistent with the fact that Wilson loops in rank $N$ antisymmetric representations are trivial. The duality studied is not S-duality in general, but it should be S-duality for $SU(2)$ theories (up to $E_{N_f+1}$ Weyl transformations).
Moreover the method proposed to compute the half-index in the presence of a Wilson loop differs from the one proposed in this paper and might explain the slightly different results. We believe that the method we present provides a more robust framework to carry out such computations.}
The enhancement to $E_{N_f+1}$ global symmetry seen from the computation of the superconformal index in $SU(2)$ SQCD theories was also found in \cite{Kim:2012gu,Hwang:2014uwa} and with Wilson ray insertions in \cite{Chang:2016iji}, using closely related computational methods.
\medskip
The rest of the paper is organized as follows. In section \ref{sec:BranesLoops} we discuss the brane realization of Wilson loops and SQM loops in IIB string theory, their relations and the action of S-duality inferred from type IIB S-duality. In Section \ref{sec:SU2} we explain the computation of the half-index with Wilson loops in detail and derive the exact S-duality action for the pure $SU(2)$ theory. Section \ref{sec:SU2Nf} contains the computation and the results for the $SU(2)$ theory with $N_f=1$ and $N_f=2$ flavors. We relegate to Appendix \ref{app:Nf34} the study of $SU(2)$ with $N_f=3$ and $4$. In Section \ref{sec:SU3dualities} we study the duality relating $SU(3)$ theories to $SU(2)$ quivers and we generalize it in Section \ref{sec:Generalization}. The remaining appendices contain details about the ADHM instanton computations (Appendix \ref{app:ADHMcomputations}) and some exact results which were too voluminous to fit in the main text (Appendix \ref{app:longresults}).
\section{Branes and Loops}
\label{sec:BranesLoops}
In this section we give a brief review of the brane realization of 5d $\mathcal{N}=1$ theories following \cite{Aharony:1997bh} and we explain how the insertion of half-BPS loop operators can be achieved by adding extra branes or strings to the construction.
\subsection{Brane setup}
\label{ssec:Branes}
The 5d $\mathcal{N}=1$ theories that we will study are engineered with a 5-brane web in type IIB string theory, with the orientations described in the first entries of Table \ref{tab:braneorientations}.
\begin{table}[h]
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|}
\hline
& 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline
D5 & X & X & X & X & X & X & & & & \\
NS5 & X & X & X & X & X & & X & & & \\
5$_{(p,q)}$ & X & X & X & X & X & $\theta$ & $\theta$ & & & \\ \hline
F1 & X & & & & & & X & & & \\
D1 & X & & & & & X & & & & \\
D3 & X & & & & & & & X & X & X \\ \hline
\end{tabular}
\caption{\footnotesize Brane array for 5d $\mathcal{N}=1$ theories and half-BPS loop operators. ($\tan\theta=\frac{q}{p}$)}
\label{tab:braneorientations}
\end{center}
\end{table}
A 5$_{(p,q)}$ brane spans a line in the $x^{56}$ plane defined by $\cos(\theta) x^5 + \sin(\theta) x^6 =0$ with $\tan\theta=\frac{q}{p}$. In this convention we have D5 = 5$_{(0,1)}$ and NS5 = 5$_{(1,0)}$. In pictures we stick to the usual convention that D5 branes are horizontal lines, while NS5 branes are vertical lines, which means we draw pictures in the $x^{56}$ plane.
The brane setups have parallel D5 branes spanning an interval along $x^5$ and supporting a 5d Yang-Mills gauge theory at energies lower than the scale set by the size of the interval. The simplest example is that of Figure \ref{SU2}-a with two parallel D5s supporting an $SU(2)$ gauge theory.\footnote{For $N$ D5 brane segments, it is believed that the diagonal $U(1) \subset U(N)$ subgroup of the gauge group living on the D5s is massive, so that at sufficiently low energies the theory is an $SU(N)$ gauge theory.}
There are two distances in this configuration: the distance $2a$ between the D5s corresponding to the VEV of the real scalar in the vector multiplet
$\vev{\phi} =
\left(
\begin{array}{cc}
a & 0 \cr
0 & -a \cr
\end{array}
\right)$,
and the distance $t_{\rm eff} := \frac{1}{g^2_{\rm eff}}= t + 2a$ between the NS5s corresponding to the effective abelian coupling on either of the D5 branes, where we denote by $t:= \frac{1}{g^2}$ the bare Yang-Mills coupling of the $SU(2)$ theory.
\begin{figure}[th]
\centering
\includegraphics[scale=0.35]{SU2.pdf}
\vspace{-0.5cm}
\caption{\footnotesize{a) Brane realization of the pure $SU(2)$ theory. b) Half-BPS string excitations: W-bosons and instanton particles.}}
\label{SU2}
\end{figure}
The brane setup thus describes the gauge theory at finite coupling $t$ and on the Coulomb branch of vacua. The SCFT is obtained as the configuration where these two sizes are set to zero, namely when the D5s and NS5s are shrunk to zero size and the configuration looks like two intersecting 5$_{(1,1)}$ and 5$_{(1,-1)}$ branes.
\medskip
In this picture one can add strings stretched between 5-branes associated to particle excitations of the 5d $\mathcal{N}=1$ theory, as shown in Figure \ref{SU2}-b. F1 strings stretched between D5s are W-boson excitations with mass $2a$, while D1 strings stretched between NS5s are instanton particles with mass $t +2a$.
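In formulas, in the units where brane positions directly measure masses, the masses read off from the lengths of these strings are
\begin{equation}
m_{\rm W} = 2a \,, \qquad m_{\rm inst} = t + 2a = t_{\rm eff} \,,
\end{equation}
so both types of particles become massless precisely in the limit $t \to 0$, $a \to 0$ where the web collapses to the SCFT point.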
\medskip
To add flavor hypermultiplets to the 5d theory one should add external (semi-infinite) D5 branes to the construction, while to increase the rank of the gauge group one should add further D5 segments. We will look at these more elaborate brane setups in later sections.
\subsection{Half-BPS loop operators}
\label{ssec:Loops}
Half-BPS loop operators are realized by adding semi-infinite F1 strings, D1 strings and/or D3 branes to the setup with the orientations given in the second entries of Table \ref{tab:braneorientations}. The F1 strings, D1 strings and D3 branes all preserve the same four supercharges, so we can consider configurations with all of them together if we wish.
The presence of the strings and/or D3 branes breaks the supersymmetry to a 1d $\mathcal{N}=(0,4)$ subalgebra.
Importantly the D3 and D5 branes are in a Hanany-Witten orientation relative to each other, with an F1 string creation effect: as the D3 brane crosses the D5 brane, an F1 string stretched between them is created.
Similarly (and remarkably) the D3 and NS5 branes are also in a Hanany-Witten orientation relative to each other, but with a D1 string creation effect: as a D3 brane crosses an NS5 brane a D1 string is created. We illustrate these effects in Figure \ref{SU2Loops}.
This will be important since, according to \cite{Hanany:1996ie}, the low energy physics is not affected by moving the D3 brane along $x^{56}$ as long as one takes into account these string creation effects.
\begin{figure}[th]
\centering
\includegraphics[scale=0.3]{SU2Loops.pdf}
\vspace{-0.5cm}
\caption{\footnotesize{Hanany-Witten string creation/annihilation effects as we move a D3 brane (green dot) in the $x^{56}$ plane.}}
\label{SU2Loops}
\end{figure}
This also comes with an important property, usually called the {\it s-rule}: at low energies there can be at most one F1 string stretched between a D3 and a D5, and similarly at most one D1 string stretched between a D3 and an NS5. This is because the lowest-energy mode on such a string is fermionic.
\medskip
The interpretation of the semi-infinite F1 and D1 string as operator insertions in the $SU(2)$ gauge theory is the following.
A semi-infinite F1 string stretched from infinity (along $x^5$) to the D5s inserts a half-BPS Wilson loop in the fundamental representation of $SU(2)$. There are two configurations -- the string ending on one or the other D5 -- corresponding to the two states traced over in the fundamental representation.
A semi-infinite D1 string stretched from infinity (along $x^6$) to the NS5s inserts a loop operator which should be a 1d defect related to instantons in the 5d theory; however, we do not know of any description of these loops in terms of a singularity prescription for the 5d fields.
We will not try to characterize them further in this paper; however, we observe from Figure \ref{SU2Loops} that such loops are related to standard Wilson loops through Hanany-Witten moves, so it is enough in principle to study Wilson loops.
\smallskip
The interpretation of a D3 brane placed in the middle of the 5-brane array is, strictly speaking, not as the insertion of a loop operator, since the D3 brane supports a 4d $\mathcal{N}=4$ $U(1)$ SYM theory at low energies. However the 4d theory is coupled to the 5d theory along a line, through charged localized 1d fields, and the whole 5d-4d-1d setup preserves the same supersymmetry as a half-BPS loop operator in the 5d theory, namely 1d $\mathcal{N}=(0,4)$ supersymmetry. Moreover the 4d theory will not play a role in our computations and we can consider it as non-dynamical.\footnote{It would be interesting to study the full 5d-4d theories interacting along a line defect and to understand the duality properties of such systems.}
Therefore we interpret this setup as inserting a loop operator described by coupling a (0,4) SQM to the 5d theory.
At low energies the localized 1d modes are two complex fermions $\chi_{a=1,2}$ (two (0,4) Fermi multiplets), which arise as the lowest excitation of strings stretched between the D3 and D5 branes. They form a doublet of $SU(2)$ which is identified with the 5d gauge group. The 1d fermions $\chi_a$ are coupled to the 5d `bulk' theory via gauging the $SU(2)$ symmetries by the 5d vector multiplet at the location of the line defect.\footnote{The 5d $\mathcal{N}=1$ vector multiplet can be decomposed into 1d $\mathcal{N}=(0,4)$ multiplets. This provides a 1d vector multiplet which gauges the $SU(2)$ flavor symmetry of the defect theory.} This leaves a $U(1)_f$ flavor symmetry acting on both fermions with the same charge.
The corresponding mixed 5d-1d quiver theory is shown in Figure \ref{D3SQM}.
\begin{figure}[th]
\centering
\includegraphics[scale=0.4]{D3SQM.pdf}
\vspace{-0.5cm}
\caption{\footnotesize{5d-1d quiver theory corresponding to the addition of a D3 brane in the center. The circle indicates the 1d flavor symmetry gauged with 5d fields. The dashed line
indicates a bifundamental Fermi multiplet (two fermions).}}
\label{D3SQM}
\end{figure}
The 1d action is (in implicit notation)
\begin{equation}
S_{\chi} = \int dt \, \bar\chi^a (\partial_t - i A^{(5d)}{}_{a}{}^b + \phi^{(5d)}{}_{a}{}^b - M \, \delta_a^b ) \chi_b \,.
\end{equation}
The VEVs of the vector multiplet scalar $\phi^{(5d)}$ and the real mass $M$ can be identified with the positions of the D5s and of the D3 along $x^5$ respectively. Denoting by $M$ the position of the D3 brane and by $a_1,a_2$ the positions of the D5s along $x^5$, the fermions have masses $a_1 - M$ and $a_2 - M$.
\medskip
It will be central in our discussion to understand the relation between such `D3-loop operators' or `SQM loop operators' and the Wilson loop operators that we wish to study. This is because the exact computation of Wilson loop VEVs is at the moment not completely understood; therefore, in order to evaluate them we will make use of certain relations between the VEVs of SQM operators and the VEVs of Wilson loops on a given manifold.
To this end we make the following heuristic argument.
We consider the supersymmetric partition function, on some manifold, of the SQM theory associated to the presence of the D3 brane, by which we mean the partition function of the 5d-1d theory, and we normalize it by the partition function of the 5d theory alone, $Z_{\text{5d-1d}}/Z_{\text{5d}}$. We define this as the normalized VEV of the SQM loop.
It receives contributions from the degrees of freedom sourced by (fundamental) strings stretched between the D3 and D5 branes.
Since there can be at most one F1 string stretched between the D3 and a D5, there are four possible configurations with F1 strings contributing: $(0,0), (0,1),(1,0),(1,1)$, where $(n,m)$ stands for $n$ strings stretched to the top D5 and $m$ strings stretched to the bottom D5.
In the configurations $(1,0)$ and $(0,1)$, with a single string, one can move the D3 brane to the top or to the bottom of the brane setup so that no string ends on it anymore (taking into account the string annihilation effect). Such configurations carry almost trivial contributions to the SQM loop VEV \footnote{Because of the flux induced by the D3 brane on the D5 worldvolumes (and vice-versa) \cite{Hanany:1996ie}, such a contribution is not 1, but rather a simple classical factor, as we will see in later sections.} since the D3 brane is decoupled from the brane web.
In the other two configurations, (0,0) and (1,1), by moving the D3 vertically to the top of the brane configuration we obtain a brane configuration with a string stretched between the D3 and one of the D5s, corresponding to the two setups of the fundamental Wilson loop insertion. This is illustrated in Figure \ref{SU2Wcontrib}. Therefore the (normalized) fundamental Wilson loop VEV corresponds to a sector of the SQM loop, which is associated to the two configurations (0,0) and (1,1).
\begin{figure}[th]
\centering
\includegraphics[scale=0.3]{SU2Wcontrib.pdf}
\vspace{-0.5cm}
\caption{\footnotesize{The (0,0) and (1,1) string configurations (on the left) are related to the two configurations for the fundamental Wilson loop insertion (on the right) by Hanany-Witten moves.}}
\label{SU2Wcontrib}
\end{figure}
These configurations are those with zero net number of strings ending on the D3 (when placed in the middle of the web)\footnote{The net number of strings ending on the D3 is the number of strings ending on the D3 counted with a sign according to their orientation, namely the difference between the number of strings ending on the top and on the bottom of the D3 brane in the picture.} and correspond to states with no charge under the $U(1)_f$ symmetry. The same considerations apply in the presence of D1 strings stretched between the two NS5s, corresponding to instanton sectors of the gauge theory, and in the presence of extra F1 strings stretched between the two D5s, corresponding to sectors with W-boson excitations. Therefore we arrive at the proposal for the pure $SU(2)$ theory,
\begin{equation}
\langle\text{fundamental Wilson loop} \rangle \ = \ \langle \text{SQM loop} \rangle \Big|_{U(1)_f \, \text{neutral sector}} \,.
\label{SU2RelFund}
\end{equation}
In explicit computations this means that the Wilson loop will be obtained by taking a residue in the $U(1)_f$ flavor fugacity. Of course the heuristic argument that we gave is not precise enough to predict the overall coefficient in the above relation and we will find in later sections that it holds up to a sign.
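The origin of this sign can be illustrated with a schematic computation (a heuristic sketch keeping only the classical one-loop factors of the two 1d Fermi multiplets, of masses $\pm a - M$, and dropping all 5d and instanton contributions). Each Fermi multiplet contributes a factor $2\sinh$ of half its mass, so that
\begin{equation}
2\sinh\Big( \frac{a - M}{2} \Big)\, 2\sinh\Big( \frac{-a - M}{2} \Big) \;=\; e^{M} + e^{-M} - \big( e^{a} + e^{-a} \big) \,.
\end{equation}
The term independent of the D3 position $M$, i.e. the $U(1)_f$-neutral piece, is $-\big( e^{a} + e^{-a} \big)$: minus the classical character of the fundamental representation, matching the sign in the relation \eqref{SU2RelFund}.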
\medskip
To access Wilson loops in higher representations we need to consider more D3 branes. Let us place $n$ D3 branes in the middle of the 5-brane web, as in Figure \ref{D3SQM2}.
\begin{figure}[th]
\centering
\includegraphics[scale=0.4]{D3SQM2.pdf}
\vspace{-0.5cm}
\caption{\footnotesize{Adding $n$ D3 branes realizes a quiver theory with a Fermi multiplet in the bifundamental of the $SU(2)\times U(n)_f$ flavor symmetry.}}
\label{D3SQM2}
\end{figure}
The SQM theory now has Fermi multiplets transforming in the bifundamental representation of $SU(2)\times U(n)_f$, with the $U(n)_f$ flavor symmetry associated to the stack of D3s.
Once again we can think of configurations with strings stretched between the D3s and the D5s and try to isolate those corresponding to Wilson loop insertions. We take the D3s separated, namely we give generic masses to the $n$ fundamental Fermi multiplets. Each D3 brane of type (0,0) or (1,1) (zero or two strings attached) contributes as the insertion of a fundamental Wilson loop. The sum over configurations with D3s of type (0,0) or (1,1) only can be mapped to the trace over states in the tensor product representation $\mathbf{2}^{\otimes n} := \mathbf{2} \otimes \mathbf{2} \otimes \cdots \otimes \mathbf{2}$ ($n$ times). It corresponds to the sector of the SQM theory neutral under a $U(1)_f^n$ maximal torus of $U(n)_f$. We thus arrive at the proposal
\begin{equation}
\langle \text{Wilson loop in} \ \mathbf{2}^{\otimes n} \rangle \ = \ \langle \text{SQM loop} \rangle \Big|_{U(1)_f^n \, \text{neutral sector}} \,.
\label{SU2RelTensProd}
\end{equation}
Finally we may think about identifying configurations related by D3 permutations, which correspond to averaging over $U(n)_f$ Weyl transformations. The resulting reduced set of configurations reproduces the states in the symmetric representation of rank $n$ of $SU(2)$, or spin $n$ representation, and corresponds to projecting to the $U(n)_f$ invariant sector in the SQM,
\begin{equation}
\langle \text{Wilson loop in spin $n$} \rangle \ = \ \langle \text{SQM loop} \rangle \Big|_{U(n)_f \, \text{neutral sector}} \,.
\label{SU2RelSym}
\end{equation}
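As an elementary illustration of the difference between the tensor product and irreducible bases, consider $n=2$ at the classical level, where the Wilson loop reduces to a character of $SU(2)$:
\begin{equation}
\chi_{\mathbf{2}}(a)^2 = \big( e^{a} + e^{-a} \big)^2 = \big( e^{2a} + 1 + e^{-2a} \big) + 1 = \chi_{\mathbf{3}}(a) + \chi_{\mathbf{1}}(a) \,,
\end{equation}
so that $W_{\mathbf{2}\otimes\mathbf{2}} = W_{\mathbf{3}} + W_{\mathbf{1}}$. A Wilson loop in an irreducible representation is thus a linear combination of loops in the tensor product basis, and vice versa.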
These are the predictions we can make from the analysis of the brane setup realizing half-BPS loop insertions. As we will see in the next sections, a more refined prescription will be needed to extract the Wilson loop VEVs from the SQM loop VEVs, in the form of a precise residue integration. We will now try to make these claims more precise, to confirm them by exact computations and use the results to understand the S-duality map of Wilson loops.
\bigskip
Before proceeding we should make a comment.
In the description of the SQM defect theory there is no excitation corresponding to D1 strings stretched between the D3 and the NS5 branes, although these are present in the brane setup. These should correspond to 't Hooft loops in the 4d SYM theory living on the D3 branes (that we consider as frozen). This means that in our field theory description we are restricting to a sector of the full system which excludes these excitations. One consequence of this is that when applying S-duality to the brane setup we will not be able to map the full SQM operator to a dual SQM operator, but we will only map the Wilson loops which are sectors of the SQM loop.
\subsection{S-duality}
\label{ssec:Sduality}
A type IIB brane configuration realizing a 5d gauge theory can be transformed by S-duality, namely the element
$S= \left(
\begin{array}{cc}
0 & -1 \cr
1 & 0
\end{array}
\right) $
of the $SL(2,\mathbb{Z})$ symmetry of IIB string theory, to a dual brane configuration, which may realize a different 5d gauge theory. S-duality in type IIB thus implies a duality or equivalence of the two 5d gauge theories and in particular the identification of their infinite coupling SCFT limit. We will refer to the duality of 5d theories as S-duality again.
In the brane picture S-duality transforms a 5$_{(p,q)}$ brane into a 5$_{(-q,p)}$ brane. For convenience we combine it with a reflection $x^5 \leftrightarrow x^6$ so that NS5 and D5 branes are still horizontal and vertical respectively in the brane picture.\footnote{The reflection can be seen as a $\frac{\pi}{2}$ rotation in $x^{56}$, followed by a parity $x^5 \to - x^5$ reversing the orientation of one type of 5-branes. Combined with IIB S-duality, it ensures that NS5s and D5s are exchanged. This convention is different from part of the literature on the topic where S is only combined with the $\frac{\pi}{2}$ rotation.}
Therefore under S-duality the brane picture is simply flipped around the $x^5=x^6$ diagonal.
In many situations the dual brane configuration has no D5 branes and we cannot read a dual field theory. We will only discuss situations where there is a dual 5d field theory.
When this is the case, in general the dual 5d gauge theories have different gauge groups and hypermultiplet representations. The Coulomb parameters are exchanged with the effective abelian gauge couplings.
In the simplest cases, and in particular for the pure $SU(2)$ theory, S-duality brings back the brane configuration to itself with the Coulomb parameter and effective coupling exchanged $2a \leftrightarrow t + 2a$. This means that the theory is mapped to itself under this map of parameters. We say that the pure $SU(2)$ theory is self-dual.
We will see that $SU(2)$ theories with $N_f$ flavors are also self-dual, while $SU(N)$ theories with $N>2$ are dual to $SU(2)$ quiver theories. We will study both situations in this paper.
\bigskip
The action of S-duality on loop operators can be understood from their realization in the IIB brane picture.
F1 strings and D1 strings are swapped under S-duality, which means that in general Wilson loops will be exchanged with the loops created by the D1 strings. However, brane manipulations like those in Figure \ref{SU2Loops} suggest that these two classes of loops are not independent, but rather form a single class of half-BPS loops which can all be realized with D3 branes placed in the middle of the brane web. One way to phrase this is that Wilson loops of one theory are mapped to Wilson loops of the dual theory under S-duality. This is the conjecture that we wish to verify.
\begin{center}
\begin{tikzpicture}
\node (A) at (1,1.5) {Theory A};
\node (C) at (1,0.5) {Wilson loop in $\mathcal{R}_A$ \quad };
\node (B) at (6,1.5) {Theory B};
\node (D) at (6,0.5) {Wilson loop in $\mathcal{R}_B$ \quad };
\draw[<->] (C) -- (D);
\end{tikzpicture}
\end{center}
We will make this mapping more precise in examples by providing the map of representations labelling the Wilson loops $\mathcal{R}_A \leftrightarrow \mathcal{R}_B$. We will see that the mapping of Wilson loops is actually slightly more complicated in the presence of massive deformations because it involves some dressing factors corresponding to background Wilson loops.
\medskip
In the case of a self-dual theory, the brane picture predicts that the set of all Wilson loops
gets mapped to itself under the exchange of deformation parameters, with contributions from W-boson excitations exchanged with contributions from instanton excitations. We will see that Wilson loops in certain representations -- the tensor products of fundamental representations -- are directly mapped back to themselves under the duality (they transform {\it covariantly} under S), while loops in other $SU(2)$ representations are mapped to linear combinations of Wilson loops.
\section{Loops in pure $SU(2)$ theory}
\label{sec:SU2}
The simplest theory to analyse is the pure $SU(2)$ theory, whose brane web is shown in Figure \ref{SU2}.
According to the discussion in the previous section we expect the set of all Wilson loops to be mapped to itself under S-duality. We now wish to find precisely how S-duality acts on each individual Wilson loop.
To do so we propose to compute the exact half-index of the 5d theory in the presence of a Wilson loop, which is the VEV of a Wilson loop on
$S^1 \times \mathbb{R}^4_{\epsilon_1,\epsilon_2}$, where the loop wraps $S^1$ and is placed at the origin in $\mathbb{R}^4_{\epsilon_1,\epsilon_2}$.\footnote{The name half-index comes from the fact that the superconformal index is computed by the partition function on $S^1\times S^4$, which can be obtained by a gluing procedure from two copies of $S^1 \times \mathbb{R}^4_{\epsilon_1,\epsilon_2}$. It is sometimes called hemisphere index.} Here $\mathbb{R}^4_{\epsilon_1,\epsilon_2}$ denotes the $\mathbb{R}^4$ Omega background with equivariant parameters $\epsilon_1,\epsilon_2$. To be more precise we will be considering the VEVs of Wilson loops normalized by the partition function.
Such supersymmetric observables can in principle be computed by equivariant localization techniques, as discussed for example in \cite{Gaiotto:2014ina,Bullimore:2014upa,Bullimore:2014awa,Gaiotto:2015una} following the seminal works \cite{Nekrasov:2002qd,Pestun:2007rz}.
However, in practice one encounters difficulties because the computations reduce to an integration over the moduli spaces of singular instantons localized at the origin of $\mathbb{R}^4_{\epsilon_1,\epsilon_2}$. The presence of a Wilson loop affects these moduli spaces in a way that is not completely understood to our knowledge. To circumvent this difficulty it has been proposed in particular cases \cite{Kim:2016qqs} (building on the analysis of \cite{Tong:2014yna,Tong:2014cha}) that Wilson loop observables can be identified as certain contributions in partition functions of 5d-1d coupled systems, namely contributions in SQM loop observables (aka qq-characters \cite{Nekrasov:2015wsu,Bourgine:2016vsq,Kimura:2015rgi,Bourgine:2017jsi}). To compute the SQM loop observables $\vev{L_{\rm SQM}}$ one then relies on the string theory realization of the defect theory. From the brane construction it is possible to understand how the loop affects each instanton sector, as we will see below. Explicit proposals and computations have been made in \cite{Kim:2016qqs} for Wilson loops in completely antisymmetric representations in 5d $\mathcal{N}=1^\ast$ $U(N)$ theory and 5d $\mathcal{N}=1$ pure $U(N)$ theory, as well as in \cite{Agarwal:2018tso} for Wilson loops in more general representations in 5d $\mathcal{N}=1^\ast$ $U(N)$ theory. Here we apply the same approach to study Wilson loops in all possible tensor products of antisymmetric representations for a larger class of 5d $\mathcal{N}=1$ theories with unitary gauge groups. In section \ref{sec:BranesLoops} we have proposed a relation between Wilson loops and SQM loops. Based on the brane realization of the SQM loops we will be able to carry out the computations and extract the exact results for the Wilson loops. The validity of the method will be ensured by consistency checks, including nice S-duality properties.
\subsection{Half-index computations from residues}
\label{ssec:ResiduesComputations}
In section \ref{ssec:Loops} we predicted the equality \eqref{SU2RelTensProd} between the (normalized) VEV $\vev{W_{\mathbf{2}^{\otimes n} }}$ of the Wilson loop in the tensor product representation and the $U(1)_f^n$ neutral sector of the (normalized) VEV $\vev{L_{{\rm SQM}}}$ of the SQM loop realized with $n$ D3 branes.
The evaluation of the SQM loop on $S^1\times \mathbb{R}^{4}_{\epsilon_1,\epsilon_2}$ is obtained from standard equivariant localization techniques. The exact result has the form of a supersymmetric index and depends on various fugacities:
\begin{itemize}
\item $q_1=e^{\epsilon_1}$ and $q_2=e^{\epsilon_2}$ are the fugacities associated to the symmetry generators $\frac 12 (j_1+j_2+R)$ and $\frac 12 (j_2-j_1+R)$ respectively, with $j_1,j_2$ the Cartans of the $SO(4)_{1234}\sim SU(2)_1\times SU(2)_2$ rotation symmetry on $\mathbb{R}^4$ and $R$ the Cartan of the $SO(3)_{789} \sim SU(2)_R$ R-symmetry;
\item $\alpha=e^{a}$ is the fugacity associated to the Cartan generator of the global $SU(2)$ gauge symmetry;
\item $Q = e^{-t}$ is the fugacity associated to the $U(1)_{\rm inst}$ symmetry (instanton counting parameter);
\item $x_i = e^{M_i}$ are the $U(n)_f$ flavor symmetry fugacities of the defect theory, with $M_i$ the masses of the SQM multiplets.
\end{itemize}
In the $\mathcal{N}=(0,4)$ SQM, the R-symmetry is $SO(4) \sim SU(2)_2\times SU(2)_R$.
The result of the computation is organized in an expansion in instanton sectors weighted by $Q^{k}$, $k\ge 0$, multiplied by a common perturbative part. Since we normalize the SQM loop by the partition function in the absence of the defect, we have the following structure
\begin{equation}
\begin{aligned}
Z_{\text{5d}} & = Z^{\rm pert}_{\rm 5d} (\alpha) \sum_{k\ge 0} Z^{\text{inst}, (k)}_{\rm 5d} (\alpha) \, Q^k \,, \cr
Z_{\text{5d-1d}} & = Z^{\rm pert}_{\rm 5d} (\alpha) \sum_{k\ge 0} Z^{\text{inst}, (k)}_{\text{5d-1d}}(\alpha , x) \, Q^k \,, \cr
\vev{L_{\rm SQM}} & = \frac{Z_{\text{5d-1d}}}{Z_{\text{5d}}} = \sum_{k\ge 0} c_{k}(\alpha,x) Q^k\,.
\end{aligned}
\end{equation}
Since $Z^{\rm pert}_{\rm 5d}$ cancels in the normalization there is no need to compute it.
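Concretely, the coefficients $c_k(\alpha,x)$ follow from dividing the two truncated instanton series, the perturbative prefactor having dropped out of the ratio. A minimal sketch of such a truncated series division, with made-up numerical coefficients standing in for the $Z^{{\rm inst},(k)}$ (not actual instanton data):

```python
def series_ratio(num, den, order):
    """Coefficients of num(Q)/den(Q) as a truncated power series in Q,
    given coefficient lists [c0, c1, ...] with den[0] != 0."""
    out = []
    for k in range(order + 1):
        ck = num[k] if k < len(num) else 0
        # subtract the already-determined lower orders of out(Q)*den(Q)
        ck -= sum(out[j] * den[k - j] for j in range(k) if k - j < len(den))
        out.append(ck / den[0])
    return out

# toy series: Z_{5d-1d} = 1 + 3Q + 5Q^2, Z_{5d} = 1 + 2Q + Q^2
print(series_ratio([1, 3, 5], [1, 2, 1], 2))  # -> [1.0, 1.0, 2.0]
```

The same recursion works order by order for symbolic coefficients, which is how the $c_k(\alpha,x)$ below are organized.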
The coefficient $Z^{\text{inst}, (k)}_{\rm 5d} (\alpha)$ is computed as the supersymmetric index of the ADHM quantum mechanics of the instanton sector $k$. The coefficient $Z^{\text{inst}, (k)}_{\text{5d-1d}}(\alpha , x)$ arises from a modified $\mathcal{N}=(0,4)$ ADHM quantum mechanics\footnote{1d $\mathcal{N}=(0,4)$ supermultiplets and Lagrangians can be constructed as the dimensional reduction of the 2d $\mathcal{N}=(0,4)$ supermultiplets and Lagrangians. See \cite{Hwang:2014uwa,Kim:2016qqs} for a detailed presentation of the (0,4) ADHM quiver data. See e.g. \cite{Tong:2014yna,Putrov:2015jpa} for details on 2d (0,4) supersymmetry.\label{foot:1d04}} which can be read off from the brane realization of the SQM loop and which is shown in Figure \ref{SU2ADHM} for the SQM loop realized with $n$ D3 branes. The various (0,4) supermultiplets arise from the lowest modes of fundamental strings stretched between various D-branes. We have a $U(k)$ gauge theory with a vector multiplet and an adjoint hypermultiplet (both symbolized by a circle in the figure), $2$ fundamental hypermultiplets (continuous line), $n$ fundamental twisted hypermultiplets and $n$ Fermi multiplets with two complex fermions (doubled continuous-dashed lines), and $2n$ uncharged Fermi multiplets with a single fermion (dashed line). In addition there are potential terms (J and E terms) required by (0,4) supersymmetry and other potentials coupling 1d and 5d fields\footnote{We did not study in detail the form of the $J$ and $E$ terms. Being Q-exact, they do not affect the computations, except for the identifications of 1d and 5d flavor symmetries which are implicit in Appendix \ref{app:ADHMcomputations}. The $J$ and $E$ terms ensuring $\mathcal{N}=(0,4)$ supersymmetry can be found e.g. in \cite{Tong:2014yna}.}.
The flavor symmetries of the ADHM theory are $SU(2)\times U(n)_f$ with fugacities $\alpha$ for $SU(2)$, identified with the global $SU(2)$ gauge transformations of the 5d theory, and $x_{i=1,\cdots, n}$ for $U(n)_f$. Closely related ADHM quantum mechanics were already considered in \cite{Tong:2014yna,Tong:2014cha,Kim:2016qqs,Agarwal:2018tso} in relation to Wilson loops in 5d $\mathcal{N}=1^\ast$ theories.
\begin{figure}[th]
\centering
\includegraphics[scale=0.4]{SU2ADHM.pdf}
\vspace{-0.5cm}
\caption{\footnotesize{Brane setup for the $k$-instanton sector in the presence of the SQM loop and associated $\mathcal{N}=(0,4)$ ADHM quiver SQM. The circle denotes a $U(k)$ vector multiplet with an adjoint hypermultiplet, the continuous line a bifundamental hypermultiplet, the dashed line a Fermi multiplet with single fermion and the mixed-doubled line a twisted hypermultiplet and a Fermi multiplet with two fermions. The $SU(2)$ flavor symmetry is gauged with 5d fields in the 5d-1d theory.}}
\label{SU2ADHM}
\end{figure}
We relegate the details of the computations to appendix \ref{app:ADHMcomputations}. It is still worth mentioning that we obtain our results by first considering the 5d $U(2)$ gauge theory with fugacities $\alpha_1=e^{a_1}, \alpha_2=e^{a_2}$, and then projecting onto the $SU(2)$ theory by imposing the traceless condition $a_1 = -a_2 = a$ with $\alpha=e^{a}$. There are additional subtleties to this procedure that arise when including matter hypermultiplets (see the next sections and appendix \ref{app:ADHMcomputations}), and we follow \cite{Bergman:2013aca} for the precise method.
To keep the formulas short we show some results only at one-instanton order, although we computed them up to three-instanton order.
For the $n=1$ SQM loop (single D3 brane), we find
\begin{equation}
\vev{L^{n=1}_{\rm SQM}} = x - \left( \alpha + \alpha^{-1} - Q \frac{q_1 q_2(\alpha + \alpha^{-1})}{(1-\alpha^2 q_1 q_2)(1-\alpha^{-2} q_1 q_2)} + O(Q^2) \right) + x^{-1} \,.
\label{Zn1}
\end{equation}
It is a Laurent polynomial in the $U(1)_f$ fugacity $x$. We can easily relate the various terms in this polynomial to contributions from strings in the brane setup with a single D3 brane (Figure \ref{D3SQM}). In particular, the terms $x$ and $x^{-1}$ can be associated to the contributions with one string stretched from the D3 (placed in the middle) to the upper and to the lower D5 respectively; moving the D3 brane to the top, respectively to the bottom, of the brane web and taking into account the string annihilation effect we observe that the D3 brane decouples from the 5-brane array, explaining the almost trivial contribution to the SQM loop (no instanton correction). The counting parameters $x$ and $x^{-1}$ can be associated to the presence of fluxes induced by the D5 brane on the D3 worldvolume \cite{Hanany:1996ie}: with a D3 at (exponentiated) position $x$ and a D5 at (exponentiated) position $y$ we associate a classical contribution $\sqrt{x/y}$ or $\sqrt{y/x}$ if the D3 is above or below the D5. In addition, if a string is stretched from the D3 to the D5 we add a factor $y/x$ or $x/y$ if the D3 is above or below the D5. These rules ensure that the contribution of a given configuration of strings is invariant under Hanany-Witten moves of the D3 brane along $x^6$. Using these rules we understand the four classical contributions $x = \sqrt{\alpha/x}\sqrt{x\alpha}(x/\alpha)$, $x^{-1} = \sqrt{\alpha/x}\sqrt{x\alpha}(1/x\alpha)$, $\alpha = \sqrt{\alpha/x}\sqrt{x\alpha}$ and $\alpha^{-1} = \sqrt{\alpha/x}\sqrt{x\alpha}(x/\alpha)(1/x\alpha)$ as associated to the four possible string configurations (1,0), (0,1), (0,0) and (1,1) discussed in Section \ref{ssec:Loops} (the positions of the two D5s are $y=\alpha$ and $y=1/\alpha$). The other terms are instanton terms and only affect the sectors (0,0),(1,1), namely the sector neutral under $U(1)_f$, that we recognized as the fundamental Wilson loop.
Following our prescription \eqref{SU2RelFund} we can extract the fundamental Wilson loop $\vev{W_{\mathbf{2}}}$ by taking a residue over $x$, which selects the contributions from $U(1)_f$ neutral states,
\begin{equation}
\vev{W_{\mathbf{2}}} = - \oint \frac{dx}{2\pi i x} \vev{L^{n=1}_{\rm SQM}}(x) \,.
\end{equation}
Here we have fixed the coefficient in the relation to $-1$, so that the classical contribution to the Wilson loop matches usual conventions.
This leads to
\begin{equation}
\vev{W_{\mathbf{2}}} = \alpha + \alpha^{-1} - Q \frac{q_1 q_2(\alpha + \alpha^{-1})}{(1-\alpha^2 q_1 q_2)(1-\alpha^{-2} q_1 q_2)} + O(Q^2) \,.
\label{W2}
\end{equation}
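As a toy illustration of this residue prescription, the $U(1)_f$-neutral projection can be carried out on the zero-instanton part of \eqref{Zn1}, namely $x - (\alpha+\alpha^{-1}) + x^{-1}$, encoded as a Laurent polynomial. This is only a bookkeeping sketch of the constant-term extraction, not the localization computation itself:

```python
# Laurent polynomials in (x, alpha) as {(x_exponent, alpha_exponent): coeff}.
# Zero-instanton part of <L_SQM^{n=1}> = x - (alpha + 1/alpha) + 1/x:
L1 = {(1, 0): 1, (0, 1): -1, (0, -1): -1, (-1, 0): 1}

def u1f_neutral(poly):
    """Project onto the U(1)_f neutral sector: keep only x^0 monomials,
    returned as a Laurent polynomial in alpha alone."""
    return {ea: c for (ex, ea), c in poly.items() if ex == 0}

# <W_2> = - oint dx/(2 pi i x) <L_SQM^{n=1}> picks minus the x^0 part:
W2 = {ea: -c for ea, c in u1f_neutral(L1).items()}
assert W2 == {1: 1, -1: 1}   # alpha + 1/alpha: the classical SU(2) character
```

The instanton corrections sit entirely in the neutral sector, so the same projection applied to the full series returns the whole of \eqref{W2}.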
\medskip
We can now look at larger values of $n$, where the SQM loop is defined by coupling $n$ fundamental fermions to the 5d $SU(2)$ theory (Figure \ref{D3SQM2}), with $n$ flavor fugacities $x_i$.
For $n=2$ (two D3 branes) the SQM loop evaluates to
\begin{equation}
\begin{split}
\vev{L^{n=2}_{\text{SQM}}} & = x_1 x_2 + x_1^{-1} x_2^{-1} + x_1 x_2^{-1} + x_1^{-1} x_2
- (x_1 + x_2 + x_1^{-1} + x_2^{-1}) \vev{W_{\mathbf{2}}} \\
& \quad + \vev{W_{\mathbf{2} \,\otimes \mathbf{2}}} - Q \dfrac{(1 - q_1)(1 - q_2)(1 + q_1 q_2) x_1 x_2}{(x_1 - q_1 q_2 x_2)(x_2 - q_1 q_2 x_1)} \,,
\label{Zn2}
\end{split}
\end{equation}
where we have identified the contribution $\vev{W_{\mathbf{2}}}$, given by \eqref{W2}, and the contribution $\vev{W_{\mathbf{2} \,\otimes \mathbf{2}}}$ for the Wilson loop in the tensor product representation $\mathbf{2} \,\otimes \mathbf{2}$, with
\begin{equation}
\begin{aligned}
\vev{W_{\mathbf{2} \,\otimes \mathbf{2}}} &= \oint_{\mathcal{C}} \dfrac{dx_1}{2\pi i x_1} \dfrac{dx_2}{2\pi i x_2} \vev{L^{n=2}_{\rm SQM}}(x_1, x_2) \cr
&= \alpha^2 + 2 + \alpha^{-2}
+ Q \dfrac{(1-q_1)(1-q_2)(1+q_1 q_2) - 2 q_1 q_2 (\alpha^2 + 2 + \alpha^{-2})}{(1-\alpha^2 q_1 q_2)(1-\alpha^{-2} q_1 q_2)} + O(Q^2) \,,
\label{W22}
\end{aligned}
\end{equation}
with the contour $\mathcal{C}$ for $x_1,x_2$ being circles around the origin with radii such that
$|x_2| < q_1q_2 |x_1|$ and $|x_2| < (q_1q_2)^{-1}|x_1|$ (see explanation below).
Here again the classical contributions to $\vev{L^{n=2}_{\text{SQM}}}$ (zero-instanton level) can be understood as associated to the possible configurations of strings stretched between the two D3s and the two D5s. The Wilson loop VEV $\vev{W_{\mathbf{2} \,\otimes \mathbf{2}}}$ corresponds, according to our prescription \eqref{SU2RelTensProd}, to the $U(1)_f^2$ invariant sector, which can be isolated by taking the residue over the two fugacities $x_1,x_2$ \eqref{W22}. Indeed we recognize the classical contribution as that of the $\mathbf{2} \,\otimes \mathbf{2}$ $SU(2)$ character.
The appearance of the fundamental Wilson loop $\vev{W_{\mathbf{2}}}$ can be understood as the contribution from string configurations where one D3 brane has a single string attached. We can move and decouple such a D3 brane from the brane web, leaving a single D3 in the middle of the web, sourcing a fundamental Wilson loop. There are four such configurations (with one D3 in the middle and one D3 moved outside) corresponding to the four factors $\vev{W_{\mathbf{2}}}$ appearing in \eqref{Zn2}.
In addition to the classical and Wilson loop factors there is an extra contribution in $\vev{L^{n=2}_{\text{SQM}}}$ at one-instanton level (but not at higher instanton levels) in the form of a rational function of $x_1,x_2$. We notice that this term has poles at $x_2/x_1 = q_1q_2$ and $x_2/x_1 = (q_1q_2)^{-1}$. We interpret this term in the string/brane language as arising from the motion of a D1 segment stretched between the two D3 branes. Indeed, such modes have (exponentiated) mass parameters $(x_2/x_1)^{\pm 1}$ when the D3 branes are in flat space (corresponding to $q_1q_2=1$), explaining the presence of the poles. They contribute to the VEV of a line operator in the D3 brane theory\footnote{By taking a residue over $\alpha$ we can isolate this extra factor and recognize it as a monopole bubbling contribution for an 't Hooft loop of minimal magnetic charge in the 4d $U(2)$ SYM theory living on the D3 branes, with $x_1,x_2$ identified with the 4d Coulomb branch parameters (see \cite{Ito:2011ea,Brennan:2018yuj,Brennan:2018moe}).}
and should a priori not contribute to the Wilson loop VEV of the 5d theory that we would like to compute.
If we take a naive contour of integration $\mathcal{C}$ as two unit circles, we would pick a residue contribution from these terms at $x_2 = (q_1q_2)^{\pm 1}x_1$. Based on the above discussion, we believe that these residues should be excluded. One way to achieve this is to define the contour $\mathcal{C}$ as described above. We illustrate this in Figure \ref{contour}. This choice provides a consistent picture in the study of S-duality in the later sections.
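The effect of the contour choice can be illustrated numerically on the rational structure of this extra term, stripped down to $x_1 x_2/((x_1 - q x_2)(x_2 - q x_1))$ with $q$ standing for $q_1 q_2$ (the $-Q(1-q_1)(1-q_2)(1+q_1q_2)$ prefactor is omitted and $q=0.5$ is an arbitrary illustrative value). With $|x_2| < q|x_1|$ the term integrates to zero, while unit-circle-like radii pick up the unwanted residue at $x_2 = q x_1$:

```python
import cmath

def double_contour(f, r1, r2, N=200):
    """Approximate oint dx1/(2*pi*i*x1) oint dx2/(2*pi*i*x2) f(x1, x2) over
    circles |x1| = r1, |x2| = r2 by averaging over equally spaced points
    (trapezoidal rule, exponentially accurate for integrands with poles
    away from the circles)."""
    s = 0j
    for a in range(N):
        x1 = r1 * cmath.exp(2j * cmath.pi * a / N)
        for b in range(N):
            x2 = r2 * cmath.exp(2j * cmath.pi * b / N)
            s += f(x1, x2)
    return s / N**2

q = 0.5   # stands for q1*q2; arbitrary illustrative value with 0 < q < 1
f = lambda x1, x2: x1 * x2 / ((x1 - q * x2) * (x2 - q * x1))

# |x2| < q |x1|: no pole of the x2 integrand is enclosed -> the term drops out
assert abs(double_contour(f, 1.0, 0.25)) < 1e-6
# q < |x2|/|x1| < 1/q: the residue at x2 = q*x1 contributes 1/(1 - q^2)
assert abs(double_contour(f, 1.0, 0.8) - 1 / (1 - q**2)) < 1e-6
```

This matches the analytic statement: only the nested-radii contour removes the D1-segment contribution from the Wilson loop VEV.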
\begin{figure}[th]
\centering
\includegraphics[scale=0.7]{contour.pdf}
\vspace{-0.5cm}
\caption{\footnotesize{Contour $\mathcal{C}$ (red) for the $x_2$ integration, assuming $x_1$ is integrated (after $x_2$) on the unit circle. On the left: we exclude the residue at $x_2 = q_1q_2 x_1$ (keeping only the residue at zero). This can be achieved by choosing the integration circle on the right.}}
\label{contour}
\end{figure}
\medskip
The method generalizes to any $n$. The Wilson loop in the tensor product representation $\mathbf{2}^{\otimes n}$ is extracted from the SQM loop by the residue computation
\begin{equation}
\vev{W_{\mathbf{2}^{\otimes n}}} = (-1)^n \oint_{\mathcal{C}} \prod_{i=1}^n \frac{dx_i}{2\pi i x_i} \vev{L^{n}_{\text{SQM}}}(x_1, \ldots, x_n) \,,
\label{residueformula}
\end{equation}
with a contour $\mathcal{C}$ around the origin such that poles at $x_i = (q_1q_2)^{\pm 1} x_j$ are excluded. For instance one can pick contours as circles around zero with radii such that $|x_{i+1}| < (q_1q_2)^{\pm 1} |x_i|$, $i=1,\cdots, n-1$. This reproduces the prediction from the heuristic brane argument \eqref{SU2RelTensProd}.
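At zero-instanton order the $n=1,2$ results above suggest that the SQM loop factorizes as $\prod_{i}(x_i - \alpha - \alpha^{-1} + x_i^{-1})$. Assuming this factorized classical form (an extrapolation from the displayed cases), a short sketch checks that the residue formula then reproduces the classical character $(\alpha+\alpha^{-1})^n$, here for $n=3$:

```python
from collections import defaultdict

n = 3   # number of D3 branes

def laurent_mul(p, q):
    """Multiply Laurent polynomials stored as {exponent_tuple: coeff}."""
    r = defaultdict(int)
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[tuple(a + b for a, b in zip(e1, e2))] += c1 * c2
    return dict(r)

def factor(i):
    """x_i - alpha - 1/alpha + 1/x_i, exponents (e_{x_1},...,e_{x_n}, e_alpha)."""
    f = {}
    for ex, ea, c in [(1, 0, 1), (-1, 0, 1), (0, 1, -1), (0, -1, -1)]:
        e = [0] * (n + 1)
        e[i], e[n] = ex, ea
        f[tuple(e)] = c
    return f

L = factor(0)
for i in range(1, n):
    L = laurent_mul(L, factor(i))

# (-1)^n times the x-neutral part should give the classical character chi^n:
W = {e[n]: (-1)**n * c for e, c in L.items() if all(v == 0 for v in e[:n])}
assert W == {3: 1, 1: 3, -1: 3, -3: 1}   # (alpha + 1/alpha)^3
```

Each factor contributes its $-\alpha-\alpha^{-1}$ piece to the $x$-neutral sector, so the sign $(-1)^n$ in the residue formula is exactly what restores the positive character.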
Let us give one more explicit result, for $n=3$,
\begin{equation}
\begin{aligned}
& \vev{W_{\mathbf{2} \,\otimes \mathbf{2} \, \otimes \mathbf{2}}} = - \oint_{\mathcal{C}} \dfrac{dx_1}{2\pi i x_1} \dfrac{dx_2}{2\pi i x_2} \dfrac{dx_3}{2\pi i x_3} \vev{L_{\text{SQM}}^{n = 3}}(x_1, x_2, x_3) \cr
& \quad = \alpha^{3} + 3 \alpha + 3 \alpha^{-1} + \alpha^{-3} \cr
& \quad \ + Q (\alpha + \alpha^{-1}) \dfrac{(1 - q_1)(1 - q_2)(2 + q_1 + q_2 + 2 q_1 q_2) - 3 q_1 q_2 (\alpha^2 + 2 + \alpha^{-2})}{(1 - \alpha^2 q_1 q_2)(1 - \alpha^{-2} q_1 q_2)} + O(Q^2) \,.
\end{aligned}
\end{equation}
The fact that we always recover the correct classical part for the Wilson loop VEVs is a confirmation of the validity of our residue procedure.
\medskip
From the evaluation of the Wilson loops in the tensor product representation $\mathbf{2}^{\otimes n}$, one can compute Wilson loops in any representation. For instance the Wilson loop in the rank two symmetric representation (spin 1) is given by $W_{\mathcal{S}_2} = W_{\mathbf{2} \,\otimes \mathbf{2}} - W_{\mathcal{A}_2} = W_{\mathbf{2} \,\otimes \mathbf{2}} -1$, where we used the fact that the rank two antisymmetric representation $\mathcal{A}_2$ is trivial.
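The character identity behind this relation, $\chi_{\mathbf{2}}^2 = \chi_{\mathbf{3}} + \chi_{\mathbf{1}}$ (i.e. $\mathbf{2}\otimes\mathbf{2} = \mathbf{3}\oplus\mathbf{1}$), can be checked numerically with a trivial sketch:

```python
def chi(dim, a):
    """SU(2) character of the dim-dimensional irrep at fugacity a:
    chi_dim(a) = a^(dim-1) + a^(dim-3) + ... + a^(1-dim)."""
    return sum(a**(dim - 1 - 2 * k) for k in range(dim))

a = 1.7   # any nonzero test value
# 2 (x) 2 = 3 (+) 1, hence W_{2 (x) 2} = W_{S_2} + 1 at the classical level:
assert abs(chi(2, a)**2 - (chi(3, a) + chi(1, a))) < 1e-12
```

The same decompositions of tensor powers into irreducibles are what convert the $\mathbf{2}^{\otimes n}$ results into loops in arbitrary representations.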
\medskip
Although we will focus only on Wilson loops in tensor product representations in this paper, we can also compute directly the VEV of Wilson loops in rank $n$ symmetric representations $\mathcal{S}_n$, which are simply the irreducible spin $n/2$ representations of $SU(2)$, by a different residue prescription. Following the logic of section \ref{ssec:Loops} we expect that such Wilson loops can be extracted from the SQM loop $\vev{L^{n}_{\rm SQM}}$ by projecting onto the $U(n)$ invariant sector. This is achieved by computing the residue in $x_1,x_2,\cdots, x_n$ with the $U(n)$ Haar measure,
\begin{equation}
\vev{W_{\mathcal{S}_n}} = (-1)^n \oint_{\mathcal{C}} \prod_{i=1}^n \dfrac{dx_i}{2\pi i x_i} \frac{1}{n!}\prod_{i\neq j} \big( 1 - \frac{x_i}{x_j} \big)\, \vev{L^{n}_{\text{SQM}}}(x_1, \ldots, x_n) \,.
\label{residueformula2}
\end{equation}
Once again we define the contour $\mathcal{C}$ as unit circles with residues at $x_i = (q_1q_2)^{\pm 1} x_j$ removed. In explicit computations we recover, for instance, the identity $W_{\mathbf{2} \,\otimes \mathbf{2}} = 1 + W_{\mathcal{S}_2}$.
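As a consistency check of the Haar-measure prescription, applying it to the zero-instanton part of \eqref{Zn2} should return the classical spin-1 character $\alpha^2 + 1 + \alpha^{-2}$, consistent with $W_{\mathbf{2}\,\otimes\mathbf{2}} = 1 + W_{\mathcal{S}_2}$. A sketch with exact rational arithmetic ($n=2$, so $(-1)^n = +1$):

```python
from collections import defaultdict
from fractions import Fraction

def laurent_mul(p, q):
    """Multiply Laurent polynomials stored as {exponent_tuple: coeff}."""
    r = defaultdict(Fraction)
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[tuple(a + b for a, b in zip(e1, e2))] += c1 * c2
    return dict(r)

# Zero-instanton part of <L_SQM^{n=2}>, exponents (e_{x1}, e_{x2}, e_alpha):
L = {(1, 1, 0): 1, (-1, -1, 0): 1, (1, -1, 0): 1, (-1, 1, 0): 1,  # x1, x2 terms
     (0, 0, 2): 1, (0, 0, 0): 2, (0, 0, -2): 1}                   # (alpha+1/alpha)^2
for ex1, ex2 in [(1, 0), (0, 1), (-1, 0), (0, -1)]:               # -(sum x) * chi
    for ea in (1, -1):
        L[(ex1, ex2, ea)] = -1

# U(2) Haar weight (1/2)(1 - x1/x2)(1 - x2/x1) = 1 - (x1/x2 + x2/x1)/2:
haar = {(0, 0, 0): Fraction(1), (1, -1, 0): Fraction(-1, 2),
        (-1, 1, 0): Fraction(-1, 2)}

proj = laurent_mul(L, haar)
W_S2 = {e[2]: c for e, c in proj.items() if e[:2] == (0, 0)}
assert W_S2 == {2: 1, 0: 1, -2: 1}   # alpha^2 + 1 + alpha^{-2} = chi_{S_2}
```

The Haar weight removes exactly one unit from the constant term relative to the plain residue, which is the character-theoretic content of $W_{\mathbf{2}\,\otimes\mathbf{2}} = 1 + W_{\mathcal{S}_2}$.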
\begin{figure}[th]
\centering
\includegraphics[scale=0.4]{SU2alternative.pdf}
\vspace{-0.5cm}
\caption{\footnotesize{A possible alternative brane realization of the 5d $\mathcal{N} = 1$ pure $SU(2)$ theory.}}
\label{SU2alt}
\end{figure}
Before concluding this section, for the sake of completeness, we should also mention that different string theory realizations of the 5d $\mathcal{N} = 1$ pure $SU(2)$ theory appear to have different SQM loop operators, but the same Wilson loop observables. Consider for example the brane configuration in Figure \ref{SU2alt}: as argued in \cite{Bergman:2013ala,Bergman:2013aca,Bao:2013pwa,Hayashi:2013qwa} this describes the same pure $SU(2)$ theory as Figure \ref{SU2}, after removing the contribution of extra decoupled states associated to the parallel external NS5-branes.\footnote{In computations, the difference arises because one starts from $U(2)$ with Chern-Simons level $\kappa=-2$ rather than $\kappa=0$, before projecting to $SU(2)$.} The partition functions of the two brane configurations coincide, modulo a factor which is independent of the $SU(2)$ gauge fugacity $\alpha$ but only depends on the instanton fugacity $Q$ \cite{Bergman:2013aca,Kim:2012gu,Hwang:2014uwa}. The situation is somewhat similar, although slightly more complicated, for our SQM loop operator. For example, when adding one D3 brane the configuration in Figure \ref{SU2alt} gives
\begin{equation}
\vev{L_{\text{SQM}}^{n=1}} = x (1 + Q) - \vev{W_{\mathbf{2}}} + x^{-1} \,,
\end{equation}
with $\vev{W_{\mathbf{2}}}$ as in \eqref{W2}. Comparing with \eqref{Zn1} we see that the only difference appears in the $x$ sector, which receives a single instanton correction (due to the interaction between D1 strings stretched along the parallel external NS5-branes and the D3 inside of them), while the fundamental $SU(2)$ Wilson loop is the same. With two D3 branes we find instead
\begin{equation}
\begin{aligned}
\vev{L_{\text{SQM}}^{n=2}} & = x_1 x_2 (1 + Q)^2 + x_1^{-1} x_2^{-1} + (x_1 x_2^{-1} + x_1^{-1} x_2)(1 + Q)
- (x_1 + x_2)(1 + Q) \vev{W_{\mathbf{2}}} \\
& - (x_1^{-1} + x_2^{-1}) \vev{W_{\mathbf{2}}} + \vev{W_{\mathbf{2} \,\otimes \mathbf{2}}} - Q \dfrac{(1 - q_1)(1 - q_2)(1 + q_1 q_2) x_1^2 x_2^2}{(x_1 - q_1 q_2 x_2)(x_2 - q_1 q_2 x_1)} \,,
\end{aligned}
\end{equation}
with $\vev{W_{\mathbf{2}}}$, $\vev{W_{\mathbf{2} \,\otimes \mathbf{2}}}$ as in \eqref{W2}, \eqref{W22} respectively. Comparing with \eqref{Zn2} we again notice that although the sectors involving positive powers of $x_1$, $x_2$ receive $Q$ corrections (and the extra rational function is also slightly modified), the Wilson loops still coincide. A similar pattern can be observed at higher numbers of D3 branes, as well as in more complicated theories. It is however not clear to us whether only one SQM loop VEV is the correct result, or whether the different options correspond to several SQM loops in the $SU(2)$ theory.
\subsection{S-duality of Wilson loops}
\label{SdualitySU2loops}
As we explained in the previous section the pure $SU(2)$ theory is self-dual under S-duality with the exchange of massive parameters $2a \leftrightarrow 2a+ t$, which is the map $(2a,t) \to (2a + t, -t)$. We see here that S-duality relates the theory at coupling $t$ to the theory at coupling $-t$, i.e. at negative $\frac{1}{g^2}$. It is not obvious how to make sense of the 5d theory at negative $t$. One needs to analytically continue the theory to negative $t$, assuming that observables are holomorphic in $t$. This may be possible, however we only need to assume something weaker, which is that the theory is well-defined as long as the effective coupling $t+2a$ is positive, which can be seen as a constraint on the space of vacua ($a > -t/2$). This condition ensures for instance that instantons on the Coulomb branch have positive mass.
It is convenient to introduce the exponentiated parameters, or ``fugacities",
\begin{equation}
Q_F = e^{-2a} \,, \quad Q_B = e^{-t - 2a} \,;
\end{equation}
in terms of these variables the S-duality map is
\begin{equation}
\underline{\text{S-duality map}}: \quad (Q_F,Q_B) \to (Q_B, Q_F) \,.
\end{equation}
The terminology $Q_F,Q_B$ refers to the fiber-base duality of toric Calabi-Yau three-folds, realizing the 5d SCFTs in M-theory, studied in \cite{Mitev:2014jza}. The M-theory realization is dual to the type IIB brane realization and the fiber-base duality of the Calabi-Yaus is the S-duality that we want to study.
\medskip
In the previous section we evaluated the Wilson loop VEVs in a small $Q=Q_B/Q_F$ expansion. To check S-duality we should further expand in small $Q_F$ and write the result as a double expansion in $Q_F,Q_B$. We find\footnote{We use the evaluation of the Wilson loop up to three instantons, which we did not explicitly write in the previous section.}
\begin{equation}
\begin{split}
Q_F^{1/2} \vev{W_{\mathbf{2}}} & = 1 + Q_F + Q_B + \chi^{A_1}_{\mathbf{3}}(q_+) Q_F Q_B
+ \chi^{A_1}_{\mathbf{5}}(q_+) Q_F Q_B (Q_F + Q_B) \\
& + Q_F Q_B (Q_F^2 + Q_F Q_B + Q_B^2) \chi^{A_1}_{\mathbf{7}}(q_+) \\
& + Q_F^2 Q_B^2 \Big( \chi^{A_1}_{\mathbf{7}}(q_+) + \chi^{A_1}_{\mathbf{5}}(q_+) + \chi^{A_1}_{\mathbf{2}}(q_-) \chi^{A_1}_{\mathbf{8}}(q_+) \Big) + \ldots ,
\end{split}
\end{equation}
\begin{equation}
\begin{split}
Q_F \vev{W_{\mathbf{2} \,\otimes \mathbf{2}}} &= 1 + 2(Q_F + Q_B) + \left( Q_F^2 + Q_F Q_B + Q_B^2 \right)
+ \Big( \chi^{A_1}_{\mathbf{3}}(q_+) + \chi^{A_1}_{\mathbf{2}}(q_+) \chi^{A_1}_{\mathbf{2}}(q_-) \Big) Q_F Q_B \\
& + Q_F Q_B (Q_F + Q_B) \Big( \chi^{A_1}_{\mathbf{5}}(q_+) + \chi^{A_1}_{\mathbf{3}}(q_+) + \chi^{A_1}_{\mathbf{4}}(q_+) \chi^{A_1}_{\mathbf{2}}(q_-) \Big) \\
& + Q_F Q_B (Q_F^2 + Q_F Q_B + Q_B^2) \Big( \chi^{A_1}_{\mathbf{7}}(q_+) + \chi^{A_1}_{\mathbf{5}}(q_+) + \chi^{A_1}_{\mathbf{6}}(q_+) \chi^{A_1}_{\mathbf{2}}(q_-) \Big) \\
& + Q_F^2 Q_B^2 \Big( \chi^{A_1}_{\mathbf{8}}(q_+) \chi^{A_1}_{\mathbf{2}}(q_-) + \chi^{A_1}_{\mathbf{6}}(q_+) \chi^{A_1}_{\mathbf{2}}(q_-) + \chi^{A_1}_{\mathbf{4}}(q_+) \chi^{A_1}_{\mathbf{2}}(q_-)
\\
& \hspace{1.8 cm} + \chi^{A_1}_{\mathbf{7}}(q_+) \chi^{A_1}_{\mathbf{3}}(q_-) + \chi^{A_1}_{\mathbf{7}}(q_+) + 2 \chi^{A_1}_{\mathbf{5}}(q_+) + 1 \Big) + \ldots,
\end{split}
\end{equation}
\begin{equation}
\begin{split}
Q_F^{3/2} \vev{W_{\mathbf{2} \,\otimes \mathbf{2} \, \otimes \mathbf{2}}} & = 1 + 3(Q_F + Q_B) + 3 (Q_F^2 + Q_F Q_B + Q_B^2) \\
& + \Big( \chi^{A_1}_{\mathbf{3}}(q_+) + \chi^{A_1}_{\mathbf{3}}(q_-) + \chi^{A_1}_{\mathbf{2}}(q_+) \chi^{A_1}_{\mathbf{2}}(q_-) + 2 \Big) Q_F Q_B \\
& + \Big( \chi^{A_1}_{\mathbf{5}}(q_+) + 3 \chi^{A_1}_{\mathbf{3}}(q_+) + \chi^{A_1}_{\mathbf{3}}(q_+) \chi^{A_1}_{\mathbf{3}}(q_-) + \chi^{A_1}_{\mathbf{4}}(q_+) \chi^{A_1}_{\mathbf{2}}(q_-) \\
& \;\;\;\;\;\; + \chi^{A_1}_{\mathbf{2}}(q_+) \chi^{A_1}_{\mathbf{2}}(q_-) \Big)
Q_F Q_B (Q_F + Q_B) + \Big(Q_F^3 + Q_F^2 Q_B + Q_F Q_B^2 + Q_B^3\Big) \\
& + \ldots.
\end{split}
\end{equation}
Here the $\chi^{A_1}_{\mathbf{n}}$ denote $SU(2) \, (\sim A_1)$ characters for various representations. Indeed $q_+$ and $q_-$ are fugacities for two $SU(2)$ symmetries of the theory: $SU(2)_{\rm diag}=$diag$(SU(2)_2\times SU(2)_R)$ and $SU(2)_1$ respectively.
\medskip
Every term in the above expansions is invariant under the S-duality map $Q_F \leftrightarrow Q_B$. However we had to multiply each Wilson loop by a factor $Q_F^{n/2}$ to obtain this result. We therefore have the identity
\begin{equation}
\vev{W_{\mathbf{2}^{\otimes n}}} (Q_F,Q_B) = \left(\frac{Q_B}{Q_F}\right)^{n/2} \vev{W_{\mathbf{2}^{\otimes n}}} (Q_B,Q_F) \,.
\end{equation}
This means that the Wilson loops are not invariant under S-duality, but rather covariant with the transformation
\begin{equation}
S. W_{\mathbf{2}^{\otimes n}}(a,t) = e^{-\frac{nt}{2}} \, W_{\mathbf{2}^{\otimes n}}(a+t/2,-t) \,,
\label{StransfoNf0}
\end{equation}
with ``$S.$" denoting the action of S-duality. In the CFT limit $t\to 0$, the Wilson loops become invariant under S-duality.
The multiplicative factor $e^{-\frac{nt}{2}}$ can be interpreted as a background Wilson loop of charge $-n$ for the $U(1)_{\rm inst}$ global symmetry associated with the instanton charge.
\medskip
We thus find that Wilson loops in tensor product representations $\mathbf{2}^{\otimes n}$ transform covariantly under S-duality. From here we can deduce the transformation of Wilson loops in any representation. What we find is that in general Wilson loops do not transform covariantly, but rather pick up an inhomogeneous part in the transformation. In particular all the Wilson loops in spin $n$ representations are mapped to combinations of Wilson loops involving various representations with different multiplicative factors.
\medskip
\noindent{\bf $E_1$ symmetry}
\medskip
This is not the whole story since the pure $SU(2)$ theory is conjectured to have an $E_1 = SU(2)_I$ global symmetry in the CFT limit ($t=0$), enhanced from the $U(1)_{\rm inst}$ symmetry, and
S-duality should correspond to the $\mathbb{Z}_2$ Weyl transformation in $SU(2)_I$. To make the $SU(2)_I$ symmetry manifest
one should introduce a different set of variables,
\begin{equation}
A = e^{-\frac{t}{4} - a} \,, \quad y = e^{\frac{t}{2}} = Q^{-\frac 12} \,.
\end{equation}
The parameters $Q_F, Q_B$ are re-expressed as
$Q_F = A^2 y$ and $Q_B = \frac{A^2}{y}$. The S-duality (or Weyl transformation) then corresponds to
\begin{equation}
\underline{\text{S-duality map}}: \quad (A,y) \to (A, y^{-1}) \,.
\end{equation}
The parameter $y$ is the $SU(2)_I$ fugacity.
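The equivalence between the two forms of the S-duality map can be verified with elementary arithmetic: the transformation $(2a,t)\to(2a+t,-t)$ swaps $(Q_F,Q_B)$ and acts on the new variables as $(A,y)\to(A,y^{-1})$. A quick numerical sketch at arbitrary illustrative values of $a$ and $t$:

```python
import math

a, t = 0.3, 0.7   # arbitrary Coulomb/coupling values for illustration

def fugacities(a, t):
    QF, QB = math.exp(-2 * a), math.exp(-t - 2 * a)
    A, y = math.exp(-t / 4 - a), math.exp(t / 2)
    return QF, QB, A, y

QF, QB, A, y = fugacities(a, t)
assert abs(QF - A**2 * y) < 1e-12 and abs(QB - A**2 / y) < 1e-12

# S-duality acts as (2a, t) -> (2a + t, -t):
a2, t2 = a + t / 2, -t
QF2, QB2, A2, y2 = fugacities(a2, t2)
assert abs(QF2 - QB) < 1e-12 and abs(QB2 - QF) < 1e-12   # (QF, QB) swapped
assert abs(A2 - A) < 1e-12 and abs(y2 - 1 / y) < 1e-12   # (A, y) -> (A, 1/y)
```

So in the $(A,y)$ variables the Weyl transformation fixes $A$ and inverts the $SU(2)_I$ fugacity, as stated.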
Expanding observables in powers of $A^2$, one expects coefficients $f_n(y)$ which are $SU(2)$ characters.
This was checked at the level of the $S^1 \times \mathbb{R}^4_{\epsilon_1,\epsilon_2}$ partition function or ``half-index" in \cite{Mitev:2014jza} at the first few orders in $A$, using the topological vertex formalism.
Expanding the Wilson loops in this new set of parameters we find
\begin{equation}
\begin{split}
A y^{1/2} \vev{W_{\mathbf{2}}} & = 1 + \chi^{A_1}_{\mathbf{2}}(y) A^2
+ \chi^{A_1}_{\mathbf{3}}(q_+) A^4 + \chi^{A_1}_{\mathbf{2}}(y)
\chi^{A_1}_{\mathbf{5}}(q_+) A^6 \\[6 pt]
& + \Big( \chi^{A_1}_{\mathbf{3}}(y) \chi^{A_1}_{\mathbf{7}}(q_+) + \chi^{A_1}_{\mathbf{7}}(q_+) + \chi^{A_1}_{\mathbf{5}}(q_+) + \chi^{A_1}_{\mathbf{2}}(q_-) \chi^{A_1}_{\mathbf{8}}(q_+) \Big) A^8 + \ldots,
\end{split}
\end{equation}
\begin{equation}
\begin{split}
A^2 y \vev{W_{\mathbf{2} \,\otimes \mathbf{2}}} & = 1 + 2 \chi^{A_1}_{\mathbf{2}}(y) A^2 + \Big( \chi^{A_1}_{\mathbf{3}}(y) + \chi^{A_1}_{\mathbf{3}}(q_+) + \chi^{A_1}_{\mathbf{2}}(q_+) \chi^{A_1}_{\mathbf{2}}(q_-) \Big) A^4 \\
& + \chi^{A_1}_{\mathbf{2}}(y) \Big( \chi^{A_1}_{\mathbf{5}}(q_+) + \chi^{A_1}_{\mathbf{3}}(q_+) + \chi^{A_1}_{\mathbf{4}}(q_+) \chi^{A_1}_{\mathbf{2}}(q_-) \Big) A^6
\\
& + \chi^{A_1}_{\mathbf{3}}(y) \Big( \chi^{A_1}_{\mathbf{7}}(q_+) + \chi^{A_1}_{\mathbf{5}}(q_+) + \chi^{A_1}_{\mathbf{6}}(q_+) \chi^{A_1}_{\mathbf{2}}(q_-) \Big) A^8
\\
& + \Big( \chi^{A_1}_{\mathbf{8}}(q_+) \chi^{A_1}_{\mathbf{2}}(q_-) + \chi^{A_1}_{\mathbf{6}}(q_+) \chi^{A_1}_{\mathbf{2}}(q_-) + \chi^{A_1}_{\mathbf{4}}(q_+) \chi^{A_1}_{\mathbf{2}}(q_-) + \chi^{A_1}_{\mathbf{7}}(q_+) \chi^{A_1}_{\mathbf{3}}(q_-) \\
& \;\;\;\;\;\; + \chi^{A_1}_{\mathbf{7}}(q_+) + 2 \chi^{A_1}_{\mathbf{5}}(q_+) + 1 \Big) A^8 + \ldots,
\end{split}
\end{equation}
\begin{equation}
\begin{split}
A^3 y^{3/2} \vev{W_{\mathbf{2} \,\otimes \mathbf{2} \, \otimes \mathbf{2}}} & =
1 + 3 \chi^{A_1}_{\mathbf{2}}(y) A^2 + \Big( 3 \chi^{A_1}_{\mathbf{3}}(y) +
\chi^{A_1}_{\mathbf{3}}(q_+) + \chi^{A_1}_{\mathbf{3}}(q_-) + \chi^{A_1}_{\mathbf{2}}(q_+) \chi^{A_1}_{\mathbf{2}}(q_-) + 2 \Big) A^4 \\
& + \chi^{A_1}_{\mathbf{2}}(y) \Big( \chi^{A_1}_{\mathbf{5}}(q_+) + 3 \chi^{A_1}_{\mathbf{3}}(q_+) + \chi^{A_1}_{\mathbf{3}}(q_+) \chi^{A_1}_{\mathbf{3}}(q_-) + \chi^{A_1}_{\mathbf{4}}(q_+) \chi^{A_1}_{\mathbf{2}}(q_-) \\
& \hspace{1.8 cm} + \chi^{A_1}_{\mathbf{2}}(q_+) \chi^{A_1}_{\mathbf{2}}(q_-) \Big) A^6
+ \chi^{A_1}_{\mathbf{4}}(y) A^6 + \ldots \,.
\end{split}
\end{equation}
The $SU(2)_I$ characters do appear, but only after multiplying the Wilson loop by a factor $(A^2 y)^{n/2}$.
\section{Loops in $SU(2)$ theories with matter}
\label{sec:SU2Nf}
The discussion of Wilson loops in the pure $SU(2)$ theory generalizes to $SU(2)$ theories with $N_f$ fundamental flavors. These are realized via 5-brane webs with extra external D5 and NS5 branes. They are again self-dual under S-duality and we will show that the Wilson loops in the $\mathbf{2}^{\otimes n}$ representations transform covariantly under S-duality. It is well-known that the $SU(2)$ theories with $N_f$ flavors enjoy a conjectured symmetry enhancement $U(1)_{\rm inst}\times SO(2N_f) \to E_{N_f+1}$ at the CFT locus. The S-duality is again a Weyl transformation in $E_{N_f+1}$ \cite{Mitev:2014jza}. We check this remarkable conjecture by showing that the Wilson loop VEVs on $S^1\times \mathbb{R}^4_{\epsilon_1,\epsilon_2}$ admit an expansion in $E_{N_f+1}$ characters.
Because of technical limitations we studied only the cases $N_f = 1,2,3,4$; however, we strongly believe that the Wilson loops in the remaining theories with $N_f =5,6,7$ have qualitatively identical properties.
In this section we provide the results for $N_f=1$ and $N_f=2$ flavors, while the theories with $N_f=3,4$ are discussed in Appendix \ref{app:Nf34} to shorten the presentation. Our results strongly support the general relation \eqref{StransfoNfgeneral} for the action of S-duality on Wilson loops in tensor product representations $\mathbf{2}^{\otimes n}$ at finite massive deformations.
\subsection{$N_f = 1$}
\label{ssec:Nf1}
We start by considering the $SU(2)$ gauge theory with one fundamental hypermultiplet. The brane web realizing the theory is shown in Figure \ref{SU2Nf1}-a. It is useful to see it as arising from the $U(2)_{-1/2}$ theory with $N_f=1$, by ungauging the diagonal $U(1)$. The index $-\frac 12$ indicates a Chern-Simons term at level $-\frac 12$ for the diagonal $U(1)$.\footnote{This `parent' $U(2)$ theory is also in principle the theory realized by the brane setup of Figure \ref{SU2Nf1}-a. However the diagonal $U(1)$ subgroup of the gauge group is massive, since there is only one Coulomb branch deformation of the brane web (i.e. preserving the positions of the exterior 5-branes), corresponding to the $SU(2)$ Coulomb parameter.}
This $U(2)$ theory is used to facilitate explicit half-index computations (see appendix \ref{app:ADHMcomputations}).
\begin{figure}[th]
\centering
\includegraphics[scale=0.4]{SU2Nf1.pdf}
\vspace{-1cm}
\caption{\footnotesize{a) Brane realization of the $SU(2)$ theory with $N_f=1$, with $a_1=-a_2=a$. b) Brane setup and ADHM quiver SQM for the $k$-instanton sector in the presence of an $n$ D3 branes SQM loop.}}
\label{SU2Nf1}
\end{figure}
The vertical positions of the internal D5 branes are $a_1, a_2$ for the Coulomb parameters, and the vertical position of the external D5 brane is $m_1$ for the mass parameter of the hypermultiplet.
The horizontal distance between the two NS5 branes is the effective gauge coupling $t_{\rm eff}$ of the abelian theory on a single D5. At a generic point on the Coulomb branch the adjoint real scalar is $\phi = \mathrm{diag}(a_1,a_2)$, with, say, $a_1 > a_2$, and the prepotential evaluates to \cite{Intriligator:1997pq}
\begin{equation}} \newcommand{\ee}{\end{equation}
\mathcal{F} = \frac t2 (a_1^2 + a_2^2) + \frac{1}{6} (a_1-a_2)^3 -\frac{1}{12} \big((m_1-a_1)^3 +(m_1-a_2)^3 \big) - \frac{1}{12} (a_1^3 + a_2^3) \,,
\ee
where we assumed $m_1 > a_i$ for $i=1,2$, as in the figure.
The effective coupling on a D5 brane is
\begin{equation}} \newcommand{\ee}{\end{equation}
t_{\rm eff} = \frac{\partial^2\mathcal{F}}{\partial a_1{}^2} = t + (a_1-a_2) -\frac{m_1}{2} \,.
\ee
We now impose the traceless condition $a_1 = -a_2 = a$ and define the fugacities
\begin{equation}} \newcommand{\ee}{\end{equation}
\alpha = e^a \,, \quad \mu_1 = e^{m_1} \,.
\ee
The half-index in the presence of a Wilson loop in the tensor product representation $\mathbf{2}^{\otimes n}$ is computed using the same technology as for the pure $SU(2)$ theory. We identify the Wilson loop VEVs with sectors of the SQM loop realized by the addition of $n$ D3 branes in the center of the brane web. This SQM loop $L_{\rm SQM}^{n}$ is described, as for the pure $SU(2)$ SYM theory, by a (0,4) SQM theory with $2n$ Fermi multiplets with flavor symmetry $SU(2)\times U(n)_f$ and the $SU(2)$ flavor gauged with 5d fields.
The SQM loop VEV $\vev{L^n_{\text{SQM}}}$ is computed with the modified ADHM quiver for the $k$-instanton sector shown in Figure \ref{SU2Nf1}-b, deduced from the brane setup with $n$ D3 branes and $k$ D1 branes. This ADHM quiver is not the same as in the pure $SU(2)$ theory: there are additional (0,4) Fermi multiplets from strings stretched between the D1s and the external D5, as well as superpotential terms identifying 1d and 5d flavor symmetries. Finally the Wilson loop in $\mathbf{2}^{\otimes n}$ is extracted by the residue computation (the same as \eqref{residueformula})
\begin{equation}} \newcommand{\ee}{\end{equation}
\vev{W_{\mathbf{2}^{\otimes n}}} = (-1)^n \oint_{\mathcal{C}} \prod_{i=1}^n \dfrac{dx_i}{2\pi i x_i} \vev{L^{n}_{\text{SQM}}}(x_1, \ldots, x_n) \,,
\ee
where $x_1,\cdots,x_n$ are the fugacities for the $U(n)_f$ SQM flavor symmetry and the contour $\mathcal{C}$ is chosen such that $|x_{i+1}| < (q_1q_2)^{\pm 1} |x_i|$, for $i=1,\cdots, n-1$.
\noindent We find for $n=1$,
\begin{equation} \begin{aligned}} \newcommand{\eea}{\end{aligned} \end{equation}
& \vev{L^{n=1}_{\text{SQM}}} = x_1 - \vev{W_{\mathbf{2}}} + x_1^{-1} , \\[5 pt]
& \vev{W_{\mathbf{2}}} = \alpha + \alpha^{-1} + Q q_1 q_2 \dfrac{\mu_1^{-1/2}(q_1^{1/2} q_2^{1/2} + q_1^{-1/2} q_2^{-1/2}) - \mu_1^{1/2}(\alpha + \alpha^{-1})}{(1 - \alpha^2 q_1 q_2)(1 - \alpha^{-2} q_1 q_2)} + O(Q^2) \,.
\eea
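The $n=1$ relation above makes the residue prescription concrete: at the classical level, $\vev{W_{\mathbf{2}}}$ is simply minus the $x_1$-independent term of $\vev{L^{n=1}_{\rm SQM}}$. As a minimal numerical illustration of this (our own sketch, with an arbitrary stand-in value for the VEV):

```python
import cmath

W2 = 2.345                       # arbitrary stand-in numerical value for <W_2>
L = lambda x: x - W2 + 1/x       # classical part of <L^{n=1}_SQM>

# -oint dx/(2 pi i x) L(x) over the unit circle, discretized on roots of unity
N = 2048
val = -sum(L(cmath.exp(2j*cmath.pi*k/N)) for k in range(N)) / N
print(abs(val - W2) < 1e-9)      # the contour integral returns <W_2>
```

For a Laurent polynomial the discretized sum over roots of unity is exact up to rounding, since the positive and negative powers average to zero on the unit circle.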
For $n=2$,
\begin{equation} \begin{aligned}} \newcommand{\eea}{\end{aligned} \end{equation}
& \vev{L^{n=2}_{\text{SQM}}} = x_1 x_2 + x_1^{-1} x_2^{-1} + x_1 x_2^{-1} + x_1^{-1} x_2
- (x_1 + x_2 + x_1^{-1} + x_2^{-1}) \vev{W_{\mathbf{2}}} + \vev{W_{\mathbf{2} \,\otimes \mathbf{2}}} \cr
& - Q \mu_1^{1/2} \dfrac{(1 - q_1)(1 - q_2)(1 + q_1 q_2) x_1 x_2}{(x_1 - q_1 q_2 x_2)(x_2 - q_1 q_2 x_1)}
+ Q q_1^{1/2} q_2^{1/2} \mu_1^{-1/2} \dfrac{(1 - q_1)(1 - q_2) x_1 x_2 (x_1 + x_2)}{(x_1 - q_1 q_2 x_2)(x_2 - q_1 q_2 x_1)} , \\[8 pt]
& \vev{W_{\mathbf{2} \,\otimes \mathbf{2}}} = \alpha^2 + 2 + \alpha^{-2}
+ Q \mu_1^{1/2} \dfrac{(1-q_1)(1-q_2)(1+q_1 q_2) - 2 q_1 q_2 (\alpha^2 + 2 + \alpha^{-2})}{(1-\alpha^2 q_1 q_2)(1-\alpha^{-2} q_1 q_2)} \cr
& - Q q_1^{1/2} q_2^{1/2} \mu_1^{-1/2} \dfrac{(\alpha + \alpha^{-1})(1 + q_1)(1 + q_2)}{(1-\alpha^2 q_1 q_2)(1-\alpha^{-2} q_1 q_2)} + O(Q^2) \,.
\eea
Acting with S-duality in the brane setup ($x^5 \leftrightarrow x^6$ reflection) we find that $2a$ is exchanged with $t_{\rm eff}= t + 2a - m_1/2$ and $m_1$ becomes $m_1-a + t_{\rm eff}/2 = 3m_1/4 +t/2$.
The S-symmetry is the Weyl transformation in the full $E_2 = SU(2) \times U(1)$ global symmetry (enhanced from $SO(2) \times U(1)$). To make this symmetry apparent, we define
\begin{equation}} \newcommand{\ee}{\end{equation}
A = e^{-a -\frac t4 + \frac{m_1}{8}} \,, \quad y = e^{\frac{t}{2}-\frac{m_1}{4}} \,, \quad v = e^{-\frac{t}{4}-\frac{7m_1}{8}} \,,
\ee
giving the map of fugacities
\begin{equation}} \newcommand{\ee}{\end{equation}
\alpha = A^{-1} y^{-1/2} \,, \quad \mu_1 = y^{-1/2}v^{-1} \,.
\ee
The parameter $A$ captures the Coulomb branch moduli, while $y$ and $v$ are fugacities for the $SU(2)$ and $U(1)$ global symmetries respectively.
S-duality corresponds to the action $y \to y^{-1}$, with $A$ and $v$ invariant.
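This statement can be verified directly. In the following sketch (ours), the transform of $t$ is obtained by solving the exchange $2a \leftrightarrow t_{\rm eff}$ quoted above, which gives $t \to -\frac{3t}{4} + \frac{7m_1}{8}$; the check confirms that $A$ and $v$ are invariant, that $y$ is inverted, and that the fugacity map reproduces $\alpha$ and $\mu_1$:

```python
import math

def fugacities(a, t, m1):
    # definitions from the text: A, y, v in terms of (a, t, m1)
    A = math.exp(-a - t/4 + m1/8)
    y = math.exp(t/2 - m1/4)
    v = math.exp(-t/4 - 7*m1/8)
    return A, y, v

def s_action(a, t, m1):
    # S sets 2a' = t_eff = t + 2a - m1/2 and m1' = 3m1/4 + t/2 (from the text);
    # t' = -3t/4 + 7m1/8 then follows from requiring t_eff' = 2a (our derivation)
    return t/2 + a - m1/4, -3*t/4 + 7*m1/8, 3*m1/4 + t/2

a, t, m1 = 0.31, 0.77, -0.53          # generic numerical test point
A, y, v = fugacities(a, t, m1)
Ap, yp, vp = fugacities(*s_action(a, t, m1))
print(abs(Ap - A) < 1e-12, abs(vp - v) < 1e-12, abs(yp - 1/y) < 1e-12)
# the fugacity map: alpha = A^{-1} y^{-1/2}, mu1 = y^{-1/2} v^{-1}
print(abs(math.exp(a) - 1/(A*math.sqrt(y))) < 1e-12,
      abs(math.exp(m1) - 1/(math.sqrt(y)*v)) < 1e-12)
```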
Expanding further the above results (at 3-instanton order) at small $A$, we find
\begin{equation} \begin{aligned}} \newcommand{\eea}{\end{aligned} \end{equation}
A y^{1/2} \vev{W_{\mathbf{2}}} & = 1 + \chi^{A_1}_{\mathbf{2}}(y) A^2 - \chi^{A_1}_{\mathbf{2}}(q_+) v A^3 + \chi^{A_1}_{\mathbf{3}}(q_+) A^4 \cr
& \ - \chi^{A_1}_{\mathbf{2}}(y) \chi^{A_1}_{\mathbf{4}}(q_+) v A^5
+ \chi^{A_1}_{\mathbf{2}}(y) \chi^{A_1}_{\mathbf{5}}(q_+) A^6 + \chi^{A_1}_{\mathbf{5}}(q_+) v^2 A^6 + \ldots , \cr
A^2 y \vev{W_{\mathbf{2} \,\otimes \mathbf{2}}} & = 1 + 2 \chi^{A_1}_{\mathbf{2}}(y) A^2 - \Big( \chi^{A_1}_{\mathbf{2}}(q_+) + \chi^{A_1}_{\mathbf{2}}(q_-) \Big) v A^3 \cr
& \ + \Big( \chi^{A_1}_{\mathbf{3}}(y) + \chi^{A_1}_{\mathbf{3}}(q_+) + \chi^{A_1}_{\mathbf{2}}(q_+) \chi^{A_1}_{\mathbf{2}}(q_-) \Big) A^4 \cr
& \ - \chi^{A_1}_{\mathbf{2}}(y) \chi^{A_1}_{\mathbf{3}}(q_+) \Big( \chi^{A_1}_{\mathbf{2}}(q_+) + \chi^{A_1}_{\mathbf{2}}(q_-) \Big) v A^5 \cr
& \ + \chi^{A_1}_{\mathbf{2}}(y) \Big( \chi^{A_1}_{\mathbf{5}}(q_+) + \chi^{A_1}_{\mathbf{3}}(q_+) + \chi^{A_1}_{\mathbf{4}}(q_+) \chi^{A_1}_{\mathbf{2}}(q_-) \Big) A^6 \cr
& \ + \Big( \chi^{A_1}_{\mathbf{5}}(q_+) + \chi^{A_1}_{\mathbf{4}}(q_+) \chi^{A_1}_{\mathbf{2}}(q_-) + 1 \Big) v^2 A^6 + \ldots \,.
\eea
The coefficients are expressed as characters of $SU(2)$ as in the previous section.
Here again the characters of the $SU(2)\subset E_2$ global symmetry arise only after multiplying the Wilson loops by a factor $(A^2 y)^{n/2}$. We deduce that under S-duality the Wilson loops transform covariantly, with the S action
\begin{equation}} \newcommand{\ee}{\end{equation}
S. W_{\mathbf{2}^{\otimes n}}(A,y,v) = y^{-n} \, W_{\mathbf{2}^{\otimes n}}(A,y^{-1},v) \,.
\label{StransfoNf1}
\ee
This is the same transformation as in the pure $SU(2)$ theory \eqref{StransfoNf0}, except that now the parameter $y$ is $y= e^{\frac{t}{2}-\frac{m_1}{4}}$.
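The covariance \eqref{StransfoNf1} is manifest once the coefficients are $SU(2)$ characters, since $\chi^{A_1}_{\mathbf{n}}(y^{-1}) = \chi^{A_1}_{\mathbf{n}}(y)$. A quick numerical check of this Weyl invariance (our own sketch), together with the decomposition $\chi_{\mathbf{2}}^2 = \chi_{\mathbf{3}} + 1$ underlying the classical part of $W_{\mathbf{2} \otimes \mathbf{2}}$:

```python
# SU(2) character of the n-dimensional irrep: chi_n(y) = (y^{n/2} - y^{-n/2}) / (y^{1/2} - y^{-1/2})
def chi(n, y):
    s = y**0.5
    return (s**n - s**(-n)) / (s - 1/s)

y = 1.37   # generic test value
# Weyl invariance chi_n(1/y) = chi_n(y) makes the y -> 1/y covariance manifest
print(all(abs(chi(n, 1/y) - chi(n, y)) < 1e-12 for n in range(2, 9)))
# tensor product decomposition behind the classical part of W_{2 x 2}: chi_2^2 = chi_3 + 1
print(abs(chi(2, y)**2 - (chi(3, y) + 1)) < 1e-12)
```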
\subsection{$N_f = 2$}
\label{ssec:Nf2}
The brane realization of the $SU(2)$ theory with $N_f=2$ fundamental hypermultiplets is shown in Figure \ref{SU2Nf2}.
We can regard the theory as arising from the $U(2)$ theory with $N_f=2$ (without Chern-Simons term), by ungauging the diagonal $U(1)$.
We denote $m_1,m_2$ the masses of the fundamental hypermultiplets.
\begin{figure}[th]
\centering
\includegraphics[scale=0.38]{SU2Nf2.pdf}
\vspace{-0.5cm}
\caption{\footnotesize{a) Brane realization of the $SU(2)$ theory with $N_f=2$ (with $a_1=-a_2=a$). b) Brane setup and ADHM quiver SQM for the $k$-instanton sector in the presence of an $n$ D3 branes SQM loop.}}
\label{SU2Nf2}
\end{figure}
The prepotential of the theory on the Coulomb branch, with parameter ranges $m_2 < a_1,a_2 < m_1$ (corresponding to the brane configuration of Figure \ref{SU2Nf2}), is
\begin{equation}} \newcommand{\ee}{\end{equation}
\mathcal{F} = \frac t2 (a_1^2 + a_2^2) + \frac{1}{6} (a_1-a_2)^3 -\frac{1}{12} \sum_{i=1,2}\big[ (m_1-a_i)^3 + (a_i-m_2)^3 \big] \,,
\ee
and the effective abelian coupling is
\begin{equation}} \newcommand{\ee}{\end{equation}
t_{\rm eff} = \frac{\partial^2\mathcal{F}}{\partial a_1{}^2} = t + (a_1-a_2) -\frac{m_1-m_2}{2} = t + 2a -\frac{m_1-m_2}{2} \,,
\ee
corresponding to the distance between the NS5 branes in the brane configuration. In the last equality we imposed the traceless condition $a_1 = -a_2 =a$. We define the fugacities
\begin{equation}} \newcommand{\ee}{\end{equation}
\alpha = e^a \,, \quad \mu_1 = e^{m_1} \,, \quad \mu_2 = e^{m_2} \,.
\ee
The Wilson loops $W_{\mathbf{2}^{\otimes n}}$ are evaluated from the residue formula \eqref{residueformula} from the SQM loop $L^{n}_{\rm SQM}$ defined as before, but with the modified $k$-instanton ADHM SQM shown in Figure \ref{SU2Nf2}-b.
We find for $n = 1$
\begin{equation} \begin{aligned}} \newcommand{\eea}{\end{aligned} \end{equation}
\vev{L^{n=1}_{\text{SQM}}} &= x_1 - \vev{W_{\mathbf{2}}} + x_1^{-1} \,, \\[5 pt]
\vev{W_{\mathbf{2}}} &= \alpha + \alpha^{-1} + Q q_1 q_2 \dfrac{(\mu_1^{1/2} \mu_2^{1/2} + \mu_1^{-1/2} \mu_2^{-1/2})(q_1^{1/2} q_2^{1/2} + q_1^{-1/2} q_2^{-1/2})}{(1 - \alpha^2 q_1 q_2)(1 - \alpha^{-2} q_1 q_2)} \\
& - Q q_1 q_2 \dfrac{(\mu_1^{1/2} \mu_2^{-1/2} + \mu_1^{-1/2} \mu_2^{1/2})(\alpha + \alpha^{-1})}{(1 - \alpha^2 q_1 q_2)(1 - \alpha^{-2} q_1 q_2)} + O(Q^2) \,,
\eea
while for $n = 2$
\begin{equation} \begin{aligned}} \newcommand{\eea}{\end{aligned} \end{equation}
\vev{L^{n=2}_{\text{SQM}}} &= x_1 x_2 + x_1^{-1} x_2^{-1} + x_1 x_2^{-1} + x_1^{-1} x_2
- (x_1 + x_2 + x_1^{-1} + x_2^{-1}) \vev{W_{\mathbf{2}}} + \vev{W_{\mathbf{2} \,\otimes \mathbf{2}}} \cr
& + Q \frac{(1 - q_1)(1 - q_2) }{\mu_1^{1/2}\mu_2^{1/2}}
\dfrac{ q_1^{1/2} q_2^{1/2} (x_1 x_2 + \mu_1 \mu_2) (x_1 + x_2) - x_1 x_2 (1 + q_1 q_2) (\mu_1 + \mu_2)}{(x_1 - q_1 q_2 x_2)(x_2 - q_1 q_2 x_1)} \,, \\[8 pt]
\vev{W_{\mathbf{2} \,\otimes \mathbf{2}}} &= \alpha^2 + 2 + \alpha^{-2}
+ Q \dfrac{\mu_1 + \mu_2}{\sqrt{\mu_1 \mu_2}} \dfrac{(1 - q_1)(1 - q_2)(1 + q_1 q_2) - 2 q_1 q_2 (\alpha^2 + 2 + \alpha^{-2})}{(1-\alpha^2 q_1 q_2)(1-\alpha^{-2} q_1 q_2)} \cr
& + Q \dfrac{1 + \mu_1 \mu_2}{\sqrt{\mu_1 \mu_2}} \dfrac{\sqrt{q_1 q_2}(1 + q_1)(1 + q_2) (\alpha + \alpha^{-1})}{(1-\alpha^2 q_1 q_2)(1-\alpha^{-2} q_1 q_2)} + O(Q^2) \,.
\eea
\medskip
\noindent S-duality, implemented by the $x^5 \leftrightarrow x^6$ reflection, acts on the parameters as follows:
\begin{equation} \begin{aligned}} \newcommand{\eea}{\end{aligned} \end{equation}
\underline{\text{S action}}: \quad & a \to \frac t2 + a - \frac{m_1-m_2}{4} \,, \quad t \to - \frac t2 + \frac 34 (m_1-m_2) \,, \cr
& m_1 \to \frac t2 +\frac{3m_1+m_2}{4} \,, \quad m_2 \to - \frac t2 +\frac{m_1+3m_2}{4} \,.
\eea
To make the $E_3 = SU(2) \times SU(3)$ global symmetries (enhanced from $SO(4) \times U(1)$) appear, we define the new set of fugacities
\begin{equation} \begin{aligned}} \newcommand{\eea}{\end{aligned} \end{equation}
& A = e^{-\frac{t}{3} - a} \,, \quad u = e^{-\frac{m_1+m_2}{2}} \,, \cr
& y_1 = e^{\frac{2t}{3}} \,, \quad y_2 = e^{-\frac{t}{3} + \frac{m_1-m_2}{2}} \,, \quad y_3 = e^{-\frac{t}{3} - \frac{m_1-m_2}{2}} \,,
\eea
satisfying $y_1 y_2 y_3 =1$. The $y_i$ are the $SU(3)$ fugacities and $u$ is the $SU(2)$ fugacity.
In terms of the new parameters, the S action is simply $y_1 \leftrightarrow y_2$ (with the other parameters invariant) and corresponds to a Weyl transformation in $SU(3)$. In particular it does not commute with the flavor symmetry $m_1 \leftrightarrow m_2$, which is the Weyl transformation $y_2 \leftrightarrow y_3$.
The full group of Weyl symmetries of $SU(2)\times SU(3)$ corresponds to the action $u \to u^{-1}$ for $SU(2)$ and the permutations of $y_1,y_2,y_3$ for $SU(3)$.
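These statements about the fugacity map can be verified directly; in the following numerical sketch (ours) the constraint $y_1 y_2 y_3 = 1$ holds identically, and the S action quoted above fixes $A$ and $u$ while exchanging $y_1 \leftrightarrow y_2$:

```python
import math

def fugacities(a, t, m1, m2):
    # definitions from the text
    A = math.exp(-t/3 - a)
    u = math.exp(-(m1 + m2)/2)
    y1 = math.exp(2*t/3)
    y2 = math.exp(-t/3 + (m1 - m2)/2)
    y3 = math.exp(-t/3 - (m1 - m2)/2)
    return A, u, y1, y2, y3

def s_action(a, t, m1, m2):
    # the S action on the gauge theory parameters quoted above
    return (t/2 + a - (m1 - m2)/4,
            -t/2 + 3*(m1 - m2)/4,
            t/2 + (3*m1 + m2)/4,
            -t/2 + (m1 + 3*m2)/4)

p = (0.21, 0.83, 0.45, -0.67)   # generic numerical test point
A, u, y1, y2, y3 = fugacities(*p)
Ap, up, y1p, y2p, y3p = fugacities(*s_action(*p))
print(abs(y1*y2*y3 - 1) < 1e-12)                      # SU(3) constraint
print(abs(Ap - A) < 1e-12 and abs(up - u) < 1e-12)    # A and u invariant
print(abs(y1p - y2) < 1e-12 and abs(y2p - y1) < 1e-12
      and abs(y3p - y3) < 1e-12)                      # y1 <-> y2, y3 fixed
```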
Expanding further the above results (at 3-instanton order) at small $A$, we find
\begin{equation} \begin{aligned}} \newcommand{\eea}{\end{aligned} \end{equation}
A y_1^{1/2} \vev{W_{\mathbf{2}}} & = 1 + \chi^{A_2}_{\mathbf{3}}(\vec{y}) A^2
- \chi^{A_1}_{\mathbf{2}}(q_+) \chi^{A_1}_{\mathbf{2}}(u) A^3
+ \chi^{A_1}_{\mathbf{3}}(q_+) \chi^{A_2}_{\overline{\mathbf{3}}}(\vec{y}) A^4 \cr
& \ \ - \chi^{A_1}_{\mathbf{4}}(q_+) \chi^{A_2}_{\mathbf{3}}(\vec{y}) \chi^{A_1}_{\mathbf{2}}(u) A^5 + \chi^{A_1}_{\mathbf{5}}(q_+) \chi^{A_1}_{\mathbf{3}}(u) A^6 + \chi^{A_1}_{\mathbf{5}}(q_+) \chi^{A_2}_{\mathbf{8}}(\vec{y}) A^6 \cr
& \ \ + \Big( \chi^{A_1}_{\mathbf{6}}(q_+) \chi^{A_1}_{\mathbf{2}}(q_-) + \chi^{A_1}_{\mathbf{5}}(q_+) + \chi^{A_1}_{\mathbf{3}}(q_+) \Big) A^6 + \ldots \,, \cr
A^2 y_1 \vev{W_{\mathbf{2} \,\otimes \mathbf{2}}} & =
1 + 2 \chi^{A_2}_{\mathbf{3}}(\vec{y}) A^2 - \Big( \chi^{A_1}_{\mathbf{2}}(q_+) + \chi^{A_1}_{\mathbf{2}}(q_-) \Big) \chi^{A_1}_{\mathbf{2}}(u) A^3 \cr
& \ \ + \chi^{A_2}_{\mathbf{6}}(\vec{y}) A^4
+ \Big( \chi^{A_1}_{\mathbf{3}}(q_+) + \chi^{A_1}_{\mathbf{2}}(q_+) \chi^{A_1}_{\mathbf{2}}(q_-) \Big) \chi^{A_2}_{\overline{\mathbf{3}}}(\vec{y}) A^4 \cr
& \ \ - \Big( \chi^{A_1}_{\mathbf{4}}(q_+) + \chi^{A_1}_{\mathbf{2}}(q_+) + \chi^{A_1}_{\mathbf{3}}(q_+) \chi^{A_1}_{\mathbf{2}}(q_-) \Big) \chi^{A_1}_{\mathbf{2}}(u) \chi^{A_2}_{\mathbf{3}}(\vec{y}) A^5 \cr
& \ \ + \Big( \chi^{A_1}_{\mathbf{5}}(q_+) + \chi^{A_1}_{\mathbf{3}}(q_+) + \chi^{A_1}_{\mathbf{4}}(q_+) \chi^{A_1}_{\mathbf{2}}(q_-) \Big) \chi^{A_2}_{\mathbf{8}}(\vec{y}) A^6 \cr
& \ \ + \Big( \chi^{A_1}_{\mathbf{5}}(q_+) + \chi^{A_1}_{\mathbf{4}}(q_+) \chi^{A_1}_{\mathbf{2}}(q_-) + 1 \Big) \chi^{A_1}_{\mathbf{3}}(u) A^6 + \ldots \,.
\eea
We find the expansions in characters of the $E_3$ global symmetry after multiplying the Wilson loops by a factor $(A^2 y_1)^{n/2}$. We deduce that under S-duality the Wilson loops transform covariantly, with the S action
\begin{equation}} \newcommand{\ee}{\end{equation}
S. W_{\mathbf{2}^{\otimes n}}(A,y_1,y_2,y_3, u) = \left(\frac{y_1}{y_2}\right)^{-\frac n2} \, W_{\mathbf{2}^{\otimes n}}(A,y_2,y_1,y_3,u) \,,
\label{StransfoNf2}
\ee
with the multiplicative parameter $(y_1/y_2)^{1/2} = e^{\frac t2 - \frac{m_1-m_2}{4}}$. Since $S$ does not commute with the flavor Weyl symmetry $F$ exchanging $m_1$ and $m_2$ ($y_2 \leftrightarrow y_3$), we can define a second S-duality action $S' = F^{-1}.S.F$, which implements $y_1 \leftrightarrow y_3$ and transforms the Wilson loop with the multiplicative parameter $(y_1/y_3)^{1/2} = e^{\frac t2 + \frac{m_1-m_2}{4}}$. The flavor symmetry transformation $F$ exchanges $S$ and $S'$.
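The fact that conjugating $S$ by the flavor Weyl transformation $F$ yields an action implementing $y_1 \leftrightarrow y_3$ is a simple exercise in composing transpositions; a minimal sketch (ours):

```python
# Weyl actions on (y1, y2, y3) as permutations of the index set {0, 1, 2};
# p maps position i to p[i], and compose(p, q) applies q first, then p
def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

S = (1, 0, 2)   # S-duality: y1 <-> y2
F = (0, 2, 1)   # flavor Weyl symmetry m1 <-> m2, i.e. y2 <-> y3
Sprime = compose(compose(F, S), F)   # F^{-1}.S.F, with F^{-1} = F
print(Sprime == (2, 1, 0))           # S' indeed implements y1 <-> y3
```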
\bigskip
We study similarly the $SU(2)$ theories with $N_f=3$ and $N_f=4$ flavors in Appendix \ref{app:Nf34}. We find again that the Wilson loop VEVs $\vev{ W_{\mathbf{2}^{\otimes n}}}$ are computed from the residue formula \eqref{residueformula}, with appropriate SQM loop $L^n_{\rm SQM}$ derived from the brane configurations with $n$ D3 branes. The results for $n=1,2$ are again consistent with the enhanced $E_{N_f+1}$ flavor symmetry at the CFT point.
Under S-duality we find that the Wilson loops $W_{\mathbf{2}^{\otimes n}}$ transform covariantly,
\begin{equation}} \newcommand{\ee}{\end{equation}
S. W_{\mathbf{2}^{\otimes n}}(\vec y) = Y^{-n} \, W_{\mathbf{2}^{\otimes n}}(\vec y') \,,
\label{StransfoNfgeneral}
\ee
with $\vec y$ the fugacities, $\vec y\,'$ their S-transform, and $Y = e^{\frac t2 + \frac 14 \sum_{k=1}^{N_f} (-1)^k m_k}$.\footnote{This is one S-duality transformation. Other S-duality transformations are obtained by conjugating $S$ by flavor Weyl symmetries (permutations of the $m_k$).} The parameter $Y$ can be understood as a charge-one background Wilson loop for a $U(1)$ subgroup of $E_{N_f+1}$.
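As a cross-check (ours) that this general form reduces to the previous cases, the parameter $Y$ reproduces $y$ for $N_f=1$ and $(y_1/y_2)^{1/2}$ for $N_f=2$:

```python
import math

t, m = 0.73, [0.41, -0.29]   # sample gauge coupling and masses

def Y(Nf):
    # Y = exp( t/2 + (1/4) sum_{k=1}^{Nf} (-1)^k m_k )
    return math.exp(t/2 + sum((-1)**k * m[k-1] for k in range(1, Nf + 1))/4)

# Nf = 1: Y = e^{t/2 - m1/4}, the parameter y of the Nf = 1 subsection
print(abs(Y(1) - math.exp(t/2 - m[0]/4)) < 1e-12)
# Nf = 2: Y = e^{t/2 - (m1 - m2)/4} = (y1/y2)^{1/2} of the Nf = 2 subsection
y1, y2 = math.exp(2*t/3), math.exp(-t/3 + (m[0] - m[1])/2)
print(abs(Y(2) - math.sqrt(y1/y2)) < 1e-12)
```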
This is our main result for $SU(2)$ theories with $N_f \le 4$ flavor hypermultiplets. We conjecture that this will hold for $N_f=5,6,7$ and $n\ge 3$ as well.
\section{$SU(3)$-$SU(2)^2$ dualities}
\label{sec:SU3dualities}
We now explore the action of S-duality in theories which are not self-dual. The lowest rank examples relate $SU(3)$ theories with flavor hypermultiplets to $SU(2)\times SU(2)$ quiver theories. They are part of a larger group of dualities relating $SU(N)^{M-1}$ quivers to $SU(M)^{N-1}$ quivers, proposed in \cite{Aharony:1997ju,Aharony:1997bh,Katz:1997eq} and studied e.g. in \cite{Bao:2011rc,Mitev:2014jza}. We will discuss two instances of such dualities and find how the Wilson loops of one theory are mapped to the Wilson loops of the dual theory.
\subsection{$SU(3)$ $N_f = 2$ and $SU(2)_{\pi} \times SU(2)_\pi$}
\label{ssec:SU3Nf2duality}
First we consider the $SU(3)$ theory with $N_f=2$ fundamental hypermultiplets. Its brane realization is shown in Figure \ref{SU3Nf2}-a. Acting with S-duality on the brane configuration we obtain the web diagram of Figure \ref{SU2xSU2}-a, which realizes the $SU(2)_{\pi} \times SU(2)_{\pi}$ quiver theory with one bifundamental hypermultiplet. The index $\pi$ indicates that the $SU(2)$ gauge nodes have a non-trivial theta angle.\footnote{In the previous sections the theta angle was always vanishing.} Indeed in five dimensions an $SU(2)$ gauge theory admits a $\mathbb{Z}_2$ valued deformation, parametrized by $\theta =0,\pi$, which affects the weight of instanton contributions in the path integral. We refer to \cite{Douglas:1996xp} for a more detailed discussion of the theta angle deformation and to \cite{Bergman:2013aca} for the determination of the theta angles from the brane configuration.
We will see that the exact computations of the half index with Wilson loop insertions support the S-duality map between loops that one can read from the brane picture. We start by computing the Wilson loop VEVs in the two dual theories from residues of SQM loops.
\subsubsection{$SU(3), N_f=2$ loops}
\label{sssec:SU3Nf2}
\begin{figure}[th]
\centering
\includegraphics[scale=0.3]{SU3Nf2.pdf}
\vspace{-0.5cm}
\caption{\footnotesize{a) Brane realization of the $SU(3)$ theory with $N_f=2$. b) Brane setup and ADHM quiver SQM for the $k$-instanton sector in the presence of an $n$ D3 branes SQM loop.}}
\label{SU3Nf2}
\end{figure}
To start with we would like to compute the VEVs of Wilson loops on $S^1\times \mathbb{R}^4_{\epsilon_1,\epsilon_2}$ in the $SU(3)$ theory. In particular, in analogy with the $SU(2)$ case, we will focus on Wilson loops in tensor product representations $\mathcal{R}_{n_1,n_2} = \mathbf{3}^{\otimes n_1} \otimes \overline{\mathbf{3}}{}^{\otimes n_2}$. Loops in other representations can be obtained as linear combinations of those.
The Wilson loop VEVs will arise from various residues of SQM loops realized with $n = n_1+n_2$ D3 branes placed in the central regions of the brane web. The associated (0,4) SQM has $3n$ Fermi multiplets transforming in the $(\mathbf{3} , \mathbf{n})$ of $SU(3)\times U(n)_f$, where $U(n)_f$ is the flavor symmetry of the SQM and $SU(3)$ is gauged with 5d fields.
\medskip
The string configurations contributing to a Wilson loop in the $\mathbf{3}$ are those with a D3 brane above the brane web and with a single string stretched from the D3 to any D5 segment. Upon moving the D3 brane towards the middle regions, taking into account Hanany-Witten effects, we reach configurations with a D3 brane placed between the top and middle D5s, with zero net number of strings attached. Such configurations are associated to states with a non-vanishing charge under the $U(1)_f$ flavor symmetry associated to the D3. Indeed the classical factor arising from the D5 and D3 actions with this positioning is $\sqrt{\frac{\alpha_1}{x}}\sqrt{\frac{x}{\alpha_2}}\sqrt{\frac{x}{\alpha_3}} = \sqrt{\frac{\alpha_1x}{\alpha_2\alpha_3}}$ (as discussed in Section \ref{ssec:Loops}), placing these configurations in the $U(1)_f$ sector of charge $\frac 12$ (the $x^{1/2}$ sector). Additional strings do not change the $U(1)_f$ charge since their net number on the D3 is zero. Similarly a Wilson loop in the $\overline{\mathbf{3}}$ representation is realized from configurations with a D3 below the brane web and with a single string stretched from the D3 to any D5 segment. After Hanany-Witten moves they become configurations with the D3 placed between the middle and bottom D5s and with zero net number of strings ending on it. They carry a classical contribution $\sqrt{\frac{\alpha_1}{x}}\sqrt{\frac{\alpha_2}{x}}\sqrt{\frac{x}{\alpha_3}} = \sqrt{\frac{\alpha_1\alpha_2}{\alpha_3x}}$ and correspond to the SQM sector of $U(1)_f$ charge $-\frac 12$.
Similarly, for a Wilson loop in $\mathcal{R}_{n_1,n_2}$ the string configurations contributing are those with $n_1$ D3s between the top and middle D5s and $n_2$ D3s between the middle and bottom D5s, and with zero net number of strings attached.
These configurations match the SQM sector of charge $(\underbrace{\frac 12, \cdots, \frac 12}_{n_1},\underbrace{-\frac 12, \cdots, -\frac 12}_{n_2})$ under $U(1)^{n_1} \times U(1)^{n_2} \subset U(n)$, with $n=n_1+n_2$. Here a charge $\frac 12$ or $-\frac 12$ is a flavor $U(1)$ charge associated to a single D3 brane in the upper or lower central region of the web.
We thus arrive at the following proposal for the residue relation between the SQM loop and the Wilson loops, isolating the relevant charge sector:
\begin{equation}} \newcommand{\ee}{\end{equation}
\vev{W_{\mathcal{R}_{n_1,n_2}}} = (-1)^{n}\oint_{\mathcal{C}} \prod_{i=1}^{n_1}\frac{dx_i}{2\pi i x_i^{3/2}} \prod_{j=n_1+1}^{n}\frac{dx_j}{2\pi i x_j^{1/2}} \, \vev{L_{\rm SQM}^{n}}(x) \,,
\label{residueformulaSU3}
\ee
where $n=n_1+n_2$ , $x_i$ are the $U(n)$ fugacities, and the contour $\mathcal{C}$ needs to be fixed to avoid spurious residues. As before, we will take $\mathcal{C}$ to be unit circles with residues at $x_i = (q_1q_2)^{\pm 1} x_j$, $i<j$, excluded. The sign in \eqref{residueformulaSU3} is fixed a posteriori from the explicit computations.
\medskip
The evaluation of the SQM loop VEV proceeds with the $k$-instanton ADHM quiver of Figure \ref{SU3Nf2}-b, derived from the brane picture. We start from
the computation for the $U(3)$ theory with $N_f = 2$ flavors and then impose the traceless condition $a_1 + a_2 + a_3 = 0$ on the Coulomb branch parameters.
We denote $m_1,m_2$ the flavor masses and work in the chamber $a_1 > a_2 > m_i > a_3$ as in the figure. We define $a_{12} = a_1 - a_2$, $a_{23} = a_2 - a_3$ and the fugacities
\begin{equation}} \newcommand{\ee}{\end{equation}
\alpha_{12} = e^{a_{12}} \,, \quad \alpha_{23} = e^{a_{23}} \,,\quad \mu_1 = e^{m_1} \,,\quad \mu_2 = e^{m_2} \,.
\ee
The formulas that we find in terms of these parameters are too long to be reported here (we provide some explicit results in terms of other variables below). Still we find the expected structure,
for $n=1,2$,
\begin{equation} \begin{aligned}} \newcommand{\eea}{\end{aligned} \end{equation}
\vev{L_{\text{SQM}}^{n = 1}} &= x_1^{3/2} - x_1^{1/2} \vev{W_{\mathbf{3}}} + x_1^{-1/2} \vev{W_{\overline{\mathbf{3}}}} - x_1^{-3/2} \,, \\[6 pt]
\vev{L_{\text{SQM}}^{n = 2}} & = x_1^{3/2} x_2^{3/2} + x_1^{-3/2} x_2^{-3/2} - x_1^{3/2} x_2^{-3/2} - x_1^{-3/2} x_2^{3/2} \cr
& - \Big( x_1^{3/2} x_2^{1/2} + x_1^{1/2} x_2^{3/2} - x_1^{-3/2} x_2^{1/2} - x_1^{1/2} x_2^{-3/2} \Big) \vev{W_{\mathbf{3}}} \cr
& - \Big( x_1^{-3/2} x_2^{-1/2} + x_1^{-1/2} x_2^{-3/2} - x_1^{3/2} x_2^{-1/2} - x_1^{-1/2} x_2^{3/2} \Big) \vev{W_{\overline{\mathbf{3}}}} \cr
& + x_1^{1/2} x_2^{1/2} \vev{W_{\mathbf{3} \, \otimes \mathbf{3}}}
+ x_1^{-1/2} x_2^{-1/2} \vev{W_{\overline{\mathbf{3}} \, \otimes \overline{\mathbf{3}}}}
- \Big( x_1^{1/2} x_2^{-1/2} + x_1^{-1/2} x_2^{1/2} \Big) \vev{W_{\mathbf{3} \, \otimes \overline{\mathbf{3}}}} \cr
& + Q \dfrac{(1-q_1)(1-q_2)\sqrt{x_1 x_2}}{\sqrt{\mu_1 \mu_2}}
\dfrac{\sqrt{q_1 q_2} (x_1 + x_2)(\mu_1 + \mu_2) - (1 + q_1 q_2)(x_1 x_2 + \mu_1 \mu_2)}{(x_1 - q_1 q_2 x_2)(x_2 - q_1 q_2 x_1)} \,.
\label{ZSQMSU3}
\eea
The appearance of the Wilson loop VEVs in \eqref{ZSQMSU3}, with the correct classical part (zero-instanton sector), confirms the residue formula \eqref{residueformulaSU3}.
Here again we see spurious terms at the one-instanton level in $\vev{L_{\rm SQM}^{n=2}}$ (last line in \eqref{ZSQMSU3}), whose poles are avoided by the contour prescription in \eqref{residueformulaSU3}.
\medskip
In order to check S-duality we introduce a new set of variables corresponding to (exponentiated) distances between D5 branes ($Q_{F_i}$) and between NS5 branes ($Q_{B_i}$),\footnote{The distances between NS5 branes are, in this case, the lengths of D5 segments and correspond to the effective abelian couplings on the Coulomb branch. They can be computed as the second derivative of the prepotential as in previous sections. Here $\mathcal{F} = \frac t2 \sum_{i} a_i^2 + \frac{1}{6} \sum_{i<j}|a_i-a_j|^3 -\frac{1}{12} \sum_{i}\big[ |a_i-m_1|^3 + |a_i-m_2|^3 \big]$.}
\begin{equation}} \newcommand{\ee}{\end{equation}
\begin{split}
& \hspace{2.4 cm} Q_{F_1} = e^{-a_{12}} = \alpha_{12}^{-1} \,,\quad Q_{F_2} = e^{-a_{23}} = \alpha_{23}^{-1} \,, \quad Q_m = e^{\frac{m_1-m_2}{2}} = \sqrt{\frac{\mu_1}{\mu_2}} \,, \\
& Q_{B_1} = e^{- t - \frac{4}{3}a_{12} - \frac{2}{3}a_{23} - \frac{m_1+m_2}{2}} = \frac{Q}{\alpha_{12}^{4/3} \alpha_{23}^{2/3} \sqrt{\mu_1 \mu_2}} \,, \quad
Q_{B_2} = e^{- t - \frac{2}{3}a_{12} - \frac{4}{3}a_{23} + \frac{m_1+m_2}{2}} = \frac{Q \sqrt{\mu_1 \mu_2}}{\alpha_{12}^{2/3} \alpha_{23}^{4/3}} \,.
\end{split}
\ee
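A quick numerical consistency check of these definitions (our own observation): the mass dependence cancels in the product of the two NS5 distances, $Q_{B_1}Q_{B_2} = (Q\,Q_{F_1}Q_{F_2})^2$, while it survives in their ratio:

```python
import math

# sample point in the chamber a12, a23 > 0
a12, a23, m1, m2, t = 0.9, 0.7, 0.3, -0.4, 1.1
Q = math.exp(-t)
QF1, QF2 = math.exp(-a12), math.exp(-a23)
mu1, mu2 = math.exp(m1), math.exp(m2)
QB1 = Q * math.exp(-4*a12/3 - 2*a23/3) / math.sqrt(mu1*mu2)
QB2 = Q * math.sqrt(mu1*mu2) * math.exp(-2*a12/3 - 4*a23/3)

# the mass dependence cancels in the product of the two NS5 distances
print(abs(QB1*QB2 - (Q*QF1*QF2)**2) < 1e-12)
# while it survives in the ratio: QB1/QB2 = (QF1/QF2)^{2/3} / (mu1 mu2)
print(abs(QB1/QB2 - (QF1/QF2)**(2/3)/(mu1*mu2)) < 1e-12)
```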
S-duality exchanges D5 and NS5 branes in the brane web, therefore it maps $Q_B$ parameters of the $SU(3)$ theory to $Q_F$ parameters of the $SU(2)^2$ theory and vice versa. To compare the VEVs we will need a double expansion in the $Q_B$ and $Q_F$ parameters. We thus express the Wilson loop VEVs in terms of the new parameters and expand further in small $Q_F$.
We show here the results at order two in $Q_F,Q_B$, and at order three in Appendix \ref{sapp:SU3Nf2},
\begin{equation} \begin{aligned}} \newcommand{\eea}{\end{aligned} \end{equation}
Q_{F_1}^{2/3} Q_{F_2}^{1/3} \vev{W_{\mathbf{3}}} & = 1 + Q_{F_1} + Q_{B_1} + Q_{F_1} Q_{F_2} + Q_{F_1} Q_{B_2} + \chi^{A_1}_{\mathbf{3}}(q_+) Q_{F_1} Q_{B_1} \cr
& - \chi^{A_1}_{\mathbf{2}}(q_+) \chi^{A_1}_{\mathbf{2}}(Q_m) Q_{F_1} \sqrt{Q_{B_1}Q_{B_2}} + \ldots \,, \cr
Q_{F_1}^{4/3} Q_{F_2}^{2/3} \vev{W_{\mathbf{3} \, \otimes \mathbf{3}}} & =
1 + 2(Q_{F_1} + Q_{B_1}) + Q_{F_1}^2 + Q_{B_1}^2 + 2 Q_{F_1} Q_{F_2} + 2 Q_{F_1} Q_{B_2} \cr
& + \Big( \chi^{A_1}_{\mathbf{3}}(q_+) + \chi^{A_1}_{\mathbf{2}}(q_+) \chi^{A_1}_{\mathbf{2}}(q_-) + 1 \Big) Q_{F_1} Q_{B_1} \cr
& - \Big( \chi^{A_1}_{\mathbf{2}}(q_+) + \chi^{A_1}_{\mathbf{2}}(q_-) \Big) \chi^{A_1}_{\mathbf{2}}(Q_m) Q_{F_1} \sqrt{Q_{B_1} Q_{B_2}} + \ldots \,, \cr
Q_{F_1} Q_{F_2} \vev{W_{\mathbf{3} \, \otimes \overline{\mathbf{3}}}} & =
1 + Q_{F_1} + Q_{F_2} + Q_{B_1} + Q_{B_2} + 3 Q_{F_1} Q_{F_2} \cr
&+ 2 Q_{F_1} Q_{B_2} + 2 Q_{F_2} Q_{B_1} + Q_{B_1} Q_{B_2}
+ \chi^{A_1}_{\mathbf{3}}(q_+) (Q_{F_1} Q_{B_1} + Q_{F_2} Q_{B_2}) \cr
& - \chi^{A_1}_{\mathbf{2}}(q_+) \chi^{A_1}_{\mathbf{2}}(Q_m) (Q_{F_1} + Q_{F_2})\sqrt{Q_{B_1} Q_{B_2}} + \ldots \,.
\eea
The VEV of the Wilson loop $\vev{W_{\overline{\mathcal{R}_{n_1,n_2}}}} := \vev{W_{\mathcal{R}_{n_2,n_1}}} $ is obtained from $\vev{W_{\mathcal{R}_{n_1,n_2}}}$ by exchanging $Q_{F_1} \leftrightarrow Q_{F_2}$, $Q_{B_1} \leftrightarrow Q_{B_2}$ and inverting $Q_m \to (Q_m)^{-1}$ (reflection about the $x^5$ axis in the brane picture).
We have multiplied the VEVs by appropriate factors $Q_{F_1}^{\frac{2n_1 + n_2}{3}}Q_{F_2}^{\frac{n_1 + 2n_2}{3}}$ to facilitate the comparison under S-duality. This normalization always corresponds to having expansions starting with a term $1$, which indicates that they are normalized indices counting some BPS states. It would be interesting to understand in detail what these states are in future work.
\subsubsection{$SU(2)_{\pi} \times SU(2)_\pi$ loops}
\label{sssec:SU2xSU2}
In the $SU(2)_{\pi}\times SU(2)_{\pi}$ theory we consider Wilson loops in the tensor product representations $\widetilde\mathcal{R}_{n_1,n_2} = (\mathbf{2}^{\otimes n_1} , \mathbf{2}^{\otimes n_2})$. Again other Wilson loops can be obtained as linear combinations of those.
These Wilson loops are related to the natural SQM loop that is engineered with $n_1$ D3 branes in the right-central region (between the middle and right NS5 segments) and $n_2$ D3 branes in the left-central region (between the left and middle NS5 segments), as shown in Figure \ref{SU2xSU2}-b. This SQM loop corresponds to a (0,4) SQM theory with $2n_1+2n_2$ Fermi multiplets transforming in the $(\mathbf{2},\mathbf{1},\mathbf{n_1}, \mathbf{1}) \oplus (\mathbf{1},\mathbf{2}, \mathbf{1} ,\mathbf{n_2})$ of $SU(2)\times SU(2)\times U(n_1)_{f1}\times U(n_2)_{f2}$, with $U(n_1)_{f1}\times U(n_2)_{f2}$ the flavor symmetries and $SU(2)\times SU(2)$ gauged with 5d fields (this is the SQM theory in Figure \ref{SU2xSU2}-b when $k_1=k_2=0$).
\begin{figure}[th]
\centering
\includegraphics[scale=0.35]{SU2xSU2.pdf}
\vspace{-0.5cm}
\caption{\footnotesize{a) Brane realization of the $SU(2)_{\pi} \times SU(2)_{\pi}$ theory. b) Brane setup and ADHM quiver SQM for the $(k_1,k_2)$-instanton sector in the presence of an $(n_1,n_2)$ D3 branes SQM loop.}}
\label{SU2xSU2}
\end{figure}
\medskip
Following the usual heuristic argument, we say that the string configurations contributing to the Wilson loop VEV $\vev{W_{\widetilde\mathcal{R}_{n_1,n_2}}}$ are those with $n_1$ D3s in the right-central region, $n_2$ D3s in the left-central region, and with zero net number of strings attached. These contributions are extracted from the SQM loop VEV by selecting the $U(1)^{n_1}\times U(1)^{n_2} \subset U(n_1)_{f1}\times U(n_2)_{f2}$ neutral sector, namely by performing the residue computation
\begin{equation}} \newcommand{\ee}{\end{equation}
\vev{W_{\widetilde\mathcal{R}_{n_1,n_2}}} = (-1)^{n_1+n_2} \oint_{\mathcal{C}} \prod_{i=1}^{n_1}\frac{dx_i}{2\pi i x_i} \prod_{j=1}^{n_2}\frac{dz_j}{2\pi i z_j} \, \vev{L_{\rm SQM}^{(n_1,n_2)}}(x,z) \,,
\label{residueformulaSU2xSU2}
\ee
where $x_i$ and $z_j$ are the $U(n_1)_{f1}$ and $U(n_2)_{f2}$ fugacities, respectively, and the contour $\mathcal{C}$ is chosen as unit circles with residues at $x_i = (q_1q_2)^{\pm 1}x_j$ and $z_i = (q_1q_2)^{\pm 1}z_j$ excluded.
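When the SQM loop VEV is, order by order in the instanton fugacities, a Laurent series in the flavor fugacities with all poles excluded from the contour, each $\oint dx_i/(2\pi i\, x_i)$ simply projects onto the $x_i^0$ term. A toy sketch of this neutral-sector projection (the Laurent polynomial below is made up for illustration and is not an actual SQM loop VEV):

```python
import sympy as sp

x, z = sp.symbols('x z')

def neutral_part(expr, var):
    # oint dvar/(2*pi*i*var) around var = 0 picks the var^0 term:
    # it equals the residue of expr/var at var = 0
    return sp.residue(sp.expand(expr) / var, var, 0)

# Made-up stand-in for a SQM loop VEV, Laurent in the flavor fugacities x, z
L = 1 + 3*(x + 1/x) + 2*(z + 1/z) + 5*(x/z + z/x)

W = neutral_part(neutral_part(L, x), z)
print(W)  # -> 1: only the flavor-neutral configurations survive
```

In the actual computation the excluded poles at $x_i=(q_1q_2)^{\pm1}x_j$ make the contour choice non-trivial, but the projection onto the flavor-neutral sector works in the same way.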
\smallskip
The computation of $\vev{L_{\rm SQM}^{(n_1,n_2)}}$ is performed using the $(k_1,k_2)$-instanton ADHM quiver of Figure \ref{SU2xSU2}-b, read from the brane setup with $k_1+k_2$ D1 segments.
In the presence of a non-zero theta angle for the $SU(2)$ gauge factors the computation of the half-index must be modified. We follow the prescription of \cite{Bergman:2013aca}, appendix A (see also Appendix \ref{app:ADHMcomputations}).
We start from the $U(2) \times U(2)$ theory (without Chern-Simons terms) with Coulomb parameters $a_{ij}$, $i=1,2$, $j=1,2$, and impose the trace condition $a_{11} + a_{12} = - ( a_{21} + a_{22} ) = m_{\rm bif}$, with $m_{\rm bif}$ the mass of the bifundamental hypermultiplet. We then define the $SU(2)\times SU(2)$ Coulomb parameters $\widetilde a_1=\frac 12 (a_{11} - a_{12})$, $\widetilde a_2=\frac 12 (a_{21} - a_{22})$ and the fugacities\footnote{To be precise, the $a_{ij}$ parameters correspond to the $x^6$ positions of the D5 segments in the brane picture. They are related to the $a^{(I)}_j$ of Appendix \ref{app:ADHMcomputations} as $a_{1j} = a^{(1)}_j + m_{\rm bif}/2$ and $a_{2j} = a^{(2)}_j - m_{\rm bif}/2$.}
\begin{equation}} \newcommand{\ee}{\end{equation}
\widetilde\alpha_1 = e^{\widetilde a_1} \,, \quad \widetilde\alpha_2 = e^{\widetilde a_2} \,, \quad \widetilde\mu = e^{m_{\rm bif}} \,.
\ee
Here again the formulas are too long to be reported in terms of the gauge theory parameters. The results that we find from the residue formula \eqref{residueformulaSU2xSU2} reproduce the known classical parts of the Wilson loop VEVs.
\medskip
To compare with the dual $SU(3)$ Wilson loops we introduce the new set of variables $\widetilde Q_{F_i}, \widetilde Q_{B_j}$, corresponding to (exponentiated) distances between D5 segments and between NS5 segments, respectively:
\begin{equation}} \newcommand{\ee}{\end{equation}
\begin{split}
\widetilde{Q}_{F_1} = e^{-2 \widetilde a_1} = \widetilde\alpha_1^{-2} \,, \quad \widetilde{Q}_{F_2} = e^{-2 \widetilde a_2} = \widetilde \alpha_2^{-2} \,, \quad \widetilde{Q}_m = e^{\widetilde m} = \widetilde\mu \,, \\
\widetilde{Q}_{B_1} = e^{- \widetilde t_1 -2\widetilde a_1 + \widetilde a_2} = \widetilde Q_1 \widetilde \alpha_1^{-2} \widetilde \alpha_2 \,, \quad \widetilde{Q}_{B_2} = e^{- \widetilde t_2 + \widetilde a_1 - 2 \widetilde a_2} = \widetilde Q_2 \widetilde \alpha_1 \widetilde \alpha_2^{-2} \,.
\end{split}
\ee
We then express the results in terms of a double expansion in $\widetilde Q_{F_i}, \widetilde Q_{B_j}$. We show here the expansions up to order two, and in Appendix \ref{sapp:SU2xSU2} up to order three, with appropriate multiplicative factors $\widetilde Q_{F_1}^{n_1/2}\widetilde Q_{F_2}^{n_2/2}$,
\begin{equation} \begin{aligned}} \newcommand{\eea}{\end{aligned} \end{equation}
\widetilde{Q}_{F_1}^{1/2} \vev{W_{(\mathbf{2},\mathbf{1})}} &= 1 + \widetilde{Q}_{B_1} + \widetilde{Q}_{F_1} + \widetilde{Q}_{B_1} \widetilde{Q}_{B_2} + \widetilde{Q}_{B_1} \widetilde{Q}_{F_2} + \chi^{A_1}_{\mathbf{3}}(q_+) \widetilde{Q}_{B_1} \widetilde{Q}_{F_1} \cr
& - \chi^{A_1}_{\mathbf{2}}(q_+) \chi^{A_1}_{\mathbf{2}}(\widetilde{Q}_m) \widetilde{Q}_{B_1} \sqrt{\widetilde{Q}_{F_1}\widetilde{Q}_{F_2}} + \ldots \,, \cr
\widetilde{Q}_{F_1} \vev{W_{(\mathbf{2} \, \otimes \mathbf{2},\mathbf{1})}} &=
1 + 2(\widetilde{Q}_{B_1} + \widetilde{Q}_{F_1}) + \widetilde{Q}_{B_1}^2 + \widetilde{Q}_{F_1}^2 + 2 \widetilde{Q}_{B_1} \widetilde{Q}_{B_2} + 2 \widetilde{Q}_{B_1} \widetilde{Q}_{F_2} \cr
& + \Big( \chi^{A_1}_{\mathbf{3}}(q_+) + \chi^{A_1}_{\mathbf{2}}(q_+) \chi^{A_1}_{\mathbf{2}}(q_-) + 1 \Big) \widetilde{Q}_{B_1} \widetilde{Q}_{F_1} \cr
& - \Big( \chi^{A_1}_{\mathbf{2}}(q_+) + \chi^{A_1}_{\mathbf{2}}(q_-) \Big) \chi^{A_1}_{\mathbf{2}}(\widetilde{Q}_m) \widetilde{Q}_{B_1} \sqrt{\widetilde{Q}_{F_1} \widetilde{Q}_{F_2}} + \ldots \,, \cr
\widetilde{Q}_{F_1}^{1/2} \widetilde{Q}_{F_2}^{1/2} \vev{W_{(\mathbf{2}, \mathbf{2})}} & =
1 + \widetilde{Q}_{B_1} + \widetilde{Q}_{B_2} + \widetilde{Q}_{F_1} + \widetilde{Q}_{F_2} + 3 \widetilde{Q}_{B_1} \widetilde{Q}_{B_2} \cr
& + 2 \widetilde{Q}_{B_1} \widetilde{Q}_{F_2} + 2 \widetilde{Q}_{B_2} \widetilde{Q}_{F_1} + \widetilde{Q}_{F_1} \widetilde{Q}_{F_2}
+ \chi^{A_1}_{\mathbf{3}}(q_+) (\widetilde{Q}_{B_1} \widetilde{Q}_{F_1} + \widetilde{Q}_{B_2} \widetilde{Q}_{F_2}) \cr
& - \chi^{A_1}_{\mathbf{2}}(q_+) \chi^{A_1}_{\mathbf{2}}(\widetilde{Q}_m) (\widetilde{Q}_{B_1} + \widetilde{Q}_{B_2})\sqrt{\widetilde{Q}_{F_1} \widetilde{Q}_{F_2}} + \ldots \,.
\eea
The Wilson loops $\vev{W_{\widetilde\mathcal{R}_{n_2,n_1}}}$ are obtained from $\vev{W_{\widetilde\mathcal{R}_{n_1,n_2}}}$ by the exchange $\widetilde Q_{F_1} \leftrightarrow \widetilde Q_{F_2}$, $\widetilde Q_{B_1} \leftrightarrow \widetilde Q_{B_2}$ and the inversion $\widetilde Q_m \to (\widetilde Q_m)^{-1}$, corresponding to a reflection about the $x^6$ axis in the brane picture.
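Here and throughout, $\chi^{A_1}_{\mathbf{n}}$ denotes the character of the $n$-dimensional $SU(2)$ irreducible representation, $\chi^{A_1}_{\mathbf{n}}(x)=\sum_{k=0}^{n-1}x^{n-1-2k}$. The tensor-product identities implicitly used when collecting the expansion coefficients into characters can be checked symbolically; a minimal sketch:

```python
import sympy as sp

x = sp.symbols('x')

def chi(n):
    # character of the n-dimensional SU(2) irrep in the fugacity x
    return sum(x**(n - 1 - 2*k) for k in range(n))

# 2 (x) 2 = 3 (+) 1  and  2 (x) 3 = 4 (+) 2, at the level of characters
assert sp.expand(chi(2)**2 - (chi(3) + 1)) == 0
assert sp.expand(chi(2)*chi(3) - (chi(4) + chi(2))) == 0
```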
\subsubsection{S-duality}
\label{sssec:SdualitySU3Nf2}
We are now ready to compare Wilson loops across S-duality and find the exact map.
The map of parameters is simply the exchange of the $Q_{F_i},Q_{B_j}$ with the $\widetilde Q_{B_i},\widetilde Q_{F_j}$:
\begin{equation}} \newcommand{\ee}{\end{equation}
\underline{\text{S-duality map}}: \quad (Q_{F_i},Q_{B_j},Q_m) \leftrightarrow (\widetilde Q_{B_i}, \widetilde Q_{F_j}, \widetilde Q_m) \,.
\ee
From the brane realization of the loops we can already predict the map up to multiplicative factors. The Wilson loops realized with $n_1$ and $n_2$ D3 branes in the two central regions of the brane web, with zero net number of strings attached, are related across S-duality. We thus expect the duality to map the $SU(3)$ loop $W_{\mathcal{R}_{n_1,n_2}}$ to the $SU(2)\times SU(2)$ loop $W_{\widetilde\mathcal{R}_{n_1,n_2}}$ (we chose the notations purposefully). From the low $n_1,n_2$ exact computations above we find the exact relation
\begin{equation}} \newcommand{\ee}{\end{equation}
Q_{F_1}^{\frac{2n_1 + n_2}{3}}Q_{F_2}^{\frac{n_1 + 2n_2}{3}}\vev{W_{\mathcal{R}_{n_1,n_2}}} = \widetilde Q_{F_1}^{\frac{n_1}{2}}\widetilde Q_{F_2}^{\frac{n_2}{2}} \vev{W_{\widetilde\mathcal{R}_{n_1,n_2}}} \,,
\ee
which, expressed in terms of gauge theory parameters, yields
\begin{equation}} \newcommand{\ee}{\end{equation}
\vev{W_{\mathcal{R}_{n_1,n_2}}} = Y_1^{-n_1} Y_2^{-n_2} \vev{W_{\widetilde\mathcal{R}_{n_1,n_2}}} \,,
\ee
with $Y_1 = e^{-\frac{2\widetilde t_1+ \widetilde t_2}{3}} = e^{\frac t2 + \frac{m_1+m_2}{4}}$ and $Y_2= e^{-\frac{\widetilde t_1+ 2\widetilde t_2}{3}}= e^{\frac t2 - \frac{m_1+m_2}{4}}$. Therefore the S-duality action can be expressed as
\begin{equation} \begin{aligned}} \newcommand{\eea}{\end{aligned} \end{equation}
& S.W_{\mathcal{R}_{n_1,n_2}} = Y_1^{-n_1} Y_2^{-n_2} W_{\widetilde\mathcal{R}_{n_1,n_2}} \,, \cr
& S.W_{\widetilde \mathcal{R}_{n_1,n_2}} = Y_1^{n_1} Y_2^{n_2} W_{\mathcal{R}_{n_1,n_2}} \,.
\label{SmapSU3Nf2}
\eea
The parameters $Y_1,Y_2$ can be understood as background Wilson loops of charge one for $U(1)$ subgroups of the global symmetry. For instance $Y_1$ is a charge one Wilson loop in $U(1)_{\rm diag} \subset U(1)_{\rm inst}\times U(2)_{\rm flavor}$ in the $SU(3)$ theory.
The fact that explicit computations are in agreement with the above simple formula is remarkable and provides a strong validation of the procedure we devised for extracting the Wilson loop VEVs.
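The prefactor relation can also be verified directly at the level of log-fugacities, using the S-duality exchange $Q_{F_i}\leftrightarrow\widetilde Q_{B_i}$ together with the definitions $\widetilde Q_{B_1}=e^{-\widetilde t_1-2\widetilde a_1+\widetilde a_2}$, $\widetilde Q_{B_2}=e^{-\widetilde t_2+\widetilde a_1-2\widetilde a_2}$ and $\widetilde Q_{F_i}=e^{-2\widetilde a_i}$; a sketch of the check (variable names are ours):

```python
import sympy as sp

t1, t2, a1, a2, n1, n2 = sp.symbols('t1 t2 a1 a2 n1 n2')

# log of Q_{F1}^{(2n1+n2)/3} Q_{F2}^{(n1+2n2)/3} after Q_{Fi} -> tilde Q_{Bi}
lhs = sp.Rational(1, 3)*(2*n1 + n2)*(-t1 - 2*a1 + a2) \
    + sp.Rational(1, 3)*(n1 + 2*n2)*(-t2 + a1 - 2*a2)

# log of Y1^{n1} Y2^{n2} tilde Q_{F1}^{n1/2} tilde Q_{F2}^{n2/2}
rhs = n1*(-(2*t1 + t2)/3) + n2*(-(t1 + 2*t2)/3) - n1*a1 - n2*a2

# The two normalizations differ exactly by the background loop factor Y1^n1 Y2^n2
assert sp.expand(lhs - rhs) == 0
```

The Coulomb parameters $\widetilde a_i$ drop out, leaving precisely $n_1\log Y_1 + n_2\log Y_2$ on both sides.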
\medskip
Importantly, we focused on Wilson loops in the tensor product of (anti)fundamental representations $\mathcal{R}_{n_1,n_2}, \widetilde\mathcal{R}_{n_1,n_2}$. From these results one can deduce the S-duality map involving any chosen representation; however, the map will be more complicated, in the sense that a given $SU(3)$ Wilson loop in representation $\mathcal{R}$ will be mapped to a linear combination of $SU(2)\times SU(2)$ Wilson loops and vice versa.
\subsection{$SU(3)$ $N_f = 6$ and $SU(2)\times SU(2)$ $N_f=2+2$}
\label{ssec:SU3Nf6duality}
As a second example we consider the $SU(3)$ theory with $N_f=6$ fundamental hypermultiplets (without Chern-Simons term). Its brane realization is shown in Figure \ref{SU3Nf6}-a. The S-dual brane configuration is that of Figure \ref{SU2xSU2Nf4}-a, which realizes the quiver theory $SU(2) \times SU(2)$, with two fundamental hypermultiplets in each gauge node. We will call it the $SU(2)^2_{N_f=2+2}$ theory.
\subsubsection{$SU(3)$ $N_f = 6$}
\label{sssec:SU3Nf6}
\begin{figure}[th]
\centering
\includegraphics[scale=0.3]{SU3Nf6.pdf}
\vspace{-0.5cm}
\caption{\footnotesize{a) Brane realization of the $SU(3)$ theory with $N_f=6$. b) Brane setup ADHM quiver SQM for the $k$-instanton sector in the presence of an $n$ D3 branes SQM loop.}}
\label{SU3Nf6}
\end{figure}
We first compute the VEVs of Wilson loops on $S^1\times \mathbb{R}^4_{\epsilon_1,\epsilon_2}$ in the $SU(3)$ theory, and we focus on Wilson loops in tensor product representations $\mathcal{R}_{n_1,n_2} = \mathbf{3}^{\otimes n_1} \otimes \overline{\mathbf{3}}{}^{\otimes n_2}$.
The computation is essentially the same as for the $SU(3)$ $N_f=2$ theory.
The Wilson loop VEVs will arise from residues of the SQM loops realized with $n = n_1+n_2$ D3 branes placed in the central regions of the brane web. The SQM loop 1d theory is the same as for the $SU(3)$ $N_f=2$ theory, but the $k$-instanton ADHM quiver is modified. It is given by the $(0,4)$ quiver of Figure \ref{SU3Nf6}-b.
The relation between the SQM loop and the Wilson loops is still given by \eqref{residueformulaSU3}.
\medskip
We start from the computation for the $U(3)$ theory with $N_f = 6$ flavors and then impose the traceless condition $a_1 + a_2 + a_3 = 0$ on the Coulomb branch parameters.
We denote by $m_i$, $i=1,\cdots,6$, the flavor masses and work in the chamber $m_1 > a_1 > (m_2, m_3) > a_2 > (m_4,m_5) > a_3 > m_6$ as depicted in the figure. We define $a_{12} = a_1 - a_2$, $a_{23} = a_2 - a_3$ and the fugacities
\begin{equation}} \newcommand{\ee}{\end{equation}
\alpha_{12} = e^{a_{12}} \,, \quad \alpha_{23} = e^{a_{23}} \,,\quad Q = e^{-t} \,, \quad \mu_i = e^{m_i} \,.
\ee
The formulas that we find in terms of these parameters are again too long to be reported here.
To conveniently check S-duality, we express the results in terms of the new variables $A_1,A_2,w,z$ and $y_i$, satisfying $\prod_{i=1}^6 y_i =1$, defined as
\begin{equation} \begin{aligned}} \newcommand{\eea}{\end{aligned} \end{equation}
\alpha_{12} = \frac{1}{A_1 w} \,,\quad \alpha_{23} = \dfrac{1}{A_2 z} \,,\quad
Q = \frac{1}{w z} \,,\quad \mu_i = \left( \frac{w}{z} \right)^{1/3} y_i \quad (i = 1, \ldots, 6) \,.
\eea
It is believed that the global symmetry group at the SCFT point is enhanced from $U(6)_{\rm flavor}\times U(1)_{\rm inst}$ to $SU(2)\times SU(2) \times SU(6)$ \cite{Bergman:2013aca,Mitev:2014jza} (see also \cite{Tachikawa:2015mha}). Our choice of parameters is such that $w$ and $z$ will be the fugacities of the two $SU(2)$ factors, while the $y_i$ will be the fugacities of the $SU(6)$.
The new "Coulomb branch" parameters are $A_1,A_2$, and in order to check S-duality we need to expand the results further in small $A_1,A_2$. Using the ADHM quivers described in Figure \ref{SU3Nf6}-b and the residue relations, we obtain, for the representations $\mathbf{3}$, $\mathbf{3}\otimes\mathbf{3}$ and $\mathbf{3}\otimes\overline{\mathbf{3}}$,
\begin{equation} \begin{aligned}} \newcommand{\eea}{\end{aligned} \end{equation}
A_1^{2/3} A_2^{1/3} w^{2/3} z^{1/3} \vev{W_{\mathbf{3}}} & =
1 + A_1 \chi^{A_1}_{\mathbf{2}}(w)
- A_1^{5/3} A_2^{1/3} \chi^{A_1}_{\mathbf{2}}(q_+) \chi^{A_5}_{\mathbf{6}}(\vec{y})
+ A_1^{4/3} A_2^{2/3} \chi^{A_5}_{\mathbf{15}}(\vec{y}) \cr
& + A_1 A_2 \chi^{A_1}_{\mathbf{2}}(w) \chi^{A_1}_{\mathbf{2}}(z) + A_1^2 \chi^{A_1}_{\mathbf{3}}(q_+) + A_1 A_2^2 \chi^{A_1}_{\mathbf{3}}(q_+)\chi^{A_1}_{\mathbf{2}}(w)\cr
& + A_1^2 A_2 \left(1 + \chi^{A_1}_{\mathbf{3}}(q_+) \right) \chi^{A_1}_{\mathbf{2}}(z)
- A_1^2 A_2 \chi^{A_1}_{\mathbf{2}}(q_+) \chi^{A_1}_{\mathbf{2}}(w) \chi^{A_5}_{\mathbf{20}}(\vec{y}) \cr
& - A_1^{4/3} A_2^{5/3} \chi^{A_1}_{\mathbf{2}}(q_+) \chi^{A_1}_{\mathbf{2}}(w) \chi^{A_5}_{\overline{\mathbf{6}}}(\vec{y})
+ A_1^{5/3} A_2^{4/3} \chi^{A_1}_{\mathbf{2}}(w) \chi^{A_5}_{\overline{\mathbf{15}}}(\vec{y}) \cr
& - A_1^{5/3} A_2^{4/3} \chi^{A_1}_{\mathbf{2}}(q_+) \chi^{A_1}_{\mathbf{2}}(z) \chi^{A_5}_{\mathbf{6}}(\vec{y}) + A_1^3 \chi^{A_1}_{\mathbf{5}}(q_+) \chi^{A_1}_{\mathbf{2}}(w) \cr
- A_1^{8/3} A_2^{1/3} & \chi^{A_1}_{\mathbf{4}}(q_+) \chi^{A_1}_{\mathbf{2}}(w) \chi^{A_5}_{\mathbf{6}}(\vec{y}) + A_1^{7/3} A_2^{2/3} \chi^{A_1}_{\mathbf{3}}(q_+) \chi^{A_1}_{\mathbf{2}}(w) \chi^{A_5}_{\mathbf{15}}(\vec{y}) + \ldots ,
\label{W3}
\eea
\begin{equation} \begin{aligned}} \newcommand{\eea}{\end{aligned} \end{equation}
A_1^{4/3} A_2^{2/3} w^{4/3} z^{2/3} \vev{W_{\mathbf{3} \otimes \mathbf{3}}} & =
1 + 2 A_1 \chi^{A_1}_{\mathbf{2}}(w) + 2 A_1^{4/3} A_2^{2/3} \chi^{A_5}_{\mathbf{15}}(\vec{y}) + 2 A_1 A_2 \chi^{A_1}_{\mathbf{2}}(w) \chi^{A_1}_{\mathbf{2}}(z) \cr
& - A_1^{5/3} A_2^{1/3} \left( \chi^{A_1}_{\mathbf{2}}(q_+) + \chi^{A_1}_{\mathbf{2}}(q_-) \right) \chi^{A_5}_{\mathbf{6}}(\vec{y}) + A_1^2 \chi^{A_1}_{\mathbf{3}}(w) \cr
& + A_1^2 \left( \chi^{A_1}_{\mathbf{3}}(q_+) + \chi^{A_1}_{\mathbf{2}}(q_+) \chi^{A_1}_{\mathbf{2}}(q_-) \right) + 2 A_1 A_2^2 \chi^{A_1}_{\mathbf{3}}(q_+) \chi^{A_1}_{\mathbf{2}}(w) \cr
& + A_1^3 \chi^{A_1}_{\mathbf{4}}(q_+) \left( \chi^{A_1}_{\mathbf{2}}(q_+) + \chi^{A_1}_{\mathbf{2}}(q_-) \right) \chi^{A_1}_{\mathbf{2}}(w) \cr
& - A_1^{8/3} A_2^{1/3} \chi^{A_1}_{\mathbf{3}}(q_+) \left( \chi^{A_1}_{\mathbf{2}}(q_+) + \chi^{A_1}_{\mathbf{2}}(q_-) \right) \chi^{A_1}_{\mathbf{2}}(w) \chi^{A_5}_{\mathbf{6}}(\vec{y}) \cr
& + A_1^{7/3} A_2^{2/3} \chi^{A_1}_{\mathbf{2}}(q_+) \left( \chi^{A_1}_{\mathbf{2}}(q_+) + \chi^{A_1}_{\mathbf{2}}(q_-) \right) \chi^{A_1}_{\mathbf{2}}(w) \chi^{A_5}_{\mathbf{15}}(\vec{y}) \cr
- 2 A_1^{4/3} A_2^{5/3} \chi^{A_1}_{\mathbf{2}} & (q_+) \chi^{A_1}_{\mathbf{2}}(w) \chi^{A_5}_{\overline{\mathbf{6}}}(\vec{y}) - A_1^2 A_2 \left( \chi^{A_1}_{\mathbf{2}}(q_+) + \chi^{A_1}_{\mathbf{2}}(q_-) \right) \chi^{A_1}_{\mathbf{2}}(w) \chi^{A_5}_{\mathbf{20}}(\vec{y}) \cr
+ A_1^2 A_2 \chi^{A_1}_{\mathbf{2}}(q_+) & \left( \chi^{A_1}_{\mathbf{2}}(q_+) + \chi^{A_1}_{\mathbf{2}}(q_-) \right) \chi^{A_1}_{\mathbf{2}}(z) + 2 A_1^2 A_2 \left( 1 + \chi^{A_1}_{\mathbf{3}}(w) \right) \chi^{A_1}_{\mathbf{2}}(z) \cr
+ 2 A_1^{5/3} A_2^{4/3} \chi^{A_1}_{\mathbf{2}}& (w) \chi^{A_5}_{\overline{\mathbf{15}}}(\vec{y}) - A_1^{5/3} A_2^{4/3} \left( \chi^{A_1}_{\mathbf{2}}(q_+) + \chi^{A_1}_{\mathbf{2}}(q_-) \right) \chi^{A_1}_{\mathbf{2}}(z) \chi^{A_5}_{\mathbf{6}}(\vec{y}) + \ldots ,
\label{W33}
\eea
\begin{equation} \begin{aligned}} \newcommand{\eea}{\end{aligned} \end{equation}
A_1 A_2 w z & \vev{W_{\mathbf{3} \otimes \overline{\mathbf{3}}}} = 1 + A_1 \chi^{A_1}_{\mathbf{2}}(w) + A_2 \chi^{A_1}_{\mathbf{2}}(z) + A_1^2 \chi^{A_1}_{\mathbf{3}}(q_+)
+ A_2^2 \chi^{A_1}_{\mathbf{3}}(q_+) \cr
& + 3 A_1 A_2 \chi^{A_1}_{\mathbf{2}}(w) \chi^{A_1}_{\mathbf{2}}(z)
- A_1^{5/3} A_2^{1/3} \chi^{A_1}_{\mathbf{2}}(q_+) \chi^{A_5}_{\mathbf{6}}(\vec{y})
- A_1^{1/3} A_2^{5/3} \chi^{A_1}_{\mathbf{2}}(q_+) \chi^{A_5}_{\overline{\mathbf{6}}}(\vec{y}) \cr
& + A_1^{4/3} A_2^{2/3} \chi^{A_5}_{\mathbf{15}}(\vec{y}) + A_1^{2/3} A_2^{4/3} \chi^{A_5}_{\overline{\mathbf{15}}}(\vec{y}) + A_1^3 \chi^{A_1}_{\mathbf{5}}(q_+) \chi^{A_1}_{\mathbf{2}}(w) \cr
& + A_2^3 \chi^{A_1}_{\mathbf{5}}(q_+) \chi^{A_1}_{\mathbf{2}}(z)
- A_1^{8/3} A_2^{1/3} \chi^{A_1}_{\mathbf{4}}(q_+) \chi^{A_1}_{\mathbf{2}}(w) \chi^{A_5}_{\mathbf{6}}(\vec{y}) \cr
& - A_1^{1/3} A_2^{8/3} \chi^{A_1}_{\mathbf{4}}(q_+) \chi^{A_1}_{\mathbf{2}}(z) \chi^{A_5}_{\overline{\mathbf{6}}}(\vec{y}) + A_1^{7/3} A_2^{2/3} \chi^{A_1}_{\mathbf{3}}(q_+) \chi^{A_1}_{\mathbf{2}}(w) \chi^{A_5}_{\mathbf{15}}(\vec{y}) \cr
& + A_1^{2/3} A_2^{7/3} \chi^{A_1}_{\mathbf{3}}(q_+) \chi^{A_1}_{\mathbf{2}}(z) \chi^{A_5}_{\overline{\mathbf{15}}}(\vec{y}) - A_1 A_2^2 \chi^{A_1}_{\mathbf{2}}(q_+) \chi^{A_1}_{\mathbf{2}}(z) \chi^{A_5}_{\mathbf{20}}(\vec{y}) \cr
& + A_1 A_2^2 \left( 2 \chi^{A_1}_{\mathbf{3}}(q_+) + \chi^{A_1}_{\mathbf{2}}(q_+) \chi^{A_1}_{\mathbf{2}}(q_-) + 1 \right) \chi^{A_1}_{\mathbf{2}}(w) + A_1 A_2^2 \chi^{A_1}_{\mathbf{2}}(w) \chi^{A_1}_{\mathbf{3}}(z) \cr
& - A_1^2 A_2 \chi^{A_1}_{\mathbf{2}}(q_+) \chi^{A_1}_{\mathbf{2}}(w) \chi^{A_5}_{\mathbf{20}}(\vec{y}) + A_1^2 A_2 \left( 2 \chi^{A_1}_{\mathbf{3}}(q_+) + \chi^{A_1}_{\mathbf{2}}(q_+) \chi^{A_1}_{\mathbf{2}}(q_-) + 1 \right) \chi^{A_1}_{\mathbf{2}}(z) \cr
& + A_1^2 A_2 \chi^{A_1}_{\mathbf{3}}(w) \chi^{A_1}_{\mathbf{2}}(z)
- A_1^{5/3} A_2^{4/3} \left( 2\chi^{A_1}_{\mathbf{2}}(q_+) + \chi^{A_1}_{\mathbf{2}}(q_-) \right) \chi^{A_1}_{\mathbf{2}}(z) \chi^{A_5}_{\mathbf{6}}(\vec{y}) \\
& + 2 A_1^{5/3} A_2^{4/3} \chi^{A_1}_{\mathbf{2}}(w) \chi^{A_5}_{\overline{\mathbf{15}}}(\vec{y}) - A_1^{4/3} A_2^{5/3} \left( 2\chi^{A_1}_{\mathbf{2}}(q_+) + \chi^{A_1}_{\mathbf{2}}(q_-) \right) \chi^{A_1}_{\mathbf{2}}(w) \chi^{A_5}_{\overline{\mathbf{6}}}(\vec{y}) \cr
& + 2 A_1^{4/3} A_2^{5/3} \chi^{A_1}_{\mathbf{2}}(z) \chi^{A_5}_{\mathbf{15}}(\vec{y}) + \ldots .
\label{W33b}
\eea
As expected, the coefficients in the expansion are characters of $SU(2)^2\times SU(6)$, providing strong support for the symmetry enhancement proposal.
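Here $\chi^{A_5}_{\mathbf{R}}(\vec y)$ are $SU(6)$ characters, with the conventional identification $\chi^{A_5}_{\mathbf 6}(\vec y)=\sum_i y_i$ and $\chi^{A_5}_{\mathbf{15}}(\vec y)=\sum_{i<j}y_iy_j$. The decompositions implicitly used when organizing the coefficients, e.g. $\mathbf 6\otimes\mathbf 6=\mathbf{15}\oplus\mathbf{21}$, can be checked symbolically:

```python
import sympy as sp

y = sp.symbols('y1:7')  # SU(6) fugacities y1..y6 (prod y_i = 1 is imposed separately)

chi6  = sum(y)                                                   # fundamental 6
chi15 = sum(y[i]*y[j] for i in range(6) for j in range(i+1, 6))  # antisymmetric 15
chi21 = sum(y[i]*y[j] for i in range(6) for j in range(i, 6))    # symmetric 21

# 6 (x) 6 = 15_a (+) 21_s at the level of characters
assert sp.expand(chi6**2 - (chi15 + chi21)) == 0
```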
\subsubsection{$SU(2) \times SU(2)$, $N_f = 2+2$}
\label{sssec:SU2xSU2Nf22}
In the $SU(2)^2_{N_f=2+2}$ theory we consider Wilson loops in the tensor product representations $\widetilde\mathcal{R}_{n_1,n_2} = (\mathbf{2}^{\otimes n_1} , \mathbf{2}^{\otimes n_2})$.
Their VEVs are computed from the residue formula \eqref{residueformulaSU2xSU2} as before, from the same SQM loop VEVs, but with the $(k_1,k_2)$-instanton ADHM quiver of Figure \ref{SU2xSU2Nf4}-b.
\medskip
\begin{figure}[th]
\centering
\includegraphics[scale=0.35]{SU2xSU2Nf4.pdf}
\vspace{-0.5cm}
\caption{\footnotesize{a) Brane realization of the $SU(2)^2_{N_f=2+2}$ theory. b) Brane setup ADHM quiver SQM for the $(k_1,k_2)$-instanton sector in the presence of an $(n_1,n_2)$ D3 branes SQM loop.}}
\label{SU2xSU2Nf4}
\end{figure}
The relevant fugacities are the same $\widetilde\alpha_1,\widetilde\alpha_2,\widetilde Q_1,\widetilde Q_2, \widetilde \mu$ as before, together with the flavor fugacities
$\widetilde\mu_{ij} = e^{m_{ij} - \frac 12 (a_{i1} + a_{i2})} \equiv e^{m'_{ij}}$, with $i,j=1,2$, for the 2+2 fundamental hypermultiplets; the $m'_{ij}$ are the masses of the fundamental hypermultiplets in the $SU(2)^2$ theory.
We reorganize the fugacities to check S-duality and make the $SU(2)^2\times SU(6)$ symmetry manifest (enhanced from $SU(2)^2_{\rm fund}\times SU(2)_{\rm bif} \times U(1)^2_{\rm inst}$) as follows:
\begin{equation} \begin{aligned}} \newcommand{\eea}{\end{aligned} \end{equation}
& \widetilde\alpha_{1} = \frac{(y_3 y_4 y_5 y_6)^{1/4}}{A_1^{2/3} A_2^{1/3} (y_1 y_2)^{1/4}} \,,\quad \widetilde\alpha_{2} = \frac{(y_5 y_6)^{1/4}}{A_1^{1/3} A_2^{2/3} (y_1 y_2 y_3 y_4)^{1/4}} \,, \quad
\widetilde Q_1 = \sqrt{\frac{y_3 y_4}{y_1 y_2}} \,,\quad \widetilde Q_2 = \sqrt{\frac{y_5 y_6}{y_3 y_4}} \,, \cr
& \widetilde\mu = \sqrt{\frac{y_3}{y_4}} \,,\quad
\widetilde\mu_{11} = w \sqrt{\frac{y_1}{y_2}} \,,\quad
\widetilde\mu_{12} = \dfrac{1}{w} \sqrt{\frac{y_1}{y_2}} \,, \quad
\widetilde\mu_{21} = z \sqrt{\frac{y_6}{y_5}} \,,\quad
\widetilde\mu_{22} = \frac{1}{z} \sqrt{\dfrac{y_6}{y_5}} \,,
\eea
where we used the same notations $A_1,A_2,w,z,y_i$ as in the previous section, implicitly providing the map of parameters under S-duality.
Expanding at small $A_1,A_2$ we find
\begin{equation} \begin{aligned}} \newcommand{\eea}{\end{aligned} \end{equation}
A_1^{2/3} A_2^{1/3} \left(\frac{ y_1 y_2}{y_3 y_4 y_5 y_6}\right)^{1/4} \vev{W_{(\mathbf{2},\mathbf{1})}} & =
\quad \text{r.h.s. of} \quad \eqref{W3} \,, \cr
A_1^{4/3} A_2^{2/3} \left(\frac{ y_1 y_2}{y_3 y_4 y_5 y_6}\right)^{1/2} \vev{W_{(\mathbf{2} \, \otimes \mathbf{2},\mathbf{1})}} & =
\quad \text{r.h.s. of} \quad \eqref{W33} \,, \cr
A_1 A_2 \left(\frac{y_1 y_2}{y_5 y_6}\right)^{1/2} \vev{W_{(\mathbf{2}, \mathbf{2})}} & =
\quad \text{r.h.s. of} \quad \eqref{W33b} \,.
\eea
\subsubsection{S-duality}
\label{sssec:SdualitySU3Nf6}
Acting with S-duality on the brane setups realizing the loop insertions we can predict the same map between Wilson loop operators as in the previous section, up to the dressing with a background Wilson loop: the $SU(3)$ Wilson loop $W_{\mathcal{R}_{n_1,n_2}}$ is mapped to the $SU(2)^2$ Wilson loop $W_{\widetilde\mathcal{R}_{n_1,n_2}}$. The exact computations above support the precise relation
\begin{equation}} \newcommand{\ee}{\end{equation}
\vev{W_{\mathcal{R}_{n_1,n_2}}} = Y_1^{-n_1} Y_2^{-n_2} \vev{W_{\widetilde\mathcal{R}_{n_1,n_2}}} \,,
\ee
with
\begin{equation} \begin{aligned}} \newcommand{\eea}{\end{aligned} \end{equation}
& Y_1 = e^{-\frac{2\widetilde t_1+ \widetilde t_2}{3} + \frac 13(m'_{11}-m'_{12})+ \frac 16 (m'_{21}-m'_{22})} = e^{\frac t2 - \frac{m_1+m_2-m_3-m_4-m_5-m_6}{4}} \,, \cr
& Y_2= e^{-\frac{\widetilde t_1+ 2\widetilde t_2}{3}+ \frac 16 (m'_{11}-m'_{12})+ \frac 13 (m'_{21}-m'_{22})}= e^{\frac t2 - \frac{m_1+m_2 + m_3+m_4-m_5-m_6}{4}} \,.
\eea
The S-duality action can be expressed as
\begin{equation} \begin{aligned}} \newcommand{\eea}{\end{aligned} \end{equation}
& S.W_{\mathcal{R}_{n_1,n_2}} = Y_1^{-n_1} Y_2^{-n_2} W_{\widetilde\mathcal{R}_{n_1,n_2}} \,.
\label{SmapSU3Nf6}
\eea
The parameters $Y_1,Y_2$ can be understood as background Wilson loops of the global symmetry group.
\section{Generalization}
\label{sec:Generalization}
From the heuristic brane reasoning and the above exact results it is straightforward to conjecture the S-duality map relating the following two theories:
the $SU(M)_{(N_{f1}-N_{f2})/2}$ theory (the subscript indicates the Chern-Simons level) with $2M-4+N_{f1}+N_{f2}$ fundamental hypermultiplets, for $0\le N_{f1} \le 2$ and $0\le N_{f2} \le 2$, and the $SU(2)^{M-1}$ linear quiver theory with $N_{f1}$ and $N_{f2}$ fundamental hypermultiplets in the left-most and right-most $SU(2)$ nodes, respectively.\footnote{The conjectured duality actually extends to the ranges $0\le N_{f1} \le 4$, $0\le N_{f2} \le 4$, although this relates to more involved brane configurations compared to what we have been considering in this paper.
The case $N_{f1}= N_{f2}=4$ is actually conjectured to describe a 6d $\mathcal{N}=(1,0)$ SCFT compactified on a circle. We thank Gabi Zafrir for pointing this out to us.}
The Wilson loops transforming covariantly under S-duality are the $SU(M)$ loops $W_{(n_1,\cdots,n_{M-1})}$ in tensor product of rank $i$ antisymmetric representations $\mathcal{A}_{i}$ and their dual $SU(2)^{M-1}$ loops $\widetilde W_{(n_1,\cdots,n_{M-1})}$ in the representation $\mathbf{2}^{\otimes n_i}$ for each quiver node:
\begin{equation}} \newcommand{\ee}{\end{equation}
\mathcal{A}_1^{\otimes n_1} \otimes \mathcal{A}_2^{\otimes n_2} \otimes \cdots \otimes \mathcal{A}_{M-1}^{\otimes n_{M-1}} \leftrightarrow (\mathbf{2}^{\otimes n_1}, \mathbf{2}^{\otimes n_2}, \cdots, \mathbf{2}^{\otimes n_{M-1}}) \,.
\ee
The associated SQM loops are realized with stacks of $n_1,n_2,\cdots,n_{M-1}$ D3 branes placed in the $M-1$ central regions of the brane system. The results in this paper generalize to the S-duality map
\begin{equation} \begin{aligned}} \newcommand{\eea}{\end{aligned} \end{equation}
& S.W_{(n_1,\cdots,n_{M-1})} = Y_1^{-n_1} \cdots Y_{M-1}^{-n_{M-1}} \widetilde W_{(n_1,\cdots,n_{M-1})} \,,
\label{SmapGen}
\eea
with parameters $Y_i$ which are background Wilson loops, for which we conjecture the expressions in terms of $SU(M)$ parameters
$Y_i = e^{\frac t2 - \frac 14(\sum_{k=1}^{N_{f1}} \hat m_k + \sum_{k=1}^{2i-2}m_k - \sum_{k=2i-1}^{2M-4} m_k - \sum_{k=1}^{N_{f2}} \check m_k)}$, where $\hat m_k$, $m_k$ and $\check m_k$ are the masses of the $N_{f1}$, $2M-4$ and $N_{f2}$ fundamental hypermultiplets, respectively.
Further generalization to the $SU(M)^{N-1} - SU(N)^{M-1}$ duality can also be worked out along the same lines.
\section*{Acknowledgements}
We would like to thank Joonho Kim and Chiung Hwang for discussions and technical support, as well as Hee-Cheol Kim, Diego Rodriguez-Gomez and Gabi Zafrir for valuable comments on the draft.
\section{Introduction}
High-precision timing observations which probe neutron star physics,
general relativity, and other phenomena rely on tracking rotational
phase as measured by template matching of stable integrated pulse profiles
\citep[for a discussion, see][]{Lorimer:2005fk}. Upon closer examination,
however, the radio emission displays a rich variety of examples in
which pulsars appear to flip between various states on a wide range
of timescales.
Shortly after the discovery
of pulsars, in a remarkable series of papers making use of the
Arecibo telescope, \cite{1970Natur.227..692B, 1970Natur.228...42B,1970Natur.228..752B,1970Natur.228.1297B}
showed that pulsars exhibit
nulling, mode-changing and sub-pulse drifting phenomena in their
pulse-to-pulse emission.
Nulling, the abrupt halting of the emission
mechanism for a few or many hundreds of pulses \citep{1970Natur.228...42B}, can
be viewed as an extreme version of mode-changing \citep{1970Natur.227..692B,1970Natur.228..752B,1970Natur.228.1297B},
during which
the pulse profile exhibits two or more stable pulse shapes.
Although nulling is not universal, an analysis of the phenomenon across the population of isolated pulsars by \cite{2007MNRAS.377.1383W} suggests
that the fraction of time spent in the null state increases with characteristic age.
Sub-pulse drifting \citep{1968Natur.220..231D,1973NPhS..243...77B},
where clearly ordered subpulses drift
across the main pulse window at a fixed rate characterized by
a subpulse spacing and a repetition period, appeared to be
preserved over an interval of time spent in the null state
\citep{1978MNRAS.182..711U}.
With the passage of time, further discoveries have blurred and questioned these classifications.
\cite{1970ApJ...162..727H} discovered the first of a number of pulsars that appear to combine
mode changing with sub-pulse drift; these pulsars generally assume one of several sub-pulse drifting modes.
Also, \cite{MNR:MNR10512} have shown that some pulsars
exhibit what they term ``bistable profile illumination'', which is single-pulse drifting
that mimics the abrupt beginnings of mode-changing but then
gradually drifts into phase with the normal mode.
\cite{2007MNRAS.377.1383W} suggest
that nulling and mode changing are different manifestations of the same phenomenon.
Other extraordinary pulsars may relate nulling to mode-changing over a wide
range of timescales. Among these discoveries are
the rotating radio transients (RRATs; \cite{2006Natur.439..817M}), the intermittent pulsars
\citep{Kramer28042006, 0004-637X-746-1-63, 2012ApJ...758..141L} and
bursting emission, which was
until now seen only in J1752$+$2359 \citep{0004-637X-600-2-905} and J1938$+$2213
\citep{2003PhDT.........2C, Lorimer2013}. The bursting pulsars bear some resemblance
to conventional nulling pulsars, with the exception that
the pulse to pulse intensities are modulated by a decaying envelope before
abruptly returning to a null or faint state.
While the RRATs emit only occasional
individual pulses spaced anywhere between seconds and years,
intermittent pulsars switch on and remain steady before switching off
again over timescales of days to years. Over these longer timescales,
where it is possible to measure the spin period derivative precisely,
it is observed that the ``on'' and ``off'' states for the intermittent
pulsars are associated with two different spin-down rates \citep{Kramer28042006}.
A similar behaviour was recently seen in 17 non-intermittent
pulsars \citep{DataHome} which switch between different spin-down
states in a quasi-periodic or even chaotic fashion \citep{2013MNRAS.428..983S};
these switches are sometimes accompanied by correlated pulse profile changes.
\cite{DataHome} suggest that the mixture of spin-down states could even account for the ``timing noise''
variations seen in many pulsars and propose that most, if not all,
normal pulsars have multiple spin-down states indicating global
changes in magnetospheric current densities. One implication of these
results is that mode changing (and perhaps, by association, bursting)
could be connected to spin-down rate changes on much shorter timescales.
Very recently, synchronous X-ray and radio state switching was reported in the
classical mode-changing pulsar B0943+10~\citep{2013Sci...339..436H}.
In this case, non-thermal
unpulsed X-rays are observed during the bright radio emitting
phases, while an additional pulse thermal component is present
during the radio-quiet phase.
A better understanding of the observational phenomenology involved is
now required in order to make further progress.
In this paper, we present a previously unseen bursting emission phenomenon in
PSR~B0611$+$22, a pulsar discovered by \cite{DAVIES:1972fk}.
This young pulsar, with a characteristic age of $90,000$~years,
has considerably high timing noise \citep{1980ApJ...237..206H}.
This high level of noise could indicate that there is still some underlying
phenomenon that is not being accounted for in the timing models.
At first glance, it appears to be a normal pulsar with a simple, essentially Gaussian pulse shape
that is approximately 80\% linearly polarized \citep{1989ApJ...346..869R}.
Yet, it has been shown that the integrated pulse shape varies at
different times \citep{1980ApJ...239..310F}.
\cite{1992msem.coll..280N} also showed that there is a correlation between the
brighter pulses and their phase alignment, and later claimed that the timing noise
was due to mode switching \citep{2000AAS...19713004N}.
Other efforts have been made to find a relationship contributing to this timing noise, such as
sub-pulse drifts \citep{2007A&A...469..607W}, but to no avail.
In this paper, using archival Arecibo observations, we reveal that this pulsar
exhibits bursting episodes and confirm its moding behaviour.
The remainder of this paper is organized as follows.
In Section 2 we provide technical information about
the observations. In Section 3 we detail the data analysis techniques used to expose
relationships from pulse to pulse. In Section 4 we conduct a fluctuation
spectrum analysis on the normal and bursting modes found in Section 3.
In Section 5, motivated by the recent X-ray observations of PSR~B0943+10,
we make predictions for observations of B0611+22 at
X-ray wavelengths that may help to understand the phenomenon seen here.
Finally, in Section 6, we discuss our results and their scientific significance.
\section{Observations}
The data presented here were collected between March 2 and 8,
2009 (MJD range 54892--54898)
in a dedicated observing run with the Arecibo telescope to search for unusual
spectral index behaviour in PSR~B0611$+$22. To characterize the flux density
spectrum, observations were carried out at 327~MHz, 1400~MHz, 4.5~GHz, and 8.8~GHz
using the Wide Band Arecibo Pulsar Processors \citep[WAPPs; ][]{2000ASPC..202..275D}.
Unfortunately, the pulsar was not detected at 8.8 GHz and radio frequency interference (RFI)
dominated the 4.5 GHz observations.
Further details on the individual observations in which the pulsar
was clearly detected are shown in Table \ref{tab:Info}.
\begin{table*}
\begin{center}
\caption{Summary of Arecibo observations of B0611$+$22.}
\label{tab:Info}
\begin{tabular}{ccccccc}
\hline
MJD & Centre Frequency & Integration Time& Bandwidth/Channel & Sample time & Number of Channels \\
& (MHz) & (s) & (MHz) & ($\mu$s) & \\
\hline
54892.97 & 1400 & 900 & 0.195 & 256 & 2048\\
54893.01 & 327 & 900 & 0.012 & 256 & 4096\\
54894.95 & 327 & 900 & 0.012 & 256 & 4096\\
54894.97 & 1400 & 900 & 0.781 & 128 & 512\\
54896.94 & 327 & 900 & 0.012 & 256 & 4096\\
54896.97 & 1400 & 900 & 0.781 & 128 & 512\\
54898.95 & 1400 & 900 & 0.781 & 128 & 512\\
54898.97 & 327 & 900 & 0.012 & 256 & 4096\\
\hline
\end{tabular}
\end{center}
\end{table*}%
\section{Data analysis}
The data from each of the observations listed in Table \ref{tab:Info}
were analyzed in a systematic way, as described below.
\subsection{Individual pulse profiles}
The autocorrelation functions recorded by the WAPPs
in each polarization channel were summed and Fourier transformed
to convert them to the equivalent set of total-power
spectral channels using standard data analysis
techniques \citep[see, e.g., Section 5.2.2 of ][]{Lorimer:2005fk}
implemented in the {\tt filterbank} program as
part of the \textsc{sigproc} \footnote{http://sigproc.sourceforge.net/}
pulsar processing package. The
resulting data were then dedispersed at the pulsar's dispersion measure (96.91~cm$^{-3}$~pc) and folded
at the predicted pulse period (0.33 s) using the ephemeris provided in
the ATNF pulsar catalog \citep{2005AJ....129.1993M}\footnote{www.atnf.csiro.au/people/pulsar/psrcat/}.
The \textsc{sigproc} {\tt fold} program was used to produce
an ASCII pulse profile with 256 rotational phase bins.
Then using the standard deviation of the off-pulse bins,
we were able to convert to mJy using the radiometer equation
\citep[see, e.g., Section 7.3.2 of][]{Lorimer:2005fk}, with a gain of 10~K/Jy and an
assumed system temperature of 122~K for 327~MHz and 30~K for 1.4~GHz.
Zoomed-in versions of these profiles are shown in the left panels of
Figs.~\ref{fig:98TBOX} and \ref{fig:98LBOX}.
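As a concrete illustration of this calibration step, the sketch below (Python; the function names are ours, and the observing parameters are the 327~MHz values from Table~\ref{tab:Info}) estimates the single-sample radiometer noise and scales a raw profile so that its off-pulse scatter matches it.

```python
import numpy as np

def radiometer_rms_mjy(t_sys_k, gain_k_per_jy, n_pol, bw_mhz, t_samp_s):
    """Single-sample rms flux density in mJy from the radiometer equation."""
    return 1e3 * t_sys_k / (gain_k_per_jy *
                            np.sqrt(n_pol * bw_mhz * 1e6 * t_samp_s))

def calibrate_profile(profile, off_pulse_bins, rms_mjy):
    """Scale raw counts so the off-pulse standard deviation equals rms_mjy."""
    return profile * rms_mjy / np.std(profile[off_pulse_bins])

# 327 MHz setup: Tsys = 122 K, G = 10 K/Jy, 2 polarizations,
# 4096 channels x 0.012 MHz, 256 us samples (Table 1)
rms_327 = radiometer_rms_mjy(122.0, 10.0, 2, 4096 * 0.012, 256e-6)
```

With these numbers the per-sample noise comes out at a few tens of mJy; the exact value depends on the effective bandwidth after RFI excision.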
\subsection{Revealing the emission modes}
When looking for a relationship across pulse intensities, there is
often a great deal of noise present in the system which may obscure
any underlying correlation. A simple way to reveal these relationships is to take a
boxcar convolution (a running average) over several data points. This
method increases the signal-to-noise ratio by the square root of the
total number of data points that were averaged, but one must be
aware that this will also reduce the resolution of the data along the
direction of the averaging. We carried out a boxcar convolution of
the data with a normalised area kernel along 64 data points in a
single bin, to preserve the bin resolution. We chose this kernel
size to reduce the noise fluctuations
by at least a factor of five, and to use a power of two for computational
efficiency. This averaging is often known as smoothing, and we will
refer to these post-processed data as smoothed data.
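In code, this per-bin smoothing amounts to a one-dimensional convolution along the pulse-number axis only. The sketch below (Python; the function name is ours) shows one way to do it for a pulse stack of shape (number of pulses, phase bins):

```python
import numpy as np

def smooth_per_bin(stack, width=64):
    """Boxcar-convolve each phase bin along the pulse-number axis with a
    unit-area kernel, preserving the phase-bin resolution."""
    kernel = np.ones(width) / width
    return np.apply_along_axis(
        lambda series: np.convolve(series, kernel, mode="same"), 0, stack)
```

For a kernel of 64 pulses the noise fluctuations drop by roughly sqrt(64) = 8, at the cost of resolution along the pulse-number direction.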
\begin{figure*}
\includegraphics[width=.75\linewidth]{98T_Box_64.eps}
\caption{The 327~MHz observation of PSR~B0611$+$22 on MJD 54898.
Left: the folded dedispersed time series. Centre:
the time series after being box-car convolved along each bin
with a kernel of 64 pulses. Right: the convolution of the
perturbations about the mean profile with the same kernel.}
\label{fig:98TBOX}
\end{figure*}
When smoothing is performed on the 327~MHz emission
from PSR B0611$+$22, the mode changes suggested in \cite{1992msem.coll..280N}
are readily apparent, which is shown in the centre panel of Fig.~\ref{fig:98TBOX}.
There, two prolonged episodes of
enhanced emission can be seen. These enhanced emission
episodes can then be isolated when the average
of each bin is subtracted from that bin to form the perturbations
about the mean. These perturbations are then smoothed to produce the
right panel of Fig.~\ref{fig:98TBOX}. It appears that
the enhanced episodes are additional effects overlying a normal emission mode.
\begin{figure}
{\includegraphics[width=\linewidth,trim=25 0 100 0,clip=true ]{98T_Burst.eps}}
\caption{ Flux along the bin where the bursting events dominate in Fig. \ref{fig:98TBOX}.
The solid line is the boxcar convolution of the series.}
\label{fig:98Tburst}
\end{figure}
To examine the structure of these episodes, a bin was chosen in
a section where the abnormalities are the dominant feature. The flux
density and its convolution for that bin are shown in
Fig. \ref{fig:98Tburst}. There, a decaying behaviour can be seen across
the event before it abruptly returns to the normal emission
mode. This trend has not been recorded for this pulsar before but is strikingly similar to what was reported in
\cite{0004-637X-600-2-905} for PSR~J1752$+$2359. There the authors describe
this type of event as bursting. We adopt this nomenclature and
will refer to this type of event as a \emph{bursting} emission mode.
\begin{figure*}
\includegraphics[width=.75\linewidth]{98L_Box_64.eps}
\caption{The 1.4~GHz observation of PSR B0611$+$22 on MJD
54898. Left: The folded dedispersed time series with
one pulse profile for each pulse. Centre:
the time series after being box-car convolved along each bin with
a kernel of 64 pulses. Right: the convolution of the perturbations around the
mean profile with the same kernel.}
\label{fig:98LBOX}
\end{figure*}
To investigate whether these modes can be seen at
other frequencies, the same analysis was
conducted on data collected at 1.4~GHz. From the perturbations about
the mean at this frequency, seen
in Fig.~\ref{fig:98LBOX}, it becomes
clear that there is an undulating relationship. This suggests that
mode switching is present.
\begin{figure}
\centering
\begin{tabular}{c}
{\includegraphics[width=.9\linewidth,trim=25 0 100 0,clip=true]{98L_Norm.eps}}\\
{\includegraphics[width=.9\linewidth,trim=25 0 100 0,clip=true]{98L_Burst.eps}}\\
\end{tabular}
\caption{A 15~minute 1.4~GHz observation of PSR~B0611$+$22 on MJD
54898. (\emph{Top}): The flux within a bin where the left-sided
fluctuations dominate in Fig. \ref{fig:98LBOX}.
(\emph{Bottom}): The flux within a bin where
the right sided fluctuations dominate.
The solid line in both is the boxcar
convolution of the time series. Note the alternating flux densities between
the left- and right-sided bins.}
\label{fig:98LModes}
\end{figure}
To examine whether the structure of these modes is similar to the ones at 327 MHz, we plot bin values from
either side of the undulation which are shown in Fig. \ref{fig:98LModes}. On the left side of the
undulation, shown in the top of Fig. \ref{fig:98LModes}, pulse sequences of larger flux density
are mesa (i.e.~table-top) shaped with a quite constant mean value throughout before quickly
returning to a baseline value. On the other hand, the right-sided
enhanced sequences, shown in the bottom of Fig. \ref{fig:98LModes},
appear to have the decaying structure of the bursting mode.
This therefore confirms that the two modes found at 327 MHz are also present at 1.4 GHz,
but with intriguing differences. Most notably, at 1.4 GHz the bursting mode in the right hand
bin is no longer accompanied by a steady weaker mode in the left hand bin. In the top panel of Fig.~\ref{fig:98LModes}
we can see that the left bin is far from steady and at this higher frequency it exhibits abrupt changes of its own.
Moreover, the left hand bursts alternate abruptly with those of the right, shown in the
bottom panel of Fig.~\ref{fig:98LModes}. Together these form the undulating pattern in
Fig.~\ref{fig:98LBOX}.
\subsection{Gaussian fitting and skewness measurements}
To quantify the bursting phenomenon, knowing that the pulse
is well described by a Gaussian \citep{1999ApJS..121..171W},
we fit Gaussians to each profile on data values greater than
25\% of the maximum value of each local mean pulse. This is done to
ensure that the fit conforms around the peak of the pulse, and to
reduce any asymmetric influences. The resulting fit parameters
(peak location $\mu$, amplitude $A$, and pulse width $\sigma$) of each pulse can then
be compared to see if changes occur between the two emission modes.
To quantify any asymmetry in the data, we measured the \emph{skewness} of each pulse.
This procedure is outlined in the Appendix.
In this calculation, to avoid contamination by noise, only flux density
levels greater than 5\% of the maximum pulse value are used.
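The clipped parameter and skewness measurements can be sketched as follows. The paper fits a least-squares Gaussian; the moment-based estimates below (Python, numpy only; a simple stand-in, not the authors' exact routine) use the same 25\% and 5\% clipping thresholds.

```python
import numpy as np

def pulse_parameters(profile, fit_clip=0.25, skew_clip=0.05):
    """Estimate (A, mu, sigma) from bins above fit_clip x max via weighted
    moments, and the skewness of bins above skew_clip x max.
    (A stand-in for the least-squares Gaussian fit of Section 3.3.)"""
    x = np.arange(profile.size, dtype=float)
    w_fit = np.where(profile > fit_clip * profile.max(), profile, 0.0)
    mu = np.average(x, weights=w_fit)
    sigma = np.sqrt(np.average((x - mu) ** 2, weights=w_fit))
    w_skew = np.where(profile > skew_clip * profile.max(), profile, 0.0)
    m = np.average(x, weights=w_skew)
    var = np.average((x - m) ** 2, weights=w_skew)
    skew = np.average((x - m) ** 3, weights=w_skew) / var ** 1.5
    return profile.max(), mu, sigma, skew
```

For a symmetric pulse the skewness comes out near zero; systematic offsets between the two modes then show up directly in the per-pulse parameter histograms.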
\subsubsection{Gaussian and skewness results }
Note that, due to the smoothing of
the data, any structures on scales smaller than 64 pulse numbers in
these results must be treated as a noise feature. For this reason,
histograms are generated for each parameter value in order to
investigate broader statistical trends.
\begin{figure*}
{\includegraphics[width=\linewidth,trim= 100 50 100 25 , clip=true]{HistAll98T.eps}}
\caption{Gaussian parameters and skewness measurements of the smoothed data from the
observation carried out on MJD 54898 at 327 MHz.}
\label{fig:HistT}
\end{figure*}
The results from this analysis of the 327~MHz data are shown in
Fig. \ref{fig:HistT}. There, a clear bimodal distribution is seen in
the peak locations ($\mu$) histogram. These values are then plotted against pulse
number, the lower graph in the $\mu$ column of Fig. \ref{fig:HistT}.
It then quickly becomes evident that the larger values of $\mu$ are
associated with the bursting pulses. This directly conveys a
phase-shift between the two emission modes.
To isolate the two populations, we set a threshold around the midpoint (at bin 14.9)
on the $\mu$ values. Pulse numbers with $\mu$ values below this
threshold are labeled as a normal pulse, and the rest are labeled as a
bursting pulse. From this indexing, sub-set histograms are formed for
the other recorded parameter values to see how these emission modes
contribute to each distribution.
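The indexing described above reduces to a boolean mask on the fitted peak locations. A minimal sketch (the default threshold is the midpoint quoted for the 327~MHz data):

```python
import numpy as np

def split_modes(mu_values, threshold=14.9):
    """Return boolean masks (normal, bursting) from fitted peak bins:
    pulses with mu below the threshold are labelled normal."""
    mu_values = np.asarray(mu_values)
    normal = mu_values < threshold
    return normal, ~normal
```

The two masks can then index the amplitude, width, and skewness arrays to build the sub-set histograms.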
For the pulse width ($\sigma$)
values measured in the 327 MHz data, there is little that distinguishes
the two modes, while the amplitude values ($A$) are dominated at the
higher end by the bursting mode. This is consistent with what was seen in
the smoothed data set for this frequency, seen in
Fig. \ref{fig:98TBOX} and again in the far right of
Fig. \ref{fig:HistT}. When these amplitude values are plotted against
the pulse number, the lower graph in the $A$ column of
Fig. \ref{fig:HistT}, the decaying behaviour mimics what was seen in
Fig. \ref{fig:98Tburst}. This confirms that this decay is an overall
effect on the pulse and not a single bin phenomenon.
Skewness measurements at 327 MHz show a clear separation between the
two modes, and the bursting pulses tend to have a lower skewness
than the normal mode.
We also wish to see if these trends are apparent in the pulse profiles. To do
this, pulses from the raw data were averaged over the pulse numbers
of the corresponding modes and over the whole observation for
comparison. These profiles are shown in Fig.~\ref{fig:HistT}. There
we can see that the mean bursting pulse is indeed larger in amplitude,
similar in width, and its peak is slightly shifted compared to the
mean normal pulse.
\begin{figure*}
{\includegraphics[width=\linewidth,trim= 100 50 100 25 , clip=true]{HistAll98L.eps}}
\caption{Gaussian parameters and skewness measurements of the smoothed data from the observation carried out on MJD 54898 at 1.4~GHz.}
\label{fig:HistL}
\end{figure*}
To see if these trends are consistent at other frequencies, the
analysis was performed on the 1.4~GHz observation from the same day,
shown in Fig. \ref{fig:HistL}. Again we see a clear bimodal
distribution in the $\mu$ values. When plotted against pulse number, a
noticeable phase shift is seen that corresponds with the undulation
observed in the smoothed perturbations of Fig. \ref{fig:98LBOX}. Again, a
$\mu$ threshold about the midpoint (at bin 14.1) was set to isolate the two modes for
comparison.
When these $\sigma$ values are compared, it is seen that the bursting
pulses tend to be narrower than the normal pulses. This trend was not seen
in the 327~MHz data, where there was likewise little distinction in the amplitudes. On the
other hand, the bursting pulses still have the tendency towards lower
skewness values.
\begin{figure*}
{\includegraphics[width=\linewidth,trim= 100 50 100 25 , clip=true]{HistNR98T.eps}}
\caption{Gaussian parameters and skewness measurements of the smoothed data from the observation carried out on MJD 54898 at 327 MHz with the normal profile removed from the bursting modes. }
\label{fig:HistNR}
\end{figure*}
In efforts to explain these varying trends, the 327~MHz observation
is re-examined. The larger amplitude and similar width of the bursting pulse
is consistent with a nested normal pulse being emitted simultaneously. From the indexing
of the previous analysis, we remove the mean normal pulse from the bursting sequences in
the raw data set. This new set is then convolved and Gaussian fitted to form
Fig. \ref{fig:HistNR}. There we can see that the new trends reflect the ones observed at 1.4~GHz,
where the bursting pulses are narrower than the normal mode and the amplitudes are of
similar magnitude.
Mean profiles for each situation are presented in the far right hand corners of
Fig. \ref{fig:HistT}, \ref{fig:HistL}, and \ref{fig:HistNR}. The bursting profiles for each are
contained within the normal profile's starting and ending
envelope. Therefore, it appears that the different emission mechanisms
are confined to a single emission cone. This confinement is also supported by
the skewness measurements, where the asymmetry of the profile is changing to
accommodate for this restraint, as well as the phase changes.
\subsubsection{Bursting rate}
When the analysis from the previous section is performed on the other observations from
Table~\ref{tab:Info}, we find that bursting events are occurring rather frequently in this
pulsar, with at least one mode shift per observation. All the observations support the same
statistical trends in Fig. \ref{fig:HistT} and \ref{fig:HistL} for their respective bands.
From the length of each burst and
the total number of events over all the observations, we estimate a bursting event occurs approximately every seven minutes (1200 pulses) lasting for two to four minutes (300 - 600 pulses).
\subsection{Pulse energy distributions}
From the indexing in the previous sections, we are able to form sub-set pulse energy distributions for each mode.
This was done by summing ten phase bins on either side of the maximum mean phase bin for each pulse in the raw data sets.
A bin span was then converted to time and multiplied by the sum to produce an
energy for each pulse. The energy distributions for each mode and the overall observation are then
generated, see Fig. \ref{fig:EnDis}.
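A sketch of this energy estimate for a raw pulse stack (shape: pulses x phase bins); the window of ten bins either side of the mean-profile peak and the bin-time conversion follow the description above.

```python
import numpy as np

def pulse_energies(stack, t_bin_s, half_width=10):
    """Energy per pulse: sum half_width bins either side of the peak of
    the mean profile, multiplied by the bin duration in seconds."""
    peak = int(np.argmax(stack.mean(axis=0)))
    lo = max(peak - half_width, 0)
    hi = peak + half_width + 1
    return stack[:, lo:hi].sum(axis=1) * t_bin_s
```

The per-mode distributions then follow by applying the mode masks to the resulting energy array.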
\begin{figure*}
\centering
\mbox{
\subfigure[]
{\includegraphics[width=.46\linewidth,trim= 20 10 40 10 , clip=true]{EnergyDis_98T.eps}}
\quad
\subfigure[]
{\includegraphics[width=.5\linewidth,trim= 20 10 40 10 , clip=true]{LogNormNoiseEn_1400_98.eps}}
}
\caption{Pulse energy distributions for each mode and for the entire observation. (a) The distributions for the
327~MHz observation on MJD 54898 with log-normal fits. (b) The distributions for the 1400~MHz observation on MJD 54898 with noise-affected log-normal fits.}
\label{fig:EnDis}
\end{figure*}
For the 327 MHz pulse energy distributions, a log-normal probability density function (PDF) is
fitted for each mode with a maximum-likelihood estimation (mle).
From these fits, shown in the left side of Fig. \ref{fig:EnDis}, both modes are well described by different log-normal distributions
occupying different percentages $(p)$ of the overall observation. Here we again see that the bursting
pulses are more energetic, dominating the higher energies, while the normal mode pulses are concentrated
at lower energies. When these two fits are summed and compared to the overall energy distribution,
lowest left panel in Fig. \ref{fig:EnDis}, we can see that it matches very well, supporting that our indexing
separates these two modes effectively.
When the same method was applied to the 1400 MHz observation, the log-normal fits did not provide a satisfactory
fit to the data in the lower
energy range. Because of this and the overall lower energy levels, we wanted to investigate whether the noise
was affecting an underlying log-normal distribution. To incorporate this noise into a log-normal PDF, we needed to
integrate the probability of a true value ($x^\prime$) being read at another location ($x$) due to the Gaussian noise.
This results in
\begin{equation}
PDF(x)=\int_0^\infty \frac{1}{x^\prime\sqrt{2\pi\sigma^2}}\mathrm{e}^{\frac{-(\ln{x^\prime}-\mu)^2}{2\sigma^2}}\frac{1}{\sqrt{2\pi\sigma_n^2}}\mathrm{e}^{\frac{-(x-x^\prime)^2}{2\sigma_n^2}}\,\mathrm{d}x^\prime,
\label{eq:PDF}
\end{equation}
where $\mu$ and $\sigma$ are the shape parameters for the underlying log-normal and
$\sigma_n$ is the standard deviation of the Gaussian noise.
Because of the symmetry of the Gaussian, this is the same as convolving the log-normal
with a normal distribution.
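Equation~\ref{eq:PDF} can be evaluated numerically as the convolution it represents. A sketch (Python; the grid size and integration limits are illustrative choices of ours, not the authors' exact scheme):

```python
import numpy as np

def noisy_lognormal_pdf(x, mu, sigma, sigma_n, n=2000):
    """Eq. (1): log-normal(mu, sigma) blurred by Gaussian noise of width
    sigma_n, approximated by a finite-interval summation over x'."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    x_max = np.exp(mu + 8.0 * sigma)          # far log-normal tail
    xp = np.linspace(1e-9, x_max, n)          # integration grid over x'
    dxp = xp[1] - xp[0]
    lognorm = np.exp(-(np.log(xp) - mu) ** 2 / (2 * sigma ** 2)) \
        / (xp * sigma * np.sqrt(2 * np.pi))
    gauss = np.exp(-(x[:, None] - xp) ** 2 / (2 * sigma_n ** 2)) \
        / (sigma_n * np.sqrt(2 * np.pi))
    return (gauss * lognorm).sum(axis=1) * dxp
```

The resulting curve stays normalised, extends to negative energies (as noise allows), and can be plugged directly into a maximum-likelihood fit.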
To find a value for $\sigma_n$, we summed and converted
21 bins in the off-pulse region, then fitted a Gaussian to this energy distribution, shown in Fig. \ref{fig:Noise}.
We repeatedly found that the energies in the off-pulse regions were well described with $\sigma_n =6.43$ $\mu$Jy-sec.
\begin{figure}
\centering
\includegraphics[width=\linewidth,trim= 20 110 30 100 , clip=true]{OffPulseEn_98L.eps}
\caption{Off pulse energy distribution with a Gaussian fit.}
\label{fig:Noise}
\end{figure}
Unfortunately, the PDF in Eq. \ref{eq:PDF} has no analytical solution. Therefore, we
approximate this with a finite-interval summation and use these values as our new PDF for the mle fitting.
These fits matched the distribution remarkably well, now describing the whole distribution with little change
in the log-normal shape parameters, shown in the right side of Fig. \ref{fig:EnDis}.
There we can see that both modes can be described with a single energy distribution.
This supports what was seen in the previous sections, that there was little change in the amplitudes of
the pulses between the two modes. What is peculiar is that the pulse width changes between the
modes are not significant enough to be reflected in the energies. This may be because of the
noise contribution.
\subsection{Sub-pulse drifting analysis}
We now take the opportunity to revisit the issue of the presence of any sub-pulse drifts in the two
emission states of B0611+22. If sub-pulse drifts are
occurring with regular frequency, there should be a dominant peak in the Fourier transform
of each phase bin of the raw data \citep{1970Natur.228..752B,1970Natur.227..692B}.
While this analysis has been performed on this pulsar before in \cite{2006A&A...445..243W,2007A&A...469..607W} with
no features found,
the two modes have not been looked at independently. Therefore, we investigate each mode separately to see if any differences
can be seen in the frequency domain. Because we are only interested in the fluctuations of the profile,
the mean value of each phase bin is subtracted before calculating the fluctuation spectrum.
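The per-bin fluctuation spectrum is then just a Fourier transform of the mean-subtracted pulse stack along the pulse-number axis. A minimal sketch:

```python
import numpy as np

def fluctuation_spectra(stack):
    """Amplitude spectra of each phase bin versus fluctuation frequency,
    after removing each bin's mean; frequencies are in cycles per period."""
    resid = stack - stack.mean(axis=0)
    amps = np.abs(np.fft.rfft(resid, axis=0))
    freqs = np.fft.rfftfreq(stack.shape[0])
    return freqs, amps
```

A regularly drifting sub-pulse would appear here as a single dominant peak at its repetition frequency.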
\begin{figure*}
\centering
{{\includegraphics[width=\linewidth,trim= 130 45 330 50 , clip=true]{Burst1FFT.eps}}}
\caption{ (Left) The fluctuation spectra of the first bursting event at 327 MHz in Fig. \ref{fig:98TBOX}, covering 21 bins centred on the mean peak and pulse numbers 220 to 530. (Upper Right) The mean amplitude of each frequency of the \emph{Left} figure, along with the Fourier transform of a linear fit of the mean flux from the same section in Fig. \ref{fig:98TBOX}, and the mean amplitude of frequencies from an off-pulse area of the same size. (Lower Right) The square of the difference between the raw mean amplitude and the amplitudes of the linear fit and the mean off-source spectrum. Here all frequencies ($f$) are in cycles-per-period (C/P). }
\label{fig:B_FFTs}
\end{figure*}
\begin{figure*}
\centering
{{\includegraphics[width=\linewidth,trim= 130 45 330 50 , clip=true]{NormFFT.eps}}}
\caption{(Left) The fluctuation spectra of the longest normal mode at 327 MHz in Fig. \ref{fig:98TBOX}, covering 21 bins centred on the mean peak and pulse numbers 550 to 1450. (Upper Right) The mean amplitude of each frequency of the \emph{Left} figure, along with the Fourier transform of a linear fit of the mean flux from the same section in Fig. \ref{fig:98TBOX}, and the mean amplitude of frequencies from an off-pulse area of the same size. (Lower Right) The square of the difference between the raw mean amplitude and the amplitudes of the linear fit and the mean off-source spectrum. Here all frequencies ($f$) are in cycles-per-period (C/P).}
\label{fig:N_FFTs}
\end{figure*}
When investigating amplitudes of the first bursting event at 327 MHz, shown in the left hand
side of Fig. \ref{fig:B_FFTs}, it soon becomes evident that the lower frequencies are playing
a dominant role. To examine why this is, we first produce a linear fit of the mean intensities of
each pulse number. This fit is then Fourier transformed and compared to the mean amplitude
of each frequency in the data, shown in the upper right hand side of Fig. \ref{fig:B_FFTs}. We can then
see that in the low frequency range these two spectra are comparable. This suggests that these
low frequencies are a consequence of the large scale structure of the bursting envelope.
To see if any radio interference is contributing, the fluctuation spectra of an off-pulse region of the same area is
taken for comparison. The mean value of this region is plotted in the upper right hand side of
Fig. \ref{fig:B_FFTs}, where a single signal is seen at high frequency. This radio interference does not seem to
be a major contributor.
Regardless, this and the linear trend spectra are subtracted from the
mean data spectra. This difference in amplitudes is then squared to approximate how the other
components are contributing to the power, seen in the lower right of Fig. \ref{fig:B_FFTs}. There
we can see that there are no substantial single frequency peaks, suggesting that sub-pulse drifting
is not prevalent in this mode.
When the same procedure is conducted on the normal mode (Fig. \ref{fig:N_FFTs}), there is clearly no low frequency
dominance as was seen in the bursting mode. This is consistent with the structural explanation
since the normal mode should have no large-scale pattern. Again we see no prominent peaks in the amplitude
difference.
These results are consistent with what was reported in \cite{2006A&A...445..243W,2007A&A...469..607W}.
This analysis does suggest, however, that a `red noise' component in the fluctuation spectra may be a sign of
intrinsic structures in the intensity and should not always be disregarded as interstellar scintillation or
receiver fluctuations \citep{Lorimer:2005fk}.
\section{Predictions for observations at X-ray wavelengths}
Another very interesting possibility is that
PSR~B0611+22, and others like it, may be detectable as X-ray state
switching pulsars, as was recently demonstrated for the
classical mode-changing pulsar B0943+10 where non-thermal
unpulsed X-rays are observed during the bright radio emitting
phases, while an additional pulsed thermal component is present
during the radio quiet phase \citep{2013Sci...339..436H}.
A comparison between Fig.~1 of this paper and Fig.~1 of
Hermsen et al.~shows that the radio properties of PSRs~B0611+22
and B0943+10 are reversed: for B0611+22 we see a burst and
decay behavior, while for B0943+10 the bright mode gradually
increases in strength before declining rapidly to the radio-quiet
state. Correlated radio and X-ray observations would allow us to see
the energy ranges over which this pulsar is switching between
emission states. In this section, we make some testable predictions
for future radio and X-ray observations of PSR~B0611+22.
If PSR~B0611+22 behaves in a similar way to PSR~B0943+10, then we
expect a higher level of X-ray emission, accompanied by spectral
changes, during its non-bursting radio phases.
Although PSR~B0611+22 is more distant than PSR~B0943+10, its
higher spin-down energy loss rate\footnote{Following standard
practice (see e.g.~Lorimer \& Kramer 2004), we compute
the spin-down energy loss rate $\dot{E}=3.95 \times 10^{31} {\rm ergs}~{\rm s}^{-1}
\dot{P}_{-15}/P^3$ for the spin period $P$ in s and $\dot{P}_{-15}=10^{15} \dot{P}$.}
($6.2 \times 10^{34}$~ergs~s$^{-1}$ versus $1.0 \times 10^{32}$~ergs~s$^{-1}$)
provides excellent prospects for the detection of both thermal and
non-thermal X-ray emission.
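The footnoted spin-down formula and the efficiency-based flux estimate can be checked directly. In the sketch below (Python), the $P$ and $\dot{P}$ values are approximate catalogue values for B0611+22 that we insert for illustration (they are not quoted in the text); the result reproduces the quoted $\dot{E}$ and agrees with the quoted non-thermal flux to within a factor of two, the exact number depending on the adopted $\dot{P}$ and any band correction.

```python
import numpy as np

def spin_down_luminosity(p_s, p_dot):
    """Edot = 3.95e31 * Pdot_-15 / P^3 in ergs/s (footnote formula)."""
    return 3.95e31 * (p_dot * 1e15) / p_s ** 3

def nonthermal_flux(edot, efficiency, d_kpc):
    """Isotropic non-thermal X-ray flux in ergs/s/cm^2 for an assumed
    X-ray efficiency and distance."""
    d_cm = d_kpc * 3.086e21
    return efficiency * edot / (4.0 * np.pi * d_cm ** 2)

# Approximate catalogue values for B0611+22: P ~ 0.335 s, Pdot ~ 5.9e-14
edot_0611 = spin_down_luminosity(0.335, 5.9e-14)
f_nt = nonthermal_flux(edot_0611, 0.01, 4.7)
```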
To estimate the non-thermal X-ray flux expected from PSR~B0611+22,
$F_X^{\rm nt}$, we
assume an X-ray efficiency of 1\%, as inferred for PSR~B0943+10 from Hermsen
et al.~(2013), and a dispersion measure distance of 4.7~kpc and find that
$F_X^{\rm nt} = 3.8 \times 10^{-13}$~ergs~s$^{-1}$~cm$^{-2}$.
To estimate thermal X-ray flux, $F_X^{\rm th}$, we use the
recent compilation of $kT$ versus characteristic age presented in
Fig.~1 of Keane et al.~(2013) to infer $kT=120$~eV for
PSRs B0611+22. Assuming blackbody radiation from a region around the polar caps of
an $R= 10$~km radius neutron star we can estimate
the thermal X-ray flux, $F_X^{\rm th}$. For the purposes of this
calculation, we have
adopted an emitting region with a radius of 2.7~polar cap radii.\footnote{This choice
was motivated by the fact that the thermal emission observed in
B0943+10 is consistent with a circular region of radius 370~m.
The classical (see, e.g.~Lorimer \& Kramer 2005)
polar cap radius for this pulsar, $R \sqrt{2 \pi R/(Pc)}$,
is 140~m.} At the nominal dispersion measure distance of 4.7~kpc,
we find $F_X^{\rm th}=3 \times 10^{-15}$~ergs~s$^{-1}$~cm$^{-2}$.
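The classical polar cap radius from the footnote and the blackbody flux estimate can be sketched as follows (Python; the constants and the simple isotropic-flux geometry are our choices). The sketch reproduces the $\sim$140~m cap radius quoted for B0943+10 and gives a thermal flux of the same order as the value quoted above; the residual difference depends on the assumed emitting geometry and any gravitational corrections, which we neglect.

```python
import numpy as np

def polar_cap_radius_m(p_s, r_ns_m=1.0e4):
    """Classical polar cap radius R*sqrt(2*pi*R/(P*c)) in metres."""
    c_m_s = 2.998e8
    return r_ns_m * np.sqrt(2.0 * np.pi * r_ns_m / (p_s * c_m_s))

def thermal_flux(kt_ev, r_emit_m, d_kpc):
    """Blackbody flux in ergs/s/cm^2 from a circular cap of radius
    r_emit_m at temperature kT (eV), seen from d_kpc; no beaming or
    gravitational corrections."""
    sigma_sb = 5.6704e-5                 # erg cm^-2 s^-1 K^-4
    t_k = kt_ev * 1.1605e4               # 1 eV corresponds to 11605 K
    lum = sigma_sb * t_k ** 4 * np.pi * (r_emit_m * 100.0) ** 2
    d_cm = d_kpc * 3.086e21
    return lum / (4.0 * np.pi * d_cm ** 2)

r_cap_0943 = polar_cap_radius_m(1.0977)       # P of B0943+10 in s
f_th_0611 = thermal_flux(120.0, 2.7 * polar_cap_radius_m(0.335), 4.7)
```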
These anticipated X-ray fluxes translate into reasonably
high count rates with existing X-ray instruments. For example,
using the {\tt pimms} simulation package, the expected count rates
for the {\it XMM} PN instrument are approximately
0.05~s$^{-1}$ for the non-thermal emission (assuming
a power-law spectrum with a photon index of 2) and
0.001~s$^{-1}$ for the blackbody emission.
These count rates are very encouraging and translate
into detections of up to a few hundred photons per source. The
different spectral properties of the thermal and non-thermal emission,
and the simultaneous radio monitoring, will allow the discrimination
between thermal and non-thermal emission for each
emission mode. For example, assuming a 30\% bursting duty cycle for
PSR~B0611+22 and the same X-ray properties as B0943+10, our count-rate
estimates translate to detections of $\sim 20$ thermal photons plus
$\sim 1000$ non-thermal photons during the non-bursting state, and
$\sim 500$ non-thermal photons during the bursts. The number of
non-thermal photons detected may be lower if the above 1\% efficiency
assumption is an overestimate. In either case, however, the excellent
statistics provided by the thermal photons means that it should be possible
to readily distinguish between changes in the X-ray luminosity and/or
spectra in the normal and bursting radio states.
\section{Conclusion}
In summary, using a sensitive boxcar convolution method
to preserve the phase resolution,
we have found exceptional pulse patterns in the radio emissions from PSR~B0611+22
in archival data taken with the Arecibo radio telescope.
With this analysis, prolonged enhanced
emissions were discovered at 327 MHz. When
investigating the overall structure of this emission, it was shown that these are
bursting events that abruptly appear and then systematically decay.
These events appear to be superimposed upon a constant emission mode offset in pulse phase.
This was unexpected and is a completely undocumented relationship for this pulsar.
To gain further insight into these events, we searched for them at 1.4 GHz
using the same method, finding a different picture compared to 327~MHz.
At 1.4~GHz, the bursting events were no longer bright episodes, and the steady emission mode
was no longer present. At this frequency PSR~B0611+22 appears to be a mode switching pulsar,
where one mode is the decaying burst and the other mode is constant when on.
This mode switching is surprising in this pulsar, only being suggested
briefly in the past
\citep{1992msem.coll..280N,2000AAS...19713004N}. PSR B0611+22's characteristic age is also surprising,
because the majority of mode-switching pulsars are a couple of orders of magnitude older \citep{2007MNRAS.377.1383W}.
This age discrepancy may suggest that bursting pulsars are a new classification of
pulsars that are independent of nulling and moding.
To investigate these relationships, we fitted Gaussians and measured the skewness
of each convolved pulse.
These measurements allowed us to
confirm that there was a phase change between the
two modes. This was used to isolate each of the mode statistics in each frequency range.
We then showed that the parameter differences were reflected in each
mode's mean profile. These mean profiles, along with the skewness measurements,
support that the modes are confined to the same phase region and therefore the
same emission cone.
This analysis also shows that, apart from the amplitude
difference from 327 to 1400 MHz, there was also a change in the distribution of the pulse-width
parameters. To see what could be causing this distribution change, we revisited the 327~MHz
data and subtracted the constant underlying emission from the burst. When this was done,
the normal mean profile from the 327~MHz bursting event distributions became consistent with the
1.4 GHz distributions.
An investigation of the pulse energy distributions also revealed differences between the two modes.
While both modes can be described by log-normal probability density functions, at 327~MHz
the modes are described by independent functions, but at 1400~MHz both modes' energies follow a single function.
The two modes were also searched for drifting sub-pulses independently, where
no significant signals were present in any of the fluctuation spectra, supporting
\cite{2006A&A...445..243W,2007A&A...469..607W} that sub-pulses are not prevalent in this pulsar.
We have also shown that if B0611+22 acts similarly to other known pulsars, it could exhibit changes
in its X-ray luminosity between the two separate modes. PSR~B0611+22 is particularly attractive as
a potential X-ray source due to its relative youth. Correlated radio and X-ray observations
are being planned for 2014.
It is interesting
to note that PSR B0611+22 does exhibit period derivative variations
in a manner analogous to those published by Lyne et al.~(2010;
A.G.~Lyne, private communication based on Lovell radio telescope
observations). It is currently an open question as to whether these
changes are related to the bursting events we see here. Perhaps
the pulsar bursts only in one of the spin-down rate states? Our
data, sampled on only a few days, are insufficient to answer this question. Further detailed
studies are required.
Before this analysis, the curious emission properties of B0611$+$22
were not well known \citep{1992msem.coll..280N}. Because
this is the first pulsar that we have tried with this analysis, this
may be a more common occurrence than we realise. In particular
this work required the unique sensitivity of the Arecibo telescope
to reveal this behaviour. Further studies of this kind are clearly necessary.
On a more global scale, studies such as this can help draw relationships
between different types of modal and emission variations. Here it is easy to
imagine a threshold situation that can relate nulling and bursting events.
Consider, for example, similar pulsars to B0611$+$22 that are either not as bright or
are farther away such that the normal mode is undetected. In this case the
normal mode would be interpreted as being in a null state.
A similar argument was made for PSR B0656+14 in the context of a possible
connection with rotating radio transients \citep{2006ApJ...645L.149W}.
Further examples of this phenomenon will allow us to build up a more
complete picture of the complex emission phenomenology of radio pulsars.
\section*{Acknowledgments}
The Arecibo Observatory is operated by SRI
International under a cooperative agreement with the National Science
Foundation (AST-1100968), and in alliance with the Ana G.
M\'endez-Universidad Metropolitana, and the Universities Space
Research Association.
This work was supported by grants from West Virginia EPSCoR, the Research Corporation
for Scientific Advancement and the National Science Foundation
PIRE award (OISE 0968296). We
made
use of the facilities of the ATNF Pulsar Catalogue.
Computer resources used during the
later stages of this project were supported from a WV EPSCoR Challenge
Grant. DRL acknowledges support from
Oxford Astrophysics while on sabbatical leave. We thank Aris Karastergiou,
Maura McLaughlin, Andrew Lyne, Patrick Weltevrede, Ben Stappers and the
referee, Geoff Wright, for useful comments.
\section{Introduction}
Fiber-optic sensors have emerged as powerful instruments for monitoring a variety of physical phenomena~\cite{Udd2011, Krohn2014, Lee2003, Culshaw2008}. Their passive nature, high bandwidth, light weight, and immunity (in certain configurations) to electromagnetic interference have motivated the development of many commercial products, with devices including gyroscopes~\cite{Lefevre2014}, hydrophones~\cite{Dandridge2019, Cranch2003}, and electric current transducers~\cite{Ziegler2009} proving particularly popular. In applications focused on monitoring electromagnetic fields specifically, the most common approaches have relied on the Faraday and Pockels effects, which rotate an optical probe field's polarization state in proportion to the magnetic or electric field to be measured, respectively~\cite{Massey1975, Rashleigh1979, Bohnert2002, Bohnert2005}. While extremely sensitive to the fields of interest, these phase-based optical sensors are unfortunately also highly sensitive to temperature and birefringence drifts, requiring complex and expensive control systems to compensate for them. These electromagnetic sensors typically use solid state lasers optimized for fiber-optic communication systems which operate at high data rates (MHz or GHz). Below $\sim$500 Hz, these lasers generally experience $1/f$ noise which introduces increasing measurement error at low frequencies. In addition to the cost of solid state lasers, backreflected light must also be eliminated, since it significantly increases the internal noise of the laser system, thus placing tight demands on, e.g., optical connector interfaces. While the exceptional sensitivity of interferometric voltage or current sensors may justify their cost in some situations, they are well suited primarily to lab applications and industrial settings where the environment is well controlled, particularly the temperature. But for field applications and for frequencies lower than 1~kHz, such sensors are not very appropriate.
In contrast, building on some of the basic ideas leveraged in early fiber-optic sensor designs~\cite{Kissinger1967}, intensity-based sensors function by delivering light to a transducer element and collecting a return signal in a second fiber (or fibers), where the returning optical power depends directly on the physical phenomena of interest. Through appropriate design of the transducer, many phenomena can be sensed with this method, such as temperature, strain, acoustic waves, static pressure, acceleration, and vibration~\cite{Krohn1987, Bucaro2005, Bucaro2013}. Recently, these techniques have been applied specifically to current and voltage sensors designed for power systems~\cite{Lagakos2017}. However, their performance has yet to be analyzed in detail in the literature, creating a need for thorough characterization of this sensor type in the wider application space of power distribution.
In this article, we describe and test a complete fiber-optic sensing system for three-phase voltage measurements. Based on a piezoelectric transducer and a fiber-optic probe, this specific system design is found to enable high-sensitivity voltage measurements up to 3~kHz, with a strong mechanical resonance at 2~kHz. We obtain complete frequency responses for all three electrical phases, along with noise floor measurements, allowing us to estimate the sensitivity and dynamic range throughout the usable bandwidth. Finally, compensating the spectral response of the measured sensor output, we demonstrate an equalization approach for accurate reconstruction of impulses, making these sensors well suited for real-time, high-bandwidth voltage monitoring in demanding power distribution environments.
\section{Sensor Physics and Design}
\label{sec:design}
The fiber-optic intensity-modulated sensors considered here do not utilize electro- or magneto-optic effects but instead rely on a mechanical process, employing an optical probe to measure displacement of a transducer sensitive to the effect being measured. Both the sensitivity and responsivity depend on a variety of elements in transducer construction. Such sensors have extremely simple designs, require only moderate precision in fabrication and operation, demonstrate high linearity, and feature a wide dynamic range. A schematic of basic sensor operation is provided in Fig.~\ref{fig1}(a). Light in a central optical fiber propagates to the sensing element, where it is emitted from the fiber over a short distance (less than 1~mm) and bounces off of a highly reflective surface attached to an appropriate transducer. The reflected light is coupled into a fiber bundle symmetrically surrounding the emitting fiber which delivers the light to a photodetector converting optical power into an electrical signal. A force which displaces the sensing element modulates the distance between the end of the optical probe and the reflector, which in turn modulates the light power, as the amount of reflected light detected is a function of the displacement between the probe and the reflector.
\begin{figure}[tb!]
\centering\includegraphics[width=3.15in]{fig1.pdf}
\caption{Intensity-modulated fiber-optic sensor. (a) Basic principle of operation. Light is launched onto a reflective surface, the position of which is related to the quantity being sensed, and then collected by receiving fibers. (b) Voltage sensor configuration. A voltage applied across the series capacitor ($C_s$) and piezoelectric transducer (piezo) modulates the optical field that is collected by the return fibers, detected, and amplified with electro-optic circuitry (EO).}
\label{fig1}
\end{figure}
Now, because these intensity-modulated sensors rely on the mechanical displacement of a macroscopic-size reflective element, the transducer possesses natural resonance frequencies, in contrast to standard Pockels and Faraday effect phase-modulated sensors with no resonant features. In the particular system examined here, we utilize a specially designed bimorph transducer element constructed from PZT-4 piezoceramic (Navy Type I~\cite{DoD1995}) with nominal dimensions of $12\times 1.5\times0.5$~mm$^3$. This yields a fundamental cantilever resonance frequency of $\sim$2~kHz, which is confirmed by the tests in the following section. Importantly, because piezoelectric materials are so well understood and characterized, a variety of geometries could be considered, with dimensions chosen beforehand to realize a predefined resonance frequency. In general, there exists a design tradeoff between sensitivity and bandwidth: a lower fundamental resonance provides higher sensitivity, but reduced bandwidth, whereas upshifting the resonance frequency enables wider bandwidth at the cost of lower sensitivity. In this way, one can tailor the transducer to the bandwidth needs of the particular application.
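As a rough sanity check on the quoted geometry, the fundamental frequency of the bimorph can be estimated from Euler--Bernoulli cantilever theory. The material constants below are generic textbook values for PZT-4, not measured properties of this particular transducer, so this is only an order-of-magnitude sketch:

```python
import math

# Euler-Bernoulli fundamental mode of a cantilever beam:
#   f1 = (beta1^2 / (2*pi*L^2)) * sqrt(E*I / (rho*A)),  with beta1*L = 1.875.
# For a rectangular cross section, sqrt(I/A) = h / sqrt(12).

E = 66e9      # Young's modulus of PZT-4 (Pa)  -- assumed textbook value
rho = 7500.0  # density of PZT-4 (kg/m^3)      -- assumed textbook value
L = 12e-3     # free length (m), from the nominal 12 x 1.5 x 0.5 mm^3 element
h = 0.5e-3    # thickness (m)

beta1L = 1.875
f1 = (beta1L**2 / (2 * math.pi * L**2)) * (h / math.sqrt(12)) * math.sqrt(E / rho)
print(f"estimated fundamental resonance: {f1:.0f} Hz")
```

With these assumed constants the estimate lands within roughly 20\% of the observed 2~kHz; clamping conditions, electrode layers, and tip loading easily account for the difference.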
Because of the straightforward, non-interferometric design of this intensity-based fiber-optic sensor, we employ LEDs as light sources which, when compared to lasers, are significantly less expensive, have a very long lifetime (mean time to failure measured in decades for the types of LEDs we utilize), and do not suffer as strongly from low-frequency intensity noise~\cite{Rumyantsev2004}. Because they do not require an optical cavity, LEDs are also less sensitive to backreflection-induced instabilities so that conventional fiber-optic connectors (and connection techniques) can be used. In order to optimize coupling efficiency, we utilize multimode fibers for transmission and collection as well. Figure~\ref{fig1}(b) depicts in simplified form the complete optical voltage sensor configuration for a single electrical phase, a commercial-grade sensor designed and constructed at SmartSenseCom. The specific fiber probe consists of seven identical multimode fibers each with a 200~$\upmu$m diameter glass core and 230~$\upmu$m plastic cladding, with a numerical aperture of 0.37. The single transmitting fiber is surrounded by six receiving fibers distributed in a fixed geometric pattern around the longitudinal axis~\cite{Bucaro2005}. Light from an LED emitting at 850~nm is coupled into the transmitting fiber and sent to the piezoelectric transducer, the surface of which is coated with a reflective film. Any voltage $v_\mathrm{in}$ applied across the capacitive voltage divider including the transducer stretches or compresses the material, modulating the optical power coupled into the return fibers. A photodiode then converts the received optical power into electric current, which is fed into an electro-optic (EO) circuit with a net transimpedance gain of $\sim5\times10^6$ V/A for amplification and filtering, yielding the output $v_\mathrm{out}$.
To set the optimal sensor operating point, we first measure the collected optical power as a function of displacement between the fiber-optic probe tip and the piezo, controlled by a precision manual translation stage. An example curve is plotted in Fig.~\ref{fig2}(a). The reflected optical power initially increases with displacement until leveling off around 500~$\upmu$m and gradually decreasing thereafter. From these results, we set the quiescent displacement at 280~$\upmu$m, corresponding to the highest-slope region of the optical response. At this point, applying an oscillating voltage directly to the transducer terminals produces an extremely linear output from the electro-optic circuitry. Test results for a 100~Hz voltage applied to the piezo are shown in Fig.~\ref{fig2}(b); the log-log slope of the fit is $1.004 \pm 0.002$, extremely close to the ideal of unity expected for a perfect linear response.
\begin{figure}[tb!]
\centering\includegraphics[width=3.15in]{fig2.pdf}
\caption{Optical transducer characterization. (a) Collected optical power vs. fiber-probe/piezo separation. The chosen operating point at 280~$\upmu$m represents the highest slope. (b) EO readout potential at 280~$\upmu$m separation as a function of applied 100~Hz voltage. Near-ideal linear operation is obtained over these values (corresponding to peak-to-peak displacements up to $\sim$3~$\upmu$m).}
\label{fig2}
\end{figure}
In practice, the axial resolution of the optical probe is primarily limited by the noise of the readout circuitry. For the present EO configuration, this corresponds to a minimum detectable displacement on the order of 1.5~nm, or 0.05\% of our chosen maximum peak-to-peak displacement of 3~$\upmu$m (corresponding, at 100~Hz, to an applied potential of 20~V~rms), thereby giving a predicted dynamic range of $\sim$66~dB. Incidentally, as the maximum 3~$\upmu$m amplitude is barely visible on Fig.~\ref{fig2}(a), one should be able to increase the utilized displacement range several times over without significant loss of linearity, resulting in an even wider dynamic range and better resolution.
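The $\sim$66~dB figure follows directly from the ratio of the maximum peak-to-peak displacement to the minimum detectable displacement; a minimal check:

```python
import math

d_max = 3e-6    # maximum peak-to-peak displacement (m)
d_min = 1.5e-9  # minimum detectable displacement set by readout noise (m)

dynamic_range_db = 20 * math.log10(d_max / d_min)
print(f"predicted dynamic range: {dynamic_range_db:.1f} dB")  # ~66 dB
```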
In order to set the overall scaling to a desired input voltage range, i.e., define the voltage-to-displacement gain factor, we place the transducer (which itself can be modeled as a capacitor) in a capacitive voltage divider circuit. Specifically, we choose the series capacitor $C_s$ in Fig.~\ref{fig1}(b) to scale the voltage applied to the piezo down by a factor of $\sim$10 compared to the input. We note that, for $v_\mathrm{in}$ under several kilovolts, this type of capacitive division works well. Alternatively, for significantly higher voltages (hundreds of kilovolts), a suitably scaled transducer potential can be achieved by using a simple antenna that diverts a \emph{de minimis} amount of energy from the electric field close to the active (hot) wire, and in this way provides electric-field--to--voltage conversion.
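Treating the piezo as a capacitance $C_p$ in series with $C_s$, the divider ratio is $C_s/(C_s + C_p)$, independent of frequency. The component values below are hypothetical, chosen only to illustrate the $\sim$10$\times$ division:

```python
# Capacitive divider: v_piezo / v_in = Zp / (Zs + Zp) = Cs / (Cs + Cp),
# since each impedance is 1/(j*w*C) and the frequency factors cancel.
Cs = 1e-9  # series capacitor (F)   -- hypothetical value
Cp = 9e-9  # piezo capacitance (F)  -- hypothetical value

ratio = Cs / (Cs + Cp)
print(f"division ratio: {ratio:.2f}")  # 0.10, i.e. ~10x attenuation
```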
Finally, the components selected for this sensor system impart high stability with temperature. As this sensor relies on the principles of intensity modulation, it is unaffected by small temperature-induced variations in fiber path length, in contrast to phase-modulated--based sensing. Additionally, the chosen transducer ceramic, PZT-4, is extremely stable with temperature; its $d_{31}$ piezoelectric coefficient varies by less than 1\% over the entire temperature range from $-$40 to $+$80 $^\circ$C~\cite{Hooker1998}. Empirically, we have not observed significant changes in sensor response from any other effects over a range of $\sim$100 $^\circ$C, indicating this design (transducer and fiber-optic probe) should be well equipped for harsh field environments.
\section{Device Characterization}
For characterizing each voltage sensor as deployed within the complete test system, we adopt a black box view in terms of the input and output electrical connections, which include a three-phase plug input, three 0 to 5~V sensor outputs (one for each electrical phase), and a feedthrough three-phase output. We examine each phase separately in the test setup depicted schematically in Fig.~\ref{fig3}. A function generator (Stanford Research Systems DS345) drives a 10~kHz voltage amplifier (Thorlabs MDT694B) which produces the sensor test signal. Our particular amplifier is unipolar ($+$150~V~max), so that all waveforms have a 50\% dc offset; because the optical sensor detectors are capacitively coupled to reject low-frequency contributions ($<$10~Hz) and we still remain fully in the linear regime of the transducer displacement curve [Fig.~\ref{fig2}(b)], this offset has no observable effect on our test results. (Of course, dc sensing is possible by the transducer physics, but requires alternative detector amplifier and filter electronics.) We model the relationship between input potential $v_\mathrm{in}(t) = \tfrac{1}{2\pi} \int d\omega\, V_\mathrm{in}(\omega) e^{j\omega t}$ and sensor output $v_\mathrm{out}(t) = \tfrac{1}{2\pi} \int d\omega\, V_\mathrm{out}(\omega) e^{j\omega t}$ as a linear system with some (to be determined) frequency response $H(\omega)$, such that $V_\mathrm{out}(\omega)= H(\omega) V_\mathrm{in} (\omega)$, neglecting for the time being any nonlinear effects which would distort this behavior.
\begin{figure}[tb!]
\centering\includegraphics[width=3.15in]{fig3.pdf}
\caption{Test setup for a single phase. A voltage amplifier driven by a function generator produces the optical sensor input $v_\mathrm{in}$. The feedthrough $v_\mathrm{through}$ is left open, and both $v_\mathrm{in}$ and the sensor output $v_\mathrm{out}$ are recorded on an oscilloscope. All impedances are nominal values from component datasheets.}
\label{fig3}
\end{figure}
We then probe each phase with single-frequency sinewave excitations (phase to ground), so that the input signal can be modeled as $v_\mathrm{in}(t) = V_\mathrm{dc} + V_0 e^{j\omega t} + V_0^* e^{-j\omega t}$. Leaving the three-phase throughput open-circuited, we record the temporal waveforms $v_\mathrm{in} (t)$ and $v_\mathrm{out} (t)$ on a 200~MHz oscilloscope. By computing the fast Fourier transform (FFT) of both waveforms and taking the ratio of their values at the probe frequency $\omega_0$, we retrieve the complex number $H(\omega_0)=V_\mathrm{out} (\omega_0 )/V_\mathrm{in} (\omega_0)$ which, by scanning $\omega_0$ through all values of interest, allows us to map out $H(\omega)$ completely. To reduce noise, we take the average of 16 traces, and we select a scope span setting as close as possible to 50 periods, with some variation due to the discrete horizontal division steps; for any given frequency the total number of recorded periods can never drop below 28, and the sampling rate is always at least 35 points per period.
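The single-frequency transfer-function estimate can be sketched as follows. The synthetic waveforms stand in for the recorded scope traces, and the attenuation and phase values are arbitrary test numbers, not measured sensor behavior:

```python
import numpy as np

fs, f0, T = 5000.0, 100.0, 1.0  # sample rate (S/s), probe frequency (Hz), record length (s)
t = np.arange(int(fs * T)) / fs

# Stand-ins for the recorded traces: output = input scaled by 0.5, delayed 45 deg.
v_in = np.sin(2 * np.pi * f0 * t)
v_out = 0.5 * np.sin(2 * np.pi * f0 * t - np.pi / 4)

# H(w0) = V_out(w0) / V_in(w0), read off at the probe-frequency FFT bin.
V_in, V_out = np.fft.rfft(v_in), np.fft.rfft(v_out)
k = int(round(f0 * T))  # bin index of f0 (the record spans an integer number of periods)
H0 = V_out[k] / V_in[k]

print(abs(H0), np.degrees(np.angle(H0)))  # ~0.5, ~-45 deg
```

Recording an integer number of periods, as done in the measurements, avoids spectral leakage and makes the single-bin ratio exact.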
Bode plots of the frequency characterization results for all three sensors are presented in Fig.~\ref{fig4}. Apart from a slightly lower responsivity for the phase 1 sensor, all display extremely consistent behavior: an increase in amplitude until a very flat region from 10 to 500 Hz, followed by a strong resonance at 2~kHz and a steep rolloff thereafter. The $\sim$90$^\circ$ phase shift at low frequencies matches theory for a first-order high-pass filter (as does the amplitude's $\sim$20~dB/decade slope); and the sharp 180$^\circ$ drop at 2~kHz coincides with that of the universal resonance curve~\cite{Siebert1986}. If we define the effective bandwidth as comprising all frequencies with amplitudes equal to or above that of the flat region, these sensors permit useful monitoring from approximately 10 Hz to 3 kHz---or the first 50 harmonics of a 60~Hz voltage.
\begin{figure}[tb!]
\centering\includegraphics[width=3.15in]{fig4.pdf}
\caption{Measured frequency responses of three-phase optical voltage sensors. (a) Amplitude $20\log_{10}|H(\omega)|$. (b) Phase angle $\angle H(\omega)$.}
\label{fig4}
\end{figure}
As noted above, the striking resonance observed here is a key distinguishing feature of this mechanical-transducer approach to voltage sensing. Incidentally, similar resonance effects have been reported in acoustic detectors based on intensity-modulated fiber-optic sensing principles~\cite{Bucaro2005}. From the perspective of voltage monitoring, the resonance simultaneously provides exceptional sensitivity for harmonics within a specific band but also introduces nuances which must be anticipated. For example, given the maximum sinewave output level of $\sim$1.4~V rms---set by the dc offset and 0~V minimum of the sensor output (cf. Fig.~\ref{fig6})---a 5~V rms input signal at 2.08~kHz would saturate the phase 2 EO output, whereas it would take $\sim$280~V~rms at 60~Hz to cause such saturation. Moreover, even in the absence of saturation, the spectral peak still reshapes the output signal so that it is not directly proportional to the input. Yet whereas nonlinear effects such as saturation prevent recovery of the original voltage, spectral distortion is \emph{linear} and can in principle be compensated digitally---a task we visit in Sec. \ref{sec:equal}.
As with any sensing system, the sensitivity to input signals depends not only on the driven coherent response, but also on the system noise floor. In order to quantify this as well, we record the fluctuating output voltage $v_\mathrm{out} (t)$ with no signal applied to the input and estimate the periodogram, defined over the time interval $T$ as $S_T (\omega)= \tfrac{1}{T} \left| \int_0^T dt\,v_\mathrm{out} (t) e^{-j\omega t} \right|^2$, by computing the FFT of the raw samples. Repeating this process many times and averaging $S_T (\omega)$ at all frequencies returns an estimate of the true power spectral density $S(\omega)$~\cite{Papoulis2002}. To prevent aliasing in this broadband measurement, we precede the oscilloscope with a low-pass filter, using measurements with a 10~kHz lowpass filter (Thorlabs EF120) and 50~kS/s sampling to compute the noise level from 1 to 10~kHz, and a 1~kHz lowpass filter (Thorlabs EF110) and~5 kS/s to compute the noise from 10 to 1000~Hz with finer spectral resolution.
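The averaged-periodogram estimate can be sketched numerically. White Gaussian noise stands in for the detector output here, and the discrete form below corresponds to the continuous definition with $dt = 1/f_s$:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, N, n_avg = 5000.0, 1024, 128  # sample rate, record length, number of averages
sigma = 1.0                       # std dev of the stand-in noise signal

# Discrete periodogram: S_T = (1/T) |dt * FFT|^2 = |FFT|^2 / (fs * N)
psd = np.zeros(N)
for _ in range(n_avg):
    v = rng.normal(0.0, sigma, N)
    psd += np.abs(np.fft.fft(v)) ** 2 / (fs * N)
psd /= n_avg

# For white noise the two-sided PSD should be flat at sigma^2 / fs.
print(psd.mean(), sigma**2 / fs)
```

Averaging many periodograms reduces the variance of each bin, which is why 128 traces were averaged per point in the measurements.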
The single-sided noise spectra [$S_+ (\omega)=2 S(\omega); \; \omega > 0$] for all three sensors are presented in Fig.~\ref{fig5}, where each point is the average of 128 separate periodograms. Spurs at 60~Hz and 2~kHz correspond to the wall plug ac power and natural sensor resonance, respectively. Similarly, the peaks at 180, 300, and 420~Hz are consistent with the known distortion characteristics of the power delivered to our laboratory building, whose third, fifth, and seventh harmonics have all been observed as particularly strong. The origin of spurs at 92 and 148~Hz is unknown and would require further study. Integrating over the sensors' full usable bandwidth (10 to 3000 Hz) and taking the root, we find rms noise values of 1.40, 1.54, and 1.48~mV for phases 1, 2, and 3, respectively. If we define the minimum detectable input signal as that which produces an rms output equal to the noise, the frequency responses of Fig.~\ref{fig4} indicate inputs as small as 300~mV~rms at 60 Hz and 5--7 mV rms at resonance (depending on the phase) can be sensed. Coupled with the saturation limits discussed above, these sensors thus support a dynamic range of approximately 60~dB in the current configuration. This is a conservative estimate, in that it involves no additional narrowband filtering, which would significantly boost the sensitivity when, e.g., one is interested in monitoring only a specific frequency band. Importantly, the empirical dynamic range agrees reasonably well with the rough prediction of $\sim$66~dB based on the displacement sensitivity, as discussed in Sec.~\ref{sec:design}.
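These sensitivity and dynamic-range numbers follow from simple arithmetic on the quoted values. The gain magnitudes below are inferred from the saturation figures in the text (1.4~V~rms output ceiling, reached by $\sim$280~V~rms at 60~Hz or $\sim$5~V~rms at resonance) rather than read from the Bode plots directly:

```python
import math

v_out_max = 1.4     # maximum undistorted rms output (V)
noise_rms = 1.5e-3  # representative rms output noise over 10-3000 Hz (V)

# Gains inferred from the quoted saturation inputs.
gain_60hz = v_out_max / 280.0  # |H| at 60 Hz
gain_res = v_out_max / 5.0     # |H| at the 2 kHz resonance

v_min_60hz = noise_rms / gain_60hz  # minimum detectable input at 60 Hz
v_min_res = noise_rms / gain_res    # minimum detectable input at resonance
dyn_range_db = 20 * math.log10(v_out_max / noise_rms)

print(f"{v_min_60hz*1e3:.0f} mV rms at 60 Hz, {v_min_res*1e3:.1f} mV rms at resonance")
print(f"dynamic range: {dyn_range_db:.0f} dB")
```

These reproduce the 300~mV (60~Hz), few-mV (resonance), and $\sim$60~dB figures quoted above.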
\begin{figure}[tb!]
\centering\includegraphics[width=3.15in]{fig5.pdf}
\caption{Single-sided power spectral density of each sensor output, with no input signal applied.}
\label{fig5}
\end{figure}
When operating in the field, significantly stronger noise spurs could be present, due to a variety of local environmental conditions. While analog filtering or shielding techniques could certainly be applied, the impact of spurs could also be removed digitally by incorporating, into the compensation technique described below, notch filters matched to the spur frequencies found from detailed \emph{in situ} spectral characterization at the particular installation.
\section{Spectral Equalization}
\label{sec:equal}
With the frequency response $H(\omega)$ of all three sensors fully characterized, we now examine the important question of signal retrieval, i.e., recovering an accurate estimate of the input voltage given the waveform measured at the sensor output. The presence of a strong sensor resonance complicates this objective; the $>$30~dB variation in sensitivity between the flat and resonant portions of the sensor spectrum requires careful frequency matching spanning several orders of magnitude. Of course, one mitigation approach would be to simply filter out all spectral content beyond $\sim$1~kHz and restrict sensor attention to lower frequencies, yet this is undesirable, discarding potentially useful information contained within a significant portion of the sensor's response band. Accordingly, in the following we introduce a digital signal processing approach designed to recover the original input waveform up to the full bandwidth of the system.
As example test cases, we probe each sensor with short electrical pulses containing frequency content well into the resonant regime. Figure~\ref{fig6} furnishes single-shot measurements for three examples of 150~V~peak square waves, all sampled at 25~kS/s, with a wide range of durations: (a) 25~ms, applied to the phase 1 sensor; (b) 2.5~ms, applied to phase 2; and (c) 250~$\upmu$s, applied to phase 3. In all cases, the raw output exhibits significant distortion, with strong 2~kHz ringing, and fails to accurately trace the shape of the input. (Note that the output offsets of $\sim$2~V are the quiescent points of the sensor; they do not reflect dc components in the input.) Interestingly, the oscillations are particularly strong for the 250~$\upmu$s excitation; because the positive and negative pulse edges are spaced by half a period at 2~kHz, their contributions add in-phase.
In practice, simply dividing the distorted spectrum of the output, $V_\mathrm{out} (\omega)$, by $H(\omega)$ introduces error at portions where the transfer function spectrum is low, artificially amplifying frequency components to which the sensor does not actually respond. Additionally, frequency-truncated digital reconstruction of jump discontinuities, such as those present here, experience overshoot from the well-known Gibbs phenomenon. To reduce the impact of both effects, we multiply $V_\mathrm{out}(\omega)$ by an apodization function before dividing out $H(\omega)$. Specifically, we consider the product of two sinusoidal windows, one for removing low-frequency content, the other for high, defined over positive frequencies as
\begin{equation}
W_L(\omega) = \begin{cases}
\sin\frac{\pi\omega}{2\omega_L} & 0<\omega<\omega_L \\
1 & \omega_L < \omega
\end{cases}
\end{equation}
and
\begin{equation}
W_H(\omega) = \begin{cases}
\cos\frac{\pi\omega}{2\omega_H} & 0<\omega<\omega_H \\
0 & \omega_H < \omega
\end{cases}.
\end{equation}
For the low and high frequencies, we choose $\omega_L/2\pi$ = 10~Hz and $\omega_H/2\pi = 4$~kHz. We emphasize that the specific functional forms of these windows represent just one convenient option; a variety of filter selections would achieve comparable apodization performance. Our procedure for returning an estimate $\tilde{v}_\mathrm{in} (t)$ of the input signal from the raw output $v_\mathrm{out} (t)$ amounts to computing the Fourier transform $V_\mathrm{out} (\omega)$ and calculating the estimated input spectrum via
\begin{equation}
\tilde{V}_\mathrm{in} (\omega) = \frac{W_L(\omega) W_H (\omega)}{H(\omega)} V_\mathrm{out}(\omega),
\end{equation}
from which $\tilde{v}_\mathrm{in}(t)$ follows from Fourier inversion.
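The full retrieval chain (window, divide by $H$, invert) can be sketched as below. The transfer function here is a synthetic stand-in, a first-order high-pass multiplied by a 2~kHz resonance with assumed corner and quality-factor values, not the measured sensor response:

```python
import numpy as np

fs, T = 25000.0, 1.0
N = int(fs * T)
f = np.fft.rfftfreq(N, 1 / fs)

# Synthetic stand-in for H(w): first-order high-pass times a 2 kHz resonance.
f_hp, f_r, Q = 5.0, 2000.0, 20.0  # assumed values for illustration
H = (1j * f / (f_hp + 1j * f)) * f_r**2 / (f_r**2 - f**2 + 1j * f * f_r / Q)

# Apodization windows, as in the text: w_L/2pi = 10 Hz, w_H/2pi = 4 kHz.
fL, fH = 10.0, 4000.0
WL = np.where(f < fL, np.sin(np.pi * f / (2 * fL)), 1.0)
WH = np.where(f < fH, np.cos(np.pi * f / (2 * fH)), 0.0)

def equalize(v_out):
    """Estimate v_in from v_out: apodize, then divide by H where it is non-negligible."""
    V = np.fft.rfft(v_out)
    V_in_est = np.zeros_like(V)
    ok = np.abs(H) > 1e-6  # avoid dividing by ~0 outside the response band
    V_in_est[ok] = WL[ok] * WH[ok] * V[ok] / H[ok]
    return np.fft.irfft(V_in_est, n=N)

# Sanity check: a mid-band tone should come back scaled only by the windows.
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 500.0 * t)
rec = equalize(np.fft.irfft(H * np.fft.rfft(x), n=N))
print(np.max(np.abs(rec - np.cos(np.pi * 500 / (2 * fH)) * x)))  # ~0
```

Because every step is a fixed linear filter, the same chain could run in real time on an FPGA, as noted below.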
\begin{figure*}[tb!]
\centering\includegraphics[width=5.85in]{fig6.pdf}
\caption{Transient voltage tests. (a-c) Input and raw sensor output waveforms for square pulses of duration (a) 25~ms, (b) 2.5~ms, and (c) 250~$\upmu$s. (d-f) Digitally retrieved estimates of excitation pulses, compared to the actual inputs, for the cases (d) 25~ms, (e) 2.5~ms, and (f) 250~$\upmu$s.}
\label{fig6}
\end{figure*}
The waveforms retrieved in this process are given in Fig.~\ref{fig6}(d), (e), and (f), corresponding to the 25~ms, 2.5~ms, and 250~$\upmu$s excitations, respectively. We plot on a relative voltage scale here, because any dc offset is not meaningful given the sensor response. The previous 2~kHz ringing has been removed, and the reconstructions show good agreement with the true input pulses, particularly in the 25 and 2.5~ms cases. For the most extreme example of 250~$\upmu$s, we begin to see the impact of bandwidth limitations, with the retrieved waveform noticeably widened relative to the input. Yet while the retrieved amplitude in (f) is $\sim$5\% away from the actual value, were we to filter out the resonant region entirely for simpler reconstruction---e.g., by setting $\omega_H/2\pi = 1$~kHz---the retrieved pulse height would fall short of the actual by a massive 68\%, highlighting the importance of including the resonance in reconstruction. Finally, since our method utilizes linear digital filtering with fixed and known functions, it could be implemented on a field-programmable gate array (FPGA) for spectral equalization of the sensor output in real-time. In this way the sharp physical resonance presents no fundamental limitation to accurate voltage sensing, and can even be leveraged in expanding sensor bandwidth.
\section{Discussion and Conclusion}
Making use of Lorentz-force-based displacement, the fiber-optic sensing approach employed here can be applied to monitoring electric current as well~\cite{Lagakos2017}. The basic idea is to shunt a portion of the current from the main conductor into a secondary wire. Then, by taking advantage of the main conductor's intrinsic magnetic field (or that of a dedicated permanent magnet), the shunting conductor will displace in proportion to the carried current, thereby providing the mechanical movement necessary for optical probe and readout. Moreover, while here we have concentrated on voltages up to 150~V~peak, which are relatively low from a power systems perspective, the same sensing technology is scalable to much higher, kilovolt distribution and transmission levels as well, where we anticipate similar physical behavior and thus applicability of our equalization method. In future work, we plan to examine in detail both current sensing (through new experiments as well as analysis of data from past exercises) and higher voltage levels. It will prove valuable to perform tests of these intensity-modulated optical sensors in side-by-side comparisons against both conventional devices, such as potential transformers (PTs) and current transformers (CTs)~\cite{Horowitz2008}, and more complex interferometric-based optical phase/polarization sensors, particularly in demanding field environments. Finally, we anticipate comparing sensor probe/transducer combinations with differing resonance points and frequency ranges.
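The displacement mechanism for the current variant is the Lorentz force on the shunting conductor, $F = BIL$ for a straight wire perpendicular to the field, which is linear in the carried current. The field strength, shunt current, and active length below are hypothetical values for illustration only:

```python
B = 0.1   # magnetic flux density at the shunt (T)    -- hypothetical value
I = 1.0   # shunted current (A)                       -- hypothetical value
L = 0.05  # active conductor length in the field (m)  -- hypothetical value

F = B * I * L  # Lorentz force on the shunt (N), directly proportional to I
print(f"force on shunt: {F*1e3:.1f} mN")
```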
Ultimately, a low-cost solution for the system should be feasible as well. The LED, fiber, and transducer elements are naturally inexpensive, so that total system cost is dominated by the electro-optic circuity. Consequently, further investment and high-volume production should bring the cost down significantly, which we expect will make the system economically competitive with traditional PTs.
In conclusion, we have described and characterized intensity-modulated fiber-optic voltage sensors. Probing with a tunable-frequency source, we measure sensor spectra exemplified by a flat response from 10 to 500 Hz and a sharp resonance at 2~kHz. The measured background noise levels imply a full-band dynamic range of approximately 60~dB. And through a digital spectral equalization method, we have demonstrated successful reconstruction of short-pulse inputs from strongly distorted output waveforms. The simplicity and robustness of these intensity-modulated sensors offer a valuable middle ground between fully electrical sensors (low cost) and their optical interferometric replacements (high performance), and could find application in a variety of electromagnetic sensing environments.
\section*{Funding}
U.S. Department of Energy, Office of Electricity (field work proposal CETE004).
\section*{Acknowledgments}
We are grateful to B. Qi for valuable discussions and feedback. A portion of this work was performed at Oak Ridge National Laboratory, operated by UT-Battelle for the U.S. Department of Energy under contract no. DE-AC05-00OR22725.
\newpage
\section{Introduction}
Fiber-optic sensors have emerged as powerful instruments for monitoring a variety of physical phenomena~\cite{Udd2011, Krohn2014, Lee2003, Culshaw2008}. Their passive nature, high bandwidth, light weight, and immunity (in certain configurations) to electromagnetic interference have motivated the development of many commercial products, with devices including gyroscopes~\cite{Lefevre2014}, hydrophones~\cite{Dandridge2019, Cranch2003}, and electric current transducers~\cite{Ziegler2009} proving particularly popular. In applications focused on monitoring electromagnetic fields specifically, the most common approaches have relied on the Faraday and Pockels effects, which rotate an optical probe field's polarization state in proportion to the magnetic or electric field to be measured, respectively~\cite{Massey1975, Rashleigh1979, Bohnert2002, Bohnert2005}. While extremely sensitive to the fields of interest, these phase-based optical sensors are unfortunately also highly sensitive to temperature and birefringence drifts, requiring complex and expensive control systems to compensate for them. These electromagnetic sensors typically use solid state lasers optimized for fiber-optic communication systems which operate at high data rates (MHz or GHz). Below $\sim$500 Hz, these lasers generally experience $1/f$ noise which introduces increasing measurement error at low frequencies. In addition to the cost of solid state lasers, backreflected light must also be eliminated, since it significantly increases the internal noise of the laser system, thus placing tight demands on, e.g., optical connector interfaces. While the exceptional sensitivity of interferometric voltage or current sensors may justify their cost in some situations, they are well suited primarily to lab applications and industrial settings where the environment is well controlled, particularly the temperature. But for field applications and for frequencies lower than 1~kHz, such sensors are not very appropriate.
In contrast, building on some of the basic ideas leveraged in early fiber-optic sensor designs~\cite{Kissinger1967}, intensity-based sensors function by delivering light to a transducer element and collecting a return signal in a second fiber (or fibers), where the returning optical power depends directly on the physical phenomena of interest. Through appropriate design of the transducer, many phenomena can be sensed with this method, such as temperature, strain, acoustic waves, static pressure, acceleration, and vibration~\cite{Krohn1987, Bucaro2005, Bucaro2013}. Recently, these techniques have been applied specifically to current and voltage sensors designed for power systems~\cite{Lagakos2017}. But their performance has yet to be analyzed in detail in the literature, creating a need for thorough characterization of this sensor type in the wider application space of power distribution.
In this article, we describe and test a complete fiber-optic sensing system for three-phase voltage measurements. Based on a piezoelectric transducer and a fiber-optic probe, this specific system design is found to enable high-sensitivity voltage measurements up to 3~kHz, with a strong mechanical resonance at 2~kHz. We obtain complete frequency responses for all three electrical phases, along with noise floor measurements, allowing us to estimate the sensitivity and dynamic range throughout the usable bandwidth. Finally, compensating the spectral response of the measured sensor output, we demonstrate an equalization approach for accurate reconstruction of impulses, making these sensors well suited for real-time, high-bandwidth voltage monitoring in demanding power distribution environments.
\section{Sensor Physics and Design}
\label{sec:design}
The fiber-optic intensity-modulated sensors considered here do not utilize electro- or magneto-optic effects but instead rely on a mechanical process, employing an optical probe to measure displacement of a transducer sensitive to the effect being measured. Both the sensitivity and responsivity depend on a variety of elements in transducer construction. Such sensors have extremely simple designs, require only moderate precision in fabrication and operation, demonstrate high linearity, and feature a wide dynamic range. A schematic of basic sensor operation is provided in Fig.~\ref{fig1}(a). Light in a central optical fiber propagates to the sensing element, where it is emitted from the fiber over a short distance (less than 1~mm) and bounces off of a highly reflective surface attached to an appropriate transducer. The reflected light is coupled into a fiber bundle symmetrically surrounding the emitting fiber which delivers the light to a photodetector converting optical power into an electrical signal. A force which displaces the sensing element modulates the distance between the end of the optical probe and the reflector, which in turn modulates the light power, as the amount of reflected light detected is a function of the displacement between the probe and the reflector.
\begin{figure}[tb!]
\centering\includegraphics[width=3.15in]{fig1.pdf}
\caption{Intensity-modulated fiber-optic sensor. (a) Basic principle of operation. Light is launched onto a reflective surface, the position of which is related to the quantity being sensed, and then collected by receiving fibers. (b) Voltage sensor configuration. A voltage applied across the series capacitor ($C_s$) and piezoelectric transducer (piezo) modulates the optical field that is collected by the return fibers, detected, and amplified with electro-optic circuitry (EO).}
\label{fig1}
\end{figure}
Now, because these intensity-modulated sensors rely on the mechanical displacement of a macroscopic-size reflective element, the transducer possesses natural resonance frequencies, in contrast to standard Pockels and Faraday effect phase-modulated sensors with no resonant features. In the particular system examined here, we utilize a specially designed bimorph transducer element constructed from PZT-4 piezoceramic (Navy Type I~\cite{DoD1995}) with nominal dimensions of $12\times 1.5\times0.5$~mm$^3$. This yields a fundamental cantilever resonance frequency of $\sim$2~kHz, which is confirmed by the tests in the following section. Importantly, because piezoelectric materials are so well understood and characterized, a variety of geometries could be considered, with dimensions chosen beforehand to realize a predefined resonance frequency. In general, there exists a design tradeoff between sensitivity and bandwidth: a lower fundamental resonance provides higher sensitivity, but reduced bandwidth, whereas upshifting the resonance frequency enables wider bandwidth at the cost of lower sensitivity. In this way, one can tailor the transducer to the bandwidth needs of the particular application.
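The quoted $\sim$2~kHz figure can be sanity-checked against the standard Euler--Bernoulli expression for the fundamental bending mode of a uniform cantilever. The material constants below are assumed nominal PZT-4 datasheet values, not numbers given in the paper, so this is only an order-of-magnitude sketch:

```python
import math

# Assumed nominal PZT-4 properties (illustrative, not from the paper)
E = 66e9      # Young's modulus, Pa
rho = 7500.0  # density, kg/m^3

# Transducer dimensions from the text, treated as a cantilever
L = 12e-3   # free length, m
t = 0.5e-3  # thickness, m

# Fundamental bending mode of a uniform cantilever beam:
# f1 = (lambda1^2 / (2*pi*L^2)) * t * sqrt(E / (12*rho))
lam1 = 1.8751  # first-mode eigenvalue of the clamped-free beam equation
f1 = (lam1**2 / (2 * math.pi * L**2)) * t * math.sqrt(E / (12 * rho))
print(f"estimated fundamental resonance: {f1:.0f} Hz")
```

With these assumed constants the estimate lands between 1 and 2~kHz, consistent with the measured resonance to within the uncertainty of the material values and clamping geometry.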
Because of the straightforward, non-interferometric design of this intensity-based fiber-optic sensor, we employ LEDs as light sources which, when compared to lasers, are significantly less expensive, have a very long lifetime (mean time to failure measured in decades for the types of LEDs we utilize), and do not suffer as strongly from low-frequency intensity noise~\cite{Rumyantsev2004}. Because they do not require an optical cavity, LEDs are also less sensitive to backreflection-induced instabilities so that conventional fiber-optic connectors (and connection techniques) can be used. In order to optimize coupling efficiency, we utilize multimode fibers for transmission and collection as well. Figure~\ref{fig1}(b) depicts in simplified form the complete optical voltage sensor configuration for a single electrical phase, a commercial-grade sensor designed and constructed at SmartSenseCom. The specific fiber probe consists of seven identical multimode fibers each with a 200~$\upmu$m diameter glass core and 230~$\upmu$m plastic cladding, with a numerical aperture of 0.37. The single transmitting fiber is surrounded by six receiving fibers distributed in a fixed geometric pattern around the longitudinal axis~\cite{Bucaro2005}. Light from an LED emitting at 850~nm is coupled into the transmitting fiber and sent to the piezoelectric transducer, the surface of which is coated with a reflective film. Any voltage $v_\mathrm{in}$ applied across the capacitive voltage divider including the transducer stretches or compresses the material, modulating the optical power coupled into the return fibers. A photodiode then converts the received optical power into electric current, which is fed into an electro-optic (EO) circuit with a net transimpedance gain of $\sim5\times10^6$ V/A for amplification and filtering, yielding the output $v_\mathrm{out}$.
To set the optimal sensor operating point, we first measure the collected optical power as a function of displacement between the fiber-optic probe tip and the piezo, controlled by a precision manual translation stage. An example curve is plotted in Fig.~\ref{fig2}(a). The reflected optical power initially increases with displacement until leveling off around 500~$\upmu$m and gradually decreasing thereafter. From these results, we set the quiescent displacement at 280~$\upmu$m, corresponding to the highest-slope region of the optical response. At this point, applying an oscillating voltage directly to the transducer terminals produces an extremely linear output from the electro-optic circuitry. Test results for a 100~Hz voltage applied to the piezo are shown in Fig.~\ref{fig2}(b); the log-log slope of the fit is $1.004 \pm 0.002$, extremely close to the ideal of unity expected for a perfect linear response.
\begin{figure}[tb!]
\centering\includegraphics[width=3.15in]{fig2.pdf}
\caption{Optical transducer characterization. (a) Collected optical power vs. fiber-probe/piezo separation. The chosen operating point at 280~$\upmu$m represents the highest slope. (b) EO readout potential at 280~$\upmu$m separation as a function of applied 100~Hz voltage. Near-ideal linear operation is obtained over these values (corresponding to peak-to-peak displacements up to $\sim$3~$\upmu$m).}
\label{fig2}
\end{figure}
In practice, the axial resolution of the optical probe is primarily limited by the noise of the readout circuitry. For the present EO configuration, this corresponds to a minimum detectable displacement on the order of 1.5~nm, or 0.05\% of our chosen maximum peak-to-peak displacement of 3~$\upmu$m (corresponding, at 100~Hz, to an applied potential of 20~V~rms), thereby giving a predicted dynamic range of $\sim$66~dB. Incidentally, as the maximum 3~$\upmu$m amplitude is barely visible on Fig.~\ref{fig2}(a), one should be able to increase the utilized displacement range several times over without significant loss of linearity, resulting in an even wider dynamic range and better resolution.
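The quoted $\sim$66~dB follows directly from the ratio of the maximum peak-to-peak displacement to the minimum detectable displacement:

```python
import math

d_max = 3e-6    # maximum peak-to-peak displacement, m
d_min = 1.5e-9  # minimum detectable displacement, m

dynamic_range_db = 20 * math.log10(d_max / d_min)
print(f"{dynamic_range_db:.1f} dB")  # ~66 dB
```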
In order to set the overall scaling to a desired input voltage range, i.e., define the voltage-to-displacement gain factor, we place the transducer (which itself can be modeled as a capacitor) in a capacitive voltage divider circuit. Specifically, we choose the series capacitor $C_s$ in Fig.~\ref{fig1}(b) to scale the voltage applied to the piezo down by a factor of $\sim$10 compared to the input. We note that, for $v_\mathrm{in}$ under several kilovolts, this type of capacitive division works well. Alternatively, for significantly higher voltages (hundreds of kilovolts), a suitably scaled transducer potential can be achieved by using a simple antenna that diverts a \emph{de minimis} amount of energy from the electric field close to the active (hot) wire, and in this way provides electric-field--to--voltage conversion.
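In the ac steady state, the fraction of the input appearing across the piezo is set by the capacitance ratio of the divider. The paper gives only the $\sim$10$\times$ division factor, so the capacitance values below are purely illustrative:

```python
def divider_ratio(c_series, c_piezo):
    """Fraction of the input voltage appearing across the piezo
    when it sits below a series capacitor (ac steady state)."""
    return c_series / (c_series + c_piezo)

# Hypothetical values chosen only to reproduce the ~10x division
c_s = 10e-9   # series capacitor, F
c_p = 90e-9   # effective piezo capacitance, F
print(divider_ratio(c_s, c_p))  # 0.1 -> input scaled down by ~10x
```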
Finally, the components selected for this sensor system impart high stability with temperature. As this sensor relies on the principles of intensity modulation, it is unaffected by small temperature-induced variations in fiber path length, in contrast to phase-modulation--based sensing. Additionally, the chosen transducer ceramic, PZT-4, is extremely stable with temperature; its $d_{31}$ piezoelectric coefficient varies by less than 1\% over the entire temperature range from $-$40 to $+$80 $^\circ$C~\cite{Hooker1998}. Empirically, we have not observed significant changes in sensor response from any other effects over a range of $\sim$100 $^\circ$C, indicating this design (transducer and fiber-optic probe) should be well equipped for harsh field environments.
\section{Device Characterization}
For characterizing each voltage sensor as deployed within the complete test system, we adopt a black box view in terms of the input and output electrical connections, which include a three-phase plug input, three 0 to 5~V sensor outputs (one for each electrical phase), and a feedthrough three-phase output. We examine each phase separately in the test setup depicted schematically in Fig.~\ref{fig3}. A function generator (Stanford Research Systems DS345) drives a 10~kHz voltage amplifier (Thorlabs MDT694B) which produces the sensor test signal. Our particular amplifier is unipolar ($+$150~V~max), so that all waveforms have a 50\% dc offset; because the optical sensor detectors are capacitively coupled to reject low-frequency contributions ($<$10~Hz) and we still remain fully in the linear regime of the transducer displacement curve [Fig.~\ref{fig2}(b)], this offset has no observable effect on our test results. (Of course, dc sensing is possible by the transducer physics, but requires alternative detector amplifier and filter electronics.) We model the relationship between input potential $v_\mathrm{in}(t) = \tfrac{1}{2\pi} \int d\omega\, V_\mathrm{in}(\omega) e^{j\omega t}$ and sensor output $v_\mathrm{out}(t) = \tfrac{1}{2\pi} \int d\omega\, V_\mathrm{out}(\omega) e^{j\omega t}$ as a linear system with some (to be determined) frequency response $H(\omega)$, such that $V_\mathrm{out}(\omega)= H(\omega) V_\mathrm{in} (\omega)$, neglecting for the time being any nonlinear effects which would distort this behavior.
\begin{figure}[tb!]
\centering\includegraphics[width=3.15in]{fig3.pdf}
\caption{Test setup for a single phase. A voltage amplifier driven by a function generator produces the optical sensor input $v_\mathrm{in}$. The feedthrough $v_\mathrm{through}$ is left open, and both $v_\mathrm{in}$ and the sensor output $v_\mathrm{out}$ are recorded on an oscilloscope. All impedances are nominal values from component datasheets.}
\label{fig3}
\end{figure}
We then probe each phase with single-frequency sinewave excitations (phase to ground), so that the input signal can be modeled as $v_\mathrm{in}(t) = V_\mathrm{dc} + V_0 e^{j\omega t} + V_0^* e^{-j\omega t}$. Leaving the three-phase throughput open-circuited, we record the temporal waveforms $v_\mathrm{in} (t)$ and $v_\mathrm{out} (t)$ on a 200~MHz oscilloscope. By computing the fast Fourier transform (FFT) of both waveforms and taking the ratio of their values at the probe frequency $\omega_0$, we retrieve the complex number $H(\omega_0)=V_\mathrm{out} (\omega_0 )/V_\mathrm{in} (\omega_0)$ which, by scanning $\omega_0$ through all values of interest, allows us to map out $H(\omega)$ completely. To reduce noise, we take the average of 16 traces, and we select a scope span setting as close as possible to 50 periods, with some variation due to the discrete horizontal division steps; for any given frequency the total number of recorded periods can never drop below 28, and the sampling rate is always at least 35 points per period.
Bode plots of the frequency characterization results for all three sensors are presented in Fig.~\ref{fig4}. Apart from a slightly lower responsivity for the phase 1 sensor, all display extremely consistent behavior: an increase in amplitude until a very flat region from 10 to 500 Hz, followed by a strong resonance at 2~kHz and a steep rolloff thereafter. The $\sim$90$^\circ$ phase shift at low frequencies matches theory for a first-order high-pass filter (as does the amplitude's $\sim$20~dB/decade slope); and the sharp 180$^\circ$ drop at 2~kHz coincides with that of the universal resonance curve~\cite{Siebert1986}. If we define the effective bandwidth as comprising all frequencies with amplitudes equal to or above that of the flat region, these sensors permit useful monitoring from approximately 10 Hz to 3 kHz---or the first 50 harmonics of a 60~Hz voltage.
\begin{figure}[tb!]
\centering\includegraphics[width=3.15in]{fig4.pdf}
\caption{Measured frequency responses of three-phase optical voltage sensors. (a) Amplitude $20\log_{10}|H(\omega)|$. (b) Phase angle $\angle H(\omega)$.}
\label{fig4}
\end{figure}
As noted above, the striking resonance observed here is a key distinguishing feature of this mechanical-transducer approach to voltage sensing. Incidentally, similar resonance effects have been reported in acoustic detectors based on intensity-modulated fiber-optic sensing principles~\cite{Bucaro2005}. From the perspective of voltage monitoring, the resonance simultaneously provides exceptional sensitivity for harmonics within a specific band but also introduces nuances which must be anticipated. For example, given the maximum sinewave output level of $\sim$1.4~V rms---set by the dc offset and 0~V minimum of the sensor output (cf. Fig.~\ref{fig6})---a 5~V rms input signal at 2.08~kHz would saturate the phase 2 EO output, whereas it would take $\sim$280~V~rms at 60~Hz to cause such saturation. Moreover, even in the absence of saturation, the spectral peak still reshapes the output signal so that it is not directly proportional to the input. Yet whereas nonlinear effects such as saturation prevent recovery of the original voltage, spectral distortion is \emph{linear} and can in principle be compensated digitally---a task we visit in Sec. \ref{sec:equal}.
As with any sensing system, the sensitivity to input signals depends not only on the driven coherent response, but also on the system noise floor. In order to quantify this as well, we record the fluctuating output voltage $v_\mathrm{out} (t)$ with no signal applied to the input and estimate the periodogram, defined over the time interval $T$ as $S_T (\omega)= \tfrac{1}{T} \left| \int_0^T dt\,v_\mathrm{out} (t) e^{-j\omega t} \right|^2$, by computing the FFT of the raw samples. Repeating this process many times and averaging $S_T (\omega)$ at all frequencies returns an estimate of the true power spectral density $S(\omega)$~\cite{Papoulis2002}. To prevent aliasing in this broadband measurement, we precede the oscilloscope with a low-pass filter, using measurements with a 10~kHz lowpass filter (Thorlabs EF120) and 50~kS/s sampling to compute the noise level from 1 to 10~kHz, and a 1~kHz lowpass filter (Thorlabs EF110) and~5 kS/s to compute the noise from 10 to 1000~Hz with finer spectral resolution.
The single-sided noise spectra [$S_+ (\omega)=2 S(\omega); \; \omega > 0$] for all three sensors are presented in Fig.~\ref{fig5}, where each point is the average of 128 separate periodograms. Spurs at 60~Hz and 2~kHz correspond to the wall plug ac power and natural sensor resonance, respectively. Similarly, the peaks at 180, 300, and 420~Hz are consistent with the known distortion characteristics of the power delivered to our laboratory building, whose third, fifth, and seventh harmonics have all been observed as particularly strong. The origin of spurs at 92 and 148~Hz is unknown and would require further study. Integrating over the sensors' full usable bandwidth (10 to 3000 Hz) and taking the root, we find rms noise values of 1.40, 1.54, and 1.48~mV for phases 1, 2, and 3, respectively. If we define the minimum detectable input signal as that which produces an rms output equal to the noise, the frequency responses of Fig.~\ref{fig4} indicate inputs as small as 300~mV~rms at 60 Hz and 5--7 mV rms at resonance (depending on the phase) can be sensed. Coupled with the saturation limits discussed above, these sensors thus support a dynamic range of approximately 60~dB in the current configuration. This is a conservative estimate, in that it involves no additional narrowband filtering, which would significantly boost the sensitivity when, e.g., one is interested in monitoring only a specific frequency band. Importantly, the empirical dynamic range agrees reasonably well with the rough prediction of $\sim$66~dB based on the displacement sensitivity, as discussed in Sec.~\ref{sec:design}.
\begin{figure}[tb!]
\centering\includegraphics[width=3.15in]{fig5.pdf}
\caption{Single-sided power spectral density of each sensor output, with no input signal applied.}
\label{fig5}
\end{figure}
When operating in the field, significantly stronger noise spurs could be present, due to a variety of local environmental conditions. While analog filtering or shielding techniques could certainly be applied, the impact of spurs could also be removed digitally by incorporating, into the compensation technique described below, notch filters matched to the spur frequencies found from detailed \emph{in situ} spectral characterization at the particular installation.
\section{Spectral Equalization}
\label{sec:equal}
With the frequency response $H(\omega)$ of all three sensors fully characterized, we now examine the important question of signal retrieval, i.e., recovering an accurate estimate of the input voltage given the waveform measured at the sensor output. The presence of a strong sensor resonance complicates this objective; the $>$30~dB variation in sensitivity between the flat and resonant portions of the sensor spectrum requires careful frequency matching spanning several orders of magnitude. Of course, one mitigation approach would be to simply filter out all spectral content beyond $\sim$1~kHz and restrict sensor attention to lower frequencies, yet this is undesirable, discarding potentially useful information contained within a significant portion of the sensor's response band. Accordingly, in the following we introduce a digital signal processing approach designed to recover the original input waveform up to the full bandwidth of the system.
As example test cases, we probe each sensor with short electrical pulses containing frequency content well into the resonant regime. Figure~\ref{fig6} furnishes single-shot measurements for three examples of 150~V~peak square waves, all sampled at 25~kS/s, with a wide range of durations: (a) 25~ms, applied to the phase 1 sensor; (b) 2.5~ms, applied to phase 2; and (c) 250~$\upmu$s, applied to phase 3. In all cases, the raw output exhibits significant distortion, with strong 2~kHz ringing, and fails to accurately trace the shape of the input. (Note that the output offsets of $\sim$2~V are the quiescent points of the sensor; they do not reflect dc components in the input.) Interestingly, the oscillations are particularly strong for the 250~$\upmu$s excitation; because the positive and negative pulse edges are spaced by half a period at 2~kHz, their contributions add in-phase.
In practice, simply dividing the distorted spectrum of the output, $V_\mathrm{out} (\omega)$, by $H(\omega)$ introduces error at portions where the transfer function spectrum is low, artificially amplifying frequency components to which the sensor does not actually respond. Additionally, frequency-truncated digital reconstruction of jump discontinuities, such as those present here, experience overshoot from the well-known Gibbs phenomenon. To reduce the impact of both effects, we multiply $V_\mathrm{out}(\omega)$ by an apodization function before dividing out $H(\omega)$. Specifically, we consider the product of two sinusoidal windows, one for removing low-frequency content, the other for high, defined over positive frequencies as
\begin{equation}
W_L(\omega) = \begin{cases}
\sin\frac{\pi\omega}{2\omega_L} & 0<\omega<\omega_L \\
1 & \omega_L < \omega
\end{cases}
\end{equation}
and
\begin{equation}
W_H(\omega) = \begin{cases}
\cos\frac{\pi\omega}{2\omega_H} & 0<\omega<\omega_H \\
0 & \omega_H < \omega
\end{cases}.
\end{equation}
For the low and high frequencies, we choose $\omega_L/2\pi$ = 10~Hz and $\omega_H/2\pi = 4$~kHz. We emphasize that the specific functional forms of these windows represent just one convenient option; a variety of filter selections would achieve comparable apodization performance. Our procedure for returning an estimate $\tilde{v}_\mathrm{in} (t)$ of the input signal from the raw output $v_\mathrm{out} (t)$ amounts to computing the Fourier transform $V_\mathrm{out} (\omega)$ and calculating the estimated input spectrum via
\begin{equation}
\tilde{V}_\mathrm{in} (\omega) = \frac{W_L(\omega) W_H (\omega)}{H(\omega)} V_\mathrm{out}(\omega),
\end{equation}
from which $\tilde{v}_\mathrm{in}(t)$ follows from Fourier inversion.
\begin{figure*}[tb!]
\centering\includegraphics[width=5.85in]{fig6.pdf}
\caption{Transient voltage tests. (a-c) Input and raw sensor output waveforms for square pulses of duration (a) 25~ms, (b) 2.5~ms, and (c) 250~$\upmu$s. (d-f) Digitally retrieved estimates of excitation pulses, compared to the actual inputs, for the cases (d) 25~ms, (e) 2.5~ms, and (f) 250~$\upmu$s.}
\label{fig6}
\end{figure*}
The waveforms retrieved in this process are given in Fig.~\ref{fig6}(d), (e), and (f), corresponding to the 25~ms, 2.5~ms, and 250~$\upmu$s excitations, respectively. We plot on a relative voltage scale here, because any dc offset is not meaningful given the sensor response. The previous 2~kHz ringing has been removed, and the reconstructions show good agreement with the true input pulses, particularly in the 25 and 2.5~ms cases. For the most extreme example of 250~$\upmu$s, we begin to see the impact of bandwidth limitations, with the retrieved waveform noticeably widened relative to the input. Yet while the retrieved amplitude in (f) is $\sim$5\% away from the actual value, were we to filter out the resonant region entirely for simpler reconstruction---e.g., by setting $\omega_H/2\pi = 1$~kHz---the retrieved pulse height would fall short of the actual by a massive 68\%, highlighting the importance of including the resonance in reconstruction. Finally, since our method utilizes linear digital filtering with fixed and known functions, it could be implemented on a field-programmable gate array (FPGA) for spectral equalization of the sensor output in real-time. In this way the sharp physical resonance presents no fundamental limitation to accurate voltage sensing, and can even be leveraged in expanding sensor bandwidth.
\section{Discussion and Conclusion}
Making use of Lorentz-force-based displacement, the fiber-optic sensing approach employed here can be applied to monitoring electric current as well~\cite{Lagakos2017}. The basic idea is to shunt a portion of the current from the main conductor into a secondary wire. Then, by taking advantage of the main conductor's intrinsic magnetic field (or that of a dedicated permanent magnet), the shunting conductor will displace in proportion to the carried current, thereby providing the mechanical movement necessary for optical probe and readout. We plan to conduct additional inquiries into the use of this approach to measure current, including new experiments as well as analyzing data from past exercises. Moreover, while here we have concentrated on voltages up to 150~V~peak, which are relatively low from a power systems perspective, the same sensing technology is scalable to much higher, kilovolt distribution and transmission levels as well, where we anticipate similar physical behavior and thus applicability of our equalization method. We plan to examine in detail both current sensors and higher voltage levels in future work. It will prove valuable to perform tests of these intensity-modulated optical sensors in side-by-side comparisons against both conventional devices, such as potential transformers (PTs) and current transformers (CTs)~\cite{Horowitz2008}, and more complex interferometric-based optical phase/polarity sensors---particularly undertaking such comparisons in demanding field environments. Finally, we anticipate comparing sensor probe/transducer combinations with differing resonance points and frequency ranges.
Ultimately, a low-cost solution for the system should be feasible as well. The LED, fiber, and transducer elements are naturally inexpensive, so that total system cost is dominated by the electro-optic circuitry. Consequently, further investment and high-volume production should bring the cost down significantly, which we expect will make the system economically competitive with traditional PTs.
In conclusion, we have described and characterized intensity-modulated fiber-optic voltage sensors. Probing with a tunable-frequency source, we measure sensor spectra exemplified by a flat response from 10 to 500 Hz and a sharp resonance at 2~kHz. The measured background noise levels imply a full-band dynamic range of approximately 60~dB. And through a digital spectral equalization method, we have demonstrated successful reconstruction of short-pulse inputs from strongly distorted output waveforms. The simplicity and robustness of these intensity-modulated sensors offer a valuable balance between the advantage tradeoffs of fully electrical sensors (low cost) and their optical interferometric replacements (high performance), and could find application in a variety of electromagnetic sensing environments.
\section*{Funding}
U.S. Department of Energy, Office of Electricity (field work proposal CETE004).
\section*{Acknowledgments}
We are grateful to B. Qi for valuable discussions and feedback. A portion of this work was performed at Oak Ridge National Laboratory, operated by UT-Battelle for the U.S. Department of Energy under contract no. DE-AC05-00OR22725.
\newpage
\section{Introduction}
The Internet Archive has made great efforts to capture and archive much of the web, allowing anyone to have access to prior states of web pages. We implicitly trust the archived content delivered by the Internet Archive (IA)\footnote{\url{https://archive.org}}, but with the current trend of extended use of other public and private web archives, we should consider the question of validity of archived web pages. For example, if a web page is archived in 1999 and replayed in 2017, how do we know that it has not been tampered with during those 18 years?
When replaying the same archived web page in a web browser at different points in time, a user should be presented with the same content. Figure \ref{fig:text_chnage} shows an archived web page, or memento\footnote{A memento is an archived version of an original web page \cite{memento:rfc}.}, captured by a private web archive, ``Michael's Evil Wayback''\footnote{We established this archive to demonstrate different scenarios in this paper.}, on July 17, 2017 at 18:51 GMT. This memento is a copy of the original web page
\begin{center}
\url{https://climate.nasa.gov/vital-signs/carbon-dioxide/}
\end{center}
Figures \ref{img:11011_org} and \ref{img:11011_text} demonstrate an unexpected result --- when replaying the memento in August 2017, the level of $CO_{2}$ (or carbon dioxide in the Earth's atmosphere) was $406.31$ $ppm$, but when revisiting the same archived page in October 2017, $CO_{2}$ became $270.31$ $ppm$. So which one is the ``real'' archived page? How can we identify whether the content, sent by the archive as a response to the most recent request, has not been tampered with? In this paper, we consider the implications of using trusted timestamping to validate archived web pages.
\begin{figure*}
\centering
\subfigure[Accessing the archived page in August 2017 ($CO_2$ was $406.31$ $ppm$)]{
\setlength{\fboxsep}{0pt}%
\fbox{
\includegraphics[scale=0.15]{img/11011org}
}
\label{img:11011_org}
}
\subfigure[Accessing the same archived page in October 2017 ($CO_2$ became $270.31$ $ppm$)]{
\setlength{\fboxsep}{0pt}%
\fbox{
\includegraphics[scale=0.15]{img/11011text}
}
\label{img:11011_text}
}
\caption{A change is made in an archived page. Which one is the real archived page? }
\label{fig:text_chnage}
\end{figure*}
Timestamping is recording the date and time of when an event occurs. For example, the HTTP Response headers ``Date'' and ``Last-Modified'' are examples of timestamps referring to different events --- ``Date'' indicates when a server generated a response message, while ``Last-Modified'' is the datetime of when the resource was last modified. A ``trusted'' timestamp is a timestamp initially created and verified by a third-party trustworthy service. Blockchain-based networks (e.g., Bitcoin \cite{nakamoto2008bitcoin,wright2015decentralized}) have been receiving increased attention recently as trustworthy systems for initiating and validating timestamps of digital documents. Once a file is timestamped in the blockchain, anyone should be able to prove the existence of the file at a particular point in time.
In this paper, we first show that state-of-the-art timestamping services in blockchain-based networks do not allow users to submit URIs of web pages in order to establish trusted timestamps of these types of documents. Second, we discuss some difficulties in timestamping archived web pages (i.e., mementos) even if these services start accepting URIs.
Generating a hash value on the content of a memento is one of the crucial parts in the process of timestamping the memento. As shown in Figure \ref{img:basic_timestamping}, the archived web page will not be directly timestamped in the blockchain. Instead, a hash value calculated on the content of the memento is the data to be timestamped. Thus, it is important to be able to reproduce the same hash of a particular archived web page over time. As the number of public and private web archives is increasing \cite{Kim_Nowviskie_Graham_Quon_Alliance_2017,costa2017}, we can no longer have the same level of trust in content delivered by different archives (e.g., content was tampered with in Michael's Evil Wayback as Figure \ref{fig:text_chnage} shows). It becomes essential to develop a mechanism for creating and verifying timestamps of archived web pages.
\begin{figure}[h!]
\centering
\includegraphics[width=95mm]{img/webpage_to_hash.pdf}
\caption{Timestamping a hash value that summarizes a memento in the blockchain.}
\label{img:basic_timestamping}
\end{figure}
\section{Background}
\subsection{Crawling and Replaying Archived Web Pages}
\hfill\\
In order to automatically collect portions of the web, some web archives employ web crawling software, such as the Internet Archive's Heritrix \cite{sigurdhsson2010incremental,mohr2004introduction}. Having a set of seed URIs placed in a queue, Heritrix will start by fetching web pages identified by those URIs, and each time a web page is downloaded, Heritrix writes the page to a WARC file \cite{kunze2006warc}, extracts any URIs from the page, places those discovered URIs in the queue, and repeats the process.
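The frontier logic described above can be sketched in a few lines of shell. This is a simplified illustration, not Heritrix's implementation: the hypothetical fetch_links helper stands in for downloading a page and extracting its URIs (here each ``page'' is a local file listing one outlink per line), and a real crawler adds politeness rules, robots.txt handling, and WARC writing.

```shell
# Sketch of a Heritrix-style crawl frontier in POSIX sh (hypothetical
# helpers; a real crawler fetches over HTTP, obeys robots.txt, and
# writes each response to a WARC file).

# fetch_links URI  -- stand-in for "download the page, extract its URIs".
# Here each "page" is a local file under pages/ with one outlink per line.
fetch_links() { cat "pages/$1" 2>/dev/null; }

crawl() {
  queue=$*                     # seed URIs, space-delimited
  seen=" "                     # set of already-crawled URIs
  order=""                     # resulting crawl order
  while [ -n "$queue" ]; do
    uri=${queue%% *}           # dequeue the first URI
    case "$queue" in *" "*) queue=${queue#* } ;; *) queue="" ;; esac
    case "$seen" in *" $uri "*) continue ;; esac   # skip duplicates
    seen="$seen$uri "
    order="$order${order:+ }$uri"
    for link in $(fetch_links "$uri"); do          # enqueue discovered URIs
      queue="$queue${queue:+ }$link"
    done
  done
  echo "$order"
}

# Demo: three tiny local "pages" standing in for downloaded web pages.
mkdir -p pages
printf 'b\nc\n' > pages/a
printf 'c\n'    > pages/b
: > pages/c
crawl a
# -> a b c
```

Duplicate URIs are filtered at dequeue time, so a link discovered twice (here, "c") is still fetched only once.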
The crawling process will result in a set of archived pages. The Internet Archive, for example, collects over one billion web pages per week \cite{elizabethshockman2016}, and as of today, it contains 585 billion web pages \cite{iaurils}. To provide access to their archived pages, many web archives which use OpenWayback \cite{openwayback}, the open-source implementation of IA's Wayback Machine, allow users to query the archive by submitting a URI. OpenWayback will replay the content of any selected archived web page in the browser. One of the main tasks of OpenWayback is to ensure that when replaying a web page from an archive, all resources that are used to construct the page (e.g., images, style sheets, and JavaScript files) should be retrieved from the archive, not from the live web. Thus, at the time of replaying the page, OpenWayback will rewrite all links to those resources to point directly to the archive \cite{tofel2007wayback}. In addition to OpenWayback, PyWb \cite{pywb} is another tool for replaying archived web pages. It is used by Perma \cite{perma} and Webrecorder \cite{webrecorder}.
\subsection{Memento}
\hfill\\
Memento \cite{nelson:memento:tr,memento:rfc} is an HTTP protocol extension that uses time as a new dimension to access the web by relating the current web resources to their prior states. The Memento protocol is supported by most public web archives including the Internet Archive. The protocol introduces two HTTP headers for content negotiation. First, Accept-Datetime is an HTTP Request header through which a client can request a prior state of a web resource by providing the preferred datetime (e.g., \emph{Accept-Datetime: Mon, 09 Jan 2017 11:21:57 GMT}). Second, the Memento-Datetime HTTP Response header is sent by a server to indicate the datetime at which the resource was captured. The Memento protocol also defines the following terminology:
\begin{itemize}
\item[--] {URI-R - to identify an original resource from the live Web}
\item[--] {URI-M - to identify an archived version (memento) of the original resource at a particular point in time}
\item[--] {URI-T - a resource (TimeMap) that provides a list of mementos (URI-Ms) for a particular original resource}
\item[--] {URI-G - a resource (TimeGate) that supports content negotiation based on datetime to access prior versions of an original resource}
\end{itemize}
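To make this terminology concrete, the sketch below assembles a URI-M from a URI-R and a 14-digit capture datetime following the Wayback-style URI pattern, and shows in a comment what a datetime-negotiated TimeGate request would look like. The archive prefix is an example; other archives use different URI layouts.

```shell
# make_urim PREFIX TIMESTAMP URIR  -- assemble a Wayback-style URI-M from
# an archive replay prefix, a 14-digit Memento-Datetime (YYYYMMDDHHMMSS),
# and the URI-R.
make_urim() {
  echo "$1/$2/$3"
}

make_urim 'https://web.archive.org/web' 20170717185130 \
          'https://climate.nasa.gov/vital-signs/carbon-dioxide/'
# -> https://web.archive.org/web/20170717185130/https://climate.nasa.gov/vital-signs/carbon-dioxide/

# A TimeGate (URI-G) request negotiates on time instead; with curl it
# would look like this (not executed here):
#   curl -H 'Accept-Datetime: Mon, 09 Jan 2017 11:21:57 GMT' \
#        https://web.archive.org/web/https://climate.nasa.gov/
```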
\subsection{The Bitcoin Blockchain} \label{bitcoin_blockchain}
Bitcoin \cite{nakamoto2008bitcoin} is a peer-to-peer electronic cash system built using blockchain technology \cite{wright2015decentralized}. A ledger that contains all transactions in Bitcoin is duplicated across all nodes in the network (i.e., there is no central agency). The timestamp associated with each transaction indicates when the transaction was accepted into the Bitcoin network. Services, such as OriginStamp\footnote{\url{https://originstamp.org}}, Chainpoint\footnote{\url{https://chainpoint.org/}}, and OpenTimestamps\footnote{\url{https://opentimestamps.org/}}, generate trusted timestamps in Bitcoin for digital documents. Even though the timestamping steps might vary from one service to another, they follow a common procedure:
\begin{enumerate}
\item Receiving a file, a hash, or plain text from a user
\item Generating a hash value of received content
\item Converting the hash to a Bitcoin address
\item Issuing a Bitcoin transaction using the Bitcoin address as a money sender or receiver
\end{enumerate}
To verify a timestamp in Bitcoin at any point in the future, the first three steps mentioned above are performed. The fourth step then consists of issuing a query through the Bitcoin API to obtain information about any transactions on the given Bitcoin address. We consider the timestamp associated with the Bitcoin transaction a trusted timestamp. Being incorruptible is the key characteristic of Bitcoin, as any change in a transaction or a block requires computational power that exceeds that of the rest of the network, which is theoretically possible but practically unlikely. The other important feature of Bitcoin is the decentralization of the distributed ledger, which contains all transactions ever made in Bitcoin and is duplicated across all nodes.
\section{Related Work}
Some existing work focuses on exploring security issues in web archives. Archived web pages, similar to live web pages, might change over time for different reasons, such as software or hardware upgrades, the fact that complex technologies are involved in developing web pages, and malicious attacks. For more critical archived web resources (e.g., documents acknowledged as evidence in courts), it is important to find a way to validate content served by a web archive \cite{eltgrowth2009best}.
Lerner et al. \cite{lerner2017rewriting} discovered four vulnerabilities in the Internet Archive's Wayback Machine (i.e., Archive-Escapes, Same-Origin Escapes, Archive-Escapes + Same-Origin Escapes, and Anachronism-Injection) that attackers can leverage to modify a user's view at the time when a memento is rendered in a browser. The authors suggested some defenses that could be deployed by either web archives or web publishers to prevent abusing these vulnerabilities.
Other web archiving researchers created a shared repository in May 2017 maintained by the Harvard Library Innovation Laboratory. They use this platform to discuss potential threats in web archives. Threats would include, for instance, controlling a user's account due to Cross-Site Request Forgery (CSRF) or Cross-Site Scripting (XSS), and archived web resources reaching out to the live web. The authors provide recommendations on how to avoid such threats \cite{warcgamesgithub2017,cushman2017}.
Eltgrowth \cite{eltgrowth2009best} outlines several judicial decisions that involve evidence (i.e., archived web pages) taken from the Internet Archive (e.g., court cases like Telewizja Polska USA, Inc. v. Echostar Satellite Corp, and St. Luke's Cataract \& Laser Institute v. James C. Sanderson). The author mentions that there is an open question whether to consider an archived web page as a duplicate of the original web page at a particular time in the past. This concern might prevent considering archived web pages as evidence.
Yasskin \cite{webpackages2017} describes several use cases with associated requirements for distributing copies of web packages. One use case is the authentication process, which is performed to ensure resources come from particular origins and to validate the content integrity against any attempt to tamper with or modify the content in transit. The author did not include a use case where content might be altered at any point in time on the server.
Tools have been developed to generate trusted timestamps in blockchain-based networks. OriginStamp \cite{gipp2015decentralized} allows users to submit plain text, a hash value, or any file format (e.g., PDF/PNG files). The data is not sent to OriginStamp's server. Instead, it is hashed in the user's browser, and only the resulting hash is transmitted to the server. Once delivered, it is added to the list of all hashes submitted by other users. Once per day, OriginStamp generates a single aggregated hash of all received hashes. This hash is then converted to a Bitcoin address that becomes part of a new Bitcoin transaction (i.e., the source or destination of a transaction in Bitcoin). The timestamp associated with the transaction is considered a trusted timestamp. OriginStamp provides instant timestamping in Bitcoin if a user is willing to pay a Bitcoin transaction fee. A user can verify a timestamp through OriginStamp's API or by visiting their website. The server first receives a hash from a user, then OriginStamp converts the hash to a Bitcoin address and sends a query to Bitcoin's API. If any transaction involving the given address is returned, the timestamp associated with the transaction can be used as a proof of existence. In addition to the process of verifying timestamps through OriginStamp's website, users may verify timestamps directly in Bitcoin.
Other services, such as Chainpoint, Tangible.io\footnote{\url{http://tangible.io/en/}}, Proof of Existence\footnote{\url{https://proofofexistence.com}}, and OpenTimestamps, are based on the same concept of using Bitcoin to timestamp digital documents. Some differences between these tools include:
\begin{itemize}
\item {\emph{Cost} - The OriginStamp service can be used free of charge unless users want an instant submission to Bitcoin. Users of Tangible.io and Proof of Existence, on the other hand, have to pay for the service.}
\item {\emph{Generation of aggregated hashes} - In OriginStamp, an aggregated hash is computed by storing all hashes received within a day (i.e., 24 hours) in a file, which then will be hashed to generate a single aggregated hash. Chainpoint and OpenTimestamps use a Merkle Tree \cite{Merkle79} to generate one aggregated hash (i.e., the root hash).}
\item {\emph{The number of Bitcoin transactions vs. hashes} - Services like OriginStamp, Chainpoint, and OpenTimestamps support issuing either one Bitcoin transaction per submitted hash or one transaction per aggregated hash. Other tools, such as Proof of Existence, create one Bitcoin transaction per hash.}
\item {\emph{Use} - OriginStamp, Tangible.io, and Proof of Existence provide online services through their websites that allow users to create or verify trusted timestamps using a web browser. Chainpoint and OpenTimestamps require installing client software in order to use the timestamping service.}
\item {\emph{Blockchain-based network} - Bitcoin is commonly used by all of these services to generate trusted timestamps. In addition, Chainpoint can create timestamps based on other blockchain networks like Ethereum \cite{wood2014ethereum}.}
\end{itemize}
Even though users of the tools mentioned above can pass data by value, such as plain text, any file format, or a hash value, they are not allowed to submit data by reference (i.e., passing a URI of a web page). In other words, these services are not directly timestamping web pages. The only exception is an additional service \cite{gippusingpaperforweb} established by OriginStamp. The service works by receiving a URI from a client and then hashing the content of the web page identified by the URI. All further steps performed to create or verify timestamps are similar to the steps mentioned in Section \ref{bitcoin_blockchain}. Figure \ref{img:originstamp_uri} (from \cite{gippusingpaperforweb}) shows the UI of this service where users can search for timestamped web pages by entering a URI. There are two disadvantages of this additional service. First, the service is no longer available on the live web\footnote{\url{https://www.isg.uni-konstanz.de/web-time-stamps/}}. Second, the hash is only generated on the HTML content of the main file identified by the URI, ignoring all embedded resources like images, scripts, and style sheets \cite{gippusingpaperforweb}. As will be illustrated in Section \ref{issues_section}, not including embedded resources in the hash calculation may leave the page vulnerable to undetected changes.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.35]{img/originstamp_uri}
\caption{A list of timestamped web pages. Users can search for a particular web page by typing a URI or text (from \cite{gippusingpaperforweb}).}
\label{img:originstamp_uri}
\end{figure}
Various tools (e.g., shell scripts by Branwen \cite{gwerntimestamping2017}) calculate a hash value by considering all resources constructing a web page (e.g., images and scripts) in addition to the HTML content. This seems to be a reasonable solution for timestamping web resources, but without considering other factors, as we will see in Section \ref{issues_section}, it is difficult to produce a repeatable hash for the same web page over time.
\section{Issues in Generating Cryptographic Hashes of Mementos} \label{issues_section}
Generating a hash value on the content of a memento is the key part in the process of timestamping the memento. As shown in Figure \ref{img:basic_timestamping}, the archived web page will not be directly timestamped in blockchain-based networks (e.g., Bitcoin and Ethereum). Instead, the hash value calculated on the content is the data to be timestamped. Regardless of the cryptographic hash function (e.g., MD5 or SHA-256), a resulting hash value should fulfill the following requirement, which emphasizes reproducing the same hash of a particular memento at different points in time.
\vspace{.5cm}
\begin{center}
\tikzstyle{background rectangle}=[thin,draw=black]
\begin{tikzpicture}[show background rectangle]
\node[align=justify, text width=10.5cm, inner sep=1em]{
If we download a memento \emph{URI-M$_x$} at time \emph{t$_n$} (denoted as \emph{URI-M$_x@$}\emph{t$_n$}), download the same memento at time \emph{t$_m$} (denoted as \emph{URI-M$_x@$}\emph{t$_m$}), and apply a hash function \emph{H} on the content of \emph{URI-M$_x@$}\emph{t$_n$} and \emph{URI-M$_x@$}\emph{t$_m$}, then \emph{H(URI-M$_x@$}\emph{t$_n$}\emph{)} $ = $ \emph{H(URI-M$_x@$}\emph{t$_m$}\emph{)}
};
\node[xshift=1ex, yshift=-.7ex, overlay, fill=white, draw=white, above
right] at (current bounding box.north west) {
\textbf{Requirement 1}: Repeatable hash values
};
\end{tikzpicture}
\end{center}
\vspace{.4cm}
In this section, we will discuss several difficulties in generating a constant hash at different points in time for a specific archived web page. As the discussion advances, we will identify further requirements in addition to Requirement 1 above. We will start with a simple scenario where hashes are calculated on only the HTML content of mementos. The discussion then turns toward more complex scenarios encountered when all resources constructing a memento are included in the hash calculation.
\subsection{Generating hashes on HTML content only} \label{html_only}
Consider a scenario illustrated in Figure \ref{img:html_change_over_time} where, in August 2017, a user needs to generate a hash value based on the content of the archived page
\begin{center}
\begin{tabular}{l}
\url{http://wsdl-maturban.cs.odu.edu:11011/michael/wayback/2017071}
\\
\url{7185130/https://climate.nasa.gov/vital-signs/carbon-dioxide/}
\end{tabular}
\end{center}
The memento is preserved by Michael's Evil Wayback and illustrated in Figure \ref{img:11011_org}. The user runs a ``cURL'' command as shown in Figure \ref{img:hash_on_html}, which will first download the HTML content of the main page and then generate a SHA-256 hash of the content, resulting in a value that ends with ``f521''. Two months later (i.e., in October 2017), the user requests the same archived web page from Michael's Evil Wayback and performs the hash calculation on the returned HTML content, observing a different hash value that ends with ``3790''.
\begin{figure}[h!]
\centering
\includegraphics[width=115mm]{img/text_change_over_time.pdf}
\caption{An archived web page (http://wsdl-maturban.cs.odu.edu:11011/michael/wayback/ 20170717185130/https://climate.nasa.gov/vital-signs/carbon-dioxide/) has been tampered with, and the simple approach of generating a hash based on the HTML content successfully detected the change.}
\label{img:html_change_over_time}
\end{figure}
One possible cause of these differing hash values is demonstrated in Figure \ref{img:html_change_over_time} with a ``black hat'' icon. Michael's Evil Wayback has tampered with the memento in favor of individuals or organizations who deny that $CO_{2}$ is one of the main causes of global warming. The latest important $CO_2$ measurement has been changed from $406.31$ $ppm$ to $270.31$ $ppm$ as shown in Figure \ref{fig:text_chnage}. By applying the simple approach of computing hashes on the HTML content, the user becomes aware that the content retrieved in October 2017 cannot be identical to the content retrieved a couple of months earlier.
\begin{figure}[!t]
\centering
\begin{Verbatim}[commandchars=\\\{\}]
\userinput{\%} curl -s http://wsdl-maturban.cs.odu.edu:11011/michael/wayback
/20170717185130/https://climate.nasa.gov/vital-signs/carbon-
dioxide/ | shasum -a 256
e834c71aefda284fe03a4eed4e8cb78ea581537ba8884aecec29bd2d66cbf
521 -
\end{Verbatim}
\vspace{-0.5em}
\caption{cURL command to generate a SHA-256 hash of the HTML content only.}
\label{img:hash_on_html}
\end{figure}
We focus now on a more complicated scenario where an image or any other embedded resource constructing the archived page is altered. For instance, the bottom-right graph of the archived web page shown in Figure \ref{img:11011image} has been altered, changing the historical record of $CO_2$. This image is located on the web at
\begin{center}
\begin{tabular}{l}
\url{http://wsdl-maturban.cs.odu.edu:11011/michael/wayback/20170717}
\\
\url{185130im_/https://climate.nasa.gov/system/charts/15_co2_left_0}\\\url{61316.gif}
\end{tabular}
\end{center}
It is linked from the main file of the archived page
\begin{center}
\begin{tabular}{l}
\url{http://wsdl-maturban.cs.odu.edu:11011/michael/wayback/20170717}\\\url{185130/https://climate.nasa.gov/vital-signs/carbon-dioxide/}
\end{tabular}
\end{center}
Can such a change be detected by the ``cURL'' command shown in Figure \ref{img:hash_on_html}? The answer is ``no'' since it only considers hashing the HTML code of the main file and not the embedded resources. Figure \ref{img:image_change_over_time} shows the results of running the command on the archived page before (Figure \ref{img:11011org2}) and after it is modified (Figure \ref{img:11011image}). The hash values are identical, which falsely indicates that the archived page is not corrupted. Therefore, we should include embedded resources in the hash calculation.
\begin{figure*}
\centering
\subfigure[Accessing the archived page in August 2017. ]{
\setlength{\fboxsep}{0pt}%
\fbox{
\includegraphics[scale=0.15]{img/11011org2}
}
\label{img:11011org2}
}
\subfigure[Accessing the archived page in October 2017]{
\setlength{\fboxsep}{0pt}%
\fbox{
\includegraphics[scale=0.15]{img/11011image}
}
\label{img:11011image}
}
\caption{A memento has been tampered with (modifying an image). The approach of hashing HTML content only does not detect the change.}
\label{fig:imagechnage2}
\end{figure*}
\begin{figure}[h!]
\centering
\includegraphics[width=115mm]{img/image_change_over_time.pdf}
\caption{An archived web page (http://wsdl-maturban.cs.odu.edu:11011/michael/wayback/ 20170717185130/https://climate.nasa.gov/vital-signs/carbon-dioxide/) has been tampered with, and the simple approach of generating a hash based on the HTML content alone does not detect that the archived page is corrupted.}
\label{img:image_change_over_time}
\end{figure}
\subsection{Generating a hash of a composite memento} \label{composite}
A composite memento refers to all embedded resources that comprise a memento \cite{ainsworth2014framework}. We modified the shell script (see Figure \ref{img:hash_embedded}) written by Gwern Branwen \cite{gwerntimestamping2017}. The modified script, \emph{sha256\_include\_all.sh}, computes a final hash by hashing a text file containing a set of hash values of all embedded resources constructing a memento (i.e., a composite memento). Figure \ref{img:hash_embedded_on_orig} shows an example of running this script on the content of a real archived page. Figure \ref{img:image_change_over_time2} shows the results of computing the hash on the original archived page (Figure \ref{img:11011org2}) and after an image within the memento is modified (Figure \ref{img:11011image}). The new script successfully produces two different hash values. The first one ends with ``6e8cb'' while the second hash ends with ``a92fb''. This indicates that the memento has been tampered with or altered. Thus, fulfilling Requirement 2 is essential when computing hashes.
\begin{figure}[!t]
\centering
\begin{Verbatim}[commandchars=\\\{\}]
#!/bin/bash
#set -euo pipefail
rm -rf ~/tmp_www/*
cd ~/tmp_www/
USER_AGENT="Firefox 6.4"
FILE=\$(nice -n 20 wget --continue --unlink --page-requisites
--timestamping -e robots=off -k
--user-agent="\$USER_AGENT" "\$1" 2>&1 \
| egrep 'Saving to: ‘.*’'
| sed -e 's/Saving to: ‘//' | tr -d '’')
let "c=0"
for TARGET in \$FILE; do
if [ -f "\$TARGET" ]; then
let "c++"
CONT=\$(cat \$TARGET)
HASH=\$(echo "\$CONT" | shasum -a 256 | awk '{print \$1;}')
echo "\$HASH" >> "allhashes.txt"
fi
done
if [ \$c = 1 ]; then
FINAL_HASH="\$HASH"
else
FINAL_HASH=\$(cat "allhashes.txt" | shasum -a 256
| awk '{print \$1;}')
fi
echo "Final hash: \$FINAL_HASH"
\end{Verbatim}
\vspace{-0.5em}
\caption{A shell script (sha256\_include\_all.sh) to generate a final hash by aggregating all hash values of the embedded resources in a single temporary file and hashing the file.}
\label{img:hash_embedded}
\end{figure}
\begin{figure}[!t]
\centering
\begin{Verbatim}[commandchars=\\\{\}]
\userinput{\%} sha256_include_all.sh http://wsdl-maturban.cs.odu.edu:11011/
michael/wayback/20170717185130/https://climate.nasa.gov/vital-
signs/carbon-dioxide/
Final hash: 2fa7ece06402cc9d89b9cfe7a53e4ec31a4417a34d79fee584c
01d706036e8cb
\end{Verbatim}
\vspace{-0.5em}
\caption{An example of generating an aggregated hash using sha256\_include\_all.sh.}
\label{img:hash_embedded_on_orig}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=115mm]{img/image_change_over_time2.pdf}
\caption{An archived web page (http://wsdl-maturban.cs.odu.edu:11011/michael/wayback/ 20170717185130/https://climate.nasa.gov/vital-signs/carbon-dioxide/) has been tampered with. The shell script sha256\_include\_all.sh successfully detects that the archived page is corrupted.}
\label{img:image_change_over_time2}
\end{figure}
\vspace{.5cm}
\begin{center}
\tikzstyle{background rectangle}=[thin,draw=black]
\begin{tikzpicture}[show background rectangle]
\node[align=justify, text width=10.5cm, inner sep=1em]{
We should hash a composite memento. In most cases this would include hashing the main HTML file as well as other embedded resources in the memento, such as images, style sheets, JavaScript files, iframes, and others.
};
\node[xshift=1ex, yshift=-.7ex, overlay, fill=white, draw=white, above
right] at (current bounding box.north west) {
\textbf{Requirement 2}: Hash a composite memento
};
\end{tikzpicture}
\end{center}
Although including embedded resources of a memento in a hash calculation may help identify memento tampering (Requirement 2), it raises more questions about whether to exclude some of those resources (e.g., archive-specific resources) for several reasons explained in the next section.
\subsection{Excluding archive-specific content} \label{archive_specific}
Before sending any requested memento to a client, archives not only insert extra code for usability (e.g., the IA's banner) in the original content of mementos but may also apply some transformation to appropriately replay content in a user's browser. An archive performs such transformations for different reasons. First, all links to embedded resources constructing an archived page are rewritten so that these resources are retrieved from the archive, not from the live web. For instance, the memento
\begin{center}
\begin{tabular}{l}
\url{https://web.archive.org/web/20170705002324/http://www.weeklysta}
\\
\url{ndard.com/}
\end{tabular}
\end{center}
{\raggedleft{}contains the logo image}
\begin{center}
\url{http://www.weeklystandard.com/media/images/logo.png}
\end{center}
{\raggedleft{}This link to the logo image is rewritten by the Wayback Machine to point to the archive}
\begin{center}
\begin{tabular}{l}
\url{https://web.archive.org/web/20170705161539im_/http://www.weekly}
\\
\url{standard.com/media/images/logo.png}
\end{tabular}
\end{center}
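This rewriting step amounts to string assembly: concatenate the archive's replay prefix, the capture timestamp, an optional modifier (e.g., im_ for images), and the original URI. The helper below is a simplification of what OpenWayback actually does (which also rewrites relative links and injects client-side code), but it reproduces the example above.

```shell
# rewrite_link PREFIX TIMESTAMP MODIFIER URI  -- Wayback-style link
# rewriting. MODIFIER is e.g. "im_" for images, or "" for the page itself.
rewrite_link() {
  echo "$1/$2$3/$4"
}

rewrite_link 'https://web.archive.org/web' 20170705161539 'im_' \
             'http://www.weeklystandard.com/media/images/logo.png'
# -> https://web.archive.org/web/20170705161539im_/http://www.weeklystandard.com/media/images/logo.png
```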
Another purpose of such archive-specific content is to inform users that what they are viewing is actually from an archive rather than the live web. The Internet Archive, for example, adds HTML comments at the end of the main HTML file of a memento to indicate when the memento was created and retrieved (Figure \ref{fig:ia_added_code}). In addition, archives insert content to convey information such as the archive name, current datetime, and copyright-related statements. Jones et al. \cite{jones2016rules,jonesraw20162,jonesraw2016} explore transformation of original content performed by different archives and introduce several rules for acquiring mementos and extracting text from the content.
\begin{figure}[!t]
\centering
\begin{Verbatim}[commandchars=\\\{\}]
<html>
<head> ... </head>
<body>
...
<!--
FILE ARCHIVED ON 23:42:15 Apr 6, 2017 AND RETRIEVED FROM THE
INTERNET ARCHIVE ON 3:40:16 Apr 7, 2017.
JAVASCRIPT APPENDED BY WAYBACK MACHINE, COPYRIGHT INTERNET
ARCHIVE.
ALL OTHER CONTENT MAY ALSO BE PROTECTED BY COPYRIGHT (17 U.S.
C.SECTION 108(a)(3)).
-->
</body>
</html>
\end{Verbatim}
\vspace{-0.5em}
\caption{HTML comments added by the Internet Archive.}
\label{fig:ia_added_code}
\end{figure}
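Before hashing, such archive-added markers would have to be stripped. The sketch below removes only the Wayback-style ``FILE ARCHIVED ON'' comment block of Figure \ref{fig:ia_added_code} (assuming its opening and closing delimiters sit on lines of their own, as in that example) and would need to be extended per archive; ordinary comments pass through unchanged.

```shell
# strip_ia_comment  -- drop Wayback "FILE ARCHIVED ON ..." comment blocks
# from HTML read on stdin; all other lines (including ordinary comments)
# pass through. Assumes <!-- and --> sit on lines of their own.
strip_ia_comment() {
  awk '
    /^<!--/      { buf = $0; inc = 1; next }       # start buffering a comment
    inc && /-->/ { buf = buf "\n" $0               # comment ends: keep it only
                   if (buf !~ /FILE ARCHIVED ON/)  # if not archive-added
                     print buf
                   inc = 0; next }
    inc          { buf = buf "\n" $0; next }       # inside a comment
                 { print }                         # regular content
  '
}

printf '%s\n' '<p>406.31 ppm</p>' '<!--' 'FILE ARCHIVED ON 23:42:15 Apr 6, 2017' '-->' \
  | strip_ia_comment
# -> <p>406.31 ppm</p>
```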
Figure \ref{fig:archive-specific-content} shows examples of archive-specific content which are not part of the original content when the memento was initially created. The Internet Archive's banner in Figure \ref{fig:banner_ia} indicates that there are 2,138 mementos available in the archive for the web page\footnote{\url{http://www.ulster.ac.uk/}}. By hovering the mouse over the banner and clicking on a specific date, the corresponding web page will be displayed in the browser. Figure \ref{fig:banner_proni} presents a different visualization tool to navigate through the archived versions of Ulster University's website\footnote{\url{http://webarchive.proni.gov.uk/20150826163149/http://www.ulster.ac.uk/}}.
\begin{figure*}
\centering
\subfigure[A Memento from Internet Archive
]{
\setlength{\fboxsep}{0pt}%
\fbox{
\includegraphics[width=2.7in, height=2.5in]{img/banner_ia}
}
\label{fig:banner_ia}
}
\\
\subfigure[From Proni Archive accessed in 2016]{
\setlength{\fboxsep}{0pt}%
\fbox{
\includegraphics[width=2.7in, height=2.2in ]{img/banner_proni}
}
\label{fig:banner_proni}
}
\hspace{0.2cm}
\subfigure[Same memento accessed in 2017]{
\setlength{\fboxsep}{0pt}%
\fbox{
\includegraphics[width=2.7in, height=2.2in]{img/banner_proni_new}
}
\label{fig:banner_proni_new}
}
\hspace{-0.2cm}
\caption{Archive-specific content (marked in red).}
\label{fig:archive-specific-content}
\end{figure*}
We can identify archive-specific content in archives which support the Memento protocol \cite{memento:rfc}. As shown in Figure \ref{img:donotnego}, archives should respond with the HTTP ``Link'' header containing ``\url{http://mementoweb.org/terms/donotnegot}\\\url{iate}'' and ``\texttt{rel="type"}'' to requests for resources which are not mementos and are excluded from content negotiation based on the time dimension.
\begin{figure}[!t]
\centering
\begin{Verbatim}[commandchars=\\\{\}]
\userinput{\%} curl -I https://www.webarchive.org.uk/wayback/archive/images/
toolbar/wayback-toolbar-logo.png
HTTP/1.1 200 OK
Date: Mon, 06 Nov 2017 22:35:53 GMT
Server: Apache-Coyote/1.1
\userinput{Link: <http://mementoweb.org/terms/donotnegotiate>; rel="type"}
Accept-Ranges: bytes
ETag: W/"4549-1486118270000"
Last-Modified: Fri, 03 Feb 2017 10:37:50 GMT
Content-Type: image/png
Content-Length: 4549
Content-Language: en
\end{Verbatim}
\vspace{-0.5em}
\caption{One way to identify archive-specific resources is to look at the HTTP Response header ``Link'' that contains ``http://mementoweb.org/terms/donotnegotiate''.}
\label{img:donotnego}
\end{figure}
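Filtering on this header can be automated. The helper below reads HTTP response headers on standard input and succeeds when the ``do not negotiate'' Link relation is present; the sample headers are pasted from Figure \ref{img:donotnego} rather than fetched live (in practice one would pipe in the output of curl -sI).

```shell
# is_archive_specific  -- succeed when HTTP response headers (on stdin)
# carry the Memento "do not negotiate" Link relation, marking the
# resource as archive-specific rather than a memento.
is_archive_specific() {
  grep -qi 'Link:.*<http://mementoweb.org/terms/donotnegotiate>.*rel="type"'
}

# Sample headers taken from the figure (in practice: curl -sI "$uri" | ...).
headers='HTTP/1.1 200 OK
Link: <http://mementoweb.org/terms/donotnegotiate>; rel="type"
Content-Type: image/png'

if printf '%s\n' "$headers" | is_archive_specific; then
  echo 'exclude from hash calculation'
fi
# -> exclude from hash calculation
```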
We want to avoid including archive-specific content in hash calculations for two reasons. First, as mentioned, this type of content does not belong to the content of an original page. Second, resources such as the Wayback Machine's banner in Figure \ref{fig:banner_ia} and the sidebar inserted by Proni's archive in Figure \ref{fig:banner_proni} and Figure \ref{fig:banner_proni_new} are expected to change over time due to updates in archive-specific software (e.g., the Wayback Machine's software). In addition, archive-specific resources may carry dynamically-generated data corresponding to the current state of an archive. For example, the sidebar in Figure \ref{fig:banner_proni} lists all years in which mementos are available. In this case, if a user accesses the memento in 2016, all years which have mementos until 2016 will be part of the information in the sidebar. On the other hand, accessing the same memento in 2017 will result in having a new updated sidebar to include 2017's mementos. Another example of dynamically-generated information is the number of available mementos displayed in the Internet Archive's banner and Proni's sidebar. Thus, we need to avoid including these archive-specific resources when calculating hashes.
\vspace{.5cm}
\begin{center}
\tikzstyle{background rectangle}=[thin,draw=black]
\begin{tikzpicture}[show background rectangle]
\node[align=justify, text width=10.5cm, inner sep=1em]{
Resources added by archives are not part of the original content and should not be included in the hash calculation.
};
\node[xshift=1ex, yshift=-.7ex, overlay, fill=white, draw=white, above
right] at (current bounding box.north west) {
\textbf{Requirement 3}: Avoid archive-specific resources
};
\end{tikzpicture}
\end{center}
\vspace{1em}
\textbf{Extracting ``raw'' content of archived web pages:}
Some archives provide an API or other mechanism to allow users to obtain the ``raw'' content of a memento (i.e., the original content without any modification). Archives that operate OpenWayback, for instance, send back raw content when receiving an HTTP request with a URI-M containing ``id\_'' appended to the timestamp. For example,
\begin{center}
\begin{tabular}{l}
\url{https://web.archive.org/web/20100923005105id_/http://www.cnn.co}
\\
\url{m:80/}
\end{tabular}
\end{center}
{\raggedleft{}will return the raw content of the memento}
\begin{center}
\begin{tabular}{l}
\url{https://web.archive.org/web/20100923005105/http://www.cnn.com:8}
\\
\url{0/}
\end{tabular}
\end{center}
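Deriving the raw-content URI-M is a mechanical transformation: suffix the 14-digit timestamp with id_. A sed-based sketch, assuming the standard /web/<timestamp>/ layout used by Wayback-style archives:

```shell
# to_raw_urim URIM  -- turn a Wayback URI-M into its raw-content (id_)
# form by suffixing the 14-digit timestamp. Sketch: assumes the standard
# /web/<timestamp>/ path layout.
to_raw_urim() {
  printf '%s\n' "$1" | sed -E 's|(/web/[0-9]{14})/|\1id_/|'
}

to_raw_urim 'https://web.archive.org/web/20100923005105/http://www.cnn.com:80/'
# -> https://web.archive.org/web/20100923005105id_/http://www.cnn.com:80/
```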
Even though using id\_ is beneficial for raw content retrieval, it might cause issues: for example, links to resources constructing a memento are not rewritten to point to an archive, which prevents the memento from being replayed appropriately, as Figure \ref{fig:cnn_raw_rewritten} shows.
\begin{figure*}
\centering
\subfigure[Requesting the CNN archived page without including the id\_ option in the URI-M: https://web.archive.org/ web/20100923005105/http://www.cnn.com:80/.]{
\setlength{\fboxsep}{0pt}%
\fbox{
\includegraphics[scale=0.15]{img/cnn_rewritten}
}
\label{img:cnn_rewritten}
}
\subfigure[Requesting the raw content of the CNN archived page using id\_ option: https://web.archive.org/web/ 20100923005105id\_/http://www.cnn. com:80/.]{
\setlength{\fboxsep}{0pt}%
\fbox{
\includegraphics[scale=0.15]{img/cnn_raw}
}
\label{img:cnn_raw}
}
\caption{The archived web page vs. its raw content.}
\label{fig:cnn_raw_rewritten}
\end{figure*}
\subsection{Excluding any resources from the live web} \label{live}
Archives rewrite all links of embedded resources of a memento to point to the archive, yet some URIs are not rewritten because they are produced dynamically, for example, by events triggered by client-side JavaScript. Resources specified by such links are often retrieved from the live web \cite{zombie10}. Web resources from the live web are expected to either change or disappear. Thus, we want to avoid such resources in computing a memento's hash.
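A defensive check along these lines: given the list of resource URIs a memento actually loads, flag any that escape the archive. The helper below is a sketch that uses the archive prefix as a grep pattern (dots in it match any character, which is acceptable for illustration); the resource URIs are examples based on this section.

```shell
# live_web_leaks ARCHIVE_PREFIX  -- read resource URIs on stdin and print
# those that do not resolve within the archive (candidates for exclusion
# from hashing, and red flags for the memento itself).
live_web_leaks() {
  grep -v "^$1"
}

# Example resource list for a memento: one properly rewritten URI and one
# live-web leak (the resource discussed below).
printf '%s\n' \
  'http://web.archive.org/web/20110901233330/http://reuters.com/' \
  'http://cdn.projecthaile.com/js/trb-1.js' \
  | live_web_leaks 'http://web.archive.org/'
# -> http://cdn.projecthaile.com/js/trb-1.js
```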
Lerner et al. \cite{lerner2017rewriting} explore the web archiving ``Archive-Escapes'' vulnerability which occurs when requesting live web resources as part of constructing a memento. This may lead to changing a user's view when browsing a memento. The authors show a proof of concept implementation (Figure \ref{img:archive_escapes}) of the Archive-Escapes attack. Malicious code is injected in the live web resource
\begin{center}
\url{http://cdn.projecthaile.com/js/trb-1.js}
\end{center}
{\raggedleft{}By requesting this resource as a part of constructing the memento}
\begin{center}
\url{http://web.archive.org/web/20110901233330/reuters.com}
\end{center}
{\raggedleft{}it causes a user's view to be completely controlled by the malicious code. This leads to the need for a new requirement.}
\begin{figure*}[!t]
\centering
\subfigure[Accessing the archived web page http://web.archive.org/web/20110901233330/reuters.com page before the ``Archive-Escapes'' attack.]{
\setlength{\fboxsep}{0pt}%
\fbox{
\includegraphics[scale=0.4]{img/archive_escapes_before}
}
\label{img:archive_escapes_before}
}
\subfigure[Accessing the same archived page after the ``Archive-Escapes'' attack.]{
\setlength{\fboxsep}{0pt}%
\fbox{
\includegraphics[scale=0.4]{img/archive_escapes_after}
}
\label{img:archive_escapes}
}
\caption{A proof of concept of ``Archive-Escapes'' attack -- a request to a live web resource is made when reconstructing a memento (from \cite{lerner2017rewriting}).}
\label{fig:archive_escape_example}
\end{figure*}
\vspace{.5cm}
\begin{center}
\tikzstyle{background rectangle}=[thin,draw=black]
\begin{tikzpicture}[show background rectangle]
\node[align=justify, text width=10.5cm, inner sep=1em]{
No resource located on the live web should be part of the hashing process.
};
\node[xshift=1ex, yshift=-.7ex, overlay, fill=white, draw=white, above
right] at (current bounding box.north west) {
\textbf{Requirement 4}: No resources from the live web
};
\end{tikzpicture}
\end{center}
\subsection{Archived web pages might be served from a cache} \label{cache}
Web archives might use a cache in order to improve performance by speeding up subsequent requests. The Wayback Machine's HTTP Response header ``X-Page-Cache'' indicates whether the delivered content comes from the cache (``X-Page-Cache: HIT'') or not (``X-Page-Cache: MISS''). Although caching has powerful benefits, the returned content might not reflect what is actually in the archive. Figure \ref{fig:ia_cache_miss} shows content that is not served from the cache (i.e., ``X-Page-Cache: MISS''). The issue is that cache HITs carry the risk of producing the same hash even if the archived page has changed. For example, we issued several HTTP requests for the same memento (Figure \ref{fig:ia_cache__hit_miss}). The first response was not served from the cache, and its computed MD5 hash ends in ``7afd3'', while the two responses that followed were served from the cache. The MD5 hash calculated on the content of each of these two responses was identical to the first hash value, because the content served from the cache is an exact copy of the content returned upon the first request. Because content in the cache is only stored for a ``short'' period of time (depending on how the caching system is configured) before it is discarded (or updated), the fourth response was not a cache HIT. The archived page appears to have changed, since we obtained a different MD5 hash value, ending in ``b1059''. Thus, we introduce a new requirement for computing a memento's hash.
\vspace{.5cm}
\begin{center}
\tikzstyle{background rectangle}=[thin,draw=black]
\begin{tikzpicture}[show background rectangle]
\node[align=justify, text width=10.5cm, inner sep=1em]{
We should avoid considering content returned from a cache as this does not reflect the current content of the archive.
};
\node[xshift=1ex, yshift=-.7ex, overlay, fill=white, draw=white, above
right] at (current bounding box.north west) {
\textbf{Requirement 5}: Avoid content served from the cache
};
\end{tikzpicture}
\end{center}
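Requirement 5 can be enforced mechanically by inspecting the ``X-Page-Cache'' header before a response is admitted into the hash computation. A sketch of that check (header names as shown in Figure \ref{fig:ia_cache_miss}; a real crawler would re-issue the request, or send cache-busting headers, on a HIT):

```python
def is_fresh(headers: dict) -> bool:
    """True iff the response was served by the archive itself, not its cache
    (Requirement 5). Header lookup is case-insensitive, as in HTTP."""
    lowered = {k.lower(): v for k, v in headers.items()}
    return lowered.get("x-page-cache", "MISS").upper() != "HIT"
```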
\begin{figure}[!t]
\centering
\begin{Verbatim}[commandchars=\\\{\}]
nn.com/
HTTP/1.1 200 OK
Server: Tengine/2.1.0
Date: Mon, 02 Oct 2017 11:10:15 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 147311
Connection: keep-alive
X-Archive-Orig-set-cookie: CG=US:CA:San+Francisco; path=/
X-Archive-Orig-expires: Wed, 24 Jul 2013 14:48:55 GMT
X-Archive-Orig-vary: Accept-Encoding
X-Archive-Orig-server: nginx
X-Archive-Orig-last-modified: Wed, 24 Jul 2013 14:47:16 GMT
X-Archive-Orig-connection: close
X-Archive-Orig-cache-control: max-age=60, private
X-Archive-Orig-date: Wed, 24 Jul 2013 14:46:36 GMT
X-Archive-Guessed-Charset: utf-8
Memento-Datetime: Wed, 24 Jul 2013 14:48:01 GMT
Link: <http://www.cnn.com/>; rel="original", <http://web.archi
ve.org/web/timemap/link/http://www.cnn.com/>; rel="timemap"; t
ype="application/link-format", <http://web.archive.org/web/htt
p://www.cnn.com/>; rel="timegate", <http://web.archive.org/web
/20000620180259/http://cnn.com:80/>; rel="first memento"; date
time="Tue, 20 Jun 2000 18:02:59 GMT", <http://web.archive.org/
web/20130723125209/http://www.cnn.com/>; rel="prev memento"; d
atetime="Tue, 23 Jul 2013 12:52:09 GMT", <http://web.archive.o
rg/web/20130724144801/http://www.cnn.com/>; rel="memento"; dat
etime="Wed, 24 Jul 2013 14:48:01 GMT", <http://web.archive.org
/web/20130725162936/http://www.cnn.com/>; rel="next memento";
datetime="Thu, 25 Jul 2013 16:29:36 GMT", <http://web.archive.
org/web/20000620180259/http://cnn.com:80/>; rel="last memento"
; datetime="Tue, 20 Jun 2000 18:02:59 GMT"
Content-Security-Policy: default-src 'self' 'unsafe-eval' 'uns
afe-inline' data: archive.org web.archive.org analytics.archi
ve.org
X-App-Server: wwwb-app23
X-ts: ----
X-Archive-Playback: 0
X-location: All
\textbf{X-Page-Cache: MISS}
...
\end{Verbatim}
\vspace{-0.5em}
\caption{Memento is not delivered from the cache as the HTTP Response header ``X-Page-Cache: MISS'' indicates.}
\label{fig:ia_cache_miss}
\end{figure}
\begin{figure}[!t]
\centering
\begin{Verbatim}[commandchars=\\\{\}]
Mon Oct 2 01:15:18 EDT 2017
p://www.cnn.com/ | md5
\textbf{477b6d923cbb7bf9675a0d2feb37afd3}
Mon Oct 2 01:16:29 EDT 2017
p://www.cnn.com/ | md5
\textbf{477b6d923cbb7bf9675a0d2feb37afd3}
Mon Oct 2 01:19:31 EDT 2017
p://www.cnn.com/ | md5
\textbf{477b6d923cbb7bf9675a0d2feb37afd3}
Mon Oct 2 02:10:24 EDT 2017
p://www.cnn.com/ | md5
\textbf{dda6a9bf091d412cbdc2226ce3eb1059}
\end{Verbatim}
\vspace{-0.5em}
\caption{The first ``cURL'' request was not served from the cache (i.e., ``X-Page-Cache: MISS'') while the second and third request were cache HITs. After about an hour, the fourth request was a cache MISS and produces a different hash. This example shows that cache HITs produce the same hash even though the memento might have changed.}
\label{fig:ia_cache__hit_miss}
\end{figure}
\subsection{Archived web pages might be in flux (changes in TimeMaps)} \label{timemaps}
Archived resources constructing a memento may have different creation dates (i.e., Memento-Datetime). For example, the main HTML file of the memento
\begin{center}
\begin{tabular}{l}
\url{https://web.archive.org/web/20170414182743/https://climate.nasa.g}
\\
\url{ov/vital-signs/carbon-dioxide/}
\end{tabular}
\end{center}
{\raggedleft{}was captured on April 14, 2017, while one of the embedded images was captured on April 13, 2017}
\begin{center}
\begin{tabular}{l}
\url{https://web.archive.org/web/20170413144604im_/https://climate.nas}
\\
\url{a.gov/system/time_series_images/582_co2_2002_09.jpg}
\end{tabular}
\end{center}
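Because the embedded resources of a composite memento carry their own Memento-Datetimes, a repeatable hash must aggregate the root page and every embedded resource in a canonical order. A sketch of one such aggregation (the SHA-256 choice and the URI-M/hash pairing are ours, purely for illustration):

```python
import hashlib

def composite_hash(resource_hashes: dict) -> str:
    """Aggregate hash over a composite memento: the root HTML plus all
    embedded resources, keyed by URI-M.  Sorting by URI-M makes the
    digest independent of download order, so it is repeatable."""
    h = hashlib.sha256()
    for urim in sorted(resource_hashes):
        h.update(urim.encode() + b"\n" + resource_hashes[urim].encode() + b"\n")
    return h.hexdigest()
```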
A TimeMap \cite{memento:rfc} contains a list of all available mementos in the archive for a particular original resource. For example, the TimeMap of the original resource \url{http://www.bbc.com/} contains a list of 27,770 mementos and is available in the Internet Archive at
\begin{center}
\url{http://web.archive.org/web/timemap/link/http://www.bbc.com/}
\end{center}
{\raggedleft{}From this list, we selected and downloaded the following memento several times}
\begin{center}
\url{https://web.archive.org/web/20170807231028/http://www.bbc.com/}
\end{center}
{\raggedleft{}We first downloaded this memento on August 14, 2017. We noticed that the TimeMap}
\begin{center}
\begin{tabular}{l}
\url{http://web.archive.org/web/timemap/link/http://ichef.bbci.co.uk/}
\\
\url{wwhp/144/cpsprodpb/730D/production/_97235492_p05brd0w.jpg}
\end{tabular}
\end{center}
{\raggedleft{}of one of the embedded images was empty. The archive had not yet captured this image and responded with ``404 NOT FOUND'' to a request to the rewritten link (URI-M)}
\begin{center}
\begin{tabular}{l}
\url{https://web.archive.org/web/20170807231028im_/http://ichef.bbci.}
\\
\url{co.uk/wwhp/144/cpsprodpb/730D/production/_97235492_p05brd0w.jpg}
\end{tabular}
\end{center}
We downloaded the same memento on August 21, 2017. We found that the image's TimeMap is no longer empty as it consists of one memento
\begin{center}
\begin{tabular}{l}
\url{https://web.archive.org/web/20170807230527im_/http://ichef.bbci.}
\\
\url{co.uk/wwhp/144/cpsprodpb/730D/production/_97235492_p05brd0w.jpg}
\end{tabular}
\end{center}
The hash generated on the composite memento on August 14, 2017 ended in ``288d7'' which is different from the hash generated for the same composite memento downloaded on August 21, 2017, ending in ``80845''.
Brunelle et al. \cite{brunelle2013evaluation} studied the TimeMaps of 4,000 original resources for three months and concluded that the number of mementos in TimeMaps changes and, in some cases, decreases. This will affect how a memento is constructed and thus result in a different hash value being generated. In addition, Kelly et al. \cite{kelly_arXiv2017,kelly_jcdl2017} discovered that the number of mementos in the TimeMap of an original web page may vary depending on the tool or the API used to access the archive (e.g., via the Internet Archive's web interface\footnote{\url{https://archive.org/web/}} or the Internet Archive's CDX API\footnote{The Wayback CDX Server API: \url{https://github.com/internetarchive/wayback/tree/master/wayback-cdx-server}}).
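Whether a TimeMap changed between two downloads can be checked by extracting and comparing its URI-Ms. A minimal sketch of a parser for the link-format (RFC 7089) TimeMaps shown above (the regular expression covers only entries whose rel value contains ``memento''):

```python
import re

# Matches <URI-M>; rel="...memento..." entries in an RFC 7089
# link-format TimeMap; "original", "timegate" and "timemap" entries
# do not contain the substring "memento" and are skipped.
MEMENTO = re.compile(r'<([^>]+)>;\s*rel="[^"]*memento[^"]*"')

def memento_urims(timemap_text: str):
    """Extract URI-Ms from a link-format TimeMap; comparing this list
    across downloads reveals whether the TimeMap is still in flux."""
    return MEMENTO.findall(timemap_text)
```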
\vspace{.5cm}
\begin{center}
\tikzstyle{background rectangle}=[thin,draw=black]
\begin{tikzpicture}[show background rectangle]
\node[align=justify, text width=10.5cm, inner sep=1em]{
Changing TimeMaps could affect the computation of hashes. It might be necessary to estimate when a memento becomes stable within the archive to avoid issues of having different hashes.
};
\node[xshift=1ex, yshift=-.7ex, overlay, fill=white, draw=white, above
right] at (current bounding box.north west) {
\textbf{Requirement 6}: Be aware of the effect of changing TimeMaps
};
\end{tikzpicture}
\end{center}
\subsection{Dynamic content} \label{dynamic}
Some values are generated on the client-side by JavaScript code which may result in having random or different values each time the JavaScript code is executed. Considering resources with random values may cause inconsistencies in hash calculation. Figure \ref{fig:cnn_1_2} shows a ``rainy/thunder'' icon when the memento:
\begin{center}
\url{https://web.archive.org/web/20130530221910/http://www.cnn.com/}
\end{center}
{\raggedleft{} was accessed on September 21, 2017. Reloading the same memento in the browser, we noticed that the icon changed to be ``cloudy''. This happens because the URI to the icon is generated by JavaScript, which involves retrieving the current datetime.}
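A practical, if heuristic, defense is to replay the memento twice and exclude any resource whose content differs between the two replays. A sketch, assuming we already have per-resource content hashes from each replay:

```python
def stable_resources(snapshot_a: dict, snapshot_b: dict) -> dict:
    """Given two {URI: content-hash} snapshots of the same memento taken
    at different times, keep only resources whose hash did not change;
    the rest are treated as dynamically generated and excluded from the
    hash computation (Requirement 7)."""
    return {u: h for u, h in snapshot_a.items()
            if snapshot_b.get(u) == h}
```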
\begin{figure*}
\centering
\subfigure[Accessing https://web.archive.org/web/20130530221910/http://www.cnn.com/ on September 21, 2017 at 10:12 AM (Rainy/thunder icon).]{
\setlength{\fboxsep}{0pt}%
\fbox{
\includegraphics[scale=.33]{img/cnn_1}
}
\label{img:1_cnn}
}
\subfigure[Reloading the same memento at a different time may produce a different icon (Cloudy icon).]{
\setlength{\fboxsep}{0pt}%
\fbox{
\includegraphics[scale=.33]{img/cnn_2}
}
\label{img:2_cnn}
}
\caption{An example that shows how randomly generated values might affect the hashing process.}
\label{fig:cnn_1_2}
\end{figure*}
\vspace{.5cm}
\begin{center}
\tikzstyle{background rectangle}=[thin,draw=black]
\begin{tikzpicture}[show background rectangle]
\node[align=justify, text width=10.5cm, inner sep=1em]{
Any resources discovered to have randomly generated values should not be a part of the computation of hashes.
};
\node[xshift=1ex, yshift=-.7ex, overlay, fill=white, draw=white, above
right] at (current bounding box.north west) {
\textbf{Requirement 7}: Avoid using dynamic content in hash calculations
};
\end{tikzpicture}
\end{center}
\subsection{Changes in HTTP Response headers} \label{hdrs}
We should include the values of important HTTP Response headers in the hash computation. For example, we encountered in Figure \ref{fig:ia_cache_miss} a scenario where the value of the HTTP Response header ``X-Page-Cache'' indicates whether content is served from the cache or not. Another example is the value of the HTTP Response header ``Location''. This header is included in the response when the HTTP response status code is ``3xx''. By hashing the value of this header, we can identify whether mementos are served from different URI-Ms, and we want to keep track of such behavior. Another important header is ``Content-Type''. For instance, we may know that the content of an image has not been tampered with, but that only the format of the image changed to PNG, which causes a different hash. Rosenthal et al. \cite{migrationofwebcontent} implemented a proof-of-concept system that demonstrated how an archive can use HTTP content negotiation to transparently migrate resources from one MIME type to another (e.g., image/gif to image/png). This can be useful if a format type becomes obsolete (as recently happened with Adobe Flash \cite{flash}) or otherwise legally encumbered (as happened with GIF \cite{gif}).
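A sketch of how selected headers can be folded into the digest (this particular header selection and the SHA-256 choice are illustrative, not prescriptive):

```python
import hashlib

# Headers whose values carry verifiable information about the memento;
# which headers to include is a policy decision, not fixed here.
HASHED_HEADERS = ("Content-Type", "Location")

def memento_hash(body: bytes, headers: dict) -> str:
    """Hash the entity body together with selected response headers
    (Requirement 8), in a fixed order so the digest is repeatable."""
    h = hashlib.sha256(body)
    for name in HASHED_HEADERS:
        h.update(name.encode() + b":" + headers.get(name, "").encode())
    return h.hexdigest()
```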
\vspace{.5cm}
\begin{center}
\tikzstyle{background rectangle}=[thin,draw=black]
\begin{tikzpicture}[show background rectangle]
\node[align=justify, text width=10.5cm, inner sep=1em]{
Important HTTP Response headers should be included in the hash computation.
};
\node[xshift=1ex, yshift=-.7ex, overlay, fill=white, draw=white, above
right] at (current bounding box.north west) {
\textbf{Requirement 8}: Include HTTP Response headers
};
\end{tikzpicture}
\end{center}
\section{Conclusions}
In this paper, we emphasize the importance of timestamping archived web pages, as the number of public and private web archives is increasing, and we do not have the same level of trust in all archives (e.g., Michael's Evil Wayback). We showed that the existing blockchain-based timestamping services do not accept URIs. They accept data by value, such as images and text. Being able to reproduce the same hash of a particular archived web page over time is the key part in the process of generating trusted timestamps. Thus, we discussed several difficulties of generating repeatable hashes of archived web pages and introduced several requirements that should be fulfilled when computing hashes. The proposed requirements include:
\begin{enumerate}
\item A generated hash must be repeatable (Section \ref{issues_section})
\item Generate a hash on a composite memento (Section \ref{composite})
\item Exclude archive-specific resources (Section \ref{archive_specific})
\item Avoid resources from the live web (Section \ref{live})
\item Avoid content served from the cache (Section \ref{cache})
\item Changes in TimeMaps might affect the computation of hashes (Section \ref{timemaps})
\item Avoid including dynamic content or randomly generated content (Section \ref{dynamic})
\item Include selected HTTP Response headers in hash calculation (Section \ref{hdrs})
\end{enumerate}
In future work, we will explore the above requirements by observing a set of archived web pages from several web archives for a period of time. This will help us identify the type of changes in the content of archived resources that might affect generating repeatable hashes.
\section{ACKNOWLEDGEMENTS}
This work is supported in part by The Andrew W. Mellon Foundation (AMF) grant 11600663.
\bibliographystyle{splncs03}
In the last couple of years an intense and successful effort in extending
unquenched lattice calculations
towards realistic values of quark masses, small lattice spacings and large
volumes has
been undertaken using a variety of algorithmic techniques and lattice actions.
A review of the salient features of the various discretization schemes
currently employed can be found in Ref.~\cite{Jansen:2008vs}.
Of particular relevance to the
current work are the calculations of the low-lying baryon spectrum
using two degenerate flavors ($N_f=2$)
of light dynamical quarks. Such studies have been carried out
by the MILC collaboration
\cite{Bernard:2001av,Aubin:2004fs} using Kogut-Susskind fermions
and by the European Twisted Mass Collaboration (ETMC)~\cite{Alexandrou:2008tn} for the nucleon ($N$)
and $\Delta$ baryons using twisted mass fermions. There are also
baryon mass calculations
using two degenerate flavors of light quarks and
a strange quark with the mass tuned to its physical value ($N_f=2+1$)
mainly using clover improved Wilson fermions with different levels of
smearing, such as the calculation of the nucleon mass
by the QCDSF-UKQCD collaboration~\cite{AliKhan:2003cu},
and the evaluation of the octet and decuplet
spectrum by the PACS-CS~\cite{Aoki:2008sm} and
BMW~\cite{Durr:2008zz} collaborations.
The LHP Collaboration
computed the octet and decuplet spectrum using a hybrid
action with domain wall valence
fermions
on asqtad improved staggered sea quarks~\cite{WalkerLoud:2008bp}. Preliminary results on
the nucleon mass are also computed using $N_f=2+1$ domain wall fermions by the RBC-UKQCD
collaboration~\cite{Antonio:2006px,Antonio:2006zz}.
In this work we study the low-lying spectrum of the baryon
octet and decuplet
with twisted mass fermions at maximal twist.
The light quarks are dynamical degrees of freedom
while
in the strange sector we use an Osterwalder-Seiler
valence quark, following the approach employed in the
study of the pseudo scalar meson decay constants~\cite{Blossier:2007vv,Blossier:2009bx}.
The bare strange valence quark mass is taken to be the
same as the one determined in the meson studies tuned by requiring
that the mass of the kaon at the physical point matches
its physical value.
Using the ETMC $N_f=2$ configurations~\cite{Boucaud:2007uk,Boucaud:2008xu} we
calculate the baryon spectrum for pion masses in the range of 270~MeV to 500~MeV
and at two values of the
lattice spacing corresponding to $\beta=3.9$ and $\beta=4.05$
with $r_0/a=5.22(2)$ and $r_0/a=6.61(3)$, respectively, where $r_0$ is determined from the force between two static quarks.
Results are also obtained at a third $\beta$-value, namely $\beta=3.8$, which corresponds to $r_0/a=4.46(3)$.
The latter results are not
taken into account in the final analysis due to large
autocorrelation effects observed in the Monte Carlo history for quantities
like the PCAC mass
and the plaquette at small sea quark masses.
Data at $\beta=3.8$ are only used as
a consistency check of the continuum extrapolation.
For the nucleon mass we also performed the calculation at an even finer value
of the lattice spacing corresponding
to $r_0/a=8.31(5)$ and $\beta=4.2$ to ensure that indeed the
continuum extrapolation using a
weighted average with results at $\beta=3.9$ and $\beta=4.05$ is valid.
We find that the baryon masses considered here show a very weak
dependence on the lattice spacing and are fully compatible with an ${\cal O}(a^2)$ behaviour with an almost
vanishing coefficient of the $a^2$ term.
This justifies neglecting the ${\cal O}(a^2)$
term in extrapolating results to the continuum limit.
For a fixed value of the lattice spacing we have used up to five different light quark
masses and two different volumes.
The corresponding $m_{\pi}L$ values are in the range 3.3 to 7.4, where
$L$ is the spatial extent of the lattice.
Using these various values of the lattice spacing, quark masses and volumes
allows us to estimate the volume corrections and perform a continuum and chiral extrapolation.
The good precision of our results on the baryon masses allows us to perform
a study of chiral extrapolations to
the physical point. This study shows that one of
our main uncertainties in predicting the mass at the physical point is
caused by the chiral extrapolations.
Another source of systematic error is the partially quenched approximation
that we have used.
An important issue is the restoration of the
explicitly broken isospin symmetry in the continuum limit.
At finite lattice spacing, baryon masses
display $\mathcal{O}(a^2)$ isospin breaking effects.
There are, however, theoretical arguments \cite{Frezzotti:2007qv} and numerical evidence
\cite{Dimopoulos:2008sy,Jansen:2008vs} that these isospin breaking effects are
particularly pronounced
for the neutral pseudo scalar mass whereas for other quantities studied
so far by ETMC they are compatible with zero.
In this paper we will demonstrate that also in the baryon sector these isospin
breaking
effects are in general small or even compatible with zero.
For a preliminary account of these
results see Ref.~\cite{Latt08_Vincent}.
The paper is organized as follows:
The details of our lattice setup, namely those concerning
the twisted mass action,
the parameters of the simulations and the interpolating fields used,
are given in Section~II.
Section~III contains the numerical results of the baryon masses computed
for different
lattice volumes, lattice spacings and bare quark masses
as well as the Gell-Mann Okubo relations that are supposed to be
fulfilled in the exact SU(3) limit.
Lattice artifacts, including finite volume and discretization errors are discussed in Section IV, with special
emphasis on the $\mathcal{O}(a^2)$ isospin breaking effects inherent
in the twisted mass formulation
of lattice QCD.
The chiral extrapolations are analyzed in Section~V.
Section~VI contains a comparison with other existing calculations and
conclusions are finally drawn in Section~VII.
\section{Lattice setup}
\subsection{The lattice action}
For the gauge fields we use the tree-level Symanzik improved
gauge action~\cite{Weisz:1982}, which includes besides the
plaquette term $U^{1\times1}_{x,\mu,\nu}$ also rectangular $(1\times2)$ Wilson
loops $U^{1\times2}_{x,\mu,\nu}$
\begin{equation}
\label{eq:Sg}
S_g = \frac{\beta}{3}\sum_x\Biggl( b_0\sum_{\substack{
\mu,\nu=1\\1\leq\mu<\nu}}^4\left \{1-\operatorname{Re}\operatorname{Tr}(U^{1\times1}_{x,\mu,\nu})\right \}\Bigr.
\Bigl.+
b_1\sum_{\substack{\mu,\nu=1\\\mu\neq\nu}}^4\left \{1
-\operatorname{Re}\operatorname{Tr}(U^{1\times2}_{x,\mu,\nu})\right \}\Biggr)\,
\end{equation}
with $b_1=-1/12$ and the
(proper) normalization condition $b_0=1-8b_1$. Note that at $b_1=0$ this
action reduces to the usual Wilson plaquette gauge action.
The fermionic action for two degenerate flavors of quarks
in twisted mass QCD is given by
\begin{equation}
S_F= a^4\sum_x \bar{\chi}(x)\bigl(D_W[U] + m_0
+ i \mu \gamma_5\tau^3 \bigr ) \chi(x)
\label{S_tm}
\end{equation}
with $\tau^3$ the Pauli matrix acting in
the isospin space, $\mu$ the bare twisted mass
and $D_W$ the massless Wilson-Dirac operator given by
\begin{equation}
D_W[U] = \frac{1}{2} \gamma_{\mu}(\nabla_{\mu} + \nabla_{\mu}^{*})
-\frac{ar}{2} \nabla_{\mu}
\nabla^*_{\mu}
\end{equation}
where
\begin{equation}
\nabla_\mu \psi(x)= \frac{1}{a}\biggl[U_\mu(x)\psi(x+a\hat{\mu})-\psi(x)\biggr]
\hspace*{0.5cm} {\rm and}\hspace*{0.5cm}
\nabla^*_{\mu}\psi(x)=-\frac{1}{a}\biggl[U_{\mu}^\dagger(x-a\hat{\mu})\psi(x-a\hat{\mu})-\psi(x)\biggr]
\quad .
\end{equation}
Maximally twisted Wilson quarks are obtained by setting the untwisted quark mass $m_0$ to its critical value $m_{\rm cr}$,
while the twisted
quark mass parameter $\mu$ is kept non-vanishing in order to be away from the chiral limit.
In \eq{S_tm} the quark fields $\chi$
are in the so-called ``twisted basis". The ``physical basis" is obtained for
maximal twist by the simple transformation
\begin{equation}
\psi(x)=\exp\left(\frac {i\pi} 4\gamma_5\tau^3\right) \chi(x),\qquad
\overline\psi(x)=\overline\chi(x) \exp\left(\frac {i\pi} 4\gamma_5\tau^3\right)
\quad.
\end{equation}
In terms of the physical fields the action is given by
\begin{equation}
S_F^{\psi}= a^4\sum_x \bar{\psi}(x)\left(\frac 12 \gamma_\mu
[\nabla_\mu+\nabla^*_\mu]+i \gamma_5\tau^3 \left(-
\frac{ar}{2} \;\nabla_\mu\nabla^*_\mu+ m_{\rm cr}\right )
+ \mu \right ) \psi(x)\quad.
\label{S_ph}
\end{equation}
In this paper, unless otherwise stated, the quark fields will be understood as ``physical fields'',
$\psi$, in particular when we define the baryonic interpolating fields.
A crucial advantage of the twisted mass formulation is
the fact that, by tuning the bare untwisted quark mass $m_0$ to its critical value
$m_{\rm cr}$, physical observables are automatically
${\cal O}(a)$ improved.
In practice, we implement
maximal twist of Wilson quarks by tuning to zero the bare untwisted current
quark mass, commonly called PCAC mass, $m_{\rm PCAC}$, which is proportional to
$m_0 - m_{\rm cr}$ up to ${\cal O}(a)$ corrections. As detailed in Ref.~\cite{ETMClong},
$m_{\rm PCAC}$ is conveniently evaluated through
\begin{equation}
m_{\rm PCAC}=\lim_{t/a \gg 1}\frac{\sum_{\bf x}\langle \partial_4 \tilde{A}^b_4({\bf x},t) \tilde{P}^b(0) \rangle}
{2\sum_{\bf x} \langle \tilde{P}^b({\bf x},t)\tilde{P}^b(0)\rangle}, \hspace*{1cm} b=1,2 \quad,
\label{PCAC mass}
\end{equation}
where $\tilde{A}^b_\mu=\bar{\chi}\gamma_\mu \gamma_5 \frac{\tau^b}{2}\chi$ is
the
axial vector current and
$\tilde{P}^b=\bar{\chi}\gamma_5 \frac{\tau^b}{2}\chi$ is
the pseudo scalar density in the twisted basis. The large $t/a$ limit
is required in order to isolate the contribution of the
lowest-lying charged pseudo scalar meson state in the correlators of \eq{PCAC mass}.
This way of determining $m_{\rm PCAC}$ is equivalent
to imposing on the lattice the validity of the axial Ward identity between the vacuum and the charged one-pion zero three-momentum state:
\begin{equation}
\partial_\mu \tilde{A}_\mu^b = 2m_{\rm PCAC} \tilde{P}^b,\;\quad b=1,2 \quad.
\end{equation}
The value of $m_{\rm cr}$ is determined at each $\beta$ value at the lowest
twisted mass used in our simulations, a procedure that preserves ${\cal O}(a)$ improvement
and keeps ${\cal O}(a^2)$ small~\cite{Boucaud:2008xu,Frezzotti:2005gi}.
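Numerically, Eq.~(\ref{PCAC mass}) amounts to taking the symmetric lattice time derivative of the $\langle \tilde{A}_4 \tilde{P}\rangle$ correlator, dividing by $2\langle \tilde{P}\tilde{P}\rangle$, and reading off the plateau at large $t/a$. The following schematic illustration uses synthetic single-state correlators (it is not the ETMC analysis code); for a single state the ratio is exactly flat in $t$, mimicking the large-$t/a$ limit:

```python
import math

def pcac_ratio(ap_corr, pp_corr, t):
    """Symmetric lattice time derivative of <A_4 P> over 2<PP> at slice t."""
    dAP = 0.5 * (ap_corr[t + 1] - ap_corr[t - 1])
    return dAP / (2.0 * pp_corr[t])

# Synthetic correlators dominated by a single state of mass a*m = 0.2:
#   <A_4 P>(t) = cA e^{-mt},   <PP>(t) = cP e^{-mt}.
# The ratio then equals -(cA / 2 cP) sinh(m) at every t.
am, cA, cP = 0.2, 0.8, 1.5
ap = [cA * math.exp(-am * t) for t in range(48)]
pp = [cP * math.exp(-am * t) for t in range(48)]
plateau = pcac_ratio(ap, pp, 20)
```

In real data the plateau only sets in once excited-state contributions have died out, which is why the large-$t/a$ limit is taken in Eq.~(\ref{PCAC mass}).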
The twisted mass fermionic action breaks parity and isospin at
non-vanishing lattice spacing, as it is apparent from the form of the Wilson term in
Eq.~(\ref{S_ph}).
In particular, the isospin breaking in physical observables is a
cut-off effect of ${\cal O}(a^2)$~\cite{Frezzotti:2004}.
To simulate the strange quark in the valence sector several choices are possible.
We consider a quenched Osterwalder-Seiler fermion \cite{Osterwalder:1977pc} with the
following action in the twisted basis:
\begin{equation}
S_s= a^4\sum_x \bar{\chi_s}(x)\bigl(D_W[U] + m_0
+ i \mu_s \gamma_5 \bigr ) \chi_s(x) \; .
\label{S_OS}
\end{equation}
This is naturally realized in the twisted mass approach by introducing an additional
doublet of strange quarks and keeping only the positive diagonal component of $\tau_3$.
The $m_0$ value is taken to be equal to the critical mass determined in the light sector,
thus guaranteeing the $O(a)$ improvement in any observable.
The reader interested in the advantages of this mixed action in the mesonic sector is referred to Refs.~\cite{Frezzotti:2004wz, AbdelRehim:2006ra, AbdelRehim:2006ve, Blossier:2007vv, Blossier:2009bx}.
\subsection{Simulation details}
The input parameters of the calculation, namely $\beta$, $L/a$ and $a\mu$
are summarized in Table~\ref{Table:params}. The corresponding lattice spacing $a$
and the pion mass values, spanning a mass range
from 270~MeV to 500~MeV, are taken
from Ref.~\cite{Urbach:2007}.
At $m_{\pi}\approx 300$ MeV we have simulations
for lattices of spatial size $L=2.1$~fm and $L=2.7$~fm at $\beta=3.9$
allowing us to investigate finite volume effects.
Finite lattice spacing effects are investigated using two sets of
results at $\beta=3.9$ and $\beta=4.05$. The set at $\beta=3.8$ is used only as a
cross-check and to estimate cut-off errors.
These sets of gauge ensembles allow us to estimate all the systematic
errors in order to have reliable predictions for the baryon spectrum.
\begin{table}[h]
\begin{center}
\begin{tabular}{c|llllll}
\hline\hline
\multicolumn{6}{c}{ $\beta=4.05$, $a=0.0666(6)$~fm from $f_\pi$~\cite{Urbach:2007}, ${r_0/a}=6.61(3)$ }\\
\hline
$32^3\times 64$, $L=2.13$~fm &$a\mu$ & 0.0030 & 0.0060 & 0.0080 & 0.012\\
& No. of confs. &269 &253 & 409 &182\\
&$m_\pi$~(GeV) & 0.3070(18) & 0.4236(18) & 0.4884(15) & 0.6881(18) \\
&$m_\pi L$ & 3.31 & 4.57 & 5.27 & 7.43 \\ \hline\hline
\multicolumn{6}{c}{$\beta=3.9$, $a=0.0855(6)$~fm, from $f_\pi$~\cite{Urbach:2007}, ${r_0/a}=5.22(2)$}\\\hline
$24^3\times 48$, $L=2.05$~fm &$a\mu$ & 0.0030 & 0.0040 & 0.0064 & 0.0085 & 0.010 \\
&No. of confs & - &782 &545 & 348 &477 \\
&$m_\pi$~(GeV) & - & 0.3131(16) & 0.3903(9) & 0.4470(12) & 0.4839(12)\\
&$m_\pi L$ & & 3.25 & 4.05 & 4.63 & 5.03 \\
$32^3\times 64$, $L=2.74$~fm &$a\mu$ & 0.003 & 0.004 & & & \\
& No. of confs & 659 &232 & & & \\
& $m_\pi$~(GeV)& 0.2696(9) & 0.3082(6) & & & \\
& $m_\pi L$ & 3.74 & 4.28 &&& \\\hline \hline
\multicolumn{6}{c}{ $\beta=3.8$, $a=0.0995(7)$~fm ${r_0/a}=4.46(3)$}\\\hline
$24^3\times 48$, $L=2.39$~fm &$a\mu$ & 0.0060 & 0.0080 & 0.0110 & 0.0165\\
& No. of confs & 215 & 302 & 248 & 244& \\
&$m_\pi$~(GeV) & 0.3667(17) & 0.4128(16) & 0.4799(9) & 0.5855(10) \\
&$m_\pi L$ & 4.44 & 5.00 & 5.81 & 7.09 \\ \hline
\end{tabular}
\caption{Input parameters ($\beta,L,\mu$) of our lattice calculation and corresponding lattice spacing ($a$) and pion mass ($m_{\pi}$).}
\label{Table:params}
\end{center}
\vspace*{-.0cm}
\end{table}
\subsection{Tuning of the bare strange quark mass}
In a previous paper from the ETM collaboration \cite{Blossier:2007vv}, pseudo scalar meson
masses have been computed for different values of the sea and valence quark masses for the
$\beta=3.9$ gauge configurations. Using the experimental value of
the mass ratio of the kaon to the pion, $m_K/m_\pi$, the
bare strange quark mass can be set.
We use the value of $a\mu_s = 0.0217(22)$ at $\beta= 3.9$
taken from Table~2 of Ref.~\cite{Blossier:2007vv}.
In a more recent study of the pseudo scalar decay constant of kaons and
D-mesons \cite{Blossier:2009bx}, the computation was extended to $\beta = 3.8$
and $\beta=4.05$. However, that analysis is still preliminary, and an accurate extraction of the quark masses is ongoing.
One can obtain an estimate of the bare strange quark mass at a given value of
$\beta$ by taking the results at $\beta=3.9$ as a reference and using the scaling relation~\cite{private:2008}:
\begin{equation}
a \mu_s(\beta) = \frac{Z_p(\beta)}{Z_p(\beta=3.9)} \frac{ a(\beta)}{a(\beta=3.9)} a \mu_s(\beta=3.9)\; .
\label{s-mass}
\end{equation}
The values we use for $\beta=3.8$ and $\beta=4.05$ given
in Table~\ref{Table:mus}
are obtained by applying Eq.~(\ref{s-mass}).
We use the value of the renormalization constant $Z_p(\beta)$
found in the preliminary analysis of Ref.~\cite{Dimopoulos:2007qy}
within the RI'-MOM scheme. This value is in agreement with a complementary
analysis given in Ref.~\cite{Cyprus:2009}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{c|llll}
\hline\hline
& $\beta= 3.8$ & $\beta= 3.9$ & $\beta= 4.05$ \\
$a \mu_s$ & $0.0208(15)(48)$ & $0.0217(22)$ & $0.0166(18)(29)$ \\
\hline\hline
\end{tabular}
\caption{Bare strange quark mass used in the valence sector for different $\beta$ values.}
\label{Table:mus}
\end{center}
\vspace*{-0.8cm}
\end{table}
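As a numerical illustration of Eq.~(\ref{s-mass}): with the lattice spacings of Table~\ref{Table:params} and, purely as a placeholder, a $Z_p$ ratio of one, the relation gives $a\mu_s(\beta=4.05)\approx 0.0169$, close to the value $0.0166(18)(29)$ of Table~\ref{Table:mus}; the residual difference is accounted for by the actual $Z_p$ ratio:

```python
# Illustrative application of the scaling relation, Eq. (s-mass).
# Zp_ratio = Z_p(beta=4.05)/Z_p(beta=3.9) is set to 1 here as a
# placeholder; the real value comes from the RI'-MOM analysis cited
# in the text.
a_390, a_405 = 0.0855, 0.0666          # lattice spacings in fm
amus_390 = 0.0217                      # bare strange mass at beta = 3.9
Zp_ratio = 1.0
amus_405 = Zp_ratio * (a_405 / a_390) * amus_390
```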
\subsection{Interpolating fields}
The low lying baryons belonging to the octet and decuplet representations
of $SU(3)$ are given in Figs.~\ref{fig:interp. octet}
and \ref{fig:interp. decuplet} respectively.
They are classified by giving the isospin, $I$, the third component of the isospin, $I_3$, the strangeness (s), spin and parity.
In order to extract their masses in lattice QCD we evaluate two point correlators. We use interpolating fields to create these states from the vacuum
that have the correct quantum numbers and reduce to the quark model wave functions in the non-relativistic limit.
The interpolating fields used in this work are collected in Tables~\ref{Table:interpolating_octet}~\cite{Ioffe:1981kw, Leinweber:1990dv} and~\ref{Table:interpolating_decuplet}~\cite{Ioffe:1981kw, Leinweber:1992hy} for
the octet and decuplet respectively.
\begin{figure}[h!]
\begin{minipage}{8cm}
\epsfxsize=8truecm
\epsfysize=6truecm
\mbox{\epsfbox{Baryon_octet.ps}}
\caption{The low lying baryons belonging to the octet representation labeled by the value of $I_3$ and hypercharge.}
\label{fig:interp. octet}
\end{minipage}
\hfill
\begin{minipage}{8cm}
\epsfxsize=8truecm
\epsfysize=6truecm
\mbox{\epsfbox{Baryon_decuplet.ps}}
\caption{The low lying baryons belonging to the decuplet representation labeled by the value of $I_3$ and hypercharge.}
\label{fig:interp. decuplet}
\end{minipage}
\end{figure}
\begin{table}[h!]
\begin{center}
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{c|c c c c}
\hline
Strangeness & Baryon & Interpolating field & $I$ & $I_z$ \\ \hline
\multirow{2}{*}{$s=0$} & $p$ & $~\chi^{p} = \epsilon_{abc}(u_a^T C\gamma_5 d_b) u_c~$ & $1/2$ & $+1/2$\\
& $n$ & $~\chi^{n} = \epsilon_{abc}(d_a^T C\gamma_5 u_b) d_c~$ & $1/2$ & $-1/2$\\
\hline
\multirow{4}{*}{$s=1$} & $\Lambda$ &$~\chi^{\Lambda^8}= \frac{1}{\sqrt{6}}\epsilon_{abc}\big\{2(u_a^T C\gamma_5 d_b )s_c +(u_a^T C\gamma_5 s_b)d_c -(d_a^T C\gamma_5 s_b)u_c\big\}~$ & $0$ & $0$\\
& $\Sigma^+$ & $~\chi^{\Sigma^+} = \epsilon_{abc}(u_a^T C\gamma_5 s_b )u_c ~$ & $1$ & $+1$\\
& $\Sigma^0$ & $~\chi^{\Sigma^0} = \frac{1}{\sqrt{2}}\epsilon_{abc}\big\{(u_a^T C\gamma_5 s_b )d_c +(d_a^T C\gamma_5 s_b)u_c \big\}~$ & $1$ & $+0$\\
& $\Sigma^-$ & $~\chi^{\Sigma^-} = \epsilon_{abc}(d_a^T C\gamma_5 s_b )d_c ~$ & $1$ & $-1$\\
\hline
\multirow{2}{*}{$s=2$} & $\Xi^0$ & $~\chi^{\Xi^{0}} = \epsilon_{abc}(s_a^T C\gamma_5 u_b) s_c~$ & $1/2$ & $+1/2$ \\
& $\Xi^-$ & $~\chi^{\Xi^{-}} = \epsilon_{abc}(s_a^T C\gamma_5 d_b) s_c~$& $1/2$ & $-1/2$\\
\hline \hline
\end{tabular}
\caption{Interpolating fields and quantum numbers for the baryons in the octet representation.}
\label{Table:interpolating_octet}
\end{center}
\end{table}
\begin{table}[h!]
\begin{center}
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{c|c c c c}
\hline\hline
Strangeness & Baryon & Interpolating field & $I$ & $I_z$ \\ \hline
\multirow{4}{*}{$s=0$} &$\Delta^{++}$ & $~\chi^{\Delta^{++}}_{\mu} = \epsilon_{abc}(u_a^T C\gamma_{\mu} u_b) u_c~$ &$3/2$ & $+3/2$\\
&$\Delta^{+}$ & $~\chi^{\Delta^{+}}_{\mu} = \frac{1}{\sqrt{3}}\epsilon_{abc}\big\{2 (u_a^T C\gamma_{\mu} d_b) u_c + (u_a^T C\gamma_{\mu} u_b) d_c\big\} ~$ &$3/2$ & $+1/2$ \\
& $\Delta^{0}$ & $~\chi^{\Delta^{0}}_{\mu} = \frac{1}{\sqrt{3}}\epsilon_{abc}\big\{2 (d_a^T C\gamma_{\mu} u_b) d_c + (d_a^T C\gamma_{\mu} d_b) u_c\big\} ~$ &$3/2$ & $-1/2$\\
& $\Delta^{-}$ & $~\chi^{\Delta^{-}}_{\mu} = \epsilon_{abc}(d_a^T C\gamma_{\mu} d_b) d_c~$ &$3/2$ & $-3/2$ \\
\hline
\multirow{3}{*}{$s=1$} & $\Sigma^{\ast +}$ & $~\chi^{\Sigma^{\ast +}}_{\mu}= \frac{1}{\sqrt{3}}\epsilon_{abc}\big\{ (u^{T}_a C \gamma_{\mu} u_b)s_c+2(s^{T}_a C \gamma_{\mu}u_b) u_c \big\} ~$ & $1$ & $+1$\\
& $\Sigma^{\ast 0}$ & $~\chi^{\Sigma^{\ast 0}}_{\mu} = \sqrt{\frac{2}{3}}\epsilon_{abc}\big\{ (u^{T}_a C \gamma_{\mu}d_b )s_c + (d^{T}_a C \gamma_{\mu}s_b) u_c +(s^{T}_a C \gamma_{\mu}u_b) d_c \big\}~$ &$1$ & $+0$\\
& $\Sigma^{\ast -}$ & $~\chi^{\Sigma^{\ast -}}_{\mu} = \frac{1}{\sqrt{3}}\epsilon_{abc}\big\{ (d^{T}_a C \gamma_{\mu}d_b )s_c +2(s^{T}_a C \gamma_{\mu}d_b) d_c \big\} ~$ &$1$ & $-1$\\
\hline
\multirow{2}{*}{$s=2$} & $\Xi^{\ast 0}$ & $~\chi^{\Xi^{\ast 0}}_{\mu} = \epsilon_{abc}(s_a^T C\gamma_\mu u_b) s_c~$ &$1/2$ & $+1/2$\\
& $\Xi^{\ast -}$ & $~\chi^{\Xi^{\ast -}}_{\mu} = \epsilon_{abc}(s_a^T C\gamma_\mu d_b) s_c~$&$1/2$ & $-1/2$\\
\hline
\multirow{1}{*}{$s=3$} & $\Omega^{-}$ & $~\chi^{\Omega^{-}}_{\mu} = \epsilon_{abc}(s_a^T C\gamma_{\mu} s_b) s_c~$ &$0$ & $+0$\\
\hline \hline
\end{tabular}
\caption{Interpolating fields and quantum numbers for baryons in the decuplet representation.}
\label{Table:interpolating_decuplet}
\end{center}
\end{table}
Local interpolating fields are not optimal for suppressing excited state contributions. We instead apply
Gaussian smearing to each quark field, $q({\bf x},t)$: $q^{\rm smear}({\bf x},t) = \sum_{\bf y} F({\bf x},{\bf y};U(t)) q({\bf y},t)$
using the gauge invariant smearing function
\begin{equation}
F({\bf x},{\bf y};U(t)) = (1+\alpha H)^n({\bf x},{\bf y};U(t)),
\end{equation}
constructed from the hopping matrix,
\begin{equation}
H({\bf x},{\bf y};U(t))= \sum_{i=1}^3 \biggl( U_i({\bf x},t)\delta_{{\bf x,y}-i} + U_i^\dagger({\bf x}-i,t)\delta_{{\bf x,y}+i}\biggr).
\end{equation}
Furthermore we apply APE smearing to the spatial links that enter the hopping matrix.
The parameters of the Gaussian and APE smearing are the same as those used in our previous work devoted to the nucleon and $\Delta$
masses~\cite{Alexandrou:2008tn}.
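As a minimal sketch of the smearing step, consider a one-dimensional lattice with trivial (unit) gauge links; repeated application of $(1+\alpha H)$ then spreads a point source into a Gaussian-like profile. The parameters below are illustrative and not the tuned values of Ref.~\cite{Alexandrou:2008tn}.

```python
def hop(field):
    # 1D hopping term with trivial (unit) gauge links and periodic boundaries
    L = len(field)
    return [field[(x - 1) % L] + field[(x + 1) % L] for x in range(L)]

def gaussian_smear(field, alpha, n):
    # apply (1 + alpha * H)^n iteratively
    for _ in range(n):
        h = hop(field)
        field = [f + alpha * fh for f, fh in zip(field, h)]
    return field

L = 32
src = [0.0] * L
src[0] = 1.0                        # point source at the origin
sm = gaussian_smear(src, alpha=0.5, n=30)
norm = sum(sm)
profile = [s / norm for s in sm]    # normalized smearing profile
```

The resulting profile is symmetric about the source and falls off monotonically away from it, mimicking a Gaussian wave-function smearing.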
\subsection{Two-point correlators}
To extract masses in the rest frame we consider two-point correlators defined by
\begin{eqnarray}
C^\pm_X(t,\vec{p}=\vec{0}) = \frac 1 2 {\rm Tr}(1 \pm \gamma_4) \sum_{\bf x_{\rm sink}}
\langle J_X( {\bf x}_{\rm sink}, t_{\rm sink}) \bar J_X({\bf x}_{\rm source}, t_{\rm source})\rangle,\qquad
t=t_{\rm sink}-t_{\rm source} \quad.
\label{C_X}\end{eqnarray}
Space-time reflection symmetries of the action and the anti-periodic boundary conditions in the temporal direction for the quark fields
imply that $C_X^+(t) = -C_X^-(T-t)$ for zero three-momentum correlators. In order to decrease errors
we therefore average the correlators in the forward and backward directions and define:
\begin{equation}
C_X(t) = C_X^+(t) - C_X^-(T-t) \, .
\end{equation}
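A small sketch of this forward--backward folding, with a synthetic backward correlator constructed to satisfy the symmetry relation exactly:

```python
import math

def fold(corr_plus, corr_minus):
    # C(t) = C^+(t) - C^-(T - t), time indices taken modulo the extent T
    T = len(corr_plus)
    return [corr_plus[t] - corr_minus[(T - t) % T] for t in range(T)]

T, m = 16, 0.5
c_plus = [math.exp(-m * t) for t in range(T)]
c_minus = [-c_plus[(T - t) % T] for t in range(T)]  # enforce C^+(t) = -C^-(T-t)
folded = fold(c_plus, c_minus)                      # equals 2 C^+(t) here
```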
In order to decrease correlation between measurements, we choose the source location randomly on the whole lattice
for each configuration.
Masses are extracted from the so called effective mass which is defined by
\begin{equation}
am_{\rm eff}^X(t)=-\log(C_X(t)/C_X(t-1))= am_X+\log\left(\frac{1+\sum_{i=1}^\infty c_ie^{-\Delta_i (t-1)}}{1+\sum_{i=1}^\infty c_ie^{-\Delta_i t}}\right)
\mathop{\longrightarrow}_{t\rightarrow \infty} am_X \quad,
\label{meff}
\end{equation}
where $\Delta_i= m_i-m_X$ is the mass difference of the excited state $i$ with
respect to the ground state mass $m_X$.
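The approach to the plateau in Eq.~(\ref{meff}) can be illustrated with a synthetic correlator containing a single excited state; the masses and amplitudes below are arbitrary illustrative numbers.

```python
import math

def correlator(t, a0=1.0, am0=0.51, c1=0.4, dm=0.6):
    # synthetic C(t) = A exp(-m0 t) (1 + c1 exp(-Delta t)): ground state plus one excited state
    return a0 * math.exp(-am0 * t) * (1.0 + c1 * math.exp(-dm * t))

def effective_mass(t):
    return -math.log(correlator(t) / correlator(t - 1))

# the effective mass approaches am0 = 0.51 from above as t grows
vals = [effective_mass(t) for t in range(1, 15)]
```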
In Figs.~\ref{fig:meff octet} and \ref{fig:meff decuplet} we show the effective masses of the baryons in the octet
and decuplet representation respectively.
As can be seen, a plateau region can be identified for all baryons. What is
shown in these figures are effective masses extracted from correlators with
smearing applied both at the sink and the source.
Although local correlators are expected to yield the same value in the
large-time limit,
smearing suppresses excited-state contributions,
yielding a plateau at earlier time separations and a better accuracy in the mass extraction.
Our fitting procedure to extract $m_X$ is as follows:
the mass is obtained from the leading term in Eq.~(\ref{meff}), i.e.\ from a constant fit to $m_X$. A second fit, including the first excited state,
allows us to estimate the
systematic error of the previously determined $m_X$
due to excited states for a given plateau range. The plateau
range is then chosen such that the
systematic error on $m_X$ drops below 50\% of its statistical
error.
This criterion is in most cases in agreement
with a $\chi^2/{\rm d.o.f.}< 1$.
In the cases in which this criterion is not satisfied a careful examination of the effective mass is made to ensure that the fit range is in the
plateau region. The results for the masses of the octet and decuplet
at $\beta=3.9$ are collected in Tables~\ref{tab:masses-octet-3.9} and \ref{tab:masses-decuplet-3.9} respectively. The corresponding
results for the masses at $\beta=4.05$ are given in Tables~\ref{tab:masses-octet-4.05}
and \ref{tab:masses-decuplet-4.05}.
The errors are evaluated using both jackknife and the $\Gamma$-method~\cite{Wolff:2004} to ensure consistency.
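A minimal sketch of the jackknife part of this error estimate; for the plain mean the jackknife error reduces exactly to the standard error of the mean.

```python
import math

def jackknife_error(samples, estimator):
    # leave-one-out resampling: the error comes from the spread of the jackknife replicas
    n = len(samples)
    full = estimator(samples)
    replicas = [estimator(samples[:i] + samples[i + 1:]) for i in range(n)]
    mean_rep = sum(replicas) / n
    var = (n - 1) / n * sum((r - mean_rep) ** 2 for r in replicas)
    return full, math.sqrt(var)

mean = lambda xs: sum(xs) / len(xs)
val, err = jackknife_error([1.0, 1.2, 0.9, 1.1, 1.05], mean)
```

The same routine applies to any derived quantity (e.g.\ an effective-mass fit) by replacing the estimator.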
\begin{figure}[h!]
\vspace*{1cm}
\begin{minipage}{8cm}
\epsfxsize=8truecm
\epsfysize=6truecm
\mbox{\epsfbox{meff_octet.eps}}
\caption{Effective masses of the octet states for $\beta = 3.9$, $a\mu = 0.004$ on a $32^3\times 64$ lattice using $232$ configurations.}
\label{fig:meff octet}
\end{minipage}
\hfill
\begin{minipage}{8cm}
\epsfxsize=8truecm
\epsfysize=6truecm
\mbox{\epsfbox{meff_decuplet.eps}}
\caption{Effective masses of the decuplet states for $\beta = 3.9$, $a\mu = 0.004$ on a $32^3\times 64$ lattice using $232$ configurations.}
\label{fig:meff decuplet}
\end{minipage}
\end{figure}
\section{Results}
The bulk of the numerical results are presented in this section. Baryon masses are given in lattice units. Our procedure to convert the results to physical
units will be discussed in the next section.
\subsection{Baryon masses}
In Tables~\ref{tab:masses-octet-3.9} to \ref{tab:masses-decuplet-4.05}
we present the masses of
the octet and decuplet states with the lattice input parameters given in
Table~\ref{Table:params}. For the isospin multiplets we have computed separately the
masses corresponding to each isospin component as well as their averaged value.
These results (averaged values in case of isospin multiplets) are displayed in
Figs.~\ref{fig:masses_octet} and \ref{fig:masses_decuplet}. The $\beta=3.9$, $L/a=24$ data are
linked by dotted lines to guide the eye. An inspection of the plots indicates that the lattice artifacts, studied in
detail in the next section, are small. Notice that the natural order of the $\Sigma^{\ast}$
and $\Xi^\ast$ states comes out correctly for $m_\pi\leq 300$ MeV, while for larger masses
this order is inverted.
\begin{table}[h!]
\begin{small}
\begin{center}
\begin{tabular}{lcccccccccc}
\hline\hline
$\Bigl.\Bigr.a\mu$ & stat. & $am_N$ & $am_{\Lambda}$ & $am_{\Sigma^{\rm Av}}$& $am_{\Sigma^+}$ & $am_{\Sigma^0}$ &$am_{\Sigma^-}$&$am_{\Xi^{\rm Av}}$ &$am_{\Xi^0}$ &$am_{\Xi^-}$ \\
\hline\hline
\multicolumn{11}{c}{$24^3\times 48$}\\
\hline
$0.0040$ & 782 & $0.5111(58)$ & $0.5787(42)$ & $0.6075(46)$ & $0.6175(66)$ & $0.6118(48)$ & $0.5959(52)$ & $0.6497(31)$ & $0.6695(42)$ & $0.6372(31)$ \\
$0.0064$ & 545 & $0.5514(49)$ & $0.6017(42)$ & $0.6265(48)$ & $0.6487(72)$ & $0.6278(52)$ & $0.6131(52)$ & $0.6636(36)$ & $0.6876(50)$ & $0.6500(36)$ \\
$0.0085$ & 348 & $0.5786(67)$ & $0.6198(51)$ & $0.6491(55)$ & $0.6679(62)$ & $0.6529(49)$ & $0.6358(46)$ & $0.6728(43)$ & $0.6956(58)$ & $0.6593(43)$ \\
$0.0100$ & 477 & $0.5973(43)$ & $0.6326(36)$ & $0.6522(41)$ & $0.6662(56)$ & $0.6539(43)$ & $0.6429(44)$ & $0.6793(36)$ & $0.6959(49)$ & $0.6683(32)$ \\
\hline\hline
\multicolumn{11}{c}{$32^3 \times 64$}\\
\hline
$0.0030$ & 652 & $0.4958(43)$ & $0.5613(33)$ & $0.5891(42)$ & $0.6069(68)$ & $0.5932(50)$ & $0.5775(39)$ & $0.6382(30)$ & $0.6572(44)$ & $0.6275(26)$ \\
$0.0040$ & 232 & $0.5126(46)$ & $0.5750(35)$ & $0.6117(40)$ & $0.6281(73)$ & $0.6158(40)$ & $0.5960(47)$ & $0.6511(34)$ & $0.6748(46)$ & $0.6358(32)$ \\
\hline\hline
\end{tabular}
\caption{ Baryon masses in the octet representation at $\beta =3.9$ in lattice units. }
\label{tab:masses-octet-3.9}
\end{center}
\end{small}
\end{table}
\begin{table}[h!]
\begin{small}
\begin{center}
\begin{tabular}{lcccccccccccc}
\hline\hline
$\Bigl.\Bigr.a\mu$ & stat. & $am_{\Delta^{++,-}}$ & $am_{\Delta^{+,0}}$&$am_{\Sigma^{\ast {\rm Av}}}$& $am_{\Sigma^{\ast +}}$ & $am_{\Sigma^{\ast 0}}$ &$am_{\Sigma^{\ast -}}$&$am_{\Xi^{\ast {\rm Av}}}$ &$am_{\Xi^{\ast 0}}$ &$am_{\Xi^{\ast -}}$ &$am_{\Omega}$ \\
\hline\hline
\multicolumn{12}{c}{$24^3\times 48$}\\
\hline
$0.0040$ & 782 & $0.660(14)$ & $0.670(13)$ & $0.7166(82)$ & $0.709(11)$ & $0.7226(81)$ & $0.7222(95)$ & $0.7311(51)$ & $0.7381(59)$ & $0.7200(66)$ & $0.8079(52)$ \\
$0.0064$ & 545 & $0.709(11)$ & $0.711(12)$ & $0.7461(84)$ & $0.740(10)$ & $0.7480(93)$ & $0.7489(93)$ & $0.7412(78)$ & $0.7552(76)$ & $0.7344(84)$ & $0.8156(63)$ \\
$0.0085$ & 348 & $0.714(12)$ & $0.733(13)$ & $0.7517(88)$ & $0.739(11)$ & $0.760(11)$ & $0.7645(98)$ & $0.7415(85)$ & $0.7529(81)$ & $0.7367(84)$ & $0.8133(66)$ \\
$0.0100$ & 477 & $0.7531(67)$ & $0.7559(75)$ & $0.7794(66)$ & $0.7808(62)$ & $0.7809(64)$ & $0.7798(69)$ & $0.7618(73)$ & $0.7741(64)$ & $0.7484(74)$ & $0.8284(51)$ \\
\hline\hline
\multicolumn{12}{c}{$32^3 \times 64$}\\
\hline
$0.003$ & 652& 0.6234(139) & 0.6497(133) & 0.6859(96) & 0.6838(93) & 0.6859(106) & 0.7027(101) & 0.7058(50) & 0.7097(58) & 0.7032(53) & 0.7926(49)\\
$0.004$ & 232 & $0.651(16)$ & $0.659(15)$ & $0.713(10)$ & $0.705(12)$ & $0.716(12)$ & $0.7173(99)$ & $0.7291(74)$ & $0.7366(79)$ & $0.7192(72)$ & $0.8037(69)$ \\
\hline\hline
\end{tabular}
\caption{Baryon masses in the decuplet representation at $\beta =3.9$ in lattice units. }
\label{tab:masses-decuplet-3.9}
\end{center}
\end{small}
\end{table}
\begin{table}[h!]
\begin{small}
\begin{center}
\begin{tabular}{lcccccccccc}
\hline\hline
$\Bigl.\Bigr.a\mu$ & stat. & $am_N$ & $am_{\Lambda}$ & $am_{\Sigma^{\rm Av}}$& $am_{\Sigma^+}$ & $am_{\Sigma^0}$ &$am_{\Sigma^-}$&$am_{\Xi^{\rm Av}}$ &$am_{\Xi^0}$ &$am_{\Xi^-}$ \\
\hline\hline
\multicolumn{11}{c}{$32^3\times 64$}\\
\hline
$0.0030$ & 269 & $0.4091(60)$ & $0.4540(38)$ & $0.4761(44)$ & $0.4885(62)$ & $0.4774(47)$ & $0.4651(53)$ & $0.5082(31)$ & $0.5177(39)$ & $0.5007(29)$ \\
$0.0060$ & 253 & $0.4444(47)$ & $0.4792(47)$ & $0.4944(44)$ & $0.5022(66)$ & $0.4960(45)$ & $0.4834(45)$ & $0.5192(42)$ & $0.5277(50)$ & $0.5112(37)$ \\
$0.0080$ & 409 & $0.4714(31)$ & $0.4957(30)$ & $0.5089(31)$ & $0.5179(41)$ & $0.5095(32)$ & $0.5019(31)$ & $0.5262(28)$ & $0.5350(34)$ & $0.5199(25)$ \\
\hline\hline
\end{tabular}
\caption{ Baryon masses in the octet representation at $\beta =4.05$ in lattice units. }
\label{tab:masses-octet-4.05}
\end{center}
\end{small}
\end{table}
\begin{table}[h!]
\begin{small}
\begin{center}
\begin{tabular}{lcccccccccccc}
\hline\hline
$\Bigl.\Bigr.a\mu$ & stat. & $am_{\Delta^{++,-}}$ & $am_{\Delta^{+,0}}$&$am_{\Sigma^{\ast {\rm Av}}}$& $am_{\Sigma^{\ast +}}$ & $am_{\Sigma^{\ast 0}}$ &$am_{\Sigma^{\ast -}}$&$am_{\Xi^{\ast {\rm Av}}}$ &$am_{\Xi^{\ast 0}}$ &$am_{\Xi^{\ast -}}$ &$am_{\Omega}$ \\
\hline\hline
\multicolumn{12}{c}{$32^3\times 64$}\\
\hline
$0.0030$ & 269 & $0.5381(93)$ & $0.5441(93)$ & $0.5728(79)$ & $0.5673(94)$ & $0.5750(86)$ & $0.5734(80)$ & $0.5772(56)$ & $0.5796(54)$ & $0.5750(56)$ & $0.6361(46)$ \\
$0.0060$ & 253 & $0.5505(77)$ & $0.5581(90)$ & $0.5805(66)$ & $0.5754(71)$ & $0.581(11)$ & $0.5844(68)$ & $0.5816(47)$ & $0.5834(50)$ & $0.5802(48)$ & $0.6286(53)$ \\
$0.0080$ & 409 & $0.5918(60)$ & $0.5906(63)$ & $0.6078(59)$ & $0.6044(68)$ & $0.5850(74)$ & $0.6099(57)$ & $0.5940(43)$ & $0.6021(50)$ & $0.5873(43)$ & $0.6461(49)$ \\
\hline\hline
\end{tabular}
\caption{Baryon masses in the decuplet representation at $\beta =4.05$ in lattice units. }
\label{tab:masses-decuplet-4.05}
\end{center}
\end{small}
\end{table}
\begin{figure}[h!]
\begin{minipage}{8cm}
\epsfxsize=8truecm
\epsfysize=6truecm
\mbox{\epsfbox{plot_octet.eps}}
\caption{Octet states measured in our different gauge ensembles. Physical points are indicated by their name symbols (magenta).
The data at $\beta=3.9$, $L/a=24$ are connected by dotted lines to guide the eye.}
\label{fig:masses_octet}
\end{minipage}
\hfill
\begin{minipage}{8cm}
\epsfxsize=8truecm
\epsfysize=6truecm
\mbox{\epsfbox{plot_decuplet.eps}}
\caption{The same as Fig.~\ref{fig:masses_octet} but for the decuplet states. }
\label{fig:masses_decuplet}
\end{minipage}
\end{figure}
\newpage
\subsection{Strange quark mass dependence}
The dependence of the masses of baryons with strangeness on
the bare strange quark mass has been investigated at $\beta = 3.9 $
for $a\mu = 0.004 $.
The results are given in Tables~\ref{tab:squark-octet-3.9} and
\ref{tab:squark-decuplet-3.9} and displayed in Figs.~\ref{fig:squark_octet} and
\ref{fig:squark_decuplet}. The vertical dotted line indicates the value of
the tuned bare strange
quark mass as given in Table \ref{Table:mus}. The $SU(3)$ symmetric
point $\mu_s = \mu$ is given by the nucleon and $\Delta$ mass for the octet and decuplet respectively.
As can be seen, in the $SU(3)$ limit all the octet and decuplet masses converge to a single point, up to
cut-off effects and the fact that we only have $N_f=2$ simulations.
For clarity we only show in Fig.~\ref{fig:squark_octet} the mass of
$\Lambda$, $\Sigma^{\rm Av}$ and $\Xi^{\rm Av}$. They should be degenerate
with the nucleon in the limit of $\mu_s = \mu$.
Indeed, if one computes the nucleon mass with
the same statistics as that used for $\Sigma^{\rm Av}$ and $\Xi^{\rm Av}$,
one finds them to be degenerate within errors, as can be
seen in Fig.~\ref{fig:squark_octet}.
The corresponding results for the decuplet-baryons
are displayed in Fig.~\ref{fig:squark_decuplet}.
As can be seen, the decuplet masses likewise converge to the $\Delta$ mass,
as predicted in the exact $SU(3)$ limit $\mu_s = \mu$.
\begin{figure}[h!]
\vspace*{1cm}
\begin{minipage}{8cm}
\epsfxsize=8truecm
\epsfysize=6truecm
\mbox{\epsfbox{mus_dep_octet_av.eps}}
\caption{Masses for octet baryons
at $\beta = 3.9$ and $a\mu = 0.004$ on a lattice of size $24^3\times 48$ as
a function of $a \mu_s$. The vertical dashed line indicates the value of the tuned bare
strange quark mass. The dotted lines are to guide the eye.}
\label{fig:squark_octet}
\end{minipage}
\hfill
\begin{minipage}{8cm}
\epsfxsize=8truecm
\epsfysize=6truecm
\mbox{\epsfbox{mus_dep_decuplet_av.eps}}
\caption{The same as for Fig.~\ref{fig:squark_octet} but
for the decuplet baryons. }
\label{fig:squark_decuplet}
\end{minipage}
\end{figure}
\begin{table}[h!]
\begin{small}
\begin{center}
\begin{tabular}{lccccccccc}
\hline\hline
$\Bigl.\Bigr.a\mu_s$ & stat. & $am_{\Lambda}$ & $am_{\Sigma^{\rm Av}}$& $am_{\Sigma^+}$ & $am_{\Sigma^0}$ &$am_{\Sigma^-}$&$am_{\Xi^{\rm Av}}$ &$am_{\Xi^0}$ &$am_{\Xi^-}$ \\
\hline\hline
\multicolumn{10}{c}{$24^3\times 48$}\\
\hline
$0.0064$ & 597 & $0.533(8)$ & $0.545(7)$ & $0.549(12)$ & $0.563(5)$ & $0.537(6)$ & $0.545(7)$ & $0.560(11)$ & $0.530(6)$ \\
$0.0085$ & 316 & $0.537(10)$ & $0.557(11)$ & $0.559(19)$ & $0.557(12)$ & $0.554(8)$ & $0.563(9)$ & $0.585(14)$ & $0.549(8)$ \\
$0.0100$ & 316 & $0.542(9)$ & $0.564(10)$ & $0.567(18)$ & $0.564(11)$ & $0.561(7)$ & $0.574(8)$ & $0.597(13)$ & $0.560(7)$ \\
$0.0175$ & 315 & $0.563(8)$ & $0.596(9)$ & $0.600(14)$ & $0.593(9)$ & $0.593(7)$ & $0.626(6)$ & $0.644(8)$ & $0.610(6)$ \\
$0.0200$ & 308 & $0.568(8)$ & $0.606(8)$ & $0.609(13)$ & $0.602(9)$ & $0.603(6)$ & $0.641(6)$ & $0.660(8)$ & $0.625(5)$ \\
$0.0250$ & 311 & $0.584(7)$ & $0.626(6)$ & $0.627(12)$ & $0.619(8)$ & $0.620(6)$ & $0.671(5)$ & $0.688(7)$ & $0.656(5)$ \\
$0.0400$ & 316 & $0.624(7)$ & $0.674(6)$ & $0.676(10)$ & $0.667(7)$ & $0.672(6)$ & $0.751(4)$ & $0.764(5)$ & $0.738(4)$ \\
$0.0800$ & 314 & $0.718(7)$ & $0.780(5)$ & $0.787(7)$ & $0.772(7)$ & $0.776(7)$ & $0.935(3)$ & $0.945(4)$ & $0.926(3)$ \\
\hline\hline
\end{tabular}
\caption{Octet masses for $\beta = 3.9$, $a\mu = 0.004$ on a $24^3\times 48$ lattice as a function of $a \mu_s$.}
\label{tab:squark-octet-3.9}
\end{center}
\end{small}
\end{table}
\begin{table}[h!]
\begin{small}
\begin{center}
\begin{tabular}{lcccccccccc}
\hline\hline
$\Bigl.\Bigr.a\mu_s$ & stat. &$am_{\Sigma^{\ast {\rm Av}}}$& $am_{\Sigma^{\ast +}}$ & $am_{\Sigma^{\ast 0}}$ &$am_{\Sigma^{\ast -}}$&$am_{\Xi^{\ast {\rm Av}}}$ &$am_{\Xi^{\ast 0}}$ &$am_{\Xi^{\ast -}}$ &$am_{\Omega}$ \\
\hline\hline
\multicolumn{10}{c}{$24^3\times 48$}\\
\hline
$0.0064$ & 597 & $0.665(12)$ & $0.658(18)$ & $0.669(14)$ & $0.669(14)$ & $0.636(9)$ & $0.645(12)$ & $0.628(9)$ & $0.678(14)$ \\
$0.0085$ & 316 & $0.695(19)$ & $0.719(13)$ & $0.713(16)$ & $0.733(9)$ & $0.648(11)$ & $0.670(15)$ & $0.624(10)$ & $0.734(11)$ \\
$0.0100$ & 316 & $0.700(18)$ & $0.722(12)$ & $0.715(15)$ & $0.697(22)$ & $0.658(9)$ & $0.680(13)$ & $0.636(9)$ & $0.744(10)$ \\
$0.0175$ & 315 & $0.721(14)$ & $0.729(14)$ & $0.736(12)$ & $0.718(19)$ & $0.705(7)$ & $0.722(8)$ & $0.690(7)$ & $0.796(6)$ \\
$0.0200$ & 308 & $0.725(14)$ & $0.734(13)$ & $0.741(11)$ & $0.718(17)$ & $0.720(6)$ & $0.734(7)$ & $0.704(7)$ & $0.807(7)$ \\
$0.0250$ & 311 & $0.740(13)$ & $0.735(16)$ & $0.753(11)$ & $0.740(17)$ & $0.747(6)$ & $0.759(6)$ & $0.733(6)$ & $0.838(6)$ \\
$0.0400$ & 316 & $0.778(11)$ & $0.770(13)$ & $0.788(9)$ & $0.778(15)$ & $0.821(5)$ & $0.831(5)$ & $0.811(5)$ & $0.934(4)$ \\
$0.0800$ & 314 & $0.864(7)$ & $0.854(11)$ & $0.870(8)$ & $0.865(9)$ & $0.993(4)$ & $0.996(5)$ & $0.987(4)$ & $1.169(3)$ \\
\hline\hline
\end{tabular}
\caption{Decuplet masses for $\beta = 3.9$, $a\mu = 0.004$ on a $24^3\times 48$ lattice as a function of $a \mu_s$.}
\label{tab:squark-decuplet-3.9}
\end{center}
\end{small}
\end{table}
The $\mu_s$ dependence of the strange baryon masses
provides an estimate of systematic errors
due to the uncertainty in the tuning of the strange quark mass. As already explained, the kaon mass
at the
physical point is used to fix $\mu_s$. This gives $a\mu_s=0.0217(22)$. The $\sim 10$\% uncertainty
leads to a corresponding error in the strange baryon masses that can be estimated by the variation of
their masses in the vicinity of $\mu_s$. At $a\mu=0.004$ we estimate an error that is comparable
to the statistical error. In what follows we will analyze our results taking into account only statistical
errors. One must therefore bear in mind that the final results for baryons with
non-zero strangeness carry an additional systematic error of about the same
magnitude due to the strange quark mass determination.
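The size of this systematic effect can be sketched by propagating the $\mu_s$ uncertainty through a finite-difference slope, here using the $\Xi^{\rm Av}$ entries of Table~\ref{tab:squark-octet-3.9} that bracket the tuned value.

```python
def mass_shift(m_lo, m_hi, mus_lo, mus_hi, delta_mus):
    # propagate the mu_s uncertainty through a finite-difference slope dm/dmu_s
    slope = (m_hi - m_lo) / (mus_hi - mus_lo)
    return slope * delta_mus

# Xi^Av at a*mu_s = 0.0200 and 0.0250 (beta = 3.9, a*mu = 0.004), lattice units,
# with delta(a*mu_s) = 0.0022 from the kaon-mass tuning
shift = mass_shift(0.641, 0.671, 0.0200, 0.0250, delta_mus=0.0022)
```

The resulting shift in lattice units is indeed of the same order as the statistical errors quoted in the tables.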
\subsection{Gell-Mann-Okubo relation}
Assuming small $SU(3)$ breaking, Okubo derived relations among
baryon masses.
In this section we examine how well
the Gell-Mann-Okubo (GMO) relations~\cite{Donoghue:1992dd}
are fulfilled by the baryon masses obtained on our lattices
at different pion mass values.
As we will discuss in detail in the next section,
volume and discretization effects are small, and therefore
it suffices to analyze the $\beta=3.9$, $24^3\times 48$ results.
For this study we use the lattice spacing determined from $f_\pi$ to convert to physical units.
For the $J^{P}=1/2^+$ octet the GMO relation can be written in the form:
\begin{equation}\label{OKUBO_Octet}
\frac{M_{\Xi}+M_N}{2} = \frac{3M_{\Lambda}+M_{\Sigma}}{4} \; .
\end{equation}
The results are displayed in Fig.~\ref{fig:GMO}
where the left and right hand side terms
of Eq.~(\ref{OKUBO_Octet}) are separately plotted as a function of $m_{\pi}^2$. The difference
between the two terms
is compatible with zero at any pion mass. The experimental values, shown by the
squares, are respectively
254 MeV and 248 MeV.
These results are similar
to those presented in Ref.~\cite{Beane:2006pt}
using a mixed action setup with valence domain wall fermions on
rooted staggered sea fermions.
For the $J^{P}=3/2^+$ decuplet, the GMO relations
predict equal mass difference among
two consecutive ($\Delta S=1$) isospin multiplets:
\begin{equation}\label{OKUBO_Decuplet}
M_{\Sigma^*} - M_{\Delta} = M_{\Xi^*}- M_{\Sigma^*} = M_{\Omega}-M_{\Xi^*}\; .
\end{equation}
The results for the decuplet baryons
are displayed in Fig.~\ref{fig:GMO}. As can be seen,
the equalities of Eq.~(\ref{OKUBO_Decuplet})
are strongly violated; the three
mass differences of Eq.~(\ref{OKUBO_Decuplet}) are
spread over about 200 MeV for the range of pion masses
that have been computed.
The experimental values for these
mass differences are $153, 149, 139$~MeV, shown in the plot
by the squares. In the lattice results
the largest deviation comes from $M_{\Xi^*}- M_{\Sigma^*}$,
while for $M_{\Sigma^*} - M_{\Delta}$ and $M_{\Omega}-M_{\Xi^*}$
the mass differences are smaller.
The mass difference $M_{\Xi^*}- M_{\Sigma^*}$ increases
as the pion mass decreases.
Unfortunately, with our present statistics it is unclear
whether this increase is sufficient to
bring this mass difference in agreement
with experiment but the trend is definitely in the right direction.
A third relation connects
the $J^{P}=1/2^+$ octet masses with the $J^{P}=3/2^+$ decuplet masses
and reads:
\begin{equation}\label{OKUBO_8_10}
3M_{\Lambda}-M_{\Sigma}-2M_N = 2(M_{\Sigma^*}-M_{\Delta}) \; .
\end{equation}
Experimentally, this relation is fulfilled at the 10\% level yielding 276 MeV
for the left hand side
and 305 MeV for the right hand side of Eq.~(\ref{OKUBO_8_10}).
These values are again shown
by the filled squares in Fig.~\ref{fig:GMO}.
The corresponding lattice results are shown in the same figure.
One can see that, as in the octet case, the relation of Eq.~(\ref{OKUBO_8_10}) is
satisfied within our statistical uncertainties at each pion mass. It also
approaches the experimental results with decreasing pion mass.
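The three GMO relations can be checked numerically with isospin-averaged experimental masses; the PDG values below are rounded to the MeV, so the numbers quoted in the text are reproduced only up to a few MeV.

```python
# isospin-averaged experimental masses in MeV (approximate PDG values)
mN, mLam, mSig, mXi = 939, 1116, 1193, 1318
mDel, mSigS, mXiS, mOm = 1232, 1385, 1533, 1672

# octet relation, Eq. (OKUBO_Octet): (M_Xi + M_N)/2 vs (3 M_Lambda + M_Sigma)/4
lhs8 = (mXi + mN) / 2
rhs8 = (3 * mLam + mSig) / 4

# decuplet equal-spacing rule, Eq. (OKUBO_Decuplet)
gaps10 = (mSigS - mDel, mXiS - mSigS, mOm - mXiS)

# mixed relation, Eq. (OKUBO_8_10)
lhs_mix = 3 * mLam - mSig - 2 * mN
rhs_mix = 2 * (mSigS - mDel)
```

The octet relation is satisfied at the sub-percent level, the decuplet spacings spread by about 15 MeV, and the mixed relation holds at the 10\% level, consistent with the values quoted above.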
\vspace{+0.8cm}
\begin{figure}[h!]
\begin{minipage}{5.5cm}
\mbox{\epsfxsize=5.5cm\epsffile{OKUBO_Octet.eps}}
(a)
\end{minipage}\hspace{0.cm}
\begin{minipage}{5.5cm}
\mbox{\epsfxsize=5cm\epsffile{OKUBO_Decuplet.eps}}
(b)
\end{minipage}\hspace{0.cm}
\begin{minipage}{5.5cm}
\mbox{\epsfxsize=5cm\epsffile{OKUBO_Octet_Decuplet.eps}}
(c)
\end{minipage}
\caption{Gell-Mann-Okubo relations for the baryon octet (a), decuplet (b) and mixed
octet-decuplet (c) as a function of $m_{\pi}^2$. Vertical lines correspond to the
physical results. Data were obtained from simulations at $\beta=3.9$ and $L/a=24$.
The lattice spacing determined from $f_\pi$ is used to convert to physical units.}
\label{fig:GMO}
\end{figure}
Fulfillment of the GMO relations is considered a success of $SU(3)$
symmetry. Violations of these relations indicate that $SU(3)$ breaking
is not small. One would therefore expect these relations
to be better satisfied as we approach the $SU(3)$ limit $a\mu=a\mu_s=0.0217$, up
to discretization effects. This corresponds to about $m_{\pi}^2\sim 0.50$~GeV$^2$. For the decuplet mass relation given in Eq.~(\ref{OKUBO_Decuplet}) it is
unclear whether this would indeed be satisfied by the lattice data, whereas the
other two relations are fulfilled
at all masses.
\section{Systematics}
In order to compare our lattice results collected in
Tables~\ref{tab:masses-octet-3.9},~\ref{tab:masses-decuplet-3.9},~\ref{tab:masses-octet-4.05}
and~\ref{tab:masses-decuplet-4.05} to the physical masses we need to check
for finite volume effects, cut-off effects and the extrapolation to the physical
light quark masses. The strange quark was fixed to the physical value using
the kaon mass with the light quarks extrapolated to the physical point
as explained
in Section~II.B. A check of the effect of this tuning on baryon masses has been discussed
in Section~III.B.
In this section we discuss finite volume and
cut-off effects, in particular isospin breaking.
\subsection{Finite volume effects}
Finite volume corrections to the nucleon mass in $N_f=2$
lattice QCD have been studied in Ref.~\cite{AliKhan:2003cu} within the
$p$ expansion which assumes that finite size effects originate from pions
that propagate around the spatial box. Using
relativistic $SU(2)$ baryon chiral perturbation theory~\cite{Procura:2003ig}
the finite volume corrections to the nucleon mass to ${\cal O}(p^4)$ are:
\begin{equation}
m_N(\infty)=m_N(L)-\delta m_a(L)-\delta m_b(L)
\end{equation}
where
\begin{eqnarray}
\delta m_a(L) &=& \frac{3g_A^2 m_N^0 m_\pi^2}{8\pi^2 f_\pi^2} \int_0^\infty
dx \sum^\prime_{\bf n} K_0\left(L|{\bf n}|\sqrt{(m_N^0)^2 x^2+m_\pi^2(1-x)}\right) \nonumber \\
\delta m_b(L) &=& \frac{3m_\pi^4}{2\pi^2 f_\pi^2}
\sum^\prime_{\bf n}\biggl [ (2c_1-c_3) \frac{K_1\left(L|{\bf n}|m_\pi\right)}
{L|{\bf n}|m_\pi}
+c_2\frac{K_2\left(L|{\bf n}|m_\pi\right)}{\left(L|{\bf n}|m_\pi\right)^2} \biggr] \quad.
\label{volume corrections}
\end{eqnarray}
$K_\nu(x)$ is the modified Bessel function and the sum is over all integer vectors
${\bf n}$ excluding ${\bf n}= {\bf 0}$.
The parameters $m_N^0$ and $c_1$ are determined
by fitting first the nucleon mass to the same order~\cite{Steininger98,Bernard:2004,Bernard:2005} given by
\begin{eqnarray}
m_N & = & m_N^0-4 c_1m_\pi^2- \frac{3g_A^2}{16\pi f_\pi^2} m_\pi^3 -4 E_1(\lambda)m_\pi^4
+\frac{3 m_\pi^4}{16 \pi^2 f_\pi^2}\biggl[\frac{1}{4} \left (c_2-\frac{2g_A^2}{m_N^0}\right)
- \left(c_2-8c_1+4c_3+\frac{g_A^2}{m_N^0}\right) \log\left (\frac{m_\pi}{\lambda}\right)\biggr]\quad .
\label{HBchi2}
\end{eqnarray}
We take the cut-off scale $\lambda=1$~GeV, $f_{\pi}=130.70$~MeV and fix the dimension two low energy constants
$ c_2=3.2$~GeV$^{-1}$~\cite{Fettes:1998} and
$c_3=-3.45$~GeV$^{-1}$~\cite{Bernard:2004,Procura:2006}. These values are consistent with empirical
nucleon-nucleon phase shifts~\cite{Entem:2002sf,Epelbaum:2004}.
The counter-term $E_1$ is taken as an additional fit parameter.
We then use these parameters to estimate the volume corrections
to the nucleon mass. The results are listed
in Table~\ref{tab:volume corrections}. As can be seen
the corrections for our lattices are, in all cases except one,
smaller than the statistical
errors. In the analysis that follows we will use the volume corrected nucleon mass.
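The structure of the finite-volume sum can be sketched numerically. The sketch below keeps only the $K_1$ piece of $\delta m_b(L)$ with all prefactors dropped, and evaluates the modified Bessel function from its integral representation with a simple trapezoid rule; it merely illustrates the rapid fall-off with $Lm_\pi$, not the full correction.

```python
import math

def bessel_k(nu, x, tmax=25.0, steps=1500):
    # K_nu(x) = int_0^inf exp(-x cosh t) cosh(nu t) dt, trapezoid rule
    h = tmax / steps
    total = 0.5 * math.exp(-x)  # t = 0 endpoint; the t = tmax endpoint underflows to 0
    for i in range(1, steps):
        t = i * h
        total += math.exp(-x * math.cosh(t)) * math.cosh(nu * t)
    return total * h

def k1_piece(L_mpi, nmax=3):
    # sum over nonzero integer vectors n of K_1(|n| L m_pi) / (|n| L m_pi)
    s = 0.0
    for nx in range(-nmax, nmax + 1):
        for ny in range(-nmax, nmax + 1):
            for nz in range(-nmax, nmax + 1):
                if (nx, ny, nz) != (0, 0, 0):
                    arg = math.sqrt(nx * nx + ny * ny + nz * nz) * L_mpi
                    s += bessel_k(1, arg) / arg
    return s
# K_1 falls off exponentially, so the sum is dominated by the |n| = 1 shell
```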
\begin{table}[h!]
\begin{center}
\begin{tabular}{lcc}
\hline\hline
$am_\pi $ & $am_N(L)$ & $a\delta m_a(L) + a\delta m_b(L)$
\\
\hline
\multicolumn{3}{c}{$\beta=3.9$ $24^3\times 48$}\\
0.1362 & 0.5111(58) & 0.0068 \\
0.1684 & 0.5514(49) & 0.0046 \\
0.1940 & 0.5786(67) & 0.0026 \\
0.2100 & 0.5973(43) & 0.0021 \\
\multicolumn{3}{c}{$\beta=3.9$ $32^3\times 64$}\\
0.1168 & 0.4958(34) & 0.0014 \\
0.1338 & 0.5126(46) & 0.0011 \\
\multicolumn{3}{c}{$\beta=4.05$ $32^3\times 64$}\\
0.1038 & 0.4091(60) & 0.0035\\
0.1432 & 0.4444(47) & 0.0018\\
0.1651 & 0.4714(31) & 0.0012\\
\hline\hline
\end{tabular}
\caption{Volume correction to the nucleon mass.}
\label{tab:volume corrections}
\end{center}
\end{table}
Concerning the other baryons a recent analysis using SU(3) heavy baryon
chiral perturbation theory has shown that
the volume corrections are smaller than for the nucleon~\cite{Ishikawa:2009vc}.
Given that the volume corrections found for the nucleon are smaller
than the statistical errors, we can safely neglect any volume corrections
for the other baryons computed in this work.
our lattice results at $a\mu=0.004$ where simulations at two volumes are used.
\subsection{Isospin breaking}
The twisted mass action breaks isospin explicitly at ${\cal O}(a^2)$.
How large this breaking is depends on the size of the ${\cal O}(a^2)$ terms.
It was shown that this cut-off effect is
particularly large for the neutral
pion~\cite{Frezzotti:2007qv} but small for other quantities. Indeed
we verified that isospin breaking between the $\Delta^{++,0}$ and $\Delta^{+,-}$
is consistent with zero for lattice spacings below about
0.1~fm~\cite{Alexandrou:2008tn}. We here address this issue for the octet and
decuplet baryons. We show in Fig.~\ref{fig:isospin} the mass differences
for the $\Sigma$, $\Xi$, $\Delta$, $\Sigma^*$ and $\Xi^*$ charge multiplets as a function of the pion mass
at two values of $\beta$.
\begin{figure}[h!]
\vspace*{2cm}
\epsfxsize=10truecm
\epsfysize=12truecm
\mbox{\epsfbox{isospin.ps}}
\caption{Mass splitting versus the pion mass at $\beta = 3.9$ (filled triangles) and $\beta=4.05$ (open circles) for, from top to
bottom: $\Sigma^+$ and $\Sigma^0$, $\Xi^0$ and $\Xi^-$, $\Delta^{++}$ and $\Delta^+$, $\Sigma^{*+}$ and $\Sigma^{*0}$, and $\Xi^{*0}$ and $\Xi^{*-}$.}
\label{fig:isospin}
\end{figure}
As can be seen, we confirm that for the $\Delta$ system isospin breaking is
consistent with zero. This is also true for the $\Sigma^*$ and $\Xi^*$
as well as for the $\Sigma$ at the smaller lattice spacing. In the case
of the $\Xi$ we observe a non-zero splitting that decreases with the
lattice spacing. If one interpolates the results at the two $\beta$ values
to the same pion mass, as discussed in more detail in the next Section, and makes a linear extrapolation in $a^2$, one
finds that this splitting goes to zero in the continuum limit as expected.
Whereas this confirms that this splitting is a cut-off effect,
to perform a proper analysis one would need results at an additional
lattice spacing. For the current work we conclude that isospin
splitting at these two lattice spacings
is negligible for all baryons except for the $\Xi$ where an isospin
breaking of about 6\% is observed that vanishes at a rate proportional to $a^2$.
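The statement that the $\Xi$ splitting vanishes proportionally to $a^2$ amounts to a two-point linear extrapolation in $a^2$. A minimal sketch in Python, with purely illustrative (hypothetical) numbers rather than the actual splittings of this work:

```python
def extrapolate_a2(a1, y1, a2, y2):
    """Linearly extrapolate y(a^2) to the continuum limit a = 0,
    given two measurements y1 = y(a1) and y2 = y(a2)."""
    x1, x2 = a1 ** 2, a2 ** 2
    slope = (y2 - y1) / (x2 - x1)
    return y1 - slope * x1  # intercept at a^2 = 0

# hypothetical Xi mass splittings (in units of r0) at the two lattice
# spacings a = 0.089 fm and a = 0.070 fm, chosen so the continuum value
# is consistent with zero
split_cont = extrapolate_a2(0.089, 0.094, 0.070, 0.058)
```

With the illustrative inputs the extrapolated splitting is compatible with zero, mirroring the behaviour described in the text.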
\subsection{Continuum extrapolation}
In order to assess cut-off effects
we use results at $\beta=3.9$ and $\beta=4.05$.
The lattice results,
expressed in units of the Sommer scale $r_0$, are interpolated to the same pion mass in units of $r_0$
at each $\beta$-value.
We give the interpolated results at six values of $m_\pi r_0$ in Tables~\ref{tab:octet-cont-r0}
and \ref{tab:decuplet-cont-r0}. Interpolating linearly or
with one-loop order chiral
perturbation theory gives
values consistent within error bars. Given the size of these
cut-off effects a
weighted average of the baryon masses between these two $\beta$
values gives an estimate of the values in the continuum limit.
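The weighted average described above is the standard inverse-variance combination. A minimal sketch in Python; as a check it reproduces the continuum-limit nucleon entry $2.577(20)$ at $r_0 m_\pi = 0.60$ of Table~\ref{tab:octet-cont-r0}:

```python
import math

def weighted_average(values, errors):
    """Inverse-variance weighted average of independent determinations;
    returns (mean, error) with error = 1/sqrt(sum of weights)."""
    weights = [1.0 / e ** 2 for e in errors]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    return mean, 1.0 / math.sqrt(wsum)

# nucleon mass in units of r0 at r0*m_pi = 0.60: beta = 3.9 and beta = 4.05
mean, err = weighted_average([2.571, 2.600], [0.023, 0.043])
# reproduces the continuum-limit entry 2.577(20)
```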
It must be stressed that estimating the strange quark mass at $\beta=4.05$
using Eq.~(\ref{s-mass}) may cause residual cut-off effects at the few-percent
level that are
not taken into account with the continuum extrapolation as performed here.
The results obtained from the weighted averaging of data at $\beta=3.9$ and $\beta=4.05$ are
listed in Tables~\ref{tab:octet-cont-r0}
and \ref{tab:decuplet-cont-r0}
and are plotted in Figs.~\ref{fig:octet continuum const} and \ref{fig:decuplet continuum const}.
In the figures we also include results at $\beta=3.8$. If
cut-off effects are small for all $\beta$-values then results
at $\beta=3.8$ should fall onto the same line. As can be seen this is best
fulfilled for the nucleon mass. Furthermore for the nucleon and the $\Delta$
we also show results at a smaller value of the lattice spacing corresponding
to $\beta=4.2$. Essentially, the $a^2$ dependence of the nucleon and $\Delta$
mass as computed at
$\beta=3.9$, $4.05$ and $4.2$ is consistent with a constant behaviour,
verifying
that for lattice spacings below
$0.1$~fm cut-off effects are indeed small.
For the $\Lambda$ mass, results at $\beta=3.8$, $3.9$ and $4.05$ are consistent with a constant. This holds approximately also for the other baryons. Within
the statistical errors one therefore concludes
that for lattice spacings below
$0.1$~fm cut-off effects are under control.
\begin{table}[h!]
\begin{small}
\begin{center}
\begin{tabular}{lccccccccc}
\hline\hline
$\Bigl.\Bigr.r_0 m_{\pi}$&$r_0 m_N$ &
$r_0 m_{\Lambda}$ & $r_0 m_{\Sigma^{\rm Av.}}$& $r_0 m_{\Sigma^+}$ &$r_0 m_{\Sigma^0}$ & $r_0 m_{\Sigma^-}$& $r_0 m_{\Xi^{\rm Av.}}$ & $r_0 m_{\Xi^0}$& $r_0 m_{\Xi^-}$ \\
\hline\hline
\multicolumn{10}{c}{$\beta=3.9$}\\
0.60 & 2.571(23)& 2.922(18)& 3.062(22)& 3.156(36)& 3.084(26) & 3.004(21)& 3.324(16) & 3.421(23) & 3.271(14) \\
0.70 & 2.671(24)& 3.001(18)& 3.193(21)& 3.279(38)& 3.215(21) & 3.111(25)& 3.399(18) & 3.523(24) & 3.319(17) \\
0.80 & 2.757(30)& 3.085(25)& 3.212(29)& 3.318(42)& 3.219(31) & 3.144(31)& 3.433(21) & 3.555(30) & 3.363(21) \\
0.90 & 2.880(26)& 3.156(22)& 3.286(25)& 3.404(38)& 3.293(27) & 3.215(27)& 3.472(19) & 3.598(26) & 3.401(19) \\
1.00 & 2.992(35)& 3.226(27)& 3.381(29)& 3.482(33)& 3.401(26) & 3.310(24)& 3.507(23) & 3.629(30) & 3.436(23)\\
1.10 & 3.111(23)& 3.305(19)& 3.405(21)& 3.477(29)& 3.414(22) & 3.358(23)& 3.547(19) & 3.633(26) & 3.491(17)\\
\multicolumn{10}{c}{$\beta=4.05$}\\
0.60 & 2.600(43) & 2.946(29)& 3.107(32)& 3.199(46)& 3.115(34) & 3.034(38)& 3.336(23) & 3.400(29) & 3.287(22) \\
0.70 & 2.692(40) & 3.008(25)& 3.152(29)& 3.233(41)& 3.161(31) & 3.080(35)& 3.362(21) & 3.425(26) & 3.313(19)\\
0.80 & 2.788(46) & 3.074(32)& 3.200(34)& 3.269(49)& 3.209(36) & 3.127(40)& 3.391(25) & 3.451(32) & 3.340(24)\\
0.90 & 2.874(32) & 3.135(32)& 3.242(30)& 3.295(45)& 3.253(31) & 3.165(31)& 3.418(28) & 3.474(34) & 3.364(25)\\
1.00 & 2.984(33) & 3.205(32)& 3.298(30)& 3.348(45)& 3.308(31) & 3.230(31)& 3.448(29) & 3.504(34) & 3.397(25)\\
1.10 & 3.119(21) & 3.283(20)& 3.370(21)& 3.430(27)& 3.373(21) & 3.325(21)& 3.481(19) & 3.539(23) & 3.440(17)\\
\multicolumn{10}{c}{continuum limit}\\
0.60 & 2.577(20) & 2.929(15)& 3.077(18) & 3.173(28) & 3.095(21) & 3.011(18) & 3.328(13) & 3.413(18)& 3.275(12) \\
0.70 & 2.676(21) & 3.003(15)& 3.179(17) & 3.258(28) & 3.198(17) & 3.101(20) & 3.383(13) & 3.477(18)& 3.316(13)\\
0.80 & 2.766(25) & 3.080(20)& 3.207(22) & 3.297(32) & 3.215(23) & 3.138(24) & 3.415(16) & 3.506(22)& 3.353(16)\\
0.90 & 2.878(20) & 3.149(18)& 3.267(19) & 3.359(29) & 3.275(20) & 3.193(20) & 3.456(16) & 3.552(21)& 3.387(15)\\
1.00 & 2.988(24) & 3.217(21)& 3.342(21) & 3.436(26) & 3.363(20) & 3.280(19) & 3.484(18) & 3.573(23)& 3.418(17)\\
1.10 & 3.116(15) & 3.295(14)& 3.387(15) & 3.452(20) & 3.392(15) & 3.340(15) & 3.514(13) & 3.580(17)& 3.465(12)\\
\hline
\end{tabular}
\caption{Octet masses computed at reference pion masses in units of $r_0$ and the corresponding
continuum limit values.}
\label{tab:octet-cont-r0}
\end{center}
\end{small}
\end{table}
\begin{table}[h!]
\begin{small}
\begin{center}
\begin{tabular}{lccccccccc}
\hline\hline
$\Bigl.\Bigr.r_0 m_{\pi}$& $r_0 m_{\Delta^{{\rm Av.}}}$ & $r_0 m_{\Sigma^{\ast {\rm Av.}}}$ & $r_0 m_{\Sigma^{\ast +}}$ & $r_0 m_{\Sigma^{\ast 0}}$ & $r_0 m_{\Sigma^{\ast -}}$ & $r_0 m_{\Xi^{\ast {\rm Av.}}}$ & $r_0 m_{\Xi^{\ast 0}}$& $r_0 m_{\Xi^{\ast -}}$& $r_0 m_{\Omega}$ \\
\hline\hline
\multicolumn{10}{c}{$\beta=3.9$}\\
0.60 & 3.312(51)& 3.565(51) & 3.558(49) & 3.564(56) & 3.660(53) & 3.671(27) & 3.689(31) & 3.662(28) & 4.131(26) \\
0.70 & 3.419(57)& 3.719(55) & 3.679(65) & 3.735(61) & 3.744(52) & 3.805(39) & 3.845(41) & 3.754(38) & 4.195(36) \\
0.80 & 3.631(48)& 3.850(50) & 3.824(62) & 3.856(55) & 3.853(55) & 3.856(46) & 3.925(45) & 3.812(49) & 4.252(37) \\
0.90 & 3.727(42)& 3.907(44) & 3.871(55) & 3.918(49) & 3.924(49) & 3.873(41) & 3.947(40) & 3.839(44) & 4.259(33) \\
1.00 & 3.761(47)& 3.911(46) & 3.839(55) & 3.954(58) & 3.981(52) & 3.862(45) & 3.922(43) & 3.840(44) & 4.240(35) \\
1.10 & 3.950(27)& 4.075(35) & 4.085(33) & 4.081(34) & 4.074(36) & 3.981(38) & 4.046(34) & 3.909(39) & 4.328(27) \\
\multicolumn{10}{c}{$\beta=4.05$} \\
0.60 & 3.524(47)& 3.769(57) & 3.732(67) & 3.787(65) & 3.766(58) & 3.806(40) & 3.823(39) & 3.789(40) & 4.221(34) \\
0.70 & 3.581(43)& 3.789(52) & 3.752(62) & 3.803(57) & 3.795(53) & 3.817(37) & 3.832(36) & 3.802(37) & 4.202(30) \\
0.80 & 3.615(50)& 3.808(60) & 3.773(71) & 3.819(70) & 3.822(61) & 3.828(43) & 3.842(42) & 3.816(43) & 4.183(37) \\
0.90 & 3.617(40)& 3.804(45) & 3.768(49) & 3.834(76) & 3.829(46) & 3.829(32) & 3.834(34) & 3.825(33) & 4.141(36) \\
1.00 & 3.718(41)& 3.876(46) & 3.844(49) & 3.851(76) & 3.901(47) & 3.862(32) & 3.882(35) & 3.847(33) & 4.171(36) \\
1.10 & 3.922(40)& 4.028(39) & 4.007(45) & 3.868(49) & 4.042(38) & 3.931(29)& 3.987(33) & 3.885(29) & 4.277(33)\\
\multicolumn{10}{c}{continuum limit} \\
$\Bigl.\Bigr.r_0 m_{\pi}$& $r_0 m_{\Delta^{{++}}}$, $r_0 m_{\Delta^{{+}}}$,& $r_0 m_{\Sigma^{\ast {\rm Av.}}}$ & $r_0 m_{\Sigma^{\ast +}}$ & $r_0 m_{\Sigma^{\ast 0}}$ & $r_0 m_{\Sigma^{\ast -}}$ & $r_0 m_{\Xi^{\ast {\rm Av.}}}$ & $r_0 m_{\Xi^{\ast 0}}$& $r_0 m_{\Xi^{\ast -}}$& $r_0 m_{\Omega}$ \\
\hline\hline
0.60 & 3.439(35)& 3.656(38) & 3.619(40) & 3.660(42) & 3.709(39) & 3.712(22) & 3.741(24) & 3.704(23) & 4.164(21) \\
0.70 & 3.520(49)& 3.755(38) & 3.717(45) & 3.771(42) & 3.768(37) & 3.811(27) & 3.838(27) & 3.778(26) & 4.199(23) \\
0.80 & 3.623(35)& 3.833(38) & 3.802(47) & 3.841(43) & 3.839(41) & 3.841(31) & 3.880(30) & 3.814(32) & 4.217(26) \\
0.90 & 3.668(29)& 3.856(32) & 3.813(36) & 3.893(41) & 3.874(34) & 3.845(25) & 3.882(26) & 3.830(26) & 4.205(24) \\
1.00 & 3.735(30)& 3.893(32) & 3.842(37) & 3.916(46) & 3.937(35) & 3.862(26) & 3.898(27) & 3.845(26) & 4.207(25) \\
1.10 & 3.935(20)& 4.054(26) & 4.058(27) & 4.013(28) & 4.059(26) & 3.949(23) & 4.016(24) & 3.893(23) & 4.307(21) \\
\hline
\end{tabular}
\caption{The same as Table~\ref{tab:octet-cont-r0} but for the decuplet baryons.}
\label{tab:decuplet-cont-r0}
\end{center}
\end{small}
\end{table}
\begin{figure}[h!]
\begin{minipage}{8cm}
\epsfxsize=8truecm
\epsfysize=10truecm
\mbox{\epsfbox{cont_lim_const_octet.ps}}
\caption{Constant extrapolation to the continuum limit for the octet baryons.
Stars are for $r_0 m_\pi=0.615$,
filled triangles for $r_0 m_\pi=0.7$, open circles for $r_0 m_\pi=0.8$,
open triangles for $r_0 m_\pi=0.9$, filled circles for $r_0 m_\pi=1.0$,
rhombi for $r_0 m_\pi=1.1$ and crosses for $r_0 m_\pi=1.25$.
The open squares show the extracted continuum value. For the nucleon we also
show results at $\beta=4.2$. For the $\Sigma^+$ and $\Xi^0$ we omit the case $r_0 m_\pi=1.0$ and
shift results at $r_0 m_\pi=1.25$ and $r_0 m_\pi=0.8$ for clarity. }
\label{fig:octet continuum const}
\end{minipage}
\hfill
\begin{minipage}{8cm}
\epsfxsize=8truecm
\epsfysize=10truecm
\vspace*{-1.4cm}
\mbox{\epsfbox{cont_lim_const_decuplet.ps}}
\caption{Constant extrapolation to the continuum for the decuplet. The notation
is the same as in Fig.~\ref{fig:octet continuum const}.}
\label{fig:decuplet continuum const}
\end{minipage}
\end{figure}
\subsection{Fixing the lattice spacing}
In order to convert to physical units we need to fix the lattice spacing.
The values of the lattice spacing given in Table~\ref{Table:params} were extracted using the pion
decay constant. Equivalently one can determine the value of
$r_0$ by extrapolating the results to the physical point. The value obtained
is $r_0=0.439(25)$ fm determined in the light meson sector \cite{r0_ETMC_Scaling_Paper}
where the systematic error is added to the statistical one. Knowing $r_0$ and the ratio $r_0/a$ one
can determine the lattice spacing.
\begin{figure}[h!]
\epsfxsize=8truecm
\epsfysize=6truecm
\mbox{\epsfbox{nucleon_r0.ps}}
\caption{Determination of $r_0^N$ with a simultaneous fit to the
lattice data at $\beta=3.9$ and $\beta=4.05$. The asterisks denote the physical
point determined by the value of $r_0^N$ by using ${\cal O}( p^3)$
and ${\cal O}(p^4)$ $\chi$PT as described in Ref.~\cite{Alexandrou:2008tn}.}
\label{fig:nucleon r0}
\end{figure}
The nucleon mass can be used to
set the scale and this determination
seems natural if one is interested
in the study of the baryon spectrum.
Our data at $\beta=3.9$ and
$\beta=4.05$ do not show significant lattice spacing effects. Therefore
we can make a combined fit using continuum chiral
perturbation theory results to determine $r_0^N$.
Chiral corrections to the nucleon mass are known to
${\cal O}(p^4)$ within several
expansion schemes. We use the same
schemes as in Refs.~\cite{Alexandrou:2008tn,Alexandrou:2007qq}. The fits
are shown in Fig.~\ref{fig:nucleon r0}.
Using the ${\cal O} (p^3)$ result, which is well established,
to extrapolate to the physical pion mass we obtain $r_0^N=0.465(6)$~fm.
If we instead use the data at $\beta=3.9$ and $\beta=4.05$ to perform
the continuum
limit as discussed in the previous subsection and then fit,
we find $r_0^N=0.471\pm 0.006({\rm stat.}) \pm 0.015({\rm syst.})$~fm. The systematic error is due to the interpolation
to a fixed value of $m_\pi r_0$ and is obtained by comparing the value of $r_0$ from a linear interpolation
with the one obtained using ${\cal O}(p^3)$.
Furthermore, we take the difference between the value of $r_0$ obtained using the continuum-extrapolated results and the
value found by fitting the lattice data at $\beta=3.9$ and $\beta=4.05$ as the systematic error
due to cut-off effects. We therefore take $r_0^N=0.465(6)(14)$~fm, which
is in agreement with $r_0$ extracted from the value of the pion decay constant,
$f_\pi$, when converting our results at $\beta=3.9$ and $\beta=4.05$ in units of $r_0$.
Using
the value of $r_0/a=5.22(2)$ and $r_0/a=6.61(3)$ at $\beta=3.9$ and
$\beta=4.05$ we find for the lattice spacings $a_{\beta=3.9}=0.089(4)$~fm
and $a_{\beta=4.05}=0.070(3)$~fm.
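The conversion from $r_0^N$ and $r_0/a$ to the lattice spacing is simply $a = r_0 / (r_0/a)$; a one-line sketch in Python reproduces the quoted values:

```python
r0_N = 0.465                      # fm, from the nucleon mass at the physical point
r0_over_a = {3.9: 5.22, 4.05: 6.61}

# a = r0 / (r0/a) at each beta value
a_fm = {beta: r0_N / ratio for beta, ratio in r0_over_a.items()}
# a_fm[3.9] -> 0.089 fm and a_fm[4.05] -> 0.070 fm, as quoted in the text
```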
These values for the lattice spacing extracted using the nucleon mass
at the physical point are
in agreement with those determined
from the pion decay constant. This constitutes a nice consistency check
of our lattice formulation. In what follows we will use the lattice spacing extracted from the nucleon mass
using ${\cal O}(p^3)$ heavy baryon chiral perturbation theory
to convert the rest of the masses into physical units.
\section{Chiral extrapolation}
Given that the cut-off effects are almost negligible in our simulations,
we apply continuum
chiral perturbation theory to extrapolate lattice results at
$\beta=3.9$ and $\beta=4.05$ to the
physical pion mass. In particular, we will use SU(2) chiral perturbation
theory ($\chi$PT)~\cite{Tiburzi:2008bk} for two reasons: first, our
simulations are done with two mass-degenerate dynamical quarks and, second, it
was shown that SU(3) $\chi$PT fails to describe lattice
data~\cite{Ishikawa:2009vc}. We would like to stress, however, that the issue of
the applicability of SU(3) $\chi$PT is not entirely settled and e.g.
SU(3) fits to lattice results using staggered fermions
were claimed to produce reasonable fits~\cite{Frink:2005ru,Frink:2004ic}.
The leading order $SU(2)$ heavy baryon chiral perturbation (HB$\chi$PT) results
are given by
\begin{equation}
m^{\rm LO}_X(m_\pi) = m_X^{(0)}-4c_X^{(1)} m_\pi^2 \; ,
\label{linear}
\end{equation}
with two fit-parameters,
the baryon mass in the chiral limit $m_X^{(0)}$ and $c_X^{(1)}$,
the latter of which gives the leading contribution to the $\sigma_X$-term.
The leading one-loop results for the nucleon and the $\Delta$ in HB$\chi$PT were first derived in Ref.~\cite{Gas88} and successful fits to
lattice data on the nucleon and $\Delta$ were discussed in our
previous study~\cite{Alexandrou:2008tn}.
A natural generalization of the ${\cal O}(p^3)$ results for the nucleon and $\Delta$
to the rest of the octet
and decuplet baryons~\cite{Nagels:1979xh,Nagels:1978sc} is given by
\begin{eqnarray}
m _N(m_\pi) &=& m^{(0)}_N-4c^{(1)}_N \;m_{\pi}^2 - \frac{3 g_A^2 }{16\pi f_\pi^2} \;m_\pi^3 \nonumber \\
m_\Lambda(m_\pi) &=&m^{(0)}_{\Lambda }-4c^{(1)}_{\Lambda}\; m_\pi^2 - \frac{g_{\Lambda\Sigma}^2 }{16\pi f_\pi^2} \;m_\pi^3\nonumber \\
m_\Sigma(m_\pi)& = & m^{(0)}_{\Sigma }-4c^{(1)}_{\Sigma}m_\pi^2 - \frac{2 g_{ \Sigma \Sigma}^2 + g_{\Lambda\Sigma}^2/3}{16\pi f_\pi^2}\; m_\pi^3 \nonumber \\
m_\Xi(m_\pi) &=&m^{(0)}_{\Xi }-4c^{(1)}_{\Xi}m_\pi^2 - \frac{3 g_{\Xi \Xi}^2}{16\pi f_\pi^2} \;m_\pi^3 \quad,
\label{LO octet}
\end{eqnarray}
for the octet baryons and
\begin{eqnarray}
m_\Delta(m_\pi) &=& m^{(0)}_\Delta - 4c^{(1)}_\Delta\; m_\pi^2 - \frac{25}{27}\frac{g_{\Delta \Delta}^2}{16\pi f_\pi^2} \;m_\pi^3 \nonumber \\
m_{\Sigma^*}(m_\pi) &=& m^{(0)}_{\Sigma^* } - 4c^{(1)}_{\Sigma^*}m_\pi^2 - \frac{10}{9}\frac{g_{\Sigma^* \Sigma^*}^2 }{16\pi f_\pi^2}\;m_\pi^3 \nonumber \\
m_{\Xi^*}(m_\pi) &=& m^{(0)}_{\Xi^*} - 4c^{(1)} _{\Xi^*} m_\pi^2 - \frac{5}{3}\frac{g_{\Xi^* \Xi^*}^2 }{16\pi f_\pi^2}\;m_\pi^3 \nonumber \\
m_\Omega(m_\pi) &=& m^{(0)}_\Omega - 4c^{(1)}_\Omega m_\pi^2 \; , \label{LO decuplet}
\end{eqnarray}
for the decuplet baryons.
In addition we consider a cubic term of the following form
\begin{equation} m_X(m_\pi)=m_X^{(0)}-4c_X^{(1)} m_\pi^2+c_X^{(2)} m_\pi^3
\label{cubic}
\end{equation}
treating $c_X^{(2)}$ as an additional fit parameter.
The next to leading order SU(2) $\chi$PT results~\cite{Tiburzi:2008bk} for the octet are given by
\begin{eqnarray}
m^{NLO}_N(m_\pi) &=& m^{LO} _N(m_\pi) - \frac{3 g_A^2 }{16\pi f_\pi^2} \;m_\pi^3 - \frac{8 g_{N\Delta}^2}{3(4\pi f_\pi)^2} \; {\cal F}(m_\pi,\Delta_{N\Delta}, \lambda) \nonumber \\
m^{NLO}_\Lambda(m_\pi) &=& m^{LO}_{\Lambda}(m_\pi) - \frac{g^2_{\Lambda\Sigma}}{(4\pi f_\pi)^2} \; {\cal F}(m_\pi,\Delta_{\Lambda \Sigma},\lambda)
- \frac{ 4g^2_{\Lambda\Sigma^*} } {(4\pi f_\pi)^2} \; {\cal F}(m_\pi,\Delta_{\Lambda \Sigma^*},\lambda) \nonumber \\
m^{NLO}_\Sigma(m_\pi) &=& m^{LO}_{\Sigma}(m_\pi) - \frac{2 g_{ \Sigma \Sigma}^2}{16\pi f_\pi^2}\; m_\pi^3 -\frac{g^2_{\Lambda\Sigma}}{3(4\pi f_\pi)^2} \; {\cal F}(m_\pi,-\Delta_{\Lambda \Sigma},\lambda)
- \frac{4g^2_{\Lambda\Sigma^*}}{3(4\pi f_\pi)^2} \; {\cal F}(m_\pi,\Delta_{\Sigma\Sigma^*},\lambda) \nonumber \\
m^{NLO}_\Xi(m_\pi) &=& m^{LO}_{\Xi}(m_\pi) - \frac{3 g_{\Xi \Xi}^2}{16\pi f_\pi^2} \;m_\pi^3-\frac{2g_{\Xi^*\Xi}^2}{(4\pi f_\pi)^2} \; {\cal F}(m_\pi,\Delta_{\Xi\Xi^*},\lambda) \label{NLO octet}
\end{eqnarray}
and for the decuplet baryons:
\begin{eqnarray}
m^{NLO}_\Delta(m_\pi) &=& m^{LO} _\Delta(m_\pi) - \frac{25}{27}\frac{g_{\Delta \Delta}^2}{16\pi f_\pi^2} \;m_\pi^3 -\frac{2 g_{\Delta N}^2}{3 (4\pi f_\pi)^2} \; {\cal F}(m_\pi,-\Delta_{N\Delta},\lambda) \nonumber \\
m^{NLO}_{\Sigma^*}(m_\pi) &=& m^{LO} _{\Sigma^* }(m_\pi) - \frac{10}{9}\frac{g_{\Sigma^* \Sigma^*}^2 }{16\pi f_\pi^2}\;m_\pi^3 - \frac{2}{3(4\pi f_\pi)^2} \left[g_{\Sigma^*\Sigma} ^2
\; {\cal F}(m_\pi,-\Delta_{\Sigma\Sigma^*},\lambda) + g_{\Lambda\Sigma^*}^2 \; {\cal F}(m_\pi,-\Delta_{\Lambda\Sigma^*},\lambda) \right] \nonumber \\
m^{NLO}_{\Xi^*} (m_\pi) &=& m^{LO} _{\Xi^*}(m_\pi) - \frac{5}{3}\frac{g_{\Xi^* \Xi^*}^2 }{16\pi f_\pi^2}\;m_\pi^3- \frac{g_{\Xi^* \Xi}^2}{(4\pi f_\pi)^2} \; {\cal F}(m_\pi,-\Delta_{\Xi\Xi^*},\lambda) \nonumber \\
m^{NLO}_\Omega(m_\pi) &=& m^{LO} _\Omega(m_\pi)
\label{NLO decuplet}
\end{eqnarray}
with the non analytic function~\cite{Tiburzi:2005na}
\begin{equation}\label{F}
{\cal F}(m,\Delta,\lambda) =(m^2-\Delta^2)\sqrt{\Delta^2-m^2+i\epsilon}
\;\log\left(\frac{\Delta-\sqrt{\Delta^2-m^2+i\epsilon}}{\Delta+\sqrt{\Delta^2-m^2+i\epsilon}}\right)
-\frac{3}{2}\Delta m^2\log\left(\frac{m^2}{\lambda^2}\right)-\Delta^3\log\left(\frac{4\Delta^2}{m^2}\right) \quad
\end{equation}
depending on the threshold parameter $\Delta_{XY}= m^{(0)}_{Y}-m^{(0)}_X$ and on the
scale $\lambda$ of chiral perturbation theory, fixed to
$\lambda=1$~GeV.
For $\Delta>0$ the real part of the function ${\cal F}(m,\Delta,\lambda)$ has the property
\begin{equation}
{\cal F}(m,-\Delta,\lambda) = \left \{\begin{array}{ll} -{\cal F}(m,\Delta,\lambda)& m<\Delta \\
-{\cal F}(m,\Delta,\lambda)+2\pi\left(m^2-\Delta^2\right)^{3/2} & m>\Delta\\
\end{array}\right.
\label{F symm}
\end{equation}
which corrects a typo in the sign of the
second term in Ref.~\cite{WalkerLoud:2008bp}.
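A direct numerical implementation of the real part of ${\cal F}$ is useful for checking Eqs.~(\ref{F}) and (\ref{F symm}). The sketch below (Python, a minimal implementation rather than the authors' code) evaluates the $\Delta>0$ branches explicitly and uses the symmetry relation for $\Delta<0$; it reproduces the $\Delta\to 0$ limit ${\cal F}\to\pi m^3$ quoted later in the text.

```python
import math

def F_real(m, delta, lam):
    """Real part of the non-analytic function F(m, Delta, lambda) of Eq. (F).
    Delta > 0 is evaluated branch by branch; Delta < 0 via Eq. (F symm)."""
    if delta < 0.0:
        d = -delta
        base = F_real(m, d, lam)
        if m < d:
            return -base
        return -base + 2.0 * math.pi * (m * m - d * d) ** 1.5
    if delta == 0.0:
        return math.pi * m ** 3          # the Delta -> 0 limit
    # the two analytic (log) terms, common to both branches
    tail = -1.5 * delta * m * m * math.log(m * m / lam ** 2) \
           - delta ** 3 * math.log(4.0 * delta * delta / (m * m))
    if m < delta:                        # below threshold: everything is real
        s = math.sqrt(delta * delta - m * m)
        return (m * m - delta * delta) * s * math.log((delta - s) / (delta + s)) + tail
    # above threshold: sqrt(Delta^2 - m^2 + i eps) = i t, and the real part
    # of the first term reduces to 2 t (m^2 - Delta^2) arctan(t/Delta)
    t = math.sqrt(m * m - delta * delta)
    return 2.0 * t * (m * m - delta * delta) * math.atan(t / delta) + tail
```

For small $\Delta$ the function approaches $\pi m^3$, and by construction the odd symmetry below threshold holds exactly.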
In our fits, the nucleon axial charge $g_A$ and pion decay constant $f_{\pi}$ are fixed to their experimental values
(we use the convention such that $f_{\pi}=130.70$ MeV). The remaining pion-baryon axial coupling constants are taken from SU(3) relations~\cite{Tiburzi:2008bk}:
\begin{equation}
\begin{array}{lllll}
{\rm Octet:} &\quad g_{A} = D+F, &\quad g_{\Sigma\Sigma} = 2F, &\quad g_{\Xi\Xi}=D-F, &\quad g_{\Lambda\Sigma}=2D \\
{\rm Decuplet:} &\quad g_{\Delta \Delta} = {\cal H}, &\quad g_{\Sigma^*\Sigma^*} = \frac{2}{3} {\cal H}, &\quad g_{\Xi^*\Xi^*}=\frac{1}{3} {\cal H} & \\
{\rm Transition:} &\quad g_{\Delta N} = {\cal C}, &\quad g_{\Sigma^*\Sigma} = \frac{1}{\sqrt{3}} {\cal C}, &\quad g_{\Xi^*\Xi}=\frac{1}{\sqrt{3}} {\cal C}, &\quad g_{\Lambda\Sigma^*}= -\frac{1}{\sqrt{2}} {\cal C}
\end{array}
\label{ACC}
\end{equation}
As can be seen, in the octet case, and once $g_A$ is fixed, the axial
coupling constants depend on the single
parameter written as $\alpha={D\over D+F}$. Its value is poorly known.
It can be taken either from the quark model ($\alpha=3/5$), from the phenomenology of
semi-leptonic decays or from hyperon-nucleon scattering.
We take $2D=1.47$ or $\alpha=0.58$ as given in Ref.~\cite{Tiburzi:2008bk}.
The decuplet coupling constants depend on a single parameter for which
we again take the value ${\cal H}=2.2$ from Ref.~\cite{Tiburzi:2008bk}.
This value is not far from that predicted by SU(6) symmetry, ${\cal H}= {9\over5} g_A = 2.29$, used in our previous work~\cite{Alexandrou:2008tn},
resulting in the
same cubic term for the nucleon and $\Delta$.
For fixing the octet-decuplet transition couplings we take the value ${\cal C}=1.48$ from Ref.~\cite{Tiburzi:2005na}.
With the coupling constants fixed in this way, the LO, the one-loop as well as the NLO fits are left with the two independent fit parameters $m_X^{(0)}$ and $c_X^{(1)}$.
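With these inputs, all couplings of Eq.~(\ref{ACC}) are fixed numerically. A short sketch (Python), assuming the experimental value $g_A \approx 1.2695$ for the nucleon axial charge, which the text fixes but does not quote:

```python
g_A = 1.2695                 # experimental nucleon axial charge (assumed input)
D = 1.47 / 2.0               # from 2D = 1.47
F = g_A - D                  # since g_A = D + F
alpha = D / (D + F)          # ~0.58, as quoted in the text

H, C = 2.2, 1.48             # decuplet and octet-decuplet transition couplings
octet = {"SigmaSigma": 2 * F, "XiXi": D - F, "LambdaSigma": 2 * D}
decuplet = {"DeltaDelta": H, "Sigma*Sigma*": 2 * H / 3, "Xi*Xi*": H / 3}
transition = {"DeltaN": C, "Sigma*Sigma": C / 3 ** 0.5,
              "Xi*Xi": C / 3 ** 0.5, "LambdaSigma*": -C / 2 ** 0.5}
```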
All mass parameters $m_X^{(0)}$ are treated independently
unlike what is done in Ref.~\cite{Tiburzi:2008bk} where a universal mass
parameter was used for all baryons with the same strangeness.
A noticeable result of this expansion is the absence of a cubic term in the
expression for the $\Lambda$ and $\Omega$ masses given in Eqs.~(\ref{NLO octet})
and (\ref{NLO decuplet}).
In the case of $\Omega$, it follows from the absence of light valence quarks.
However, the absence of a cubic term in the NLO expression for the $\Lambda$,
although a consequence of $\chi$PT, is nevertheless a questionable result, since it relies on the assumption that $m_{\pi} \ll M_{\Sigma}-M_{\Lambda}$.
In the limit $\Delta \to 0$ the non analytic function (\ref{F}) becomes
$${\cal F}(m_{\pi},\Delta\to 0,\lambda) = \pi m_{\pi}^3 \; \quad,$$
which generates a cubic term for the $\Lambda$ and slightly modifies the one for $\Sigma$.
The corresponding expressions are given by
\begin{eqnarray}
m_\Lambda(m_\pi) &=&m^{(0)}_{\Lambda }-4c^{(1)}_{\Lambda}\; m_\pi^2 - \frac{g_{ \Lambda \Sigma}^2}{16\pi f_\pi^2}\; m_\pi^3 \; , \label{ILO_Lambda} \\
m_\Sigma(m_\pi) &=&m^{(0)}_{\Sigma }-4c^{(1)}_{\Sigma}m_\pi^2 - \frac{2 g^2_{ \Sigma \Sigma}+g^2_{ \Lambda \Sigma}/3 }{16\pi f_\pi^2}\; m_\pi^3 \label{ILO_Sigma} \; ,
\end{eqnarray}
in agreement with the results of Eq.~(\ref{LO octet}).
\begin{figure}[h!]
\begin{minipage}{8cm}
\vspace*{1cm}
\epsfxsize=8truecm
\epsfysize=6truecm
\mbox{\epsfbox{nucleon3.9_4.05_r0.ps}}
\caption{Chiral extrapolation of the nucleon mass in units of $r_0$ .
Filled triangles and squares
are results at $\beta=3.9$ on $24^3\times 48$ and
$32^3\times 64$ lattice sizes respectively. Open triangles are
results at $\beta=4.05$. We show chiral extrapolations
linear in $m_\pi^2$ as in Eq.~(\ref{linear}),
to ${\cal O}(p^3)$ as in Eq.~(\ref{LO octet}), fit in the cubic term as in
Eq.~(\ref{cubic}),
NLO and NNLO in SU(2) chiral perturbation theory as
in Eqs.~(\ref{NLO octet},\ref{NNLO octet}), respectively.
We include an error band only for the ${\cal O}(p^3)$ fit for clarity.
The physical point shown by the asterisk uses the value of $r_0$ extracted from $f_\pi$,
whereas the cross uses $r^N_0$ determined from the nucleon mass.}
\label{fig:nucleon chiral}
\end{minipage}
\hfill
\begin{minipage}{8cm}
\epsfxsize=8truecm
\epsfysize=6truecm
\vspace*{-2.2cm}
\mbox{\epsfbox{Delta3.9_4.05.ps}}
\caption{Chiral extrapolation of the $\Delta$ mass.
The notation is the same as that in Fig.~\ref{fig:nucleon chiral} but in physical units. Here we also include an error band for the cubic fit.}
\label{fig:Delta chiral}
\end{minipage}
\end{figure}
\begin{figure}[h!]
\begin{minipage}{8cm}
\epsfxsize=8truecm
\epsfysize=6truecm
\mbox{\epsfbox{Sigma3.9_4.05.ps}}
\caption{Chiral extrapolation of the $\Sigma$ mass in physical units. We show chiral extrapolations
linear in $m_\pi^2$, using Eq.~(\ref{cubic}),
${\cal O}(p^3)$, NLO and NNLO in SU(2) chiral perturbation theory given in Eqs.~(\ref{LO octet},
\ref{NLO octet})
respectively. The rest of the notation is the same as that in Fig.~\ref{fig:Delta chiral}.}
\label{fig:Sigma chiral}
\end{minipage}
\hfill
\begin{minipage}{8cm}
\epsfxsize=8truecm
\epsfysize=6truecm
\vspace*{-1.2cm}
\mbox{\epsfbox{Sigmastar3.9_4.05.ps}}
\caption{Chiral extrapolation of the $\Sigma^*$ mass.
The notation is the same as that in Fig.~\ref{fig:Sigma chiral}.}
\label{fig:Sigmastar chiral}
\end{minipage}
\end{figure}
\begin{figure}[h!]
\begin{minipage}{8cm}
\epsfxsize=8truecm
\epsfysize=6truecm
\mbox{\epsfbox{Xi3.9_4.05.ps}}
\caption{Chiral extrapolation of the $\Xi$ mass. The notation is the same as that in Fig.~\ref{fig:Sigma chiral}.}
\label{fig:Xi chiral}
\end{minipage}
\hfill
\begin{minipage}{8cm}
\epsfxsize=8truecm
\epsfysize=6truecm
\mbox{\epsfbox{Xistar3.9_4.05.ps}}
\caption{Chiral extrapolation of the $\Xi^*$ mass.
The notation is the same as that in Fig.~\ref{fig:Sigma chiral}.}
\label{fig:Xistar chiral}
\end{minipage}
\end{figure}
\begin{figure}[h!]
\begin{minipage}{8cm}
\epsfxsize=8truecm
\epsfysize=6truecm
\mbox{\epsfbox{Lambda3.9_4.05.ps}}
\caption{Chiral extrapolation of the $\Lambda$ mass. The notation is the same as that in Fig.~\ref{fig:Sigma chiral}.}
\label{fig:Lambda chiral}
\end{minipage}
\hfill
\begin{minipage}{8cm}
\epsfxsize=8truecm
\epsfysize=6truecm
\mbox{\epsfbox{Omega3.9_4.05.ps}}
\caption{Chiral extrapolation of the $\Omega$ mass. We show chiral extrapolations
linear in $m_\pi^2$, using Eq.~(\ref{cubic}),
and NNLO in SU(2) chiral perturbation theory given in Eq.~(\ref{NNLO Omega}).
The rest of the notation is the same as that in Fig.~\ref{fig:nucleon chiral}.}
\label{fig:Omega chiral}
\end{minipage}
\end{figure}
The expressions for the strange baryon masses to NNLO in $\chi$PT
given in Ref.~\cite{Tiburzi:2008bk} involve in general more unknown low-energy constants,
and only if we perform a constrained fit do we have enough data to extract them.
We found however no real advantage in using constrained fits,
which generally gave larger $\chi^2/{\rm d.o.f.}$
and did not improve the prediction of
the mass at the physical point as compared to the unconstrained fits.
For the nucleon, $\Delta$ and $\Omega$ masses, unconstrained
fits can still be performed with four fit parameters~\cite{WalkerLoud:2008bp},
namely $m^{(0)}$, $c^{(1)}$, $\alpha$ and $\beta$ appearing in the expressions of
NNLO $\chi$PT given below.
\begin{eqnarray}
m^{NNLO}_N(m_\pi)&=& m^{NLO}_N(m_\pi)+m_\pi^4\biggl[\beta_N+\frac{16g_{\Delta N}^2 c_N^{(1)}}{(4\pi f_\pi)^2}
-\frac{9g_{\Delta N}^2}{4m_N^{(0)}(4\pi f_\pi)^2}-\frac{45 g_A^2}{32 m_N^{(0)}(4\pi f_\pi)^2} \biggr] \nonumber \\
&+& \frac{16g_{\Delta N}^2c_N^{(1)}}{(4\pi f_\pi)^2} m_\pi^2 \;{\cal J}(m_\pi,\Delta_{N\Delta},\lambda) +\frac{m_\pi^4}{(4\pi f_\pi)^2}\log\left(\frac{m_\pi^2}{\lambda^2}\right) \biggl[12c_N^{(1)}
-\frac{3\alpha_N}{4\pi f_\pi} -\frac{27 g_A^2}{16 m_N^{(0)}} - \frac{5g_{\Delta N}^2}{2m_N^{(0)}} \biggr] \label{NNLO octet}
\end{eqnarray}
\begin{eqnarray}
m^{NNLO}_\Delta(m_\pi)&=& m^{NLO}_\Delta(m_\pi) + \frac{12 c_\Delta^{(1)}}{(4\pi f_\pi)^2} \;m_\pi^4\log\left(\frac{m_\pi^2}{\lambda^2}\right)
-\frac{25 g_{\Delta \Delta}^2}{48 (m_\Delta^{(0)}+\Delta_{\Delta N})(4\pi f_\pi)^2} \; m_\pi^4\left(\log\left(\frac{m_\pi^2}{\lambda^2}\right)+\frac{19}{10}\right) \nonumber\\
&-&\frac{5 g_{\Delta N}^2}{8(m_\Delta^{(0)}+\Delta_{\Delta N})(4\pi f_\pi)^2}\; m_\pi^4\left(\log\left(\frac{m_\pi^2}{\lambda^2}\right)-\frac{1}{10}\right)
+\frac{4 c_\Delta^{(1)} g_{\Delta N}^2}{(4\pi f_\pi)^2}\; m_\pi^2 \;{\cal J}(m_\pi,-\Delta_{\Delta N},\lambda) \nonumber \\
&+& \beta_\Delta m_\pi^4 + \frac{ \alpha_\Delta}{(4\pi f_\pi)^3}\; m_\pi^4 \log\left(\frac{m_\pi^2}{\lambda^2}\right) \label{NNLO Delta}\\
m^{NNLO}_\Omega(m_\pi)&=& m^{NLO}_\Omega(m_\pi) + \frac{m_{\pi}^4}{(4\pi f_{\pi})^3} \left[ \alpha_{\Omega} \log\left({m_{\pi}^2\over\lambda^2}\right) + \beta_{\Omega} \right]
\label{NNLO Omega}
\end{eqnarray}
where
\begin{equation}
{\cal J}(m,\Delta,\lambda)=\left(m^2-2\Delta^2\right) \log\left(\frac{m^2}{\lambda^2}\right)+ 2\Delta \sqrt{\Delta^2-m^2+i\epsilon}
\; \log\left(\frac{\Delta-\sqrt{\Delta^2-m^2+i\epsilon}}{\Delta+\sqrt{\Delta^2-m^2+i\epsilon}}\right) +2 \Delta^2\log\left(\frac{4\Delta^2}{m^2}\right) \quad.
\end{equation}
and the real part of ${\cal J}$ satisfies
\begin{equation}
{\cal J}(m,-\Delta,\lambda) = \left\{ \begin{array}{ll}{\cal J}(m,\Delta,\lambda) & m<\Delta \\
{\cal J}(m,\Delta,\lambda)-2\pi\Delta\left(m^2-\Delta^2\right)^{1/2} & m>\Delta \\
\end{array}\right.
\label{J symm}
\end{equation}
Using the above Ans\"atze we perform the chiral extrapolations of the lattice results at
$\beta=3.9$ and $\beta=4.05$ given in Tables~V-VII.
In Fig.~\ref{fig:nucleon chiral} we show the fits for the nucleon
in units of $r_0$.
The nucleon mass at the physical point has
been expressed in units of $r_0$ using the value of $r_0$ determined from the nucleon mass as well as from the pion
decay constant. As can be seen, these values are consistent. The ${\cal O}(p^3)$ fit, being the one used to determine
the scale, passes through the physical point. The other curves show the dependence on the chiral Ansatz used.
It comes as no surprise that the NLO result does badly for the nucleon, underestimating
the mass at the physical point, whereas the NNLO fits over-correct
and yield a higher mass. Lattice results
at $\beta=3.9$ and 4.05 expressed in units of $r_0$ fall on a universal
curve confirming that finite cut-off effects are small. Therefore
we corroborate the conclusion that we can
use continuum chiral perturbation theory to extrapolate lattice results at $\beta=3.9$ and $\beta=4.05$.
For the chiral extrapolation of the other baryons we use the scale
determined from the nucleon mass to convert to physical units.
We show in Figs.~\ref{fig:Sigma chiral}, \ref{fig:Xi chiral}, \ref{fig:Lambda chiral} the
chiral extrapolations for the octet baryon masses and in
Figs. \ref{fig:Delta chiral}, \ref{fig:Sigmastar chiral},
\ref{fig:Xistar chiral}, \ref{fig:Omega chiral} the corresponding fits for the decuplet baryons given in physical units.
We emphasize that the physical point is not included in these fits.
The LO expression describes well the lattice results but leads to extrapolated
values inconsistent with the experimental point.
The ${\cal O}(p^3)$ HB$\chi$PT expansion given in
Eqs.~(\ref{LO octet}) and (\ref{LO decuplet})
with two fit parameters $m^{(0)}$ and $c^{(1)}$
provides a good description of lattice data and the results extrapolated to the physical point
are in general in agreement with experiments.
The NLO leads to a clear improvement in the case of the $\Lambda$ and $\Xi$ masses,
whereas for the rest of the baryons the improvement is marginal.
Apart from the preceding remarks, there is no clear advantage in using higher order fits,
especially NNLO, which even turns out to be numerically unstable for the case of the
$\Delta$ and $\Omega$ masses.
Therefore our main conclusion is that ${\cal O}(p^3)$ HB$\chi$PT
provides a reasonable description of the nucleon and $\Delta$ masses, whereas NLO SU(2) $\chi$PT does so for the
strange baryon masses,
yielding values at the physical point that are
in agreement with experiment.
\begin{table}[h!]
\begin{center}
\begin{tabular}{lcccccccc}
\hline\hline
& $\sigma_N$ & $\sigma_\Lambda$ & $\sigma_{\Sigma^{\rm Av.}}$ & $\sigma_{\Xi_{\rm Av.}}$ & $\sigma_{\Delta^{{\rm Av.}}}$ & $\sigma_{\Sigma^{\ast {\rm Av.}}}$ & $\sigma_{\Xi^{\ast {\rm Av.}}}$ & $\sigma_{\Omega}$ \\
\hline\hline
{${\cal O}(p^3)$} & 64.2(8) & 34.7(9) & 37.1(8) & 9.7(1.1) & 62.2(1.1) & 38.0(1.7)& 17.3(1.3) & 6.3(1.3) \\
fit with cubic term & 33.4(6.9) &33.7 (9.8) &35.6(10.6) & 26.3 (7.7) & 25.7 (2.5) & 24.2 (6.4) & 32.4 (13.4) & 2.9(1.4) \\
$NLO$ & 92.5(7) & 52.8(8) &43.3(9) &17.2(1.0) & 79.5(1.0)&44.1(1.7) & 27.9(1.3) & 6.3(1.3) \\
\hline
\end{tabular}
\caption{$\sigma$-term in MeV using the fit parameters determined from ${\cal O}(p^3)$ $\chi$PT, using a cubic fit Eq.~(\ref{cubic}) and NLO. We used the
scale from the nucleon
mass to convert to physical units.}
\label{tab:sigma term}
\end{center}
\end{table}
\begin{figure}[h!]
\begin{minipage}{8cm}
\epsfxsize=7.5truecm
\epsfysize=12truecm
\mbox{\epsfbox{octet.ps}}
\caption{\label{fig:octet} Comparison of masses for the low lying octet
baryons. Results from this work are shown by the
filled (black) triangles for $L=2.1$~fm and (blue) squares for $L=2.7$~fm with
$a=0.089$~fm and with the open (red) triangles for $L=2.1$~fm and $a=0.070$~fm.
Results with the hybrid action (LHPC) are shown with the (green) asterisks for
$a=0.124$~fm and results using $N_f=2+1$
Clover fermions (PACS-CS) with
the open (orange) circles and $a=0.0907$~fm.
For the nucleon we also show results
using $N_f=2+1$ asqtad improved staggered fermions (MILC) denoted by the filled (light blue) circles.
The physical masses are shown by the (purple) star. }
\end{minipage}
\hfill
\begin{minipage}{8cm}
\epsfxsize=7.5truecm
\epsfysize=12truecm \vspace*{-3cm}
\mbox{\epsfbox{decuplet.ps}}
\caption{\label{fig:decuplet} Comparison of masses for the low lying decuplet
baryons. The notation is the same as that of Fig.~\ref{fig:octet}. }
\end{minipage}
\end{figure}
We use the relation
$m_\pi^2\sim \mu$ to evaluate the nucleon $\sigma$-term by computing
$m_\pi^2\frac{dM_N}{dm_\pi^2}$. Using our ${\cal O}(p^3)$ fit we find
$\sigma_N=64.2(9)$~MeV in agreement with the value given in Ref.~\cite{Alexandrou:2008tn}.
Generalizing the relation
we can evaluate the corresponding $\sigma$-terms for the other
hadrons. We list in Table~\ref{tab:sigma term}
the values we obtain using the nucleon mass to set the scale.
As can be seen, the value extracted depends on the fitting Ansatz. In
the most interesting case of the nucleon the result of ${\cal O}(p^3)$
changes by
two standard deviations if the coefficient of the cubic term in $m_\pi$
is fitted. In the case of the $\Lambda$, fitting the cubic term gives the
same value as that obtained at ${\cal O}(p^3)$, compatible
with that of the nucleon. This is
another indication supporting the argument presented above in favor of the
presence of a cubic term in $m_\pi$ of size comparable to that of the nucleon. In fact
the main conclusion of this exercise is that allowing the
coefficient of the cubic term to be determined from the data produces a
$\sigma$-term that, for all baryons except the $\Omega$, is comparable within errors to the value of the nucleon $\sigma$-term. Comparing to the results of NLO we can see that for the nucleon
this fit produces too much curvature as already observed for instance in
Ref.~\cite{WalkerLoud:2008bp}. For the other baryons a reasonable value is obtained
depending on the quality of
the fits as seen in the figures.
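The evaluation of the $\sigma$-term from the ${\cal O}(p^3)$ fit can be sketched numerically. In the sketch below the coefficient of the $m_\pi^3$ term is fixed by $g_A$ and $f_\pi$ as in the ${\cal O}(p^3)$ expression, while the fit parameters $m_0$ and $c_1$ are hypothetical round numbers of plausible magnitude, not the values extracted in this work.

```python
# Sketch with assumed, illustrative parameters (not this work's fit code):
# evaluate the sigma-term via sigma = m_pi^2 dM/dm_pi^2 applied to the
# O(p^3) HBChPT form  M(m_pi) = m0 - 4 c1 m_pi^2 - (3 gA^2/(32 pi fpi^2)) m_pi^3.
import math

G_A, F_PI = 1.267, 0.0924                        # axial charge; f_pi in GeV
C3 = -3.0 * G_A**2 / (32.0 * math.pi * F_PI**2)  # fixed cubic coefficient, GeV^-2

def nucleon_mass(mpi2, m0, c1):
    """O(p^3) fit form as a function of m_pi^2 (GeV^2); returns GeV."""
    return m0 - 4.0 * c1 * mpi2 + C3 * mpi2**1.5

def sigma_term(mpi2, c1):
    """sigma = m_pi^2 dM/dm_pi^2 = -4 c1 m_pi^2 + (3/2) C3 m_pi^3, in GeV."""
    return mpi2 * (-4.0 * c1 + 1.5 * C3 * math.sqrt(mpi2))

m0, c1 = 0.87, -1.2        # hypothetical values (GeV, GeV^-1)
mpi2_phys = 0.135**2       # physical m_pi^2 in GeV^2
sigma = 1000.0 * sigma_term(mpi2_phys, c1)  # sigma-term in MeV
```

With these illustrative inputs the sketch yields a $\sigma$-term of a few tens of MeV, the same ballpark as the values in Table~\ref{tab:sigma term}.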
\section{Comparison with other lattice results and with experiment}
In this section we show a comparison of recent lattice
results on the baryon masses from various collaborations in Figs.~\ref{fig:octet} and \ref{fig:decuplet}.
For our results we use the lattice spacing determined from the nucleon mass to convert physical units. Results from
the other collaborations are converted to physical units using the lattice spacing that they provide.
The level of agreement of lattice QCD
results using a variety of fermion discretization schemes
before taking the continuum limit or other lattice artifacts into account
is quite impressive. Small discrepancies seen mainly in
the decuplet masses can be attributed to lattice artifacts.
In
particular, results using asqtad improved staggered fermions may suffer the most
from discretization errors. The MILC collaboration has simulations on finer lattices
and an update on the masses is expected in the near future. We note
that results very close to the physical point obtained
using Clover fermions from the PACS-CS collaboration~\cite{Aoki:2008sm}
may have large finite volume effects due to the fact
that $m_\pi L<3.5$ in this simulation.
Finally we show our continuum results
in Fig.~\ref{fig:spectrum}. We take the
continuum limit using results at $\beta=3.9$ and $\beta=4.05$ after
interpolating at a given value of $r_0m_\pi$.
The continuum values used
are collected in Tables XIII and XIV for the octet and decuplet, respectively.
Residual cut-off effects that may result
from using Eq.~(\ref{s-mass}) to estimate $\mu_s$ at $\beta=4.05$ are not included in the systematic errors.
For the nucleon and the $\Delta$ we use
${\cal O}(p^3)$ to extrapolate to the physical point as done in our previous
work~\cite{Alexandrou:2008tn}. For the strangeness non-zero baryons we use
NLO SU(2)
HB$\chi$PT to
extrapolate to the physical point. In the statistical error we have added
the error arising from the uncertainty in $r_0^N$, i.e.\ the difference in
the resulting masses when we use $r_0^N=0.471$~fm and $r_0^N=0.471\pm 0.021$~fm to
set the scale.
As can be seen, our results compare well with
experiment within the estimated uncertainties.
\begin{figure}[h!]
\mbox{\epsfxsize=10.cm\epsfysize=10.cm\epsffile{spectrum_final.eps}}
\caption{The octet and decuplet spectrum.
Data are shown in the continuum and at the physical point.
The chiral extrapolation uses ${\cal O}(p^3)$ for the nucleon and $\Delta$,
and NLO SU(2) HB$\chi$PT
for the rest.}
\label{fig:spectrum}
\end{figure}
\section{Summary and Conclusions}
The focus of this work is
the computation of the low-lying baryon masses
using twisted mass fermions at maximal twist.
It is in line with ongoing
efforts by other lattice collaborations worldwide
to predict the baryon spectrum from first principles.
Comparison of lattice results with experiment is regarded
as an important benchmark for
lattice QCD and justifies the use of
different lattice actions, each with different systematic errors.
For example, the twisted mass action, with only one parameter to tune,
provides automatic ${\cal O}(a)$ improvement. However
it restores isospin symmetry only in
the continuum limit.
We have examined the consequences of isospin breaking
on the baryon masses and found them
to be either small or compatible
with zero. On our finer lattice at $\beta=4.05$ the maximal
isospin violation in the octet is found only
in the case of the cascade, where we find $m_{\Xi^0}-m_{\Xi^-}\sim 50$~MeV.
Finite volume corrections are estimated in the case of the nucleon and found
to be smaller than statistical errors.
The continuum extrapolation using results at $\beta=3.9$ and $\beta=4.05$
is verified using a finer lattice at $\beta=4.2$ in the case of the nucleon,
supporting the analysis carried out. Therefore this study shows that both
cut-off effects and finite volume corrections are small and continuum
results can be extracted using lattice data at $\beta=3.9$ and $4.05$.
An investigation of the Gell-Mann-Okubo relations has been carried out. For the
octet baryon masses we find that these relations are
satisfied at all pion masses even at a non-vanishing lattice spacing.
For the decuplet baryon masses we see deviations and it will be
interesting to study these relations
at finer values of the lattice spacing and smaller quark masses.
Comparison of the results at given lattice spacings
with those of other collaborations
reveals consistency among groups using improved actions
with lattice spacing being smaller than 0.1~fm.
This is a non-trivial consistency check of the several
lattice formulations directly on lattice data, without
the necessity of any extrapolations.
This level of agreement
among different lattice actions is a welcome outcome of
the collective effort of several collaborations.
The final continuum results at the physical limit shown in Fig.~\ref{fig:spectrum}
are in excellent agreement with
experiment. The largest uncertainty in the final value comes
from systematic errors
in setting the scale, which are an order of magnitude larger
than statistical errors.
Besides the masses, we have extracted the $\sigma$-term from the various fits.
At ${\cal O}(p^3)$ in $\chi$PT we find a value of $64(1)$~MeV
for the nucleon
$\sigma$-term. Allowing the coefficient of the cubic term in $m_\pi$ to
be determined from the data yields a smaller value of $39(12)$~MeV, albeit with
a large statistical error. Fitting with the latter Ansatz produces for all
baryons except the $\Omega$ a value of the $\sigma$-term compatible
with that of the nucleon. Clearly this is a result that requires further study.
In particular results at smaller pion masses will help to better
determine the curvature of fits.
The next step for the ETM collaboration is to perform the analysis
using a dynamical strange quark. Within the twisted mass formalism this
is accomplished by simulating a non-degenerate doublet.
Such $N_f=2+1+1$ simulations are already available at two values of the lattice spacing~\cite{Baron:2008xa} comparable
to the ones studied in this work.
This future study will provide a nice comparison to the
present work and enable us to gauge unquenching effects in the
strange quark sector.
\section*{Acknowledgments}
We would like to thank all members of ETMC for a
very constructive and enjoyable collaboration and the many fruitful
discussions that took place during the development of this work.
This work was performed using HPC resources from GENCI at IDRIS,
Grant 2009-2009052271. We have also largely benefited from
computer and storage resources in the CCIN2P3 (Lyon).
Computer time for this project was also
made available to us by the
John von Neumann-Institute for Computing on the JUMP and
Jugene systems at the research center
in J\"ulich and the Stella system at the
Donald Smits Center for Information Technology in Groningen.
We thank the staff members for their kind and sustained support.
This work has been supported in part by the DFG
Sonder\-for\-schungs\-be\-reich/ Trans\-region SFB/TR9.
This work was partly supported by funding received from the
Cyprus Research Promotion Foundation under contracts EPYAN/0506/08,
KY-$\Gamma$/0907/11/ and TECHNOLOGY/$\Theta$E$\Pi$I$\Sigma$/0308(BE)/17.
\section{Introduction}
With the advent of suitable technology for high angular resolution
spectroscopy, from space with {\em Hubble Space Telescope} and
from the ground with adaptive optics assistance, it has become
possible to measure the masses of the central black holes in
many nearby galaxies. Observations of nearby galaxies have
led to the identification of scaling relationships that then allow
us to estimate masses of black holes in distant galaxies and
thus determine, at least in principle, the mass function for
supermassive black holes, both locally and over cosmic time.
Consequently, there have been tremendous advances in our
understanding of the evolution of the supermassive black hole
population and that of the galaxies that host them over the
history of the universe. Despite this incredible progress, it is
important to understand that supermassive black hole
measurement is a field in its infancy: even the methods deemed
most reliable have systematic uncertainties that have not been
mitigated in a completely satisfactory way. Masses based on
stellar dynamics and gas dynamics are highly model dependent
and are only as reliable as the underlying assumptions. An
obvious example is the recent work of
\cite[Gebhardt \& Thomas (2009)]{GebThom09}
who found that the mass of
the black hole in M\,87, widely regarded as
one of the most secure measurements, changed by a factor of
two with the introduction of a dark matter halo into the models.
It is also worth noting that only two black hole measurements,
the Milky Way (based on orbital motion of stars in
the Galactic Center) and NGC 4258 (based on motions of
megamasers), meet the formal criterion of measuring the mass
on a sufficiently compact scale that the black-hole nature of the
central source is proven. In all other present measurements, it
remains an article of faith that the central source is a
singularity, although it is hard to see practically how these
masses could be anything else. We will therefore adhere to
the convention of referring to these objects as black holes.
Fundamentally, mass measurements are based on how the
central black holes accelerate nearby matter, either stars or gas.
The advantage of using stellar dynamics is that stars respond
only to gravitational forces. The corresponding disadvantage is
that high angular resolution is necessary; measurement of
stellar motions at large distances from the nucleus makes the
mass determinations more uncertain. Gas motions, on the other
hand, allow us to probe closer to the nucleus; indeed, in the
case of reverberation mapping, angular resolution is irrelevant
since time resolution substitutes for it. The disadvantage of
using gas motions is, of course, that gas also responds to forces
other than gravity, with radiation pressure being the main
concern (see \S\ref{section:radpress}).
It is important to distinguish between {\em direct} and {\em
indirect} methods of mass measurement. {\em Direct
methods} are based on dynamics of gas or stars accelerated by
the central black hole. This would include stellar dynamics, gas
dynamics, and reverberation mapping. {\em Indirect methods},
on the other hand, are based on observables correlated with the
mass of the central black hole. Indirect methods include the
\Msigma\ and \MLbulge\ relationships, the fundamental plane,
as well as AGN scaling relationships such as the BLR
radius-luminosity relationship that we will discuss later.
We also sometimes refer to ``primary,'' ``secondary,'' and
``tertiary'' methods, with the difference
based on model-dependence and assumptions required. Primary methods
require fewer assumptions and have little model dependence:
the masses of the black holes in the Milky Way and NGC
4258, based on proper motions and radial velocities, are thus
certainly primary. Stellar dynamics and gas dynamics are
sometimes also regarded as primary methods as they do not
hinge on other results. On the other hand, reverberation
mapping, as it has been practiced to date, currently depends on
other ``primary direct'' methods for a zero point for the mass
scale, so it is technically a ``secondary method,'' though
it is a still a ``direct method.''
\begin{figure}[b]
\begin{center}
\includegraphics[width=5.2in]{peterson_fig1.eps}
\caption{Methods and applications of black hole mass measurements
in galactic nuclei.}
\label{fig1}
\end{center}
\end{figure}
Figure 1 illustrates in flowchart format how masses are
determined for both quiescent and active galaxies. It is slightly
oversimplified in that stellar and gas dynamics have been used
to measure central masses of a very limited number of AGNs,
as discussed below. One-dimensional (1-d)
reverberation mapping refers to measurement of mean emission-line
response times, i.e., reverberation mapping as it has been
practiced to date. As noted above, this requires some external
calibration of the mass scale and this is currently provided by
assuming that the \Msigma\ relationship is the same in
quiescent and active galaxies. With two-dimensional (2-d)
reverberation mapping, we aim to produce a velocity--delay
map which should give us sufficient information to model the
BLR dynamics and determine the central masses;
model-dependence will still be something of an issue, as it is with
stellar and gas dynamics. But this will eventually put
reverberation mapping on a par with stellar and gas dynamics
as a ``primary direct'' method and it will be possible to
compare directly the \Msigma\ relationships in quiescent and
active galaxies rather than assume that they are the same. This
may occur in the very near future: the first reliable detections
of velocity-dependent lags were reported at this conference
(see the papers by Bentz and Denney, these proceedings).
In Table 1, we list published masses of a few relatively nearby
AGNs for which black hole measurements have been made
using more than one method. The most accurate measurement
is provided by megamaser motions in NGC 4258. However,
megamaser sources like this are exceedingly rare. Moreover,
megamasers occur in type 2 AGNs where our view of the
central engine is obscured, thus making reverberation mapping
impossible. Mass measurements by stellar dynamics and/or gas
dynamics have been made for a handful of nearby AGNs (e.g.,
Hicks et al., these proceedings), but in both cases, significant future
progress depends on dramatic improvements in angular
resolution. Thus, if only by default, reverberation mapping is
the near-term future for AGN black hole mass
measurement. The downside to this is that reverberation
mapping is a resource-intensive technique
based on high-precision spectrophotometry. However,
as we see below, reverberation results already provide us
with a shortcut to mass estimation.
\begin{table}
\begin{center}
\caption{Comparison of Black Hole Mass Measurements}
\label{tab1}
\begin{tabular}{lccc}\hline
{\bf Method} & {\bf NGC 4258} & {\bf NGC 3227} & {\bf NGC 4151} \\
& \multicolumn{3}{c}{(Units $10^6\,M_\odot$)}\\ \hline
\underline{Direct methods:} \\
Megamasers & $38.2 \pm 0.1^{[1]}$ & N/A & N/A \\
Stellar dynamics & $33 \pm 2^{[2]}$ & 7--20$^{[3]}$ & $\leq 70^{[4]}$ \\
Gas dynamics & 25--260$^{[5]}$ & $20^{+10}_{-4}$ $^{[6]}$ & $30^{+7.5}_{-22}$ $^{[6]}$ \\
Reverberation & N/A & $7.63^{+1.62}_{-1.72}$ $^{[7]}$ & $46\pm5^{[8]}$ \\
\underline{Indirect methods:}\\
$M_{\rm BH}$--$\sigma_{*}^{[9]}$ & 13 & 25 & 6.1 \\
$R$--$L$ scaling$^{[10]}$ & N/A & 15 & 29--120 \\
\hline
\end{tabular}
\end{center}
$^{[1]}$Herrnstein et al.\ (2005).
$^{[2]}$Siopis et al.\ (2009).
$^{[3]}$Davies et al.\ (2006).
$^{[4]}$Onken et al.\ (2007).
$^{[5]}$Pastorini et al.\ (2007).
$^{[6]}$Hicks \& Malkan (2008).
$^{[7]}$Denney et al.\ (2010).
$^{[8]}$Bentz et al.\ (2006b).
$^{[9]}$G\"{u}ltekin et al.\ (2008).
$^{[10]}$McGill et al.\ (2008).
\end{table}
\section{Reverberation-Based Mass Measurements}
Reverberation mapping
(\cite[Blandford \& McKee 1982]{BMcK82};
\cite[Peterson 1993]{Pet93})
makes use of the time-delayed
response of the broad emission lines to continuum variations to
provide a size estimate for the broad-line region (see
Peterson 2001
for a fairly thorough tutorial). At the present time,
emission-line time delays, or lags $\tau$, have been measured
for $\sim45$ AGNs, in nearly all cases for \Hbeta, but in many
cases for the other Balmer lines and in a few cases for multiple
lines extending into the ultraviolet. We can then obtain the
mass of the black hole (or, more accurately, the mass enclosed
out to the distance $R =c\tau$) by combining this with the
emission-line width $\Delta V$, i.e.,
\begin{equation}
\label{equation:virial}
M_{\rm BH} = \frac {f \left( \Delta V^2 R \right)}{G},
\end{equation}
where $f$ is a dimensionless factor of order unity that depends
on the structure, dynamics, and orientation of the BLR. If, for
example, the BLR is a thin ring of material in a Keplerian
orbit of inclination $i$ around the black hole, then
$f \propto 1/\sin^2 i$. Of course, the real geometry of the BLR must
be much more complex than a simple ring or disk; unified
models suggest that we see type 1 AGNs at inclinations $0\deg
\lesssim i \lesssim 45\deg$, in which case the observed line-of-sight
velocities in a ring or disk are a small projection of the orbital
speed. Indeed, for $i\lesssim 20\deg$, the projected line-of-sight
velocity width of the line is so small that
we would underestimate the mass of the black hole by more than
an order of magnitude. We note in passing that
emission-line lags are unaffected by inclination as long as the
system has axial symmetry and the line emission is isotropic.
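As a rough numerical illustration of Eq.~(\ref{equation:virial}), the sketch below uses representative round numbers (a 20-day lag, a 2000\,\kms\ line width, and $f = 5.5$); these are not measurements of any particular object.

```python
# Sketch: virial black-hole mass M = f * DeltaV^2 * (c * tau) / G.
# Inputs are representative round numbers, not data for a specific AGN.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8              # speed of light, m s^-1
M_SUN = 1.989e30         # solar mass, kg
DAY = 86400.0            # seconds per day

def virial_mass(tau_days, dv_kms, f=5.5):
    """Black-hole mass in solar masses from a lag (days) and line width (km/s)."""
    r = C * tau_days * DAY        # R = c*tau in meters
    dv = dv_kms * 1.0e3           # line width in m/s
    return f * dv**2 * r / (G * M_SUN)

m_bh = virial_mass(tau_days=20.0, dv_kms=2000.0)   # close to 1e8 solar masses
```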
Nevertheless, there is evidence, mostly from radio-loud
systems, that inclination is an important element in the
determination of $f$. For example, the ratio of core-to-lobe
power in double radio sources correlates inversely with line
width
\cite[(Wills \& Browne 1986)]{Wills1986};
core-dominated systems,
where we are looking down the system axis, have narrower lines
than systems at larger inclination. Similarly, flat-spectrum
radio sources, in which again we are observing close to the axis
of the system, have narrower emission lines than steep
spectrum sources, which are seen at higher inclination
\cite[(Jarvis \& McLure 2006)]{Jarvis2006}.
But the differences are not extraordinarily
large: for example, Jarvis \& McLure find that for their sample
of radio-loud AGNs, sources with $\alpha > 0.5$ have a mean
FWHM of 6464\,\kms\ for \Hbeta\ and Mg\,{\sc ii}, while
those with $\alpha < 0.5$ have a mean width of 4990\,\kms.
These results clearly indicate that (1) inclination is important,
but (2) the BLR gas has considerable velocity dispersion in the polar
direction. Given this, we now have to recognize that a direct
comparison between reverberation-based masses and, say,
stellar or gas dynamics, will depend very much on the
generally unknown inclination of the BLR. We should generally
not expect good agreement between reverberation masses
and masses measured by other techniques for individual
galaxies unless the
inclination is known or strongly constrained.
On the other hand, given a sample of AGNs, we can determine
an {\em average} value $\langle f \rangle$ by normalizing the
AGN \Msigma\ relationship to that of quiescent galaxies. A
first attempt to do this yielded a value $\langle f \rangle = 5.5
\pm 1.8$
\cite[(Onken et al.\ 2004)]{Onken04}:
while this intuitively seems a little high, we can
again plausibly attribute this to inclination
effects and our predisposition
the inclinations of type 1 AGNs are typically rather
small. The main liability of this particular calibration
of the mass scale is that it
is based on low-redshift, low-luminosity objects.
All of the $\sigma_*$ measurements are based on the Ca\,{\sc
ii} triplet which is unfortunately redshifted into
atmospheric water vapor
bands at $z > 0.06$. We are now attempting to measure
$\sigma_*$ at higher redshift by using observations of the CO
bandhead in the {\em H}-band (1.6\,\micron), a process greatly
aided by use of adaptive optics with integral field units that
concentrate the bright AGN into the central few pixels
and at the same time integrate a few arcseconds in each
coordinate without loss of spectral resolution (see the
papers by Dasyra and Grier in
these proceedings). In addition to new results at higher
redshifts and luminosities, the LAMP collaboration (Bentz,
these proceedings) is adding additional masses and velocity
dispersions at the low-mass end of the AGN distribution. As a
consequence of expanding the range in black hole mass, the
slope of the AGN \Msigma\ relationship is becoming better
defined and shows improved consistency with that for
quiescent galaxies (Woo, these proceedings).
If we assume that the AGN \Msigma\ relationship has the
same zero point and slope as that for quiescent galaxies, a
maximum likelihood analysis places an upper limit on the
intrinsic scatter in the relationship of $\Delta \log M_{\rm BH}
\approx 0.40$\,dex, which is consistent with what is found for
quiescent galaxies (G\"{u}ltekin, these proceedings). This is a
reassuring indication that the masses derived from
reverberation mapping are reliable. There are, of course, other
such indications, including the direct comparisons of a few
cases as in Table 1. Also, in each case in which lags have been
measured for multiple emission lines in a single source, there is
an anticorrelation between lag and line width that is consistent
with a constant virial product $\Delta V^2 R$
\cite[(Peterson \& Wandel 1999)]{Peterson1999}.
Finally, AGNs show the same correlation
between black hole mass and bulge luminosity
(the \MLbulge\ relationship) that is seen in quiescent galaxies
\cite[(Bentz et al.\ 2009b)]{Bentz2009b}.
Indeed, a maximum likelihood analysis
gives an upper limit to intrinsic scatter, $\Delta \log M_{\rm
BH} \approx 0.17$\,dex, which is actually smaller than the
intrinsic scatter in this relationship for quiescent galaxies
($\Delta \log M_{\rm BH} \approx 0.38$\,dex;
\cite[G\"{u}ltekin et al.\ 2008]{Gultekin2008}).
\section{\boldmath BLR Scaling with Luminosity: The $R$--$L$ Relationship}
The emission-line spectrum from an ionized gas, setting aside
elemental abundances, is largely controlled by the particle
density $n$ and an ionization parameter
\begin{equation}
U = \frac{Q({\rm H})}{4 \pi R^2 n c},
\end{equation}
where $Q({\rm H})$ is the number of hydrogen-ionizing
photons emitted per second by the central object. To some low
order of approximation (principally ignoring the Baldwin
Effect), all AGN spectra have similar emission-line flux
ratios and similar emission-line equivalent widths,
suggesting that $U$ and $n$ do not vary much from one source
to another. If we further assume that $Q({\rm H}) \propto L$
where $L$ is the luminosity of the central source in some
arbitrary band, we are led to the na\"{\i}ve prediction that $R
\propto L^{1/2}.$
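Explicitly, at fixed $U$ and $n$ the definition of $U$ gives
\begin{equation}
R = \left( \frac{Q({\rm H})}{4 \pi U n c} \right)^{1/2}
\propto Q({\rm H})^{1/2} \propto L^{1/2}.
\end{equation}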
Thus since the early days of photoionization equilibrium
calculations, some such relationship between the BLR radius
and the continuum luminosity (hereafter simply the $R$--$L$
relationship) of the power-law form $R \propto L^{\alpha}$
had been anticipated. Observationally the $R$--$L$
relationship for \Hbeta\ first emerged as statistically significant
with a slope $\alpha \approx 0.7$ with the addition of 17
PG QSOs by
\cite[Kaspi et al.\ (2000)]{Kaspi00}
to the existing similar-sized
reverberation-mapping database on Seyfert 1 galaxies. It is
sometimes stated that this result was enabled by extension of the
luminosity range by nearly two orders of magnitude; it is more
correct to say, however, that it was due to the fact that the
QSOs are so much more luminous than their host galaxies that
starlight contributes negligibly to the luminosity measurement.
Indeed, it is surprising how much starlight contaminates AGN
luminosity measurements in the optical, though this problem is
certainly exacerbated in reverberation monitoring data as large
spectrograph apertures and spectral extraction windows are
used to mitigate the effects of variable seeing on the total flux
measurements. Bentz et al.\ (2006a, 2009a)
have attempted to
account for starlight contamination of optical luminosities of
AGNs by using unsaturated high-resolution optical images
obtained with \HST/ACS. The surface brightness distribution is
modeled so that the AGN point source can be removed and the
galaxy contamination estimated by simulated aperture
photometry. When this is done, the slope of the $R$--$L$
relationship for \Hbeta\ is found to be $\alpha \approx 0.5$,
remarkably consistent with the na\"{\i}ve prediction.
This leads us to the remarkable realization that it is possible to
estimate masses of the black holes in AGNs from only a single
spectrum: by measuring the luminosity, we infer the BLR
radius and we can combine this with the emission-line width to
measure the mass
\cite[(Wandel, Peterson, \& Malkan 1999)]{Wandel1999}.
This is variously referred to as the
``photoionization method,'' ``single-epoch spectrum method,''
or simply ``masses from scaling relationships.''
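The resulting recipe can be sketched in a few lines. The $R$--$L$ zero point \texttt{K} and the virial factor \texttt{f} below are illustrative placeholders, not the calibrated values; only the slope $\alpha = 0.5$ is taken from the text.

```python
# Sketch of a single-epoch ("scaling relationship") mass estimate: infer the
# BLR radius from an assumed R-L relation R ~ L^0.5, then apply the virial
# relation. K and f are placeholders for illustration, not calibrated values.
import math

G_CGS = 6.674e-8                 # gravitational constant, cm^3 g^-1 s^-2
M_SUN_G = 1.989e33               # solar mass, g
LT_DAY_CM = 2.998e10 * 86400.0   # centimeters per light-day

def blr_radius_ltdays(lam_L_lam, K=1.5, alpha=0.5):
    """BLR radius in light-days from lambda*L_lambda(5100 A) in erg/s."""
    return 10.0**(K + alpha * math.log10(lam_L_lam / 1.0e44))

def single_epoch_mass(lam_L_lam, line_width_kms, f=1.0):
    """Virial mass M = f * (DeltaV)^2 * R / G, in solar masses."""
    r_cm = blr_radius_ltdays(lam_L_lam) * LT_DAY_CM
    dv = line_width_kms * 1.0e5      # line width in cm/s
    return f * dv**2 * r_cm / (G_CGS * M_SUN_G)
```

For a source with $\lambda L_\lambda \sim 10^{44}$\,erg\,s$^{-1}$ and a line width of a few thousand \kms, this sketch returns a mass of order $10^8\,M_\odot$, consistent in scale with the reverberation-based values discussed above.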
An interesting question at this point is how much intrinsic
scatter there is in the $R$--$L$ relationship. This is of keen interest for
two reasons: first, it sets a fundamental limit on the
accuracy with which we can estimate masses from a single
spectrum, and second, it dictates future observing strategies for
further refinement of the $R$--$L$ relationship. If the intrinsic
scatter is large, then many more reverberation measurements
will be needed to beat down the statistical noise in the
relationship. On the other hand, if the scatter is small, then
improvements in the determination of the $R$--$L$
relationship will come from obtaining {\em better}
reverberation data on a smaller number of objects.
To proceed with this, we want to start by using only the
very best data sets, since these are ones where intrinsic
scatter is most easily isolated: there remain elements of
both art and luck in reverberation studies, and the ugly
truth is that not all of the light curves give as clean
a result as we would like. To minimize the impact of the
lower-quality data, Catherine Grier and I independently visually
inspected all available light curves (for the optical
continuum and \Hbeta\ emission line only) with the intent of
identifying the ``best'' reverberation data: we used a very
simple criterion, that you could easily see the same pattern of
variability in the continuum and \Hbeta\ light curves
and could estimate the
lag simply by inspection
(see Figure \ref{fig2}). About half of the existing light
curves met this criterion. A maximum likelihood analysis of
these data indicates that the intrinsic scatter amounts to $\Delta
\log R \approx 0.11$\,dex. This is really quite remarkable when
one considers that the typical formal errors on this subset of the
best reverberation data amount to $\Delta \log R \approx
0.09$\,dex. From this, we conclude that, at least over the
luminosity and redshift range where it has been calibrated
($41.5 \lesssim \log \lambda L_{\lambda} ({\rm erg\ s}^{-1}) \lesssim
45$ at $\lambda = 5100$\,\AA\ and $z \approx 0$), the $R$--$L$
relationship is as {\em statistically}
effective as reverberation for obtaining BLR
sizes and central black hole masses. If you are wondering
why we emphasize the ``statistical'' aspect of $R$--$L$ scaling,
we refer you to the results of the ``indirect methods'' in
Table 1: the results for individual sources can be quite misleading,
as they certainly are in the case of NGC 4151, which
is a notorious outlier in the \Msigma\ relationship.
\begin{figure}[b]
\begin{center}
\includegraphics[width=4.5in]{peterson_fig2.eps}
\caption{The $R$--$L$ relationship for H$\beta$.
The luminosity is $\lambda L_\lambda (5100\,{\rm \AA})$
and the BLR radius is measured in light days.
The open circles indicate the highest-quality
measurements. The slope of the fit to the highest-quality
data is indistinguishable from that to all the
data, but the scatter is only 0.11 dex.}
\label{fig2}
\end{center}
\end{figure}
\section{\boldmath The $R$--$L$ Relationship at High
Redshift and Indirect Mass Measurements}
\subsection{The $R$--$L$ Relationship for UV Lines}
With the $R$--$L$ relationship, we are able to explore the
black hole mass function, not only locally but at high redshift,
enabling us to trace the history of black hole growth. Some
exploratory work has been done on this and in fact there are
claims that the \Msigma\ and \MLbulge\
relationships evolve over time (e.g., Woo, these proceedings),
although at least
some published claims of evolution of the \Msigma\
relationship are clearly attributable to Malmquist bias.
A lot of additional careful work will be required to sort this out.
An obvious problem with estimating black hole masses at high
redshift, of course, is that the \Hbeta\ emission line is
redshifted out of the visible window at only modest redshift.
We are thus forced to use other emission lines for which
reverberation measurements are actually quite scarce: of the
other potential broad lines for mass measurement, there is but a
single reliable measurement of a Mg\,{\sc ii} lag
\cite[(Metzroth, Onken, \& Peterson 2006)]{Metzroth2006}
and a bare handful of C\,{\sc iv}
lags, though these span quite a large range in luminosity,
$39.5 \lesssim \log \lambda L_{\lambda}({\rm erg\ s}^{-1}) \lesssim
47$ at $\lambda = 1350$\,\AA\ in the rest frame
(Kaspi et al.\ 2007 and references therein).
One of these objects is at $z=2.17$, and
the rest are at $z < 0.06$.
The C\,{\sc iv}\,$\lambda1549$ emission line is especially
important as it allows us to reach quite large redshifts in the
optical.
\cite{Vestergaard2002}
first used C\,{\sc iv} to show that
quasars with black hole masses of $\sim10^9\,M_\odot$ were
already assembled by $z \sim 4$. She assumed that the
C\,{\sc iv} $R$--$L$ relationship has the same slope as
$\Hbeta$ and calibrated the C\,{\sc iv}-based mass scale by
using single-epoch spectra of reverberation-mapped AGNs.
\subsection{Characterizing the Velocity Field}
If there is an ``Achilles' heel'' in determining masses
based on individual spectra, it is probably {\em not}
the $R$--$L$ relationship, but rather how we characterize the
velocity field of the BLR. The line profile is
expected to be sensitive to both inclination and
Eddington rate (e.g., \cite[Collin et al. 2006]{Collin2006}).
In reverberation mapping
experiments, we obtain the best results by measuring
the line width in the ``rms spectrum'' formed by
computing the pixel-by-pixel variance in
all of the spectra obtained in the reverberation
experiment. This isolates the variable part of the
emission line, which arises in the very gas for which
we are measuring the time delay. There are two common
ways to characterize the line width, either FWHM or
the line dispersion $\sigma_{\ell}$, which is the second
moment of the line. The latter seems to give somewhat
better results (\cite[Collin et al. 2006]{Collin06};
\cite[Peterson et al. 2004]{Peterson04}).
For either line-width measure, the modest
scatter, e.g., in the \MLbulge\ relationship, suggests
that the mass measurements are not particularly sensitive
to how we characterize the line width and thus
the velocity field of the BLR.
The situation is rather different with single-epoch
spectra where contamination by other features becomes
a serious issue. \cite{Denney2009} have explored
this in some detail for H$\beta$ by seeing how well
single spectra from reverberation campaigns reproduce
the actual reverberation-based masses. They find some
obvious effects (e.g., sensitivity of FWHM to proper
removal of the narrow-line component) and some
more subtle effects (e.g., $\sigma_\ell$ is badly affected
by blending, particularly when the AGN continuum is
in a faint state and the lines are especially broad).
Arguments about whether particular UV lines can be used
to estimate masses are really about how well the
widths of different lines in AGN spectra are correlated.
The width of the Mg\,{\sc ii} $\lambda2798$ line appears to be a good
surrogate for the width of
\Hbeta\ (Woo, these proceedings).
\cite[Shen et al.\ (2008)]{Shen2008}, using a large collection of
SDSS spectra, find that $\log \left(\left[{\rm
FWHM}(\Hbeta)\right]/
\left[ {\rm FWHM}({\rm Mg}\,\mbox{\sc ii})\right]\right) = 0.0062$, with
scatter of only $\sim 0.11$\,dex.
\cite[Onken \& Kollmeier (2008)]{Onken2008}
find that the masses derived from \Hbeta\ and
Mg\,{\sc ii} differ as a function of Eddington ratio, although
they argue that it is possible to correct for this bias.
The accuracy that can be obtained using the C\,{\sc iv}
line remains somewhat controversial (see the contributions
to these proceedings by Vestergaard, Netzer, and
Trakhtenbrot). On the positive side, the
limited existing reverberation data for C\,{\sc iv} suggest that
the $R$--$L$ relationship for C\,{\sc iv} has the same slope as
\Hbeta\
\cite[(Kaspi et al.\ 2007)]{Kaspi07}.
Moreover, reverberation-based
masses based on C\,{\sc iv} are consistent with those based on
every other line (i.e., the virial product $\Delta V^2 R/G$ is
consistent for all the lines).
On the other hand, since C\,{\sc iv} is a
resonance line, we often see absorption in its blue wing due to
outflows, complicating accurate measurement of the line width.
Indeed, we sometimes see, particularly in the case of
narrow-line Seyfert 1s (NLS1s),
an extended blue wing of the emission line, again presumably
due to outflowing gas, which makes measuring the
line width problematic.
It seems likely, however, that we should be
able to calibrate out luminosity-dependent effects and
simply avoid spectra with strong absorption features
such as broad absorption lines (BALs).
\subsection{Use of Scaling Relationships}
The scaling relationships that lead to black-hole mass
estimates must not be used blindly; indeed they should be used
with great caution. We need to keep in mind that when we
think we are measuring mass, we are really measuring
\begin{equation}
M_{\rm BH} \propto \Delta V^2 R \propto \Delta V^2
L^{1/2}.
\end{equation}
Similarly, when we think we are measuring Eddington ratio,
we are really measuring
\begin{equation}
\frac{L}{L_{\rm Edd}} \propto \frac{L}{M_{\rm BH}}
\propto
\frac{L}{\Delta V^2 L^{1/2}} \propto \frac{L^{1/2}}
{\Delta V^2}.
\end{equation}
Consequently, {\em any} correlations
among mass, Eddington ratio, and luminosity must be eyed
with great suspicion.
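This degeneracy can be made explicit with a short numerical sketch; all proportionality constants are set to unity and the values are in arbitrary units, purely for illustration:

```python
# Toy illustration of the mass and Eddington-ratio estimators above.
# Proportionality constants are set to 1; all quantities are in
# arbitrary units and purely illustrative.

def virial_mass(delta_v, lum):
    """Mass estimate, M proportional to (Delta V)^2 * L^(1/2)."""
    return delta_v**2 * lum**0.5

def eddington_ratio(delta_v, lum):
    """Eddington-ratio estimate, L/L_Edd proportional to L^(1/2) / (Delta V)^2."""
    return lum**0.5 / delta_v**2

# Doubling the luminosity at fixed line width inflates *both* estimators
# by sqrt(2), so "mass" and "Eddington ratio" correlate through L alone.
print(virial_mass(1.0, 2.0) / virial_mass(1.0, 1.0),
      eddington_ratio(1.0, 2.0) / eddington_ratio(1.0, 1.0))
```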
\section{On the Possible Importance of Radiation Pressure}
\label{section:radpress}
Given a nominal bolometric correction, most AGNs are thought to
be radiating at about $0.1 L_{\rm Edd}$ where
$L_{\rm Edd}$ is the Eddington
luminosity, the maximum luminosity at which the source is
stable against disruption by radiation pressure.
In the case of objects like NLS1s,
the luminosities may approach $L_{\rm Edd}$. It stands to reason
that radiation pressure on the BLR gas might be an important
dynamical force. In an elegant paper,
\cite[Marconi et al.\ (2008)]{Marconi2008}
argue that radiation pressure could be an important
factor in the BLR dynamics. Radiation pressure, like gravity,
is diluted geometrically (i.e., $\propto R^{-2}$),
making it hard to distinguish between a high central
mass plus radiation pressure and a lower central mass
and no radiation pressure: failure to account for
radiation pressure could thus lead us to underestimate
the mass of the black hole. Marconi et al.\ argue that
the relative importance of radiation pressure ultimately
comes down to the inertia of the BLR gas clouds: if the
clouds have sufficiently high column density
($N_{\rm H} > 10^{23}$\,cm$^{-2}$),
then the effect of radiation pressure on the gas dynamics
is negligible. Interestingly, the most successful
photoionization equilibrium calculations suggest
column densities of this order. Marconi et al.\
then modify equation (\ref{equation:virial}) to a form like
\begin{equation}
\label{equation:virialplusradiation}
M_{\rm BH} = f \Delta V^2 R/G + g L,
\end{equation}
where the first term on the right hand side is the
mass based on the $R$--$L$ scaling relationship and
the second term accounts for radiation pressure.
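The behavior of the second term can be seen in the following toy calculation of equation (\ref{equation:virialplusradiation}); the values of $f$, $g$, and the physical inputs are placeholders chosen for illustration, not the calibrations of Marconi et al.:

```python
# Toy sketch of the radiation-pressure-corrected virial mass,
# M_BH = f * (Delta V)^2 * R / G + g * L. All numbers below are
# placeholders, not calibrated values.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def mass_with_radiation(f, delta_v, radius, g, lum):
    return f * delta_v**2 * radius / G + g * lum

# Two sources with the same virial product but different luminosities:
# the corrected masses differ only through the purely
# luminosity-dependent term g * L.
m_low = mass_with_radiation(1.0, 2e6, 1e17, 1e-5, 1e37)
m_high = mass_with_radiation(1.0, 2e6, 1e17, 1e-5, 1e39)
print(m_high - m_low)  # equals g * (L_high - L_low)
```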
Marconi et al.\ claim two successes with this
formulation: first, the scatter between single-epoch
mass estimates and reverberation measurements is
reduced and the difference is no longer
luminosity-dependent (their Figure 1), and second,
NLS1s no longer fall below the normal
\Msigma\ relationship. It is not clear,
however, that anything physical can be inferred
from this. The additional luminosity dependence
of equation (\ref{equation:virialplusradiation})
can be attributed to the fact that the
reverberation-based sample of AGNs is strongly
Malmquist biased: the reverberation-mapped AGNs
were selected because they are among the apparently
brightest known AGNs, and there is a clear correlation
between their black hole masses and luminosities. Thus the
second term in equation (\ref{equation:virialplusradiation})
is bound to reduce scatter, regardless of
whether or not radiation pressure is actually physically
important. Moreover, the argument about NLS1s appears to be
circular: if accounting for radiation pressure increases
their black hole masses enough to place them on the
\Msigma\ relationship, then their Eddington
rates correspondingly drop to values comparable to
other AGNs, which then raises the question of the origin
of the unusual properties of NLS1s. Also,
\cite[Netzer (2009)]{Netzer2009}
drew large samples of Type 1 and Type 2 AGNs from SDSS
and showed that while their [O\,{\sc iii}] luminosity
distributions and conventionally computed black hole masses
have similar distributions, the radiation-pressure
corrected mass distributions were markedly different,
arguing against the importance of the radiation
pressure term in equation (\ref{equation:virialplusradiation}).
\cite[Marconi et al.\ (2009)]{Marconi2009}
countered that Netzer's result is an inevitable
consequence of assuming a fixed column density for the
BLR clouds; unfortunately, this argument exposes the
fact that there is currently no accurate formulation
for estimating black hole masses in the presence of radiation pressure.
At the present time, whether or not radiation pressure
needs to be accounted for in our mass calculations is
not clear, though many of us continue to work on this issue.
We believe that NLS1s probably
provide the best testing ground, and one obvious step
is to further investigate the
\Msigma\ relationship in these sources. Much of the previous
work on NLS1s has necessitated using the [O\,{\sc iii}]
narrow line widths as a surrogate for stellar
velocity dispersions, and this is one area
where we can make an improvement fairly easily by
measuring stellar velocity dispersions in the near IR.
\section{Conclusions}
While great progress has been made in measuring the masses
of the central black holes in
both active and quiescent galactic nuclei, there are still
significant uncertainties. My own guess is that masses
measured by all direct methods are probably uncertain by a factor of a few,
though this could be an underestimate for reverberation-based
masses if, for example,
radiation pressure turns out to be important. Aside from
this possibility, the most significant
unknown in reverberation-based
masses is the inclination of the system. This, of course,
is merely one aspect of a rather larger issue: it is also not
clear that we have identified an unbiased way to
characterize the emission-line widths that enter into the
mass determination. The size of the BLR, at least as measured
with H$\beta$, seems to be remarkably well-characterized
(though we could be fooling ourselves there, too) with
uncertainties as small as $ \Delta \log R \approx 0.1$\,dex.
\begin{acknowledgements}
We are grateful for support of our work by
the National Science Foundation through grant
AST-0604066 and by NASA through grants
HST-GO-10833 and HST-GO-11661 from
Space Telescope Science Institute to
The Ohio State University.
\end{acknowledgements}
\section*{Introduction}
Noble-metal nanoparticles are widely used in plasmonics because their high electrical conductivity supports the excitation of low-loss localized surface plasmon resonances (LSPRs)\cite{MAI-07}. The ensuing optical response of metal nanoparticles can be tuned by variation of their size, shape, or arrangement\cite{BRY-08,SCH-10}. Strong enhancements of the optical field at the surface of metal nanoparticles and in their immediate vicinity are exploited, for instance, in biological and chemical sensors\cite{MAY-08,VER-11}, photovoltaics\cite{ERW-16}, and optoelectronics\cite{PAC-07}. Nanoparticles made of ferromagnetic metals also support the excitation of LSPRs\cite{CHE-11,BON-11,MAC-13,MAC-15}. Since plasmon resonances and magneto-optical activity are strongly linked in ferromagnetic nanoparticles, their magneto-optical spectra can be tailored by employing design rules known from plasmonics. Conversely, nanoscale ferromagnets enable active control of light via magnetization reversal in an external field. Both effects are relevant for technology and are studied in the field of magnetoplasmonics\cite{ARM-13,BOS-16,FLO-18}.
Large ohmic losses in ferromagnetic metals lead to significant damping of plasmon resonances. To overcome this limitation, hybrid structures comprising ferromagnetic and noble metals have been explored as magnetoplasmonic systems. Examples include Au/Co/Au trilayers\cite{TEM-10}, nanosandwiches\cite{GON-08}, and nanorods\cite{ARME-16}; core-shell Co/Ag or Co/Au nanoparticles\cite{WAN-11,SON-12} and nanowires\cite{TOA-14}; and Au/Ni nanoring resonators\cite{ATM-18}. Contacting subwavelength ferromagnetic elements and noble metals results in materials that can be considered as optical alloys. Various non-contacting realizations have also been investigated. Dimers of two metal nanodisks that are separated by a dielectric layer are particularly attractive as they allow for a strong redistribution of the optical near-field\cite{NOR-04}. In vertical dimers containing noble and ferromagnetic metals, this effect has been exploited to enlarge the magneto-optical response via an increase of the optical field in the ferromagnetic layer\cite{BAN-12} or induction of magneto-optical activity on the low-loss noble metal\cite{ARM-13-2,SOU-14}.
Another way to circumvent large ohmic losses in ferromagnetic nanoparticles involves the excitation of collective plasmon modes. In periodic arrays of metal nanoparticles, constructive interference of the optical fields from individual scatterers produces narrow and intense surface lattice resonances (SLRs)\cite{KRA-08,AUG-08,HUM-14}. Low-loss SLRs in arrays of noble metal nanostructures are used in several contexts, including sensing\cite{OFF-11,SHE-13,LOD-13}, lasing\cite{ZHO-13,HAK-17}, and metamaterials\cite{ZHA-12,POR-13}. In ferromagnetic nanoparticle arrays, SLRs enhance the magneto-optical activity and provide versatility in the design of magneto-optical spectra via tailoring of lattice symmetry or particle shape\cite{KAT-15,MAC-16}. Checkerboard patterns of pure Ni and Au nanodisks have also been studied\cite{KAT-16}. In this hybrid approach, far-field diffractive coupling between the different particles enhances the magneto-optical response via the excitation of low-loss SLRs and induction of magneto-optical activity on the Au nanodisks.
Here, we report on tunable magnetoplasmonics in lattices of Ni/SiO$_2$/Au dimers (Fig. \ref{fig:1}). Our structures combine two aforementioned approaches, namely, the integration of noble and ferromagnetic metals in vertical dimers\cite{BAN-12,ARM-13-2,SOU-14} and ordering of magneto-optically active elements in periodic arrays\cite{KAT-15,MAC-16}. Because the noble metal and ferromagnetic constituents of our lattices interact via optical near-fields within dimers and far-fields between dimers, the hybrid arrays provide a rich playground for the design of optical and magneto-optical effects. First, we present an analytical model to evaluate the effect of dimer polarizability and lattice periodicity on the magnetoplasmonic properties of our system. Next, we compare model calculations and experiments on dimer arrays with different lattice constants. As reference, we discuss experiments on arrays with Au and Ni nanodisks.
\begin{figure}[t]
\centering
\includegraphics[width=0.5\linewidth]{Fig1.pdf}
\caption{(a) Scanning electron microscopy (SEM) image and (b,c) schematic of a Ni/SiO$_2$/Au dimer lattice. The dimers are patterned onto a glass substrate with Au nanodisks at the bottom and Ni nanodisks at the top. The two metal disks are separated by 15 nm SiO$_2$. Optical and magneto-optical measurements are performed with linearly polarized light at normal incidence ($E$-field along $x$). A perpendicular magnetic field ($H$) saturates the magnetization of the Ni nanodisks. We study dimer arrays with different lattice constants ($a_\mathrm{x}$, $a_\mathrm{y}$) and compare the results to those measured on arrays with Au and Ni nanodisks.}
\label{fig:1}
\end{figure}
\section*{Modeling}
We start our analysis by calculating the optical and magneto-optical response of an individual plasmonic nanoparticle based on the modified long wavelength approximation (MLWA)\cite{MOR-09}. The absorption and emission properties of a metal nanoparticle are described by its volume polarizability $\alpha_\mathrm{e}'$, which relates the induced polarization $\boldsymbol{P}$ to the incident electric field $\boldsymbol{E}_\mathrm{i}$. If the particle is small compared to the wavelength of incident light, the electric field inside the particle $\boldsymbol{E}_1$ is approximately uniform. Following classical electrodynamics, the electric field inside the nanoparticle is given by $\boldsymbol{E}_1=\boldsymbol{E}_\mathrm{i}+\boldsymbol{E}_\mathrm{d}$, where $\boldsymbol{E}_\mathrm{d}$ is the depolarization field. $\boldsymbol{E}_\mathrm{d}$ can be calculated by assigning a dipole moment $d\boldsymbol{p}=\boldsymbol{P}dV$ to each volume element $dV$ of the nanoparticle and calculating the retarded depolarization field $d\boldsymbol{E}_\mathrm{d}$ generated by $d\boldsymbol{p}$ in the nanoparticle center\cite{MAC-13-2}. This gives
\begin{equation}
{\boldsymbol{E}_\mathrm{d} = \int d\boldsymbol{E}_\mathrm{d} = -L_\mathrm{d}\boldsymbol{P}}.
\label{depolarization-field}
\end{equation}
\noindent Here, $L_\mathrm{d}$ is the depolarization factor describing interactions between polarizable volume elements inside the particle\cite{BAR-82}. The nanodisks that we consider in our study can be approximated as ellipsoids\cite{MOR-09,BOH-83}. For ellipsoidal particles, Eq. \ref{depolarization-field} can be solved analytically. This gives
\begin{equation}
{L_\mathrm{d} = L-\frac{ik^3V}{6\pi}I-\frac{k^2V}{4\pi}D}.
\label{depolarization-tensor}
\end{equation}
\noindent The three terms in Eq. \ref{depolarization-tensor} include static ($L$) and dynamic ($D$) depolarization factors that account for the particle shape and a radiative reaction correction ($ik^3V/6\pi$)\cite{MAC-13-2}. To calculate $L$ and $D$, we use the integrals given in Refs. \citenum{MOR-09,MAC-13-2}. The net dipole moment of an ellipsoidal particle ($d\boldsymbol{p}=\boldsymbol{P}dV$) can be written as
\begin{equation}
{\boldsymbol{p} = (\epsilon_\mathrm{d}-\epsilon_\mathrm{m})\boldsymbol{E}_{1}V = (\epsilon_\mathrm{d}-\epsilon_\mathrm{m})(\boldsymbol{E}_\mathrm{i}+\boldsymbol{E}_\mathrm{d})V = \alpha_\mathrm{e}\boldsymbol{E}_\mathrm{i}},
\label{dipole-moment}
\end{equation}
\noindent where $\epsilon_\mathrm{d}$ and $\epsilon_\mathrm{m}$ are the permittivity of the particle and surrounding medium, respectively, $\alpha_\mathrm{e}$ is the particle polarizability ($\alpha_\mathrm{e}=\alpha_\mathrm{e}'V$), and $V$ is the particle volume. Combining Eqs. \ref{depolarization-field} and \ref{dipole-moment} gives
\begin{equation}
{\alpha_\mathrm{e} = \frac{(\epsilon_\mathrm{d}-\epsilon_\mathrm{m})}{I+L_\mathrm{d}\epsilon_\mathrm{m}^{-1}(\epsilon_\mathrm{d}-\epsilon_\mathrm{m})}V}.
\label{polarizability}
\end{equation}
\noindent The permittivity of a particle changes in the presence of a large external magnetic field or spontaneous magnetization. In our experiments, we use perpendicular magnetic fields of $\pm400$ mT to saturate the magnetization of Ni nanodisks along the $z$-axis. The permittivity tensor for this configuration contains two off-diagonal components\cite{ZVE-97}
\begin{equation}
\renewcommand*{\arraystretch}{1.2}
\epsilon_\mathrm{d} =
\begin{pmatrix}
\epsilon_\mathrm{xx} & -iQm_\mathrm{z} & 0 \\
iQm_\mathrm{z} & \epsilon_\mathrm{yy} & 0 \\
0 & 0 & \epsilon_\mathrm{yy}
\end{pmatrix}\,,
\label{permittivity-tensor}
\end{equation}
\noindent where $m_\mathrm{z}$ is the perpendicular magnetization and $Q$ is the Voigt magneto-optical constant. We use tabulated data from Ref. \citenum{VIS-93} to calculate the permittivity of Ni. Because the field-induced diamagnetic moment of Au is small ($m_\mathrm{z}\approx0$) compared to the magnetization of Ni, we set the off-diagonal terms of $\epsilon_\mathrm{d}$ to zero for this material. We use optical constants from Ref. \citenum{JOH-72} to calculate the permittivity of Au.
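As a numerical illustration of Eqs. \ref{polarizability} and \ref{permittivity-tensor}, the sketch below evaluates the in-plane ($2\times2$) polarizability tensor of a particle with and without perpendicular magnetization. The permittivities, Voigt constant, depolarization factor, and volume are illustrative placeholders rather than the tabulated Ni and Au data used in our calculations:

```python
import numpy as np

# In-plane (x, y) block of the MLWA polarizability with a magneto-optically
# active permittivity tensor. A single complex depolarization factor Ld is
# assumed for both axes; all numbers are illustrative placeholders.
def polarizability(eps_xx, eps_yy, Q, m_z, eps_m, Ld, V):
    I = np.eye(2, dtype=complex)
    eps_d = np.array([[eps_xx, -1j * Q * m_z],
                      [1j * Q * m_z, eps_yy]], dtype=complex)
    delta = eps_d - eps_m * I
    # alpha = V * (I + (Ld/eps_m) * (eps_d - eps_m*I))^(-1) @ (eps_d - eps_m*I)
    return V * np.linalg.inv(I + (Ld / eps_m) * delta) @ delta

# Off-diagonal (magneto-optical) components vanish without magnetization
# and switch on when m_z is saturated.
a0 = polarizability(-10 + 1j, -10 + 1j, 0.2 + 0.1j, 0.0, 2.31, 0.3 + 0.01j, 1.0)
a1 = polarizability(-10 + 1j, -10 + 1j, 0.2 + 0.1j, 1.0, 2.31, 0.3 + 0.01j, 1.0)
print(abs(a0[0, 1]), abs(a1[0, 1]))
```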
Following Eq. \ref{polarizability}, non-zero off-diagonal components in $\epsilon_\mathrm{d}$ lead to off-diagonal terms in the polarizability tensor. Macroscopically, this produces a rotation and ellipticity in the polarization of reflected (magneto-optical Kerr effect) or transmitted (Faraday effect) light. For nanoparticles, the microscopic origin of magneto-optical activity can be understood by considering the excitation of two orthogonal LSPRs. One of the LSPRs, which can be described as electric dipole $\boldsymbol{p}$, is driven by the incident electric field $\boldsymbol{E}_\mathrm{i}$. For linearly polarized light at normal incidence, the induced dipole is oriented in-plane along $\boldsymbol{E}_\mathrm{i}$. If the nanoparticle exhibits perpendicular magnetization ($m_\mathrm{z}$), a second electric dipole is induced orthogonal to $\boldsymbol{E}_\mathrm{i}$ and $m_\mathrm{z}$ by spin-orbit coupling. The amplitude and phase relations of the two excited dipoles determine the rotation and ellipticity of light polarization upon reflection or transmission\cite{MAC-13}. In our study, the incident electric field is oriented along the $x$-axis, the magnetization of Ni is saturated by a perpendicular magnetic field, and the spin-orbit induced dipole is oriented along $y$ (Fig. \ref{fig:1}(b)). Hereafter, we refer to the directly excited dipole ($p_\mathrm{x}$) as optical dipole. The dipole along the orthogonal direction ($p_\mathrm{y}$) is labeled as magneto-optical dipole.
If dimers are formed from Au and Ni nanodisks, their optical near-fields couple. To describe this effect, we consider the electric field at each dipole position as the sum of the incident electric field and the scattered field from the dipole in the other disk. This results in two coupled equations
\begin{equation}
\begin{aligned}
\boldsymbol{p}_\mathrm{Ni} = \alpha_\mathrm{Ni}(\epsilon_0\boldsymbol{E}_\mathrm{i1}+\boldsymbol{G}\boldsymbol{p}_\mathrm{Au}), \\
\boldsymbol{p}_\mathrm{Au} = \alpha_\mathrm{Au}(\epsilon_0\boldsymbol{E}_\mathrm{i2}+\boldsymbol{G}\boldsymbol{p}_\mathrm{Ni}).
\end{aligned}
\label{dipole-Ni-Au}
\end{equation}
\noindent Here, $\boldsymbol{E}_\mathrm{i1}$ and $\boldsymbol{E}_\mathrm{i2}$ define the incident electric field at the Ni and Au nanodisks (including a phase difference), and $\boldsymbol{G}$ is a dyadic Green’s function describing how the electric field that is produced by one dipole propagates to the other \cite{DAP-94}. $\boldsymbol{G}$ is given by
\begin{equation}
{\boldsymbol{G} = \frac{e^{ikR}}{4{\pi}\epsilon_{0}R^3}\Big(\big((kR)^2 + ikR - 1\big)I - \big((kR)^2 + 3ikR - 3\big)\frac{\boldsymbol{R}{\otimes}\boldsymbol{R}}{R^2}\Big)},
\label{G-factor}
\end{equation}
\noindent where $\boldsymbol{R}$ is a vector connecting the dipoles in the two disks, $R$ is its amplitude, and $k=2n\pi/\lambda$, with $n$ the refractive index of the spacer layer and surrounding medium. Since electric dipoles are excited in the dimer plane, they mostly couple along the $z$-axis. Consequently, $\boldsymbol{R}{\otimes}\boldsymbol{R}$ in Eq. \ref{G-factor} is approximately zero. The optical and magneto-optical spectra of dimers are defined by dipole excitations along $x$ and $y$. Considering near-field coupling between the Ni and Au nanodisks (Eq. \ref{dipole-Ni-Au}), the effective dipole moment along these axes can be written as
\begin{equation}
\renewcommand*{\arraystretch}{1.2}
\begin{pmatrix}
p_\mathrm{x} \\
p_\mathrm{y}
\end{pmatrix} =
\begin{pmatrix}
p_\mathrm{Ni,x} + p_\mathrm{Au,x} \\
p_\mathrm{Ni,y} + p_\mathrm{Au,y}
\end{pmatrix} =
\begin{pmatrix}
\alpha_\mathrm{xx} & -\alpha_\mathrm{xy} \\
\alpha_\mathrm{xy} & \alpha_\mathrm{yy}
\end{pmatrix}
\begin{pmatrix}
E_\mathrm{x} \\
0
\end{pmatrix},
\label{dimer-dipole}
\end{equation}
\noindent where $\alpha_\mathrm{xx}$, $\alpha_\mathrm{yy}$, and $\alpha_\mathrm{xy}$ are the diagonal and off-diagonal components of the polarizability tensor ($\alpha$) of a single Ni/SiO$_2$/Au dimer. We note that while off-diagonal components are absent in the polarizability matrix of Au, a magneto-optical dipole is induced on the Au nanodisk ($p_\mathrm{Au,y}$) because of near-field coupling to $p_\mathrm{Ni,y}$ (Eq. \ref{dipole-Ni-Au}). The low-loss Au nanodisk thus contributes to the magneto-optical activity of the dimer\cite{ARM-13-2,SOU-14}.
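A minimal numerical sketch of Eq. \ref{dipole-Ni-Au} for one Cartesian component is given below; the scalar coupling constant $g$ stands for the relevant element of $\boldsymbol{G}$, and the polarizability and coupling values are illustrative placeholders, not fitted material parameters:

```python
import numpy as np

# Coupled Ni/Au dipole pair along one Cartesian axis: each dipole is driven
# by the incident field plus the field scattered by the other disk. The
# scalar g stands for one Green's-function element (the R-tensor term is
# dropped for in-plane dipoles). All values are illustrative.
def coupled_dipoles(alpha_ni, alpha_au, g, E_ni, E_au, eps0=8.854e-12):
    # Rearranged: p_Ni - alpha_Ni*g*p_Au = alpha_Ni*eps0*E_Ni, and likewise for Au
    A = np.array([[1.0, -alpha_ni * g],
                  [-alpha_au * g, 1.0]], dtype=complex)
    b = np.array([alpha_ni * eps0 * E_ni,
                  alpha_au * eps0 * E_au], dtype=complex)
    p_ni, p_au = np.linalg.solve(A, b)
    return p_ni, p_au

# With the coupling switched off (g = 0) each disk reduces to the
# isolated-particle result p = alpha * eps0 * E.
p0_ni, p0_au = coupled_dipoles(2e-21 + 1e-22j, 5e-21 + 2e-22j, 0.0, 1.0, 1.0)
p1_ni, p1_au = coupled_dipoles(2e-21 + 1e-22j, 5e-21 + 2e-22j, 1e19, 1.0, 1.0)
print(abs(p1_ni - p0_ni))  # finite shift induced by near-field coupling
```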
If dimers are ordered into a periodic array, the electric field at each lattice position is a superposition of the incident radiation and dipolar fields from other dimers. The optical and magneto-optical response of a periodic dimer array thus depend on the polarizability of single dimers ($\alpha$) and their two-dimensional arrangement. To take far-field coupling between dimers into account, we define an effective lattice polarizability\cite{GAR-07,ZOU-04}
\begin{equation}
\alpha_\mathrm{eff} = \frac{1}{1/\alpha - S},
\label{effective-polarizability}
\end{equation}
\noindent where $S$ is the lattice factor. For an infinite array, this parameter is given by\cite{DEJ-14,EVL-12}
\begin{equation}
S = \sum_j e^{ikr_j}\Big(\frac{(1-ikr_j)(3\cos^2(\theta_j) - 1)}{{r_j}^3} + \frac{k^2\sin^2(\theta_j)}{r_j}\Big),
\label{lattice-factor}
\end{equation}
\noindent where $r_j$ is the distance between dimers and $\theta_j$ is the angle between the effective dipole moment and the vector connecting the dimers. For a two-dimensional lattice under normal incidence radiation, we can thus write
\begin{equation}
\renewcommand*{\arraystretch}{1.2}
\alpha_\mathrm{eff} = \Bigg(
\begin{pmatrix}
\alpha_\mathrm{xx} & -\alpha_\mathrm{xy} \\
\alpha_\mathrm{xy} & \alpha_\mathrm{yy}
\end{pmatrix}^{-1} -
\begin{pmatrix}
S_\mathrm{x} & 0 \\
0 & S_\mathrm{y}
\end{pmatrix}\Bigg)^{-1},
\label{effective-polarizability2}
\end{equation}
\noindent where $S_\mathrm{x}$ and $S_\mathrm{y}$ are the lattice factors for radiation along $x$ and $y$. Since $\alpha_\mathrm{xx},\alpha_\mathrm{yy}\gg\alpha_\mathrm{xy}$, the diagonal components of the effective lattice polarizability only depend on the diagonal terms of $\alpha$ and $S$. The off-diagonal components of $\alpha_\mathrm{eff}$ contain more intricate parameter relations. By carrying out matrix operations (see Supplementary Note 1), we find
\begin{equation}
\begin{aligned}
\alpha_\mathrm{eff,xx} = \frac{1}{1/\alpha_\mathrm{xx} - S_\mathrm{x}}, \\
\alpha_\mathrm{eff,yy} = \frac{1}{1/\alpha_\mathrm{yy} - S_\mathrm{y}},
\end{aligned}
\label{effective-polarizability-xx}
\end{equation}
\noindent and
\begin{equation}
\alpha_\mathrm{eff,xy} = \frac{\alpha_\mathrm{xy}}{\alpha_\mathrm{xx}\alpha_\mathrm{yy}(1/\alpha_\mathrm{xx} - S_\mathrm{x})(1/\alpha_\mathrm{yy} - S_\mathrm{y})}.
\label{effective-polarizability-xy}
\end{equation}
\noindent The effective dipole moments of the dimer lattice are thus given by
\begin{equation}
\renewcommand*{\arraystretch}{1.2}
\begin{pmatrix}
p_\mathrm{eff,x} \\
p_\mathrm{eff,y}
\end{pmatrix} =
\begin{pmatrix}
\alpha_\mathrm{eff,xx} & -\alpha_\mathrm{eff,xy} \\
\alpha_\mathrm{eff,xy} & \alpha_\mathrm{eff,yy}
\end{pmatrix}
\begin{pmatrix}
E_\mathrm{x} \\
0
\end{pmatrix}.
\label{lattice-dipole}
\end{equation}
\noindent Equation \ref{effective-polarizability-xy} reveals a complex relationship between the polarizability of the dimers and their periodic arrangement. Because magneto-optical dipoles ($p_\mathrm{y}$) are excited orthogonal to the optical dipoles ($p_\mathrm{x}$), the polarizability and lattice factor along the $y$-axis also affect $\alpha_\mathrm{eff,xy}$\cite{MAC-16}.
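The matrix algebra of Eqs. \ref{effective-polarizability} and \ref{effective-polarizability2} can be checked numerically with the short sketch below; the complex values of $\alpha$ and $S$ are placeholders rather than results of Eq. \ref{lattice-factor} for a real array:

```python
import numpy as np

# Effective lattice polarizability alpha_eff = (alpha^(-1) - S)^(-1) for an
# illustrative single-dimer tensor alpha and diagonal lattice factors S.
# The complex values below are placeholders, not computed lattice sums.
def alpha_eff(axx, ayy, axy, Sx, Sy):
    alpha = np.array([[axx, -axy],
                      [axy, ayy]], dtype=complex)
    S = np.diag([Sx, Sy])
    return np.linalg.inv(np.linalg.inv(alpha) - S)

axx, ayy, axy = 4.0 + 1.0j, 3.5 + 0.8j, 0.02 + 0.01j
Sx, Sy = 0.05 + 0.2j, 0.03 + 0.1j
ae = alpha_eff(axx, ayy, axy, Sx, Sy)

# With axy much smaller than axx and ayy, the diagonal elements reduce to
# the scalar form 1/(1/alpha - S) for each axis.
print(ae[0, 0], 1.0 / (1.0 / axx - Sx))
```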
For linearly polarized light at normal incidence, the optical reflectance and magneto-optical activity are linked simply to the effective lattice polarizability. In this geometry, the reflectance of a periodic plasmonic array is proportional to the scattering cross section\cite{BOH-83}
\begin{equation}
R \propto \sigma_\mathrm{sca} = \frac{k^4}{6\pi}|\alpha_\mathrm{eff,xx}|^2,
\label{reflectivity}
\end{equation}
\noindent and thus
\begin{equation}
R \propto |p_\mathrm{eff,x}|^2.
\label{reflectivity-dipole}
\end{equation}
\noindent The magneto-optical Kerr angle $\Phi$ of a dimer lattice is defined as the amplitude ratio of the magneto-optical ($p_\mathrm{eff,y}$) and optical ($p_\mathrm{eff,x}$) dipoles
\begin{equation}
\Phi = \Bigg|\frac{p_\mathrm{eff,y}}{p_\mathrm{eff,x}}\Bigg| = \Bigg|\frac{\alpha_\mathrm{eff,xy}}{\alpha_\mathrm{eff,xx}}\Bigg|.
\label{Kerr-angle}
\end{equation}
\noindent Following Eqs. \ref{reflectivity-dipole} and \ref{Kerr-angle}, it is possible to extract a quantity that is proportional to $|p_\mathrm{eff,y}|$ by multiplying the Kerr angle $\Phi$ by the square root of the optical reflectance $R$
\begin{equation}
|p_\mathrm{eff,y}|\propto\Phi\sqrt{R}.
\label{MO-dipole}
\end{equation}
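The identity expressed by Eqs. \ref{reflectivity-dipole}--\ref{MO-dipole} can be verified with a two-line numerical sketch; the complex dipole values are arbitrary, and all prefactors are absorbed into the units:

```python
import math

# Since R is proportional to |p_x|^2 and the Kerr angle is |p_y/p_x|,
# the combination Phi*sqrt(R) is proportional to |p_y| alone.
# The dipole values below are arbitrary and only check the identity.
def kerr_angle(p_x, p_y):
    return abs(p_y / p_x)

def reflectance(p_x):
    return abs(p_x) ** 2  # prefactor k^4/(6*pi) etc. absorbed into units

p_x = 3.0 - 1.5j
p_y = 0.04 + 0.02j
print(kerr_angle(p_x, p_y) * math.sqrt(reflectance(p_x)), abs(p_y))
```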
\section*{Results and Discussion}
To experimentally explore near- and far-field coupling in dimer arrays, we fabricated periodic lattices of Ni/SiO$_2$/Au dimers on glass substrates using electron-beam lithography\cite{POU-18}. The lower Au and upper Ni nanodisks of the dimers have a diameter of $\sim$120 nm and $\sim$110 nm, respectively, and both disks are 15 nm thick. The two metals are separated by 15 nm SiO$_2$. The lattice constants along $x$ and $y$ are 400 nm, 450 nm, or 500 nm. For comparison, we also fabricated arrays of pure Au and Ni nanodisks. The Au nanodisks have the same size as in the dimers. Because the optical reflectance from pure Ni nanodisks is small, we increased their diameter and thickness to $\sim$130 nm and 18 nm, respectively. In addition, we fabricated samples with randomly distributed dimers and nanodisks to characterize the optical and magneto-optical response without SLRs. All measurements were conducted with the nanoparticles immersed in index-matching oil ($n=1.52$). The creation of a homogeneous refractive-index environment enhances the efficiency of far-field coupling between scatterers and, thereby, the excitation of collective SLR modes. More experimental details are given in the Methods section.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Fig2.pdf}
\caption{(a) Optical reflectance ($R$) of randomly distributed Ni/SiO$_2$/Au dimers, Ni nanodisks, and Au nanodisks. (b,c) Measured Kerr angle ($\Phi$) and extracted values of $\Phi\sqrt{R}$ for samples with random dimers and Ni nanodisks. The parameter in (c) is proportional to the magneto-optical dipole amplitude ($|p_\mathrm{y}|$). (d-f) Calculations of $|p_\mathrm{x}|^2$, the magneto-optical Kerr angle ($|p_{y}/p_\mathrm{x}|$), and $|p_{y}|$ for the same nanoparticles. In (d) and (f), the strengths of excited dipoles in the Au and Ni nanodisks of the dimer and their vector sum are plotted separately. These parameters are linked by Eq. \ref{dimer-dipole-moment}. Cosines of the phase difference between dipoles in Au and Ni are depicted in (d) and (f).}
\label{fig:2}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Fig3.pdf}
\caption{Optical reflectance ($R$) of square arrays of (a) Ni/SiO$_2$/Au dimers, (b) Au nanodisks, and (c) Ni nanodisks for three lattice constants. (d-f) Corresponding calculations of $|p_\mathrm{eff,x}|^2$ for the same lattices.}
\label{fig:3}
\end{figure}
We first discuss the optical and magneto-optical response of randomly distributed dimers and nanodisks (Fig. \ref{fig:2}). A filling fraction of 5\% was chosen for these samples to approximately match those of periodic arrays (7\% for $a=400$ nm, 5\% for $a=500$ nm). Because of the low filling fraction, randomly distributed dimers and nanodisks can be considered as non-interacting and, consequently, their optical spectra represent the properties of individual nanoparticles. Figure \ref{fig:2}(a) compares reflectance spectra of randomly distributed Ni/SiO$_2$/Au dimers and Au and Ni nanodisks. Near-field coupling between the Au and Ni disks of dimers red-shifts the LSPR-induced reflectance maximum. The LSPR wavelength of a dimer is measured at $\sim$860 nm, while those of the Au and Ni nanodisks are recorded at $\sim$790 nm and $\sim$720 nm, respectively. The LSPR linewidth of the dimer structure is also larger than that of the Au nanodisk because of dipolar coupling to a higher-loss excitation in Ni. Figure \ref{fig:2}(b) shows the magneto-optical Kerr angle of the dimer and Ni nanodisk. From data in Figs. \ref{fig:2}(a,b) we also extract $\Phi\sqrt{R}$, which is proportional to the magneto-optical dipole amplitude $|p_\mathrm{y}|$ (Eq. \ref{MO-dipole}). For the dimer structure (red line), $|p_\mathrm{y}|$ is the vector sum of a spin-orbit induced magneto-optical dipole in Ni ($p_\mathrm{Ni,y}$) and the dipole moment that it produces on Au ($p_\mathrm{Au,y}$). The values of $|p_\mathrm{y}|$ for the dimer and Ni nanodisk are similar at $\sim$800 nm, despite the latter containing $\sim$70\% more Ni. This result confirms that the Au nanodisk of a dimer contributes to the magneto-optical activity. We also note that $|p_\mathrm{y}|$ of the dimer structure decays more strongly below the resonance wavelength.
This effect is caused by a weakening of the near-field coupling strength at shorter wavelengths, i.e., a decrease of $p_\mathrm{Au,y}$, as illustrated by calculations of the dyadic Green’s function describing dipolar coupling inside the dimer (Supplementary Note 2).
To further delve into the details of near-field coupling in our magnetoplasmonic dimers, we present calculations of $|p_\mathrm{x}|^2$ and $|p_{y}|$ of single nanodisks and dimers in Figs. \ref{fig:2}(d,f). By plotting data in this format, the results can be compared directly to the experimental spectra of Figs. \ref{fig:2}(a,c). We also show the calculated magneto-optical Kerr angle ($|p_\mathrm{y}/p_\mathrm{x}|$) in Fig. \ref{fig:2}(e). In all cases, the wavelengths and lineshapes of plasmon resonances agree well. Main features such as a red-shift of the dimer LSPR are thus reproduced. In the calculations, we can separate how dipole moments in the Au and Ni nanodisks contribute to the optical and magneto-optical response of dimers. Taking the phase difference between excitations in Au and Ni along $x$ and $y$ ($\phi_\mathrm{x}$, $\phi_\mathrm{y}$) into account, the optical and magneto-optical dipoles of dimers are given by
\begin{equation}
\begin{aligned}
|p_\mathrm{x}|^2 = |p_\mathrm{Ni,x} + p_\mathrm{Au,x}|^2 = |p_\mathrm{Ni,x}|^2 + |p_\mathrm{Au,x}|^2 + 2|p_\mathrm{Ni,x}||p_\mathrm{Au,x}|\cos(\phi_\mathrm{x}), \\
|p_\mathrm{y}|^2 = |p_\mathrm{Ni,y} + p_\mathrm{Au,y}|^2 = |p_\mathrm{Ni,y}|^2 + |p_\mathrm{Au,y}|^2 + 2|p_\mathrm{Ni,y}||p_\mathrm{Au,y}|\cos(\phi_\mathrm{y}).
\end{aligned}
\label{dimer-dipole-moment}
\end{equation}
\noindent Analyzing the results of Fig. \ref{fig:2}(f), we find that, in dimers, the maximum magneto-optical dipole strength in Au is about 75\% of that in Ni. The strong $p_\mathrm{Au,y}$ is explained by the large polarizability of Au, enabling $p_\mathrm{Ni,y}$ to effectively induce a magneto-optical dipole moment on Au. The calculations thus confirm the large impact of $p_\mathrm{Au,y}$ on the magneto-optical activity of single dimers.
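Equation \ref{dimer-dipole-moment} is simply the law of cosines for two coherent dipoles. The short check below verifies it against a direct complex sum; the magnitudes and phase are illustrative (the $\sim$0.75 Au/Ni ratio echoes the value quoted above):

```python
import cmath
import math

def dipole_sum_sq(p1, p2, phase):
    """|p1 + p2|^2 for two coherent dipoles with relative phase (law of cosines)."""
    return p1**2 + p2**2 + 2.0 * p1 * p2 * math.cos(phase)

# Cross-check against a direct complex sum: p_Ni taken real, p_Au at phase phi.
p_ni, p_au, phi = 1.0, 0.75, 0.4  # illustrative magnitudes and phase
direct = abs(p_ni + p_au * cmath.exp(1j * phi)) ** 2
assert abs(dipole_sum_sq(p_ni, p_au, phi) - direct) < 1e-12
```

The same identity applies separately to the $x$ (optical) and $y$ (magneto-optical) components.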
We now consider far-field diffractive coupling in dimer lattices. Optical fields from individual scatterers in periodic arrays produce collective SLRs and narrow diffracted orders (DOs) in far-field measurements. The DO wavelengths are given by
\begin{equation}
\sin\theta_k=\sin\theta_i+k\frac{\lambda}{na},
\label{DO}
\end{equation}
\noindent where $\theta_k$ is the angle of the $k^{\mathrm{th}}$ diffracted order, $\theta_\mathrm{i}$ is the angle of incidence, $\lambda$ is the wavelength, $n$ is the refractive index of the embedding medium, and $a$ is the lattice constant. For normally incident light ($\theta_\mathrm{i}=0^\circ$), a Rayleigh anomaly associated with the passing of a DO is measured in reflectance or transmittance spectra when $k\lambda=na$. This corresponds to a transition from an evanescent to a propagating lattice mode if $\sin\theta_k=\pm1$ in Eq. \ref{DO}. For a two-dimensional lattice, the wavelengths of Rayleigh anomalies ($\lambda_\mathrm{DO}$) can be calculated using $\sqrt{p^2+q^2}\,\lambda_\mathrm{DO}=na$, where $p$ and $q$ indicate the order of diffraction along $x$ and $y$. Coupling of a DO with the broader LSPRs of individual nanoparticles produces a SLR with an asymmetric line-shape.
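The Rayleigh-anomaly condition is straightforward to evaluate numerically. The sketch below assumes $n=1.52$ (implied by the $\lambda_\mathrm{DO}=1.52a$ relation used in the text) and reproduces the quoted anomaly wavelengths to within a few nm:

```python
import math

def rayleigh_wavelength(a_nm, p, q, n=1.52):
    """Rayleigh-anomaly wavelength (nm) from sqrt(p^2 + q^2) * lam_DO = n * a."""
    return n * a_nm / math.hypot(p, q)

# (1,0) anomalies for the three lattice constants used in the experiments;
# measured values are ~610, 680 and 760 nm.
for a, lam_meas in [(400, 610), (450, 680), (500, 760)]:
    assert abs(rayleigh_wavelength(a, 1, 0) - lam_meas) < 5
```

The diagonal $(1,1)$ order follows from the same function, e.g. `rayleigh_wavelength(500, 1, 1)` for the additional SLR discussed below.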
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Fig4.pdf}
\caption{(a-c) Real and (d-f) imaginary parts of 1/$\alpha_\mathrm{xx}$ and $S_\mathrm{x}$. The 1/$\alpha_\mathrm{xx}$ curves depict the inverse polarizability of individual Ni/SiO$_2$/Au dimers and Ni and Au nanodisks. $S_\mathrm{x}$ solely depends on the lattice constant. Vertical lines indicate the wavelengths of SLR modes, for which Re(1/$\alpha_\mathrm{xx})-$ Re($S_\mathrm{x}$) = 0 and Im($1/\alpha_\mathrm{xx})-$ Im($S_\mathrm{x}$) is small. From these data, the effective polarizabilities of a periodic array can be calculated.}
\label{fig:4}
\end{figure}
Figures \ref{fig:3}(a-c) show optical reflectance spectra for square arrays of dimers and Au and Ni nanodisks with lattice constants of 400 nm, 450 nm, and 500 nm. For these lattices, Rayleigh anomalies are observed at $\lambda_\mathrm{DO}=$ 610 nm, 680 nm, and 760 nm, respectively, in agreement with $\lambda_\mathrm{DO}=1.52a$. Because $\lambda_\mathrm{DO}$ only depends on the lattice constant, this feature is shared by all arrays. The signal minimum at the diffracted order is followed by a sharp increase of reflectance caused by the excitation of a collective SLR mode. Because the LSPRs of individual dimers and nanodisks are different, hybridization of these modes with the narrow DO produces SLRs with different lineshapes, resonance wavelengths, and intensities. For all particle types, the excitation of a SLR mode significantly enhances the reflectance in comparison to randomly distributed dimers and nanodisks (Fig. \ref{fig:2}(a)). The induced optical dipoles in lattices ($|p_\mathrm{eff,x}|$) are therefore stronger near the SLR wavelength.
To analyze how excitations in the Au and Ni nanodisks of dimers contribute to the optical response of a periodic array, we consider the effective lattice polarizability along the incident electric field ($\alpha_\mathrm{eff,xx}$ in Eq. \ref{effective-polarizability-xx}). Parameter $\alpha_\mathrm{eff,xx}$ depends on the polarizability of individual dimers $\alpha_\mathrm{xx}$ and the lattice factor $S_\mathrm{x}$. In Fig. \ref{fig:4} we plot the real and imaginary parts of $1/\alpha_\mathrm{xx}$ and $S_\mathrm{x}$ for different lattice parameters. Data for the inverse polarizability of Ni and Au nanodisks are also shown. The effective polarizability of a nanoparticle lattice is resonantly enhanced when the real part of the denominator in Eq. \ref{effective-polarizability-xx}, 1/$\alpha_\mathrm{xx}-S_\mathrm{x}$, becomes zero. This condition corresponds to a crossing of the Re(1/$\alpha_\mathrm{xx}$) and Re($S_\mathrm{x}$) curves in Figs. \ref{fig:4}(a-c). The intensity and linewidth of the resulting SLR modes depend on the slope with which Re(1/$\alpha_\mathrm{xx}$) and Re($S_\mathrm{x}$) cross and the imaginary values of these parameters. For large Im($1/\alpha_\mathrm{xx})-$ Im($S_\mathrm{x}$) (Figs. \ref{fig:4}(d-f)), the SLRs are damped strongly. Since $S_\mathrm{x}$ solely depends on the lattice geometry, single particles only affect the excitation of SLRs through their inverse polarizability. Because Im(1/$\alpha_\mathrm{xx}$) can be written as $-$Im($\alpha_\mathrm{xx}$)/$|\alpha_\mathrm{xx}|^2$, it is approximated by $-1$/Im($\alpha_\mathrm{xx}$) close to the resonance condition (Re($\alpha_\mathrm{xx})\approx0$). For a dimer without gain, Im($\alpha_\mathrm{xx}$) is positive and Im(1/$\alpha_\mathrm{xx}$) is negative. Consequently, the lattice factor $S_\mathrm{x}$ contributes to the damping of SLR modes if Im($S_\mathrm{x}$) is positive.
In contrast, negative Im($S_\mathrm{x}$) counteracts the ohmic losses of individual nanoparticles, enabling the excitation of more narrow and intense SLRs. Because Im($S_\mathrm{x}$) changes sign from positive to negative at the DOs of a lattice, stronger SLR excitations are generated when the Re(1/$\alpha_\mathrm{xx}$) and Re($S_\mathrm{x}$) curves cross at $\lambda>\lambda_\mathrm{DO}$.
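The interplay between the crossing condition Re(1/$\alpha_\mathrm{xx}$) = Re($S_\mathrm{x}$) and the loss term Im(1/$\alpha_\mathrm{xx}$) $-$ Im($S_\mathrm{x}$) can be illustrated with a toy model. The Lorentzian polarizability and the numerical values of $S$ below are purely illustrative, not fitted to the experiment:

```python
import numpy as np

def alpha_eff(alpha, S):
    """Effective lattice polarizability, alpha_eff = 1 / (1/alpha - S)."""
    return 1.0 / (1.0 / alpha - S)

# Toy lossy Lorentzian single-particle polarizability (arbitrary units) and two
# lattice factors with equal real part but opposite-sign imaginary part.
w = np.linspace(0.8, 1.2, 4001)          # normalized frequency
alpha = 1.0 / (1.0 - w**2 - 0.05j * w)   # Im(alpha) > 0, Im(1/alpha) < 0
S_damp = 0.3 + 0.1j                      # Im(S) > 0: adds to the damping
S_gain = 0.3 - 0.1j                      # Im(S) < 0: offsets ohmic losses

peak_damp = np.abs(alpha_eff(alpha, S_damp)).max()
peak_gain = np.abs(alpha_eff(alpha, S_gain)).max()
assert peak_gain > peak_damp  # negative Im(S) yields the sharper, stronger SLR
```

Both cases resonate where Re$(1/\alpha) = $ Re$(S)$; only the residual imaginary mismatch sets the peak height and width.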
The integration of Au into Ni/SiO$_2$/Au dimers enlarges the polarizability of dimers in comparison to Ni nanodisks. Consequently, Im(1/$\alpha_\mathrm{xx}$) is smaller and SLR modes are less damped. Figures \ref{fig:4}(d-f) illustrate the large difference between Im(1/$\alpha_\mathrm{xx}$) of dimers and Ni nanodisks at relevant SLR wavelengths. To quantify the resonant enhancement of the effective polarizability in our lattices, we compare the values of $|p_\mathrm{x}|$ in Fig. \ref{fig:2}(a) and $|p_\mathrm{eff,x}|$ in Figs. \ref{fig:3}(a,c). For single Ni/SiO$_2$/Au dimers and larger Ni nanodisks we measure $\alpha_\mathrm{xx}$(dimer)/$\alpha_\mathrm{xx}$(Ni disk) $\approx$ 1.4. In square lattices of the same particles $\alpha_\mathrm{eff,xx}$(dimer array)/$\alpha_\mathrm{eff,xx}$(Ni disk array) $\approx$ 3.2.
In Fig. \ref{fig:4}(a) multiple crossings between Re(1/$\alpha_\mathrm{xx}$) and Re($S_\mathrm{x}$) are calculated for dimer and nanodisk arrays with a lattice constant of 400 nm. However, only one of them, observed at $\lambda=690$ nm for Ni nanodisks, $\lambda=780$ nm for Au nanodisks, and $\lambda=805$ nm for Ni/SiO$_2$/Au dimers, coincides with a situation where Im($1/\alpha_\mathrm{xx})-$ Im($S_\mathrm{x}$) is small (Fig. \ref{fig:4}(d)). Consequently, one intense SLR mode is expected for these lattices, in agreement with the experimental spectra of Figs. \ref{fig:3}(a-c). Similar observations can be made for square arrays with lattice constants of 450 nm and 500 nm. The anticipated wavelengths of low-loss SLR modes for all particle types and lattice constants are indicated by vertical lines in Fig. \ref{fig:4}. Coupling between the diagonal (1,1) DO and LSPRs produces an additional SLR in lattices with $a=500$ nm. However, since Im(1/$\alpha_\mathrm{xx}$) is large at the wavelength of this mode, it appears much more damped in reflectance measurements.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Fig5.pdf}
\caption{(a,b) Magneto-optical Kerr angle ($\Phi$) of square arrays of (a) Ni/SiO$_2$/Au dimers and (b) Ni nanodisks for three lattice constants. (c,d) Extracted values of $\Phi\sqrt{R}$ for the same lattices. This parameter, which is obtained from data in (a,b) and Figs. \ref{fig:3}(a,c), is proportional to the effective magneto-optical dipole ($|p_\mathrm{eff,y}|$). (e-h) Calculations of the magneto-optical Kerr angle ($|p_\mathrm{eff,y}/p_\mathrm{eff,x}|$) and $|p_\mathrm{eff,y}|$ for Ni/SiO$_2$/Au dimer and Ni nanodisk arrays.}
\label{fig:5}
\end{figure}
Another feature in the experimental reflectance spectra of Figs. \ref{fig:3}(a-c) that can be explained by considering Fig. \ref{fig:4} is the dependence of SLR wavelength on lattice constant. Because the slope of Re(1/$\alpha_\mathrm{xx}$) is particularly large for Au nanodisks, the crossing point between Re(1/$\alpha_\mathrm{xx}$) and Re($S_\mathrm{x}$) and, thus, the reflectance maximum only shifts slightly if Re($S_\mathrm{x}$) moves to longer wavelengths with increasing $a$. In contrast, smaller slopes of Re(1/$\alpha_\mathrm{xx}$) for dimers and Ni nanodisks result in stronger tuning of the SLR wavelength with lattice constant.
To calculate the reflectance spectra of the different lattices ($|p_\mathrm{eff,x}|^2$), we insert data for 1/$\alpha_\mathrm{xx}$ and $S_\mathrm{x}$ from Fig. \ref{fig:4} into Eqs. \ref{effective-polarizability-xx} and \ref{lattice-dipole}. The results are shown in Figs. \ref{fig:3}(d-f). While our model calculations reproduce the main spectral features of the experimental curves, the resonances are narrower. We attribute this discrepancy to inevitable imperfections in the experiments. For instance, we use a Gaussian beam with a finite wavelength range to excite our samples, while monochromatic plane waves are assumed in the calculations. Also, a finite distribution in the size and shape of the dimers and nanodisks (see Fig. \ref{fig:1}(a)) broadens the experimental resonances.
\begin{figure}[t]
\centering
\includegraphics[width=0.7\linewidth]{Fig6.pdf}
\caption{FDTD simulations of electric field distributions on top of the Ni and Au nanodisks of a dimer array and Ni nanodisks of a pure ferromagnetic lattice. The lattice constant is 400 nm. The disks are 15 nm thick and have a diameter of 110 nm. In the dimer array, the Ni and Au are separated by 15 nm SiO$_2$. Dipoles are excited at normal incidence with the electric field along the $x$-axis. The wavelength is set to $\lambda=780$ nm and the particles are embedded in a uniform medium with $n=1.5$.}
\label{fig:6}
\end{figure}
After establishing the optical response of different lattices, we now turn our attention to the magneto-optical activity of periodic Ni/SiO$_2$/Au dimer arrays. For comparison, we also discuss data for lattices with Ni nanodisks. Figure \ref{fig:5} shows the magneto-optical Kerr angle for square arrays with different lattice constants. Just like the optical reflectance measurements of Figs. \ref{fig:3}(a-c), the magneto-optical Kerr spectra are shaped by DOs (sharp minima) and SLR excitations (strong signal enhancements at $\lambda>\lambda_\mathrm{DO}$). The magnitude of the Kerr effect is comparable for periodic arrays of Ni/SiO$_2$/Au dimers and Ni nanodisks. According to Eq. \ref{Kerr-angle}, the off-diagonal to diagonal polarizability ratio ($|\alpha_\mathrm{eff,xy}/\alpha_\mathrm{eff,xx}|$) determines the Kerr angle of a lattice. Because the diagonal polarizability of the dimer array is much larger than that of the Ni lattice, we conclude that the off-diagonal polarizability of the dimer array must be similarly enlarged. To substantiate this claim, we multiply the Kerr data of Figs. \ref{fig:5}(a,b) with the square root of the reflectance spectra in Figs. \ref{fig:3}(a,c). The resulting parameter $\Phi\sqrt{R}$, shown in Fig. \ref{fig:5}(c,d), is proportional to the effective magneto-optical dipole ($|p_\mathrm{eff,y}|$) of the dimer and Ni lattices (Eq. \ref{MO-dipole}). Like the effective optical dipole $|p_\mathrm{eff,x}|$ (Fig. \ref{fig:3}), the magneto-optical dipole $|p_\mathrm{eff,y}|$ of the Ni/SiO$_2$/Au dimer arrays is substantially stronger than that of pure Ni lattices. Thus, although the $|p_\mathrm{y}|$'s of individual dimers and larger Ni nanodisks are similar (Fig. \ref{fig:2}(c)), the effective magneto-optical dipole is enhanced much more when dimers are ordered into periodic arrays. This result can be understood by considering Eq. \ref{effective-polarizability-xy} for the off-diagonal polarizabilities of a nanoparticle array.
The effective off-diagonal polarizabilities of an array are directly proportional to the off-diagonal polarizabilities of the individual nanoparticles, which, as stated earlier, are similar for dimers and Ni nanodisks. However, the effective off-diagonal polarizability is resonantly enhanced when the real part of the denominator in Eq. \ref{effective-polarizability-xy} becomes zero. For square lattices with $\alpha_\mathrm{xx}=\alpha_\mathrm{yy}$ and $S_\mathrm{x}=S_\mathrm{y}$, this condition is met when the Re(1/$\alpha_\mathrm{xx}$) and Re($S_\mathrm{x}$) curves in Fig. \ref{fig:4} cross. Since resonances in $\alpha_\mathrm{eff,xx}$ and $\alpha_\mathrm{eff,xy}$ are determined by the same parameters in square arrays, the shapes of their optical and magneto-optical spectra are identical. Moreover, because Im(1/$\alpha_\mathrm{xx}$) is smaller for dimers than Ni nanodisks at the resonance wavelength, the magneto-optical Kerr angle is enhanced more by the excitation of an SLR mode in dimer arrays than in Ni lattices. Finally, we calculate the Kerr angle and magneto-optical dipole for both lattice types using the parameters of Fig. \ref{fig:4} and Eqs. \ref{effective-polarizability-xx}, \ref{effective-polarizability-xy}, \ref{lattice-dipole}, and \ref{Kerr-angle}. Results are plotted in Figs. \ref{fig:5}(e-h). The good agreement between the measured and calculated spectra demonstrates that our analytical model describes the physics of combined near- and far-field coupling in hybrid dimer lattices well.
To visualize the excitation of SLRs in dimer and Ni nanodisk arrays, we performed finite-difference time-domain (FDTD) simulations. Results for square arrays with a lattice constant of 400 nm are shown in Fig. \ref{fig:6}. The data are obtained at $\lambda=780$ nm for both particle types. At this wavelength, the magneto-optical Kerr angle is enhanced by the excitation of a collective SLR mode (see Supplementary Note 3). Strong optical dipoles are directly excited by the incident electric field $E_\mathrm{i}$ along the $x$-axis. Through spin-orbit coupling in Ni nanodisks with perpendicular magnetization and near- and far-field interactions between Ni and Au disks, magneto-optical dipoles are induced along the $y$-axis in both Ni and Au. In agreement with our experiments and model calculations, the simulated dipole moments along $x$ and $y$ are larger in Ni/SiO$_2$/Au dimer arrays than in Ni nanodisk lattices.
Finally, we consider the optical and magneto-optical response of rectangular dimer lattices with ${a_\mathrm{x}}\neq{a_\mathrm{y}}$. Based on our model, the optical reflectance of rectangular lattices depends on $\alpha_\mathrm{eff,xx}$. Because the lattice factor $S_\mathrm{x}$ peaks when $\lambda=1.52a_y$, the DO wavelengths are determined by the lattice constant along the $y$-axis. Consequently, only SLRs corresponding to this lattice period are expected in optical reflectance spectra. The same holds true for the magneto-optical response. While the denominator of $\alpha_\mathrm{eff,xy}$ (Eq. \ref{effective-polarizability-xy}) contains terms with $S_\mathrm{x}$ and $S_\mathrm{y}$, the magneto-optical Kerr angle is given by $|\alpha_\mathrm{eff,xy}/\alpha_\mathrm{eff,xx}|$ and thus
\begin{equation}
\Phi = \Bigg|\frac{\alpha_\mathrm{xy}}{\alpha_\mathrm{xx}\alpha_\mathrm{yy}(1/\alpha_\mathrm{xx} - S_\mathrm{y})}\Bigg|.
\label{Kerr-angle-rectangle}
\end{equation}
\noindent Since $S_\mathrm{y}$ peaks when $\lambda=1.52a_x$, the SLR-enhanced magneto-optical response depends on the lattice parameter along the $x$-axis. This cross-dependence of the optical reflectance and magneto-optical Kerr angle on lattice constants $a_\mathrm{x}$ and $a_\mathrm{y}$, which has been observed previously for pure Ni lattices\cite{KAT-15}, is experimentally confirmed for dimers. The model prediction that the magneto-optical dipole $|p_\mathrm{eff,y}|$ of dimer lattices depends on both $S_\mathrm{x}$ and $S_\mathrm{y}$ is also verified by measurements. Experiments and model calculations on rectangular lattices are summarized in Supplementary Note 4.
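That the $S_\mathrm{x}$ dependence cancels from the Kerr angle can be checked numerically. The coupled-dipole expressions below are the standard ones and are assumed to match Eqs. \ref{effective-polarizability-xx} and \ref{effective-polarizability-xy} (not reproduced in this excerpt); all numerical values are illustrative:

```python
def alpha_eff_xx(a_xx, S_x):
    # diagonal effective lattice polarizability (coupled-dipole form)
    return a_xx / (1.0 - a_xx * S_x)

def alpha_eff_xy(a_xx, a_yy, a_xy, S_x, S_y):
    # off-diagonal effective lattice polarizability (assumed coupled-dipole form)
    return a_xy / ((1.0 - a_xx * S_x) * (1.0 - a_yy * S_y))

def kerr_angle_rect(a_xx, a_yy, a_xy, S_y):
    # Eq. (Kerr-angle-rectangle); alpha_xx = alpha_yy is assumed in the text
    return abs(a_xy / (a_xx * a_yy * (1.0 / a_xx - S_y)))

a = 2.0 + 1.0j            # alpha_xx = alpha_yy (illustrative)
a_xy = 0.01 + 0.005j      # spin-orbit-induced off-diagonal term (illustrative)
S_y = 0.2 + 0.02j

# The ratio |alpha_eff,xy / alpha_eff,xx| is independent of S_x ...
phi_1 = abs(alpha_eff_xy(a, a, a_xy, 0.3 + 0.05j, S_y) / alpha_eff_xx(a, 0.3 + 0.05j))
phi_2 = abs(alpha_eff_xy(a, a, a_xy, 0.5 + 0.10j, S_y) / alpha_eff_xx(a, 0.5 + 0.10j))
assert abs(phi_1 - phi_2) < 1e-12
# ... and reduces to Eq. (Kerr-angle-rectangle)
assert abs(phi_1 - kerr_angle_rect(a, a, a_xy, S_y)) < 1e-12
```

This mirrors the cross-dependence described above: the reflectance tracks $S_\mathrm{x}$, while the Kerr angle tracks $S_\mathrm{y}$.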
\section*{Conclusions}
We have experimentally and theoretically explored how plasmon resonances in hybrid Ni/SiO$_2$/Au dimer arrays compare to those of lattices that are made of Au or Ni nanodisks. Our results demonstrate that Ni/SiO$_2$/Au dimer arrays support more intense SLR modes than Ni lattices because the larger polarizability of individual dimer particles produces a stronger resonant enhancement of the effective lattice polarizability. The model that we present provides insight into the optical and magneto-optical response of ordered magnetoplasmonic dimers and offers clear directions on how to tailor the polarizability by material selection, variation of the particle size, or tuning of the lattice period or symmetry.
\section*{Methods}
\subsection*{Sample preparation}
We fabricated the samples on glass substrates using electron-beam lithography. After spin-coating a polymethyl methacrylate (PMMA) layer and baking at 180$^\circ$C for 1 minute, the pattern was defined by exposing the resist layer to the electron beam. We developed the PMMA in a 1:3 methyl isobutyl ketone (MIBK):isopropanol (IPA) solution. Samples with pure Au or Ni nanodisks were fabricated by e-beam evaporation of a 15-nm-thick or 18-nm-thick film, followed by lift-off. For dimer samples, we first evaporated 1 nm Ti and 15 nm Au. After this, the samples were transferred to a magnetron sputtering system for the deposition of 15 nm SiO$_2$ (rf sputtering from a SiO$_2$ target). Finally, 15 nm of Ni was added and the stack was lifted off. We used SEM and atomic force microscopy to determine the nanodisk diameters.
\subsection*{Optical and magneto-optical characterization}
Optical reflectance and magneto-optical Kerr effect measurements were conducted with a Kerr spectrometer (Fig. \ref{fig:7}). The setup consisted of a broadband supercontinuum
laser (SuperK EXW-12 from NKT Photonics), polarizing and focusing optics, a photoelastic modulator (Hinds Instruments I/FS50), and a photodetector. The wavelength of the laser was tuned between 500 nm and 1000 nm. We used linearly polarized light at normal incidence. During measurements, a $\pm$400 mT field from an electromagnet switched the magnetization of the Ni nanodisks between the two perpendicular directions. The Kerr rotation ($\theta$) and Kerr ellipticity ($\epsilon$) were simultaneously recorded by lock-in amplification of the modulated signal at 50 kHz and 100 kHz. From these data, we calculated the magneto-optical Kerr angle ($\Phi$) using
\begin{equation}
\Phi = \sqrt{\theta^2+\epsilon^2}.
\label{Kerr-angle-measurement}
\end{equation}
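Eq. \ref{Kerr-angle-measurement} in code form; the rotation and ellipticity values below are illustrative, in whatever units the lock-in channels are calibrated (e.g. mrad):

```python
import math

def kerr_angle(theta, epsilon):
    """Magneto-optical Kerr angle from Kerr rotation and ellipticity."""
    return math.hypot(theta, epsilon)

# e.g. theta = 0.3, epsilon = 0.4 gives Phi = 0.5 in the same units
assert abs(kerr_angle(0.3, 0.4) - 0.5) < 1e-12
```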
\subsection*{Finite-difference time-domain simulations}
Numerical simulations were carried out using the finite-difference time-domain (FDTD) method. 400 nm $\times$ 400 nm unit cells comprising a vertical dimer made of 15-nm-thick Ni and Au nanodisks separated by 15 nm SiO$_2$ ($n=1.5$) or a single Ni disk of the same size were simulated. The disk diameters were set to 110 nm. Linearly polarized light was assumed to impinge along the sample normal from the Ni disk side. Periodic boundary conditions were applied at the edges of the simulation area. A uniform embedding medium with a refractive index of $n=1.5$ was used. Broadband reflectivity spectra were obtained by placing an electric field monitor 2 $\mu$m above the nanoparticles. Distributions of near-fields shown in Fig. \ref{fig:6} were calculated near the SLR wavelength. Magneto-optical effects were introduced in the FDTD simulations via off-diagonal terms in the permittivity tensor of Ni, while an isotropic dielectric function was assumed for Au. Distributions of magneto-optical dipolar fields were obtained by subtracting results for two perpendicular magnetization directions in Ni. In the simulations, these magnetic configurations were implemented by using opposite signs for the off-diagonal terms in the Ni permittivity tensor.
\begin{figure}[t]
\centering
\includegraphics[width=0.7\linewidth]{Fig7.pdf}
\caption{Schematic of the magneto-optical Kerr effect spectrometer. The setup consists of a broadband supercontinuum laser, polarizing and focusing optics, a photoelastic modulator, and a photodetector. We operate the instrument under normal incidence with linearly polarized light along the $x$-axis. A perpendicular magnetic field from an electromagnet saturates the magnetization of Ni nanodisks.}
\label{fig:7}
\end{figure}
\section*{Data Availability}
The data that support the findings of this study are available from the corresponding author upon request.
\section{Introduction}
The reduction and discrimination of the background is one of the most important tasks in any dark matter experiment as the signal rate expected from WIMPs is extremely low. EDELWEISS-II is a direct dark matter search experiment based on Ge bolometers. The combined measurement of the ionisation and heat in a particle interaction allows the rejection of the gamma background at the level of (3$\pm$1)$\times$10$^{-5}$ ~\cite{EDW2final}. The interleaved electrode design, recently developed by the collaboration ~\cite{interdigit1}, enables an efficient rejection (6$\times$10$^{-5}$ ~\cite{interdigit2}) of near-surface interactions. Using 10 detectors representing a total mass of 4 kg and with a total effective exposure of 384 kg$\times$days, EDELWEISS-II has recently published its final WIMP search result ~\cite{EDW2final}. A cross-section of 4.4$\times$10$^{-8}$ pb has been excluded at 90\%~C.~L. for a WIMP mass of 85 GeV/$c^2$.
To reach a sensitivity to the WIMP-nucleon cross-section significantly below 10$^{-8}$ pb in the next phase of the experiment, the background has to be further reduced.
The sources of background are neutrons, gamma-rays and surface beta contaminants. Neutrons may be induced by cosmic-ray muons or generated by the decay of the natural radioactive elements present in the cavern walls and in the set-up components. Details on the muon-induced neutron studies using the EDELWEISS-II setup are given in Ref.~\cite{klaus}, an additional liquid scintillator detector
dedicated to the measurement of muon-induced neutrons is described in Ref.~\cite{veto}. Gamma-rays and beta contaminants are produced by the radioactivity in the construction materials. Surface events induced by surface contaminants are discriminated using the interleaved electrodes. Ref. ~\cite{EDW2final} gives details on the surface event background.
We present in this paper studies of the gamma-ray and neutron background coming from radioactive decays in the set-up and shielding of EDELWEISS-II and EDELWEISS-III. Extensive Monte Carlo simulations have been performed and combined with radiopurity measurements of all materials. These background studies have been used for optimisation of the configuration of the next stage WIMP search experiment at Modane -- EDELWEISS-III.
\section{Experimental set-up and simulations}
EDELWEISS-II is located in the Laboratoire Souterrain de Modane (LSM) where the rock overburden of 4800 m w.e. reduces the cosmic muon flux down to about 5 muons/m$^2$/day~\cite{klaus}. The environmental gamma-ray flux below 4 MeV is dominated by natural radioactivity in the rock and concrete. The uranium, thorium and potassium concentrations have been reported in \cite{chazal}: 0.84$\pm$ 0.2 ppm and 1.9 $\pm$ 0.2 ppm of $^{238}$U, 2.45$\pm$ 0.2 ppm and 1.4 $\pm$ 0.2 ppm of $^{232}$Th, 230$\pm$30 Bq/kg and 77.3$\pm$13 Bq/kg of K in the rock and concrete, respectively. The neutron flux above 1 MeV is about 10$^{-6}$ n/cm$^{2}$/s \cite{fiorucci07}.
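For orientation, the quoted U/Th concentrations can be converted into specific activities. The conversion below uses standard half-lives and isotopic abundances (nuclear-data values, not taken from the text):

```python
N_A = 6.022e23            # Avogadro's number, 1/mol
SEC_PER_YEAR = 3.156e7

def activity_per_ppm(mass_number, half_life_yr, abundance=1.0):
    """Bq per kg of host material for 1 ppm (by mass) of the parent element."""
    atoms_per_kg = abundance * 1e-3 / mass_number * N_A   # 1 ppm = 1 mg/kg
    lam = 0.693147 / (half_life_yr * SEC_PER_YEAR)        # decay constant, 1/s
    return atoms_per_kg * lam

u238 = activity_per_ppm(238, 4.468e9, abundance=0.9927)   # ~12.3 Bq/kg per ppm
th232 = activity_per_ppm(232, 1.405e10)                   # ~4.1 Bq/kg per ppm

# e.g. the 0.84 ppm of 238U in the rock corresponds to roughly 10 Bq/kg
rock_u238 = 0.84 * u238
```

These conversions assume secular equilibrium of the decay chains, as is customary when quoting rock and concrete activities.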
The radon level in the laboratory is $\sim$20 Bq/m$^{3}$ thanks to a ventilation system renewing the entire laboratory volume 1.5 times per hour. Further reduction of the radon level (down to $\sim$20 mBq/m$^3$) inside the shielding is achieved by the radon trap facility.
EDELWEISS-II uses cryogenic germanium detectors installed in the 10 mK chamber of a dilution refrigerator specially designed for the experiment. Each detector is enclosed in an individual casing made of electrolytic copper of type CuC2, as termed by the manufacturer, characterised by high purity (99.99\%) and an oxygen concentration limited to 5 ppm. The radiopurity of this copper has been measured at LNGS (Italy) using gamma-spectrometry \cite{GeMPI} and the results are shown in Table~\ref{tab:radioactivities}. Only teflon (PTFE) is used to hold the detectors inside the casings in a design specially developed to obtain the lowest possible radioactive background \cite{XFN}.
The detectors are arranged on disks supported by three vertical bars. The disks and the vertical bars are themselves supported by a thick plate at 10 mK and surrounded by a 10 mK thermal screen. The 10 mK plate also plays the role of shielding the Ge crystals from the radioactivity beneath the plate. The 10 mK plate and the 10 mK thermal screen will be referred to hereafter as the 10 mK chamber. The disks, the bars and the 10 mK chamber are made of electrolytic copper of type CuC1 (with oxygen concentration less than 1 ppm and purity of 99.95\%). The radiopurity of CuC1 copper has been measured at LSM. The results of the measurements are shown in Table~\ref{tab:radioactivities}.
To simulate the response of the detectors to various types of particles, the complete set-up has been implemented in the GEANT4 package \cite{Geant4} as shown in Figure~\ref{fig:setup}.
\begin{figure}[htb]
\begin{minipage}{15pc}
\includegraphics[width=18pc]{setup.eps}
\end{minipage}\hspace{4pc}
\begin{minipage}{15pc}
\includegraphics[width=18pc]{zoom.eps}
\end{minipage}
\caption{\label{fig:setup} (Color online) The GEANT4 geometry of the EDELWEISS-II set-up. Left : 1 -- germanium detectors with casings, 2 -- copper disks supporting Ge detectors (10 mK), 3 -- support bars for the copper disks (10 mK), 4 -- 10 mK thermal screen, 5 -- 10 mK thick plate supporting inner detector components, 6 -- internal roman lead shielding, 7 -- 1K thermal screen, 8 -- 4.2K thermal screen, 9 -- 40K thermal screen, 10 -- 100K thermal screen, 11 -- 300K vacuum chamber, 12 -- stainless steel liquid He reservoir, 13 -- stainless steel can. The outer lead shielding including modern and roman lead, is shown in grey. The outer polyethylene shielding and the muon veto are not shown. Right: zoom of the central part showing the germanium detectors with casings (dark yellow and grey) stacked on the copper disks (blue), the vertical support bars (dark yellow), the 10 mK thick plate (dark yellow) and, at the bottom, part of the internal roman lead shielding (grey).}
\end{figure}
Below the 10 mK plate, at 1K, 14 cm of roman lead shields the detectors from the gamma-rays induced by the radioactivity in the cold electronics, the dilution unit and other cryogenic parts. The dilution unit components are made of copper, stainless steel and silver. Four thermal screens at 1K, 4.2K, 40K, 100K and the vacuum chamber at 300K, all made of copper which has not been specially selected for its ultra-low radioactivity, complete the cryostat. Hereafter the thermal screens from 1K to 100K and the vacuum chamber at 300K will be referred to as `screens 7 to 11' according to the numbering in Figure 1. EDELWEISS-II uses coaxial cables from the detectors to room temperature. Resistors together with electrical connectors are installed at the 1K stage below the lead shielding. Cold JFETs are positioned at the 100K stage. The electronics to bias the JFETs, the DACs to bias the detectors, the final amplification, the anti-aliasing filter and the digitisation are all integrated in a single room-temperature module, called bolometer box, which is attached to the stainless steel can (see Figure 1 for details of the set-up, bolometer boxes are not shown).
An 18 cm thick outer layer of modern lead shields the cryostat against ambient gamma-ray background. A 2 cm thick inner roman lead layer has been cast directly on the modern lead. An outer 50 cm thick polyethylene shielding protects the detector against ambient neutrons. The lead and polyethylene shielding is mounted on a mild steel structure with rails allowing the opening of the two halves of the shielding structure.
In addition, a 100 m$^2$ plastic scintillator active muon veto surrounds the polyethylene \cite{veto}.
All materials used in the construction have been measured to assess their radioactive contaminations. Table~\ref{tab:radioactivities} shows
a selection of the results. The CuC2 copper of the detector casings was purchased in 2006 and stored in LSM since then.
A few samples of this copper have been exposed to cosmic rays for a few days during their transportation from LSM to LNGS for accurate measurements of radiopurity.
Decay rates of cosmogenic isotopes (see Table ~\ref{tab:radioactivities}) agree with the assumption of a few days' activation.
\begin{center}
\begin{table}[htb]
\caption{\label{tab:radioactivities}Radioactive contaminations in materials of the EDELWEISS-II set-up and shielding. All contaminations have been assessed by gamma-ray spectrometry, except for $^{238}$U and $^{232}$Th in lead and mild steel which have been measured by mass spectrometry, and $^{238}$U and $^{232}$Th in polyethylene measured by neutron activation. The radioactivity quoted for the dilution unit is based on measurement of individual components. }
\centering
{\footnotesize
\begin{tabular}{@{}*{8}{l}}
\hline
\\
Component/ & Mass & \multicolumn{5}{l}{Radioactivity in materials (mBq/kg or mBq/unit$^{\star}$) } \\
\\
Material & (kg) & $^{226}$Ra & $^{228}$Th & $^{60}$Co& $^{40}$K & Other radionuclides \\
\\
\hline
\\
Detector holders/PTFE & 0.02 & $<$7 & $<$5 & $<$20 & $<$100 & $^{210}$Pb$<$80 \\
\\
Electrodes/Al & $<$3$\cdot$10$^{-5}$ & 0.27$\pm$0.19 & 1.4$\pm$0.2 & - & 1.1 $\pm^{0.2}_{0.1}$ &$^{26}$Al: 0.38$\pm^{0.19}_{0.14}$ \\
\\
Detector casings/ & 3 & 0.025 & 0.033 & 0.038 & $<$0.39 & $^{238}$U$<$ 1.4, $^{235}$U$<$ 0.9 \\
CuC2 copper$^{a}$ & & $\pm$0.015 & $\pm$0.016 & $\pm$0.010 & & $^{54}$Mn: 0.024$\pm$0.010$^{b}$ \\
\\
Disks, bars, 10 mK chamber/ & 90 & $<$1 & $<$0.7 & $<$1 & $<$110 & $^{210}$Pb:180$\pm$140 \\
CuC1 copper & & & & & & \\
\\
Screens 7 to 11/copper & 320 & $<$3 & $<$2 & $<$2 & $<$25 & \\
\\
Dilution unit$^{\star}$ & $\approx$1 & $<$20 & $<$20 & $<$20 & $<$100 & $^{108}$Ag:331 $\pm$32 \\
\\
1K connectors & 0.32 & 644$\pm$65 &1353$\pm$138 & $<$25 & 1181 $\pm$197 &$^{238}$U:1994$\pm$204 \\
\\
Coaxial cables & 1.4 & 10$\pm$7 & $<$6 & $<$8 & 120 $\pm$60 & $^{210}$Pb$<$110 \\
\\
Bolometer boxes$^{\star}$ & 50 units & 331$\pm$17 & 235$\pm$13 & - & 340$\pm$40 & $^{238}$U:134 $\pm^{65}_{15}$ \\
(warm electronics) & & & & & & $^{210}$Pb :1019$\pm$56 \\
\\
\hline
\\
Roman lead shield & $\approx$120 & $<$0.3 & $<$0.3 & - & $<$1.3 & $^{210}$Pb$<$120 \\
\\
Modern lead shield & 30000 & $<$3 & $<$1 & - & - & $^{210}$Pb: (24$\pm$1)$\times$10$^{3}$ \\
& & & & & & $^{238}$U$<$ 0.01 ppb \\
\\
Polyethylene shield & 40000 & 5$\pm$1 & $<$2 & $<$3 & 16$\pm$2 & $^{238}$U:1 ppb,$^{232}$Th:0.1 ppb \\
\\
Mild steel support & 8600 & - & - & - & - & $^{238}$U$<$ 0.01 ppb \\
& & & & & & $^{232}$Th$<$ 0.01 ppb \\
\\
\hline
\end{tabular}}
\\
$^{a}$ CuC2 copper has been measured at LNGS with the GeMPI detector \cite{GeMPI}.
$^{b}$ The activities of short-lived cosmogenic isotopes in CuC2 copper correspond to (10$\pm$2) days of exposure.
\end{table}
\end{center}
\section{Gamma background}
The Monte Carlo simulation was based on the GEANT4 code with the Low Energy Electromagnetic Interactions physics list. Cross-sections are determined from evaluated data (EPDL97, EEDL and EADL, stopping power data, binding energies based on the data of Scofield) \cite{Geant4_manual}. The particle generator uses the GEANT4 Radioactive Decay Module (GRDM), which was designed to handle all kinds of decays ($\alpha$, $\beta^{-}$, $\beta^{+}$, EC), the emission of the associated particles with their energy distributions, the subsequent de-excitation of the nucleus ($\gamma$, internal conversion) and the accompanying X-rays and Auger electrons~\cite{Geant4_manual}. The GRDM generator takes into account the total energy loss occurring due to cascade gamma emission. All emitted particles were followed in GEANT4 and the energy depositions in the crystals were stored. Energy depositions occurring in the same crystal within a time window of 50 ms were summed together, giving a single event. In the subsequent analysis the fiducial events were defined in the same way as in real data and the fiducial volume cut was applied to the simulated events.
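The per-crystal event building described above (summing energy depositions in the same crystal within a 50 ms window) can be sketched as follows. This is an illustrative reconstruction, not the actual analysis code: the function name, data layout and the choice of measuring the window from the first deposit of an event are assumptions.

```python
# Minimal sketch of the per-crystal event building described above:
# energy depositions in the same crystal within a 50 ms window are
# summed into a single event. Names and data layout are illustrative.

WINDOW_S = 0.050  # 50 ms coincidence window

def build_events(deposits):
    """deposits: iterable of (time_s, crystal_id, energy_keV)."""
    events = []        # finished events: (start_time, crystal_id, sum_keV)
    open_events = {}   # crystal_id -> [start_time, summed_energy]
    for t, cid, e in sorted(deposits):
        if cid in open_events and t - open_events[cid][0] <= WINDOW_S:
            # Same crystal, within the window: sum into the open event.
            open_events[cid][1] += e
        else:
            # Close any previous event in this crystal, start a new one.
            if cid in open_events:
                t0, etot = open_events[cid]
                events.append((t0, cid, etot))
            open_events[cid] = [t, e]
    for cid, (t0, etot) in open_events.items():
        events.append((t0, cid, etot))
    return events
```

For example, two depositions 20 ms apart in one crystal merge into a single event, while a deposition 200 ms later starts a new one.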
The decays of $^{226}$Ra, $^{228}$Ra, $^{60}$Co, $^{40}$K, $^{54}$Mn and $^{210}$Pb were simulated in the detector casings, the disks supporting the Ge detectors, the bars supporting the disks, the 10 mK chamber, the cryostat screens 7 to 11, the dilution unit (as a block for simplicity), the 1K connectors, the coaxial cables and the lead shielding (see Figure~\ref{fig:setup}). To simplify the simulation task, $^{228}$Ra was assumed to be in equilibrium with $^{228}$Th. In addition, the cosmogenically induced isotopes $^{68}$Ge and $^{65}$Zn in the germanium crystals were considered, and their activities were chosen to match the measured intensities of the lines around 10 keV.
PTFE crystal holders, aluminium electrodes and other small parts located close to the crystals have masses too small to give a measurable contribution to the gamma-ray background, but their contribution to the neutron background may be enhanced due to high ($\alpha$,n) cross-sections, in particular on aluminium and fluorine (see Section 4). The coaxial cables, the dilution unit and the 1K connectors (Table~\ref{tab:radioactivities}) are located below the 14 cm thick lead plate, and their contribution was found to be negligible compared to that of the cryostat screens 7 to 11 despite their higher radioactivity levels. The gamma-ray background from the bolometer boxes (warm electronics) was not simulated as they were located behind lead.
The contribution from $^{210}$Bi in the modern lead shielding is negligible, but the energetic gammas of about 2.6 MeV from the $^{228}$Th decay chain may reach the detectors, as shown in Table~\ref{tab:eventrate_le}. The background from rock and concrete was shown to be suppressed by several orders of magnitude due to the lead shielding around the cryostat.
As only upper limits were obtained in the radioactivity measurements for CuC1 copper and the copper of the screens 7 to 11, a $\chi^2$ minimisation with 10 free parameters was used to determine these contaminations. The 10 free parameters were: $^{226}$Ra and $^{228}$Ra in CuC1 copper (disks, bars and 10 mK chamber), $^{226}$Ra and $^{228}$Ra in the copper of the screens 7 to 11, and $^{226}$Ra and $^{228}$Ra contamination at 300K (not shown in Table~\ref{tab:radioactivities}) which could be due to unaccounted-for radioactivity in cryogenic pipes, electronics, radon or uncontrolled impurities on the 300K vacuum chamber (6 parameters in total); cosmogenic $^{60}$Co and $^{54}$Mn in CuC1 and in the copper of the screens 7 to 11 (assumed to be the same), $^{40}$K in CuC1 and in the copper of the screens 7 to 11 (assumed to be the same), and $^{210}$Pb on the surface of the detector casings.
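The $\chi^2$ template fit described above can be illustrated schematically: the measured spectrum is modelled as a linear combination of simulated component spectra and the $\chi^2$ is minimised over the component activities. The toy example below uses only two templates and a coarse grid search; the real fit had 10 free parameters and a proper minimiser, so everything here is illustrative.

```python
# Schematic of the template fit described above: the measured spectrum
# is modelled as a linear combination of simulated component spectra,
# and chi^2 is minimised over the component amplitudes (activities).

def chi2(data, templates, amps):
    model = [sum(a * t[i] for a, t in zip(amps, templates))
             for i in range(len(data))]
    # Poisson-like weighting by the data counts (floored at 1).
    return sum((d - m) ** 2 / max(d, 1.0) for d, m in zip(data, model))

# Toy example: data generated from known amplitudes (2, 3) is recovered
# by a coarse grid search over integer amplitudes.
t1, t2 = [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]
data = [2.0, 3.0, 5.0]
best = min(((a1, a2) for a1 in range(6) for a2 in range(6)),
           key=lambda p: chi2(data, [t1, t2], p))
```

In practice a continuous minimiser would replace the grid search, but the structure of the problem is the same.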
The radiopurity measurements reported in Table~\ref{tab:radioactivities} for CuC2 copper were used to calculate the gamma contributions from CuC2 copper parts. The upper limits for other copper parts were taken as upper bounds for the fitting procedure.
\begin{figure}[htb]
\begin{minipage}{18pc}
\includegraphics[width=18pc]{plot_alle_c_juin2012.eps}
\end{minipage}\hspace{1pc}
\begin{minipage}{18pc}
\includegraphics[width=18pc]{plot_le_20_200_juin2012_b.eps}
\end{minipage}
\caption{\label{fig:gamma_back} The background ionisation energy spectrum in the fiducial volume of the EDELWEISS-II detectors from measured data (black line) and Monte Carlo simulation (red line) for 185 kg$\times$days. The full energy range of 0--3000 keV is shown on the left and the relevant range for WIMP search (20--200 keV) is shown on the right.}
\end{figure}
\begin{table}[htb]
\centering
\caption{Ionisation event rate in events/kg/day in fiducial volume obtained from simulations.}
{\footnotesize
\begin{tabular}{lllllll|l}
\hline
Material & \multicolumn{7}{c} {Gamma event rate (events/kg/day) at 20-200 keV} \\
& \multicolumn{6}{c} {Fit 1} & Fit 2 \\
& $^{226}$Ra& $^{228}$Ra ($^{228}$Th) & $^{60}$Co& $^{40}$K& Other& Total (\%)& Total (\%)\\
& & & & & radionuclides & & \\
\\
\hline
\\
Ge crystals & 0 & 0 & 0 & 0 & $^{68}$Ge: 1.6 & 1.6 (2) & 1.6 (2) \\
\\
Detector casings/CuC2 copper & 1.2 & 1 & 1 & 0 &$^{210}$Pb: 11 & 14 (17) & 14 (18) \\
\\
Disks, bars, 10 mK chamber/ & 0.2 & 1 & 5 & 0.3 & $^{57}$Co: 0.7 &9.5 (12) & 13.5 (17) \\
CuC1 copper & & & & & $^{54}$Mn: 2.3 & & \\
\\
Screens 7 to 11/copper & 12 & 15 & 3 & 2 & $^{57}$Co: 0.2 & 32.5(40) & 17 (22) \\
& & & & & $^{54}$Mn: 0.3 & \\
\\
Pollution 300K (see text) & 8 & 14 & 0 & 0 & 0 & 22 (27) & 29 (37) \\
\\
Modern lead shield & 0 & 2.6 & 0 & 0 & 0 & 2.6 (3) & 4 (5) \\
\\
\hline
\\
Total MC & 21 & 33.6 & 9 & 2.3 & & 82 & 79 \\
\\
Total data & & & & & & 82 & 82 \\
\\
\hline
\end{tabular}
}
\label{tab:eventrate_le}
\end{table}
Figure~\ref{fig:gamma_back} shows the gamma-background in the fiducial volume of the EDELWEISS-II detectors compared to the GEANT4 simulation results. The data were collected with the EDELWEISS-II set-up containing 15 germanium detectors of the type described in Ref.~\cite{martineau} with a total exposure of 310 kg$\times$days. After a cut on the fiducial volume, data with an exposure of 185 kg$\times$days were compared to the simulations. No multiplicity cuts have been applied, i.e. coincident pulses between detectors were included. Multiple-hit events contribute about 30\% to the background rate in data and simulations and their rejection does not change the results presented here.
Some characteristic peaks are observed in the 0--3000 keV region: the $^{60}$Co peaks at 1173 and 1332 keV, $^{40}$K at 1460 keV, and the 238 keV and 2614 keV lines from the $^{228}$Th sub-chain (the peaks are linked here to the sub-chain starting with the closest long-lived parent isotope rather than to the gamma-ray emitter).
On the right plot of Figure~\ref{fig:gamma_back} the 46 keV peak from $^{210}$Pb can be seen.
The contributions to the gamma background in the low-energy region are presented in Table~\ref{tab:eventrate_le} for two fitting results, corresponding to the minimum and maximum contributions of the thermal cryostat screens. The primary source of gamma background is connected to the U/Th daughters and $^{60}$Co in the copper screens 7 to 11 and the 10 mK copper parts, which contribute between 39\% and 52\% of the total gamma background. The second most important source (between 27\% and 37\%) is $^{226}$Ra and $^{228}$Ra decays in some detector parts at 300K, which must be introduced to match the data. This source (marked as `Pollution 300K' in Table~\ref{tab:eventrate_le}) might be due to radioactivity in cryogenic pipes, bolometer boxes, uncontrolled impurities on the 300K screen or radon still present in the air in the gap between the cryostat and the lead shielding, in spite of the flushing with radon-depleted air. The third most important gamma background source is the $^{210}$Pb surface pollution on the detector casings or on the detector surfaces (17\%).
\section{Neutron background}
The Monte Carlo simulation used the GEANT4 High Precision (HP) model for neutrons with energies below 20 MeV. Elastic and inelastic scattering, capture and fission were included.
To check the accuracy of the model, simulations of neutrons from an Am-Be source placed inside the Pb shielding on top of the cryostat were compared to the measured rate and energy spectrum of nuclear recoils. The source has a neutron intensity of $21\pm4$ neutrons/s and the estimated dead time of the DAQ was $30\pm10\%$. The data were collected for about 90 hours, and the typical number of detected events above 20 keV after all cuts was about 2000 per crystal, giving a statistical error of about 2\%. Similar statistics were accumulated in the simulations. Figure~\ref{fig:neutrons-source} shows the energy spectrum of nuclear recoils observed from the neutron source and obtained in the simulations. The data on this plot have not been corrected for the dead time and therefore lie below the simulated histogram, but the overall shapes of the spectra are in good agreement. The ratio of measured-to-simulated event rates above 20 keV after all cuts, corrected for the dead time and averaged over all crystals, was found to be $1.20 \pm 0.23$, where the error, given at 68\%~C.~L., is dominated by the 19\% uncertainty in the source intensity. Statistical and dead-time uncertainties are also included. The ratio is consistent with 1 within errors, proving the validity of the geometrical model of the detector and of the neutron physics in GEANT4. The deviation of the average ratio from 1 may serve as an estimate of the uncertainty of the evaluated neutron rate if the source of background neutrons is located inside the polyethylene shielding.
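The mechanics of the dead-time correction and error combination behind the quoted ratio can be sketched as below. The input numbers are round illustrative values (an observed-to-simulated rate chosen to give 1.2 after correction), and the simple quadrature combination here is an assumption; the published analysis may treat the uncertainties differently.

```python
import math

# Illustrative sketch: correct the observed rate for the DAQ dead time
# and combine the relative uncertainties (source intensity, statistics,
# dead time) in quadrature. Input values are round numbers chosen for
# illustration, not the actual analysis inputs.

def corrected_ratio(rate_obs, rate_sim, dead_time, d_dead_time, rel_errs):
    live = 1.0 - dead_time                    # live-time fraction
    ratio = (rate_obs / live) / rate_sim      # dead-time-corrected ratio
    all_rel = list(rel_errs) + [d_dead_time / live]
    return ratio, ratio * math.sqrt(sum(r * r for r in all_rel))

ratio, err = corrected_ratio(rate_obs=0.84, rate_sim=1.0,
                             dead_time=0.30, d_dead_time=0.10,
                             rel_errs=[0.19, 0.02])  # source, statistics
```

With these inputs the corrected ratio is 1.2 and the source-intensity term dominates the combined error, as stated in the text.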
\begin{figure}[htb]
\begin{center}
\includegraphics[width=10cm]{nsp-source.eps}
\caption{Energy spectra of neutrons from the Am-Be source used in the detector calibration. Black histogram -- data from the calibration run with the neutron source; red histogram -- spectrum obtained in the simulations of neutrons from this source. The data have not been corrected for the DAQ dead time, which was found to be $30\pm10\%$ for this particular run.}
\label{fig:neutrons-source}
\end{center}
\end{figure}
Further tests of the simulation model were done with a strong neutron source giving about $2\times10^5$ neutrons/s, positioned outside the polyethylene and lead shielding. 50 cm of polyethylene should attenuate the fast neutron flux by 5-6 orders of magnitude. The neutron source was placed at several positions around the shielding to check the shielding model and neutron transport in GEANT4. The thickness of the shielding was not exactly the same for different source positions, the difference being as much as 5 cm of polyethylene. Also, some small holes in the shielding are unavoidable due to pipes, readout cables, the support structure etc., so special attention was paid to neutrons which could leak through these holes into the shielding. To check the effect of the holes, the neutron source was also positioned close to the existing holes with pipes, cables and support beams. A difference of up to a factor of 50 was observed in the data collected with different source positions, and a similar effect was also found in the Monte Carlo simulations. For all source positions the rate of detected events after all cuts was found to be in agreement with the simulated rate within a factor of three, with a typical uncertainty of 20\% for the measurements and simulations. Bearing in mind the challenge of building a precise geometry of all shielding and detector components in GEANT4, agreement between the measured and simulated rates within a factor of three can be considered reasonably good. This is quite a small difference on the scale of the overall attenuation of the neutron flux by the polyethylene of 5-6 orders of magnitude (depending on the exact thickness of the shielding and the neutron energy). A factor of three difference in the neutron rate (if due to neutron attenuation in polyethylene) corresponds to a thickness of 5 cm of polyethylene. For half of the source positions tested, the difference between the measured and simulated rates does not exceed 50\%.
The difference of 50\% in neutron flux attenuation by 50 cm of polyethylene was found between GEANT4 and MCNPX \cite{lemrani} showing a good agreement between the two codes on an overall scale of $10^6$ for the neutron flux attenuation.
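The correspondence stated above between a factor-of-three rate difference and about 5 cm of polyethylene follows directly from the quoted overall attenuation, assuming simple exponential attenuation:

```python
import math

# Quick check of the statement above: if 50 cm of polyethylene
# attenuates the fast-neutron flux by ~10^6, the e-folding length is
# lambda = 50 / ln(10^6) ~ 3.6 cm, and a factor-of-three change in
# rate corresponds to lambda * ln(3) ~ 4-5 cm of polyethylene.
# Simple exponential attenuation is assumed here.

attenuation = 1e6                     # assumed suppression over 50 cm
lam = 50.0 / math.log(attenuation)    # e-folding length in cm
d_factor3 = lam * math.log(3.0)       # thickness for a factor-3 change
```

Taking $10^5$ instead of $10^6$ for the overall attenuation changes these numbers only modestly, consistent with the approximate 5 cm quoted in the text.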
\begin{figure}[htb]
\begin{center}
\includegraphics[width=10cm]{nsp-ss.eps}
\caption{Energy spectra of neutrons originating in the U and Th decay chains in stainless steel. 1 ppb of U and Th was assumed in the calculations and the resulting event spectra were later normalised to the measured concentrations. Contributions from the different channels ($^{238}$U spontaneous fission and ($\alpha$,n) reactions) of the two decay chains are shown separately, together with the total spectrum.}
\label{fig:sources}
\end{center}
\end{figure}
To estimate the event rate due to neutrons in the EDELWEISS-II experiment, we considered the following materials/components as potential sources of neutrons: cavern walls (rock and concrete), lead (shielding), polyethylene (shielding), copper (cryostat and internal parts), stainless and mild steel (support structure), cables, connectors, electronic parts and other components. The results of the simulations are summarised in Table \ref{table-neutrons}. Neutron spectra were generated using SOURCES4A \cite{sources} assuming secular equilibrium in the uranium (U) and thorium (Th) decay chains. Further details about neutron production with SOURCES4A for underground experiments can be found in Ref. \cite{vito1,vito2}. Figure~\ref{fig:sources} shows the energy spectra of neutrons from U/Th decays in stainless steel generated by SOURCES4A. When calculating the neutron-induced event rate, the same cuts have been applied as to the real data: recoil energy threshold of 20 keV, ionisation energy threshold of 3 keV, 90\% acceptance in the nuclear recoil band, multiple hit and surface events have been rejected (multiple hit events have been included on the plots).
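The specific neutron yields in Table~\ref{table-neutrons} (neutrons per gram per second per ppb of U or Th) translate into expected event counts by folding in the component mass, the measured concentration, the live time and the simulated detection efficiency. The sketch below illustrates this bookkeeping; the live time and the efficiency value are purely illustrative assumptions, not numbers from the analysis.

```python
# Sketch of how the specific yields in the neutron table
# (n/g/s/ppb of U or Th) turn into expected event counts:
# yield x mass x concentration x live time x detection efficiency.
# The live time and efficiency used below are illustrative only.

def expected_neutrons(yield_n_per_g_s_ppb, mass_kg, conc_ppb,
                      live_days, efficiency):
    grams = mass_kg * 1e3
    seconds = live_days * 86400.0
    produced = yield_n_per_g_s_ppb * grams * conc_ppb * seconds
    return produced * efficiency

# Example: mild-steel-like yield, 8600 kg, 0.01 ppb U,
# 300 days of live time, hypothetical 1e-3 detection efficiency.
n_events = expected_neutrons(1.84e-11, 8600.0, 0.01, 300.0, 1e-3)
```

This kind of normalisation, applied with the simulated efficiencies for each component, is what produces the last column of the table.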
\begin{table}[htb]
\centering
\caption{Number of background events due to neutrons in EDELWEISS-II in the run detailed in ~\cite{EDW2final}. The column ``Material" refers to the material in each source which contributes most to neutron production. The column ``Composition" gives the chemical composition of the source used to calculate neutron spectra with the abundance of elements (by the number of atoms, not mass) given in brackets. Only elements with the abundance greater than 1\% are shown (with the accuracy of 1\%). The composition of the mild steel was not known so that of the stainless steel was used instead as giving slightly higher neutron flux than other possible compositions. Neutron yield (columns 4 and 5) is shown as the number of neutrons per gram of material per second per ppb of U and Th concentration. The same cuts as for data have been applied to the simulated events. }
\vspace{0.2cm}
{\footnotesize
\begin{tabular}{llllll}
\hline
Source & Material & Composition (abundance \%) & Neutron yield & \hspace{-0.4cm} in n/g/s/ppb & Neutron events \\
& & & U & Th & (384 kg$\times$days) \\
\hline
Hall walls & Rock & H (17), C (8), O (53), Mg (1), & 2.88$\times$10$^{-11}$ & 7.52$\times$10$^{-12}$ & $<$0.01 \cr
& & Al (3), Si (4), Ca (13), Fe (1) & \cr
Hall walls & Concrete & H (19), C (11), O (52), & 2.21$\times$10$^{-11}$ & 3.96$\times$10$^{-12}$ & $<$0.1 \cr
& & Mg (1), Si (2), Ca (15) & \cr
Shielding & Polyethylene & H (67), C (33) & 2.90$\times$10$^{-11}$ & 6.25$\times$10$^{-12}$ & $<$0.01 \cr
Shielding & Lead & Pb (100) & 1.35$\times$10$^{-11}$ & -- & $<$0.08 \cr
Support & Stainless steel & Cr (17), Mn (0.02), Fe (69), & 1.84$\times$10$^{-11}$ & 5.92$\times$10$^{-12}$ & $<$0.01 \cr
& & Ni (12) & \cr
Support & Mild steel & as above & 1.84$\times$10$^{-11}$ & 5.92$\times$10$^{-12}$ & $<$0.04 \cr
Warm electronics & PCB & H (22), B (2), C (19), N (6), & 7.08$\times$10$^{-11}$ & 2.21$\times$10$^{-11}$ & 1.0$\pm$0.5 \cr
& & O (35), Mg (1), Al (4), Si (8), & \cr
& & Ca (3) & \cr
1K connectors & Aluminium & Al (100) & 1.80$\times$10$^{-10}$ & 8.59$\times$10$^{-11}$ & 0.5$\pm$0.2 \cr
Thermal screens, & Copper & Cu (100) & 1.38$\times$10$^{-11}$ & 9.36$\times$10$^{-13}$ & $<$0.1 \cr
crystal supports & & & \cr
Coaxial cables & PTFE & C (33), F (67) & 8.40$\times$10$^{-10}$ & 3.50$\times$10$^{-10}$ & $<$0.5 \cr
Crystal holders & PTFE & C (33), F (67) & 8.40$\times$10$^{-10}$ & 3.50$\times$10$^{-10}$ & $<$0.01 \cr
Electrodes & Aluminium & Al (100) & 1.80$\times$10$^{-10}$ & 8.59$\times$10$^{-11}$ & $<$0.01 \cr
\hline
Total & & & & & $<$3.1 \cr
\hline
\end{tabular}
}
\label{table-neutrons}
\end{table}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=10cm]{qplot-ss-u-final.eps}
\caption{Ionisation yield (ratio of ionisation to phonons normalised to this ratio for electron recoils) versus recoil energy for simulated nuclear recoils in EDELWEISS-II from neutrons originating in the uranium decay chain from contamination in the steel support structure around the main copper vessels. Blue curves show the average and the edges of the band which contains 90\% of nuclear recoils in one of the crystals, as calculated from the experimental resolutions, which were also included in the simulations \cite{EDW2final}. Green curves show the band which contains 90\% of electron recoils. They appear on the plot because of neutron inelastic scattering and capture resulting in gamma-ray production. The pink curve shows the 3 keV software threshold for ionisation, applied as in real data. Statistics corresponds to about $4.5\times10^4$ years of live time for a uranium decay rate of 5 mBq/kg.}
\label{fig:qplot-ss}
\end{center}
\end{figure}
Apart from the mild steel, the radiopurity of the different materials was taken from measurements of decay rates with Ge gamma-spectrometers or from mass-spectrometry data on U and Th concentrations. For the mild steel we used the same U/Th concentrations as for the stainless steel. Note that even with 5 times higher concentrations of U/Th, the contribution of the mild steel components would not exceed 0.2 events for the data reported in Ref. \cite{EDW2final}.
The measurements of the concentrations of U, Th and K (potassium) in the Modane rock and concrete (rock: $0.84\pm0.2$ ppm U and $2.45\pm0.2$ ppm Th, concrete: $1.9\pm0.2$ ppm U and $1.4\pm0.2$ ppm Th), used in the present work, were initially reported in Ref. \cite{chazal}.
The measurements of the neutron flux at LSM \cite{chazal} require higher values for U/Th (the normalisation requires an additional factor of 2.3 \cite{fiorucci07}) than measured in the rock/concrete, possibly due to a non-uniformity in the U/Th abundances or the rock composition. The uncertainties in the U/Th concentrations, in the neutron transport through polyethylene and in the additional normalisation factor for the neutron flux lead to a large uncertainty (about a factor of 4.7) in the neutron event rate from the cavern walls. The upper limit on the neutron event rate from the walls given in Table~\ref{table-neutrons} takes this possible error into account.
Since most measurements of U/Th concentrations resulted in upper limits (given at 90\%~C.~L.), the normalisation of our simulation results gave upper limits on the neutron-induced event rate in EDELWEISS-II. This gives a significant contribution to the uncertainty in the neutron background event rate. The uncertainty of the neutron flux and spectra calculations using SOURCES4A has been discussed in \cite{vito1,carson} and found to be 20-30\% by comparing calculations with different cross-sections for $(\alpha,n)$ reactions and transition probabilities to excited states. The uncertainty due to the neutron transport and geometry model should not exceed 20\% (as follows from the agreement between simulations and data with a neutron source positioned within the shielding). The upper limits shown in Table~\ref{table-neutrons} take into account all these uncertainties. The total rate shown in the last row is the sum of all upper limits and is not strictly an upper limit on the total event rate. Bearing in mind that some of the radioactivity measurements gave positive signals, we can also estimate that the lower limit on the nuclear recoil rate is 1.0 events in the data reported in Ref. \cite{EDW2final}. The neutron background is potentially dominated by neutrons from materials inside the shields, especially cables and electronics.
Figure \ref{fig:qplot-ss} shows an example scatter plot of the ionisation yield (ratio of ionisation energy to recoil energy, normalised to this ratio for electron recoils) versus recoil energy for simulated nuclear recoils from neutrons originating in the uranium decay chain from contamination in the steel support structure around the main copper vessels. $10^6$ neutrons were sampled using the spectrum from SOURCES4A, which corresponds to about $4.5\times10^4$ years of live time for a uranium decay rate of 5 mBq/kg (assuming secular equilibrium). Only events in the fiducial volume of the detectors are shown on the scatter plot. The ionisation yield, $Q$, has been calculated using the relation $Q=0.16\,E_{rec}^{0.18}$, where $E_{rec}$ is the recoil energy in keV. This relation has been proven to be valid for EDELWEISS detectors \cite{martineau,qvalue}.
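Evaluating the quoted parametrisation at the edges of the 20-200 keV region of interest shows where the nuclear-recoil band sits:

```python
# Evaluating the ionisation-yield parametrisation quoted above,
# Q = 0.16 * E_rec^0.18 with E_rec in keV, at the edges of the
# 20-200 keV region of interest.

def ionisation_yield(e_rec_kev):
    return 0.16 * e_rec_kev ** 0.18

q20 = ionisation_yield(20.0)    # ~0.27
q200 = ionisation_yield(200.0)  # ~0.42
```

Both values fall inside the 0.1-0.5 ionisation-yield window used later in the nuclear-recoil selection, consistent with the band shown in the scatter plot.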
To conclude, the neutron rate from radioactivity has been calculated as 1.0-3.1 events (90\%~C.~L.) at 20-200 keV in the EDELWEISS-II data run if the same cuts are applied to both data and simulations. Muon-induced neutrons are expected to contribute $\le0.7$ events \cite{klaus}.
\section{Expected background in EDELWEISS-III}
The next stage of the EDELWEISS experiment, EDELWEISS-III, is currently under construction at LSM. It will contain 40 Ge detectors (800 g each) with an improved electrode configuration and a higher fraction of fiducial mass per crystal (about 600 g), making the total fiducial mass about 24~kg~\cite{edw3}. The larger target mass requires better purity of the materials close to the detectors and additional neutron shielding to reduce the expected background and achieve the projected sensitivity of a few $\times10^{-9}$ pb. Materials and components which could contribute significantly to the gamma-ray or neutron background rate in EDELWEISS-II are being replaced by counterparts with better radiopurity; for instance, the cryostat screens 7 to 11 and other copper parts at 10 mK (the disks supporting the Ge detectors, the vertical bars and the 10 mK chamber) are made of ultra-radiopure NOSV copper \cite{nosv}.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=10cm]{nsp-pe-u-new.eps}
\caption{Simulated energy spectrum of nuclear recoils in EDELWEISS-III detectors from neutrons originating in the uranium decay chain from contamination in the inner polyethylene shielding. The recoil energy has been calculated as the energy deposited in a single crystal and events of all hit multiplicities have been included. Statistics corresponds to about $2.6\times10^4$ years of live time.}
\label{fig:nsp-edw3}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=10cm]{mult-pe-u-final.eps}
\caption{Hit multiplicity distribution for nuclear recoils in EDELWEISS-III detectors from neutrons originating in the uranium decay chain from contamination in the inner polyethylene shielding. Statistics corresponds to about $2.6\times10^4$ years of live time. See text for details.}
\label{fig:mult-edw3}
\end{center}
\end{figure}
Radioactivity measurements of most new components were done at LSM using low-background gamma-ray spectrometry. Extensive simulations of gamma-rays and neutrons were carried out for a geometry of EDELWEISS-III with additional neutron shielding and the results were normalised to the measured concentrations of radioactive isotopes. The results of the measurements and simulations are shown in Table~\ref{table:edw3}. For some components, such as cables and connectors, various parts were screened separately using an HPGe detector. We present in Table~\ref{table:edw3} the data for the parts which contribute the most to the background rate. The uncertainties in the radioactivity levels are given at 90\%~C.~L. Some measurements gave only upper limits, leading to large uncertainties in the expected background rates. Neutron event rates were calculated assuming secular equilibrium in the U/Th decay chains except for the $^{210}$Pb sub-chain. The neutron rate is also affected by a large uncertainty in the chemical composition of the component, or the part of it, which may contribute to the background. Since a significant fraction of neutrons may come from ($\alpha$,n) reactions, exact knowledge of the chemical composition of the material is crucial for the estimate of the neutron event rate. However, in some cases, for instance electronics parts, it is not known precisely which particular part is contaminated the most, and hence it is difficult to predict the expected rate of events with high accuracy. We emphasise that we try to avoid placing materials containing elements with high ($\alpha$,n) cross-sections (low energy thresholds), for example fluorine, close to the crystals.
As can be seen from Table~\ref{table:edw3}, the large contamination of the printed circuit boards (PCBs) used in electronics (rows 8 and 9 in Table~\ref{table:edw3}) could compromise the sensitivity of the experiment if the crystals were not shielded from their radioactivity by a 14 cm thick lead plate and 10 cm of additional polyethylene shielding.
An example nuclear recoil energy spectrum for neutron events from radioactivity (uranium decay chain) in the inner polyethylene shielding and the corresponding hit multiplicity distribution are shown in Figures \ref{fig:nsp-edw3} and \ref{fig:mult-edw3}, respectively. The multiplicity has been defined as the number of hits in different crystals where at least one hit was in the region of interest: recoil energy 20-200 keV, ionisation energy $>3$ keV, ionisation yield 0.1-0.5 and hit location within the fiducial volume; other hits have only been required to have an energy higher than 10 keV. 35-40\% of events are single-hit events with this selection. The energy spectrum is plotted for all events with any multiplicity. The recoil energy has been calculated as the energy deposited in a single crystal. Statistics corresponds to about $2.6\times10^4$ years of live time.
\begin{sidewaystable}
\begin{center}
\caption{\label{table:edw3}Radioactive contaminations in materials of the EDELWEISS-III set-up. All contaminations have been assessed by gamma-ray spectrometry at LSM, except for the NOSV copper of the thermal screens, for which values were taken from the measurements reported in \cite{nosv}. The last two columns give the expected gamma-induced background in events/kg/day at 20-200 keV and the neutron-induced background in a year of running in 24 kg of fiducial mass. For a 15 keV threshold the gamma background will change by less than 3\% whereas the neutron background will increase by about 15--20\%. The first 5 rows with data show the materials positioned close to the crystals, so the crystals are directly exposed to the radiation from these components. The next 3 rows show the materials below the lead plate and the polyethylene beneath the detectors. The small gamma rate from the warm electronics is due to the additional lead which shields the crystals from their gamma radiation. The gamma-induced rate is given for all events within the fiducial volume without excluding coincidences between different crystals. For the neutron-induced rate, coincidences were excluded assuming a threshold for a second hit of 10 keV (35-40\% of events are single-hit events with this selection).}
\vspace{2mm}
\begin{tabular}{@{}*{11}{l}}
\hline
Component & Material & Mass & \multicolumn{5}{l}{Radioactivity in materials (mBq/kg) } & Gammas & Neutrons \cr
& & (kg) & $^{226}$Ra & $^{228}$Th & $^{210}$Pb& $^{40}$K & $^{60}$Co & (kg$\times$days)$^{-1} $& Events/year \cr
\hline
Cables & Apical, Cu & 0.2 & 26$\pm$15 & $<$50 & 346$\pm$110 & 167$\pm$126& $<$25 & 5--11 & 0.03--0.07 \cr
Connectors & Delrin, brass & 0.056 & 32$\pm$20 & $<$53 & 11000$\pm$1000 & 680$\pm$220 & $<$36 & 1--8 & 0.02--0.06 \cr
Screws & Brass & 0.1 & 4.9$\pm$1.3 & $<$3 & $<$100 & $<$40 & $<$3 & $<$1 & $<$0.003 \cr
Screens, support & Cu & $\sim$500 & $<$0.016 & $<$0.012 & -- & $<$0.11 & $<$0.018 & $<$7 & $<$0.01 \cr
Shielding & CH$_{2}$ & $\sim$90 & 0.65$\pm$0.08 & 0.30$\pm$0.07 & $<$3 & $<$1 & $<$0.06 & 7-14 & 0.03--0.06 \cr \hline
Connectors & Al, resin & 1.6 & 80$\pm$9 & 158$\pm$6 & 743$\pm$48 & 129$\pm$33 & $<$4 & 0.2--0.3 & 0.3--0.5 \cr
Cables & PTFE & $\sim$1 & $<$35 & $<$28 & 190$\pm$40 & 440$\pm$110 & $<$19 & $<$1 & $<$0.1 \cr
Cold electronics & PCB & 0.23 & 7800$\pm$500 & 12600$\pm$1200 & 4500$\pm$400 & 6500$\pm$1200 & $<$120 & 1--2 & 0.04--0.06 \cr \hline
Warm electronics & PCB & - & 26500$\pm$1500$^{*}$ & 19300$\pm$1100 & 82000$\pm$5000 & 27000$\pm$3000 & - & $<$1 & 0.3--0.5 \cr \hline
Total & & & & & & & & 14--44 & 0.7--1.4 \cr \hline
\end{tabular}
\end{center}
$^{*}$ Decay rates for warm electronics are given for the whole set (not in mBq/kg).
\end{sidewaystable}
In addition to the components specified in Table~\ref{table:edw3} we expect less than 0.3 neutrons per year from components which were already present in EDELWEISS-II, such as the lead shielding and the mild and stainless steel support structure, bringing the total expected neutron rate to about 0.7-1.7 events per year of running. Decreasing the software energy threshold down to 15 keV will increase the expected neutron rate by 15--20\%.
By comparing Tables \ref{tab:eventrate_le} and \ref{table:edw3} we can see that the improvement in the gamma-ray-induced background event rate, measured per unit mass and unit exposure time, will be up to a factor of 6. Even in the worst possible scenario of all contaminations being close to the 90\%~C.~L. upper limits (a factor of 2 improvement in the gamma-induced event rate), we expect that the better performance of the new ``fiducial inter-digitized'' (FID) detectors compared to the old ID-type detectors will allow us to reach the projected sensitivity of a few $\times10^{-9}$ pb for the WIMP-nucleon cross-section.
Since single nuclear recoil events from neutrons cannot be rejected by any discrimination technique, special measures have been taken in the new design to reduce the possible neutron background; specifically, additional polyethylene shielding will be installed in EDELWEISS-III. Our simulations (see Table~\ref{table:edw3}) show that this shielding will suppress the neutron background by more than an order of magnitude (per unit target mass) compared to the EDELWEISS-II setup. The neutron background given in Table \ref{table:edw3} corresponds to a rate per unit mass and exposure of $(0.8-1.9)\times10^{-4}$ events/kg/day in EDELWEISS-III, compared to $(2.6-8.1)\times10^{-3}$ events/kg/day in EDELWEISS-II. An improvement by at least an order of magnitude will allow us to achieve the projected sensitivity with about 3000 kg$\times$days of statistics with EDELWEISS-III.
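The quoted EDELWEISS-III rate can be cross-checked against the planned exposure with simple arithmetic:

```python
# Cross-check of the rates quoted above: at (0.8-1.9)e-4 neutron
# events/kg/day, the planned ~3000 kg x days exposure corresponds to
# roughly 0.2-0.6 neutron background events in total.

exposure_kg_days = 3000.0
rate_lo, rate_hi = 0.8e-4, 1.9e-4      # events/kg/day, EDELWEISS-III
n_lo = rate_lo * exposure_kg_days       # ~0.24 events
n_hi = rate_hi * exposure_kg_days       # ~0.57 events
```

Well below one expected neutron event over the full exposure, consistent with the claim that the neutron background will not limit the projected sensitivity.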
\section{Conclusions}
An extensive study of the gamma and neutron background in the EDELWEISS experiment has been performed, based on Monte Carlo simulations combined with radiopurity data. The primary source of gamma background in EDELWEISS-II is the copper of the cryostat screens and 10 mK parts. The neutron background is potentially dominated by neutrons produced in ($\alpha$,n) reactions in materials inside the shields, in particular cables and electronics. The calculated neutron rate from radioactivity of 1.0-3.1 events (90\%~C.~L.) at 20-200 keV in the EDELWEISS-II data run, together with the expected upper limits on misidentified gamma-ray events ($\le0.9$), surface betas ($\le0.3$) \cite{EDW2final} and muon-induced neutrons ($\le0.7$) \cite{klaus}, does not contradict the 5 observed events in the nuclear recoil band \cite{EDW2final}.
The background studies performed in the present work have contributed to the design of the next stage of the experiment, EDELWEISS-III. New cryostat screens and 10 mK parts will be built from ultra-pure copper and an inner polyethylene shielding against neutrons from materials inside the external shielding will be installed.
The expected gamma-ray and neutron induced background rates from radioactivity in EDELWEISS-III at 20-200 keV are 14-44 events/kg/day and 0.7-1.4 events in 40 detectors per year, respectively. With these improvements and the projected increase by an order of magnitude of the detector mass, the goal is to soon probe the range of spin-independent WIMP-nucleon cross-sections down to a few $\times 10^{-9}$ pb.
\section*{Acknowledgments}
The help of the technical staff of the Laboratoire Souterrain de Modane is gratefully acknowledged. Matthias Laubenstein has kindly measured the copper of type CuC2 using the GeMPI detector at the Laboratori Nazionali del Gran Sasso, LNGS, developed by the Max-Planck-Institut f\"ur Kernphysik (MPIK) in Heidelberg. The EDELWEISS project is supported in part by the Agence Nationale pour la Recherche (France) under contract ANR-10-BLAN-0422-03, the Russian Foundation for Basic Research (Russia) and the Science and Technology Facilities Council (UK, grants ST/I003371/1, ST/I00338X/1, ST/J000671/1, ST/J000663/1, ST/K003186/1, ST/K006444/1, ST/K003151/1). Background studies for dark matter search are funded in part by the German ministry of science and education (BMBF) within the ``Verbundforschung Astroteilchenphysik'' grant 05A11VK2.
\section{Introduction}
The study of the roots of random polynomials
is among the most important and popular topics
in Mathematics and in some areas of Physics.
For almost a century
a considerable amount of literature about this problem
has emerged from
fields as probability, geometry, algebraic geometry,
algorithm complexity, quantum physics, etc.
In spite of its rich history it is still an extremely active field.
There are several reasons that lead to consider random polynomials
and several ways to randomize them, see Bharucha-Reid and Sambandham
\cite{brs}.
The case of algebraic polynomials
$P_d(t)=\sum^d_{j=1}a_jt^j$
with independent identically distributed coefficients was the first one to be extensively studied
and was completely understood during the 70s.
If $a_1$ is centered, $\mathbb P(a_1=0)=0$ and $\mathbb{E}\,(|a_1|^{2+\delta})<\infty$
for some $\delta>0$,
then, the asymptotic expectation and
the asymptotic variance of the number of real roots of $P_d$,
as the degree $d$ tends to infinity, are of order $\log(d)$
and, once normalized, the number of real roots converges in distribution
towards a centered Gaussian random variable.
See the books by Farahmand \cite{far98} and Bharucha-Reid and Sambandham \cite{brs}
and the references therein for the whole picture.
The case of systems of polynomial equations
seems to be considerably harder and has received in consequence much less attention.
The results in this direction are confined to the Shub-Smale model
and some other invariant distributions.
The ensemble of Shub-Smale random polynomials was introduced
in the early 90s
by Kostlan \cite{kostlan},
who argues that this is the most natural distribution
for a polynomial system.
The exact expectation of the number of roots was obtained by geometric means,
see Edelman and Kostlan \cite{ek} for the one-dimensional case
and Shub and Smale \cite{ss} for the multi-dimensional one.
In 2004 and 2005, Aza\"is and Wschebor \cite{aw-pol} and Wschebor \cite{w}
obtained by probabilistic methods
the asymptotic variance as the number of equations and
variables tends to infinity.
Recently, Dalmao \cite{d} obtained the asymptotic variance and a CLT
for the number of zeros as the degree $d$ goes to infinity in
the case of one equation in one variable.
Letendre in \cite{l2} studied the asymptotic behavior of the volume of random real algebraic submanifolds.
His results include the finiteness of the limit variance, when the degree tends to infinity,
of the volume of the zero sets of Kostlan-Shub-Smale systems with strictly fewer equations than variables.
Some results for the expectation and variance of related models
are included in \cite{aw-pol,ll,l}.
In the present paper we prove that, as the degree goes to infinity, the asymptotic variance of the normalized number of real roots of a
Kostlan-Shub-Smale square random system with $m$ equations and $m$ variables
exists in $(0,\infty)$.
We use
Rice Formulas \cite{aw} to show the finiteness of the limit variance and
Hermite expansions as in Kratz and Le\'on \cite{kl-97} to show that it is strictly positive.
Furthermore, we strongly exploit the invariance under isometries
of the distribution of the polynomials.
The reader may wonder, in view of the results mentioned above, if the normalized number of roots satisfies a CLT when the degree of the system tends to infinity.
The answer is affirmative if $m = 1$ \cite{d} but for the time being we cannot give an answer to this question for $m>1$.
The ingredients to prove a CLT for a non-linear functional of a Gaussian process are:
a) to write a representation of the normalized functional in the It\^o-Wiener chaos;
b) to demonstrate that each component verifies a CLT (Fourth Moment Theorem \cite{np}, \cite{pta});
and,
if the functional has an expansion involving infinitely many terms,
c) to prove that the tail of the asymptotic variance tends to zero uniformly w.r.t. $d$.
In the present case we lack a proof of c).
For $m = 1$, the fact that invariance under rotations is equivalent to stationarity
allows one to build a proof similar to the one for the number of crossings of a stationary Gaussian process.
The rest of the paper is organized as follows. Section 2
states the problem and presents the main result.
Section 3 deals with the proof and Section 4 presents some auxiliary results
as well as the explicit form of the asymptotic variance.
\section{Main Result}
Consider a square system ${\mathbf P}$ of $m$ polynomial equations in $m$ variables
with common degree $d>1$.
More precisely,
let ${\mathbf P}=(P_1,\dots,P_m)$ with
\begin{equation*}
P_{\ell}(t)=\sum_{|\boldsymbol j|\leq d}a^{(\ell)}_{\boldsymbol j}t^{\boldsymbol j},
\end{equation*}
where
\begin{enumerate}
\item $\boldsymbol j=(j_{1},\dots,j_{m})\in\mathbb N^{m}$
and $|\boldsymbol j|=\sum^{m}_{k=1}j_{k}$;
\item $a^{(\ell)}_{\boldsymbol j}=a^{(\ell)}_{j_1\dots j_m}\in\mathbb R$, $\ell=1,\dots,m$, $|\boldsymbol j|\leq d$;
\item $t=(t_{1},\dots,t_{m})$
and $t^{\boldsymbol j}=\prod^m_{k=1} t^{j_k}_{k}$.
\end{enumerate}
We say that ${\mathbf P}$ has the Kostlan-Shub-Smale
(KSS for short) distribution if the coefficients $a^{(\ell)}_{\boldsymbol j}$
are independent centered normally distributed random variables with variances
\begin{equation*}
\mbox{\rm Var}\left(a^{(\ell)}_{\boldsymbol j}\right)=\binom{d}{\boldsymbol j}=\frac{d!}{
j_1!\dots j_m!(d-|\boldsymbol j|)!}.
\end{equation*}
We are interested in the number of real roots of ${\mathbf P}$
that we denote by $N^{{\mathbf P}}_d$. Shub and Smale \cite{ss} proved that $\mathbb{E}\,(N^{{\mathbf P}}_d)=d^{m/2}$.
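In the one-dimensional case $m=1$ this expectation can be checked by hand. A sketch (using the Edelman-Kostlan form of the Kac-Rice formula, $\mathbb{E}\,(N)=\frac1\pi\int_{\mathbb R}\sqrt{\partial_s\partial_t\log r_d(s,t)\big|_{s=t}}\,dt$, together with the covariance $r_d(s,t)=(1+st)^d$, which follows from the binomial theorem and the KSS variances):

```latex
\[
\partial_s\partial_t \log r_d(s,t)\Big|_{s=t}
  = \partial_s\partial_t\, d\log(1+st)\Big|_{s=t}
  = \frac{d}{(1+t^2)^2},
\qquad
\mathbb{E}\,(N^{{\mathbf P}}_d)
  = \frac{1}{\pi}\int_{-\infty}^{\infty}\frac{\sqrt{d}}{1+t^2}\,dt
  = \sqrt{d}.
\]
```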
Our main result is the following.
\begin{theorem}\label{teo}
Let ${\mathbf P}$ be a KSS random polynomial system with $m$ equations, $m$ variables and degree $d$.
Then, as $d\to\infty$ we have
$$
\lim_{d\to\infty}\frac{\mbox{\rm Var}(N^{{\mathbf P}}_d)}{d^{m/2}}=V^2_\infty,
$$
where $0<V^2_\infty<\infty$.
\end{theorem}
\subsection{Explicit expression of the variance}\label{s:ex}
Using the method of section 12.1.2 of \cite{aw}
an explicit expression for the limit variance can be given.
For $k=1,\dots,m$ let $\xi_k,\eta_k$ be independent standard normal random vectors on $\mathbb R^k$.
Let us define
\begin{itemize}
\item $\bar{\sigma}^2(t)=1-\frac{t^2\exp(-t^2)}{1-\exp(-t^2)}$;
\item $\bar{\rho}(t)=\frac{(1-t^2-\exp(-t^2))\exp(-t^2/2)}{1-(1+t^2)\exp(-t^2)}$;
\item $m_{k,j} = \mathbb{E}\,\left(\|\xi_k\|^j\right)=2^{j/2}\frac{\Gamma((j+k)/2)}{
\Gamma(k/2)},$
where
$\|\cdot\|$ is the Euclidean norm on $\mathbb R^k$;
\item for $k=1,\ldots,m-1$, $M_k(t)=\mathbb{E}\,\left[
\|\xi_k\|\,\|\eta_k+\frac{e^{-t^2/2}}{(1-e^{-t^2})^{1/2}}\xi_k\|\right]$;
\item for $k=m$,
$ M_m(t)=
\mathbb{E}\,\left[\|\xi_m\|\, \|\eta_m+\frac{\bar{\rho}(t)}{(1-\bar{\rho}^2(t))^{1/2}}
\xi_m\|\right].$
\end{itemize}
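As a sanity check on these constants (outside the proof), the moments $m_{k,j}$ can be compared with well-known closed forms: $m_{k,2}=\mathbb{E}\,\|\xi_k\|^2=k$ and $m_{1,1}=\mathbb{E}\,|\xi_1|=\sqrt{2/\pi}$. A minimal script, assuming only the Gamma-function formula above (the function name is ours):

```python
from math import gamma, pi, sqrt, isclose

def m_kj(k, j):
    """Moment E||xi_k||^j of a standard Gaussian vector in R^k,
    via the formula m_{k,j} = 2^{j/2} Gamma((j+k)/2) / Gamma(k/2)."""
    return 2 ** (j / 2) * gamma((j + k) / 2) / gamma(k / 2)

# E||xi_k||^2 = k: the squared norm is chi-square with k degrees of freedom.
for k in range(1, 8):
    assert isclose(m_kj(k, 2), k)

# E|xi_1| = sqrt(2/pi): the mean of a half-normal distribution.
assert isclose(m_kj(1, 1), sqrt(2 / pi))
```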
\begin{theorem}\label{teo:var}
We have
\begin{equation*}
V^2_\infty
=\frac12+\frac{\kappa_m \kappa_{m-1}}{2(2\pi)^m}
\cdot\int^\infty_0
t^{m-1}\left[\frac{\bar{\sigma}^4(t)(1-\bar{\rho}^2(t))}{1-e^{-t^2}}\right]^{1/2}
\left[\prod^m_{k=1}M_k(t)-\prod^m_{k=1}m^2_{k,1}\right]dt.
\end{equation*}
\end{theorem}
\section{Proof}
\subsection{Preliminaries}
It is customary and convenient to homogenize the polynomials.
That is, to add an auxiliary variable $t_0$
and to multiply the monomial in $P_\ell$
corresponding to the index $\boldsymbol j$ by $t^{d-|\boldsymbol j|}_0$.
Let ${\mathbf Y}=(Y_{1},\dots,Y_{m})$ denote the resulting vector of $m$
homogeneous polynomials in $m+1$ real variables
with common degree $d>1$.
We have,
\begin{equation*}
Y_{\ell}(t)=\sum_{|\boldsymbol j|=d}a^{(\ell)}_{\boldsymbol j}t^{\boldsymbol j},\quad \ell=1,\dots, m,
\end{equation*}
where this time
$\boldsymbol j=(j_{0},\dots,j_{m})\in\mathbb N^{m+1}$;
$|\boldsymbol j|=\sum^{m}_{k=0}j_{k}$;
$a^{(\ell)}_{\boldsymbol j}=a^{(\ell)}_{j_0\dots j_m}\in\mathbb R$;
$t=(t_{0},\dots,t_{m})\in\mathbb R^{m+1}$
and
$t^{\boldsymbol j}=\prod^m_{k=0} t^{j_k}_{k}$. \medskip
Since ${\mathbf Y}$ is homogeneous, its roots consist of lines through $0$ in
$\mathbb R^{m+1}$. Then, it is easy to check that each root of ${\mathbf P}$ corresponds
exactly to two (opposite) roots of ${\mathbf Y}$ on the unit sphere $S^m$ of
$\mathbb R^{m+1}$.
Furthermore, one can prove that the subset of homogeneous polynomials ${\mathbf Y}$
with roots lying in the hyperplane $t_0=0$ has Lebesgue measure zero.
Then, denoting by
$N^{{\mathbf Y}}_d$ the number of roots of ${\mathbf Y}$ on $S^m$,
we have $N^{{\mathbf P}}_d=N^{{\mathbf Y}}_d/2$ almost surely.
From now on we work with the homogenized version ${\mathbf Y}$.
The standard
multinomial formula shows that for all $s,t\in\mathbb R^{m+1}$ we have
\begin{equation*}
r_d(s,t)
:=\mathbb{E}\,({Y_{\ell}(s)Y_{\ell}(t)})
=\<s,t\>^{d},
\end{equation*}
where $\<\cdot,\cdot\>$ is the usual inner product in $\mathbb R^{m+1}$.
As a consequence, we see that
the distribution of the system ${\mathbf Y}$ is invariant under the action of the
orthogonal group in $\mathbb R^{m+1}$.
For the ease of notation we omit the dependence on $d$ of ${\mathbf Y}$.
\medskip
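The identity $r_d(s,t)=\langle s,t\rangle^d$ is just the multinomial theorem applied to the variances $\binom{d}{\boldsymbol j}$. A small numerical illustration (with $m=2$, i.e. three homogeneous variables, and arbitrary test points; the function name is ours):

```python
from itertools import product
from math import factorial, isclose

def kss_covariance(s, t, d):
    """Sum of multinomial(d; j) * prod_k (s_k t_k)^{j_k} over |j| = d,
    which equals <s, t>^d by the multinomial theorem."""
    n = len(s)
    total = 0.0
    for j in product(range(d + 1), repeat=n):
        if sum(j) != d:
            continue
        coeff = factorial(d)
        for jk in j:
            coeff //= factorial(jk)
        term = float(coeff)
        for k in range(n):
            term *= (s[k] * t[k]) ** j[k]
        total += term
    return total

s = (0.3, -0.5, 0.8)
t = (0.1, 0.7, -0.2)
d = 7
inner = sum(sk * tk for sk, tk in zip(s, t))
assert isclose(kss_covariance(s, t, d), inner ** d)
```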
In the sequel we need to consider the derivative of $Y_\ell$, $\ell=1,\dots,m$.
Since the parameter space is the sphere $S^m$,
the derivative is taken in the sense of the sphere,
that is, the spherical derivative $Y'_\ell(t)$ of $Y_\ell(t)$ is the
orthogonal projection of the free gradient on
the tangent space $t^\bot$ of $S^m$ at $t$.
The $k$-th component of $Y'_\ell(t)$ at a given basis of the tangent space
is denoted by $Y'_{\ell k}(t)$.
The covariances between the derivatives and between
the derivatives and the process are obtained via routine computations
from the covariance of $Y_\ell$.
In particular, the invariance under isometries is preserved after derivation
and for each $t\in S^m$, ${\mathbf Y}(t)$ is independent from
${{\mathbf Y}}'(t)=(Y'_1(t),\ldots,Y'_m(t))$.
\subsection{Finiteness of the limit variance}\label{s:var}
In this section we prove that
$$
\lim_{d\to\infty}\frac{\mbox{\rm Var}(N^{{\mathbf P}}_d)}{d^{m/2}}<\infty.
$$
Recalling that $\mathbb{E}\,(N^{{\mathbf P}}_d)=d^{m/2}$, we write
\begin{equation}\label{Rice:second}
\mbox{\rm Var}\big(N^{{\mathbf P}}_d\big)
=\mbox{\rm Var}\left(\frac{N^{{\mathbf Y}}_d}{2}\right)
=\frac14\big[\mathbb{E}\,\big(N^{{\mathbf Y}}_d\big(N^{{\mathbf Y}}_d-1\big)\big)-\mathbb{E}\,^2\big(N^{{\mathbf Y}}_d\big)\big]+\frac{d^{m/2}}{2}.
\end{equation}
The quantity $\mathbb{E}\,(N^{{\mathbf Y}}_d(N^{{\mathbf Y}}_d-1))$ is computed using Rice formula \cite[Th. 6.3]{aw} and a localisation argument:
\begin{multline*}
\mathbb{E}\,(N^{\mathbf Y}_d(N^{\mathbf Y}_d-1))
=
\int_{(S^m)^2}\mathbb{E}\,[|\det {\mathbf Y}'(s)\det {\mathbf Y}'(t)|\,|{\mathbf Y}(s)={\mathbf Y}(t)=0]\\
\cdot p_{{\mathbf Y}(s),{\mathbf Y}(t)}(0,0)dsdt.
\end{multline*}
Here $ds$ and $dt$ denote the $m$-geometric measure on $S^m$;
in other parts we will use $ds$ and $dt$ for the Lebesgue measure.
The following Lemma allows us to reduce this integral
to a one-dimensional one.
The proof is a direct consequence of the co-area formula.
\begin{lemma}\label{lem:intACV}
Let ${\mathcal H}$ be a measurable function defined on $\mathbb R$.
Then, we have
\begin{multline*}
\int_{(S^m)^2} {\mathcal H}( \langle s,t \rangle )\,ds\,dt
= \kappa_m\kappa_{m-1}\int_0^{\pi}\sin(\psi)^{m-1}
{\mathcal H}(\cos(\psi))\, d\psi \\
=\frac{\kappa_m\kappa_{m-1}}{\sqrt{d}}\int_0^{ \sqrt d \pi}
\sin\left(\frac{z}{\sqrt d}\right)^{ m-1 }
\mathcal {\mathcal H}\left(\cos\left(\frac{z}{\sqrt d}\right)\right)\, dz,
\end{multline*}
where $\kappa_m$ is the $m$-geometric measure of $S^m$.
\qed
\end{lemma}
Let $\{e_0, e_1,\dots, e_m\}$ be the canonical basis of $\mathbb R^{m+1}$.
Because of the invariance of ${\mathbf Y}$ by isometries we can assume without loss of generality that
\begin{equation}\label{eq:st}
s = e_0, \quad t = \cos(\psi) e_0 +\sin(\psi) e_1.
\end{equation}
For $s^\bot$ we choose as basis $\{e_1,\ldots,e_m\}$ and
$\{ \sin(\psi) e_0 - \cos(\psi) e_1, e_2,\dots,e_m\}$ for $t^\bot$.
Finally, take $\psi= z/\sqrt{d}$ and use Lemma \ref{lem:intACV}. Hence,
\begin{multline*}
d^{-m/2}\mathbb{E}\,(N^{{\mathbf Y}}_d(N^{{\mathbf Y}}_d-1))\\
=\frac{\kappa_m\kappa_{m-1}}{(2\pi)^m\sqrt d}
\int_0^{\sqrt d\pi}\sin^{m-1}\left(\frac z{\sqrt d}\right)
\frac{d^{m/2}}{\left(1-\cos^{2d}(\frac z{\sqrt d})\right)^{m/2}}{{\mathcal E}\left(\frac z{\sqrt d}\right)}dz,
\end{multline*}
where ${\mathcal E}(z/\sqrt{d})$
is the conditional expectation written for $s,t$ as in \eqref{eq:st}.
Now, we deal with the conditional expectation ${\mathcal E}(z/\sqrt{d})$.
Introduce the following notation
\begin{eqnarray}\label{eq:cs}
\mathcal{A}\left(\frac z{\sqrt d}\right)&=&-\sqrt d \cos^{d-1}\left(\frac z{\sqrt d}\right)\sin\left(\frac z{\sqrt d}\right); \\
\mathcal{B}\left(\frac z{\sqrt d}\right)&=&\cos^d\left(\frac z{\sqrt d}\right)
-(d-1)\cos^{d-2}\left(\frac z{\sqrt d}\right)\sin^2\left(\frac z{\sqrt d}\right);\notag\\
\mathcal{C}\left(\frac z{\sqrt d}\right)&=&\cos^d\left(\frac z{\sqrt d}\right);\notag\\
\mathcal{D}\left(\frac z{\sqrt d}\right)&=&\cos^{d-1}\left(\frac z{\sqrt d}\right);\notag
\end{eqnarray}
and, omitting the argument $(z/\sqrt{d})$,
\begin{equation*}
\sigma^2=1-\frac{\mathcal{A}^2}{1-\mathcal{C}^2},\quad
\rho = \frac{\mathcal{B}(1-\mathcal{C}^2) -\mathcal{A}^2\mathcal{C}}{1-\mathcal{C}^2-\mathcal{A}^2}.
\end{equation*}
Thus, the variance-covariance matrix of the vector
$\left(Y_\ell(s),Y_\ell(t),\frac{Y'_\ell(s)}{\sqrt d},\frac{Y'_\ell(t)}{\sqrt d}\right)$,
in the given basis, can be written in the following form
\begin{eqnarray}\label{eq:c1}
\left[\begin{array}{c|c|c}
A_{11}&A_{12} &A_{13}\\ \hline
A_{12}^\top&I_m\,\,&\,A_{23}\\ \hline
A_{13}^\top&A_{23}^\top\,\,&I_m\\
\end{array}\right],
\end{eqnarray}
where $I_m$ is the $m\times m$ identity matrix,
\begin{equation}\label{eq:c2}
A_{11}=\left[\begin{array}{cc}
1&\mathcal{C}\\
\mathcal{C}&1
\end{array}\right], \;
A_{12}= \left[\begin{array}{cccc}
0& 0&\cdots& 0\\
-\mathcal{A}&0& \cdots & 0
\end{array}\right],\:
A_{13}= \left[\begin{array}{cccc}
\mathcal{A}& 0&\cdots& 0\\
0&0& \cdots & 0
\end{array}\right],
\end{equation}
and $A_{23}$ is the $m\times m$ diagonal matrix $\mbox{\rm diag}( \mathcal{B},\mathcal{D},\ldots,\mathcal{D}).$
Gaussian regression formulas (see \cite[Proposition 1.2]{aw}) imply that the conditional distribution of the vector
$\big({\frac{Y'_\ell(s)}{\sqrt{d}},\frac{Y'_\ell(t)}{\sqrt{d}}}\big)$
(conditioned on ${\mathbf Y}(s)={\mathbf Y}(t)=0$)
is centered normal with variance-covariance matrix given by
\begin{equation} \label{e:jm:b}
\left[\begin{array}{c|c}
B_{11}&B_{12} \\ \hline
B_{12}^\top&B_{22} \\
\end{array}\right],
\end{equation}
with $B_{11}=B_{22}= \mbox{\rm diag}( \sigma^2, 1,\dots,1)$ and
$B_{12}= \mbox{\rm diag}( \sigma^2 \rho, \mathcal{D},\dots,\mathcal{D})$.
It is important to remark that if $A=(A_1\,A_2\ldots A_m)$ is a matrix with column vectors $A_j$,
then $\det( A)=Q_m(A_1,A_2,\dots,A_m)$ for a certain polynomial $Q_m$ of degree $m$ from $\mathbb R^{m^2}$ to $\mathbb R$.
Representing the Gaussian vectors in terms of standard ones, we can write
\begin{multline*}
{{\mathcal E}\left(\frac z{\sqrt d}\right)}
=\int_{(\mathbb R^{m^2})^2}\phi_{m^2}(\mathbf x) \phi_{m^2}(\mathbf y)
\left| Q_m\left(\left(\begin{array}{c}
\sigma x_{11}\\
x_{12}\\
\cdot\\
x_{1m}\end{array}\right),
\ldots,
\left(\begin{array}{c} \sigma x_{m1}\\
x_{m2}\\
\cdot\\
x_{mm}\end{array}\right) \right) \right|
\\%%
\left| Q_m\left(\left(\begin{array}{c}
\sigma( \rho x_{11}+\sqrt{1-\rho^2}y_{11}) \\
\mathcal{D} x_{12}+\sqrt{1-\mathcal{D}^2}y_{12}\\
\cdot\\
\mathcal{D} x_{1m}+\sqrt{1-\mathcal{D}^2}y_{1m}\end{array}\right),
\ldots,
\left(\begin{array}{c}
\sigma( \rho x_{m1}+\sqrt{1-\rho^2}y_{m1}) \\
\mathcal{D} x_{m2}+\sqrt{1-\mathcal{D}^2}y_{m2}\\
\cdot\\
\mathcal{D} x_{mm}+\sqrt{1-\mathcal{D}^2}y_{mm}\end{array} \right)
\right) \right| d\mathbf x d\mathbf y,
\end{multline*}
where $\phi_{m^2}$ is the standard normal density in $\mathbb R^{m^2}$.
Because of the homogeneity of the determinant we have
\begin{equation*}
{{\mathcal E}\left(\frac z{\sqrt d}\right)}
= \sigma^2\int_{(\mathbb R^{m^2})^2}|Q_m ( \mathbf{x})|\, |Q_m ( \mathbf{z})|
\phi_{m^2}(\mathbf{x}) \phi_{m^2}(\mathbf{y}) d\mathbf x d\mathbf y
=: \sigma^2 G(\rho,\mathcal{D}),
\end{equation*}
where $\mathbf{z}=\mbox{\rm diag}(\rho,\mathcal{D},\dots,\mathcal{D}){\mathbf{x}}+\mbox{\rm diag}(\sqrt{1-\rho^2},\sqrt{1-\mathcal{D}^2},\dots,\sqrt{1-\mathcal{D}^2}){\mathbf{y}}$.
Now, we return to the expression of the variance in \eqref{Rice:second}.
We have
\begin{multline}\label{m:1}
d^{-m/2} \mbox{\rm Var}\left(N^{{\mathbf P}}_d\right)
=\frac{1}{4 d^{m/2}} \big[\mathbb{E}\,(N^{{\mathbf Y}}_d(N^{{\mathbf Y}}_d-1))- (\mathbb{E}\,(N^{{\mathbf Y}}_d))^2\big] + \frac{1}{2}\\
=
\frac12+\frac{\kappa_m\kappa_{m-1}}{4(2\pi)^m}
\int_0^{\sqrt d\pi}\sin^{m-1}\left(\frac z{\sqrt d}\right)d^{(m-1)/2}
\\\bigg[ \frac{ \sigma^2(\frac z{\sqrt d}) } {(1-\cos^{2d}(\frac z{\sqrt d}))^{m/2}}
G\Big(\rho\Big(\frac z{\sqrt d}\Big), \mathcal{D}\Big(\frac z{\sqrt d}\Big)\Big)- G(0,0) \bigg]dz.
\end{multline}
The proof of the convergence of this integral is done in several steps. \medskip
In the rest of this section $\mathbf{C}$ denotes an unimportant constant whose value can change from one occurrence to another. It can depend on $m$, but recall that $m$ is fixed. \medskip
\noindent {\it Step 1:} Bounds for $G$.
\begin{itemize}
\item $G(\rho,\mathcal{D}) = \int_{(\mathbb R^{m^2})^2}|Q_m ( \mathbf{x})|\,|Q_m ( \mathbf{z})|
\phi_{m^2}(\mathbf{x}) \phi_{m^2}(\mathbf{y}) d\mathbf x d\mathbf y$;
\item $G(0,0) = \int_{(\mathbb R^{m^2})^2}|Q_m ( \mathbf{x})|\,|Q_m ( \mathbf{y})|
\phi_{m^2}(\mathbf{x}) \phi_{m^2}(\mathbf{y}) d\mathbf x d\mathbf y$;
\item $ | \sqrt{1-\rho^2} - 1| \leq \mathbf{C} |\rho|$; $ | \sqrt{1-\mathcal{D}^2} - 1| \leq \mathbf{C} |\mathcal{D}|$;
\item $| Q_m( \mathbf{x})| \leq \mathbf{C} ( 1+ \|\mathbf{x}\|_{\infty})^m$;
\item any partial derivative of $Q_m (\mathbf{w}) $ is a polynomial of degree $m-1$ and thus it is bounded by
$\mathbf{C} ( 1+ \|\mathbf{w}\|_{\infty})^{m-1}$.
\end{itemize}
Applying that to a point between $ \mathbf{y}$ and $\mathbf{z}$, we get
\begin{multline*}
| Q_m ( \mathbf{z}) - Q_m ( \mathbf{y}) | \leq \mathbf{C} ( 1 + \|\mathbf{y}\|_{\infty} + \|\mathbf{z}\|_{\infty}
)^{m-1} ( |\rho| +|\mathcal{D}|) \\
\leq \mathbf{C} ( 1 + \|\mathbf{x}\|_{\infty} + \|\mathbf{y}\|_{\infty}
)^{m-1} ( |\rho| +|\mathcal{D}|) ,
\end{multline*}
and
\begin{multline*}
| Q_m ( \mathbf{x}) \cdot Q_m ( \mathbf{z}) -Q_m ( \mathbf{x}) \cdot Q_m ( \mathbf{y}) |
\\ \leq \mathbf{C} ( 1+ \|\mathbf{x}\|_{\infty})^m
( 1+ \|\mathbf{x}\|_{\infty}+\|\mathbf{y}\|_{\infty})^{m-1} ( |\rho| +|\mathcal{D}|) .
\end{multline*}
The finiteness of all the moments of the supremum of Gaussian random variables finally yields
\begin{equation*}
|G(\rho,\mathcal{D})-G(0,0)|\le \mathbf C (|\rho |+|\mathcal{D}|).
\end{equation*}
\noindent {\it Step 2:} Point-wise convergence. It is a direct consequence of the expansions of sine and cosine functions.
As $d$ tends to infinity:
\begin{itemize}
\item $ \mathcal{A}(\frac z{\sqrt d}) \to -z \exp(-z^2/2)$;
\item $ \mathcal{B}(\frac z{\sqrt d}) \to (1-z^2) \exp(-z^2/2)$;
\item $\mathcal{C}(\frac z{\sqrt d})$ and $\mathcal{D}(\frac z{\sqrt d})$ tend to $ \exp(-z^2/2)$;
\item $\sigma^2(\frac z{\sqrt d}) \to \frac{1-(1+z^2) \exp(-z^2)}{1- \exp(-z^2)} =\bar{\sigma}^2(z)$;
\item $\rho(\frac z{\sqrt d})
\to \frac{(1-z^2-\exp(-z^2))\exp(-z^2/2)}{1-(1+z^2)\exp(-z^2)} =\bar{\rho}(z)$;
\end{itemize}
where $\bar{\sigma}^2$ and $\bar{\rho}$ are as in Subsection \ref{s:ex}.
This, in view of the continuity of the function $G$, implies the point-wise convergence of the integrand in \eqref{m:1}. \medskip
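These limits are easy to probe numerically. The following sketch (illustrative only; the tolerance is far from sharp) evaluates the quantities $\mathcal A$, $\mathcal B$, $\mathcal C$ of \eqref{eq:cs} at a large degree:

```python
from math import cos, sin, sqrt, exp, isclose

# A, B, C mirror the functions of eq. (eq:cs), evaluated at z/sqrt(d).
def A(z, d):
    return -sqrt(d) * cos(z / sqrt(d)) ** (d - 1) * sin(z / sqrt(d))

def B(z, d):
    x = z / sqrt(d)
    return cos(x) ** d - (d - 1) * cos(x) ** (d - 2) * sin(x) ** 2

def C(z, d):
    return cos(z / sqrt(d)) ** d

z, d = 1.3, 10 ** 6
assert isclose(A(z, d), -z * exp(-z ** 2 / 2), rel_tol=1e-3)
assert isclose(B(z, d), (1 - z ** 2) * exp(-z ** 2 / 2), rel_tol=1e-3)
assert isclose(C(z, d), exp(-z ** 2 / 2), rel_tol=1e-3)
```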
\noindent {\it Step 3:} Symmetrization.
We have
$\mathcal{A}(\pi-z/\sqrt{d})=(-1)^{d-1}\mathcal{A}(z/\sqrt{d})$,
$\mathcal{B}(\pi-z/\sqrt{d}) =(-1)^{d}\mathcal{B}(z/\sqrt{d})$,
$\mathcal{C}(\pi-z/\sqrt{d}) =(-1)^{d}\mathcal{C}(z/\sqrt{d})$,
$\mathcal{D}(\pi-z/\sqrt{d}) =(-1)^{d-1}\mathcal{D}(z/\sqrt{d})$,
$\sigma^2(\pi-z/\sqrt{d})=\sigma^2(z/\sqrt{d})$ and
$\rho(\pi-z/\sqrt{d})=(-1)^{d}\rho(z/\sqrt{d})$.
Hence, $B_{12}(\pi-z/\sqrt{d})$ in \eqref{e:jm:b} becomes
$$
\mbox{\rm diag}\big((-1)^d\sigma^2(z/\sqrt{d}) \rho(z/\sqrt{d}),(-1)^{d-1} \mathcal D(z/\sqrt{d}), \ldots,(-1)^{d-1} \mathcal D(z/\sqrt{d}) \big),
$$
the rest being unchanged. This corresponds, for example, to performing some changes of sign
(depending on the parity of $d$) on the coordinates of $ Y'_\ell(t)$.
Gathering the different $\ell$, this may imply a change of sign in $\det ({\mathbf Y}'(t))$ that plays no role because of the absolute value. As a consequence
$$
\mathcal E(\pi-z/\sqrt{d}) = \mathcal E(z/\sqrt{d}).$$
In conclusion, for the next step it suffices to dominate the integral
in the r.h.s. of \eqref{m:1} restricted to the interval $[0,\sqrt{d}\pi/2]$.\\
\noindent {\it Step 4:} Domination.
The following lemma gives bounds for the different terms.
\begin{lemma} \label{l:liste}
There exists some constant $\alpha$, $0<\alpha \leq 1/2 $ and some integer $d_0$ such that for $\frac z{\sqrt d} \leq \frac{\pi}{2}$ and $d>d_0 $:
\begin{enumerate}
\item
$ \mathcal{C} \leq \mathcal{D} \leq \cos^{d-2} (\frac z{\sqrt d}) \leq \exp(-\alpha z^2)$;
\item $ |\mathcal{A}|\leq z \exp(-\alpha z^2)$;
\item $ |\mathcal{B}| \leq (1+z^2) \exp(-\alpha z^2)$;
\item for $ z\geq z_0$, $ 1- \mathcal{C}^2 \geq 1- \mathcal{C}^2 - \mathcal{A}^2\geq \mathbf C > 0$;
\item $ 0\leq 1-\sigma^2 \leq \mathbf C \exp(-2\alpha z^2)$;
\item $ |\rho| \leq \mathbf C (1+z^2)^2 \exp(-2\alpha z^2)$.
\end{enumerate}
\end{lemma}
\begin{proof}
We give the proof of the first item; the other cases are similar or easier. On $[0,\pi/2]$ there exists $\alpha_1$, $0<\alpha_1< 1/2$, such that
$$
\cos(\psi) \leq 1-\alpha_1 \psi^2.
$$
Thus,
$$
\cos^{d-2} \bigg(\frac{ z }{\sqrt{d}} \bigg)
\leq
\bigg( 1-\frac{\alpha_1z^2}{d}\bigg)^{d-2} \leq \exp \bigg( -\frac{ \alpha_1 z ^2(d-2)}{d}\bigg)
\leq \exp \bigg( -\alpha z^2 \bigg),
$$
as soon as $\alpha < \alpha_1$ and $d$ is big enough.
\end{proof}
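The constant $\alpha_1$ can be made explicit: $\alpha_1=2/\pi^2$ works on $[0,\pi/2]$, since $\sin\psi\geq(2/\pi)\psi$ there. A numerical verification of this concrete choice (illustrative, not a substitute for the argument above):

```python
from math import cos, pi

ALPHA1 = 2 / pi ** 2  # a concrete admissible constant, 0 < ALPHA1 < 1/2

# Check cos(psi) <= 1 - ALPHA1 * psi^2 on a fine grid of [0, pi/2].
N = 100000
for i in range(N + 1):
    psi = (pi / 2) * i / N
    assert cos(psi) <= 1 - ALPHA1 * psi ** 2 + 1e-12
```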
We have to find a dominating function and to prove the convergence of the integral at zero and at infinity.
At zero, since the function $G$ is bounded
we have to give bounds for
$$\frac{d^{\frac{m-1}{2}} \sin^{m-1} \Big(\frac{z}{\sqrt d}\Big) \sigma^2(\frac z{\sqrt d}) }
{\big(1-\cos^{2d}(\frac z{\sqrt d})\big)^{m/2}}.$$
Clearly, $d^{\frac{m-1}{2}} \sin^{m-1}(z/\sqrt{d})\leq z^{m-1}$.
Besides,
$$
\frac{ \sigma^2\left(\frac z{\sqrt d}\right)}{\big(1-\cos^{2d}(\frac z{\sqrt d})\big)^{\frac{m}{2}}} =
\frac{ 1 - c^2_d(z )- c^{\prime2}_d(z ) }
{ (1- c^2_d(z ))^{\frac{m}{2}+1}},
$$
where $c_d(z)=\mathcal{C}(z/\sqrt{d})$.
For the denominator, using Lemma \ref{l:liste}, we have
\begin{equation} \label{e:md}
1- c^2_d(z ) \geq \mathbf C (1- \exp(- 2 \alpha z^2)).
\end{equation}
We turn now to the numerator,
let $X_d(.)$ be a formal Gaussian stationary process on the line with covariance $c_d$.
Hence,
\begin{multline*}
1-c_d^2(z) - c^{\prime2}_d(z ) = \mbox{\rm Var}\big( X_d(z)| X_d(0),X'_d(0)\big)
\\
=
\mbox{\rm Var}\big( X_d(z) - X_d(0) -zX'_d(0)| X_d(0),X'_d(0)\big) \\
\leq \mbox{\rm Var}\big( X_d(z) - X_d(0) -zX'_d(0)\big)
=z^4 \mbox{\rm Var}\Big(\int _0^1 (1-t )X''_d(zt) dt\Big),
\end{multline*}
where we used the Taylor formula with the integral form of the remainder.
The covariance function $ \cos(z/\sqrt{d} )$ corresponds to the spectral measure
$ \mu = \frac 1 2 \big(\delta_{-d^{-1/2}} + \delta_{d^{-1/2}}\big) $, see \cite{aw}.
The spectral measure associated to $c_d (z)= \cos^d(z/\sqrt{d} )$ is the $d$-th convolution of $\mu$
and a direct computation shows that its fourth spectral moment exists and is bounded uniformly in $d$.
As a consequence, $\mbox{\rm Var} (X''_d(t) )$ is bounded uniformly in $d$, yielding that
\begin{equation}\label{e:mn}
1-c_d^2(z) - c^{\prime2}_d(z ) \leq \mathbf C z^4.
\end{equation}
Using \eqref{e:md} and \eqref{e:mn} we get the convergence at zero.
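The bound \eqref{e:mn} can also be probed numerically. The following rough check (illustrative only; on the tested grid the constant $\mathbf C=1$ already suffices) uses $c_d(z)=\cos^d(z/\sqrt d)$ and $c'_d(z)=-\sqrt d\,\cos^{d-1}(z/\sqrt d)\sin(z/\sqrt d)$:

```python
from math import cos, sin, sqrt, pi

def cond_var(z, d):
    """1 - c_d(z)^2 - c_d'(z)^2 with c_d(z) = cos^d(z/sqrt(d)):
    the conditional variance Var(X_d(z) | X_d(0), X_d'(0))."""
    x = z / sqrt(d)
    c = cos(x) ** d
    cp = -sqrt(d) * cos(x) ** (d - 1) * sin(x)
    return 1 - c * c - cp * cp

# Probe 0 <= cond_var(z, d) <= z^4 on a grid of z in (0, sqrt(d)*pi/2].
for d in (2, 5, 50):
    for i in range(1, 400):
        z = sqrt(d) * (pi / 2) * i / 400
        v = cond_var(z, d)
        assert -1e-9 <= v <= z ** 4 + 1e-9
```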
At infinity, define
\begin{multline*}
{\mathcal H}\left( \sigma^2\left(\frac z{\sqrt d}\right),
\mathcal{C}\left(\frac z{\sqrt d}\right), \rho\left(\frac z{\sqrt d}\right), \mathcal{D} \left(\frac z{\sqrt d}\right) \right)
\\
=\frac{ \sigma^2(\frac z{\sqrt d}) } {\big(1-\cos^{2d}(\frac z{\sqrt d})\big)^{m/2}}
G\left(\rho\bigg(\frac z{\sqrt d}\bigg), \mathcal{D}\bigg(\frac z{\sqrt d}\bigg)\right).
\end{multline*}
The product of bounded Lipschitz functions is again Lipschitz, thus
\begin{multline*}
\left|{\mathcal H}\bigg( \sigma^2\bigg(\frac z{\sqrt d}\bigg), \mathcal{C}\bigg(\frac z{\sqrt d}\bigg),
\rho\bigg(\frac z{\sqrt d}\bigg), \mathcal{D} \bigg(\frac z{\sqrt d}\bigg) \bigg)
-{\mathcal H}(1,0,0,0) \right|\\ \leq \mathbf C \big( |\sigma^2-1| + |\mathcal{C}| + |\rho| + |\mathcal{D} | \big).
\end{multline*}
The proof is achieved with Lemma \ref{l:liste}.
\newpage
\subsection{Positivity of the limit variance}
\subsubsection{Hermite expansion of the number of real roots}
We introduce the Hermite polynomials $H_n(x)$ by $H_0(x)=1$, $H_1(x)=x$ and
$H_{n+1}(x)=xH_n(x)-nH_{n-1}(x)$.
The multi-dimensional versions are,
for multi-indexes
$\boldsymbol\alpha=(\alpha_\ell)\in\mathbb N^m$ and
$\boldsymbol\beta=(\beta_{\ell,k})\in\mathbb N^{m^2}$,
and vectors ${\mathbf y}=(y_\ell)\in\mathbb R^m$
and ${\mathbf y}'=(y'_{\ell,k})\in\mathbb R^{m^2}$
$$
{\mathbf H}_{\boldsymbol\alpha}(\mathbf y)=\prod^m_{\ell=1} H_{\alpha_\ell}(y_\ell),\quad
{ \overline { \mathbf H}}_{\boldsymbol\beta}(\mathbf y')=\prod^{m}_{\ell,k=1}
H_{\beta_{\ell,k}}(y'_{\ell,k}).
$$
It is well known that the standardized Hermite polynomials
$\{\frac1{\sqrt{n!}}H_n\}$, $\{\frac1{\sqrt{\boldsymbol\alpha!}}{\mathbf H}_{\boldsymbol\alpha}\}$
and $\{\frac1{\sqrt{\boldsymbol\beta!}}\overline{{\mathbf H}}_{\boldsymbol\beta}\}$
form orthonormal bases of the spaces $L^2(\mathbb R,\phi_1)$, $L^2(\mathbb R^m,\phi_m)$ and $L^2(\mathbb R^{m^2},\phi_{m^2})$ respectively.
Here, $\phi_j$ stands for the standard Gaussian measure on $\mathbb R^j$ ($j=1,m,m^2$)
and $\boldsymbol\alpha!=\prod^m_{\ell=1}\alpha_\ell !$, $\boldsymbol\beta!=\prod^m_{\ell,k=1}\beta_{\ell,k}!$.
See \cite{np,pta} for a general picture of Hermite polynomials.
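These are the probabilists' Hermite polynomials. A direct implementation of the recurrence, checked against the classical closed forms $H_2(x)=x^2-1$, $H_3(x)=x^3-3x$ and $H_4(x)=x^4-6x^2+3$ (a sketch, outside the proof):

```python
def hermite(n, x):
    """Probabilists' Hermite polynomial H_n(x) via the recurrence
    H_0 = 1, H_1 = x, H_{n+1} = x H_n - n H_{n-1}."""
    h_prev, h = 1.0, x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, x * h - k * h_prev
    return h

for x in (-1.5, 0.0, 0.7, 2.0):
    assert abs(hermite(2, x) - (x ** 2 - 1)) < 1e-12
    assert abs(hermite(3, x) - (x ** 3 - 3 * x)) < 1e-12
    assert abs(hermite(4, x) - (x ** 4 - 6 * x ** 2 + 3)) < 1e-12
```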
Before stating the Hermite expansion for the normalized number of roots of ${\mathbf Y}$ we need to introduce some coefficients.
Let $f_{\boldsymbol\beta}$ ($\boldsymbol\beta\in\mathbb N^{m^2}$)
be the
coefficients in the Hermite's basis
of the function
$f:\mathbb R^{m^2}\to\mathbb R$ such that $f({\mathbf y'})=|\det(\mathbf y')|$.
That is, $f({\mathbf y'})=\sum_{\boldsymbol\beta\in\mathbb N^{m^2}}f_{\boldsymbol\beta}\overline{{\mathbf H}}_{\boldsymbol\beta}({\mathbf y'})$ with
\begin{eqnarray}\label{eq:fb}
f_{\boldsymbol\beta}&=&f_{(\boldsymbol\beta_1,\ldots,\boldsymbol\beta_m)}
=\frac1{\boldsymbol\beta!}\int_{\mathbb R^{m^2}}|\det(\mathbf y')|\overline{{\mathbf H}}_{\boldsymbol\beta}(\mathbf y')\phi_{m^2}(\mathbf y')d\mathbf y' \notag\\
&=&\frac1{\boldsymbol\beta_1!\cdots\boldsymbol\beta_m!}\int_{\mathbb R^{m^2}}|\det(\mathbf y')
|\prod_{l=1}^m {\mathbf H}_{\boldsymbol\beta_l}(\mathbf y'_l)\frac{\exp\left(-\frac{\|\mathbf y'_l\|^2}{2}\right)}{(2\pi)^{\frac{m}2}}\,d\mathbf y'_l,
\end{eqnarray}
with
$\boldsymbol\beta_l=(\beta_{l 1},\ldots,\beta_{l m})$ and
${\mathbf y}'_l=(y'_{l1},\dots,y'_{l m})$: $l=1,\dots,m$.
Parseval's Theorem entails
$||f||^2_2=\sum_{q=0}^\infty \sum_{|\boldsymbol\beta|=q}f_{\boldsymbol\beta}^2\boldsymbol\beta!<\infty$.
Moreover,
since the function $f$ is even w.r.t. each column, the above coefficients are zero whenever
$|\boldsymbol\beta_l|$ is odd for at least one $l=1,\ldots,m.$
To introduce the next coefficients let us consider first the coefficients in the Hermite's basis in
$L^2(\mathbb R,\phi_1)$
for the Dirac delta $\delta_0(x)$.
They are $b_{2j}=\frac1{\sqrt{2\pi}}(-\frac12)^j\frac1{j!},$
and zero for odd indices \cite{kl-97}. Introducing now the distribution $\prod_{j=1}^m\delta_0(y_j)$ and denoting by $b_{\boldsymbol\alpha}$ its coefficients,
we have
\begin{equation}\label{eq:b}
b_{\boldsymbol\alpha}=\frac1{[\frac{\boldsymbol\alpha }2]!}\prod_{j=1}^m\frac1{\sqrt{2\pi}}\bigg[-\frac12\bigg]^{[\frac{\alpha_j}2]}
\end{equation}
or $b_{\boldsymbol\alpha}=0$ if at least one index $\alpha_j$ is odd.
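Equivalently, $b_{2j}=H_{2j}(0)\,\phi_1(0)/(2j)!$, where $\phi_1$ is the standard Gaussian density: this is the formal pairing of $\delta_0$ with the Hermite basis. A numerical cross-check of the closed form (illustrative; the helper is ours):

```python
from math import factorial, sqrt, pi

def hermite_at_zero(n):
    """H_n(0) for probabilists' Hermite polynomials, using the
    recurrence at x = 0: H_{n+1}(0) = -n * H_{n-1}(0)."""
    h_prev, h = 1.0, 0.0
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, -k * h_prev
    return h

phi0 = 1 / sqrt(2 * pi)  # standard Gaussian density at 0

for j in range(8):
    # Coefficient of H_{2j} in the formal expansion of delta_0.
    lhs = hermite_at_zero(2 * j) * phi0 / factorial(2 * j)
    rhs = (1 / sqrt(2 * pi)) * (-0.5) ** j / factorial(j)
    assert abs(lhs - rhs) < 1e-12
```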
Since the formulas for the covariances of Hermite polynomials
work in a neater way when the underlying random variables
are standardized, we define the standardized derivative as
\begin{equation*}
\overline{{Y}}_\ell'(t):=\frac{Y_\ell'(t)}{\sqrt{d}},\quad\mbox{and}\quad \overline{\mathbf{Y}}'(t):=(\overline{{Y}}_1'(t),\ldots,\overline{{Y}}'_m(t)),
\end{equation*}
where $Y_\ell'(t)$ denotes the spherical derivative of $Y_\ell$ at $t\in S^m$.
As said above, the $k$-th component of $\overline{{Y}}_\ell'(t)$ in a given basis
is denoted by $\overline{{Y}}_{\ell k}'(t)$.
\begin{proposition}\label{prop:expansion}
With the same notations as above, we have, in the $L^2$ sense, that
\begin{equation*}
\bar{N}_d:=\frac{N^{{\mathbf Y}}_d-2d^{m/2}}{2d^{m/4}}=\sum^\infty_{q=1}I_{q,d},
\end{equation*}
where
\begin{equation*}
I_{q,d}=\frac{d^{m/4}}{2}\int_{S^{m}}
\sum_{|\boldsymbol\gamma|=q}c_{\boldsymbol\gamma}{\mathbf H}_{\boldsymbol\alpha}({\mathbf Y}(t)){ \overline { \mathbf H}}_{
\boldsymbol\beta}(\overline{\mathbf{Y}}'(t))
dt,
\end{equation*}
with $\boldsymbol\gamma=(\boldsymbol\alpha,\boldsymbol\beta)\in\mathbb N^m\times\mathbb N^{m^2}$
and $|\boldsymbol\gamma| = |\boldsymbol\alpha|+|\boldsymbol\beta| $
and $c_{\boldsymbol\gamma}=b_{\boldsymbol\alpha}f_{\boldsymbol\beta}$.
\end{proposition}
\begin{remark}
Hermite polynomials' properties imply that for $q\neq q'$
$$\mathbb{E}\,(I_{q,d}I_{q',d})=0.$$
\end{remark}
\begin{remark}
The main difficulty in obtaining a CLT
lies in bounding the variance of the tail $\sum_{q\geq Q}I_{q,d}$,
because of the degeneracy of the covariances of $({\mathbf Y},\overline{\mathbf{Y}})$ near the diagonal $\{(s,t)\in S^m\times S^m:s=t\}$.
Besides, on the sphere, finding a convenient re-scaling as in the one-dimensional case \cite{d}
is a difficult issue.
\end{remark}
\noindent Proposition \ref{prop:expansion} is a direct consequence of the following lemma.
\begin{lemma}
For $\varepsilon>0$ define
\begin{equation*}
N_\varepsilon:=\int_{S^{m}}
|\det({\mathbf Y}'(t))|\,\delta_\varepsilon({\mathbf Y}(t))dt,
\end{equation*}
where
$\delta_\varepsilon(\mathbf y)
:=\prod^m_{\ell=1}\frac{1}{2\varepsilon}\mathbf{1}_{\{| y_\ell|<\varepsilon\}}$
for $\mathbf{y}=(y_1,\ldots,y_m)$,
and ${\mathbf Y}'$ is the spherical derivative of ${\mathbf Y}$.
Then, we have the following.
\begin{enumerate}
\item For $\mathbf{v}\in\mathbb R^m$, let $N_d^{{\mathbf Y}}(\mathbf{v})$ denote the number of real roots in $S^m$ of the
equation ${\mathbf Y}(t)=\mathbf{v}$. Then, $N_d^{{\mathbf Y}}(\mathbf{v})$ is bounded above by $2d^m$ almost surely.
\item $N_\varepsilon\to N_d^{{\mathbf Y}}$ almost surely
and in the $L^2$ sense as $\varepsilon\to0$.
\item The random variable $N_{d}^{{\mathbf Y}}$ admits a Hermite
expansion.
\end{enumerate}
\end{lemma}
\begin{proof}
Since the paths of ${\mathbf Y}$ are smooth, Proposition 6.5 of \cite{aw} implies that
for every $\mathbf{v} \in \mathbb R^m$ almost surely there is no point $t \in S^m$ such that
${\mathbf Y}(t) = \mathbf{v}$ and the spherical gradient is singular. Using the local inversion theorem, this implies that
the roots of ${\mathbf Y} = \mathbf{v}$ are isolated and by compactness they are finitely many.
As a consequence, $N^{{\mathbf Y}}_d(\mathbf{v})$ is well defined and a.s. finite.
Moreover, for every $t\in\mathbb R^{m+1}$ with $\|t\|=1$ such that ${\mathbf Y}(t)=\mathbf{v}$,
we have that the set $\{Y'_1(t),\ldots, Y'_m(t),t\}$ is almost surely linearly independent in $\mathbb R^{m+1}$.
This implies that $N^{{\mathbf Y}}_d(\mathbf{v})$ is uniformly bounded by B\'ezout's number $2d^m$, which proves 1
(see for example Milnor \cite[Lemma 1, pag. 275]{Milnor}).
By the inverse function theorem, a.s. for every regular value $\mathbf{v}\in\mathbb R^m$, $N^{{\mathbf Y}}_d(\cdot)$ is locally constant in a neighborhood of $\mathbf{v}$.
Furthermore, by the Area Formula (see Federer \cite{federer}, or \cite{aw} Proposition 6.1), for small $\varepsilon>0$ we have
\begin{equation}\label{eq:Nepsdef}
N_\varepsilon = \frac {1} {(2\varepsilon)^m}
\int_{[-\varepsilon,\varepsilon]^m }N^{{\mathbf Y}}_d(\mathbf{v})\, d\mathbf{v},\quad a.s.
\end{equation}
Hence,
\begin{equation} \label{e:kac}
N^{{\mathbf Y}}_{d}(0)=
\lim_{\varepsilon\to0} N_{\varepsilon},\quad a.s.
\end{equation}
From 1. and (\ref{eq:Nepsdef}) we have $N_\varepsilon\leq 2d^m$ a.s. Hence, the convergence in \eqref{e:kac} also holds in $L^2$.
This convergence allows us to obtain a Hermite expansion.
We have
$$
\delta_{\varepsilon}(\mathbf y)
=\sum_{\boldsymbol\alpha\in\mathbb N^m} b^\varepsilon_{\boldsymbol\alpha}{\mathbf H}_{\boldsymbol\alpha}(\mathbf y),
$$
$$
\left|\det \left(\frac{\mathbf y'}{\sqrt
d}\right)\right|=\sum_{\boldsymbol\beta\in\mathbb N^{m^2}}f_{\boldsymbol\beta}{ \overline { \mathbf H}}_{\boldsymbol\beta}\left(\frac{\mathbf y'}{\sqrt d}\right),$$
where $b^\varepsilon_{\boldsymbol\alpha}$ are the Hermite coefficients of $\delta_\varepsilon(\mathbf y)$ and the $f_{\boldsymbol\beta}$ have already been defined.
Furthermore, we know that
$\lim_{\varepsilon\to0}b^\varepsilon_{\boldsymbol\alpha}=b_{\boldsymbol\alpha}$.
Now, taking the limit as $\varepsilon\to0$
and regrouping terms, we get, as in
Estrade and Le\'on \cite{el}, that
$$
N^{{\mathbf Y}}_d=d^{m/2}\sum_{q=0}^\infty\sum_{|\boldsymbol\alpha|+|\boldsymbol\beta|=q}
b_{\boldsymbol\alpha}f_{\boldsymbol\beta}\int_{ S^{m}}
{\mathbf H}_{\boldsymbol\alpha}({\mathbf Y}(t)){ \overline { \mathbf H}}_{\boldsymbol\beta}(\overline{\mathbf{Y}}'(t))dt.
$$
This concludes the proof.
\end{proof}
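To make the $\varepsilon$-approximation of the proof concrete, the following numerical sketch applies the same Kac-type counter to a deterministic one-dimensional analogue; the test function, the value of $\varepsilon$ and the grid size are purely illustrative choices, not part of the spherical model.

```python
import math

# Kac-type counter from the lemma, in a 1-D toy case:
#   N_eps = (1/(2*eps)) * integral of |f'(t)| * 1_{|f(t)| < eps} dt.
# Here f(t) = sin(5t) on [0, 2*pi), which has exactly 10 simple zeros.
f = lambda t: math.sin(5 * t)
df = lambda t: 5 * math.cos(5 * t)

eps, n = 1e-2, 400_000
dt = 2 * math.pi / n
n_eps = sum(
    abs(df(k * dt)) * dt for k in range(n) if abs(f(k * dt)) < eps
) / (2 * eps)

print(round(n_eps, 2))  # close to the exact zero count 10
```

Each simple zero contributes a window of length $\approx 2\varepsilon/|f'|$ on which $|f'|$ is nearly constant, so each zero contributes approximately $1$ to $N_\varepsilon$, in accordance with \eqref{e:kac}.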
\subsubsection{$V_\infty>0$}
To prove that $V_\infty>0$ we use
the Hermite expansion.
In fact,
$$
V^2_\infty=\lim_{d\to\infty}\sum^{\infty}_{q=2}\mbox{\rm Var}(I_{q,d})
\geq\lim_{d\to\infty}\mbox{\rm Var}(I_{2,d}).
$$
By Proposition \ref{prop:expansion}, we have,
\begin{equation*}
I_{2,d}
=\frac{d^{m/4}}2\sum_{|\boldsymbol\gamma|=2}c_{\boldsymbol\gamma}\int_{S^m}H_{\boldsymbol\alpha}(\mathbf Y(t))H_{\boldsymbol\beta}(\overline{\mathbf Y}'(t))dt.
\end{equation*}
The coefficients $c_{\boldsymbol\gamma}=b_{\boldsymbol\alpha}f_{\boldsymbol\beta}$ vanish whenever some
$\alpha_\ell$ or some $|\boldsymbol\beta_{\ell}|$ is odd.
Thus, the only ways to satisfy the condition $|\boldsymbol\gamma|=2$
are that either exactly one of the indices is $2$ and the rest vanish, or
that $\beta_{\ell,k}=\beta_{\ell,k'}=1$ for some $k\neq k'$ and the rest vanish.
Hence,
\begin{align*}
I_{2,d}& =\frac{d^{m/4}}2
\int_{S^m}\Bigg[\sum^m_{\ell=1}\left( b_{2}b^{m-1}_{0}f_{(0,\ldots,0)}
H_{2}(Y_\ell(t))
+b^{m}_{0}\tilde f_{\ell 12}
H_{2}(\overline{Y}'_{\ell,1}(t))\right)\\
&+
\sum^m_{k=2}b^{m}_{0}\tilde f_{\ell k2}
H_{2}(\overline{Y}'_{\ell,k}(t))
+
\sum_{k\neq k'}b^{m}_{0}\tilde f_{\ell kk'1}
H_{1}(\overline{Y}'_{\ell,k}(t)) H_{1}(\overline{Y}'_{\ell,k'}(t))
\Bigg]dt,
\end{align*}
where $\tilde f_{\ell k2}=f_{(0,\ldots,\beta_{\ell k},0,\ldots,0)}$, $\beta_{\ell k}=2$
and $\tilde f_{\ell kk'1}=f_{(0,\ldots,\beta_{\ell k},\ldots,\beta_{\ell k'},0,\ldots,0)}$, $\beta_{\ell k}=\beta_{\ell k'}=1$.
By \eqref{eq:c1}-\eqref{eq:c2} the variables in different sums are orthogonal
when evaluated at $s,t\in S^m$.
Now,
by Mehler's formula, $\mathbb{E}\,(H_2(\xi)H_2(\eta))=2(\mathbb{E}\,(\xi\eta))^2\geq 0$
for jointly normal variables $\xi,\eta$.
Hence,
bounding the sum of the variances from below
by one convenient term, we have
\begin{align*}
\mbox{\rm Var}(I_{2,d})&\geq
\mbox{\rm Var}\left(\frac{d^{m/4}}{2} b^m_0\tilde{f}_{\ell 2 2}
\int_{S^m}
H_2(\overline{Y}'_{\ell 2}(t))dt\right)\\
&=\frac{d^{m/2}}{2} (b^m_0\tilde{f}_{\ell 2 2})^2
\int_{(S^m)^2}
(\mathbb{E}\, \overline{Y}'_{\ell, 2}(s)\overline{Y}'_{\ell,2}(t))^2dsdt\\
&={(b^m_0\tilde{f}_{\ell 2 2})^2\frac{d^{m/2}}{2}}
\int_{(S^m)^2}
\bigg(\<s,t\>^{d}-(d-1)\<s,t\>^{d-2}\sqrt{1-\<s,t\>^2}\bigg)^2dsdt,
\end{align*}
where the last equality is a consequence of \eqref{eq:cs}.
The integral tends to a positive limit
as can be seen
using Lemma \ref{lem:intACV} and the scaling $t=z/\sqrt{d}$
as in Section \ref{s:var}.
Finally, by \eqref{eq:b} $b_0\neq 0$.
Besides,
by the symmetry
of the function $f(\cdot)=|\det(\cdot)|$ and \eqref{eq:fb},
$\tilde{f}_{\ell k2}=\tilde{f}_{\ell k'2}$ for all $\ell,k,k'$.
Therefore,
adding up \eqref{eq:fb} w.r.t. $\ell$ and $k$, we get
$$
\tilde{f}_{\ell 22}
=\frac{1}{m^2}\left(\mathbb{E}\,(|\det({\mathbf{y}'})|\|{\mathbf{y}'}\|^2_{F})-m^2\mathbb{E}\,(|\det({\mathbf{y}'})|)\right),
$$
where $\|\cdot\|_{F}$ is the Frobenius norm
and ${\mathbf{y}'}$ is an $m\times m$ standard Gaussian matrix.
Straightforward computations using polar coordinates show that
$\tilde{f}_{\ell 22}>0$ for all $m\geq 1$.
This concludes the proof.\\
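The sign of this last quantity can also be checked by simulation. The sketch below is an illustrative Monte Carlo estimate (the helper name `f_tilde_numerator`, the sample size and the seed are arbitrary choices introduced here) of $\mathbb{E}\,(|\det({\mathbf{y}'})|\|{\mathbf{y}'}\|^2_{F})-m^2\,\mathbb{E}\,(|\det({\mathbf{y}'})|)$, the expectation difference appearing in $\tilde{f}_{\ell 22}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def f_tilde_numerator(m, n_samples=200_000):
    """Monte Carlo estimate of E(|det G| ||G||_F^2) - m^2 E(|det G|)
    for an m x m matrix G with i.i.d. standard Gaussian entries."""
    g = rng.standard_normal((n_samples, m, m))
    absdet = np.abs(np.linalg.det(g))      # batched determinants
    frob2 = np.sum(g**2, axis=(1, 2))      # squared Frobenius norms
    return np.mean(absdet * frob2) - m**2 * np.mean(absdet)

for m in (1, 2, 3):
    print(m, round(f_tilde_numerator(m), 3))  # positive for every m
```

For $m=1$ the exact value is $\mathbb{E}|y|^{3}-\mathbb{E}|y|=\sqrt{2/\pi}\approx 0.798$, which the estimate reproduces.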
\bibliographystyle{amsplain}
\section{Introduction}
Synchronisation among oscillators with some form of coupling has been called
universal \cite{strogatz2004} and is a prevalent concept in Nature \cite{Pit}.
In 1665 Huygens, the inventor of the pendulum clock, observed synchronisation
between two pendulum clocks \cite{Huy} hanging from the same support. His first
observation concerned two clocks hanging from the same wall
beam of his house, made while he was lying in bed with some indisposition. He
observed both in-phase and phase-opposition final states of the coupled
system. The second observation was made later, when Huygens hung the two clocks on
a board sitting on two chairs.
The two systems observed by Huygens are quite different, and they originated
two completely separate lines of research. The latter case has been studied in
many papers \cite{Benn,Col,Frad, Jova,Col2,Martens,Oud,Sen} by considering
momentum conservation in the clocks-beam system, since the plank supporting the clocks
is able to move. In that case the system is non-perturbative: there are three
bodies that can move, i.e., three degrees of freedom, namely the two pendula and the wooden
beam. The model is a classical mechanics paradigm, and in most of the works
cited above the friction is considered viscous and not dry.
The first case, when the pendulum clocks are suspended from a very rigid house beam,
which is therefore not able to move, has been approached in the works \cite{Abr2,
Abr,Vass}. In this model the interaction is considered perturbative. When one
of the pendulums suffers the internal impact from the clock escape mechanism, a
small travelling wave perturbs the second one and vice-versa.
Recently, in \cite{OlMe} a theoretical model for this interaction was
developed. In the same work, simulations and some experimental studies were
carried out, which confirmed the validity of the proposed model when the support
wall has infinite mass and, therefore, does not move.
mass of the wall is not, in this case, a \textquotedblleft degree of freedom
of the system\textquotedblright\ and the coupling is, via very weak travelling
waves, propagated in the rigid structure of the wall.
In this article, we use as a conceptual starting point \cite{OlMe} presenting
a mathematical model where the coupling is assumed to be attained through the
exchange of impacts between three identical oscillators, where each one of the
clocks interacts with the two other clocks. The model presents the
advantage of being independent of the physical nature of the oscillators, and
thus it can be used in other oscillator systems where synchronisation and
phase locking has been observed \cite{Pit}.
The ideas for the model presented here originated from the Andronov
\cite{And,OlMe} model of the phase-space limit cycle of isolated clocks,
and assumes the exchange of single impacts (travelling solitons, for this
system) between the oscillators at a specific point of the limit cycle.
The fundamental hypotheses in this article are the existence of an
asymptotically stable limit cycle for each oscillator and one very small
interaction between each pair of clocks per cycle.
We point out that in \cite{OlMe} the authors obtained phase opposition, which
is in line with the original Huygens observations \cite{Huy}. In this paper we
obtain a particularly symmetric asymptotic state at which all the clocks
remain at a phase difference of $\frac{2\pi}{3}$ between each other.
A natural step forward is to generalise the results obtained here to a larger
set of oscillators and apply these results to bidimensional and tridimensional
swarms of oscillators. The results of the present work can be used, namely, to
study interacting insects or neuronal networks, that have been studied using
the over-simplified integrate-and-fire models of Kuramoto
\cite{Campbell1997,Campbell1999,Kuramoto1975,Mirollo1990,Strogatz2000}.
The paper is organised in $5$ sections. In section $2$, we discuss the
original model of the pendulum clock and we briefly recall the model for two
identical clocks. In section $3$, we deduce the model for three identical
clocks hanging at the same wall with mutual interactions. In section $4$, we
analyse the model, computing its symmetries and stabilities. In section $5$,
we draw conclusions and point out directions for future work.
\section{Model for the synchronisation of two oscillators}
\subsection{Some background}
For the sake of completeness we present here a very short theory of
synchronisation for two oscillators exchanging small perturbations at each
cycle. We consider identical oscillators. This theory can be applied to
networks of identical oscillators, electronic oscillators and many other real
world systems. We intend to consider, in future work, the case of slighly
different oscillators, which give rise to regions of stability versus
instability in the parameter space of these systems, i.e., Arnold Tongues
\cite{boyland1986,gilmore2011}.
For basic, classical definitions and notions related to synchronisation, like
phase and frequence, we follow and refer to \cite{Pit}, and for concepts
concerning general theory of dynamical systems like, for instance, limit
cycle, we refer to \cite{arrowsmith1990}. In this paper, we always assume that
an oscillator is a dynamical system having a limit cycle. We use the word
clock when referring to a special type of oscillator described by the Andronov
model \cite{OlMe}.
Given a point $p_{0}$ in the limit cycle $\gamma$, the necessary time to
return to $p_{0}$, after one round on the limit cycle, is the period $T_{0}$.
A phase $\varphi$ is a real coordinate that describes the position of the
representative point of the system on the limit cycle \cite{Nakao,Pit}.
Let $B_{\gamma}$ be the basin of attraction of the limit cycle. Consider the
points outside the limit-cycle $\gamma$\ but in $B_{\gamma}$. We extend the
definition of phase to $B_{\gamma}$ as follows. We assign the same phase
$\varphi$ to all points $p$ in $B_{\gamma}$ that converge to the same $p_{0}$
on the limit cycle $\gamma$ as $t\rightarrow\infty$, being the phase of
$p_{0}$ precisely $\varphi$ \cite{gu1975}. The set of points $p$ that share
the same phase is an \textit{isochron curve}. If the oscillators' states are
on the same isochron at a given point in time, they remain on a common
isochron at all later times \cite{gu1975,Nakao}. When each clock suffers a perturbation,
its state can go slightly off the limit cycle and generically jump to another
isochron. Moreover, we assume that the limit cycles are structurally stable
under small perturbations.
When we consider two oscillators, $1$ and $2$, with orbits on the limit-cycle
or sufficiently near the limit cycle, each one has a particular phase,
respectively $\varphi$ and $\psi$.
The study of the synchronisation of these two oscillators consists in
establishing a dynamical system for the phase difference of the two oscillators.
We have two possible lines of research \cite{Pit}. The first is to consider
the phase difference in continuous time, i.e., to look at the function
$\phi\left( t\right) =\psi\left( t\right) -\varphi\left( t\right) $\ for
$t\in\left[ 0,+\infty\right[ $. The second line of research is to consider
the phase difference $\phi_{n}=\psi_{n}-\varphi_{n}$ taken at discrete
instants $n=0,1,2,\ldots$; we adopt exclusively this latter approach in this paper.
There is phase synchronisation when the phase differences between the
oscillators tend to a specific attractor. When this attractor is an isolated
point, then there is phase locking. Naturally, \textquotedblleft
richer\textquotedblright\ coupled states can occur \cite{Martens}. The main
goal for any theory of synchronisation is to obtain this phase difference
dynamics and to establish the existence and nature of the attractor. In the
case of Huygens observations, the attractor was the point $0$ or the point
$\pi$ and the phase dynamics was unidimensional.
\subsection{The Andronov model for an isolated clock}
We recall here the model for the sake of completeness of this article.
Assuming that dry friction predominates in the internal metal pieces of the
clock and that viscous damping is negligible, using the angular coordinate
$q$, the differential equation governing the isolated pendulum clock is%
\begin{equation}
\ddot{q}+\mu\text{ }\operatorname*{sign}\dot{q}+q=0,
\end{equation}
where $\mu>0$ is the dry friction coefficient and $\operatorname*{sign}\left(
x\right) $ is the classical function taking the value $-1$ at $x<0$ and $1$
at $x>0$. In \cite{And} it was considered that, in each cycle, the escape
mechanism gives to the pendulum a fixed amount of normalized kinetic energy
$\frac{h^{2}}{2}$ so to compensate the loss of kinetic energy occurred because
of the dry friction in each complete cycle. This transfer of kinetic energy is
called a \textit{kick}. The origin is fixed so that the kick is given
precisely when $q=-\mu$. The phase portrait is shown in Fig. $1$.%
\begin{figure}
[h]
\begin{center}
\includegraphics[
height=2.1075in,
width=2.2753in
]%
{limitcycle.eps}
\caption{Limit cycle of an isolated clock represented as a solid curve in the
phase space. Horizontal axis represents the angular position and in the
vertical axis the velocity.}%
\label{Fig1}%
\end{center}
\end{figure}
As in \cite{OlMe}, with initial conditions $q\left( t=0\right) =-\mu$ and
$\dot{q}\left( t=0\right) =v_{0}$, a Poincar\'{e} section \cite{Bir} (vol. II, page 268) is
taken as the half line $q=-\mu^{+}$ and $\dot{q}>0$
\cite{And}. The symbol $+$ means that we are considering that the section is
taken immediately after the kick. Due to friction during a complete cycle, a
velocity loss of $4\mu$ occurs. By considering the velocity, $v_{n}=\dot
{q}\left( 2n\pi^{+}\right) $, at the Poincar\'{e} section in each cycle, the
non-linear discrete dynamical system \cite{And} is obtained
\begin{equation}
v_{n+1}=\sqrt{\left( v_{n}-4\mu\right) ^{2}+h^{2}}\text{.} \label{recu}%
\end{equation}
This equation has the asymptotically stable\ fixed point
\begin{equation}
v_{f}=\frac{h^{2}}{8\mu}+2\mu\text{.}%
\end{equation}
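Indeed, a direct computation confirms that $v_f$ solves the fixed-point equation $v=\sqrt{(v-4\mu)^{2}+h^{2}}$ associated with (\ref{recu}):

```latex
(v_f-4\mu)^2+h^2
=\Big(\frac{h^2}{8\mu}-2\mu\Big)^2+h^2
=\frac{h^4}{64\mu^2}-\frac{h^2}{2}+4\mu^2+h^2
=\Big(\frac{h^2}{8\mu}+2\mu\Big)^2
=v_f^2.
```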
Any initial condition $v_{0}\in\left( 4\mu,+\infty\right) $ is attracted to
$v_{f}$. Each cycle corresponds to a phase increment of $2\pi$ and the phase
$\varphi$ is linear with respect to $t$, precisely
\[
\varphi=2\pi t.
\]
As already mentioned, the nature of the limit cycle is not of fundamental
importance when we consider the interaction of three identical clocks, as we
shall see in the sequel. We have presented here the basis of our reasoning in
the unusual case where the limit cycle can be computed explicitly and
the usual angular phase is a linear function of $t$.
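A minimal numerical sketch of the map (\ref{recu}) illustrates the attraction towards $v_f$; the values of $\mu$, $h$ and $v_0$ below are arbitrary illustrative choices, not experimental data.

```python
import math

mu, h = 0.01, 0.2                 # illustrative parameters (assumed values)
v_f = h**2 / (8 * mu) + 2 * mu    # predicted fixed point

v = 1.0                           # any initial velocity v0 > 4*mu
for _ in range(500):
    v = math.sqrt((v - 4 * mu) ** 2 + h**2)   # one cycle of the clock

print(round(v, 6), round(v_f, 6))  # the iteration settles on v_f
```

Since the derivative of the map, $(v-4\mu)/\sqrt{(v-4\mu)^2+h^2}$, is strictly smaller than $1$ on $(4\mu,+\infty)$, any admissible $v_0$ produces the same limit.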
\subsection{Two interacting oscillators}
We present briefly the conclusions of considering two pendulum clocks
suspended at the same wall, in a simplified version of \cite{OlMe}, where the
clocks are assumed to have natural angular frequencies near each other but
different. Here, we assume that the two clocks have the same angular
frequency. When one clock receives the kick, the impact propagates in the wall
slightly perturbing the second clock. The perturbation is assumed to be
instantaneous since the time of travel of sound in the wall between the clocks
is assumed very small compared to the period.
Consider two oscillators and index them by $i=1,2$. Each oscillator satisfies
the differential equation
\begin{equation}
\ddot{q}_{i}+\mu_{i}\text{ sign\ }\dot{q}_{i}+q_{i}=-\alpha_{i}\digamma\left(
q_{j}\right) ,\text{ for }i,j=1,2\text{, }i\not =j\text{.}
\label{coupledandro}%
\end{equation}
As in the \textit{Andronov model}, the kinetic energy of each oscillator
increases of the fixed amount $h_{i}$ when $q_{i}=-\mu_{i}$. The coupling term
is the normalised force $-\alpha_{i}\digamma\left( q_{j}\right) $, where
$\digamma$ is the interaction function and $\alpha_{i}$ a constant with
acceleration dimension. Following \cite{OlMe}, the effect of the interaction
function $\digamma$ is considered to produce an increment $-\alpha$ in the
velocity of each clock, leaving the position invariant when the other is
struck by the energy kick. The reader finds the detailed treatment in
\cite{OlMe}. Here we only recall some ideas from that article, for the sake of
completeness and to make our three clocks model more simple and natural to
deal with.
To describe and investigate the effect of the kicks, we construct a discrete
dynamical system for the phase difference between the two clocks. We compute
each cycle using as reference one of the clocks (the choice is irrelevant,
since the model is symmetric). We choose, to fix ideas, clock 1 as the
reference: whenever its phase reaches $0$ $\left( \operatorname{mod}%
2\pi\right) $, the number of cycles increases one unit from $n$ to $n+1$.
If there exists an attracting fixed point for that dynamical system, the phase
locking occurs. As in \cite{OlMe}, the assumptions are the following.
\begin{enumerate}
\item Dry friction.
\item The pendulums have the same natural angular frequency $\omega=1$.
\item The perturbation in the momentum is always in the same vertical
direction in the phase space \cite{Abr2, Abr}.
\item Since the clocks have the same construction, the energy dissipated at
each cycle of the two clocks is the same, $h_{1}=h_{2}=h$. The friction
coefficient is the same for both clocks, $\mu_{1}=\mu_{2}=\mu$.
\item The perturbative interaction is instantaneous. This is a reasonable
assumption, since in general the perturbation propagation time between the two
clocks is several orders of magnitude lower than the periods.
\item The interaction is symmetric, the coupling has the same, very small,
constant $\alpha$ when the clock $1$ acts on clock $2$, and conversely.
\end{enumerate}
We compute at this point the phase difference when clock $1$ returns to the
initial position. The secular repetition of perturbations drives the two
clocks to phase opposition, as Huygens observed in 1665
\cite{Huy}. The discrete dynamical model that we deduce from \cite{OlMe} for
the phase difference between the two clocks $\phi_{n}=\psi_{n}-\varphi_{n}$ is
the Adler equation \cite{adler1946study,Pit}%
\begin{equation}
\phi_{n+1}=\phi_{n}+\varepsilon\sin\phi_{n}, \label{Perturb}%
\end{equation}
with a very small constant $\varepsilon=\frac{16\mu\alpha}{h^{2}}$. In the
interval $\left[ 0,2\pi\right[ $, there are two fixed points, $\pi$
and $0$, respectively attracting and repelling.
Equation (\ref{Perturb}) is the starting point from which we begin, in the
present paper, the study of the three symmetric clocks in mutual interaction.
\begin{remark}
In any model with a perturbation of phase per cycle given by equation (\ref{Perturb}),
i.e., Adler's perturbation \cite{adler1946study,Pit}, whether it is
a physical clock (described by the Andronov model or any other model) or another type of
oscillator, electric, quantum, electronic or biological, the theory presented
here for three oscillators interacting by small impacts will be exactly the
same, with the same conclusions.
\end{remark}
\section{Model for three pendulum clocks placed in the three vertices of an
equilateral triangle}
\subsection{Hypotheses}
We consider three pendulum clocks suspended at the same wall, placed in the
three vertices of an equilateral triangle, say the vertices are $A$, $B$ and
$C$, where $A$ and $B$ are the extreme points of the base of the triangle.%
\begin{figure}
[ptb]
\begin{center}
\includegraphics[
height=2.7743in,
width=3.6443in
]%
{triangle.eps}%
\caption{The three clocks hang at the three vertices of a triangle.}%
\label{Fig2}
\end{center}
\end{figure}
This geometric setting is purely conceptual. Any set of three dynamical
systems, each receiving symmetric impacts from the other two, will have the same type
of response as the clocks depicted at the three vertices of an equilateral triangle.
Call the clocks placed in the three vertices $A$, $B$ and $C$, respectively,
$O_{1}$, $O_{2}$ and $O_{3}$. When the clock $A$ receives the kick from the
escape mechanism, the impact propagates in the wall slightly perturbing the
other two clocks. As in \cite{OlMe}, the perturbation is assumed to be
instantaneous, since the time of travel of sound in the wall between the
clocks is assumed very small compared to the period. As for the two clocks
model discussed in \cite{OlMe}, we make the following assumptions, now
formulated for three clocks.
\begin{enumerate}
\item The system has {dry friction }\cite{And}.
\item \label{work3}{The pendulums of clocks }$O_{1},$ $O_{2}$ and $O_{3}$
{have respectively natural angular frequencies ${\omega}_{1}={\omega}%
_{2}={\omega}_{3}=1$.}
\item {The perturbation in the momentum is always in the same vertical
direction in the phase space } \cite{Abr2, Abr}.
\item {The friction coefficient is the same for all the three clocks, ${\mu
}_{1}={\mu}_{2}={\mu}_{3}={\mu}$. The energy dissipated at each cycle of the
three clocks is the same, and the energy furnished by the escape mechanism to
compensate the loss of energy to friction in each cycle is $h_{1}=h_{2}%
=h_{3}=h$. }
\item {The perturbative interaction is instantaneous. This is a reasonable
assumption, since in general the perturbation propagation time between two
clocks is several orders of magnitude lower than the periods \cite{OlMe}.}
\item \label{hyp6}{The interaction is symmetric. The couplings have the same
constant $\alpha$ when one clock acts on another and conversely. In this model
$\alpha$ is assumed to be very small.}
\item \label{hyp67}{Each perturbation from clock $i$ to clock $j$ (where
$i,j\in\{1,2,3\}$ with $i\not =j$), when clock $i$ suffers its internal impact
of kinetic energy $h^{2}$, gives rise to a small perturbative change of phase
which is in first order a $2\pi$-periodic differentiable odd function $P$ of
the real variable $\phi$%
\begin{equation}
P\left( \phi\right) =\varepsilon\sin\phi\text{,} \label{Perturb1}%
\end{equation}
where $\phi={\phi}_{ij}$ is the phase difference between clock $i$ and clock
$j$.}
\end{enumerate}
\begin{remark}
The value of $\varepsilon$ is $\varepsilon=\frac
{8\mu\alpha}{h^{2}}$, obtained in {\cite{OlMe}, where $\mu$ is the dry friction
coefficient, $\frac{h^{2}}{2}$ is the kinetic energy furnished by the internal
escape mechanism of each clock once per cycle and $\alpha$ is the interaction
coefficient between the clocks. The greater $\alpha$ is, the greater the
mutual influence among the clocks. In this paper, we do not need to
particularize }$\varepsilon$, since we are not interested in doing
experimental computations. Rather, we are interested in the fundamental
result of symmetry between three oscillators subject to a very weak mutual
symmetric interaction.
\end{remark}
Most of the reasonings are independent on the form of the function $P\left(
\phi\right) $, therefore we consider a general differentiable odd function of
the real variable $\phi$, $P(\phi)$, for the development of the model, and
consider it of the form (\ref{Perturb1}) when we analyze the model in section 4.
Observe that $|\sin(x+\varepsilon\sin y)-\sin x|<{\varepsilon}$ when
$\varepsilon${\ is assumed to be sufficiently small. Therefore, we restrict
our model to first order. }We consider all the values of variables and
constants in SI units.
\subsection{Construction of the model}
We now construct a dynamical system using as reference the phase of the clock
in the vertex A (= clock $O_{1}$). This reference is arbitrary: any of the
clocks can be used as the reference clock with the same results at the end,
since the system is symmetric. We compute the effects of all phase differences
and perturbations when the clock at A makes a complete cycle returning to the
initial position. Without loss of generality, we consider the following working hypotheses.
\begin{enumerate}
\item \label{work1}The initial phase of clock at A at $t=0^{-}$ is zero, i.e.,
$\psi_{1}(0^{-})=0^{-}$, the minus ($-$) superscript means that at the instant
$0^{-}$ clock $1$ is just about to receive the internal energy kick from its
escape mechanism.
\item \label{work2}We consider that the initial phases of the three clocks
are: $\psi_{3}(0^{-})=\psi_{3}^{0}>\psi_{2}(0^{-})=\psi_{2}^{0}>0^{-}=\psi
_{1}(0^{-})=\psi_{1}^{0}$.
\item \label{work4}The perturbation satisfies the relation $P\left(
x+P(x)\right) \simeq P(x)$ to first order.
\end{enumerate}
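Before carrying out the analytic construction step by step, the expected outcome can be previewed with a small event-driven simulation of the hypotheses above. The coupling constant and the initial phases below are arbitrary illustrative choices introduced here: each clock runs at unit frequency and, at its own kick, shifts each other clock's phase by $P(\phi)=\varepsilon\sin\phi$, where $\phi$ is the corresponding phase difference.

```python
import math

two_pi = 2 * math.pi
eps = 0.01                   # illustrative small coupling constant
psi = [0.0, 2.0, 4.2]        # initial phases with psi_3 > psi_2 > psi_1 = 0
nxt = [0.0, two_pi, two_pi]  # phase at which each clock next fires

for _ in range(3000):        # roughly 1000 full cycles of the system
    i = min(range(3), key=lambda k: nxt[k] - psi[k])
    dt = nxt[i] - psi[i]             # run until the next internal kick
    psi = [p + dt for p in psi]      # all clocks advance at unit rate
    nxt[i] += two_pi                 # clock i fires and re-arms
    for j in range(3):               # the kick nudges the other two clocks
        if j != i:
            psi[j] += eps * math.sin(psi[j] - psi[i])

x = (psi[1] - psi[0]) % two_pi
y = (psi[2] - psi[0]) % two_pi
print(round(x, 3), round(y, 3))  # within O(eps) of 2*pi/3 and 4*pi/3
```

The mutual phase differences settle within $O(\varepsilon)$ of $\frac{2\pi}{3}$ and $\frac{4\pi}{3}$, the symmetric asymptotic state announced in the introduction.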
To obtain the desired model, we need to proceed through 6 steps, starting from
the following initial conditions, that is, the phase differences of all pairs
of clocks.
In the sequel ${\psi}_{i}^{j}$ denotes the phase of clock $O_{i}$ at the
$j$-th step.\newline
\begin{center}
\textbf{INITIAL CONDITIONS}
\end{center}
The phase difference between $O_{3}$ and $O_{1}$ is
\[
(CA)_{0}={\psi}_{3}^{0}-{\psi}_{1}^{0}={\psi}_{3}^{0},
\]
and the phase difference between $O_{1}$ and $O_{3}$ is symmetric, in the
sense that
\[
(AC)_{0}={\psi}_{1}^{0}-{\psi}_{3}^{0}=-{\psi}_{3}^{0}=-(CA)_{0}.
\]
The phase difference between $O_{2}$ and $O_{1}$ is
\[
(BA)_{0}={\psi}_{2}^{0}-{\psi}_{1}^{0}={\psi}_{2}^{0}
\]
and the phase difference between $O_{1}$ and $O_{2}$ is
\[
(AB)_{0}={\psi}_{1}^{0}-{\psi}_{2}^{0}=-{\psi}_{2}^{0}=-(BA)_{0}.
\]
The phase difference between $O_{3}$ and $O_{2}$ is
\[
(CB)_{0}={\psi}_{3}^{0}-{\psi}_{2}^{0}
\]
and the phase difference between $O_{2}$ and $O_{3}$ is
\[
(BC)_{0}={\psi}_{2}^{0}-{\psi}_{3}^{0}=-(CB)_{0}.
\]
\newpage
\begin{center}
\textbf{STEPS LEADING TO THE CONSTRUCTION OF THE MODEL}
\end{center}
\textbf{STEP 1:} first impact. Interactions of $O_{1}$ on $O_{2}$ and of
$O_{1}$ on $O_{3}$, at $t=0$.
When the system in position A attains phase $0$ $(\operatorname{mod}2\pi)$ it
receives a sudden supply of energy, for short \textquotedblleft a
kick\textquotedblright, from its escape mechanism; this kick propagates in the
common support of the three clocks and reaches the other two clocks.
Now, the phase difference between $O_{3}$ and $O_{1}$ is corrected by the
perturbative value $P$:
\[
\left( CA\right) _{I}=\left( CA\right) _{0}+P\left( \left( CA\right)
_{0}\right) ={\psi}_{3}^{0}+P\left( {\psi}_{3}^{0}\right) .
\]
The phase difference between $O_{1}$ and $O_{3}$ is
\[
\left( AC\right) _{I}=\left( AC\right) _{0}+P\left( \left( AC\right)
_{0}\right) =-{\psi}_{3}^{0}+P\left( -{\psi}_{3}^{0}\right) =-(CA)_{I},
\]
since $P$ must be an odd function of the mutual phase difference.
The phase difference between $O_{2}$ and $O_{1}$ is
\[
\left( BA\right) _{I}=\left( BA\right) _{0}+P\left( \left( BA\right)
_{0}\right) ={\psi}_{2}^{0}+P\left( {\psi}_{2}^{0}\right) ,
\]
and the symmetric phase difference between $O_{1}$ and $O_{2}$ is
\[
\left( AB\right) _{I}=\left( AB\right) _{0}+P\left( \left( AB\right)
_{0}\right) =-{\psi}_{2}^{0}+P\left( -{\psi}_{2}^{0}\right) )=-\left(
BA\right) _{I}.
\]
The phase difference between $O_{3}$ and $O_{2}$ depends on $\left(
CA\right) _{I}$ and $\left( BA\right) _{I}$ and it is
\[
\left( CB\right) _{I}=\left( CA\right) _{I}-\left( BA\right) _{I}={\psi
}_{3}^{0}-{\psi}_{2}^{0}+P({\psi}_{3}^{0})-P({\psi}_{2}^{0}).
\]
\textbf{STEP 2}: first natural time shift. The next clock to arrive at
$2\pi^{-}$, from working hypothesis 3.2 (\ref{work2}), is the clock ${O}_{3}$
at vertex $C$. The situation right before $O_{3}$ receives its kick of energy
is when the phase of this clock is $2\pi^{-}$.
At this point we have%
\[%
\begin{cases}
\psi_{3}^{2} & ={2\pi}^{-}\\
\psi_{1}^{2} & =2\pi-(CA)_{I}=2\pi+(AC)_{I}=2\pi-\left( {\psi}_{3}%
^{0}+P({\psi}_{3}^{0})\right) \\
\psi_{2}^{2} & =2\pi-\left( CB\right) _{I}=2\pi+\left( BC\right) _{I}%
=2\pi+{\psi}_{2}^{0}-{\psi}_{3}^{0}+P({\psi}_{2}^{0})-P({\psi}_{3}^{0}).
\end{cases}
\]
\textbf{STEP 3}: second impact. Clock $O_{3}$ receives its internal kick, at
the position $2\pi$.
Now, we have%
\[%
\begin{cases}
\psi_{3}^{3} & ={2\pi}\\
\psi_{1}^{3} & =\psi_{1}^{2}+P(\psi_{1}^{2})\\
& =2\pi-\left( {\psi}_{3}^{0}+P({\psi}_{3}^{0})\right) +P\left(
2\pi-\left( {\psi}_{3}^{0}+P({\psi}_{3}^{0})\right) \right) \\
& =2\pi-\left( {\psi}_{3}^{0}+P({\psi}_{3}^{0})\right) -P\left( {\psi}%
_{3}^{0}+P({\psi}_{3}^{0})\right) \\
& \simeq2\pi-{\psi}_{3}^{0}-2P\left( {\psi}_{3}^{0}\right) \\
\psi_{2}^{3} & =\psi_{2}^{2}+P(\psi_{2}^{2})\\
& =2\pi+{\psi}_{2}^{0}-{\psi}_{3}^{0}+P({\psi}_{2}^{0})-P({\psi}_{3}^{0})\\
& +P(2\pi+{\psi}_{2}^{0}-{\psi}_{3}^{0}+P({\psi}_{2}^{0})-P({\psi}_{3}^{0}))\\
& =2\pi+{\psi}_{2}^{0}-{\psi}_{3}^{0}+P({\psi}_{2}^{0})-P({\psi}_{3}^{0})\\
& +P({\psi}_{2}^{0}-{\psi}_{3}^{0}+P({\psi}_{2}^{0})-P({\psi}_{3}^{0}))\\
& \simeq2\pi+{\psi}_{2}^{0}-{\psi}_{3}^{0}+P\left( {\psi}_{2}^{0}\right)
-P\left( {\psi}_{3}^{0}\right) +P\left( {\psi}_{2}^{0}-{\psi}_{3}%
^{0}\right)
\end{cases}
\]
\textbf{STEP 4}: second natural time shift. The next clock to arrive at
$2\pi^{-}$, from working hypothesis 3.2 (\ref{work2}), is the clock $O_{2}$ at
vertex $B$. The situation right before $O_{2}$ receives its kick of energy is
when the phase of this clock is $2\pi^{-}$.
Then we have
\[%
\begin{cases}
\psi_{2}^{4} & =2\pi^{-}\\
\psi_{1}^{4} & =\psi_{1}^{3}+2\pi-\psi_{2}^{3}\\
& \simeq2\pi-{\psi}_{3}^{0}-2P\left( {\psi}_{3}^{0}\right) +2\pi\\
& -\left( 2\pi+{\psi}_{2}^{0}-{\psi}_{3}^{0}+P\left( {\psi}_{2}^{0}\right)
-P\left( {\psi}_{3}^{0}\right) +P\left( {\psi}_{2}^{0}-{\psi}_{3}%
^{0}\right) \right) \\
& =2\pi-{\psi}_{2}^{0}-P\left( {\psi}_{2}^{0}\right) -P\left( {\psi}%
_{3}^{0}\right) -P\left( {\psi}_{2}^{0}-{\psi}_{3}^{0}\right) \\
\psi_{3}^{4} & =\psi_{3}^{3}+2\pi-\psi_{2}^{3}\\
& \simeq2\pi+{2\pi}-\left( 2\pi+{\psi}_{2}^{0}-{\psi}_{3}^{0}+P\left( {\psi
}_{2}^{0}\right) -P\left( {\psi}_{3}^{0}\right) +P\left( {\psi}_{2}%
^{0}-{\psi}_{3}^{0}\right) \right) \\
& \simeq2\pi-{\psi}_{2}^{0}+{\psi}_{3}^{0}-P\left( {\psi}_{2}^{0}\right)
+P\left( {\psi}_{3}^{0}\right) -P\left( {\psi}_{2}^{0}-{\psi}_{3}%
^{0}\right) .
\end{cases}
\]
\textbf{STEP 5}: third impact. Clock $O_{2}$ receives its internal energy
kick. It reaches the position $2\pi$.
Then we have
\[%
\begin{cases}
\psi_{2}^{5} & ={2\pi}\\
\psi_{3}^{5} & =\psi_{3}^{4} +P(\psi_{3}^{4})\\
& \simeq2\pi-{\psi}_{2}^{0}+{\psi}_{3}^{0}-P\left( {\psi}_{2}^{0}\right)
+P\left( {\psi}_{3}^{0}\right) -P\left( {\psi}_{2}^{0}-{\psi}_{3}%
^{0}\right) \\
& +P(2\pi-{\psi}_{2}^{0}+{\psi}_{3}^{0}-P\left( {\psi}_{2}^{0}\right)
+P\left( {\psi}_{3}^{0}\right) -P\left( {\psi}_{2}^{0}-{\psi}_{3}%
^{0}\right) )\\
& \simeq2\pi-{\psi}_{2}^{0}+{\psi}_{3}^{0}-P\left( {\psi}_{2}^{0}\right)
+P\left( {\psi}_{3}^{0}\right) -P\left( {\psi}_{2}^{0}-{\psi}_{3}%
^{0}\right) -P({\psi}_{2}^{0}-{\psi}_{3}^{0})\\
& =2\pi-{\psi}_{2}^{0}+{\psi}_{3}^{0}-P\left( {\psi}_{2}^{0}\right)
+P\left( {\psi}_{3}^{0}\right) -2P\left( {\psi}_{2}^{0}-{\psi}_{3}%
^{0}\right) \\
\psi_{1}^{5} & =\psi_{1}^{4}+P\left( \psi_{1}^{4} \right) \\
& \simeq2\pi-{\psi}_{2}^{0}-P\left( {\psi}_{2}^{0}\right) -P\left( {\psi
}_{3}^{0}\right) -P\left( {\psi}_{2}^{0}-{\psi}_{3}^{0}\right) +\\
& P\left( 2\pi-{\psi}_{2}^{0}-P\left( {\psi}_{2}^{0}\right) -P\left(
{\psi}_{3}^{0}\right) -P\left( {\psi}_{2}^{0}-{\psi}_{3}^{0}\right) \right)
\\
& \simeq2\pi-{\psi}_{2}^{0}-P\left( {\psi}_{2}^{0}\right) -P\left( {\psi
}_{3}^{0}\right) -P\left( {\psi}_{2}^{0}-{\psi}_{3}^{0}\right) -P({\psi
}_{2}^{0})\\
& =2\pi-{\psi}_{2}^{0}-2P\left( {\psi}_{2}^{0}\right) -P\left( {\psi}%
_{3}^{0}\right) -P\left( {\psi}_{2}^{0}-{\psi}_{3}^{0}\right) .
\end{cases}
\]
\textbf{STEP 6 (the final)}: third natural time shift. The next clock to
arrive at $2\pi^{-}$, from working hypothesis 3.2 (\ref{work2}), is the clock
$O_{1}$ at vertex $A$. The situation before $O_{1}$ receives its kick of
energy is when the phase of this clock is $2\pi^{-}$, i.e., the cycle is
complete.\newline
At this point we are able to describe what happens to the phases after a
complete cycle of the reference clock.
We have
\[%
\begin{cases}
\psi_{1}^{6} & ={2\pi}^{-}\\
\psi_{2}^{6} & =\psi_{2}^{5}+2\pi-\psi_{1}^{5}\\
& \simeq2\pi+2\pi-\left( 2\pi-{\psi}_{2}^{0}-2P\left( {\psi}_{2}^{0}\right)
-P\left( {\psi}_{3}^{0}\right) -P\left( {\psi}_{2}^{0}-{\psi}_{3}%
^{0}\right) \right) \\
& =2\pi+{\psi}_{2}^{0}+2P\left( {\psi}_{2}^{0}\right) +P\left( {\psi}%
_{3}^{0}\right) +P\left( {\psi}_{2}^{0}-{\psi}_{3}^{0}\right) ;\\
\psi_{3}^{6} & =\psi_{3}^{5}+2\pi-\psi_{1}^{5}\\
& \simeq2\pi-{\psi}_{2}^{0}+{\psi}_{3}^{0}-P\left( {\psi}_{2}^{0}\right)
+P\left( {\psi}_{3}^{0}\right) -2P\left( {\psi}_{2}^{0}-{\psi}_{3}%
^{0}\right) +2\pi\\
& -\left( 2\pi-{\psi}_{2}^{0}-2P\left( {\psi}_{2}^{0}\right) -P\left(
{\psi}_{3}^{0}\right) -P\left( {\psi}_{2}^{0}-{\psi}_{3}^{0}\right) \right)
\\
& =2\pi+{\psi}_{3}^{0}+P({\psi}_{2}^{0})+2P({\psi}_{3}^{0})-P({\psi}_{2}%
^{0}-{\psi}_{3}^{0}).
\end{cases}
\]
Now, we compute the phase differences after the first cycle of $O_{1}$.
We have
\begin{align*}
(BA)_{I} & =-(AB)_{I}=\psi_{2}^{6}-\psi_{1}^{6}\\
& \simeq2\pi+{\psi}_{2}^{0}+2P\left( {\psi}_{2}^{0}\right) +P\left( {\psi
}_{3}^{0}\right) +P\left( {\psi}_{2}^{0}-{\psi}_{3}^{0}\right) -2\pi\\
& ={\psi}_{2}^{0}+2P\left( {\psi}_{2}^{0}\right) +P\left( {\psi}_{3}%
^{0}\right) +P\left( {\psi}_{2}^{0}-{\psi}_{3}^{0}\right) \\
& =(BA)_{0}+2P((BA)_{0})+P((CA)_{0})+P((BA)_{0}-(CA)_{0})
\end{align*}
and%
\begin{align*}
& (CA)_{I}\\
& =-(AC)_{I}=\psi_{3}^{6}-\psi_{1}^{6}\\
& =2\pi+{\psi}_{3}^{0}+P({\psi}_{2}^{0})+2P({\psi}_{3}^{0})-P({\psi}_{2}%
^{0}-{\psi}_{3}^{0})-2\pi\\
& ={\psi}_{3}^{0}+P({\psi}_{2}^{0})+2P({\psi}_{3}^{0})-P({\psi}_{2}^{0}%
-{\psi}_{3}^{0})\\
& =({(CA)}_{0})+P({(BA)}_{0})+2P({(CA)}_{0})-P((BA)_{0}-(CA)_{0}).
\end{align*}
Hence, if we set $x=BA$ and $y=CA$, we obtain the system
\[
\left\{
\begin{array}
[c]{c}%
x_{1}=x_{0}+2P(x_{0})+P(y_{0})+P(x_{0}-y_{0})\\
y_{1}=y_{0}+P(x_{0})+2P({y}_{0})-P(x_{0}-y_{0}).
\end{array}
\right.
\]
\begin{center}
\textbf{THE MODEL}
\end{center}
By iterating the argument above, we get, for $n$ equal to the number of cycles
described by $O_{1}$, the discrete dynamical system:%
\[
\left\{
\begin{array}
[c]{c}%
x_{n+1}=x_{n}+2P(x_{n})+P(y_{n})+P(x_{n}-y_{n})\\
y_{n+1}=y_{n}+P(x_{n})+2P({y}_{n})-P(x_{n}-y_{n}).
\end{array}
\right.
\]
If we write
\[
\left\{
\begin{array}
[c]{c}%
\varepsilon\varphi\left( x,y\right) =2P(x)+P(y)+P(x-y)\\
\varepsilon\gamma\left( x,y\right) =P(x)+2P({y})+P(y-x),
\end{array}
\right.
\]
then we have
\[
\varphi\left( x,y\right) =\gamma\left( y,x\right) \text{,}%
\]
and the iteration is a perturbation of the identity as
\[
\left[
\begin{array}
[c]{c}%
x_{n+1}\\
y_{n+1}%
\end{array}
\right] =\left[
\begin{array}
[c]{cc}%
1 & 0\\
0 & 1
\end{array}
\right] \left[
\begin{array}
[c]{c}%
x_{n}\\
y_{n}%
\end{array}
\right] +\varepsilon\left[
\begin{array}
[c]{c}%
\varphi(x_{n},y_{n})\\
\varphi(y_{n},x_{n})
\end{array}
\right] ,
\]
that we can also write as
\begin{equation}
X_{n+1}=F(X_{n})=X_{n}+\varepsilon\Omega(X_{n}), \label{Model}%
\end{equation}
where
\[
X_{n+1}=\left[
\begin{array}
[c]{c}%
x_{n+1}\\
y_{n+1}%
\end{array}
\right] ,
\]
\[
X_{n}=\left[
\begin{array}
[c]{c}%
x_{n}\\
y_{n}%
\end{array}
\right] ,
\]
and
\[%
\begin{array}
[c]{c}%
\Omega(X_{n})
\end{array}
=\left[
\begin{array}
[c]{c}%
\varphi(x_{n},y_{n})\\
\varphi(y_{n},x_{n})
\end{array}
\right] .
\]
We now consider $P\left( x\right) =\varepsilon\sin x$, where $\varepsilon
=\frac{\alpha\mu}{8h^{2}}$ from hypothesis \ref{Perturb1}, explicitly,%
\begin{align*}
\varphi\left( x,y\right) & =2\sin x+\sin y+\sin\left( x-y\right) \\
\gamma\left( x,y\right) & =\sin x+2\sin y+\sin\left( y-x\right) .
\end{align*}
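The iteration (\ref{Model}) is straightforward to simulate. The following sketch (purely illustrative, not part of the formal development; the value of $\varepsilon$ and the initial condition are arbitrary choices) iterates the map and exhibits convergence to the phase-locked state $\left( \frac{2\pi}{3},\frac{4\pi}{3}\right) $:

```python
import math

EPS = 0.01  # illustrative value of the small parameter epsilon

def phi(x, y):
    return 2 * math.sin(x) + math.sin(y) + math.sin(x - y)

def step(x, y):
    # one iterate of the model: X_{n+1} = X_n + eps * Omega(X_n),
    # with Omega = (phi(x, y), phi(y, x)) since gamma(x, y) = phi(y, x)
    return x + EPS * phi(x, y), y + EPS * phi(y, x)

x, y = 1.0, 5.0  # a generic initial condition with y > x
for _ in range(20000):
    x, y = step(x, y)

print(x, y)  # close to (2*pi/3, 4*pi/3)
```

This previews the global result established in the next section: generic initial conditions above the diagonal are attracted to the symmetric locked state.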
\section{Analysis of the model}
\subsection{Fixed points and local stability}
In this section, we analyze the model (\ref{Model}) obtained in the previous
section. In a nutshell, we show that the map is differentiable and invertible
in $S=\left[ 0,2\pi\right] \times\left[ 0,2\pi\right] $ when
$\varepsilon>0$ is small, and that the perturbation map $\Omega\left(
x,y\right) $ is periodic in $\mathbb{R}^{2}$. This implies that the solution
of the problem in the set $S$ is a dynamical system, and not the usual
semi-dynamical system associated with discrete time. That provides a
reasonably simple structure for the problem of the stability of fixed points
and enables us to derive global properties.
Moreover, we prove that for small $\varepsilon$ the set $S$ is invariant for
the dynamics of $F$, meaning that the two phase differences of oscillators
$O_{2}$ and $O_{3}$ relative to oscillator $O_{1}$ stay in the interval
$\left[ 0,2\pi\right[ $.
In particular, the map $\Omega$ has the zeros $\left( \pi,\pi\right) $,
$\left( \frac{2}{3} \pi,\frac{4}{3} \pi\right) $ and $\left( \frac{4}{3}
\pi,\frac{2}{3} \pi\right) $ in the interior of the set $S=\left[
0,2\pi\right] \times\left[ 0,2\pi\right] $, which are fixed points of the
model $F$. There are also four trivial fixed points, $\left( 0,0\right) $,
$\left( 0,2\pi\right) $, $\left( 2\pi,0\right) $ and $\left( 2\pi
,2\pi\right) $ at the corners of $S$, and the four fixed points $\left(
0,\pi\right) $, $\left( \pi,2\pi\right) $, $\left( 2\pi,\pi\right) $ and
$\left( \pi,0\right) $ on the edges of $S$.
We now compute the Jacobian matrix $J\left( x,y\right) $ to establish the
dynamical nature of the fixed points in the usual way.
We have%
\begin{equation}
J\left( x,y\right) =\left[
\begin{array}
[c]{cc}%
1 & 0\\
0 & 1
\end{array}
\right] + \varepsilon\left[
\begin{array}
[c]{cc}%
2\cos x+\cos\left( x-y\right) & -\cos\left( x-y\right) +\ \cos y\\
\cos x-\cos\left( x-y\right) & \cos\left( x-y\right) +2\cos y
\end{array}
\right] . \label{Jacobmatrix}%
\end{equation}
We first consider the fixed points of $F$ in the interior of $S$. We start
with $\left( \frac{2}{3} \pi,\frac{4}{3} \pi\right) $ and $\left( \frac
{4}{3} \pi,\frac{2}{3} \pi\right) $. At both points the Jacobian is the
same,
\[
\left[
\begin{array}
[c]{cc}%
1-\frac{3}{2}\varepsilon & 0\\
0 & 1-\frac{3}{2}\varepsilon
\end{array}
\right] ,
\]
meaning that those two points are locally asymptotically stable for
$\varepsilon$ sufficiently small.
The Jacobian matrix of $F$ at $\left( \pi\text{,}\pi\right) $ is
\[
\left[
\begin{array}
[c]{cc}%
1-\varepsilon & -2\varepsilon\\
-2\varepsilon & 1-\varepsilon
\end{array}
\right] ,
\]
with eigenvalues $1-3\varepsilon$ and $1+\varepsilon$, which qualifies
$\left( \pi\text{,}\pi\right) $ as a saddle point. The stable manifold has
direction $\left( 1,1\right) $, and the unstable manifold is tangent at
$\left( \pi\text{,}\pi\right) $ to the vector $\left( -1,1\right) $.
We now consider the points placed at the vertexes of $S$. The Jacobian
matrix of $F$ at $\left( 0\text{,}0\right) $, $\left( 0,2\pi\right) $,
$\left( 2\pi,0\right) $ and $\left( 2\pi,2\pi\right) $ is, for all of
them, the following
\[
\left[
\begin{array}
[c]{cc}%
1+3\varepsilon & 0\\
0 & 1+3\varepsilon
\end{array}
\right] ,
\]
which qualifies all the vertexes of $S$ as repellers.
On the vertical edges of $S$ we have the fixed points $\left( 0,\pi\right)
$, and $\left( 2\pi,\pi\right) $, at which the Jacobian matrix of $F$ is
\[
\left[
\begin{array}
[c]{cc}%
1+\varepsilon & 0\\
2 \varepsilon & 1-3 \varepsilon
\end{array}
\right] ,
\]
which qualifies $\left( 0,\pi\right) $, and $\left( 2\pi,\pi\right) $ as
saddle points. The stable manifold has the direction of the $y$ axis and the
unstable manifold is tangent at $\left( 0\text{,}\pi\right) $ and $\left(
2\pi,\pi\right) $ to the vector $\left( 2,1\right) $.
Finally, at the horizontal edges of $S$ we have the Jacobian matrix of $F$ at
$\left( \pi,0\right) $, and $\left( \pi,2\pi\right) $
\[
\left[
\begin{array}
[c]{cc}%
1-3 \varepsilon & 2 \varepsilon\\
0 & 1+ \varepsilon
\end{array}
\right] ,
\]
which qualifies $\left( \pi,0\right) $ and $\left( \pi,2\pi\right) $ again
as saddle points. The stable manifold has the direction of the $x$ axis and the
unstable manifold is tangent at $\left( \pi,0\right) $ and $\left( \pi
,2\pi\right) $ to the vector $\left( 1,2\right) $.
The local analysis of the fixed points of $F$ reveals a very symmetric
picture. When $\varepsilon>0$ is small ($0<\varepsilon<{\varepsilon}_{0}%
=\frac{1}{9}$ is good enough), $F$ is a small perturbation of the identity,
$F(\partial{S})=\partial{S}$, the restriction of $F$ to the boundary of $S$,
$\partial{S}$, is a bijection (see section 4 for more details), and the
Jacobian determinant of $F$ is never null in the interior of $S$. Therefore,
$F$ is invertible on $S$.
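The local analysis above can be cross-checked numerically (an illustrative sketch; the value of $\varepsilon$ is arbitrary) by evaluating the Jacobian (\ref{Jacobmatrix}) at the interior fixed points and inspecting the moduli of its eigenvalues:

```python
import math

def jac(x, y, eps):
    # entries of the Jacobian matrix J(x, y) of F, cf. (Jacobmatrix)
    a = 1 + eps * (2 * math.cos(x) + math.cos(x - y))
    b = eps * (math.cos(y) - math.cos(x - y))
    c = eps * (math.cos(x) - math.cos(x - y))
    d = 1 + eps * (math.cos(x - y) + 2 * math.cos(y))
    return a, b, c, d

def eigvals(a, b, c, d):
    # eigenvalues of a real symmetric-spectrum 2x2 matrix
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(abs(tr * tr - 4 * det))
    return (tr - disc) / 2, (tr + disc) / 2

eps = 0.05
attr = eigvals(*jac(2 * math.pi / 3, 4 * math.pi / 3, eps))
sadl = eigvals(*jac(math.pi, math.pi, eps))
print(attr)  # both eigenvalues < 1: local attractor
print(sadl)  # one eigenvalue < 1 < the other: saddle
```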
\subsection{Heteroclinic connections and invariant sets}
We focus our attention on the existence of invariant subsets of $S$ for the
dynamics of $F$. Additionally, we prove below that $S$ is itself an invariant
set for the dynamics of $F$.
Recall that a \textit{heteroclinic} (sometimes called a heteroclinic
connection, or heteroclinic orbit) is a path in phase space which joins two
different equilibrium points. In the sequel, by \textit{sa-heteroclinic},
\textit{rs-heteroclinic}, and \textit{ra-heteroclinic}, we mean a
heteroclinic orbit connecting a saddle point to an attractor, a heteroclinic
orbit connecting a repeller to a saddle point, and a heteroclinic orbit
connecting a repeller to an attractor, respectively.
Let $F$ be our model map in some set $T$ with two fixed points $p$ and $q$.
Let $M_{u}\left( F,p\right) $ and $M_{s}\left( F,q\right) $ be the
unstable manifold and the stable manifold (\cite{AlSaYo}: pages 78, 403) of
the fixed points $p$ and $q$, respectively. Then, if by $M$ we denote a
heteroclinic connecting $p$ to $q$, we have
\[
M\subseteq M_{u}\left( F,p\right) \cap M_{s}\left( F,q\right) .
\]
In particular, $M$ is invariant, and the $\alpha$-limit and $\omega$-limit
sets of the points of $M$ are, respectively, $\{p\}$ and $\{q\}$
(\cite{AlSaYo}: page 331).
The other orbits, i.e., those with initial conditions not in $M$, cannot
cross the heteroclinic connections when the map $F$ is invertible; otherwise
the injectivity of the map would be violated. In the sequel, we study the
heteroclinics that connect saddle points to the attractors. Those
heteroclinics determine the nature of the whole flow of the dynamical system
in the plane, due to the invertible nature of $F$.
\subsubsection{Vertical heteroclinics}
Consider the two vertical lateral edges of $S$, $s_{0}$ and $s_{1}$ that are
the sets $s_{k}=\left\{ \left( x,y\right) \in S:\left( x=2k\pi\right)
\wedge0\leq y\leq2\pi\right\} $, $k=0,1$. Consider the image of these
segments under $F$. If we write $F=(F_{1},F_{2})$, then
\[%
\begin{cases}
F_{1}\left( 2k\pi,y\right) & =2k\pi+\varepsilon\sin y+\varepsilon\sin\left(
-y\right) =2k\pi\\
F_{2}\left( 2k\pi,y\right) & =y+2\varepsilon\sin y+\varepsilon\sin
y=y+3\varepsilon\sin y,
\end{cases}
\]
meaning that for $\varepsilon$ small enough the edges $s_{k}$, $k=0,1$, are
invariant, as already mentioned in section 2. For initial conditions on each
of the edges $s_{k}$, $k=0,1$, the dynamics is given by%
\[%
\begin{cases}
x_{n+1} & =2k\pi,\\
y_{n+1} & =y_{n}+3\varepsilon\sin y_{n}\text{.}%
\end{cases}
\]
For $\varepsilon<\frac{1}{9}$, the map $g:\left[ 0,2\pi\right]
\rightarrow\left[ 0,2\pi\right] $ defined by $g(t)=t+3\varepsilon\sin t$ is
a homeomorphism from the interval $\left[ 0,2\pi\right] $ into itself, as we
can see in figure \ref{homeo}. Moreover, since this map has an attracting
fixed point at $\pi$, each of the sets $s_{0}$ and $s_{1}$ can be split in
two invariant subsets. This is not very important for our global discussion,
but it establishes that the stable manifolds of the saddle points $\left(
0,\pi\right) $ and $\left( 2\pi,\pi\right) $ are, exactly and
respectively, the sets $s_{0}$ and $s_{1}$.
\begin{figure}
[ptb]
\begin{center}
\includegraphics[
height=2.2528in,
width=2.2632in
]%
{Fig3.eps}%
\caption{Graph of the map $g$, which is a homeomorphism in the interval
$\left[ 0,2\pi\right] $.}%
\label{homeo}%
\end{center}
\end{figure}
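Both properties of $g$ — strict monotonicity (hence the homeomorphism property) and the attraction to the interior fixed point $\pi$ — are easy to check numerically (a sketch; the grid size, tolerance and value of $\varepsilon$ are arbitrary choices):

```python
import math

eps = 0.1  # any 0 < eps < 1/3 keeps g'(t) = 1 + 3*eps*cos(t) > 0

def g(t):
    return t + 3 * eps * math.sin(t)

# strict monotonicity on a fine grid: g is an increasing bijection of [0, 2*pi]
grid = [2 * math.pi * k / 1000 for k in range(1001)]
monotone = all(g(grid[i]) < g(grid[i + 1]) for i in range(1000))

# iterates from the interior converge to the attracting fixed point at pi
t = 0.3
for _ in range(500):
    t = g(t)
print(monotone, t)  # True, value close to pi
```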
We have just shown that both $s_{0}$ and $s_{1}$ contain two heteroclinic
connections: in $s_{0}$, the line segment $s_{0}^{-}$ from $\left(
0,0\right) $ to $\left( 0,\pi\right) $ and $s_{0}^{+}$ from $\left(
0,2\pi\right) $ to $\left( 0,\pi\right) $; and in $s_{1}$, the line segment
$s_{1}^{-}$ from $\left( 2\pi,0\right) $ to $\left( 2\pi,\pi\right) $ and
$s_{1}^{+}$ from $\left( 2\pi,2\pi\right) $ to $\left( 2\pi,\pi\right) $.
The total number of vertical $rs$-heteroclinics is $4$.
\subsubsection{Horizontal heteroclinics}
Consider the two horizontal top and bottom edges of $S$, $r_{0}$ and $r_{1}$,
that are the sets $r_{k}=\left\{ \left( x,y\right) \in S:0\leq x\leq
2\pi\wedge\left( y=2k\pi\right) \right\} $, $k=0,1$. Consider the image of
these segments under $F$. As before, if we write $F=(F_{1},F_{2})$, then
\[%
\begin{cases}
F_{1}\left( x,2k\pi\right) & =x+3\varepsilon\sin x\\
F_{2}\left( x,2k\pi\right) & =2k\pi,
\end{cases}
\]
meaning that, for $\varepsilon$ small enough, the edges $r_{k}$, $k=0,1$, are
invariant. For initial conditions on each of the edges $r_{k}$,
$k=0,1$, the dynamics is given by%
\[%
\begin{cases}
x_{n+1} & =x_{n}+3\varepsilon\sin x_{n}\\
y_{n+1} & =2k\pi\text{.}%
\end{cases}
\]
For $\varepsilon<\frac{1}{9}$, the map $g:\left[ 0,2\pi\right]
\rightarrow\left[ 0,2\pi\right] $, defined by $g(t)=t+3\varepsilon\sin t$,
is the same as before, now governing the dynamics on the invariant
edges $r_{0}$ and $r_{1}$. The stable manifolds of $\left( \pi,0\right) $
and $\left( \pi,2\pi\right) $ are again, respectively, the edges $r_{0}$ and
$r_{1}$.
Arguing as for $s_{0}$ and $s_{1}$, both $r_{0}$ and $r_{1}$
contain two analogous heteroclinic connections.
We have just proved, in detail, that the boundary of $S$ is an invariant set.
More is true: each edge of $S$ is an invariant set.
Since the map $F$ is invertible, the initial conditions in the interior of $S
$, $S^{0}$, cannot cross the invariant boundary $\partial S=s_{0}\cup
s_{1}\cup r_{0}\cup r_{1}$, meaning that ${S}^{0}$ is an invariant set. This
means, in particular, that for equal clocks there will be no secular drift of
phase differences of the three clocks, the delays and advances are contained
in the set $S=\left[ 0,2\pi\right] \times\left[ 0,2\pi\right] $.
The total number of horizontal $rs$-heteroclinics is $4$. The total number of
$rs$-heteroclinics in the boundary of $S$ is $8$.
\subsubsection{Diagonal heteroclinics}
Finally, we now show that $S^{o}$, the interior of $S$, can be split
in two subsets, $S_{U}$ and $S_{D}$, $U$ for up and $D$ for down, where the
dynamics is again invariant. Consider now the set
\[
\Delta=\left\{ \left( x,y\right) \in S:y=x\text{, }x\in\left[
0,2\pi\right] \right\} ,
\]
the diagonal of $S$ connecting $\left( 0,0\right) $ to $\left( 2\pi
,2\pi\right) $. The image of a point of $\Delta$ by $F$ is now
\[%
\begin{cases}
F_{1}\left( x,x\right) & =x+3\varepsilon\sin x,\\
F_{2}\left( x,x\right) & =x+3\varepsilon\sin x.
\end{cases}
\]
Hence, the same homeomorphism $g$ as before appears again. We repeat the same
reasonings as before and deduce that $\Delta$ is invariant under $F$, and it
splits $S^{o}$ in two open sets: the triangle above it and the triangle below
it. Moreover, the stable manifold of the saddle point $\left( \pi,\pi\right)
$ is the set $\Delta$.
This also proves the existence of two heteroclinics in $\Delta$, connecting
$\left( 0,0\right) $ to $\left( \pi,\pi\right) $ and $\left( 2\pi
,2\pi\right) $ to $\left( \pi,\pi\right) $, respectively. The total number
of $rs$-heteroclinics is now $10$, respectively $8$ on the edges and $2$ on the
main diagonal $\Delta$, all of them connecting repellers to saddles.
Consider now the other diagonal of $S$, i.e., the set
\[
\tilde{\Delta}=\left\{ \left( x,y\right) \in S:y=2\pi-x\text{, }x\in\left[
0,2\pi\right] \right\} .
\]
The image of a point of $\tilde{\Delta}$ under $F$ now is
\[%
\begin{cases}
F_{1}\left( x,y\left( x\right) \right) & =x+\varepsilon\sin x+\varepsilon
\sin2x,\\
F_{2}\left( x,y\left( x\right) \right) & =2\pi-\left( x+\varepsilon\sin
x+\varepsilon\sin2x\right) .
\end{cases}
\]
Hence, $\tilde{\Delta}$ is invariant.
The map $h_{1}:\left[ 0,2\pi\right] \rightarrow\left[ 0,2\pi\right] $,
defined as $h_{1}(t)=t+\varepsilon\sin t+\varepsilon\sin2t$, is a
homeomorphism with $5$ fixed points from $\left[ 0,2\pi\right] $ to itself
(see figure \ref{homeo2}).
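Indeed, the fixed points of $h_{1}$ solve $\sin t+\sin2t=\sin t\left( 1+2\cos t\right) =0$ in $\left[ 0,2\pi\right] $, i.e., $t\in\left\{ 0,\frac{2\pi}{3},\pi,\frac{4\pi}{3},2\pi\right\} $. A quick numerical check (illustrative; the value of $\varepsilon$ is arbitrary):

```python
import math

eps = 0.05

def h1(t):
    return t + eps * (math.sin(t) + math.sin(2 * t))

# fixed points: zeros of sin(t) * (1 + 2*cos(t)) on [0, 2*pi]
fixed = [0.0, 2 * math.pi / 3, math.pi, 4 * math.pi / 3, 2 * math.pi]
residuals = [abs(h1(t) - t) for t in fixed]

# h1'(t) = 1 + eps*(cos t + 2*cos 2t) classifies them:
# 2*pi/3 and 4*pi/3 are attracting; 0, pi and 2*pi are repelling
slopes = [1 + eps * (math.cos(t) + 2 * math.cos(2 * t)) for t in fixed]
print(residuals, slopes)
```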
We repeat the same reasonings as before and deduce that the set $\tilde
{\Delta}$ splits the interior set $S^{o}$ again in two open sets: the triangle
above and the triangle below. So, we have now split $S^{0}$ in four small triangles.
There are four heteroclinic connections in $\tilde{\Delta}$, one connecting
the repeller $\left( 0,2\pi\right) $ to the attractor $\left( \frac{2\pi
}{3},\frac{4\pi}{3}\right) $ (ra-heteroclinic), two $sa$-heteroclinics
connecting the saddle point $\left( \pi,\pi\right) $ to the attractors
$\left( \frac{2\pi}{3},\frac{4\pi}{3}\right) $ and $\left( \frac{4\pi}%
{3},\frac{2\pi}{3}\right) $, and, finally, the last heteroclinic on this
diagonal set is the one that connects the repeller $\left( 2\pi,0\right) $
to the attractor $\left( \frac{4\pi}{3},\frac{2\pi}{3}\right) $
(ra-heteroclinic). The total number of sa-heteroclinics is now $2$.%
\begin{figure}
[ptb]
\begin{center}
\includegraphics[
height=2.3774in,
width=2.4457in
]%
{Fig4.eps}%
\caption{The homeomorphism $h_{1}$ with five fixed points.}%
\label{homeo2}%
\end{center}
\end{figure}
\bigskip
We proceed with the same line of reasoning for the other $sa$-heteroclinics.
Consider now the set
\[
d_{1}=\left\{ \left( x,y\right) \in S:y=\pi+\frac{x}{2}\text{, }x\in\left[
0,\frac{2\pi}{3}\right] \right\}
\]
and the map $F$ applied to the points of $d_{1}$:%
\[%
\begin{cases}
F_{1}\left( x,y\left( x\right) \right) & =x+2\varepsilon\sin
x-2\varepsilon\sin\left( \frac{x}{2}\right) ,\\
F_{2}\left( x,y\left( x\right) \right) & =\pi+\frac{1}{2}\left(
x+2\varepsilon\sin x-2\varepsilon\sin\left( \frac{x}{2}\right) \right) .
\end{cases}
\]
The points of $d_{1}$ stay in $d_{1}$ under the action of $F$, proving that
this set also is invariant. The function $h_{2}:[0,\frac{2\pi}{3}%
]\rightarrow\lbrack0,\frac{2\pi}{3}]$, defined as $h_{2}(t)=t+2\varepsilon\sin
t-2\varepsilon\sin\left( \frac{t}{2}\right) $, is a homeomorphism, from
which we can readily see that the dynamics in $d_{1}$ is quite simple. The
graph of this homeomorphism can be seen in figure \ref{homeo3}. There is one
$sa$-heteroclinic from the saddle at $\left( 0,\pi\right) $ to the attractor
$\left( \frac{2\pi}{3},\frac{4\pi}{3}\right) $. Actually, there is another
heteroclinic in the segment connecting the repeller $\left( 2\pi,2\pi\right)
$ to the attractor $\left( \frac{2\pi}{3},\frac{4\pi}{3}\right) $, but this
is not an sa-heteroclinic. Up to now, we have $3$ $sa$-heteroclinic
connections.%
\begin{figure}
[ptb]
\begin{center}
\includegraphics[
height=2.3549in,
width=2.3929in
]%
{Fig5.eps}%
\caption{The homeomorphism $h_{2}$ with two fixed points, one a repeller and
the other an attractor.}%
\label{homeo3}%
\end{center}
\end{figure}
Consider now the set $c_{1}=\left\{ \left( x,y\right) \in S:y=2x\text{,
}x\in\left[ \frac{2\pi}{3},\pi\right] \right\} $ and the map $F$ applied to
the points of $c_{1}$:%
\begin{align*}
F_{1}\left( x,y\right) & =x+\varepsilon\sin x+\varepsilon\sin2x,\\
F_{2}\left( x,y\right) & =y+2\varepsilon\sin x+2\varepsilon\sin2x.
\end{align*}
The points of $c_{1}$ stay in $c_{1}$ under $F$, proving that this set is
invariant. Actually, the segment would be invariant if we extended $x$ to the
interval $\left[ 0,\pi\right] $, but we are not interested in heteroclinics
from repellers to attractors. Moreover, the dynamics is given by a restriction
of $h_{1}$ to the interval $\left[ \frac{2\pi}{3},\pi\right] $. In this
interval there are only two fixed points, the attractor $\frac{2\pi}{3}$ and
the repeller $\pi$. This procedure adds one more $sa$-heteroclinic to the
global picture. So, we have found, up to now, $4$ $sa$-heteroclinics.
In $S_{D}$, we consider
\[
c_{2}=\left\{ \left( x,y\right) \in S:y=2\left( x-\pi\right) \text{,
}x\in\left[ \pi,\frac{4\pi}{3}\right] \right\}
\]
and
\[
d_{2}=\left\{ \left( x,y\right) \in S:y=\frac{x}{2}\text{, }x\in\left[
\frac{4\pi}{3},2\pi\right] \right\} .
\]
Following exactly the same reasonings as before, we obtain two more
$sa$-heteroclinics, one connecting $\left( \pi,0\right) $ to the attractor
$\left( \frac{4\pi}{3},\frac{2\pi}{3}\right) $ and the other connecting
$\left( 2\pi,\pi\right) $ to the same attractor.
\subsection{Phase portrait}
The total number of $sa$-heteroclinics is $6$. All of them are straight
segments. Together with the $rs$-heteroclinics, they split the set $S$ in six
invariant sets, as can be seen in Figure \ref{Portrait}, where the red curves
represent heteroclinic connections. The flow curves are also represented in
the phase portrait.
Since the map $F$ is invertible, no orbit can cross either the red curves,
blue curves or black flow curves. There are only two attractors and the
dynamics, due to the invertible nature of the map $F$ and its large symmetry,
is relatively simple: in every invariant set in the plane, the restriction
maps are again homeomorphisms and the flow curves must follow, by continuity,
the heteroclinic connections on the outer boundaries of each invariant set.
Consequently, only the orbits on the outer edges and main diagonal, i.e., in
the set $s_{0}\cup s_{1}\cup r_{0}\cup r_{1}\cup\Delta$ are not attracted to the
two attractors $\left( \frac{2\pi}{3},\frac{4\pi}{3}\right) $ and $\left(
\frac{4\pi}{3},\frac{2\pi}{3}\right) $. The upper attractor $\left(
\frac{2\pi}{3},\frac{4\pi}{3}\right) $ attracts the points in the open upper
triangle $S_{U}$ with converse results for the lower attractor $\left(
\frac{4\pi}{3},\frac{2\pi}{3}\right) $ in $S_{D}$. The full picture can be
seen in figure \ref{Portrait}.
\begin{figure}[ptb]
\begin{center}
\includegraphics[
height=3.1445in,
width=3.1254in
]{flow.eps}
\end{center}
\caption{Phase portrait of $F$, for small $\varepsilon$. In red, the 16
heteroclinic connections. For illustrative purposes, we represent
in blue some straight line invariant sets, actually heteroclinics, connecting
repellers to attractors. All the points in the interior of $S_{U}$ and $S_{D}$
belong to heteroclinic connections for $F$.}%
\label{Portrait}%
\end{figure}
\section{Conclusions and future work}
In this paper we have proved that three oscillators, mutually interacting with
symmetric coupling, converge to a final symmetric locked state with mutual
phase differences of $\frac{2\pi}{3}$. This can happen in two different
settings, clockwise or counterclockwise, depending on the initial conditions.
This very symmetrical final locked state leads us to the conjecture
that $n$ oscillators, each weakly interacting with the other $n-1$
oscillators, will reach a final state with mutual phase differences of
$\frac{2\pi}{n}$, clockwise or counterclockwise distributed.
In future work, already in preparation, we shall discuss the same phenomenon
with slightly different natural angular frequencies $\omega_{1}$,
$\omega_{2}$ and $\omega_{3}$ and, in particular, the existence and form of
Arnold Tongues \cite{boyland1986,gilmore2011}.
As was done in \cite{OlMe}, it would be interesting to test our model
experimentally, to see whether the real world matches the theoretical predictions.
\paragraph*{Acknowledgements}
The author ED was partially supported by the program Erasmus+. The author HMO
was partially supported by FCT/Portugal through the project UID/MAT/04459/2013.
\section*{Appendix}
In this Appendix, as another way to determine the nature of the two
attractors, that is, their global asymptotic stability, we point out the
existence, in the sets $S_{U}$ and $S_{D}$, of two Liapounov functions,
$V_{U}$ and $V_{D}$, respectively \cite{lasalle1976stability}.
Consider, first, the invariant set $S_{U}$. Define $V_{U}: S_{U}
\rightarrow\mathbb{R}$ as follows:%
\[
V_{U}\left( x,y\right) =\left( x-\frac{2\pi}{3}\right) ^{2}+\left(
y-\frac{4\pi}{3}\right) ^{2}-\left( x-\frac{2\pi}{3}\right) \left(
y-\frac{4\pi}{3}\right) \text{.}
\]
The \textit{discrete orbital derivative} inside the invariant set $S_{U}$ is
\[
DF\left( x,y\right) =V_{U}\left( F\left( x,y\right) \right) -V_{U}
\left( x,y\right) .
\]
We have
\begin{align*}
{DF\left( x,y\right) } & =V_{U}\left( F\left( x,y\right) \right)
-V_{U}\left( x,y\right) \\
& =V_{U}(x+\varepsilon\varphi(x,y),y+\varepsilon\gamma(x,y))-V_{U}(x,y)\\
& ={\varepsilon}^{2}[{\varphi}^{2}(x,y)+{\gamma}^{2}(x,y)-{\varphi}%
(x,y)\cdot{\gamma}(x,y)]\\
& +\varepsilon\left[ \left( x-\tfrac{2\pi}{3}\right) \left( 2\varphi
(x,y)-\gamma(x,y)\right) +\left( y-\tfrac{4\pi}{3}\right) \left(
2\gamma(x,y)-\varphi(x,y)\right) \right] ,
\end{align*}
where, recall that
\[%
\begin{cases}
\varphi\left( x,y\right) & =2\sin x+\sin y+\sin\left( x-y\right) \\
\gamma\left( x,y\right) & =\sin x+2\sin y+\sin\left( y-x\right)
=\varphi(y,x).
\end{cases}
\]
An easy computation shows that
\[%
\begin{cases}
2\varphi\left( x,y\right) -\gamma(x,y) & =3(\sin x+\sin\left( x-y\right)
)\\
2\gamma\left( x,y\right) -\varphi(x,y) & =3(\sin y-\sin\left( x-y\right)
)\\
\varphi(x,y)+\gamma(x,y) & =3(\sin x+\sin y).
\end{cases}
\]
Hence,
\begin{align*}
\frac{DF(x,y)}{3\varepsilon} & =\varepsilon\left[ 3{(\sin x+\sin y)}%
^{2}-\varphi(x,y)\cdot\gamma(x,y)\right] \\
& +\left( x-\tfrac{2\pi}{3}\right) \cdot(\sin x+\sin(x-y))+\left(
y-\tfrac{4\pi}{3}\right) \cdot(\sin y-\sin(x-y)).
\end{align*}
By using numerical analysis, we conclude that this discrete orbital
derivative, $DF$, is non-positive in $S_{U}$ for small $\varepsilon$, more
precisely, zero for the fixed points and negative elsewhere.
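Such a check can be reproduced with a few lines of code (a sketch; the grid and the value of $\varepsilon$ are arbitrary choices): sampling the open triangle $S_{U}$, the orbital derivative $DF$ is strictly negative away from the fixed points.

```python
import math

eps = 0.01

def phi(x, y):
    return 2 * math.sin(x) + math.sin(y) + math.sin(x - y)

def V_U(x, y):
    u, v = x - 2 * math.pi / 3, y - 4 * math.pi / 3
    return u * u + v * v - u * v

def DF(x, y):
    # discrete orbital derivative of V_U along one step of F
    return V_U(x + eps * phi(x, y), y + eps * phi(y, x)) - V_U(x, y)

# sample the open upper triangle S_U = {0 < x < y < 2*pi}
vals = [DF(0.1 + 0.2 * i, 0.1 + 0.2 * j)
        for i in range(31) for j in range(31)
        if 0.1 + 0.2 * i < 0.1 + 0.2 * j < 2 * math.pi]
print(max(vals))  # strictly negative on the whole sample
```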
Analogously, by the same method, the function $V_{D}:S_{D}\rightarrow
\mathbb{R}$ defined as
\[
V_{D}\left( x,y\right) =\left( x-\frac{4\pi}{3}\right) ^{2}+\left(
y-\frac{2\pi}{3}\right) ^{2}-\left( x-\frac{4\pi}{3}\right) \left(
y-\frac{2\pi}{3}\right) \text{,}%
\]
is a Liapounov function on $S_{D}$.
The very first stationary functional derivative Schr\"{o}dinger
equation was introduced in 1928 by P. Jordan and W. Pauli
(Zur Quantenelektrodynamik ladungsfreier Felder,
Zeitschrift f\"{u}r Physik, Vol. 47):
For wave functionals $F(\phi(x))$
of massless scalar fields $\phi(x), x\in\mathbb{R}$,
\begin{equation}
-\left(\frac{\hbar}{4\pi}\right)^2\int\!dx\, \left[
\frac{\delta^2}{\delta \phi(x)^2}
+ c^2\left(\frac{d\phi(x)}{d x}\right)^2\right]F(\phi(x))=\lambda F(\phi(x)).
\end{equation}
There was a vivid discussion of ``Volterra mathematics'' between W. Heisenberg, P. Jordan, and W. Pauli. However, until now there has been no sound mathematical progress in the solution of such equations. Perturbation and lattice approximations do not converge in meaningful examples.
Moreover, according to P. Dirac (``Lectures on quantum field theory'', Yeshiva University, N.Y. 1966,
Section ``Relationship of the Heisenberg and Schr\"{o}dinger Pictures''),
\begin{quotation}
\textsl{
The interactions that are physically important in quantum field theory are so violent
that they will knock any Schr\"{o}dinger state vector out of Hilbert space
in the shortest possible time interval.}
\textsl {[...] It is better to abandon all attempts at using the Schr\"{o}dinger picture
with these Hamiltonians.}
\textsl {[...] I don't want to assert that the Schr\"{o}dinger picture will not come back.
In fact, there are so many beautiful things about it that I have the feeling in
the back of my mind that it ought to come back. I am really loath to have to give it up.}
\end{quotation}
Heisenberg partial derivative equations for interacting quantized fields are non-linear. In contrast, Schr\"{o}dinger equations for states are linear, so presumably they may be solved by well-developed Hilbert space methods.
Unfortunately, in the second quantization formalism, a ``violent" Schr\"{o}dinger operator is not densely defined in the
Fock space (see \cite{RS}, vol. II, Chapter X). For the sake of operator methods one needs to apply cutoffs.
This article proposes a rigorous mathematical theory of Schr\"{o}dinger functional differential operators with combined ultraviolet and infrared cutoffs:
\begin{itemize}
\item Section 1 is a convenient review of infinite dimensional distributions.
\item Section 2 begins with a rigorous treatment of functional derivative operators.
Theorem 2.2 asserts a lower bound for cutoff Hamiltonian functional derivative operators defined by classical Hamiltonians bounded from below.
The coherent states matrix elements of the corresponding evolution operators are given the form of antinormal functional Feynman integral (Theorem 2.3).
\item Section 3 introduces a quantized infinite dimensional Galerkin approximation of cutoff functional derivative equations by partial derivative equations, and shows that
this Feynman integral is a double limit of finite dimensional Gaussian integrals (Theorem 3.1). Thus we have a convergent computational scheme in pseudo-Euclidean space,
a viable alternative to lattice approximations.
\end{itemize}
The original antinormal Feynman integral, based on Chernoff's product formula, was introduced by
J. Klauder and B.-S. Skagerstam (see \cite{KS}, page 69). Another, based on an infinite-dimensional symbolic calculus, has been used in \cite{D} to solve non-cutoff functional Schr\"{o}dinger equations with integrable infinite-dimensional Hamiltonians.
Here the functional Feynman integral is rigorously defined as a limit of Klauder-Skagerstam integrals associated with approximating finite-dimensional Hamiltonians.
\medskip
\emph{In the text the triangles $\triangleright$ and $\triangleleft$ mark the beginning and the end of a proof.}
\section{Review of infinite-dimensional distributions}
\subsection{Bosonic Fock representations}
Let $\mathcal{H}$ be a complex (separable) Hilbert $*$-space with a given complex conjugate isometric involution
$\phi\rightarrow \phi ^{*}$.
The $*$-subspaces of $\mathcal{H} $ are invariant under the conjugation.
The Hermitian inner product of $\alpha$ and $\beta$ is denoted
$\alpha^{*}\beta$. It is complex conjugate linear in $\alpha^{*}$ and linear in $\beta$.
\smallskip
The \emph{real part } of $\mathcal{H}$ is the real Hilbert subspace
$\Re\mathcal{H}=\{\rho\in\mathcal{H}: \rho^{*}=\rho\}$, and the \emph{imaginary part}
of $\mathcal{H}$ is the real Hilbert subspace $\Im\mathcal{H}=\{i\pi\in\mathcal{H}:
\quad \pi\in\Re\mathcal{H}\}$ .
Since any $\phi=\rho+i\pi$ with $\rho=(\phi+\phi^{*})/2,\ \pi=(\phi-\phi^{*})/2i$, the $*$-space $\mathcal{H}$
is the direct orthogonal sum of the real part $\Re\mathcal{H}$ and the imaginary part $\Im\mathcal{H}$.
Along with $\phi^{*}=\rho-i\pi$ this implies that a choice of the involution $*$ is uniquely
defined by the choice of the real part $\Re\mathcal{H}$.
\smallskip
An operator $o$ in $\mathcal{H}$ is \emph{real} if it commutes with
the involution $*$.
\smallskip
Let $\mathcal{H}^{*}$ denote the antidual Hilbert space of $\mathcal{H}$ with
respect to the Hermitian form $\phi^{*}\psi$.
\smallskip
The Hilbert space $\mathcal{H}\times\mathcal{H}^{*}$ carries
the conjugation $(\alpha,\beta^{*})^{*}=(\beta,\alpha^{*})$. The
corresponding real part $\mathcal{R}$
is the \emph{antidiagonal} $\{(\phi,\phi^{*}):\phi\in\mathcal{H}\}$.
The isometry $\phi\mapsto (1/\sqrt{2})(\phi,\phi^{*})$ is
a representation of $\mathcal{H}$ as a real Hilbert space.
\medskip
A \emph{Fock representation of bosonic canonical commutation relations}
over $\mathcal{H}$ is described by
\begin{enumerate}
\item
A Fock Hilbert $*$-space $\mathcal{F}=\mathcal{F}(\mathcal{H})$;
\item
Two families of \emph{creators} $\mathcal{F}^{+}(\phi)$ and
\emph{annihilators} $\mathcal{F}^{-}(\phi)$ which are linear unbounded operators
in $\mathcal{F}$ with a common invariant dense $*$-domain
$\mathcal{P}$ in $\mathcal{F}$ such that $\mathcal{F}^{+}(\phi)$
and $\mathcal{F}^{-}(\phi)$ are complex linear with respect to $\phi\in\mathcal{H}$
and the Hermitian adjoint
$ [\mathcal{F}^{+}(\phi)]^{\dagger}=\mathcal{F}^{-}(\phi^{*})$
\item
The unit \emph{vacuum} vector $F_{0}$ in $\Re\mathcal{P}$.
\item
$\mathcal{F}^{-}(\phi)F_{0}=0$ for any $\phi$; and $\mathcal{P}$ is
the linear span of the \emph{power} Fock vectors
\begin{equation}
\mathcal{F}^{+}(\phi)^{n}F_{0},\ \phi\in \mathcal{H},\ n=0,1,2,\ldots .
\end{equation}
\item
Creators and annihilators satisfy the
\emph{canonical Fock commutation relations} on $\mathcal{P}$:
\begin{equation}
[\mathcal{F}^{-}(\alpha^{*}), \mathcal{F}^{+}(\beta)]=\alpha^{*}\beta,\quad
[\mathcal{F}^{+}(\alpha), \mathcal{F}^{+}(\beta)]= 0
=[\mathcal{F}^{-}(\alpha), \mathcal{F}^{-}(\beta)].
\end{equation}
\end{enumerate}
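As a finite-dimensional illustration outside the formal development, the canonical Fock commutation relations can be checked numerically for a single bosonic mode truncated at an arbitrary level $N$; the truncation spoils only the highest diagonal entry of the commutator.

```python
import numpy as np

N = 8  # illustrative truncation level

# One bosonic mode in the number basis |0>,...,|N-1>:
# annihilator a|n> = sqrt(n)|n-1>, creator = Hermitian adjoint of a.
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
a_dag = a.conj().T

comm = a @ a_dag - a_dag @ a  # [F^-, F^+] for unit vectors

# Below the truncation level the commutator is the identity,
# as the canonical commutation relations require; the last
# diagonal entry 1 - N is a pure truncation artifact.
print(np.allclose(comm[:-1, :-1], np.eye(N - 1)))
print(comm[-1, -1])
```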
The polarization formula
\begin{equation}
\mathcal{F}^{+}(\phi_{1})\ldots \mathcal{F}^{+}(\phi_{n})F_{0}=\frac{1}{2^{n}n!}\sum_{\sigma_{j}} \sigma_{1}\ldots\sigma_{n}
\mathcal{F}^{+}(\sigma_{1}\phi_{1}+\ldots+
\sigma_{n}\phi_{n})^{n}F_{0},
\end{equation}
where the sum runs over the $2^{n}$ choices of signs $\sigma_{j}\in\{1,-1\},\ j=1,\ldots,n$,
shows that $\mathcal{P}$ is the complex span of the product Fock vectors
$\mathcal{F}^{+}(\phi_{1})\ldots \mathcal{F}^{+}(\phi_{n})F_{0}$.
\begin{remark}
For a given $\mathcal{H}$ all \emph{Fock representations}
$(\mathcal{F}, \ F_{0}, \ \mathcal{F}^{+}, \mathcal{F}^{-})$ are unitarily equivalent.
\end{remark}
The \emph{Segal functor} $\Gamma$ (see \cite{BSZ}, Chapter I) assigns to an operator
$o$, with a dense domain $\mathcal{H}'$ in $\mathcal{H}$,
an operator $\mathcal{F}(o)$ in $\mathcal{F}$, with the dense domain $\mathcal{P}'$,
spanned by $\mathcal{F}^{+}(\phi')^{n}F_{0},\ \phi'\in\mathcal{H}',\ n=0,1,2,\ldots$, such that
\begin{equation}
\mathcal{F}(o)F_{0}=F_{0},\
\mathcal{F}(o)[\mathcal{F}^{+}(\phi')^{n}F_{0}]=\mathcal{F}^{+}(o\phi')^{n}F_{0}.
\end{equation}
Then
\begin{itemize}
\item
If $o_{2}o_{1}$ exists on a dense domain in $\mathcal{H}$, then
$\mathcal{F}(o_{2}o_{1})=\mathcal{F}(o_{2})\mathcal{F}(o_{1})$.
\item
$\mathcal{F}(1)=1,\ \mathcal{F}(o^{-1})=\mathcal{F}(o)^{-1},\
\mathcal{F}(o^{\dag})=\mathcal{F}(o)^{\dag}$.
\item
If $o$ is a unitary operator, then $\mathcal{F}(o)$ is unitary as well.
\item
If $o$ is an orthogonal projector, then $\mathcal{F}(o)$ is an orthogonal projector too.
\item
$\mathcal{F}(o)$ is non-negative if $o$ is a non-negative operator.
\item
If $o$ is an (essentially) selfadjoint operator, then $\mathcal{F}(o)$ is essentially selfadjoint.
\end{itemize}
The \emph{tangential Fock functor} $d\Gamma$ assigns to the operator $o$ an
operator $\dot{\mathcal{F}}(o)$ defined on $\mathcal{F}(\mathcal{H}')$ by
\begin{equation}
\dot{\mathcal{F}}(o)F_{0}=0,\quad
\dot{\mathcal{F}}(o)[\mathcal{F}^{+}(\phi')^{n}F_{0}]=n\mathcal{F}^{+}(o\phi')\mathcal{F}^{+}(\phi')^{n-1}F_{0}.
\end{equation}
Thus
\begin{itemize}
\item
If the commutator $[o_{2},o_{1}]$ exists on a dense domain in $\mathcal{H}$, then
$\dot{\mathcal{F}}([o_{2},o_{1}])=[\dot{\mathcal{F}}(o_{2}),
\dot{\mathcal{F}}(o_{1})]$.
\item
If $o\geq 0$, then $\dot{\mathcal{F}}(o)\geq 0$.
\item
If $o$ is an (essentially) self-adjoint operator, then $\dot{\mathcal{F}}(o)$ is
essentially self-adjoint in $\mathcal{F}$.
\item
If $o$ generates a strong (semi)group $\exp(-to)$ with real parameter $t$,
then $\dot{\mathcal{F}}(o)$ generates the strong (semi)group $\mathcal{F}(\exp(-to))$.
\end{itemize}
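For a single mode and a scalar operator $o=\lambda>0$, the tangential quantization is $\dot{\mathcal{F}}(\lambda)=\lambda N$, with $N$ the number operator, and the last property reads $\mathcal{F}(e^{-t\lambda})=e^{-t\lambda N}$. A hedged finite truncation makes this explicit:

```python
import numpy as np

M = 6            # illustrative truncation level
lam, t = 0.7, 0.3

n = np.arange(M)                 # occupation numbers 0..M-1
N_op = np.diag(n.astype(float))  # number operator = tangential quantization of 1

# The Segal functor sends the scalar c to multiplication by c^n
# on the n-particle states; here c = exp(-t*lam).
gamma = np.diag(np.exp(-t * lam) ** n)

# exp(-t * lam * N) computed directly on the diagonal.
semigroup = np.diag(np.exp(-t * lam * n))

print(np.allclose(gamma, semigroup))
```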
\subsection{Functional Fock representations}
\subsubsection{Integration on $\Re\mathcal{H}$}
Let $p$ denote an orthogonal projector in $\mathcal{H}$
of finite rank $r(p)$.
We assume that $p$ commutes with the conjugation.
Then $p$ is the orthogonal projector of $\Re\mathcal{H}$ onto
$\Re p\mathcal{H}$ as well.
\smallskip
The \emph{functional integral} $\int\! d\xi\, F(\xi)$
of a functional $F$ on $\Re \mathcal{H}$ is
the limit of the normalized Lebesgue integrals over
the finite dimensional spaces $p\Re \mathcal{H}$
as $p$ converges to the unit operator $\mathbf{1}$, i.e., for
every $\epsilon > 0$ there exists $p_{\epsilon}$ such that if
$p\Re \mathcal{H}\supset p_{\epsilon}\Re \mathcal{H}$ then
the absolute value
\begin{equation}
|(2\pi)^{-r(p)/2}\int\! d(p\xi)\, F(p\xi)-
(2\pi)^{-r(p_{\epsilon})/2}\int\! d(p_{\epsilon}\xi)\,
F(p_{\epsilon}\xi)|<\epsilon.
\end{equation}
The finite-dimensional renormalizations are chosen so that the
Gaussian functional integral
\begin{equation}
\int \! d\xi\,e^{-\|\xi\|^{2}/2}=1.
\end{equation}
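As a numerical aside (a sketch, not part of the argument), the choice of the normalization $(2\pi)^{-r(p)/2}$ can be checked in one dimension by quadrature, and in rank $r$ by the product structure of the Gaussian:

```python
import numpy as np

# One-dimensional normalized Gaussian integral on a wide grid;
# the tails beyond |x| = 10 are below 1e-20 and can be ignored.
x = np.linspace(-10.0, 10.0, 20001)
h = x[1] - x[0]
one_dim = np.sum(np.exp(-x**2 / 2)) * h / np.sqrt(2 * np.pi)

# By Fubini, the normalized integral over R^r is the r-th power.
r = 5
r_dim = one_dim ** r

print(abs(one_dim - 1.0) < 1e-9, abs(r_dim - 1.0) < 1e-9)
```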
A \emph{flag} $(p_{n})=p_{1}<\ldots<p_{n}<\ldots$ is an increasing
sequence of orthogonal $*$-projectors such that the union
$\cup (p_{n}\mathcal{H})$ is dense in $\mathcal{H}$.
\begin{proposition}
For any flag $(p_{n})$
\begin{equation}
\lim_{n\rightarrow\infty}(2\pi)^{-r(p_{n})/2}\int\!d(p_{n}\xi)\, F(p_{n}\xi)
=\int\!d\xi\, F(\xi).
\end{equation}
\end{proposition}
$\triangleright$\quad
Since $\cup (p_{n}\mathcal{H})$ is dense in $\mathcal{H}$,
for any positive $\epsilon$ there exists a projector $p_{n}$ that has the same rank as $p_{\epsilon}$ and the (constant) Jacobian of the
orthogonal projection of $p_{\epsilon}\mathcal{H}$ onto
$p_{n}\mathcal{H}$ is within $\epsilon$ from $1$. Now
for any $p_{m} > p_{n}$, the orthogonal projections from
$(p_{\epsilon} + p_{m})\mathcal{H}$ onto $p_{m}\mathcal{H}$
have the same Jacobian.
Thus the integrals in the left hand side of the equation are within $\epsilon$ from the integral on the right hand side.
\hfill $\triangleleft$
\begin{proposition}
The functional integral has the following properties:
\begin{enumerate}
\item $\int\! d\xi\, F(\xi)$
is a positive linear functional on the space of integrable functionals $F$.
\item The integral over a product Hilbert space is equal to the iterated
functional integrals.
\item Integration by parts: Let $D_{\eta}F$ denote the
directional derivative
of $F$ in the direction of $\eta\in\Re\mathcal{H}$. Then
\begin{equation}
\int\! d\xi \,F(\xi)\,D_{\eta}G(\xi)= - \int\!d\xi \,D_{\eta}F(\xi)\,G(\xi)
\end{equation}
if $FG\rightarrow 0 $ as $\|\xi\|\rightarrow\infty$ and both integrals exist.
\item The functional integral is invariant under translations
and orthogonal transformations in $\mathcal{H}$.
\end{enumerate}
\end{proposition}
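A one-dimensional sanity check of the integration-by-parts property (a hedged sketch with arbitrary rapidly decaying test functions, standing in for the infinite-dimensional statement):

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 200001)
h = x[1] - x[0]

# Rapidly decaying test functions and their exact derivatives.
F = np.exp(-x**2 / 2)
dF = -x * np.exp(-x**2 / 2)
G = x * np.exp(-x**2 / 4)
dG = (1.0 - x**2 / 2) * np.exp(-x**2 / 4)

# int F G' dx = - int F' G dx, since F*G vanishes at infinity.
lhs = np.sum(F * dG) * h
rhs = -np.sum(dF * G) * h
print(abs(lhs - rhs) < 1e-8)
```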
$\triangleright$\quad
These properties follow directly from the corresponding properties of
finite-dimensional Lebesgue integrals. (For the integration by parts
note that for a given direction $\eta\in\Re\mathcal{H}$
we may choose the projectors $p$ such that $p\eta=\eta$.)
\hfill $\triangleleft$
\subsubsection{Gauss Fock representation on $\Re\mathcal{H}$}
In the \emph{Gauss (or real wave) Fock representation on} $\Re\mathcal{H}$
(compare with \cite{Friedrichs} and \cite{BSZ} )
\begin{itemize}
\item
$\mathcal{F}(\mathcal{H})$ is the Gauss Hilbert space
$\mathcal{G}(\mathcal{H})$, the completion of the space $\mathcal{L}^{2}(\Re\mathcal{H},e^{-\|\xi\|^{2}/2}\,d\xi)$
of functionals $F=F(\xi),\ \xi\in\Re\mathcal{H}$, with $F^{*}(\xi)=\overline{F(\xi)}$ (complex conjugation) and
the Hermitian product
\begin{equation}
F^{*}G=\int\! d\xi e^{-\|\xi\|^{2}/2}F^{*}(\xi)G(\xi);
\end{equation}
\item
the vacuum vector $F_{0}=1$;
\item
the annihilators and creators are
\begin{equation}
\mathcal{F}^{-}(\phi^{*})F(\xi)=D_{\phi^{*}}F(\xi),\quad
\mathcal{F}^{+}(\phi)F(\xi)=(-D_{\phi}+\xi\phi)F(\xi).
\end{equation}
\end{itemize}
\emph{Occasionally we denote $\mathcal{F}^{-}$ and $\mathcal{F}^{+}$ in $\mathcal{G}$
as $\mathcal{G}^{-}$ and $\mathcal{G}^{+}$}.
\subsubsection{Bargmann Fock representation on $\mathcal{H}$}
Since $\mathcal{H}$, as a real Hilbert space, has been identified
with $\Re(\mathcal{H}\times\mathcal{H}^{*})$,
the \emph{functional integral} $\int\! d\phi^{*}d\phi\, F(\phi,\phi^{*})$
is defined as the limit of Lebesgue
integrals over finite dimensional $*$-subspaces $p\mathcal{H}$.
Now the normalizing constants are $\pi^{-r(p)}$ so that
\begin{equation}
\int\! d\phi^{*}d\phi\, e^{-\phi^{*}\phi} =1.
\end{equation}
The Hermitian adjoint \emph{Cauchy-Riemann operators} on $\mathcal{H}$ are
\begin{equation}
\partial_{\zeta}=(1/2)(D_{\Re \zeta}-iD_{\Im \zeta}),\
\partial_{\zeta}^{*}=(1/2)(D_{\Re \zeta}+iD_{\Im \zeta}),
\end{equation}
the former being linear and the latter anti-linear in $\zeta$.
\smallskip
A \emph{continuous} functional $F$ on $\mathcal{H}^{\infty}$
is an \emph{entire functional} if
$\partial_{\zeta}^{*}F(\phi,\phi^{*})=0$ for all $\phi$ and
$\zeta$. Notationally $F=F(\phi)$.
A \emph{continuous} functional $F$ on $\mathcal{H}^{\infty}$
is an \emph{anti-entire functional} if
$\partial_{\zeta}F(\phi,\phi^{*})=0$ for all $\phi$ and
$\zeta$. Notationally $F=F(\phi^{*})$.
\smallskip
In the \emph{Bargmann (or complex wave) Fock representation} on
$\mathcal{H}$ (see \cite{Bargmann})
\begin{itemize}
\item
the Fock space $\mathcal{F}(H)$ is the Bargmann space
$\mathcal{B}(\mathcal{H})$, the (closed) subspace of
anti-entire functionals
$F=F(\phi^{*})$ in $\mathcal{L}^{2}(\mathcal{H}^{*}\times\mathcal{H},
e^{-\phi^{*}\phi}d\phi^{*} d\phi)$;
\item The conjugation $F^{*}(\phi^{*})=\overline{F[(\phi^{*})^{*}]}$
\item The vacuum functional $F_{0}=1$;
\item The annihilation and creation operators are
\begin{equation}
\mathcal{F}^{-}(\zeta^{*})F(\phi^{*})=\partial_{\zeta^{*}}F(\phi^{*}),\
\mathcal{F}^{+}(\zeta)F(\phi^{*})= (\phi^{*}\zeta)F(\phi^{*}).
\end{equation}
\end{itemize}
\emph{Occasionally we denote $\mathcal{F}^{-}$ and $\mathcal{F}^{+}$ in $\mathcal{B}$
as $\mathcal{B}^{-}$ and $\mathcal{B}^{+}$}.
\subsection{Bargmann-Segal transform}
The \emph{coherent functionals} $F_{\alpha}$ on $\mathcal{H}$ are
\begin{equation}
F_{\alpha} =\sum_{n=0}^{\infty}\frac{1}{n!}\mathcal{F}^{+}(\alpha)^{n}F_{0},\
\alpha\in\mathcal{H}.
\end{equation}
By induction, Fock commutation relations imply
\begin{equation}
[\mathcal{F}^{+}(\alpha)^{m}F_{0}]^{*}[\mathcal{F}^{+}(\beta)^{n}F_{0}]
= \delta_{mn}\, n!\,(\alpha^{*}\beta)^{n},
\end{equation}
so that
\begin{equation}
F_{\alpha}^{*}F_{\beta}=e^{\alpha^{*}\beta}.
\end{equation}
Then $F_{\alpha}^{*}F_{\alpha} < \infty$ so that
$F_{\alpha}\in\mathcal{F}$ and
the correspondence between $\alpha$ and $F_{\alpha}$ is
one to one.
Note that in Bargmann space $\mathcal{B}$ the coherent functionals
$F_{\alpha}(\psi^{*})=\exp(\psi^{*}\alpha)$.
The entire functional $F(\alpha)=F^{*}F_{\alpha}$ of the argument $\alpha\in\mathcal{H}$
is the \emph{ Bargmann-Segal transform} of $F\in\mathcal{F}$.
The following proposition is fundamental (see \cite{Berezin71}):
\begin{proposition}
The coherent functionals $F_{\alpha},\ \alpha\in\mathcal{H}$,
form a continual orthogonal basis of $\mathcal{F}$ as follows:
\begin{enumerate}
\item
Every $F\in \mathcal{F}$ has the weak expansion in $F_{\alpha}$:
\begin{equation}
F(\beta^{*})= \int\!d\alpha^{*}d\alpha\,
e^{-\alpha^{*}\alpha}e^{\beta^{*}\alpha}\, F(\alpha^{*}).
\end{equation}
\item If $G ,F\in\mathcal{F}$ then
\begin{equation}
G^{*}F = \int\!d\alpha^{*}d\alpha\, G^{*}(\alpha)F(\alpha^{*}).
\end{equation}
in particular, $\|F \|^{2} = \int\!d\alpha^{*}d\alpha\,
|F(\alpha^{*})|^{2}$ so that the Bargmann-Segal transform is one to one.
\end{enumerate}
\end{proposition}
$\triangleright$\quad
The first part follows from the weak convergence of functional integrals
\begin{equation}
F^{*} \int\!d\alpha^{*}d\alpha\,e^{\beta^{*}\alpha}\,
F_{\beta^{*}}(\alpha^{*})=
\int\!d\alpha^{*}d\alpha\,
e^{\beta^{*}\alpha}\,( F^{*} F_{\beta^{*}})(\alpha^{*}).
\end{equation}
By the same token the second part follows from the first. In both
cases the commutation with integration is justified by integration over
finite dimensional $*$-subspaces of $\mathcal{H}$. \hfill $\triangleleft$
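For one mode the resolution $\|F \|^{2} = \int\!d\alpha^{*}d\alpha\,|F(\alpha^{*})|^{2}$ can be spot-checked: in Bargmann space the monomials $\alpha^{*n}/\sqrt{n!}$ should come out orthonormal under the Gaussian-weighted integral $(1/\pi)\int e^{-|\alpha|^{2}}\,\cdot\;d^{2}\alpha$. A quadrature sketch, assuming NumPy:

```python
import numpy as np
from math import factorial

# Grid over the complex alpha-plane; tails beyond |alpha| = 6 are negligible.
t = np.linspace(-6.0, 6.0, 601)
h = t[1] - t[0]
u, v = np.meshgrid(t, t)
alpha = u + 1j * v
weight = np.exp(-np.abs(alpha)**2) * h * h / np.pi

def mono(n):
    # Bargmann monomial of degree n, evaluated on the grid.
    return np.conj(alpha)**n / np.sqrt(factorial(n))

# Gram matrix of the first few monomials: should be the identity.
gram = np.array([[np.sum(weight * np.conj(mono(m)) * mono(n))
                  for n in range(4)] for m in range(4)])
print(np.allclose(gram, np.eye(4), atol=1e-6))
```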
\subsection{ Fock Sobolev scales}
Let $o$ be a real (i.e., commuting with the conjugation)
selfadjoint non-negative operator in $\mathcal{H}$ with the
discrete spectrum $\{\lambda_{k}: k=1,2,\ldots\}$. In particular each
$\lambda_{k}$ has a finite multiplicity $m_{k}$. Assume that the operator $(1+o)^{-p}$ has finite trace for some $p>0$.
\textsc{examples}: the harmonic oscillator operator
$-\nabla^{2}+x^{2}$ in $\mathcal{H}=\mathcal{L}^{2}(\mathbb{R}^{n})$,
positive globally hypoelliptic operators in
$\mathcal{L}^{2}(\mathbb{R}^{n})$ (see \cite{Shubin}),
Beltrami Laplacians, or, more generally,
positive elliptic operators in $\mathcal{L}^{2}(M)$ on
compact Riemann manifolds $M$ (see \cite{Shubin}).
\smallskip
For $s\geq 0$, denote by $\mathcal{H}^{s}$ the \emph{ Hilbert $*$-space}
of all $\phi\in \mathcal{H}$ with the Hermitian product
$\phi^{*}(1+o)^{s}\psi$. Its antidual $\mathcal{H}^{-s}$ with respect
to the basic Hermitian form $\alpha^{*}\beta$ is the completion of
$\mathcal{H}$ with respect to the Hermitian product $\phi^{*}(1+o)^{-s}\psi$.
If $s'>s$, then $\mathcal{H}^{s'}$ is a dense subspace
of $\mathcal{H}^{s}$, and the inclusions are continuous. Therefore,
by definition, the family of the Hilbert $*$-spaces
$\mathcal{H}^{s},\ -\infty<s<\infty$ is a \emph{ Sobolev scale} generated by $o$.
The intersection $\mathcal{H}^{\infty}=\bigcap_{s}\mathcal{H}^{s}$
is the Frechet space with the topology of simultaneous convergence with respect to all
Hilbert norms. Since $(1+o)^{-p}$ has finite trace for some $p>0$,
the space $\mathcal{H}^{\infty}$ is nuclear.
Its antidual with respect to the basic Hermitian form $\alpha^{*}\beta$ is the strict
inductive limit (see \cite{RS}, Section V.4)
$\mathcal{H}^{-\infty}=\bigcup_{s}\mathcal{H}^{s}$,
a nuclear space again.
Thus we get a Gelfand triple
\begin{equation}
\mathcal{H}^{\infty}\subset\mathcal{H}\subset\mathcal{H}^{-\infty}.
\end{equation}
Similarly, starting with the Fock quantized $\mathcal{F}$ and $\mathcal{F}(o)$
instead of $\mathcal{H}$ and $o$, we get the
\emph{Fock scale} of Hilbert spaces $\mathcal{F}^{s}$ and
the triple (see \cite{KMT} and \cite{BSZ}, Section 7.3)
\begin{equation}
\mathcal{F}^{\infty}\subset\mathcal{F}\subset\mathcal{F}^{-\infty}.
\end{equation}
Using $\mathcal{F}$ and $\dot{\mathcal{F}}(o)$ instead of $\mathcal{F}(o)$
we obtain the \emph{tangential Fock scale} of the Hilbert spaces
$\dot{\mathcal{F}}^{s}$ and the triple
\begin{equation}
\dot{\mathcal{F}}^{\infty}\subset\mathcal{F} \subset
\dot{\mathcal{F}}^{-\infty}.
\end{equation}
Note that the product states
$[\prod_{j=1}^{n}\mathcal{F}^{+}(\phi_{j})]F_{0}$ belong to
$\mathcal{F}^{\infty}$ if and only if all $\phi_{j}\in$
\mathcal{H}^{\infty}$.
\textsc{example} Consider the Fock representation over $\mathcal{H}=\mathbb{C}^{d}$
with the standard complex conjugation:
\begin{eqnarray*}
& &
\mathcal{F}=\mathcal{L}^{2}(\mathbb{R}^{d}),\quad
F_{0}=(2\pi)^{-d/4}e^{-x^{2}/4},\\
& &
\mathcal{F}^{-}(u-iv)F(x)=(u-iv)\cdot(x/2+\nabla_{x})F(x),\\
& &
\mathcal{F}^{+}(u+iv)F(x)=(u+iv)\cdot(x/2-\nabla_{x})F(x)
\end{eqnarray*}
Let $o=1$.
Then (see \cite{Obata}, Section 6.2) $\mathcal{F}^{\infty}(\mathbb{C}^{d})$
consists of all real analytic functions $F(x)$ such that
for any $\epsilon>0$
\begin{equation}
e^{(1/4 - \epsilon)x^{2}}F\in\mathcal{L}^{1}(\mathbb{R}^{d}),
\end{equation}
and the Fourier transform $G(z)$ of $e^{- x^{2}/4}F$ satisfies
\begin{equation}
|G(z)|\prec \exp[(1/2 - \epsilon)z^{2}]
\end{equation}
for all $z\in\mathbb{C}^{d}$.
\smallskip
On the other hand, $\dot{\mathcal{F}}^{\infty}(\mathbb{C}^{d})$ is the Schwartz space
$\mathcal{S}(\mathbb{R}^{d})$ of rapidly decreasing infinitely
differentiable functions on $\mathbb{R}^{d}$ (see \cite{KMT}, p.185).
\smallskip
Note that, if $\phi\in \mathcal{H}^{\infty}$, then $\mathcal{F}^{\infty}$
and $\dot{\mathcal{F}}^{\infty}$ are
invariant for
$\mathcal{F}^{+}(\phi)$ and $\mathcal{F}^{-}(\phi^{*})$.
Also, since $\mathcal{H}^{\infty}$ is invariant for pseudodifferential
operators on $X$ (see \cite{Shubin}, Sections 4.3 and 23.2), they are invariant,
correspondingly, for quantized and
tangentially quantized pseudodifferential operators.
\begin{remark}
Under the unitary equivalence of Fock representations,
$\mathcal{F}^{\infty}$ and $\mathcal{F}^{-\infty}$ correspond to $(\mathcal{H}^{\infty})$
and $(\mathcal{H}^{-\infty})^{*}$ in
Hida's white noise calculus \mbox{(see \cite{Obata})}.
The spaces $ \dot{\mathcal{F}}^{\infty}$ and
$\dot{\mathcal{F}}^{-\infty}$ correspond to the maximal
Kristensen-Mejlbo-Poulsen space and their space of temperate distributions \mbox{(see \cite{KMT})}.
Thus their properties are immediately translated into the corresponding properties of $\mathcal{F}^{\infty}$
and $\mathcal{F}^{-\infty}$ and $ \dot{\mathcal{F}}^{\infty}$ and
$\dot{\mathcal{F}}^{-\infty}$.
In particular, $\mathcal{F}^{\infty}$ and $\mathcal{F}^{-\infty}$ are nuclear spaces.
However, the spaces $ \dot{\mathcal{F}}^{\infty}$ and
$\dot{\mathcal{F}}^{-\infty}$ are not nuclear. Still they have the Montel
property: their closed bounded subsets are compact. In particular,
these spaces are reflexive.
\end{remark}
\begin{proposition}
The map of $(\phi,F)$ to $\mathcal{F}^{-}(\phi^{*})F$ is continuous
\mbox{(a)}\ from $\mathcal{H}^{-\infty}\times\mathcal{F}^{\infty}$ to $\mathcal{F}^{\infty}$
(and, by duality, from $\mathcal{H}^{-\infty}\times\mathcal{F}^{-\infty}$ to
$\mathcal{F}^{-\infty}$);
\mbox{(b)}\ from $\mathcal{H}^{-\infty}\times
\dot{\mathcal{F}}^{\infty}$ to $\dot{\mathcal{F}}^{\infty}$ (and, by duality, from
$\mathcal{H}^{-\infty}\times\dot{\mathcal{F}}^{-\infty}$ to
$\dot{\mathcal{F}}^{-\infty}$).
\end{proposition}
$\triangleright$\quad
The first half of part (a) follows from Theorem 4.3.9 in \cite{Obata} for
annihilators $G(k_{0,1})$;
the first half of part (b) follows from the proof of Theorem 4.3.12 in \cite{Obata}
for its annihilators $G(k_{1,0})$. \hfill $\triangleleft$
\section{ Cutoff functional derivatives operators}
\subsection{Functional derivatives operators}
From now on we assume that $\mathcal{H}=\mathcal{L}^{2}(X)$, where $X$ is either a compact Riemann manifold or the Euclidean space $\mathbb{R}^{d}$, with the Riemannian measure $dx$.
The scaling operators $o$ are, correspondingly, the Beltrami Laplacian and the harmonic oscillator operator.
Then $\mathcal{H}^{\infty}$ is,
correspondingly, the space $\mathcal{C}^{\infty}(X)$ of infinitely differentiable
functions on the compact Riemann manifold $X$, and the Schwartz space
$\mathcal{S}(\mathbb{R}^{d})$ (see \cite{Shubin}, Section 7 and Section 25).
Since delta-functions $\delta_{x}=\delta_{x}^{*}$
belong to $\mathcal{H}^{-\infty}$, the operators $\mathcal{F}^{-}_{x}=\mathcal{F}^{-}(\delta_{x})$ are well defined, and, by Proposition 1.4, are continuous
in $\mathcal{F}^{\infty}$.
Let
\begin{equation}
\mathcal{F}^{-}_{x_{[n]}}=
\mathcal{F}^{-}_{x_{1}}\ldots\mathcal{F}^{-}_{x_{n}},\quad
(x_{1},\ldots, x_{n})\in X^{n}.
\end{equation}
By Proposition 1.4, for given $G,F\in \dot{\mathcal{F}}^{\infty}$, the matrix element
$G^{*}\dot{\mathcal{F}}^{-}_{x_{[n]}}F$, as a function of $x_{[n]}$, belongs to $(\mathcal{H}^{\infty})^{n}$.
If a \emph{Wick symbol} $W_{k,l}\in(\mathcal{H}^{-\infty})^{k+l}$
then $W_{k,l}\Big((\dot{\mathcal{F}}^{-}_{x_{[k]}}G)^{*}\dot{\mathcal{F}}^{-}_{y_{[l]}}F\Big)$ is a continuous bilinear form on $\dot{\mathcal{F}}^{\infty}$. It defines a continuous \emph{functional derivatives operator} $\widehat{W}_{k,l}$ from
$\dot{\mathcal{F}}^{\infty}$ to $\dot{\mathcal{F}}^{-\infty}$ which heuristically is
\begin{equation}
\widehat{W}_{k,l}=\int\!dx_{[k]}dy_{[l]}\, W_{k,l}(x_{[k]},y_{[l]})
\dot{\mathcal{F}}^{+}_{x_{[k]}}\dot{\mathcal{F}}^{-}_{y_{[l]}}.
\end{equation}
A finite sum $\widehat{W}=\sum_{k+l\leq m} \widehat{W}_{k,l}$ is a
\emph{functional derivatives operator of order} $m$ from $\dot{\mathcal{F}}^{\infty}$ to $\dot{\mathcal{F}}^{-\infty}$.
\smallskip
A functional derivatives operator is \emph{local} if the distributions $W_{k,l}=W_{k,l}(x)\delta(X)$,
where $X$ is identified with the submanifold $\{(x,x,...,x)\}\subset X^{k+l}$, and
$W_{k,l}(x)\in \mathcal{H}^{-\infty}$. Then
\begin{equation}
\widehat{W}=\int\!dx\,\sum_{k+l\leq m} W_{k,l}(x)(\mathcal{F}^{+}_{x})^{k}(\mathcal{F}^{-}_{x})^{l}.
\end{equation}
Annihilators $\dot{\mathcal{G}}^{-}(\phi^{*})$ in Gauss Fock representation are directional derivatives $D_{\phi^{*}}$.
Since delta-functions $\delta_{x}=\delta_{x}^{*}$
belong to $\mathcal{H}^{-\infty}$ it is possible to
consider the \emph{functional derivative}
$D_{x}F(\phi)= D_{\delta_{x}}F(\phi)$. Indeed, by Theorem 4.2.4 from \cite{Obata},
this directional derivative exists for
$F\in\mathcal{G}^{\infty}$ and coincides with
$D_{x}=\mathcal{G}^{-}(\delta_{x})$.
On the other hand, a translation is not a continuous operator
in $\dot{\mathcal{G}}^{\infty}$, so that $D_{\delta_{x}}$ is not directly defined on this space.
However, by Proposition 1.4, it may be continuously extended as
$\dot{\mathcal{G}}^{-}(\delta_{x})$. It is the limit of
a family $D_{\eta}$ as $\eta\in\mathcal{H}^{\infty}$ converges
to $\delta_{x}\in\mathcal{H}^{-\infty}$. By Proposition 1.4, this is a continuous operator in $\dot{\mathcal{G}}^{\infty}$, denoted again
as $D_{x}$.
By Proposition 1.4, the Hermitian adjoints of the functional derivatives $D_{x}$,
\begin{equation}
D_{x}^{\dagger}F(\phi)=(-D_{\delta_{x}}+\phi(x))F(\phi)
\end{equation}
are continuous operators in $\dot{\mathcal{G}}^{-\infty}$.
Thus the multiplication with $\delta_{x}$, which is the operator
$D_{x}+D_{x}^{\dagger}$, is continuous from
$\dot{\mathcal{G}}^{\infty}$ to $\dot{\mathcal{G}}^{-\infty}$.
The coherent state quadratic form $F_{\alpha}^{*}\widehat{W}F_{\beta}$ in
Bargmann space $\mathcal{B}$ is
\begin{eqnarray*}
& &
F_{\alpha}^{*}\Big[\int\!dx_{[k]}dy_{[l]}\,
W_{k,l}(x_{[k]},y_{[l]})
\prod_{i=1}^{k}\mathcal{B}^{+}(\delta_{x_{i}})\prod_{j=1}^{l}
\mathcal{B}^{-}(\delta_{y_{j}})\Big]
F_{\beta}\\
& &
=\int\!dx_{[k]}dy_{[l]}\,
W_{k,l}(x_{[k]},y_{[l]})
\prod_{i=1}^{k}\big(\mathcal{B}^{-}(\delta_{x_{i}})F_{\alpha}\big)^{*}\prod_{j=1}^{l}
\mathcal{B}^{-}(\delta_{y_{j}})
F_{\beta}\\
& &
=\int\!dx_{[k]}dy_{[l]}\, W_{k,l}(x_{[k]},y_{[l]})
\prod_{i=1}^{k}\alpha^{*}(x_{i})
\prod_{j=1}^{l}\beta(y_{j})
\int\! d\xi^{*}d\xi\:e^{-\xi^{*}\xi}
e^{\alpha^{*}\xi}e^{\xi^{*}\beta}\\
& &
=W_{k,l}(\alpha^{*},\beta)\,e^{\alpha^{*}\beta},
\end{eqnarray*}
where the \emph{Wick symbol} of $\widehat{W}_{k,l}$
\begin{equation}
W_{k,l}(\alpha^{*},\beta)=\int\!dx_{[k]}dy_{[l]}\, W_{k,l}(x_{[k]},y_{[l]})
\prod_{i=1}^{k}\alpha^{*}(x_{i})\prod_{j=1}^{l}\beta(y_{j})
\end{equation}
is a continuous holomorphic polynomial of order $(k,l)$ on
$\mathcal{H}^ {*\infty}\times\mathcal{H}^ {\infty}$.
A \emph{functional derivatives operator of order} $n$ is a finite sum of operators
$\widehat{W}=\sum_{k+l\leq n}\widehat{W}_{k,l}$ with the Wick symbol
$W(\alpha^{*},\beta)=\sum_{k+l\leq n}W_{k,l}(\alpha^{*},\beta)$.
The correspondence between functional derivatives operators and the Wick symbols is one to one.
The continuous complex analytic polynomial $W(\alpha^{*},\beta)$ is uniquely defined
by its Taylor coefficients at the origin $(0,0)$.
Therefore, the correspondence between $W(\alpha^{*},\beta)$ and the
restricted Wick symbols $W(\alpha^{*},\alpha)$ is one to one.
The restricted Wick symbols are continuous (real analytic) polynomials on $\mathcal{H}^{\infty}$.
Real valued restricted Wick symbols are \emph{Hamiltonian functionals}, and the
corresponding operators are \emph{Hamiltonian operators}.
\subsection{Cutoff functional derivatives operators}
A functional derivatives operator $\hat{H}$ is a \emph{cutoff} if its Hamiltonian
functional $W(\alpha^{*},\alpha)$ has the (unique) continuous extension from
$\mathcal{H}^{\infty}$ to $\mathcal{H}^{-\infty}$. This is equivalent to the inclusions
$W_{k,l}(x_{[k]},y_{[l]})\in (\mathcal{H}^{\infty})^{k+l}$
(see \cite{Obata}, the characterization theorem 3.6.2 ); in particular,
the polynomial $W(\alpha^{*},\alpha)$ belongs to
$\mathcal{G}(\mathcal{H}^{*}\times\mathcal{H})$.
The Hamiltonian functionals and their derivatives $D_{\phi}$ in the directions $\phi\in\mathcal{H}^{\infty}$ are, actually, integrable with respect to the Gauss measure on $\mathcal{H}^{-\infty}$.
A cutoff operator $\hat{H}$ is a continuous operator in $\dot{\mathcal{G}}^{\infty}$.
Thus it has a dense domain in $\mathcal{G}$. Its Hermitian adjoint
$\hat{H}^{\dagger}$ is also cutoff of the same order with complex conjugate Wick symbol $\bar{H}$. Thus cutoff operators are closable.
A cutoff operator $\hat{H}$ is symmetric on $\mathcal{G}^{\infty}$ if and only if its Hamiltonian functional is real-valued.
\begin{theorem}
Any functional derivatives operator $\hat{H}$ is the strong
limit of a sequence of cutoff operators $\hat{H}_{n}$.
\end{theorem}
$\triangleright$\quad
It suffices to consider operators $\hat{H}=\hat{H}_{k,l}$.
Separately for $X=\mathbb{R}^{d}$ and for $X$, a compact Riemann manifold, we construct
a sequence of cutoff Wick symbols $W_{n}$ from
$(\mathcal{H}^{\infty})^{k+l}$ which converges to $W_{k,l}$ in
$(\mathcal{H}^{-\infty})^{k+l}$ as $n\rightarrow\infty$.
Then the cutoff operators $\hat{H}_{n}$ strongly converge to $\hat{H}$ in the
topological operator space $\mathcal{L}(\mathcal{F}^{\infty},\mathcal{F}^{-\infty})$.
\smallskip
\textsc{case of} $X=\mathbb{R}^{d}$.
Let $\chi,\kappa$ be non-negative infinitely differentiable functions with compact
support on $\mathbb{R}^{d}$ such that $\chi(0)=1$ and $\int\!dy\; \kappa(y)=1$.
For every $x\in\mathbb{R}^{d}$ the sequence of $\kappa_{n,x}(y)= n^{d}\kappa(ny-x)$
from $\mathcal{S}(\mathbb{R}^{d})$ converges to the delta function
$\delta_{x}$ in $\mathcal{S}'(\mathbb{R}^{d})$ as $n\rightarrow\infty$.
At the same time the sequence of $\chi_{n}(x)=\chi(x/n)$ converges to $1$ in $\mathcal{S}'(\mathbb{R}^{d})$ as $n\rightarrow\infty$.
Now the sequence of the cutoff Wick symbols from $\mathcal{S}(\mathbb{R}^{d})^{k+l}$
\begin{equation}
W_{n}(x_{[k+l]})=\prod_{1}^{ k+l}\chi (x_{i}/n)\int\!\prod_{1}^{ k+l}dy_{i}\; \kappa_{n,x_{i}}(y_{i})W_{k,l}(y_{[k+l]})
\end{equation}
converges to $W_{k,l}(x_{[k+l]})$ in $\mathcal{S}'(\mathbb{R}^{d})^{k+l}$ as $n\rightarrow\infty$.
\smallskip
\textsc{Case of a compact Riemann manifold} $X$.
In this case $\chi_{n}(x)=1$ for all $x$.
Since the geodesic exponential mapping is one to one on an open neighborhood $W$
of the diagonal in $X\times X$ , for every pair $(x,y)\in W$ there is a
unique geodesic curve from $x$ to $y$ in $X$. Let $sy$ denote the point
at the geodesic distance $s$ from $x$.
Choose a non-negative infinitely differentiable function $\kappa(x,y)$
on $X\times X$ with support in $W$ such that
$\int\!dy\; \kappa(x,y)=1$ for all $x$. Let $\kappa_{x}(y)= \kappa(x,y)$, and let
$\kappa_{n,x}$ be obtained by contracting $\kappa_{x}$ along the geodesics through $x$ by the factor $1/n$ and renormalizing, so that $\kappa_{n,x}\rightarrow\delta_{x}$ as $n\rightarrow\infty$.
Then the sequence of the cutoff Wick symbols
\begin{equation}
W_{n}(x_{[k+l]})=\int\!\prod_{1}^{ k+l}dy_{i}\; \kappa_{n,x_{i}}(y_{i})W_{k,l}(y_{[k+l]})
\end{equation}
belongs to $(\mathcal{H}^{\infty})^{k+l}$ and converges to the Wick symbol $W_{k,l}(x_{[k+l]})$ in the topology of $(\mathcal{H}^{-\infty})^{k+l}$
as $n\rightarrow\infty$.
\hfill $\triangleleft$
A continuous polynomial
$A(\phi^{*},\phi)\in
\mathcal{G}(\mathcal{H}^{*}\times\mathcal{H})$ is the \emph{antinormal symbol}
of $\widehat{W}$ if the coherent state matrix of $\widehat{W}$ in the
Bargmann Fock space $\mathcal{B}$
\begin{equation}
F_{\alpha}^{*}\widehat{W}F_{\beta}=\int\! d\phi^{*}d\phi\; e^{-\phi^{*}\phi}e^{\alpha^{*}\phi}
A(\phi^{*},\phi)e^{\phi^{*}\beta}.
\end{equation}
The functional $e^{\alpha^{*}\phi}$ of $(\alpha^{*},\phi)$ is the integral kernel of
the identity operator on the closed Bargmann subspace $\mathcal{B}$ of anti-entire
functionals $A(\phi^{*})$ in the Gauss Hilbert space $\mathcal{G}$, and is orthogonal
to all entire functionals $E(\phi)$. Therefore
$e^{\alpha^{*}\phi}$ is the integral kernel of the orthogonal projector $\mathbf{P}$
of $\mathcal{G}$ onto $\mathcal{B}$.
\begin{theorem}
Let $\widehat{W}$ be a local cutoff functional derivative operator.
If the Hamiltonian functional $W(\alpha^{*},\alpha)$ is bounded from below on
$\mathcal{H}^{\infty}$ then the Hamiltonian operator $\widehat{W}$ is
lower bounded on $\mathcal{B}^{\infty}$.
\end{theorem}
$\triangleright$\quad
The Hamiltonian functional
\begin{eqnarray*}
& &
W(\alpha^{*},\alpha)=e^{-\alpha^{*}\alpha}\int\!d\phi^{*}d\phi\;
e^{-\phi^{*}\phi + \alpha^{*}\phi+\phi^{*}\alpha}A(\phi^{*},\phi)\\
& &
=\int\!d\phi^{*}d\phi\;e^{-(\alpha^{*}-\phi^{*})(\alpha-\phi)}
A(\phi^{*},\phi).
\end{eqnarray*}
The Poisson transformation semigroup
\begin{eqnarray*}
& &
W(\alpha^{*},\alpha;t)=\int\!d\phi^{*}d\phi\,e^{-(\alpha^{*}-\phi^{*})(\alpha-\phi)/t}A(\phi^{*},\phi)\\
& &
=\int\!d\phi^{*}d\phi\,e^{-\phi^{*}\phi}
A(\alpha^{*}+\sqrt{t}\,\phi^{*},\alpha+\sqrt{t}\,\phi)
\end{eqnarray*}
is the fundamental solution for the diffusion equation
\begin{equation}
(\partial_{t}-\Delta_{G})W(\alpha^{*},\alpha;t)=0,\ t > 0,\
W(\alpha^{*},\alpha;0+)=A(\alpha^{*},\alpha),
\end{equation}
where $\Delta_{G}=\int\!dx\,D_{x}^{2}$
is the \emph{Gross Laplacian} (see \cite{Obata}, Section 5.3).
By theorem 5.2.5 from \cite{Obata}, the Poisson group is a strongly continuous
operator semigroup in $\mathcal{G}^{\infty}$
generated by $\Delta_{G}$. Note that antinormal symbols of all cutoff Hamiltonian operators belong to $\mathcal{G}^{\infty}$.
The Gross Laplacian maps continuously $\mathcal{G}^{\infty}$ into $\mathcal{G}^{\infty}$
(see \cite{Obata}, Proposion 5.3.2) and, therefore, has a dense domain in $\mathcal{G}$
which includes antinormal symbols of all cutoff Hamiltonian operators.
All above shows that the Hamiltonian functional
\begin{equation}
W(\alpha^{*},\alpha)=e^{\Delta_{G}}A(\alpha^{*},\alpha)=\sum_{n\geq 0}(1/n!)\,\Delta_{G}^{n}A(\alpha^{*},\alpha),
\end{equation}
the latter series being just a finite sum, justifying the heuristic
expression for $e^{\Delta_{G}}$. Now the formal inversion makes sense
\begin{equation}
A(\alpha^{*},\alpha)=e^{-\Delta_{G}}W(\alpha^{*},\alpha)=\sum_{n\geq 0}[(-1)^{n}/n!]\,\Delta_{G}^{n}W(\alpha^{*},\alpha).
\end{equation}
This shows, in particular, that any cutoff operator $\widehat{W}$
has a unique antinormal symbol $A$; that the polynomials
$W(\alpha^{*},\alpha)$ and $A(\alpha^{*},\alpha)$ have the same order;
and that the order of the polynomial $W(\alpha^{*},\alpha)-A(\alpha^{*},\alpha)$
is strictly less than the order of the polynomial
$A(\alpha^{*},\alpha)$.
Since the lower bound of the operator $\widehat{W}$ is never less than the lower bound
of its antinormal symbol $A$, this completes the proof. \hfill $\triangleleft$
\subsection{Antinormal Feynman integral}
By Theorem 2.2, a cutoff operator $\hat{H}$ with a lower bounded Hamiltonian functional $H$ has a Friedrichs extension from $\mathcal{H}^{\infty}$. We keep the notation $\hat{H}$ for this extension.
\begin{theorem}
Let $A(\phi^{*},\phi)$ be the antinormal symbol of a cutoff operator $\hat{H}$.
Then the coherent state matrix
$F_{\alpha}^{*}e^{-i\hat{H}}F_{\beta}$ is equal to
\begin{equation}
\lim_{N\rightarrow \infty}
\int\prod_{j=1}^{N}\!d\phi_{j}^{*}d\phi_{j}\,
\exp\sum_{j=0}^{N}\Big[(\phi_{j+1}-\phi_{j})^{*}\phi_{j} -
iA(\phi_{j}^{*},\phi_{j})/N\Big]
\end{equation}
with $\phi_{N+1}= \alpha,\ \phi_{0} = \beta$.
\end{theorem}
$\triangleright$\quad
As in \cite{KS}, pp. 69-70, consider the strongly differentiable operator family in $\mathcal{B}$
\begin{equation}
[O(t)F](\alpha^{*})= \int\!d\phi^{*}d\phi\, e^{-\phi^{*}\phi}
e^{\alpha^{*}\phi}e^{-iA(\phi^{*},\phi)t}F(\phi^{*})
\end{equation}
We have $\|O(t)\|\leq 1$ (since $|e^{-iA(\phi^{*},\phi)t}|=1$), and
the strong $t$-derivative at the origin is $O'(0)=-i\hat{H}$. Then, by Chernoff's product theorem \cite{Chernoff}, the evolution operator
\begin{equation}
e^{-i\hat{H}}F=\lim_{N\rightarrow \infty}[O(1/N)]^{N}F.
\end{equation}
\smallskip
The coherent state matrix $F_{\alpha}^{*}[O(1/N)]^{N}F_{\beta}$ is
the $N$-iterated Gaussian integral over $\mathcal{H}$ which, by Fubini's theorem,
is equal to the $N$-multiple Gaussian integral over $\mathcal{H}^{N}$. \hfill $\triangleleft$
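The Chernoff limit can be watched explicitly in a one-mode toy example of our own (not taken from the text): for $A=\omega\,\phi^{*}\phi$, whose antinormal quantization is $\hat H=\omega(\hat N+1)$, the operator $O(t)$ is diagonal in the Fock basis with eigenvalues $(1+i\omega t)^{-(n+1)}$ (a single radial Gaussian integral), so $[O(1/N)]^{N}$ converges to $e^{-i\hat H}$ at rate $O(1/N)$:

```python
import numpy as np

omega, n = 1.3, 2                      # illustrative frequency and Fock level
exact = np.exp(-1j * omega * (n + 1))  # eigenvalue of e^{-iH}, H = omega*(N+1)

def chernoff(N):
    # <n| [O(1/N)]^N |n>  with  <n| O(t) |n> = (1 + i*omega*t)^{-(n+1)}
    return (1 + 1j * omega / N) ** (-N * (n + 1))

errors = [abs(chernoff(N) - exact) for N in (10**2, 10**3, 10**4)]
```

The error here behaves like $(n+1)\omega^{2}/(2N)$, visibly shrinking as $N$ grows.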
\begin{remark}
In the notation $\tau_{j}=jt/N,\ \phi_{\tau_{j}} = \phi_{j},\
j=0,1,2,\ldots,N$,
and $\Delta \tau_{j}= \tau_{j+1}-\tau_{j}$, the multiple integral (2) is
\begin{equation}
\int\prod_{j=1}^{N}\!d\phi_{\tau_{j}}^{*}d\phi_{\tau_{j}}\:
\exp i\sum_{j=0}^{N}\Delta \tau_{j}\left[-i(\Delta\phi_{\tau_{j}}/
\Delta \tau_{j})^{*} \phi_{\tau_{j}} -
A(\phi_{\tau_{j}}^{*},\phi_{\tau_{j}})\right].
\end{equation}
Its limit as $N\rightarrow\infty$ is a rigorous mathematical definition
of the heuristic Hamiltonian Feynman type integral over histories, with the
higher derivatives renormalization $A$ of the Hamiltonian functional $H$,
for the coherent state matrix
\begin{equation}
\int_{\alpha}^{\beta}\prod_{0< \tau < t}\!d\phi_{\tau}^{*}d\phi_{\tau}\:
\exp i\int_{0}^{t}d\tau
\left[-i(\partial_{\tau}\phi_{\tau})^{*} \phi_{\tau} -
A(\phi_{\tau}^{*},\phi_{\tau})\right].
\end{equation}
\end{remark}
\section{Quantized Galerkin approximations}
Let $\{p_{n}\}$ be a flag of \emph{finite dimensional} orthogonal projectors
in $\mathcal{H}^{\infty}$ (so that $ p_{n}$ is an
increasing sequence of orthogonal projectors strongly converging to the
unit operator on a dense subspace in $\mathcal{H}$).
Then $\{P_{n}=\mathcal{G}(p_{n})\}$ is the corresponding flag of
infinite dimensional \emph{quantized } orthogonal projectors in $\mathcal{G}$.
\smallskip
For a cutoff Hamiltonian operator $\hat{H}$ in the Gauss space $\mathcal{G}$, the
\emph{reduced Hamiltonian operators } $\hat{H}_n$ are
Friedrichs extensions of $P_{n}\hat{H}P_{n}$ in $\mathcal{G}$. They are uniformly bounded from below.
\begin{proposition}
Reduced Hamiltonian operators $\hat{H}_n$ are
polynomial partial differential operators in $P_{n}\mathcal{G}$ with the
normal symbol $H(p_{n}\alpha^{*},p_{n}\alpha)$.
\end{proposition}
$\triangleright$\quad
Coherent state matrix elements of $\hat{H}_n$ are
$F^{*}_{\alpha}P_{n}\hat{H}P_{n}F_{\beta}=F^{*}_{p_{n}\alpha}\hat{H}F_{p_{n}\beta}$. \hfill $\triangleleft$
\medskip
Let $f$ be a complex bounded continuous function on the half-axis
$\mathbb{R}^{+}$.
Then, by the spectral theorem, for any selfadjoint non-negative operator $T$ in
$\mathcal{G}$ the operator $f(T)$ is bounded with the operator norm $\leq \sup |f|$.
If a family of such functions $f_{t}$ depends continuously on a parameter $t$
in a compact $K\subset\mathbb{R}$ then the operator family $f_{t}(T)$ is
uniformly strongly continuous on $K$ with respect to $t$.
\begin{theorem}
The operators $f_{t}(\hat{H})$ are strong operator limits of the
operators $f_{t}(\hat{H}_n)$ as $n\rightarrow\infty$, uniformly on compact
$t\geq 0$-intervals.
\end{theorem}
$\triangleright$\quad
\textsc{part i}\
The sequence $\hat{H}_nF$ converges strongly to $\hat{H}F$ in $\mathcal{G}$.
$\triangleright$\quad
Since the cutoff operator $\hat{H}$ is continuous
in $\mathcal{G}^{\infty}$, the bilinear form $G^{*}\hat{H}F$ is
separately continuous on that Frechet space. By a Banach theorem
(see \cite{Rudin}, Theorem 2.17), the bilinear form is actually jointly continuous
on $\mathcal{G}^{\infty}$. Along with the equality
\begin{equation}
P_{n}F(\phi^{*},\phi)=F(p_{n}\phi^{*},p_{n}\phi),
\end{equation}
this implies that the operator $\hat{H}$ is the weak limit of
$\hat{H}_n=P_{n}\hat{H}P_{n}$ in $\mathcal{G}^{\infty}$.
Since $\mathcal{G}^{\infty}$ is a Montel space, a weakly convergent sequence
$P_{n}\hat{H}P_{n}F$ converges in the topology of $\mathcal{G}^{\infty}$, and, therefore, of $\mathcal{G}$. \hfill $\triangleleft$
\smallskip
\textsc{part ii}\
If $\lambda$ is a given complex number with non-zero imaginary part,
then, for any $G\in\mathcal{G}$, the sequence of the resolvents
$(\lambda-\hat{H}_n)^{-1}G$ converges strongly to
$(\lambda-\hat{H})^{-1}G$.
$\triangleright$\quad
Since the operator norms $\|(\lambda-\hat{H}_n)^{-1}\|$ are uniformly bounded,
it suffices to consider the dense set of
$G=(\lambda-\hat{H})^{-1}F$ with $F\in\mathcal{G}^{\infty}$. In such a case
\begin{eqnarray*}
& &
\|(\lambda-\hat{H}_n)^{-1}F-(\lambda-\hat{H})^{-1}F\|\\
& &
= \|(\lambda-\hat{H}_n)^{-1}(\hat{H}_n-\hat{H})
(\lambda-\hat{H})^{-1}F\|\\
& &
\leq |\Im\lambda|^{-1}\|(\hat{H}_n-\hat{H})
(\lambda-\hat{H})^{-1}F\|,
\end{eqnarray*}
which converges to zero by \textsc{part i}.
\smallskip
\textsc{part iii}\ As in the proof of Theorem VIII.20 in \cite{RS},
\textsc{part ii} implies the strong convergence of $f_{t}(\hat{H}_n)$ to
$f_{t}(\hat{H})$ (uniformly on compact $t$-intervals).
\hfill $\triangleleft$
\begin{corollary}
The sequence $e^{-i\hat{H}_nt}$ converges strongly to
$e^{-i\hat{H}t}$ as $n\rightarrow\infty$, uniformly on compact $t$-intervals.
In particular, any solution $F(\phi^{*},\phi;t)$ of the corresponding functional
derivatives Schr\"{o}dinger equation
\begin{equation}
\partial_{t}F+i\hat{H}F=0,\ F(\phi^{*},\phi;0) \in\mathcal{D}(\hat{H})
\end{equation}
is the limit of the solutions $F_{n}\in\mathcal{D}(\hat{H}_n)$ as $n\rightarrow\infty$
of the partial differential Schr\"{o}dinger equations
\begin{equation}
\partial_{t}F_{n}+i\hat{H}_nF_{n}=0,\ F_{n}(\phi^{*},\phi;0)
=P_{n}F(\phi^{*},\phi; 0)\in\mathcal{D}(\hat{H}_n)
\end{equation}
uniformly on compact $t$-intervals.
\end{corollary}
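A finite dimensional toy version of the corollary (a matrix stand-in of our own for the Gauss space, not from the text) shows the mechanism: truncate a positive ``Hamiltonian'' matrix by the projectors $P_n$ and compare the propagated vectors, computing $e^{-iMt}$ via the spectral theorem.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 40                                        # finite stand-in for the Gauss space
B = rng.standard_normal((d, d))
H = B @ B.T                                   # bounded-below (here positive) toy Hamiltonian
F = rng.standard_normal(d)
F /= np.linalg.norm(F)

def propagate(M, v, t=1.0):
    """e^{-iMt} v via the spectral theorem (M real symmetric)."""
    w, V = np.linalg.eigh(M)
    return (V * np.exp(-1j * w * t)) @ (V.conj().T @ v)

exact = propagate(H, F)

def reduced_error(nproj):
    """Error of the reduced evolution e^{-i H_n t} P_n F against the full one."""
    P = np.zeros((d, d))
    P[:nproj, :nproj] = np.eye(nproj)         # the projector P_n
    return np.linalg.norm(propagate(P @ H @ P, P @ F) - exact)

errs = [reduced_error(nproj) for nproj in (10, 25, 40)]
```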
\begin{remark}
Theorem 2.3 (applied to $\mathcal{H}_{n}=p_{n}\mathcal{H}$) shows that
the antinormal Feynman integral for the reduced Schr\"{o}dinger equation is
the limit of multiple finite dimensional integrals with respect to Gaussian measures.
This is a (convergent!) alternative to standard space-time lattice
approximations in quantum field theory.
\end{remark}
In a forthcoming paper we show that the rate of convergence is $\prec t^{2}/n$,
so that the limit still exists as $t\rightarrow \infty$ provided $t$ grows no
faster than $n^{(1-\epsilon)/2}$, $\epsilon >0$. Therefore, the remark is applicable to scattering matrices.
\bibliographystyle{amsalpha}
\section{Introduction}
The first geometrical models for rigid particles result as a
byproduct of the point-like versions for highly dimensional
models that involve the extrinsic curvature of the worldvolume
swept out by relativistic strings or branes
\cite{Polyakov}.
Thenceforth, the interest in this sort of particle model has
grown by leaps and bounds because one can find potential applications
both in particle physics and mathematics. For instance, they can
describe spinning particles, whether massive or massless, defined
on timelike trajectories \cite{Plyuschay,Nes}, and when the model
is linear in the geodesic curvature it turns out to be related to
a massless particle with $W_3$ gauge symmetry \cite{ramos}.
The story did not end there. Recently, Nersessian and Ramos
proposed certain models for massive particles associated with null
curves \cite{nerse1,nerse2}. Immediately, considerable effort has
been devoted by other authors to the understanding of the geometry
of these models as well as their applications \cite{ferra1,ferra2,ferras}.
However, a key drawback of all these models resides in their higher-order
derivative nature; as a consequence, physicists have been reluctant
to study them because of technical difficulties such as the
increased number of degrees of freedom and equations of motion
that are at least of fourth order in derivatives of the fields and
do not appear to be tractable. Certainly this
unpleasant fact is a serious difficulty, but these models
have the advantage of encoding the spin content of the particles in
the geometry of the worldlines. The standard way of studying this
sort of particle is through the theory of deformations sheltered
by a Frenet-Serret (FS) basis adapted to the worldline.
Unfortunately this gives rise to lengthy and annoying computations
due to the above mentioned non-trivial higher-order derivative property
inherent to rigid particles \cite{ACG,nesterenko}. In this paper
we aim to study a powerful tool for the worldline geometry, either
timelike or lightlike, namely, the conserved linear momentum, whose
existence is a simple consequence of the Noether theorem.
A striking property of the stress tensor for particles or extended
objects is that its conservation in time yields not only the equations of
motion but also the intrinsic geometrical properties for every model
under consideration \cite{Noether}. To overcome the majority of the
typical technical obstacles in obtaining the rigid particle
dynamics, we appeal to an auxiliary variables method that was originally
introduced for the study of general surfaces and applied to describe
fluid membranes \cite{jemal}. Even though most of the progress in
the study of particle models has been made in the spirit of the
standard theory of deformations, the conserved linear momentum has not
been exploited completely in this context. Therefore, we provide an
alternative way to analyse point particle models by means of a
straightforward derivation of the conserved linear momentum. The main idea behind the
work is to replace the original action by an equivalent one depending
on lower-order derivatives, evading in this way the standard theory
of deformations. We claim that this approach simplifies the dynamical
point particle description.
The outline of the paper is as follows. In Sect. 2 we begin with a
glimpse of the worldline Frenet-Serret geometry describing both
timelike and lightlike particle trajectories. This brief
section will serve mainly to explain our notation and the basic facts
to be used in this paper. In Sect. 3 we apply an auxiliary variables
method to obtain the conserved linear momentum associated to a local
geometrical action depending on the geodesic curvature and the torsion.
We emphasize our approach for the case of null curves since the
existing point particle models with this geometry are not widely known.
We conclude in Sect. 4 by mentioning some comments. We have tried
throughout the paper to follow an index-free notation in order to avoid
cumbersome expressions. Definitions of constructed deformations which are
helpful to understand the geometrical nature of a particle worldline
and important identities of the theory of deformations for curves have
been collected in Appendix A. To complement our approach in the null
case we obtain the Casimir invariants associated to the Poincar\'e
symmetry which is the subject of Appendix B. In our context, these are
useful to integrate the equations of motion.
\section{Worldline geometry}
\subsection{Timelike curves}
Consider a relativistic particle whose timelike worldline can be
described by the embedding $x^\mu = X^\mu (\xi)$, where $x^\mu$
are local coordinates in Minkowski spacetime with metric $\eta_{\mu \nu}= {\mbox{diag}}(-1,+1,+1, \ldots ,+1)$ and $(\mu ,\nu = 0,1, \ldots,N-1)$,
$\xi$ is an arbitrary parameter and $X^\mu$ are the embedding functions.
The vector tangent to the worldline is given by $\dot{X}^\mu = dX^\mu/d\xi$
such that the one-dimensional metric along the curve is $\gamma =
\eta_{\mu \nu}\dot{X}^\mu \dot{X}^\nu \equiv \dot{X} \cdot \dot{X}$. We
assume that for timelike curves $\dot{X}^2 < 0$ is satisfied. The
infinitesimal arclength for the worldline is given by
\begin{equation}
d\tau = (-\dot{X}\cdot \dot{X})^{1/2}d\xi\,.
\label{eq:1}
\end{equation}
This arclength is invariant under reparametrizations of the worldline.
We introduce $N-1$ normal vectors to the worldline, denoted by
$n^\mu {}_i \quad (i=1,2,\ldots,N-1)$. These are defined implicitly by
$n^i \cdot \dot{X} = 0$ and normalized as $n_i \cdot n_j = \delta_{ij}$.
Though we may choose to label points along the curve arbitrarily, the most
convenient approach to study the dynamics of relativistic particles is to take
the parameter to be the arclength along the worldline. We will denote with a
prime differentiation with respect to $\tau$. Therefore we introduce the
orthonormal basis $\left\lbrace X', \eta_i \right\rbrace $ which satisfy
$X'\cdot X' = -1, \,\,\,X'\cdot \eta_i = 0$ and $\eta_i \cdot \eta_j =
\delta_{ij}$. This basis obeys the following $N$-dimensional FS equations
\cite{ACG,doCarmo}
\begin{eqnarray}
X'' &=& k_1 \eta_1 \,, \nonumber \\
\eta_1 ' &=& k_1 X' - k_2 \eta_2 \,, \nonumber \\
\eta_2 '&=& k_2 \eta_1 - k_3 \eta_3\,, \nonumber \\
\ldots && \ldots
\label{eq:FS}
\\
\eta_{N-2} ' &=& k_{N-2} \eta_{N-3} - k_{N-1} \eta_{N-1}\,,
\nonumber \\
\eta_{N-1} ' &=& k_{N-1} \eta_{N-2}\,,\nonumber
\end{eqnarray}
where $k_i$ stands for the independent $i$th FS curvature and $k :=k_1$ is
known as the {\it geodesic curvature}.
Note that from the FS equations (\ref{eq:FS}) we can express the
geodesic curvature as
\begin{equation}
k_1 = - X' \cdot \eta_1 '\,.
\label{eq:k}
\end{equation}
Also note that the geodesic curvature is given in terms of second order derivatives
of the embedding functions, $k_1 = \sqrt{X''\cdot X''}$.
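As a quick symbolic check (the hyperbolic helix below is our own example, not from the text), one can verify the arclength normalization $X'\cdot X'=-1$ and the formula $k_1=\sqrt{X''\cdot X''}$ on a concrete worldline:

```python
import sympy as sp

tau = sp.symbols('tau', real=True)
eta = sp.diag(-1, 1, 1)                      # Minkowski metric, signature (-,+,+)
dot = lambda u, v: (u.T * eta * v)[0]

# a timelike hyperbolic helix in 2+1 dimensions (constants chosen so that
# tau is proper arclength: a^2 w^2 - b^2 = 1 with a = sqrt(2), w = b = 1)
X = sp.Matrix([sp.sqrt(2) * sp.sinh(tau), sp.sqrt(2) * sp.cosh(tau), tau])
Xp, Xpp = X.diff(tau), X.diff(tau, 2)

norm_tangent = sp.simplify(dot(Xp, Xp))      # expect -1
k1 = sp.sqrt(sp.simplify(dot(Xpp, Xpp)))     # geodesic curvature, expect sqrt(2)
```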
\subsection{Lightlike curves}
We turn now to consider a null curve, for the sake of simplicity,
in a $3+1$ ambient Minkowski spacetime with metric $\eta_{\mu \nu}$
described by the embedding $x^\mu = \mathsf{X}^\mu (\rho)$ where $x^\mu $
are local coordinates in the background spacetime, $\rho$ is an
arbitrary parameter and $\mathsf{X}^\mu$ are the embedding functions $(\mu
= 0,1,2,3)$. Hereafter, in order to compare with respect to the
timelike case (see for instance, (\ref{eq:1}) and (\ref{eq:pseudo})),
we consider the signature of $\eta_{\mu \nu}$ to be $(+,-,-,-)$. With
this convention timelike vectors have a positive norm. The tangent
vector to the curve is given by $ \dot{\mathsf{X}}^\mu = d\mathsf{X}^\mu /d\rho$. It
satisfies $\dot{\mathsf{X}}\cdot\dot{\mathsf{X}}=0$ since the curve lies on the
light cone, so the arclength vanishes. This null condition
on the tangent vectors shatters our accustomed vision of the
worldline geometry which leads us to promote $\Upsilon =
\ddot{\mathsf{X}}\cdot \ddot{\mathsf{X}}$ as the corresponding worldline metric
\cite{nerse1}. This new point of view necessarily forces
the introduction of a new parameter, called pseudo-arclength, which makes
it possible to normalize the derivative of the lightlike tangent
vector \cite{nerse1,null}.
The infinitesimal pseudo-arclength for a null curve is given by
\begin{equation}
d\sigma = (-\ddot{\mathsf{X}}\cdot\ddot{\mathsf{X}})^{1/4}d\rho\,.
\label{eq:pseudo}
\end{equation}
This pseudo-arclength is invariant under reparametrizations of the curve.
We shall use again a prime to denote derivation with respect to $\sigma$.
To analyse the geometry of null curves it is desirable to adopt an FS frame
constructed in a similar way as in the timelike case \cite{nerse1,ferra1,null}.
In such approach we consider a basis adapted to null curves spanned
by $\left\lbrace e_+,e_1,e_-,e_2 \right\rbrace $ where $e_+$ and $e_-$ are
lightlike whilst $e_1$ and $e_2$ are spacelike. The null FS basis has
the structure
\begin{eqnarray*}
e_+ & = \mathsf{X}'\,,\\
e_+ ^2 &= e_- ^2 = 0\,,\\
e_\pm \cdot e_1 &= e_\pm \cdot e_2 = e_1 \cdot e_2 =0\,, \\
e_+ \cdot e_- &= -e_1 \cdot e_1 = - e_2 \cdot e_2 = 1\,.
\end{eqnarray*}
This basis obeys the following 4-dimensional FS equations
\cite{nerse1,ferra1,null}
\numparts
\begin{eqnarray}
{e}_+ ' &=& e_1\,,\\
{e}_1 ' &=& \kappa_1\,e_+ + e_-\,,
\label{eq:NFS2}
\\
{e}_- '&=& \kappa_1 \,e_1 + \kappa_2 \,e_2\,,
\label{eq:NFS3}\\
{e}_2 ' &=& \kappa_2 \,e_+ \,,
\end{eqnarray}
\endnumparts
where $\kappa_1$ and $\kappa_2$ are independent curvature functions of the
null curve similarly as in the timelike case. Occasionally, $\kappa_1$
is known as the torsion due to its dependence on the third-order
derivatives of the field variables. Note that the torsion can be
expressed in several forms. For our purposes below, one convenient
way is
\begin{equation}
\kappa_1 = \frac{1}{2} e_+ '' \cdot e_+ ''\,.
\label{eq:kappa1}
\end{equation}
It is worth noting that $\kappa_1$ is given in terms of the
third-order derivatives of the field variables, $2\kappa_1 =
(\mathsf{X}''' \cdot \mathsf{X}''')$.
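These formulas can be tested on an explicit null helix (our own example, with the signature $(+,-,-,-)$ of this subsection), which happens to be already parametrized by pseudo-arclength:

```python
import sympy as sp

rho = sp.symbols('rho', real=True)
eta = sp.diag(1, -1, -1, -1)                 # signature (+,-,-,-)
dot = lambda u, v: (u.T * eta * v)[0]

X = sp.Matrix([rho, sp.cos(rho), sp.sin(rho), 0])   # a null helix

tangent_norm = sp.simplify(dot(X.diff(rho), X.diff(rho)))      # expect 0 (lightlike)
accel_norm = sp.simplify(dot(X.diff(rho, 2), X.diff(rho, 2)))  # expect -1, so sigma = rho
kappa1 = sp.simplify(dot(X.diff(rho, 3), X.diff(rho, 3)) / 2)  # expect -1/2
```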
\section{FS dynamics}
\subsection{Timelike case}
We assume that the dynamics of a rigid particle is specified by an action
invariant under reparametrizations of the timelike worldline of the form
\begin{equation}
S_0 [X] = \int d\tau \,L(k_1)\,,
\label{eq:action}
\end{equation}
where $L$ is a scalar under reparametrizations. It is usual that
under an infinitesimal deformation of the embedding $X \to X +
\delta X$, the response of the functional (\ref{eq:action})
casts out the equations of motion and the Noether charges
\cite{ACG,Noether}. To accommodate an auxiliary variables method
describing rigid particles we follow the seminal work given in
\cite{jemal}. We would like to distribute this deformation among
the parametrization, the FS basis and $k_1$. That is why we consider
them as new independent variables. To promote them as intermediate
auxiliary variables it is necessary to implement constraints
involving their definitions smeared out with Lagrange multipliers.
Thus, we construct now a new functional
action $S [k_1,\eta_1, X', X,f,\lambda^1,\lambda^{11},\lambda]$ of
the form
\begin{eqnarray}
\fl
S= S_0[X,k_1] + \int d\tau f\cdot \left( X' -\frac{dX}{d\tau}\right)
+ \int d\tau \,\left[ \lambda^1 \,\left(X' \cdot \eta_1 \right) +
\lambda^{11}\, \left(\eta_1 \cdot \eta_1 - 1 \right)
\right] \nonumber \\
\lo + \int d\tau \, \lambda\, \left( k_1 + X' \cdot \eta_1 ' \right) \,.
\label{eq:Caction}
\end{eqnarray}
This is a suitable departure point which provides both
geometrical and physical insights into the mechanical
systems described by (\ref{eq:action}) by means of a conserved
linear momentum.
The Euler-Lagrange (EL) derivative for $X$ is such that in the
extremum condition it shows a conservation law,
\begin{equation}
\frac{d}{d\tau}\left\lbrace f^\mu + \left[ L + \lambda \,k_1
+ (f \cdot X') \right]\,X^{'\,\mu} \right\rbrace = 0\,,
\label{eq:conserv}
\end{equation}
where we have employed the identities (\ref{eq:T1}) and the expression
(\ref{eq:k}).
The EL derivative associated to $X'$ exhibits the
geometrical form of $f^\mu$ in terms of the Lagrange multipliers
and the FS basis
\begin{equation}
f = -\lambda\, k_1\,{X'}
- \lambda^1 \,\eta_1 + \lambda\,k_2\,\eta_2 \,,
\label{eq:f1}
\end{equation}
where we have used the expressions (\ref{eq:FS}). As a result we
obtain $(f\cdot X')=\lambda\,k_1$. Correspondingly, taking the EL
derivative for $\eta_1$ and exploiting the FS equations
(\ref{eq:FS}), we have
\begin{equation}
\eqalign{
\lambda^1 = \lambda'\, ,\\
2 \lambda^{11} = \lambda\,k_1\,.
}
\label{eq:Lm}
\end{equation}
Finally, the EL derivative for $k_1$ yields
\begin{equation}
\lambda = - L^*
\end{equation}
where we have introduced the notation $L^* = d L/d k_1$. Thus,
we can identify the Lagrange multipliers (\ref{eq:Lm}) as
$\lambda^1 = - L^{*\,'}$ and $2\lambda^{11} = - k_1\,L^*$.
Hence, putting all of these results together in the conservation law
(\ref{eq:conserv}) we therefore get
\begin{equation}
\frac{d}{d\tau} \left[ (L - L^* k_1)\,X' + L^{*\,'}\,
\eta_1 - L^* k_2\,\eta_2 \right] =0\,,
\label{eq:eom}
\end{equation}
which allows us to identify the conserved linear momentum,
written in terms of the FS basis,
\begin{equation}
p = (L - L^* k_1)\,X' + L^{*\,'}\,
\eta_1 - L^* k_2\,\eta_2 \,.
\label{eq:fmu1}
\end{equation}
This is nothing but the linear momentum associated to the
Noether charge specialized to a constant infinitesimal translation
$\delta X^\mu = \epsilon^\mu$, \cite{ACG}. Further, the momentum
(\ref{eq:fmu1}) is in accordance with the momentum conjugated to the
embedding variables in an Ostrogradski Hamiltonian approach for the
action (\ref{eq:action}) \cite{hamFS}.
The FS projections of the total derivative (\ref{eq:eom}) permit us to
deduce the mechanical and geometrical properties of the generic action
(\ref{eq:action}). The projection of (\ref{eq:eom}) along $\eta_3$
implies the vanishing of $k_3$, so the motion takes place in
$2+1$ dimensions. Similarly, the projection of (\ref{eq:eom}) along
$\eta_2$ leads to $(L^*)^2k_2 = $const., which can be interpreted as a
conservation of the spin of the particle \cite{ACG}. Finally, the
projection along $\eta_1$ casts out the equation of motion, namely,
$L^{*\,''} + (L - L^*\,k_1)\,k_1 - L^* \,k_2 ^2 =0$. These properties
as well as the Poincar\'e invariants have been well discussed in
\cite{ACG,nesterenko}.
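These projections can be reproduced symbolically for a generic $L(k_1)$: write $p=A\,X'+B\,\eta_1+C\,\eta_2+D\,\eta_3$ with the coefficients of (\ref{eq:fmu1}) and differentiate the basis with the FS equations (\ref{eq:FS}). A SymPy sketch (our own verification, not part of the original derivation):

```python
import sympy as sp

tau = sp.symbols('tau')
k1, k2, k3 = (sp.Function(name)(tau) for name in ('k1', 'k2', 'k3'))
L = sp.Function('L')(k1)
Ls = L.diff(k1)                               # L^* = dL/dk1

# coefficients of p = A X' + B eta_1 + C eta_2 + D eta_3
A = L - Ls * k1
B = Ls.diff(tau)
C = -Ls * k2
D = sp.Integer(0)

# components of p' after differentiating the basis with the FS equations
Et = A.diff(tau) + B * k1                     # along X'
E1 = B.diff(tau) + A * k1 + C * k2            # along eta_1
E2 = C.diff(tau) - B * k2 + D * k3            # along eta_2
E3 = D.diff(tau) - C * k3                     # along eta_3
```

The $X'$ component vanishes identically, the $\eta_1$ component reproduces the equation of motion, and the $\eta_2$, $\eta_3$ components encode $(L^*)^2k_2=$const.\ and $k_3=0$.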
In closing this subsection, we apply the formalism developed above
to the linear correction to the free relativistic particle, $L =
-m + \alpha k_1$, where $m$ and $\alpha$ are constants. Obviously
we have $L^* = \alpha$. The corresponding linear momentum is given by
$p = - m \,X ' -\alpha k_2\, \eta_{2} $ and the equation of motion reduces to
$m\, k_1 + \alpha\, k_2 ^2=0$, with $k_2$ constant.
\subsection{Lightlike case}
Now, we shall consider actions for null curves that are invariant under
reparametrizations of the form
\begin{equation}
S_0 [\mathsf{X}]= \int d\sigma \,L(\kappa_1)\,,
\label{eq:null-act}
\end{equation}
where $L$ is invariant under worldline reparametrizations.
An auxiliary variables method will distribute the deformation
$\mathsf{X} \to \mathsf{X} + \delta \mathsf{X}$ among $\mathsf{X}$ itself, $e_+$ and $\kappa_1$
considering all of them as new independent variables. Once
again, bearing in mind the necessity of promoting them to
auxiliary variables, we need to implement them
through their definitions and structure properties
smeared with appropriate Lagrange multipliers.
Following the timelike case, we now construct the functional
$S[\kappa_1,e_+, \mathsf{X} , \mathsf{f},\Lambda_{++},\Lambda ]$ written as
\begin{equation}
\fl
S= S_0[\mathsf{X},\kappa_1] + \int d\sigma \,\,\mathsf{f}\cdot \left( e_+ -
\frac{d}{d\sigma}\mathsf{X}\right) + \int d\sigma \,\,\Lambda_{++}\,e_+ ^2
+ \int d\sigma \Lambda \,\left( \kappa_1 - \frac{1}{2} e_+ '' \cdot e_+ ''
\right).
\end{equation}
A direct computation of the EL derivative for $\mathsf{X}$ shows that in the
extremum condition we have
\begin{equation}
\frac{d}{d\sigma}\left\lbrace \mathsf f^\mu - \frac{1}{2}
\frac{d}{d\sigma} \left[ \left( L + 4\Lambda\,\kappa_1 +
(\mathsf f\cdot e_+) \right) e_1^\mu \right] \right\rbrace = 0\,,
\label{eq:law}
\end{equation}
where we have employed the identities
(\ref{eq:TT2}) and the expression (\ref{eq:kappa1}).
The EL derivative for $e_+$ allows us to write $\mathsf f^\mu$ in terms
of the null FS frame as
\begin{equation}
\mathsf f = \left( \Lambda\,e_1 '\right)'' - 2\,\Lambda_{++}\,e_+ \,.
\end{equation}
By making use of the null FS equations we obtain
$(\mathsf f \cdot e_+)= 2\Lambda\,\kappa_1 + \Lambda''$.
Finally, we compute the EL derivative with respect to $\kappa_1$,
\begin{equation}
\Lambda = - L^*\,,
\end{equation}
where we have used one more time $*$ to denote derivative with respect to
$\kappa_1$, i.e., $L^* = dL/d \kappa_1$. We are ready to insert the information
into the conservation law (\ref{eq:law}). With the previous results we obtain
$\mathsf f = - \left(L^*\,e_1 '\right) '' - 2\Lambda_{++}\,e_+ $, such that
the conservation law (\ref{eq:law}) reads
\begin{equation}
\mathsf E= \frac{d}{d\sigma}\left\lbrace 2\Lambda_{++}\, e_+
+ (L^*\,e_1 ')'' + \frac{1}{2} \frac{d}{d\sigma}
\left[ \left( L - 6 L^*
\kappa_1 - L^{* ''} \right) e_1 \right] \right\rbrace = 0\,,
\label{eq:eom-n}
\end{equation}
which helps to identify the corresponding linear momentum,
$\mathsf p = \mathsf p _+ e_+ + \mathsf p _- e_- +
\mathsf p _1 e_1 + \mathsf p _2 e_2$, in the null FS frame.
If the conservation law (\ref{eq:eom-n}) is expressed as $\mathsf E^\mu =
\mathsf p^{\mu\,'}=0$, it is straightforward to obtain the conditions that the
momentum components must satisfy,
\begin{equation}
\eqalign
{
\mathsf p _+ ' + \mathsf p _1 \kappa_1 + \mathsf p _2 \kappa_2 =0\,,\\
\mathsf p _- \kappa_2 + \mathsf p _2 ' = 0\,,
}
\label{eq:condi1}
\end{equation}
and
\begin{equation}
\eqalign
{
\mathsf p _1 + \mathsf p _- ' = 0\,,\\
\mathsf p _+ + \mathsf p _1 ' + \mathsf p _-\kappa_1 = 0\,,
}
\label{eq:condi2}
\end{equation}
where the FS equations in the null frame have been exploited. The equations
(\ref{eq:condi1}) are in fact the equations of motion of the particles
governed by (\ref{eq:null-act}) whereas (\ref{eq:condi2}) are simple
geometrical identities. The momentum acquires the form
\begin{equation*}
\mathsf p = (\mathsf p _- ^{''} - \mathsf p_-\kappa_1 )e_+
+ \mathsf p_- e_- - \mathsf p _- ' e_1 + \mathsf p _2 e_2\,.
\end{equation*}
We remark at this point that it is not necessary to know
the explicit form of $\Lambda_{++}$.
A straightforward computation in (\ref{eq:eom-n}) leads us to identify the
independent components of the momentum in terms of the worldline curvatures,
$\mathsf p_- = (L - 2L^* \kappa_1 + L^{*\,''})/2$ and $\mathsf p_2= 2L^{*\,'}
\kappa_2 + L^* \kappa_2 '$.
Thus, we can write the linear momentum in the null FS frame
\begin{eqnarray}
\mathsf p &=&-\frac{1}{2}\left[ \left( L - 2 L^*\,\kappa_1 + L^{*\, ''} \right)\,
\kappa_1 - \left( L - 2L^{*}\,\kappa_1 + L^{*\,''}\right) ^{''}
\right] \,e_+ \nonumber \\
&+& \frac{1}{2} \left( L -2 L^{*}\,\kappa_1 + L^{*\,''}\right) \,e_-
- \frac{1}{2} \left( L - 2L^{*}\kappa_1 + L^{*\,''} \right)'\,e_1
\nonumber \\
&+& \left( 2L^{*\,'}\,\kappa_2 + L^* \,\kappa_2 ' \right) \,e_2\,.
\label{eq:PP}
\end{eqnarray}
This is the general expression for the momentum associated to
(\ref{eq:null-act}). It is worth pointing out that the momentum
(\ref{eq:PP}) is completely determined by two independent
components $\mathsf p_-$ and $\mathsf p_2$ in the 4-dimensional case.
In fact, this is bequeathed from the theory of deformations where in
order to preserve null curves in the variational procedure, two independent
normal variations are necessary \cite{null}.
To project the conservation law (\ref{eq:eom-n}) into the null FS
frame is equivalent to expressing equations (\ref{eq:condi1}) and (\ref{eq:condi2})
in terms of the independent components of the momentum (\ref{eq:PP}).
The first equation of (\ref{eq:condi2}) is
\begin{equation}
\mathsf E_- = L' - L^*\,\kappa_1 ' = 0\,.
\end{equation}
This is merely an identity based on the chain rule from ordinary
calculus. The second equation of (\ref{eq:condi2}), $\mathsf E_1$, is identically satisfied.
Now, the second equation of (\ref{eq:condi1}) is written as
\begin{equation}
\mathsf E_2 = (L - 2L^*\,\kappa_1 + L^{*\,''})\kappa_2 +
2\left[ \left( L^{*\,2} \kappa_2
\right)'/L^*\right]' = 0\,.
\label{eq:eom-n2}
\end{equation}
This equation determines $\kappa_2$ in terms of $\kappa_1$.
Finally, the first equation of (\ref{eq:condi1}) results
\begin{eqnarray}
\mathsf E_+ &=& \left( L - 2L^{*}\kappa_1 + L^{*\,''}\right)^{'''} -
2 \left( L - 2L^*\kappa_1 + L^{*\,''} \right)' \kappa_1 \nonumber \\
&-& \left( L - 2L^{*}\kappa_1 + L^{*\,''} \right) \kappa_1 ' +
2\left[ \left( L^{*\,2} \kappa_2 \right)'/L^*\right]\kappa_2 =0\,.
\label{eq:eom-n1}
\end{eqnarray}
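A SymPy sketch (our own check, for generic $L(\kappa_1)$) confirms that the displayed $\mathsf E_2$ and $\mathsf E_+$ are just the component conditions (\ref{eq:condi1}) rewritten with $\mathsf p_+=\mathsf p_-''-\mathsf p_-\kappa_1$ and $\mathsf p_1=-\mathsf p_-'$, up to an overall factor of $2$:

```python
import sympy as sp

s = sp.symbols('sigma')
kap1, kap2 = sp.Function('kappa1')(s), sp.Function('kappa2')(s)
L = sp.Function('L')(kap1)
Ls = L.diff(kap1)                             # L^* = dL/dkappa1
d = lambda f, n=1: sp.diff(f, s, n)

pm = (L - 2*Ls*kap1 + d(Ls, 2)) / 2           # p_-
p2 = 2*d(Ls)*kap2 + Ls*d(kap2)                # p_2
p1 = -d(pm)                                   # p_1 = -p_-'
pp = d(pm, 2) - pm*kap1                       # p_+ = p_-'' - p_- kappa_1

# the displayed equations of motion
E2 = (L - 2*Ls*kap1 + d(Ls, 2))*kap2 + 2*d(d(Ls**2*kap2)/Ls)
Ep = (d(L - 2*Ls*kap1 + d(Ls, 2), 3)
      - 2*d(L - 2*Ls*kap1 + d(Ls, 2))*kap1
      - (L - 2*Ls*kap1 + d(Ls, 2))*d(kap1)
      + 2*(d(Ls**2*kap2)/Ls)*kap2)
```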
The expressions (\ref{eq:eom-n2}) and (\ref{eq:eom-n1}) determine the
equations of motion governing the dynamics of particles described by
the action (\ref{eq:null-act}) which do not appear to be tractable in
general. In fact, in the most simple cases they are two coupled
differential equations whose solutions are null helices \cite{ferra1,ferra3}.
There, the equations of motion can be integrated and expressed in terms
of the mass and the spin of the particle \cite{null}. We must remark that
in a $2 + 1$ ambient Minkowski spacetime, besides $\kappa_2 = 0$,
the momentum component $\mathsf p_2$ disappears and the only equation of motion
is $\left( \mathsf p_- ^{''} - \mathsf p_- \kappa_1 \right)'
- \mathsf p_- ^{'} \kappa_1 = 0$. This latter equation also appears to be
intractable in general but surprisingly we find that for an arbitrary Lagrangian
$L$ it is possible to reduce the order of the equation of motion. We show
briefly how this comes about. In general, the equations of motion are equivalent
to the associated constants of motion given by the first and second Casimir
invariants (See for example \cite{ferras} for a proof of this statement.).
By putting expressions (\ref{eq:M}) and (\ref{eq:S}) together we find that
\begin{equation}
L^{*\,'}\left( {\mathsf p}_- ^{2}\right)' - L^* \left[ \left(
{\mathsf p}_- ' \right)^{2} + M^2 \right] + (L - L^{*\,''}){\mathsf p}_- ^2
+ 2S\mathsf p_- = 0\,.
\label{eq:MSE}
\end{equation}
The immediate implication is that this ODE in $\kappa_1$ is equivalent
to the original equation of motion, but of lower order.
In a $3+1$ ambient spacetime, the integration for an arbitrary $L$ can be
treated along the same lines but the computation is rather involved.
We survey the application of the formalism by considering first a model
for particles, in a $3+1$ ambient spacetime, given by a correction to
the pseudo-arclength parameter Lagrangian, $L = 2\left( \alpha + \beta\,
\kappa_1\right) $, where $\alpha$ and $\beta$ are constants. Obviously,
$L^* = 2\beta$. The associated linear momentum is given by
\begin{equation}
\mathsf p = -[\beta \kappa_1 ^{''} + (\alpha - \beta \kappa_1)\kappa_1 ]
e_+ + (\alpha - \beta \kappa_1)e_- + \beta \kappa_1 '\,e_1
+ 2\beta \kappa_2 ' \, e_2 \,.
\end{equation}
Hence, from (\ref{eq:eom-n2}) and (\ref{eq:eom-n1}), the equations of
motion are
\begin{eqnarray}
\beta \kappa_1 ^{'''}
- \frac{3\beta}{2} \left(\kappa_1 ^{2}\right)' - \beta \left(\kappa_2 ^{2}\right)' + \alpha
\kappa_1 ' &=& 0 \,, \\
2\beta \kappa_2 ^{''} - \beta \kappa_1 \kappa_2 + \alpha \kappa_2 &=&0\,,
\end{eqnarray}
which can be integrated and expressed in terms of the Casimir invariants.
Recently, these equations of motion have been extensively studied in
\cite{ferra1,ferra3}.
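A SymPy check of our own confirms that the general component formulas reduce to this example, recovering the two equations of motion up to overall constant factors:

```python
import sympy as sp

s, alpha, beta = sp.symbols('sigma alpha beta')
k1, k2 = sp.Function('kappa1')(s), sp.Function('kappa2')(s)
d = lambda f, n=1: sp.diff(f, s, n)

L = 2*(alpha + beta*k1)                       # the linear model
Ls = sp.diff(L, k1)                           # L^* = 2 beta

pm = (L - 2*Ls*k1 + d(Ls, 2)) / 2             # p_- component
p2 = 2*d(Ls)*k2 + Ls*d(k2)                    # p_2 component
E2 = 2*(pm*k2 + d(p2))                        # the kappa_2 equation (x2)
Ep = 2*(d(d(pm, 2) - pm*k1) - d(pm)*k1 + p2*k2)   # the kappa_1 equation (x -2)
```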
Another example more complicated than the previous one can be
found in a $2+1$ ambient spacetime. Consider $L= 2\left( \alpha +
\beta\,\kappa_1 ^2 \right)$ with $\alpha$ and $\beta$ being constants.
This model resembles to the $1+1$ timelike effective model for a relativistic
kink in the field of a soliton \cite{pichu}. The corresponding linear momentum is
\begin{eqnarray}
\mathsf p &=& \left\lbrace \beta \left( -3 \kappa_1 ^2 + 2\kappa_1 '' \right)'' - \left[ \alpha + \beta \left( -3 \kappa_1 ^2 + 2\kappa_1 '' \right)
\right] \kappa_1 \right\rbrace e_+ \nonumber \\
&+& \left[ \alpha + \beta \left(-3 \kappa_1 ^2 + 2\kappa_1 ''
\right) \right] e_- - \beta \left( -3\kappa_1 ^2 + 2\kappa_1 ''
\right) ' e_1 \,.
\end{eqnarray}
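A SymPy check of our own for this model: the momentum components follow from the general formulas, and the single $2+1$ equation of motion $(\mathsf p_- ^{''} - \mathsf p_- \kappa_1)' - \mathsf p_- ^{'}\kappa_1 = 0$ is a total $\sigma$-derivative, in agreement with the reduced equation obtained below:

```python
import sympy as sp

s, alpha, beta = sp.symbols('sigma alpha beta')
k1 = sp.Function('kappa1')(s)
d = lambda f, n=1: sp.diff(f, s, n)

L = 2*(alpha + beta*k1**2)                    # the quadratic model
Ls = sp.diff(L, k1)                           # L^* = 4 beta kappa1

pm = (L - 2*Ls*k1 + d(Ls, 2)) / 2             # expect alpha + beta(-3 k1^2 + 2 k1'')
eom = d(d(pm, 2) - pm*k1) - d(pm)*k1          # the single 2+1 equation of motion

# candidate first integral: eom should be its total sigma-derivative
first_integral = (2*beta*d(k1, 4) - 10*beta*k1*d(k1, 2)
                  - 5*beta*d(k1)**2 + 5*beta*k1**3 - alpha*k1)
```

Dividing the first integral by $2\beta$ and equating it to a constant reproduces the fourth-order ODE quoted below, with $\gamma=\alpha/2\beta$.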
From (\ref{eq:eom-n1}) we obtain the equation of motion
\begin{equation}
\fl
\left\lbrace \beta \left( -3\kappa_1 ^2 + 2\kappa_1 '' \right)'' -
2 \left[ \alpha + \beta \left( -3 \kappa_1 ^2 + 2\kappa_1 '' \right)
\right]\kappa_1 \right\rbrace ' +\alpha \kappa_1 ' + \beta \left( -
3 \kappa_1 ^2 + 2\kappa_1 '' \right) \kappa_1 ' = 0,
\label{eq:eomk2}
\end{equation}
which can be integrated immediately to give a fourth-order ODE in $\kappa_1$
\begin{equation}
\kappa_1 ^{(4)} - 5\kappa_1 \kappa_1 '' - \frac{5}{2} (\kappa_1 ')^{2} +
\frac{5}{2} \kappa_1 ^3 - \gamma\,\kappa_1 - \lambda_{(3)} = 0\,,
\label{eq:reduced}
\end{equation}
where $\gamma= \alpha/2\beta$ and $\lambda_{(3)}$ is an integration
constant which, in principle, can be written in terms of the Casimir
invariants. One can go further in the integration of Eq. (\ref{eq:eomk2})
by appealing to the expression (\ref{eq:MSE}). The original equation
of motion is equivalent to
\begin{equation}
\fl
\eqalign{
2\beta\kappa_1 \left\lbrace \left[ \alpha + \beta \left( -3 \kappa_1 ^2 +
2\kappa_1 '' \right) \right]^{'\,2} + M^2 \right\rbrace -
2\beta \kappa_1 ' \left\lbrace \left[ \alpha + \beta \left( -3 \kappa_1 ^2 +
2\kappa_1 '' \right) \right]^{2}\right\rbrace '
\\
- \left( \alpha + \beta \kappa_1 ^2 - 2\beta \kappa_1 '' \right)
\left[ \alpha + \beta \left( -3 \kappa_1 ^2 + 2\kappa_1 '' \right)
\right]^{2} + S \left[ \alpha + \beta \left( -3 \kappa_1 ^2 + 2\kappa_1 ''
\right) \right]= 0\,,}
\label{eq:int}
\end{equation}
where $M^2$ is the first Casimir invariant given by
\begin{equation}
\eqalign{
M ^2 = \left\lbrace \left[ \alpha + \beta\left( \kappa_1 ^2 -
2\kappa_1 ''\right) \right]^2 \right\rbrace '' - 3\beta ^{2}
\left[ \left( -3 \kappa_1 ^2 + 2\kappa_1 '' \right) ' \right] ^2 \\
-2 \left[ \alpha + \beta\left( \kappa_1 ^2 -
2\kappa_1 ''\right) \right]^2 \kappa_1 \,,}
\end{equation}
and $S$ is the associated second Casimir invariant which becomes
\begin{equation}
\fl
\eqalign{
S = - \left[ \alpha + \beta \left( \kappa_1 ^2 - 2\kappa_1 '' \right) \right]
\left[ \alpha + \beta \left( -3 \kappa_1 ^2 + 2\kappa_1 '' \right) \right] -
4\beta ^{2} \kappa_1 ' \left( -3 \kappa_1 ^2 + 2\kappa_1 '' \right)' \\
+ 4 \beta \kappa_1 \left[ -\alpha \kappa_1 + \beta \left( -3 \kappa_1 ^2
+ 2\kappa_1 '' \right) '' - \beta \kappa_1 \left( - 3 \kappa_1 ^2 + 2\kappa_1 ''
\right) \right].}
\end{equation}
Although we have greatly reduced the order of the original equation of motion,
the equivalent equation (\ref{eq:int}) turns out to be rather complicated compared
with equation (\ref{eq:reduced}), whose main benefit resides in its simplicity.
In fact, the resulting equation of motion for a particle described by a model
linear in $\kappa_1$ in a $3+1$ ambient spacetime is similar to
(\ref{eq:reduced}) \cite{ferra3}.
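As a quick sanity check, the reduced equation (\ref{eq:reduced}) can be integrated numerically. The following minimal sketch (not part of the original analysis; the values of $\gamma$ and $\lambda_{(3)}$ are hypothetical) rewrites (\ref{eq:reduced}) as a first-order system and verifies that a constant curvature $\kappa_1 = k_0$ is preserved when $\lambda_{(3)}$ is chosen so that $\frac{5}{2}k_0^3 - \gamma k_0 - \lambda_{(3)} = 0$.

```python
import numpy as np
from scipy.integrate import solve_ivp

gamma_, k0 = 1.0, 0.5                  # hypothetical parameters
lam3 = 2.5 * k0**3 - gamma_ * k0       # makes kappa_1 = k0 an exact solution

def rhs(s, u):
    # u = (kappa_1, kappa_1', kappa_1'', kappa_1''') from eq. (reduced)
    k, k1, k2, k3 = u
    k4 = 5.0*k*k2 + 2.5*k1**2 - 2.5*k**3 + gamma_*k + lam3
    return [k1, k2, k3, k4]

sol = solve_ivp(rhs, (0.0, 1.0), [k0, 0.0, 0.0, 0.0],
                rtol=1e-10, atol=1e-12)
drift = np.max(np.abs(sol.y[0] - k0))  # should stay at k0
```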
\section{Concluding remarks}
In this paper we have analysed worldline theories by obtaining the
associated conserved linear momentum. This has been done by means of
an auxiliary variables method. The main advantage of the method lies in the
reduction of the higher order derivative nature of the fields, which yields a
considerable simplification of the variational procedure and avoids awkward
computations. We have tailored this auxiliary
variables method to the FS frame of each curve, either timelike or lightlike.
Based on the Poincar\'e and reparametrization
invariance of the action, the conservation of the momentum leads us
to the full mechanical content of the worldline theories. Equations
(\ref{eq:eom}) and (\ref{eq:eom-n}) provide the dynamics
for arbitrary Lagrangians $L(k_1)$ and $L(\kappa_1)$, when they are
implemented by the FS frame projections. We showed that the auxiliary
variables method is in fact a powerful alternative for studying embedded theories.
Although it was originally implemented to study general surfaces characterized by
their extrinsic geometry, like the lipid membranes \cite{jemal},
it applies immediately to relativistic brane models interacting
with other fields.
The complete integrability of the equations of motion faces several technical
difficulties for a general Lagrangian $L(\kappa_1)$, and it
seems to be intractable in general. We explored some examples to see
our machinery at work.
For the simplest cases, such as Lagrangians constant or linear in $\kappa_1$, the
integrability is a well known fact. Nevertheless, we have exhibited another
model, quadratic in $\kappa_1$, in a $2+1$ ambient spacetime for which we
have also obtained integrability. Work along these lines is in progress.
\ack
E.R. would like to cordially thank Alberto Molgado for useful comments and
suggestions. E.R. has also benefited from conversations with Jemal Guven
and Eloy Ay\'on. E.R. acknowledges partial support from CONACyT
under grant 51111. R.C. acknowledges partial support by grants COFAA, EDI,
SIP-20071135 and CONACyT J1-60621-I. N.B. acknowledges partial support by grant
SEP-2003-C02-43780. The authors would like to thank
anonymous referees who suggested several important improvements.
\section{Introduction}
We consider the Einstein-de Sitter equation
$$R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}(g^{\alpha\beta}R_{\alpha\beta})-\Lambda g_{\mu\nu}=
\frac{8\pi G}{c^4}T_{\mu\nu}
$$
for the energy-momentum tensor of a perfect fluid
$$T^{\mu\nu}=(c^2\rho +P)U^{\mu}U^{\nu}-Pg^{\mu\nu}.
$$
Here $R_{\mu\nu}$ is the Ricci tensor associated with the
metric
$$
ds^2=g_{\mu\nu}dx^{\mu}dx^{\nu},$$
$G$ is the gravitational constant, $c$ the speed of light,
$\rho$ the mass density, $P$ the pressure and $U^{\mu}$ is
the 4-dimensional velocity. $\Lambda$ is the cosmological
constant which is supposed to be positive in this article.
See \cite[\S 111]{LandauL}.
Spherically symmetric solutions for the problem with
$\Lambda=0$ were investigated in \cite{ssEE}, and the aim of
this article is to describe the analogous results for
the problem with a positive cosmological constant.\\
In this article we suppose that the pressure $P$ is a given function of
the density $\rho$ and pose the following\\
{\bf Assumption.} {\it $P$ is an analytic function of $\rho>0$ such that
$0<P, 0<dP/d\rho<c^2$ for $\rho>0$, and $P\rightarrow 0$ as
$\rho \rightarrow +0$. Moreover
there are positive constants $A, \gamma$ and an analytic function $\Omega$ on a neighborhood of $[0, +\infty[$ such that
$\Omega(0)=1$ and
$$P=A\rho^{\gamma}\Omega(A\rho^{\gamma-1}/c^2).
$$
We assume that $1<\gamma<2$ and $\displaystyle \frac{1}{\gamma-1}$
is an integer. }
We are keeping in mind the equation of state for neutron stars. See \cite{ssEE}, \cite[p. 188]{ZeldovichN}.\\
We consider spherically symmetric metrics of the form
$$ds^2=e^{2F(t,r)}c^2dt^2-
e^{2H(t,r)}dr^2-R(t,r)^2(d\theta^2+\sin^2\theta d\phi^2).
$$
We suppose that the system of coordinates is co-moving, that is,
$$U^0=e^{-F},\qquad U^1=U^2=U^3=0
$$
for $x^0=ct, x^1=r, x^2=\theta, x^3=\phi$. Then
the equations turn out to be
\begin{subequations}
\begin{eqnarray}
e^{-F}\frac{\partial R}{\partial t}&=&V \label{Aa} \\
e^{-F}\frac{\partial \rho}{\partial t}&=&
-(\rho+P/c^2)\Big(\frac{V'}{R'}+\frac{2V}{R}\Big) \label{Ab} \\
e^{-F}\frac{\partial V}{\partial t}&=&
-GR\Big(\frac{m}{R^3}+\frac{4\pi P}{c^2}\Big)+\frac{c^2\Lambda}{3}R+ \nonumber \\
&&-\Big(1+\frac{V^2}{c^2}-\frac{2Gm}{c^2R}
-\frac{\Lambda}{3}R^2\Big)\frac{P'}{R'(\rho+P/c^2)} \label{Ac} \\
e^{-F}\frac{\partial m}{\partial t}&=&
-\frac{4\pi}{c^2}R^2PV \label{Ad}
\end{eqnarray}
\end{subequations}
Here $X'$ stands for $\partial X/\partial r$.
The coefficients of the metric are given by
$$P'+F'(c^2\rho+P)=0$$
and
$$e^{2H}=\Big(
1+\frac{V^2}{c^2}-\frac{2Gm}{c^2R}-\frac{\Lambda}{3}R^2\Big)^{-1}
(R')^2.
$$
In order to specify the function $F$, we introduce the state variable
$u$ by
$$u=\int_0^{\rho}\frac{dP}{\rho+P/c^2},
$$
and fix the idea by putting
$$e^F=\sqrt{\kappa_+}e^{-u/c^2}
$$
with a positive constant $\kappa_+$ specified in the next Section.
We note that there are analytic functions $\Omega_u,
\Omega_{\rho}, \Omega_P$ on a neighborhood of
$[0,+\infty[$ such that
$\Omega_u(0)=\Omega_{\rho}(0)=\Omega_P(0)=1$ and
\begin{align*}
u&=\frac{\gamma A}{\gamma-1}\rho^{\gamma-1}\Omega_u(A\rho^{\gamma-1}/c^2), \\
\rho&=A_1u^{\frac{1}{\gamma-1}}\Omega_{\rho}(u/c^2), \\
P&=AA_1^{\gamma}u^{\frac{\gamma}{\gamma-1}}\Omega_P(u/c^2).
\end{align*}
Here $\displaystyle A_1:=\Big(\frac{\gamma-1}{\gamma A}\Big)^{\frac{1}{\gamma-1}}$. See \cite{TOVdS}.
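For the pure polytrope $P=A\rho^{\gamma}$ (i.e. $\Omega\equiv 1$) the integral defining $u$ can be evaluated in closed form, $u=\frac{\gamma c^2}{\gamma-1}\log(1+A\rho^{\gamma-1}/c^2)$, which exhibits $\Omega_u(x)=\log(1+x)/x$ explicitly. The following sketch (with hypothetical toy values of $A$, $\gamma$, $c$) checks this closed form against direct quadrature and against the leading term.

```python
import numpy as np
from scipy.integrate import quad

# Pure polytrope P = A rho^gamma (Omega == 1); hypothetical toy values.
A, gam, c = 1.0, 4.0 / 3.0, 10.0

def u_num(rho):
    # u = int_0^rho dP/(rho + P/c^2), with dP = A*gam*s^(gam-1) ds
    f = lambda s: A * gam * s**(gam - 2) / (1.0 + A * s**(gam - 1) / c**2)
    val, _ = quad(f, 0.0, rho, limit=200)
    return val

def u_closed(rho):
    # substituting t = s^(gam-1) turns the integral into a logarithm
    x = A * rho**(gam - 1) / c**2
    return gam * c**2 / (gam - 1) * np.log1p(x)

rho = 1e-3
lead = gam * A / (gam - 1) * rho**(gam - 1)   # leading term, Omega_u(0) = 1
```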
We put
$$
m=4\pi \int_0^r\rho R^2R'dr,$$
supposing that $\rho$ is continuous at $r=0$. The coordinate $r$ can be
changed to $m$, supposing that $\rho>0$, and the equations are reduced to
\begin{subequations}
\begin{eqnarray}
e^{-F}\Big(\frac{\partial R}{\partial t}\Big)_m&=&
\Big(1+\frac{P}{c^2\rho}\Big)V, \label{Ba} \\
e^{-F}\Big(\frac{\partial V}{\partial t}\Big)_m&=&
\frac{4\pi}{c^2}R^2PV\frac{\partial V}{\partial m}-GR\Big(\frac{m}{R^3}+\frac{4\pi P}{c^2}\Big)+\frac{c^2\Lambda}{3}R+ \nonumber \\
&&-\Big(1+\frac{V^2}{c^2}-\frac{2Gm}{c^2R}-\frac{\Lambda}{3}R^2\Big)
\Big(1+\frac{P}{c^2\rho}\Big)^{-1}
4\pi R^2\frac{\partial R}{\partial m}. \label{Bb}
\end{eqnarray}
\end{subequations}
Here $(\partial/\partial t)_m$ means the differentiation with respect to $t$
keeping $m$ constant. We will change the coordinate $m$ to $r$ later
through a fixed equilibrium, and we shall construct solutions near the equilibrium.
\section{Equilibrium}
Let us consider solutions independent of $t$, that is,
$F=F(r), H=H(r), \rho=\rho(r), V\equiv 0, R\equiv r$. The
equations are reduced to the Tolman-Oppenheimer-Volkoff-de
Sitter equation
\begin{subequations}
\begin{eqnarray}
\frac{dm}{dr}&=&4\pi r^2\rho, \label{Ca} \\
\frac{dP}{dr}&=&-(\rho+P/c^2)
\frac{\displaystyle G\Big(m+\frac{4\pi r^3}{c^2}P\Big)-\frac{c^2\Lambda}{3}r^3}{r^2\displaystyle\Big(1-\frac{2Gm}{c^2r}-\frac{\Lambda}{3}r^2\Big)}. \label{Cb}
\end{eqnarray}
\end{subequations}
This equation was analyzed in \cite{TOVdS}. Let us summarize the results.
For arbitrary positive central density $\rho_c$ there
exists a unique solution germ $(m(r), P(r)), 0<r\ll 1,$ such that
\begin{subequations}
\begin{eqnarray}
m&=&\frac{4\pi}{3}\rho_cr^3+[r^2]_2r, \label{mc} \\
P&=&P_c-(\rho_c+P_c/c^2)
\Big(\frac{4\pi G}{3}(\rho_c+3P_c/c^2)-\frac{c^2\Lambda}{3}\Big)
\frac{r^2}{2}+[r^2]_2. \label{Pc}
\end{eqnarray}
\end{subequations}
Here $[X]_Q$ denotes a convergent power series of the
form $\sum_{k\geq Q}a_kX^k$.
We denote
\begin{align*}
\kappa(r,m)&:=1-\frac{2Gm}{c^2r}-\frac{\Lambda}{3}r^2, \\
Q(r, m, P)&:=G\Big(m+\frac{4\pi r^3}{c^2}P\Big)-\frac{c^2\Lambda}{3}r^3.
\end{align*}
We restrict ourselves to solutions satisfying $\kappa(r, m(r))>0$.
Moreover a solution $(m(r), P(r)), 0<r<r_+,$ of (\ref{Ca})(\ref{Cb}) is
said to be {\bf monotone-short} if $r_+<\infty$,
$dP/dr<0$ for $0<r<r_+$, that is, $Q(r, m(r), P(r))>0$, and
$P \rightarrow 0$ as $r\rightarrow r_+-0$ and if
$$\kappa_+:=\lim_{r\rightarrow r_+-0}\kappa(r,m(r))=
1-\frac{2Gm_+}{c^2r_+}-\frac{\Lambda}{3}r_+^2
$$
and
$$Q_+:=\lim_{r\rightarrow r_+-0}Q(r, m(r), P(r))=
Gm_+-\frac{c^2\Lambda}{3}r_+^3 $$
are positive. Here
$$m_+:=\lim_{r\rightarrow r_+-0}m(r)=4\pi\int_0^{r_+}
\rho(r)r^2dr.
$$
{\it We suppose that there is a monotone-short solution $(\bar{m}(r),
\bar{P}(r)), 0<r<r_+,$ satisfying (\ref{mc})(\ref{Pc}), and fix it hereafter.}\\
As for sufficient conditions for the existence of
monotone-short prolongations, see \cite{TOVdS}.
Anyway, the associated function $u=\bar{u}(r)$ turns out to be analytic
on a neighborhood of $[0, r_+]$ and
$$\bar{u}(r)=\frac{Q_+}{r_+^2\kappa_+}(r_+-r)+[r_+-r]_2$$
as $r\rightarrow r_+-0$. See \cite[Theorem 4]{TOVdS}.
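For concreteness, the Tolman-Oppenheimer-Volkoff-de Sitter system (\ref{Ca})(\ref{Cb}) can be integrated numerically by shooting from the centre with the series data (\ref{mc})(\ref{Pc}). The minimal sketch below (not from the original analysis) uses toy units $G=c=1$, a pure polytrope $P=A\rho^{\gamma}$ with $\gamma=4/3$ (so that $1/(\gamma-1)=3$ is an integer) and hypothetical values of $A$, $\rho_c$, $\Lambda$; it simply checks that the resulting solution is monotone-short, i.e. that $\kappa_+$ and $Q_+$ are positive at the surface.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy units G = c = 1; pure polytrope P = A rho^gamma with gamma = 4/3;
# A, rho_c and Lambda are hypothetical values.
G, c, Lam = 1.0, 1.0, 1e-8
A, gam = 1.0, 4.0 / 3.0
rho_c = 1e-3
P_c = A * rho_c**gam

def rhs(r, y):
    m, P = y
    rho = (max(P, 0.0) / A)**(1.0 / gam)
    Q = G * (m + 4*np.pi*r**3*P/c**2) - c**2*Lam/3 * r**3
    kap = 1.0 - 2*G*m/(c**2*r) - Lam/3 * r**2
    return [4*np.pi*r**2*rho, -(rho + P/c**2) * Q / (r**2 * kap)]

def surface(r, y):            # event: the pressure vanishes
    return y[1] - 1e-6 * P_c
surface.terminal = True

r0 = 1e-6                     # start from the series data (mc)(Pc)
y0 = [4*np.pi/3 * rho_c * r0**3, P_c]
sol = solve_ivp(rhs, (r0, 200.0), y0, events=surface,
                rtol=1e-8, atol=1e-12)

r_plus = sol.t_events[0][0]
m_plus = sol.y_events[0][0][0]
kappa_plus = 1.0 - 2*G*m_plus/(c**2*r_plus) - Lam/3 * r_plus**2
Q_plus = G*m_plus - c**2*Lam/3 * r_plus**3
```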
\section{Equations for the small perturbation from the equilibrium}
Using the fixed equilibrium $m=\bar{m}(r)$, we take the variable $r$
given by its inverse function.
We are going to seek a solution near the equilibrium of the form
$$R=r(1+y),\qquad V=rv. $$
Here $y, v$ are small unknowns. The equations turn out to be
\begin{subequations}
\begin{eqnarray}
e^{-F}\frac{\partial y}{\partial t}&=&
\Big(1+\frac{P}{c^2\rho}\Big)v, \label{Ea} \\
e^{-F}\frac{\partial v}{\partial t}&=&
\frac{(1+y)^2}{c^2}\frac{P}{\bar{\rho}}v\frac{\partial}{\partial r}(rv) + \nonumber \\
&&-G(1+y)\Big(\frac{m}{r^3(1+y)^3}+\frac{4\pi}{c^2}P\Big)+
\frac{c^2\Lambda}{3}(1+y)+ \nonumber \\
&&-\Big(1+\frac{r^2v^2}{c^2}-
\frac{2Gm}{c^2r(1+y)}-
\frac{\Lambda}{3}r^2(1+y)^2\Big)
\times \nonumber \\
&&\times
\Big(1+\frac{P}{c^2\rho}\Big)^{-1}
\frac{(1+y)^2}{\bar{\rho}r}\frac{\partial P}{\partial r}. \label{Eb}
\end{eqnarray}
\end{subequations}
Here $m=\bar{m}(r)$ is a given function and $\rho, P$ are considered
as given functions of $r$ and the unknowns $y, z(:=r\partial y/\partial r)$
as follows:
\begin{align*}
\rho&=\bar{\rho}(r)(1+y)^{-2}(1+y+z)^{-1}, \\
P&=\bar{P}(r)(1-\Gamma (\bar{u}(r))(3y+z)-\Phi(\bar{u}(r), y, z)).
\end{align*}
Here
$$\Gamma:=\frac{\rho}{P}\frac{dP}{d\rho}
$$
and $\Phi(u, y, z)$ is an analytic function of the form
$\sum_{k_0\geq 0, k_1+k_2\geq 2}u^{k_0}y^{k_1}z^{k_2}$.
We shall denote such a function by $[u;y,z]_{0;2}$ hereafter.
Moreover we shall use
\begin{align*}
1+\frac{P}{c^2\rho}&=
\overline{\Big(1+\frac{P}{c^2\rho}\Big)}
\Big(1-\frac{\bar{P}}{c^2\bar{\rho}}
\overline{\Big(1+\frac{P}{c^2\rho}\Big)^{-1}}(\Gamma-1)(3y+z)+ \\
&+[\bar{u}(r); y, z]_{0;2}\Big).
\end{align*}
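The linearisation $P=\bar{P}(1-\Gamma(3y+z)-\Phi)$ can be checked symbolically. The sketch below (a minimal check, using a pure polytrope so that $\Gamma=\gamma$ is constant) expands $P(\rho)$ with $\rho=\bar{\rho}(1+y)^{-2}(1+y+z)^{-1}$ to first order in the perturbation and recovers the coefficient $-\Gamma\bar{P}(3y+z)$.

```python
import sympy as sp

y, z, rb, A, g, e = sp.symbols('y z rho_b A gamma epsilon', positive=True)

# perturbed density rho = rho_bar (1+y)^(-2) (1+y+z)^(-1), scaled by epsilon
rho = rb * (1 + e*y)**-2 * (1 + e*y + e*z)**-1
P = A * rho**g                 # pure polytrope, for which Gamma = gamma
P_bar = A * rb**g

lin = sp.diff(P, e).subs(e, 0) # first-order term in (y, z)
# expected linear response: -Gamma * P_bar * (3y + z)
check = sp.simplify(lin + g * P_bar * (3*y + z))
```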
\section{Analysis of the linearized equation}
Let us linearize (\ref{Ea})(\ref{Eb}):
\begin{subequations}
\begin{eqnarray}
e^{-\bar{F}}\frac{\partial y}{\partial t}&=&
\overline{\Big(1+\frac{P}{c^2\rho}\Big)}v, \\
e^{-\bar{F}}\frac{\partial v}{\partial t}&=&E_2y''+E_1y'+E_0y,
\end{eqnarray}
\end{subequations}
where $y''=\partial^2y/\partial r^2, y'=\partial y/\partial r$ and
\begin{align*}
E_2&=e^{-2\bar{H}}\overline{(\rho+P/c^2)^{-1}}\overline{P\Gamma}, \\
\frac{E_1}{E_2}&=
\frac{d}{dr}\Big(\bar{H}+\bar{F}-\log(\overline{1+P/c^2\rho})+
\log(\overline{P\Gamma}r^4)\Big), \\
E_0&= \frac{4\pi G}{c^2}3\overline{(\Gamma-1)P}+ \\
&+\Big(-1-3\overline{\Gamma e^{-2H}}+
3\overline{(\Gamma-1)e^{-2H}(1+P/c^2\rho)^{-1}}\Big)
\overline{(\rho+P/c^2)^{-1}}\frac{1}{r}\frac{d\bar{P}}{dr} +\\
&+3e^{-2\bar{H}}\overline{(\rho+P/c^2)^{-1}}
\frac{1}{r}\frac{d}{dr}\overline{P\Gamma}+ \\
&+\Lambda\Big(c^2+r\frac{d\bar{u}}{dr}\Big).
\end{align*}
Here $\bar{X}, \overline{XXX}$ denote the evaluations along the fixed equilibrium. Putting
$$\mathcal{L}y:=-e^{2\bar{F}}\overline{(1+P/c^2\rho)}
(E_2y''+E_1y'+E_0y),
$$
we get the linearized wave equation
$$\frac{\partial^2y}{\partial t^2}+\mathcal{L}y=0.
$$
We can rewrite $\mathcal{L}$ in the formally self-adjoint form
$$\mathcal{L}y=-\frac{1}{b}\frac{d}{dr}a\frac{dy}{dr}+Qy,
$$
where
\begin{align*}
a&=e^{\bar{H}+\bar{F}}\frac{\overline{P\Gamma}r^4}{\overline{1+P/c^2\rho}}
\\
b&=e^{3\bar{H}-\bar{F}}
\frac{r^4\bar{\rho}}{\overline{1+P/c^2\rho}} \\
Q&=-e^{2\bar{F}}\overline{1+P/c^2\rho }E_0.
\end{align*}
It is easy to see that $Q$ is bounded on $0\leq r\leq r_+$.
Therefore \cite[Proposition 7]{ssEE} is still valid:
\begin{Proposition}
The operator $\mathfrak{T}_0,
\mathcal{D}(\mathfrak{T}_0)=
C_0^{\infty}(0,r_+)$, $\mathfrak{T}_0y=\mathcal{L}y$ in the Hilbert space
$L^2((0,r_+); b(r)dr)$ admits the Friedrichs extension $\mathfrak{T}$, a
self-adjoint operator, whose spectrum consists of
simple eigenvalues $\lambda_1<\lambda_2<\cdots<\lambda_{\nu}<\cdots
\rightarrow +\infty$.
\end{Proposition}
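The structure of this spectrum can be illustrated by a finite-difference discretisation of the formally self-adjoint form $-\frac{1}{b}(a y')'+Qy=\lambda y$. The sketch below uses constant toy coefficients (a hypothetical stand-in: the actual $a$, $b$, $Q$ are built from the equilibrium profile and degenerate at the endpoints) and recovers the simple increasing eigenvalues, here $\lambda_{\nu}\approx(\nu\pi)^2$.

```python
import numpy as np
from scipy.linalg import eigh

# Toy version of -(1/b)(a y')' + Q y = lambda y on (0, r_+) = (0, 1) with
# a = b = 1, Q = 0 (hypothetical smooth stand-ins).
n = 400
h = 1.0 / (n + 1)

a_half = np.ones(n + 1)      # a at the midpoints r_{i +/- 1/2}
b = np.ones(n)
Q = np.zeros(n)

# multiply through by b:  -(a y')' + bQ y = lambda b y,  i.e.  K y = lambda B y
K = (np.diag((a_half[:-1] + a_half[1:]) / h**2 + b * Q)
     - np.diag(a_half[1:-1] / h**2, 1)
     - np.diag(a_half[1:-1] / h**2, -1))
B = np.diag(b)

lam = eigh(K, B, eigvals_only=True)   # simple eigenvalues lambda_1 < lambda_2 < ...
```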
As in \cite{ssEE}, we introduce the new variable $x$ instead of $r$ defined by
$$x:=\frac{\tan^2\theta}{1+\tan^2\theta} \quad\mbox{with}\quad
\theta:=\frac{\pi}{2\xi_+}\int_0^r
\sqrt{\frac{\bar{\rho}}{\overline{\Gamma P}}}
e^{-\bar{F}+\bar{H}}dr.$$
Here
$$\xi_+:=\int_0^{r_+}
\sqrt{\frac{\bar{\rho}}{\overline{\Gamma P}}}
e^{-\bar{F}+\bar{H}}dr.$$
Then there are positive constants $C_0, C_1$ such that
\begin{align*}
r&=C_0\sqrt{x}(1+[x]_1) \quad \mbox{as}\quad x\rightarrow 0, \\
r_+-r&=C_1(1-x)(1+[1-x]_1)\quad
\mbox{as} \quad x \rightarrow 1
\end{align*}
Using this variable, we can write the operator $\mathcal{L}$ as
$$\Big(\frac{\xi_+}{\pi}\Big)^2\mathcal{L}y=
-x(1-x)\frac{d^2y}{dx^2}-
\Big(\frac{5}{2}(1-x)-\frac{N}{2}x\Big)\frac{dy}{dx}+
L_1(x)x(1-x)\frac{dy}{dx}+L_0(x)y,
$$
where $L_1(x), L_0(x)$ are analytic functions on a neighborhood
of $[0,1]$, and
$$N:=\frac{2\gamma}{\gamma-1},$$
which is supposed to be an even integer. Since the discussion is
quite parallel to that of \cite[\S 5]{ssEE}, we omit the details.
We may assume that $\xi_+/\pi =1$ without loss of generality, by changing the scale of $t$.
\section{Rewriting (\ref{Ea})(\ref{Eb}) using $\mathcal{L}$}
Let us go back to the equations
(\ref{Ea})(\ref{Eb}). We shall use the analysis of
$\partial P/\partial r$ given in \cite[(6.2)]{ssEE}:
\begin{align*}
-\frac{1}{r\bar{\rho}}\frac{\partial P}{\partial r}&=
-\frac{1}{r\bar{\rho}}\frac{d\bar{P}}{dr}+
(1+\partial_z\Phi/\Gamma)\frac{1}{r\bar{\rho}}
\frac{\partial}{\partial r}\overline{P\Gamma}(3y+z) + \\
&+\frac{\bar{P}}{r\bar{\rho}}\cdot[Q0]+
\frac{1}{r\bar{\rho}}\frac{d\bar{P}}{dr}\cdot
[Q1],
\end{align*}
where $[Q0], [Q1]$ are given by \cite[(6.3a)]{ssEE}, \cite[(6.3b)]{ssEE}.
We put
$$\mbox{the right-hand side of (\ref{Eb})}=[R2]+[R1]+[W]\frac{\Lambda}{3},
$$
where
\begin{align*}
[R2]&:=\frac{(1+y)^2}{c^2}\frac{P}{\bar{\rho}}v(v+w)\quad\mbox{with}\quad w=r\frac{\partial v}{\partial r}, \\
[W]&:=c^2(1+y)-
r^2(1+y)^4(1+P/c^2\rho)^{-1}\Big(-\frac{1}{r\bar{\rho}}\frac{\partial P}{\partial r}\Big).
\end{align*}
We put
$$ [R1]=[R3]+[R4]+[R5]+[R6]+[R7] $$
as in \cite{ssEE}. (The symbols $\Phi^P, \gamma^P$ of
\cite{ssEE} are replaced by $\Phi, \Gamma$ in this article.) But the analysis of $[W]$ is new:
We put
\begin{align*}
[W]&=c^2+r\frac{d\bar{u}}{dr}+[W1]+[W2]+[W3]+[W4], \\
[W1]&:=c^2y-r^2(1+y)^2(1+P/c^2\rho)^{-1}
\Big(-\frac{1}{r\bar{\rho}}\frac{d\bar{P}}{dr}\Big)-r\frac{d\bar{u}}{dr} \\
&=[W1L]+[W1Q], \\
[W1L]&:=c^2y-4r\frac{d\bar{u}}{dr}y+
r\overline{\Big(\frac{P/c^2}{\rho+P/c^2}\Big)}
(\Gamma-1)(3y+z)\frac{d\bar{u}}{dr}, \\
[W2]&:=-r^2(1+y)^4(1+P/c^2\rho)^{-1}
(1+\partial_z\Phi/\Gamma)\frac{1}{r\bar{\rho}}
\frac{\partial}{\partial r}\overline{P\Gamma}(3y+z), \\
[W3]&:=-r^2(1+y)^4(1+P/c^2\rho)^{-1}\frac{\bar{P}}{r\bar{\rho}}
[Q0], \\
[W4]&:=-r^2(1+y)^4(1+P/c^2\rho)^{-1}
\frac{1}{r\bar{\rho}}
\frac{d\bar{P}}{dr}[Q1].
\end{align*}
Then it follows from (\ref{Cb}) that
\begin{align*}
[R1]+[W]\frac{\Lambda}{3}&=[R3L]+[R3Q]+[R4L]+[R4Q]+ [R5]+[R6]+[R7]+ \\
&+([W1L]+[W1Q]+[W2]+[W3]+[W4])\frac{\Lambda}{3}.
\end{align*}
Let us define
\begin{align*}
1+G_1&=(1+\partial_z\Phi/\Gamma)\Big(1+\frac{r^2v^2}{c^2}-
\frac{2Gm}{c^2r(1+y)}-r^2(1+y)^2\frac{\Lambda}{3}\Big)\times \\
\times\Big(1-\frac{2Gm}{c^2r}-\frac{\Lambda}{3}r^2\Big)^{-1}
\frac{1+\overline{P/c^2\rho}}{1+P/c^2\rho}
(1+y)^2.
\end{align*}
Then we have
\begin{align*}
-e^{-2\bar{F}}\overline{(1+P/c^2\rho)^{-1}}\mathcal{L}y&=
[R3L]+[R4L]+[W1L]\frac{\Lambda}{3} + \\
&+\frac{1}{1+G_1}\Big([R5]+[W2]\frac{\Lambda}{3}\Big).
\end{align*}
Putting
\begin{align*}
G_2&:=G_1([R3L]+[R4L]+[W1L]\frac{\Lambda}{3})+ \\
&-([R3Q]+[R4Q]+[R6]+[R7]+[R2]) + \\
&-([W1Q]+[W3]+[W4])\frac{\Lambda}{3}, \\
H_2&:=e^FG_2, \\
H_1&:=e^{F-2\bar{F}}
\overline{(1+P/c^2\rho)^{-1}}(1+G_1),
\end{align*}
we can write
$$e^F\times(\mbox{the right-hand side of (\ref{Eb})})=-H_1\mathcal{L}y-H_2.
$$
Using this analysis, we claim, as in \cite[Proposition 11]{ssEE}:
\begin{Lemma}
We have
$$(\partial_zH_1)\mathcal{L}y+\partial_zH_2 \equiv 0 \quad \mbox{mod}
\quad (1-x) $$
as $x\rightarrow 1$.
\end{Lemma}
Here ``$Q_1\equiv Q_0 \quad\mbox{mod}\quad (1-x)$" means that there exists an analytic function $\omega(x, y, z, v, w(:=rv'),y', y'')$ such that
$Q_1=Q_0+(1-x)\omega$.
The proof is similar to that of \cite[Proposition 11]{ssEE}. We see
$$(\partial_zH_1)\mathcal{L}y+\partial_zH_2\equiv e^F[S]$$
and we have to show $[S]\equiv 0$, where
\begin{align*}
[S]&:=(\partial_zG_1)\Big(e^{-2\bar{F}}
\overline{(1+P/c^2\rho)^{-1}}
\mathcal{L}y+[R3L]+[R4L]+[W1L]\Lambda/3\Big)+ \\
&+G_1\partial_z([R3L]+[R4L]+[W1L]\Lambda/3)+ \\
&-\partial_z\Big([R3Q]+[R4Q]+[R6]+[R7]+[R2]+ ([W1Q]+[W3]+[W4])\Lambda/3\Big).
\end{align*}
But we have
$$[S]\equiv-\frac{\partial_zG_1}{1+G_1}\Big([R5]+[W2]\frac{\Lambda}{3}\Big)-
\partial_z\Big([R7]+[W4]\frac{\Lambda}{3}\Big),
$$
since $\partial_z[R3L]$, $\partial_z[R4L]$, $\partial_z[R3Q]$,
$\partial_z[R4Q]$, $\partial_z[R6]$, $\partial_z[R2]$,
$\partial_z[W1L]$, $\partial_z[W1Q]$, $\partial_z[W3]$ are all $\equiv 0$ clearly.
By a tedious calculation, we get
\begin{align*}
&-\frac{\partial_zG_1}{1+G_1}\Big([R5]+[W2]\frac{\Lambda}{3}\Big)\equiv
\partial_z\Big([R7]+[W4]\frac{\Lambda}{3}\Big) \\
&\equiv
-\partial_z^2\Phi\Big(1+\frac{r^2v^2}{c^2}
-\frac{2Gm}{c^2r(1+y)}-r^2(1+y)^2\frac{\Lambda}{3}\Big)(1+y)^2
\frac{1}{r\bar{\rho}}
\frac{d\bar{P}}{dr}(3y+z),
\end{align*}
so that $[S]\equiv 0$. This completes the proof.\\
Putting
$$J:=e^F(1+P/c^2\rho),$$
we rewrite the system of equations (\ref{Ea})(\ref{Eb}) as
\begin{subequations}
\begin{eqnarray}
&&\displaystyle \frac{\partial y}{\partial t}-Jv=0, \label{Fa} \\
&&\displaystyle \frac{\partial v}{\partial t}+H_1\mathcal{L}y+H_2=0. \label{Fb}
\end{eqnarray}
\end{subequations}
\section{Main results}
Let us fix a time periodic solution of the
linearized equation:
$$Y_1=\sin(\sqrt{\lambda}t+\Theta_0)\psi(x),$$
where $\lambda$ is a positive eigenvalue of the operator
$\mathfrak{T}$ and $\psi$ is an associated eigenfunction.
We seek a solution of the form
$$y=\varepsilon(Y_1+\check{y}),\qquad v=\varepsilon(V_1+\check{v}), $$
where $$V_1=e^{-\bar{F}}(1+\overline{P/c^2\rho})^{-1}
\frac{\partial Y_1}{\partial t}.$$
Then we have the equation
\begin{equation}
\mathfrak{P}(\vec{w})=\varepsilon \vec{c},
\quad \mbox{with}\quad \vec{w}=
\begin{bmatrix}
\check{y} \\
\check{v}
\end{bmatrix}.\label{P}
\end{equation}
The Fr\'{e}chet derivative of the nonlinear operator
$\mathfrak{P}$:
$$D\mathfrak{P}(\vec{w})\vec{h}=
\begin{bmatrix}
F_1 \\
F_2
\end{bmatrix},
\quad\mbox{with}\quad
\vec{h}=
\begin{bmatrix}
h \\
k
\end{bmatrix}
$$
is given by
\begin{align*}
F_1&=\displaystyle\frac{\partial h}{\partial t}
-Jk-\Big(
(\partial_yJ)v+(\partial_zJ)vr\frac{\partial}{\partial r}\Big)h \\
F_2&=\displaystyle\frac{\partial k}{\partial t}
+H_1\mathcal{L}h+ \\
&+\Big((\partial_yH_1)\mathcal{L}y+\partial_yH_2+
((\partial_zH_1)\mathcal{L}y+\partial_zH_2)r\frac{\partial}{\partial r}\Big)h + \\
&+\Big((\partial_vH_1)\mathcal{L}y+\partial_vH_2+
\partial_wH_2r\frac{\partial}{\partial r}\Big)k.
\end{align*}
Thanks to Lemma 1 and considerations as in \cite{ssEE} we can claim that
there are analytic functions $a_{01}$, $a_{00}$,
$a_{11}$, $a_{10}$, $a_{21}$, $a_{20}$ of $x,y, \partial_xy, \partial_x^2y,
v, \partial_xv$ such that
\begin{align*}
F_1&=\frac{\partial h}{\partial t}-Jk+\Big(a_{01}x(1-x)\frac{\partial}{\partial x}+a_{00}\Big)h, \\
F_2&=\frac{\partial k}{\partial t}+H_1\mathcal{L}h+
\Big(a_{11}x(1-x)\frac{\partial}{\partial x}+a_{10}\Big)h + \Big(a_{21}x(1-x)\frac{\partial}{\partial x}+a_{20}\Big)k.
\end{align*}
Thus we can apply the Nash-Moser(-Hamilton) theorem to get
\begin{Theorem}
Given $T>0$, there is a positive number $\epsilon_0$ such that,
for
$|\varepsilon|\leq \epsilon_0$, there is a solution
$\vec{w}\in C^{\infty}([0,T]\times [0,1])$ of (\ref{P}) such that
$$\sup_{j+k\leq n}
\Big\|\Big(\frac{\partial}{\partial t}\Big)^j\Big(\frac{\partial}{\partial x}\Big)^k\vec{w}\Big\|_{L^{\infty}([0,T]\times[0,1])}
\leq C(n)|\varepsilon|,$$
and hence a solution $(y,v)$ of (\ref{Ea})(\ref{Eb}) of the form
$y=\varepsilon Y_1+O(\varepsilon^2)$.
\end{Theorem}
Note that
$$R(t, r_+)=r_+(1+\varepsilon\sin(\sqrt{\lambda}t+\Theta_0)+O(\varepsilon^2)),$$
provided that $\psi$ has been normalized as $\psi(x=1)=1$, and that
the density distribution enjoys the `physical vacuum boundary' condition:
$$\rho(t,r)=
\begin{cases}
C(t)(r_+-r)^{\frac{1}{\gamma-1}}(1+O(r_+-r)) & (0\leq r<r_+) \\
0 & (r_+\leq r)
\end{cases}
$$
with a smooth function $C(t)$ of $t$ such that
$$C(t)=\Big(\frac{\gamma-1}{A\gamma}\frac{Q_+}{r_+^2\kappa_+}\Big)^{\frac{1}{\gamma-1}}+O(\varepsilon).
$$\\
Also we can consider the Cauchy problem
\begin{align*}
&\frac{\partial y}{\partial t}-Jv=0,\qquad
\frac{\partial v}{\partial t}+H_1\mathcal{L}y+H_2=0, \\
&y\Big|_{t=0}=\psi_0(x),\qquad
v\Big|_{t=0}=\psi_1(x).
\end{align*}
Then we have
\begin{Theorem}
Given $T>0$, there exists a small positive $\delta$ such that if
$\psi_0,\psi_1 \in C^{\infty}([0,1])$ satisfy
$$\max_{k\leq\mathfrak{K}}\Big\{\Big\|\Big(\frac{d}{dx}\Big)^k\psi_0\Big\|_{L^{\infty}},
\Big\|\Big(\frac{d}{dx}\Big)^k\psi_1\Big\|_{L^{\infty}}\Big\}\leq \delta, $$
then there exists a unique solution $(y,v)$ of the Cauchy problem
in $C^{\infty}([0,T]\times[0,1])$. Here $\mathfrak{K}$ is a sufficiently large number.
\end{Theorem}
\section{Metric in the exterior domain}
Let us consider the moving solutions constructed in the preceding
section, which are defined on $0\leq t\leq T, 0<r\leq r_+$.
We discuss the extension of the metric
onto the exterior vacuum region $r>r_+$. Keeping Birkhoff's theorem in mind, we try to patch the Schwarzschild-de Sitter metric
$$ ds^2=\kappa^{\sharp}c^2(dt^{\sharp})^2-
\frac{1}{\kappa^{\sharp}}(dR^{\sharp})^2-(R^{\sharp})^2(d\theta^2+
\sin^2\theta d\phi^2)$$
from the exterior region. Here
$t^{\sharp}=t^{\sharp}(t,r), R^{\sharp}=R^{\sharp}(t,r)$ are smooth functions of $0\leq t\leq T, r_+\leq r\leq r_++\delta$, $\delta$ being a small positive number, and
$$\kappa^{\sharp}=1-\frac{2Gm_+}{c^2R^{\sharp}}-\frac{\Lambda}{3}(R^{\sharp})^2. $$
The patched metric is
$$ds^2=
g_{00}c^2dt^2+2g_{01}cdtdr+g_{11}dr^2+
g_{22}(d\theta^2+\sin^2\theta d\phi^2),
$$
where
\begin{align*}
g_{00}&=\begin{cases}
e^{2F}=\kappa_+e^{-2u/c^2} &\quad (r\leq r_+) \\
\displaystyle \kappa^{\sharp}(\partial_tt^{\sharp})^2-\frac{1}{c^2\kappa^{\sharp}}(\partial_tR^{\sharp})^2 &\quad (r_+<r)
\end{cases}\\
g_{01}&=
\begin{cases}
0 &\quad (r\leq r_+) \\
\displaystyle c\kappa^{\sharp}(\partial_tt^{\sharp})(\partial_rt^{\sharp})-
\frac{1}{c\kappa^{\sharp}}(\partial_tR^{\sharp})(\partial_rR^{\sharp}) &\quad (r_+<r)
\end{cases}\\
g_{11}&=
\begin{cases}
-e^{2H}=\displaystyle -\Big(1+\frac{V^2}{c^2}-
\frac{2Gm}{c^2R}-\frac{\Lambda}{3}R^2\Big)^{-1}(\partial_rR)^2 &\quad(r\leq r_+) \\
\displaystyle c^2\kappa^{\sharp}(\partial_rt^{\sharp})^2-\frac{1}{\kappa^{\sharp}}(\partial_rR^{\sharp})^2 &\quad (r_+<r)
\end{cases}\\
g_{22}&=\begin{cases}
-R^2 &\quad (r\leq r_+) \\
-(R^{\sharp})^2 &\quad (r_+<r).
\end{cases}
\end{align*}
We require that $R=R^{\sharp}$ and $\partial_rR=\partial_rR^{\sharp}$ along $r=r_+$;
this is necessary for $g_{22}$ to be of class $C^1$. Moreover, in the same way as \cite[Supplementary Remark 4]{TOVdS}, we see that
$$\frac{\partial t^{\sharp}}{\partial t},\quad
\frac{\partial t^{\sharp}}{\partial r}, \quad
\frac{\partial^2t^{\sharp}}{\partial r^2}, \quad
\frac{\partial^2R^{\sharp}}{\partial r^2}$$
at $r=r_++0$ are uniquely determined so that $g_{\mu\nu}$ are of
class $C^1$ across $r=r_+$. By a tedious calculation we have
$$\frac{\partial^2R^{\sharp}}{\partial r^2}\Big|_{r_++0}
-\frac{\partial^2R}{\partial r^2}\Big|_{r_+-0}=\mathcal{A}
\Big(\frac{\partial R}{\partial r}\Big)^2,$$
where
$$\mathcal{A}=-\frac{V^2}{c^2}
\Big(\frac{Gm_+}{c^2R^2}-\frac{\Lambda}{3}R+
\frac{1}{\sqrt{\kappa_+}}\frac{1}{c^2}\frac{\partial V}{\partial t}\Big)
\Big(1+
\frac{V^2}{c^2}-\frac{2Gm_+}{c^2R}-
\frac{\Lambda}{3}R^2\Big)^{-2}\Big|_{r_+-0}.
$$
Since
$$\Big(\frac{Gm_+}{c^2R^2}-\frac{\Lambda}{3}R\Big)\Big(1+
\frac{V^2}{c^2}-\frac{2Gm_+}{c^2R}-
\frac{\Lambda}{3}R^2\Big)^{-2}\Big|_{r=r_+-0}\doteqdot \frac{Q_+}{c^2r_+^2\kappa_+^2}\not=0,
$$
we see that $\partial^2R^{\sharp}/\partial r^2 \equiv \partial^2R/\partial r^2$
if and only if $V\equiv 0$ at $r=r_+$, which
is the case if the solution under consideration is an equilibrium.\\
{\bf\Large Appendix}\\
Let us describe the abstract theorem we have used. This
has been established by the author through \cite{FE}, \cite{OJM},
\cite{ssEE}, therefore the proof is not repeated here.
First of all we introduce the following classes of functions:
Let us denote by $\mathfrak{A}([0,1])$ the set of all functions
defined and analytic on a neighborhood of the interval $[0,1]$,
by $\mathfrak{A}_q([0,1],[0]^p)$ the set of all functions $f$
defined and analytic on a neighborhood of
$[0,1]\times \{0\}\times \cdots\times\{0\} \in \mathbb{R}^{1+p}$ such that
$$
f(x, y_1,\cdots, y_p)=
\sum_{k_1+\cdots+k_p\geq q}
a_{k_1\cdots k_p}(x)y_1^{k_1}\cdots y_p^{k_p}.$$
The set of equations under consideration is
\begin{subequations}
\begin{eqnarray}
&&\frac{\partial y}{\partial t}-J(x,y,z)v=0, \label{A1a} \\
&&\frac{\partial v}{\partial t}+H_1(x,y,z,v)\mathcal{L}y+
H_2(x,y,z,v,w)=0
\label{A1b}
\end{eqnarray}
\end{subequations}
with
\begin{align}
\mathcal{L}&=-x(1-x)\frac{d^2}{dx^2}-
\Big(\frac{N_0}{2}(1-x)-\frac{N_1}{2}x\Big)\frac{d}{dx}
+ \nonumber \\
&+L_1(x)x(1-x)\frac{d}{dx}+L_0(x).
\end{align}
Here we denote $\displaystyle z=\frac{\partial y}{\partial x}, w=\frac{\partial v}{\partial x}$.
We assume:\\
{\bf (B0):}\ {\it The parameters $N_0, N_1$ are supposed to be greater than $4$.}\\
The coefficients $J, H_1, H_2,
L_1, L_0$ are supposed to be of class
\noindent $\mathfrak{A}_0([0,1], [0]^2),
\mathfrak{A}_0([0,1], [0]^3)$, $\mathfrak{A}_2([0,1], [0]^4), \mathfrak{A}([0,1]),
\mathfrak{A}([0,1])$, respectively,
and their domains are supposed to include
$U_0\times U\times U, U_0\times U\times U\times U,
U_0\times U\times U\times U \times U, U_0, U_0$,
respectively, where
$U_0$ is a neighborhood of $[0,1]$ and $U$ is a neighborhood of
0.\\
We suppose the following assumptions:\\
{\bf (B1): } {\it $H_1(x,0,0,0)=J(x,0,0)^{-1}$ and there is a constant $C>1$ such that
$$
\frac{1}{C}< J(x,0,0) <C
$$
for $\forall x \in U_0$.}\\
{\bf (B2): } {\it We have
$$
\partial_zJ\equiv 0, \qquad
(\partial_zH_1)\mathcal{L}y+
\partial_zH_2\equiv 0,
\qquad
\partial_wH_2\equiv 0
$$
as $x\rightarrow 0$ and as $x\rightarrow 1$.}\\
Here the meaning of `$\equiv 0$ as $x\rightarrow x_0$' is as follows:
Let us denote by
$\mathfrak{A}([x_0]\times [0]^p)$ the set of all functions
defined and analytic on a neighborhood of
$(x_0,0,\cdots, 0)\in \mathbb{R}^{1+p}$; A function $f$ in
$\mathfrak{A}([x_0]\times [0]^p)$ is said to satisfy
$f\equiv 0$ as $x\rightarrow x_0$ iff
$f(x_0, y_1, \cdots, y_p )=0$ for $\forall y_1, \cdots, y_p$, that is,
there is a function $\Omega$ in
$\mathfrak{A}([x_0]\times[0]^p)$ such that
$$
f(x, y_1, \cdots, y_p)=(x-x_0)\Omega(x,y_1,\cdots, y_p).
$$
In the assumption {\bf (B2)}, the functions under consideration are
regarded as functions of $x,y,Dy, D^2y, v, Dv$. Here
and hereafter $D$ stands for $\partial/\partial x$.\\
Let us fix $T>0$ arbitrarily, and fix functions
$y^*, v^* \in C^{\infty}([0,T]\times [0,1])$ such that
all $y^*, z^*=\partial y^*/\partial x, v^*, w^*=
\partial v^*/\partial x$ are confined in $U$ for $0\leq\forall t \leq T$.
We seek a solution $y,v \in C^{\infty}([0,T]\times[0,1])$ of (\ref{A1a})(\ref{A1b}) of the form
\begin{equation}
y=y^*+\tilde{y},\qquad v=v^*+\tilde{v}, \label{A4}
\end{equation}
which satisfies
\begin{equation}
\tilde{y}|_{t=0}=0,\qquad \tilde{v}|_{t=0}=0. \label{A5}
\end{equation}
The conclusion is: {\it There is a small positive
number $\delta(T)$ and a large number $\mathfrak{K}$ such that, if
\begin{equation}
\max_{j+k\leq\mathfrak{K}}
\|\partial_t^j\partial_x^k(y^*, v^*)\|_{L^{\infty}}\leq\delta(T),
\end{equation}
then there exists a solution $(y, v)$ of (\ref{A1a})(\ref{A1b})(\ref{A4})(\ref{A5}).}\\
In fact the equations for $\vec{w}=(\tilde{y},\tilde{v})^T$ turn out to be
\begin{subequations}
\begin{eqnarray}
&&\frac{\partial \tilde{y}}{\partial t}-J\tilde{v}-(\Delta J)v^*=c_1, \label{A8a}\\
&&\frac{\partial \tilde{v}}{\partial t}+H_1\mathcal{L}\tilde{y}+
(\Delta H_1)\mathcal{L}y^*+\Delta H_2=c_2, \label{A8b}
\end{eqnarray}
\end{subequations}
where
\begin{subequations}
\begin{eqnarray}
J&=&J(x,y^*+\tilde{y}, z^*+\tilde{z})\quad\mbox{with}\quad \tilde{z}=
\frac{\partial \tilde{y}}{\partial x}, \\
\Delta J&=&J(x, y^*+\tilde{y}, z^*+\tilde{z})-J(x,y^*, z^*), \\
c_1&=&-\frac{\partial y^*}{\partial t}
+J(x,y^*,z^*)v^*, \\
H_1&=&H_1(x,y^*+\tilde{y}, z^*+\tilde{z}, v^*+\tilde{v}, w^*+\tilde{w}) \nonumber \\
&&\quad\mbox{with}\quad
\tilde{w}=\frac{\partial \tilde{v}}{\partial x}, \\
\Delta H_1&=&
H_1(x,y^*+\tilde{y}, z^*+\tilde{z}, v^*+\tilde{v})-
H_1(x,y^*, z^*, v^*), \\
\Delta H_2&=&H_2(x,y^*+\tilde{y}, z^*+\tilde{z}, v^*+\tilde{v}, w^*+\tilde{w}) \nonumber \\
&&-H_2(x,y^*, z^*, v^*, w^*), \\
c_2&=&-\frac{\partial v^*}{\partial t}-
H_1(x,y^*,z^*,v^*)\mathcal{L}y^*-
H_2(x,y^*,z^*,v^*,w^*).
\end{eqnarray}
\end{subequations}
We write the equations (\ref{A8a})(\ref{A8b}) as $\mathfrak{P}(\vec{w})=\vec{c}$,
where $\vec{c}=(c_1,c_2)^T$. The domain of the nonlinear
mapping
$\mathfrak{P}$ is $\mathfrak{U}$, the set of all functions $\vec{w}=(\tilde{y},\tilde{v})^T\in \vec{\mathfrak{E}}_0$ such that
\begin{equation}
|\tilde{y}|+|D\tilde{y}|+|\tilde{v}|+|D\tilde{v}|<\epsilon_0.\label{A12}
\end{equation}
Here $\epsilon_0$ is small so that
(\ref{A12}) implies $y,z,v,w \in U$, and
$\vec{\mathfrak{E}}=\mathfrak{E}\times\mathfrak{E},
\mathfrak{E}=C^{\infty}([0,T]\times[0,1]),
\vec{\mathfrak{E}}_0=\{(\phi,
\psi)\in\vec{\mathfrak{E}}\ |\
\phi|_{t=0}=\psi|_{t=0}=0\}$. The Fr\'{e}chet derivative
$D\mathfrak{P}$ of $\mathfrak{P}$ is
\begin{equation}
D\mathfrak{P}(\vec{w})\vec{h}=
\begin{bmatrix}
F_1 \\
F_2
\end{bmatrix}
\qquad \mbox{for}\qquad \vec{h}=
\begin{bmatrix}
h \\
k
\end{bmatrix},
\end{equation}
where
\begin{subequations}
\begin{eqnarray}
F_1&=&\frac{\partial h}{\partial t}
-Jk-\Big((\partial_yJ)v+
(\partial_zJ)v\frac{\partial}{\partial x}\Big)h, \\
F_2&=& \frac{\partial k}{\partial t}+H_1\mathcal{L}h \nonumber \\
&&+\Big((\partial_yH_1)\mathcal{L}y+\partial_yH_2+
((\partial_zH_1)\mathcal{L}y+\partial_zH_2)
\frac{\partial}{\partial x}\Big)h
\nonumber \\
&&+\Big((\partial_vH_1)\mathcal{L}y+
\partial_vH_2+
\partial_wH_2\cdot \frac{\partial}{\partial x}\Big)k.
\end{eqnarray}
\end{subequations}
Thanks to {\bf (B2)} we see that there
are $a_{\mu\nu} \in \mathfrak{A}([0,1]\times[0]^5)$, $\mu=0,1,2$, $\nu=0,1$,
such that
\begin{subequations}
\begin{eqnarray}
F_1&=&\frac{\partial h}{\partial t}
-Jk+
(a_{01}x(1-x)D+a_{00})h, \\
F_2&=&\frac{\partial k}{\partial t}+H_1\mathcal{L}h \nonumber \\
&&+(a_{11}x(1-x)D+a_{10})h+
(a_{21}x(1-x)D+a_{20})k,
\end{eqnarray}
\end{subequations}
where
$a_{\mu\nu}$ are analytic functions of $x,y,Dy, D^2y, v, Dv$.
In this situation, we can apply the Nash-Moser(-Hamilton) theorem.\\
\section{Introduction}
Let $G$ be a finite group, and let $\textsf{Irr}(G)$ be the set of all complex irreducible characters of $G$. Denote by $\cd(G)$ the set of character degrees of $G$, that is, $\cd(G)=\{\chi(1)\vert \chi \in \textsf{Irr}(G)\}$.
It is well-known that the complex group algebra $\mathbb{C}G$ determines character degrees and their multiplicities.
There is growing interest in the question of how much of the structure of $G$ can be recovered from the character degree set of $G$, with or without multiplicity. It is well known that the character degree set of $G$ cannot be used to completely determine the structure of $G$. For example, the non-isomorphic groups $D_8$ and $Q_8$ not only have the same set of character degrees, but also share the same character table. The character degree set cannot even be used to distinguish between solvable and nilpotent groups: if $G$ is either $Q_{8}$ or $S_{3}$, then $\cd(G) = \{1, 2\}$. Recently, Navarro \cite{Navarro15:cdsol} showed that the character degree set alone cannot determine the solvability of the group. Indeed, he constructed a finite perfect group $H$ and a finite solvable group $G$ such that $\cd(G) = \cd(H)$. It was also discovered by Navarro and Rizo \cite{Novarro14:cdnil} that there exist a finite perfect group and a finite nilpotent group with the same character degree set. Notice that in both examples the finite perfect groups are not nonabelian simple. It remains open whether the complex group algebra determines the solvability of the group (see Brauer's Problem 2 \cite{Brauer63}).
However, the situation for simple groups and related groups is rather different. It has been proved recently that all \emph{quasisimple} groups are uniquely determined up to isomorphism by their complex group algebras~\cite{Bessenrodt15:doublecover}; recall that a finite group $G$ is quasisimple if $G$ is perfect and $G/Z(G)$ is a nonabelian simple group. In the late 1990s, Huppert \cite{Hupp-I} posed a conjecture asserting that the nonabelian simple groups are essentially characterized by the sets of their character degrees.
\begin{conjecture}{(Huppert)} Let $G$ be a finite group and $H$ a finite nonabelian simple group such that the sets of character degrees of $G$ and $H$ are the same. Then $G \cong H \times A$, where $A$ is an abelian group.
\end{conjecture}
This conjecture has been verified for the alternating groups, many of the simple groups of Lie type \cite{Tong12:L4,Wakefield09:L3,Wakefield11:2G2} and all sporadic simple groups \cite{ADTW-Fi23,ADTW-2013,Hupp-II-VIII,HT}.
In this paper, we initiate an investigation of an extension of Huppert's conjecture to almost simple groups whose socle is a sporadic simple group. A group $H$ is called \emph{almost simple} if there exists a nonabelian simple group $H_{0}$ such that $H_{0}\leq H\leq \textsf{Aut}(H_{0})$. Indeed, this paper is devoted to studying finite groups with the same character degrees as an almost simple group $H$ whose socle $H_{0}$ is one of the Mathieu groups:
\begin{theorem}\label{thm:main}
Let $G$ be a finite group, and let $H$ be an almost simple group whose socle is one of the Mathieu groups. If $\cd(G)=\cd(H)$, then there exists an abelian group $A$ such that $G/A$ is isomorphic to $H$.
\end{theorem}
In order to prove Theorem~\ref{thm:main}, we establish the following steps which Huppert introduced in \cite{Hupp-I}. Let $H$ be an almost simple group with socle $H_{0}$, and let $G$ be a group with the same character degrees as $H$. Then we show that
\begin{description}
\item[Step 1.] $G'=G''$;
\item[Step 2.] if $G'/M$ is a chief factor of $G$, then $G'/M$ is isomorphic to $H_{0}$;
\item[Step 3.] if $\theta \in \textsf{Irr}(M)$ with $\theta(1)=1$, then $I_{G'}(\theta)=G'$ and so $M=M'$;
\item[Step 4.] $M=1$ and $G'\cong H_{0}$;
\item[Step 5.] $G/C_{G}(G')$ is isomorphic to $H$.
\end{description}
Note that to prove Step 2, we determine all finite simple groups whose irreducible character degrees divide some irreducible character degrees of almost simple groups with sporadic socle; by Proposition~\ref{prop:simple}, all such simple groups are listed in Table~\ref{tbl:simple}. This result is related to \cite[Theorem 1]{Tong12:CAlg-Alt-Spor} for sporadic simple groups.
\section{Preliminaries}\label{sec:prem}
In this section, we present some useful results to prove Theorem~\ref{thm:main}. We first establish some definitions and notation.
Throughout this paper all groups are finite. Recall that a group $H$ is said to be an almost simple group with socle $H_{0}$ if $H_{0}\leq H\leq \textsf{Aut}(H_{0})$, where $H_{0}$ is a nonabelian simple group. For a positive integer $n$, $\pi(n)$ denotes the set of all prime divisors of $n$. If $G$ is a group, we will write $\pi(G)$ instead of $\pi(|G|)$. If $N\unlhd G$ and $\theta\in \textrm{Irr}(N)$, then the inertia group of
$\theta$ in $G$ is denoted by $I_G(\theta)$ and is defined by $I_G(\theta)=\{g\in G\ |\ \theta^g=\theta\}$. If the character $\chi=\sum_{i=1}^k e_i\chi_i$, where each $\chi_i$ is an irreducible character of $G$ and $e_i$ is a nonnegative integer, then those $\chi_i$ with $e_i>0$ are called the \textsl{irreducible constituents} of $\chi$. The set of all irreducible constituents of $\theta^G$ is denoted by $\textrm{\textsf{Irr}}(G|\theta)$. All further notation and definitions are standard and could be found in \cite{HuppBook,Isaacs-book}.
\begin{lemma}~\cite[Theorems 19.5 and 21.3]{HuppBook}\label{lem:gal}
Suppose $N\unlhd G$ and $\chi\in {\rm{\textsf{Irr}}}(G)$.
\begin{enumerate}
\item[(a)] If $\chi_N=\theta_1+\theta_2+\cdots+\theta_k$ with $\theta_i\in {\rm{\textsf{Irr}}}(N)$, then $k$ divides $|G/N|$. In particular, if $\chi(1)$ is prime to $|G/N|$, then $\chi_N\in {\rm{\textsf{Irr}}}(N)$.
\item[(b)] (Gallagher's Theorem) If $\chi_N\in {\rm{\textsf{Irr}}}(N)$, then $\chi\psi\in {\rm{\textsf{Irr}}}(G)$ for all $\psi\in {\rm{\textsf{Irr}}}(G/N)$.
\end{enumerate}
\end{lemma}
\begin{lemma}~\cite[Theorems 19.6 and 21.2]{HuppBook}\label{lem:clif}
Suppose $N\unlhd G$ and $\theta\in {\rm{\textsf{Irr}}}(N)$. Let $I=I_G(\theta)$.
\begin{enumerate}
\item[(a)] If $\theta^I=\sum_{i=1}^k\phi_i$ with $\phi_i\in {\rm{\textsf{Irr}}}(I)$, then $\phi_i^G\in {\rm{\textsf{Irr}}}(G)$. In particular, $\phi_i(1)|G:I|\in \cd(G)$.
\item[(b)] If $\theta$ extends to $\psi\in {\rm{\textsf{Irr}}}(I)$, then $(\psi\tau )^G\in {\rm{\textsf{Irr}}}(G)$ for all $\tau\in {\rm{\textsf{Irr}}}(I/N)$. In particular, $\theta(1)\tau(1)|G:I|\in {\rm{\cd}}(G)$.
\item[(c)] If $\rho \in {\rm{\textsf{Irr}}}(I)$ such that $\rho_N=e\theta$, then $\rho=\theta_0\tau_0$, where $\theta_0$ is a character of an irreducible projective representation of $I$ of degree $\theta(1)$ and $\tau_0$ is a character of an irreducible projective representation of $I/N$ of degree $e$.
\end{enumerate}
\end{lemma}
\begin{lemma}~\cite[Lemma~3]{HT}\label{lem:factsolv} Let $G/N$ be a solvable factor group of
$G$ minimal with respect to being nonabelian. Then two cases can occur.
\begin{enumerate}
\item[(a)] $G/N$ is an $r$-group for some prime $r$. Hence there exists $\psi\in {\rm{\textsf{Irr}}}(G/N)$ such that $\psi(1)=r^b>1$. If $\chi\in {\rm{\textsf{Irr}}}(G)$ and $r\nmid \chi(1)$, then $\chi\tau\in {\rm{\textsf{Irr}}}(G)$ for all $\tau\in {\rm{\textsf{Irr}}}(G/N)$.
\item[(b)] $G/N$ is a Frobenius group with an elementary abelian Frobenius kernel $F/N$. Then $f=|G:F|\in {\rm{\cd}}(G)$ and $|F/N|=r^a$ for some prime $r$, and $a$ is the smallest integer such that $r^a\equiv 1 \mod f$. If $\psi\in {\rm{\textsf{Irr}}}(F)$, then either $f\psi(1)\in {\rm{\cd}}(G)$ or $r^a$ divides $\psi(1)^2$. In the latter case, $r$ divides $\psi(1)$.
\begin{enumerate}
\item[(1)] If no proper multiple of $f$ is in ${\rm{\cd}}(G)$, then $\chi(1)$ divides $f$ for all $\chi\in {\rm{\textsf{Irr}}}(G)$ such that $r\nmid \chi(1)$, and if $\chi\in {\rm{\textsf{Irr}}}(G)$ such that $\chi(1)\nmid f$, then $r^a\mid \chi(1)^2$.
\item[(2)] If $\chi\in {\rm{\textsf{Irr}}}(G)$ such that no proper multiple of $\chi(1)$ is in ${\rm{\cd}}(G)$, then either $f$ divides $\chi(1)$ or $r^a$ divides $\chi(1)^2$. Moreover if $\chi(1)$ is divisible by no nontrivial proper character degree in $G$, then $f=\chi(1)$ or $r^a\mid \chi(1)^2$.
\end{enumerate}
\end{enumerate}
\end{lemma}
\begin{lemma}\label{lem:exten}
Let $G$ be a finite group.
\begin{enumerate}
\item[(a)] If $G$ is a nonabelian simple group, then there exists a nontrivial irreducible character $\varphi$ of $G$ that extends to ${\rm{\textsf{Aut}}}(G)$.
\item[(b)] If $N$ is a minimal normal subgroup of $G$ so that $N\cong S^k$, where $S$ is a nonabelian simple group, and $\varphi\in {\rm{\textsf{Irr}}}(S)$ extends to ${\rm{\textsf{Aut}}}(S)$, then $\varphi^k\in {\rm{\textsf{Irr}}}(N)$ extends to $G$.
\end{enumerate}
\end{lemma}
\begin{proof}
Part (a) follows from~\cite[Theorems~2-4]{Bia} and to prove part (b) see~\cite[Lemma 5]{Bia}.
\end{proof}
\begin{lemma}~\cite[Lemma 6]{Hupp-I}\label{lem:schur}
Suppose that $M\unlhd G'=G''$ and for every $\lambda\in {\rm{\textsf{Irr}}}(M)$ with $\lambda(1)=1$, $\lambda^g=\lambda$ for all $g\in G'$. Then $M'=[M,G']$ and $|M/M'|$ divides the order of the Schur multiplier of $G'/M$.
\end{lemma}
\begin{lemma}~\cite[Theorem D]{Moreto}\label{lem:sol}
Let $N$ be a normal subgroup of a finite group $G$ and let $\varphi \in \textsf{Irr}(N)$ be
$G$-invariant. Assume that $\chi(1)/\varphi(1)$ is odd, for all $\chi\in \textsf{Irr}(G|\varphi)$. Then $G/N$ is solvable.
\end{lemma}
\section{Degree properties of almost simple groups with sporadic socle}\label{sec:simple}
In this section, we determine all finite simple groups whose irreducible character degrees divide some irreducible character degrees of almost simple groups whose socles are sporadic simple groups.
\begin{proposition}\label{prop:simple}
Let $S$ be a simple group, and let $H$ be an almost simple group whose socle is a sporadic simple group. Then the character degrees of $S$ divide some character degrees of $H$ if and only if $H$ and $S$ are as in Table~\ref{tbl:simple}.
\end{proposition}
\begin{proof}
Suppose that $H$ is an almost simple group with socle a sporadic simple group $H_{0}$. Suppose also that $S$ is a simple group whose degrees divide some degrees of $H$. By the It\^{o}--Michler theorem, the prime divisors of the degrees of $S$ are exactly the primes dividing $|S|$. Therefore $\pi(S)\subseteq \pi(H)$, and hence by~\cite{Zavar-2008} we know all possible such simple groups $S$. We only need to check whether the degrees of $S$ divide some degrees of $H$, and this can mainly be done using \cite{Atlas,Gap4}. Since our arguments are similar for each group $H$, we only give a detailed proof for the cases where $H$ is $M_{12}:2$ or $M_{22}:2$, which will be frequently used and referred to in Sections~\ref{sec:M12.2} and~\ref{sec:M22.2}.
Suppose first $H=M_{12}:2$. Then $\pi(S)\subseteq \{2,3,5,11\}$, and so by~\cite{Zavar-2008}, $S$ is isomorphic to one of the simple groups $A_5$, $A_6$, $L_2(11)$, $U_5(2)$, $S_4(3)$, $M_{11}$ and $M_{12}$. Note that $U_5(2)$ and $S_4(3)$ have degrees $220$ and $81$, respectively. Therefore, $S$ is isomorphic to $A_5$, $A_6$, $L_2(11)$, $M_{11}$ or $M_{12}$.
Suppose now $H=M_{22}:2$. Then $\pi(S)\subseteq \{2,3,5,7,11\}$, and so by~\cite{Zavar-2008}, $S$ is isomorphic to one of the simple groups in $\mathcal{A}_{1}\cup \mathcal{A}_{2}\cup\mathcal{A}_{3}$, where
\begin{align*}
\mathcal{A}_{1}:=\{&A_5 , A_6,A_7, L_2(7) , L_2(8), M_{22}\}, \\
\mathcal{A}_{2}:=\{&A_{8}, L_{3}(4), L_{2}(49), U_3(3), S_4(7), M_{11}\},\\
\mathcal{A}_{3}:=\{&A_9, A_{10}, A_{11}, A_{12}, L_2(11), U_3(5),U_4(3),U_5(2), U_6(2),\\
&S_4(3), S_6(2), O_8^{+}(2), M_{12}, McL, HS, J_2\}.
\end{align*}
If $S\in \mathcal{A}_{2}$, then $S$ has a degree divisible by $25$, $27$, $44$, $49$ or $64$, which is a contradiction. If $S\in \mathcal{A}_{3}$, then $S$ has a degree which is divisible by $12$, which is also a contradiction. Therefore $S\in \mathcal{A}_{1}$ as claimed.
\end{proof}
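The eliminations in the proof above amount to elementary divisibility checks, which can be confirmed mechanically. The following Python sketch is an informal sanity check, not part of the proof; the helper \texttt{divides\_some} is ours, and the degree lists of $M_{12}{:}2$ and $M_{22}{:}2$ are the ATLAS degrees recorded in Lemmas~\ref{lem:M12.2}(b) and~\ref{lem:M22.2}(b).

```python
# Degrees of M12:2 and M22:2 (ATLAS data, as quoted in the lemmas below).
M12_2 = [1, 22, 32, 45, 54, 55, 66, 99, 110, 120, 144, 176]
M22_2 = [1, 21, 45, 55, 99, 154, 210, 231, 385, 560]

def divides_some(d, degrees):
    """Return True iff d divides at least one of the listed degrees."""
    return any(x % d == 0 for x in degrees)

# H = M12:2: the degrees 220 (of U5(2)) and 81 (of S4(3)) divide no degree of H.
assert not divides_some(220, M12_2)
assert not divides_some(81, M12_2)

# H = M22:2: the degrees eliminating the groups in A_2 divide no degree of H ...
assert all(not divides_some(d, M22_2) for d in (25, 27, 44, 49, 64))
# ... and neither does 12, which eliminates the groups in A_3.
assert not divides_some(12, M22_2)
```

Running the script raises no assertion error, confirming the divisibility claims.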
\begin{longtable}{lp{9.5cm}}
\caption{Simple groups $S$ whose irreducible character degrees divide some character degrees of almost simple groups $H$ with socle sporadic simple groups.}\label{tbl:simple}\\
\hline
\multicolumn{1}{c}{$H$} & \multicolumn{1}{c}{$S$} \\
\hline
\endfirsthead
\multicolumn{2}{c}%
{\tablename\ \thetable\ -- Continued} \\
\hline
\multicolumn{1}{c}{$H$} & \multicolumn{1}{c}{$S$} \\
\hline
\endhead
\hline \multicolumn{2}{r}{Continued}\\
\endfoot
\hline
\endlastfoot
%
$M_{11}$ &
$A_n$ for $n=5,6$, $M_{11}$ \\
%
$M_{12}$, $M_{12}:2$ &
$A_n$ for $n=5,6$, $L_2(11)$, $M_{11}$, $M_{12}$ \\
%
$M_{22}$, $M_{22}:2$ &
$A_n$ for $n=5,6,7$, $L_2(q)$ for $q=7,8$, $M_{22}$\\
%
$M_{23}$ &
$A_n$ for $n=5,6,7$, $L_2(q)$ for $q=7,8$, $M_{11}$, $M_{23}$ \\
%
$M_{24}$ &
$A_n$ for $n=5,6,7,8$, $L_2(q)$ for $q=7,8,11,23$, $L_3(4)$, $U_3(3)$, $M_{11}$, $M_{24}$ \\
%
$J_1$ &
$A_5$, $L_2(q)$ for $q=7,11$, $J_1$ \\
%
$J_2$ &
$A_n$ for $n=5,6,7$, $L_2(q)$ for $q=7,8$, $U_3(3)$, $J_2$ \\
$J_2:2$ & $A_8$, $L_{3}(4)$, and all $S$ in $J_{2}$ \\
%
$J_3$, $J_3:2$ &
$A_n$ for $n=5,6$, $S_4(3)$, $L_2(q)$ for $q=16,17,19$, $J_3$ \\
%
$J_4$ &
$A_n$ for $n=5,6,7,8$, $L_2(q)$ for $q=7,8,11,23,29,31,32,43$, $L_3(4)$, $L_5(2)$, $U_3(3)$, $U_3(11)$, $M_{11}$, $M_{12}$, $M_{22}$, $J_4$ \\
%
$HS$, $HS:2$&
$A_n$ for $n=5,6,7,8$, $L_2(q)$ for $q=7,8,11$, $L_3(4)$, $M_{11}$, $M_{22}$, $HS$ \\
%
$McL$, $McL:2$&
$A_n$ for $n=5,6,7,8$, $L_2(q)$ for $q=7$, $8$, $11$, $L_3(4)$, $U_3(3)$, $U_3(5)$, $S_4(3)$, $M_{11}$, $McL$ \\
%
$Suz$ &
$A_n$ for $n=5,\ldots,10$, $L_2(q)$ for $q=7$, $8$, $11$, $13$, $25$, $27$, $64$,
$L_3(3)$, $L_3(4)$, $L_3(9)$, $L_4(3)$,
$U_3(3)$, $U_3(4)$, $U_4(2)$, $U_4(3)$,
$U_5(2)$,
$S_6(2)$,
$^2B_2(8)$, $G_2(3)$,$^2F_4(2)'$,
$M_{11}$, $M_{12}$, $M_{22}$, $Suz$, $J_2$ \\
%
$Suz:2$ &
$A_{11}$, and all $S$ in $Suz$ \\
%
$Co_3$ &
$A_n$ for $n=5$, $6$, $7$, $8$, $9$, $11$, $L_2(q)$ for $q=7$, $8$, $11$, $23$, $L_3(4)$, $U_3(q)$ for $q=3,5$, $U_4(3)$, $S_4(3)$, $S_6(2)$, $M_{11}$, $M_{12}$, $M_{22}$, $M_{23}$, $M_{24}$, $Co_3$ \\
%
$Co_2$ &
$A_n$ for $n=5,\ldots,11$, $L_2(q)$ for $q=7,8,11,23$, $L_3(4)$, $U_3(q)$ for $q=3,5$, $U_4(3)$, $U_5(2)$, $S_4(3)$, $S_6(2)$, $M_{11}$, $M_{12}$, $M_{22}$, $M_{23}$, $M_{24}$, $J_2$, $Co_2$ \\
%
$Co_1$ &
$A_n$ for $n=5,\ldots,16$, $L_2(q)$ for $q=7$, $8$, $11$, $13$, $23$, $25$, $27$, $49$, $64$, $L_3(q)$ for $q=3$, $4$, $9$, $L_4(3)$, $U_3(q)$ for $q=3$, $4$, $5$, $U_4(3)$,$U_5(2)$, $U_6(2)$, $S_4(q)$ for $q=3$, $5$, $8$, $S_6(q)$ for $q=2,3$, $^2B_2(8)$, $O_7(3)$, $O^+_8(2)$, $G_2(q)$ for $q=3,4$, $^3D_4(2)$, $^2F_4(2)'$, $M_{11}$, $M_{12}$, $M_{22}$, $M_{23}$, $M_{24}$, $McL$,
$J_2$, $HS$, $Co_1$, $Co_{3}$ \\
%
$He$, $He:2$ &
$A_n$ for $n=5,6,7,8$, $L_2(q)$ for $q=7,8,16,17,49$, $ L_3(4)$, $U_3(3)$, $S_4(4)$, $He$ \\
%
$Fi_{22}$ &
$A_n$ for $n=5,\ldots,11$, $A_{13}$, $L_2(q)$ for $q=7$, $8$, $11$, $13$, $25$, $27$, $64$, $L_3(q)$ for $q=3,4,9$, $L_4(3)$, $U_3(q)$ for $q=3$, $4$, $U_{4}(3)$, $U_5(2)$, $U_6(2)$, $S_4(3)$, $S_6(2)$, $^2B_2(8)$, $O^+_8(2)$, $G_2(q)$ for $q=3,4$, $M_{11}$, $M_{12}$, $M_{22}$, $J_2$, $McL$, $Fi_{22}$\\
%
$Fi_{22}:2$ & $S_6(3)$, $O_7(3)$, and all $S$ in $Fi_{22}$.\\
%
$Fi_{23}$ &
$A_n$ for $n=5,\ldots,13$, $L_2(q)$ for $q=7$, $8$, $11$, $13$, $16$, $17$, $23$, $25$, $27$, $64$, $L_3(q)$ for $q=3$, $4$, $9$, $16^{2}$, $L_4(q)$ for $q=3$, $4$, $U_3(q)$ for $q=3$, $4$, $U_4(q)$ for $q=3$, $4$, $U_5(2)$, $S_4(3)$, $S_4(4)$, $S_6(2)$, $S_6(3)$, $^2B_2(8)$, $O_7(3)$, $O^+_8(2)$, $O^-_8(2)$, $G_2(q)$ for $q=3,4$, $^2F_4(2)'$,
$M_{11}$, $M_{12}$, $M_{22}$, $M_{23}$, $M_{24}$, $HS$, $J_2$, $Fi_{23}$\\
%
$Fi'_{24}$, $Fi'_{24}:2$ &
$A_n$ for $n=5,\ldots,14$, $L_2(q)$ for $q=7$, $8$, $11$, $13$, $16$, $17$, $23$, $25$, $27$, $29$, $49$, $64$, $L_3(q)$ for $q=3$, $4$, $9$, $16^{2}$, $L_4(q)$ for $q=3$, $4$, $L_{6}(3)$, $U_3(q)$ for $q=3$, $4$, $U_{4}(3)$, $U_5(2)$, $S_4(q)$ for $q=3$, $4$, $8$, $S_6(2)$, $S_6(3)$, $S_8(2)$, $^2B_2(8)$, $O_7(3)$, $O^\pm_8(2)$, $G_2(q)$ for $q=3$, $4$, $^3D_4(2)$, $^2F_4(2)'$, $M_{11}$, $M_{12}$, $M_{22}$, $M_{23}$, $M_{24}$, $He$, $J_2$, $Fi'_{24}$\\
%
$Th$ &
$A_n$ for $n=5,\ldots,10$, $L_2(q)$ for $q=7$, $8$, $13$, $19$, $25$, $27$, $31$, $49$, $64$, $L_3(q)$ for $q=3$, $4$, $5$, $9$, $L_4(3)$, $L_5(2)$, $U_3(q)$ for $q=3$, $4$, $5$, $8$, $S_4(3)$, $S_4(8)$,$S_6(2)$, $^2B_2(8)$, $^{3}D_{4}(2)$, $G_2(q)$ for $q=3$, $4$, $^2F_4(2)'$, $J_2$, $Th$ \\
%
$Ru$ &
$A_n$ for $n=5,\ldots,8$, $L_2(q)$ for $q=7$, $8$, $13$, $25$, $27$, $29$, $64$, $L_3(q)$ for $q=3$, $4$, $U_3(q)$ for $q=3$, $4$, $5$, $J_2$, $Ru$ \\
%
$Ly$ &
$A_n$ for $n=5,\ldots,9$, $A_{11}$, $L_2(q)$ for $q=7$, $8$, $11$, $31$, $32$, $L_3(q)$ for $q=4$, $5$, $U_3(q)$ for $q=3$, $5$, $S_4(3)$, $M_{11}$, $M_{12}$, $M_{22}$, $J_2$, $McL$, $HS$, $Ly$\\
%
$HN$, $HN:2$ &
$A_n$ for $n=5,\ldots,10$, $L_2(q)$ for $q=7$, $8$, $11$, $19$, $L_3(q)$ for $q=4$, $19$, $L_{4}(7)$, $U_3(q)$ for $q=3$, $5$, $8$, $U_4(3)$, $S_4(3)$, $O^{+}_{8}(2)$, $M_{11}$, $M_{12}$, $M_{22}$, $J_1$, $J_{2}$, $HS$, $HN$ \\
%
$O'N$, $O'N:2$ &
$A_n$ for $n=5,\ldots,8$, $L_2(q)$ for $q=7$, $8$, $11$, $16$, $17$, $19$, $31$, $32$, $L_3(q)$ for $q=4$, $7$, $U_3(q)$ for $q=3$, $8$, $S_{4}(3)$, $S_6(2)$, $M_{11}$, $M_{12}$, $M_{22}$, $O'N$ \\
%
$B$ &
$A_n$ for $n=5,\ldots,28$, $L_2(q)$ for $q=7$, $8$, $11$, $13$, $16$, $17$, $19$, $23$, $27$, $31$, $32$, $47$, $49$, $64$, $125$, $L_3(q)$ for $q=3$, $4$, $5$, $9$, $16$, $25$, $L_4(q)$ for $q=3$, $4$, $5$, $L_5(2)$, $L_{5}(4)$, $L_6(2)$, $L_{6}(4)$, $U_3(q)$ for $q=3,4,5,8$, $U_4(q)$ for $q=2$, $3$, $4$, $5$, $8$, $U_5(2)$, $U_6(2)$, $S_4(q)$ for $q=4,5,7,8$, $G_2(q)$ for $q=3$, $4$, $5$, $O_7(3)$, $O^\pm_8(2)$, $O^+_8(3)$, $O^\pm_{10}(2)$, $O^\pm_{12}(2)$, $^2F_4(2)'$, $^3D_4(2)$, $M_{11}$, $M_{12}$, $M_{22}$, $M_{23}$, $M_{24}$, $J_1$, $J_2$ , $J_3$, $HS$, $McL$, $Suz$, $Fi_{22}$, $Co_3$, $Co_2$, $Th$, $B$ \\
%
$M$ &
$A_n$ for $n=5,\ldots,36$, $L_2(q)$ for $q=7$, $8$, $11$, $13$, $16$, $17$, $19$, $23$, $25$, $27$, $29$, $31$, $32$, $41$, $47$, $49$, $59$, $64$, $71$, $81$, $169$, $1024$, $L_3(q)$ for $q=3$, $4$, $5$, $7$, $9$, $16$, $19$, $25$, $L_4(q)$ for $q=3$, $4$, $5$, $7$, $9$, $L_5(q)$ for $q=2$, $4$, $L_6(q)$ for $q=2$, $3$, $4$, $U_3(q)$ for $q=3$, $4$, $5$, $8$, $27$, $U_4(q)$ for $q=2$, $3$, $4$, $5$, $8$, $9$, $U_5(q)$ for $q=2$, $4$, $U_6(q)$ for $q=2$, $4$, $S_4(q)$ for $q=4$, $5$, $7$, $8$, $9$, $S_6(q)$ for $q=2$, $3$, $4$, $5$, $S_8(2)$, $S_{10}(2)$, $^2B_2(8)$, $^2B_2(32)$, $G_2(q)$ for $q=3,4,5$, $O_7(3)$, $O_7(5)$, $O_9(3)$, $O^\pm_8(2)$, $O^\pm_8(3)$, $O^\pm_{10}(2)$, $O^\pm_{10}(3)$, $O^\pm_{12}(2)$, $^{2}G_{2}(27)$, $^2F_4(2)'$, $F_4(2)$, $^3D_4(2)$, $M_{11}$, $M_{12}$, $M_{22}$, $M_{23}$, $M_{24}$, $J_1$, $J_2$, $J_3$, $HS$, $McL$, $Suz$, $Fi_{22}$, $Co_3$, $Co_2$, $Th$, $He$, $O'N$, $Ru$, $M$ \\
\hline
\end{longtable}
\section{Groups with socle $M_{12}$}\label{sec:M12.2}
In this section, we prove Theorem~\ref{thm:main} for almost simple groups $H$ whose socle is $H_{0}:=M_{12}$. By \cite{Hupp-II-VIII}, Theorem~\ref{thm:main} is proved when $H=H_{0}=M_{12}$. Therefore, we only need to deal with the case where $H:=M_{12}:2$. For convenience, we mention some properties of $H$ and $H_{0}$, some of which can be obtained from \cite[pp. 31-33]{Atlas}.
\begin{lemma}\label{lem:M12.2}
Let $H_{0}:= M_{12}$ and $H:=M_{12}:2$. Then
\begin{enumerate}
\item[(a)] The Schur multiplier and the group of outer automorphisms of $H_{0}$ are $\mathbb{Z}_{2}$;
\item[(b)] The degrees of irreducible characters of $H$ are
\begin{align*}
1& &
45&=3^2\cdot5 &
66&=2\cdot3\cdot11 &
120&=2^3\cdot3\cdot5\\
%
22&=2\cdot 11 &
54&=2\cdot3^3 &
99&=3^2\cdot11 &
144&=2^4\cdot3^2\\
%
32&=2^5 &
55& =5\cdot11 &
110&=2\cdot 5\cdot 11 &
176&=2^4\cdot11
%
\end{align*}
\item[(c)] If $K$ is a maximal subgroup of $H_{0}$ whose index in $H_{0}$ divides some degree $\chi(1)$ of $H$, then one of the following occurs:
\begin{enumerate}
\item[(i)] $K\cong M_{11}$ and $\chi(1)/|H_{0}:K|$ divides $2\cdot5$ or $2^2\cdot3$;
\item[(ii)] $K\cong M_{10}:2$ and $\chi(1)/|H_{0}:K|=1$;
\item[(iii)] $K\cong L_2(11)$ and $\chi(1)/|H_{0}:K|=1$.
\end{enumerate}
\item[(d)] If $S$ is a finite nonabelian simple group whose irreducible character degrees divide some degrees of $H$, then $S$ is isomorphic to $A_5$, $A_6$, $L_2(11)$, $M_{11}$ or $M_{12}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Parts (a) and (b) follow from \cite[pp. 31-33]{Atlas}, and part (d) follows from Proposition~\ref{prop:simple} and Table~\ref{tbl:simple}. Part (c) is a straightforward calculation.
\end{proof}
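Part (c) can also be confirmed by direct arithmetic. As an informal check (a Python sketch, not part of the proof; the subgroup orders $|M_{11}|=7920$, $|M_{10}{:}2|=1440$ and $|L_2(11)|=660$ come from the ATLAS, and the degrees are those in part (b)):

```python
# |M12| = 95040; degrees of M12:2 as in part (b).
degrees = [1, 22, 32, 45, 54, 55, 66, 99, 110, 120, 144, 176]
order_M12 = 95040

# (order of K, values that chi(1)/|H0:K| is claimed to divide)
cases = [(7920, (10, 12)),   # K = M11:    quotient divides 2*5 or 2^2*3
         (1440, (1,)),       # K = M10:2:  quotient is 1
         (660,  (1,))]       # K = L2(11): quotient is 1

for k_order, allowed in cases:
    index = order_M12 // k_order
    quotients = {d // index for d in degrees if d % index == 0}
    assert quotients                      # the index divides some degree
    assert all(any(a % q == 0 for a in allowed) for q in quotients)
```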
\subsection{Proof of Theorem~\ref{thm:main} for $M_{12}:2$}\label{sec:proof-M12.2}
As noted before, by \cite{Hupp-II-VIII}, we may assume that $H:=M_{12}:2$. We further assume that $G$ is a finite group with $\cd(G)=\cd(M_{12}:2)$. The proof of Theorem~\ref{thm:main} follows from the following lemmas.
\begin{lemma}\label{lem:m12-1}
$G'=G''$.
\end{lemma}
\begin{proof}
Assume the contrary. Then there is a normal subgroup $N$ of $G$, where $N$ is maximal such that $G/N$ is a nonabelian solvable group. Now we apply Lemma~\ref{lem:factsolv} and we have one of the following cases:\smallskip
\noindent (a) $G/N$ is an $r$-group for some prime $r$. In this case, $G/N$ has an irreducible character $\psi$ of degree $r^b>1$, and so does $G$. Since $M_{12}:2$ has an irreducible character of degree $32$, we conclude that $r=2$. Let now $\chi \in \textsf{Irr}(G)$ with $\chi(1)=99$. Then Lemma~\ref{lem:gal}(a) implies that $\chi_N\in \textsf{Irr}(N)$, and so by Lemma~\ref{lem:gal}(b), $G$ has an irreducible character of degree $99\psi(1)$, which is a contradiction.\smallskip
\noindent (b) $G/N$ is a Frobenius group with kernel $F/N$. Then $|G:F| \in \cd(G)$ divides $r^{a}-1$, where $|F/N|=r^{a}$. Let $|G:F|\in \{2\cdot 11,5\cdot 11\}$ and $\chi(1)=2^4\cdot 3^2$. Then Lemma~\ref{lem:factsolv}(b) implies that $r^a$ divides $\chi(1)^2=2^8\cdot3^4$, which is impossible as $|G:F|$ does not divide $r^a-1$ for any prime power divisor $r^a$ of $2^8\cdot3^4$. Let now $|G:F|\not \in \{2\cdot11,5\cdot11\}$. Then no proper multiple of $|G:F|$ is in $\cd(G)$. If $r=2$, then by Lemma~\ref{lem:factsolv}(b), both $3^2\cdot11$ and $3^2\cdot5$ must divide $|G:F|$, which is a contradiction. If $r=3$, then by Lemma~\ref{lem:factsolv}(b), $|G:F|$ is divisible by $2^5$ and $2^4\cdot11$, which is a contradiction. Similarly, if $r\neq 2$, $3$, then $2^5$ and $2^4\cdot3^2$ divide $|G:F|$, which is a contradiction.
\end{proof}
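The key arithmetic fact used in case (b), namely that no prime power $r^a$ dividing $2^8\cdot 3^4$ satisfies $r^a\equiv 1 \pmod{f}$ for $f=22$ or $f=55$, can be checked directly; an informal Python sketch (not part of the proof):

```python
# Prime power divisors r^a of chi(1)^2 = 2^8 * 3^4 for chi(1) = 144.
prime_powers = [2**k for k in range(1, 9)] + [3**k for k in range(1, 5)]

# No such r^a is congruent to 1 modulo f, for f = |G:F| in {22, 55}.
for f in (22, 55):
    assert all((ra - 1) % f != 0 for ra in prime_powers)
```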
\begin{lemma}\label{lem:M12-2}
Let $G'/M$ be a chief factor of $G$. Then $G'/M \cong M_{12}$.
\end{lemma}
\begin{proof}
Suppose $G'/M\cong S^k$, where $S$ is a nonabelian simple group and $k$ is a positive integer. Since $S$ is a finite nonabelian simple group whose irreducible character degrees divide some degrees of $M_{12}:2$, by Lemma~\ref{lem:M12.2}(d), $S$ is isomorphic to one of the groups $A_5$, $A_6$, $M_{11}$, $M_{12}$ or $L_2(11)$. In each case, $S$ has a degree divisible by $5$, so $k=1$ as $G$ has no degree divisible by $5^2$. If $S$ is isomorphic to $A_5$ or $A_6$, then $G'/M$ has a character $\psi$ of degree $5$. If $\chi$ is an irreducible constituent of $\psi^{G/M}$, then $\chi(1)=t\psi(1)=5t$, where $t$ divides $|\Out(S)|$. Consequently, $G$ has a nontrivial character of degree at most $20$, which is a contradiction. Similarly, in the case where $S$ is isomorphic to $M_{11}$ or $L_2(11)$, the factor group $G/M$ has a character of degree $10t$ with $t=1,2$, and this implies that $G$ has a nontrivial character of degree at most $20$, which is a contradiction.
Therefore $G'/M$ is isomorphic to $M_{12}$.
\end{proof}
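The two facts about $\cd(M_{12}{:}2)$ used in the proof, that no degree is divisible by $5^2$ and that no degree of the form $5t$ with $t\leq 4$ occurs, are immediate from Lemma~\ref{lem:M12.2}(b); an informal Python check (not part of the proof):

```python
# Degrees of M12:2 as listed earlier.
degrees = {1, 22, 32, 45, 54, 55, 66, 99, 110, 120, 144, 176}

# No degree is divisible by 5^2, forcing k = 1 in the proof above ...
assert all(d % 25 != 0 for d in degrees)
# ... and no degree 5t (t <= 4) or 10t (t <= 2) occurs, giving the contradictions.
assert degrees.isdisjoint({5, 10, 15, 20})
```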
\begin{lemma}\label{lem:M12-3}
Let $\theta \in \textsf{Irr}(M)$ with $\theta(1)=1$. Then $I_{G'}(\theta)=G'$ and $M=M'$.
\end{lemma}
\begin{proof}
Suppose $I=I_{G'}(\theta)<G'$. By Lemma~\ref{lem:clif}, we have $\theta^I= \sum_{i=1}^{k} \phi_i$ where $\phi_i \in \textsf{Irr}(I)$ for $ i=1,2,...,k$. Let $U/M$ be a maximal subgroup of $G'/M\cong M_{12}$ containing $I/M$ and set $t:= |U:I|$. It follows from Lemma~\ref{lem:clif}(a) that $\phi_i(1)|G':I| \in \cd(G')$, and so $t\phi_i(1)|G':U|$ divides some degrees of $G$. Then $|G':U|$ must divide some character degrees of $G$, and hence by Lemma~\ref{lem:M12.2}(c) one of the following holds. \smallskip
\noindent (i) Suppose $U/M\cong M_{11}$. Then $t\phi_i(1)$ divides $2\cdot5$ or $2^2\cdot3$. If $t=1$, then $I/M\cong M_{11}$. Since $M_{11}$ has trivial Schur multiplier, it follows that $\theta$ extends to $\theta _0\in \textsf{Irr}(I)$, and so by Lemma~\ref{lem:clif}(b), $(\theta_0\tau )^{G'}\in \textsf{Irr}(G')$, for all $\tau \in \textsf{Irr}(I/M)$. For $\tau (1)=55\in \cd(M_{11})$, it turns out that $12\cdot55 \cdot\theta_0(1)$ divides some degree of $G$, which is a contradiction. Therefore, $t\neq 1$, and hence the index of a maximal subgroup of $U/M \cong M_{11}$ containing $I/M$ must divide $2\cdot5$ or $2^2\cdot3$. This implies that $t\phi_i(1)$ divides $2^2\cdot3$ and $I/M \cong L_2(11)$. In particular, $\phi_{i}(1)=1$. Thus $\theta$ extends to $\phi_{i}$, and so by Lemma~\ref{lem:clif}(b), $144\tau(1) \in \cd(G')$, for all $\tau \in \textsf{Irr}(I/M)$. This leads us to a contradiction by taking $\tau(1)=10\in \cd(L_2(11))$.\smallskip
\noindent (ii) Suppose $U/M\cong M_{10}:2$. In this case $t=1$, or equivalently, $I/M=U/M\cong M_{10}:2$. Moreover, $\phi_i(1)=1$, for all $i$. Then $\theta$ extends to $\phi_i \in \textsf{Irr}(I)$, and so by Lemma~\ref{lem:clif}(b), $66 \tau (1)$ divides some degrees of $G$, for $\tau (1)=10$, which is a contradiction.\smallskip
\noindent (iii) Suppose $U/M\cong L_2(11)$ and $t=\phi_{i}(1)=1$, for all $i$. Then $I/M \cong L_2(11)$, and so $\theta$ extends to $\phi_i \in \textsf{Irr}(I)$. Thus $144 \tau (1)\in \textsf{Irr}(G')$, for all $\tau \in \textsf{Irr}(I/M)$. This is impossible by taking $\tau (1)=10$.
Therefore, $I_{G'}(\theta)=G'$. By Lemma~\ref{lem:schur}, we have that $|M/M'|$ divides the order of the Schur multiplier of $G'/M\cong M_{12}$, which is $2$. If $|M/M'|=2$, then $G'/M'$ is isomorphic to $2\cdot M_{12}$, which has a character of degree $32$ \cite[p. 33]{Atlas}. Therefore $M_{12}$ must have a degree divisible by $32$, which is a contradiction. Hence $|M/M'|=1$, or equivalently, $M=M'$.
\end{proof}
\begin{lemma}\label{lem:M12-4}
The subgroup $M$ is trivial, and hence $G'\cong M_{12}$.
\end{lemma}
\begin{proof}
By Lemmas~\ref{lem:M12-2} and \ref{lem:M12-3}, we have that $G'/M \cong M_{12}$ and $M=M'$. Suppose that $M$ is nonabelian, and let $N\leq M$ be a normal subgroup of $G'$ such that $M/N$ is a chief factor of $G'$. Then $M/N\cong S^{k}$, for some nonabelian simple group $S$. It follows from Lemma~\ref{lem:exten} that $S$ possesses a nontrivial irreducible character $\varphi$ such that $\varphi^{k}\in \textsf{Irr}(M/N)$ extends to $G'/N$. By Lemma~\ref{lem:gal}(b), we must have $\varphi(1)^{k}\tau(1)\in \cd(G'/N)\subseteq \cd(G')$, for all $\tau \in \textsf{Irr}(G'/M)$. Now we can choose $\tau\in \textsf{Irr}(G'/M)$ such that $\tau(1)$ is the largest degree of $M_{12}$, and since $\varphi$ is nontrivial, $\varphi(1)^{k}\tau(1)$ divides no degree of $G$, which is a contradiction. Therefore, $M$ is abelian, and since $M=M'$, we conclude that $M=1$.
\end{proof}
\begin{lemma}\label{lem:M12-5}
There exists an abelian group $A$ such that $G/A\cong M_{12}:2$.
\end{lemma}
\begin{proof}
Set $A:= C_G(G')$. Since $G'\cap A=1$ and $G'A\cong G' \times A$, it follows that $G'\cong G'A/A\unlhd G/A\leq \textsf{Aut}(G')$. By Lemma~\ref{lem:M12-4}, we have $G'\cong M_{12}$, and so we conclude that $G/A$ is isomorphic to $M_{12}$ or $M_{12}:2$. In the case where $G/A$ is isomorphic to $M_{12}$, we must have $G\cong A\times M_{12}$. This is impossible as $32 \in \cd(G)$ but $M_{12}$ has no character of degree $32$. Therefore, $G/A$ is isomorphic to $M_{12}:2$.
\end{proof}
\section{Groups with socle $M_{22}$}\label{sec:M22.2}
In this section, we prove Theorem~\ref{thm:main} for almost simple groups $H$ whose socle is $H_{0}:=M_{22}$. Note that Theorem~\ref{thm:main} is proved for $H=H_{0}=M_{22}$, see \cite{Hupp-II-VIII}. Therefore, we only need to focus on the case where $H:=M_{22}:2$. For convenience, we mention some properties of $H$ and $H_{0}$, some of which can be obtained from \cite[pp. 39-41]{Atlas}.
\begin{lemma}\label{lem:M22.2}
Let $H_{0}:=M_{22}$ and $H:=M_{22}:2$. Then
\begin{enumerate}
\item[(a)] The Schur multiplier of $H_{0}$ is $\mathbb{Z}_{12}$ and the group of outer automorphisms of $H_{0}$ is $\mathbb{Z}_{2}$;
\item[(b)] The degrees of irreducible characters of $H$ are
\begin{align*}
1& &
55&=5\cdot 11 &
210&=2\cdot 3\cdot 5\cdot 7&
385&=5\cdot 7\cdot 11\\
%
21&=3\cdot 7 &
99&=3^{2}\cdot 11 &
231&=3\cdot7\cdot11 &
560&=2^{4}\cdot 5\cdot 7\\
%
45&=3^{2}\cdot 5 &
154&=2\cdot 7\cdot 11 &
&&
%
\end{align*}
\item[(c)] If $K$ is a maximal subgroup of $H_{0}$ whose index in $H_{0}$ divides some degree $\chi(1)$ of $H$, then one of the following occurs:
\begin{enumerate}
\item[(i)] $K\cong L_3(4)$ and $\chi(1)/|H_{0}:K|$ divides $7$;
\item[(ii)] $K\cong 2^4: S_5$ and $\chi(1)/|H_{0}:K|=1$;
\item[(iii)] $K\cong 2^4:A_6$ and $\chi(1)/|H_{0}:K|$ divides $2$, $3$ or $5$.
\end{enumerate}
\item[(d)] If $S$ is a finite nonabelian simple group whose irreducible character degrees divide some degrees of $H$, then $S$ is isomorphic to $A_5$, $A_{6}$, $A_{7}$, $L_2(7)$, $L_2(8)$ or $M_{22}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Parts (a) and (b) follow from \cite[pp. 39-41]{Atlas}, and part (d) follows from Proposition~\ref{prop:simple} and Table~\ref{tbl:simple}. Part (c) is a straightforward calculation.
\end{proof}
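As an illustrative computational sanity check (not part of the proof), one can confirm from the list in part (b) that no nontrivial degree of $M_{22}:2$ is a prime power, a fact used in case (a) of the first lemma below. The Python sketch and helper names are ours:

```python
# Degrees of the irreducible characters of H = M_22:2, as listed in part (b).
CD_M22_2 = [1, 21, 45, 55, 99, 154, 210, 231, 385, 560]

def is_prime_power(n):
    """True iff n = p^k for some prime p and k >= 1."""
    if n < 2:
        return False
    for p in range(2, n + 1):
        if n % p == 0:           # p is the smallest, hence prime, divisor of n
            while n % p == 0:
                n //= p
            return n == 1
    return False

# Every nontrivial degree of M_22:2 has at least two distinct prime divisors.
no_prime_power_degree = not any(is_prime_power(d) for d in CD_M22_2 if d > 1)
```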
\subsection{Proof of Theorem~\ref{thm:main} for $M_{22}:2$}\label{sec:proof-M22.2}
Theorem~\ref{thm:main} is true for the Mathieu group $M_{22}$ by \cite{Hupp-II-VIII}, so it remains to consider $H:=M_{22}:2$. In what follows, assume that $G$ is a finite group with $\cd(G)=\cd(M_{22}:2)$. The proof follows from the following lemmas.
\begin{lemma}\label{lem:m22-1}
$G'=G''$.
\end{lemma}
\begin{proof}
Assume the contrary. Then there is a normal subgroup $N$ of $G$, maximal such that $G/N$ is a nonabelian solvable group. We now apply Lemma~\ref{lem:factsolv}, and so we have the following two cases: \smallskip
\noindent (a) Suppose that $G/N$ is an $r$-group for some prime $r$. Then it has an irreducible character $\psi$ of degree $r^b>1$. This is impossible as the group $M_{22}:2$ has no irreducible character of prime power degree.\smallskip
\noindent (b) $G/N$ is a Frobenius group with kernel $F/N$. Then $|G:F| \in \cd(G)$ divides $r^{a}-1$, where $|F/N|=r^{a}$. Suppose first that $|G:F|\in\{3\cdot7,5\cdot11\}$, and let $\chi(1)=2\cdot7\cdot11$. Then Lemma~\ref{lem:clif}(b) implies that $r^a$ divides $\chi(1)^2=2^2\cdot7^2\cdot11^2$, which is impossible as $|G:F|$ does not divide $r^a-1$ for any divisor $r^a$ of $2^2\cdot7^2\cdot11^2$. Let now $|G:F|\not \in \{3\cdot7,5\cdot11\}$. Then no proper multiple of $|G:F|$ is in $\cd(G)$.
If $r=3$, then by Lemma~\ref{lem:clif}(a), both $5\cdot7\cdot11$ and $2^4\cdot5\cdot7$ must divide $|G:F|$, which is a contradiction. If $r=5$, then by Lemma~\ref{lem:clif}(a), both $3\cdot7\cdot11$ and $2\cdot7\cdot11$ must divide $|G:F|$, which is a contradiction. If $r=11$, then by Lemma~\ref{lem:clif}(a), $|G:F|$ is divisible by $2\cdot3\cdot5\cdot7$ and $2^4\cdot5\cdot7$, which is a contradiction. Similarly, if $r\neq3$, $5$, and $11$, then $3^2\cdot11$ and $5\cdot11$ divide $|G:F|$, which is impossible. Therefore, $G'=G''$.
\end{proof}
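The case distinction above for $|G:F|\in\{3\cdot7,5\cdot11\}$ rests on a finite divisibility check, which can be carried out exhaustively. The following Python sketch (illustrative only; the helper names are ours) enumerates the prime power divisors $r^a$ of $\chi(1)^2=2^2\cdot7^2\cdot11^2$ and confirms that none satisfies $|G:F| \mid r^a-1$:

```python
# Exhaustive check behind case (b): no prime power r^a dividing
# chi(1)^2 = 2^2 * 7^2 * 11^2 has |G:F| in {21, 55} dividing r^a - 1.
N = 2**2 * 7**2 * 11**2

def prime_power_divisors(n):
    """All divisors q > 1 of n of the form p^k with p prime, in increasing order."""
    result = []
    for q in range(2, n + 1):
        if n % q:
            continue
        p = min(p for p in range(2, q + 1) if q % p == 0)  # smallest prime factor
        m = q
        while m % p == 0:
            m //= p
        if m == 1:
            result.append(q)
    return result

bad_pairs = [(idx, q) for idx in (21, 55)
             for q in prime_power_divisors(N)
             if (q - 1) % idx == 0]
```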
\begin{lemma}\label{lem:M22-2}
Let $G'/M$ be a chief factor of $G$. Then $G'/M \cong M_{22}$.
\end{lemma}
\begin{proof}
Suppose $G'/M \cong S^k$ for some nonabelian simple group $S$ and some positive integer $k$. Since $S$ is a finite nonabelian simple group whose irreducible character degrees divide some degrees of $M_{22}:2$, by Lemma~\ref{lem:M22.2}(d), the group $S$ is isomorphic to $A_5$, $A_6$, $A_7$, $L_2(7)$, $L_2(8)$ or $M_{22}$. Observe that $G$ has no degree divisible by $25$ or $49$. This implies that $k=1$ in each case.
Assume that $S$ is isomorphic to one of the simple groups in the first column of Table~\ref{tbl:s2-M22}. Then by \cite{Atlas}, $G'/M$ has a character $\psi$ of degree as in the third column of the same table. If $\chi$ is an irreducible constituent of $\psi^{G/M}$, then $\chi(1)=t\psi(1)$, where $t$ divides $|\Out(S)|$. Consequently, $G$ has a character of degree at most $d$ as in the fourth column of Table~\ref{tbl:s2-M22}, which is a contradiction.
\begin{table}
\centering
\caption{The triples $(S,\psi,d)$ in Lemma~\ref{lem:M22-2}}
\label{tbl:s2-M22}
\smallskip
\begin{tabular}{lccc}
\hline
$S$ & $|\Out(S)|$ & $\psi(1)$ & $d$ \\
\hline
$A_5$ & $2$ & $5$ & $10$\\
$A_6$ & $4$ & $5$ & $20$\\
$A_7$, $L_2(7)$ & $2$ & $6$ & $12$\\
$L_2(8)$ & $3$ & $8$ & $24$ \\
\hline
\end{tabular}
\end{table}
For example, if $S$ is isomorphic to $A_7$ or $L_2(7)$, then $G'/M$ has a character $\psi$ of degree $6$, so $\chi(1)=t\psi(1)=6t$, where $t$ divides $|\Out(S)|$, and hence $G$ has a character of degree at most $12$, which is a contradiction. Therefore $G'/M \cong M_{22}$.
\end{proof}
\begin{lemma}\label{lem:M22-3}
If $\theta \in \textsf{Irr}(M)$, then $I_{G'}(\theta)=G'$ and $M=M'$.
\end{lemma}
\begin{proof}
Suppose $I:=I_{G'}(\theta)<G'$. By Lemma~\ref{lem:clif} we have $\theta^I= \sum_{i=1}^{k} \phi_i$, where $\phi_i \in \textsf{Irr}(I)$ for $i=1,2,\ldots,k$. Let $U/M$ be a maximal subgroup of $G'/M\cong M_{22}$ containing $I/M$, and set $t:=|U:I|$. It follows from Lemma~\ref{lem:clif}(a) that $\phi_i(1)|G':I| \in \cd(G')$, and so $t\phi_i(1)|G':U|$ divides some degree of $G$. Then $|G':U|$ must divide some character degree of $G$, and hence by Lemma~\ref{lem:M22.2} one of the following holds: \smallskip
\noindent (i) Suppose $U/M\cong L_3(4)$. Then, for each $i$, $t\phi_i(1)$ divides $7$.
As $U/M \cong L_3(4)$ has no subgroup of index $7$ by~\cite[p. 23]{Atlas}, we have $t=1$, so $I/M\cong U/M\cong L_3(4)$ and $\phi_i(1)$ divides $7$. If $\phi_i(1)=1$, then $\theta$ extends to $\phi_{i}$, and so by Lemma~\ref{lem:clif}(b), $(\phi_{i}\tau )^{G'}\in \textsf{Irr}(G')$, for all $\tau \in \textsf{Irr}(I/M)$. If $\tau(1)=64\in \cd(L_3(4))$, then $22\tau(1)=2^{7}\cdot11$ divides some degree of $G$, which is a contradiction. Hence $\phi_i(1)=7$, for all $i$. Then $\phi_{i_{M}}=e_{i}\theta$, where $e_{i}\neq 1$ is the degree of a projective representation of $I/M\cong L_3(4)$, and this is impossible by~\cite[p. 24]{Atlas}.\smallskip
\noindent (ii) Suppose $U/M\cong 2^4:S_5$. Then $I/M\cong 2^4: S_5$ and $\phi_i(1)=1$, for all $i$. Thus $\theta$ extends to $\phi_{i}$ in $I$. It follows from Lemma~\ref{lem:clif}(b) that $\tau(1)|G':I|$ divides some character degree of $G$, for all $\tau \in \textsf{Irr}(I/M)$. This is impossible by taking $\tau(1)=4$.\smallskip
\noindent (iii) Suppose $U/M \cong 2^4:A_6$. In this case, $t\phi_{i}(1)$ divides $2$, $3$ or $5$. It follows from \cite{Atlas-Almost} that $U/M$ has no maximal subgroup of index $2$, $3$, or $5$, and this implies that $I=U$. Therefore, $I/M \cong 2^{4}:A_{6}$, that is to say, $I/M$ has an abelian subgroup $A/M$ of order $2^{4}$ such that $I/A \cong A_{6}$.
Let now $\lambda\in \textsf{Irr}(A|\theta)$, and write $\lambda^{I}=\sum f_{i}\mu_{i}$. Since $A\unlhd I$, the degree $\mu_{i}(1)$ divides $2$, $3$ or $5$, for all $i$. Since the index of a maximal subgroup of $I/A \cong A_{6}$ is at least $6$, $\lambda$ is $I$-invariant, and so $\mu_{i_A}=f_{i}\lambda$, for all $i$. If $f_{i}=1$ for some $i$, then $\lambda$ extends to $\lambda_{0}\in \textsf{Irr}(I)$, and so by Lemma~\ref{lem:clif}(b), $\lambda_{0}\tau$ is an irreducible constituent of $\lambda^{I}$, for all $\tau \in \textsf{Irr}(I/A)$, and so $\lambda_{0}(1)\tau(1)=\tau(1)$ divides $2$, $3$, or $5$. This is impossible as we could take $\tau(1)=8 \in \cd(A_{6})$. Therefore, $f_{i}>1$, for all $i$. Moreover, we know from Lemma~\ref{lem:clif}(c) that each $f_{i}$ is the degree of a nontrivial proper irreducible projective representation of $A_{6}$; by \cite[p. 5]{Atlas}, we observe that $f_{i}\in\{3,5\}$. This shows that $\mu(1)/\lambda(1)$ is odd, for all $\mu\in \textsf{Irr}(I|A)$, and so by Lemma~\ref{lem:sol}, the group $I/A\cong A_{6}$ is solvable, which is a contradiction.
This shows that $I_{G'}(\theta)=G'$. By Lemma~\ref{lem:schur}, $|M/M'|$ divides the order of the Schur multiplier of $G'/M\cong M_{22}$, which is $12$. If $|M/M'|\neq 1$, then $G$ has an irreducible character of degree divisible by one of the degrees in the second row of Table~\ref{tbl:c-3-M22}, which is a contradiction. Therefore, $M=M'$.
\end{proof}
\begin{table}
\centering
\caption{Some character degrees of $G'/M'$ in Lemma~\ref{lem:M22-3}}
\label{tbl:c-3-M22}
\smallskip
\begin{tabular}{lccccc}
\hline
$G'/M'$ & $2\cdot M_{22}$ & $3\cdot M_{22}$ & $4\cdot M_{22}$ & $6\cdot M_{22}$ & $12\cdot M_{22}$\\
\hline
Degree & $440$ & $384$ & $440$ & $384$ & $384$\\
\hline
\end{tabular}
\end{table}
\begin{lemma}\label{lem:M22-4}
The subgroup $M$ is trivial, and hence $G'\cong M_{22}$.
\end{lemma}
\begin{proof}
It follows from Lemmas~\ref{lem:M22-2} and \ref{lem:M22-3} that $G'/M \cong M_{22}$ and $M=M'$. Assume that $M$ is nonabelian, and let $N\leq M$ be a normal subgroup of $G'$ such that $M/N$ is a chief factor of $G'$. Then $M/N\cong S^{k}$ for some nonabelian simple group $S$ and some positive integer $k$. By Lemma~\ref{lem:exten}, $S$ has a nontrivial irreducible character $\varphi$ such that $\varphi^{k}\in \textsf{Irr}(M/N)$ extends to $G'/N$. Now Lemma~\ref{lem:gal}(b) implies that $\varphi(1)^{k}\tau(1)\in \cd(G'/N)\subseteq \cd(G')$, for all $\tau \in \textsf{Irr}(G'/M)$. As $\varphi(1)>1$, if we choose $\tau\in \textsf{Irr}(G'/M)$ such that $\tau(1)$ is the largest degree of $M_{22}$, then $\varphi(1)^{k}\tau(1)$ divides no degree of $G$, which is a contradiction. Therefore, $M$ is abelian, and since $M=M'$, the subgroup $M$ is trivial, so $G'\cong M_{22}$.
\end{proof}
\begin{lemma}\label{lem:M22-5}
There exists an abelian group $A$ such that $G/A\cong M_{22}:2$.
\end{lemma}
\begin{proof}
Set $A:=C_G(G')$. Since $G'\cap A=1$ and $G'A\cong G' \times A$, it follows that $G' \cong G'A/A\unlhd G/A\leq \textsf{Aut}(G')$. Since $G'\cong M_{22}$, we conclude that $G/A$ is isomorphic to $M_{22}$ or $M_{22}:2$. In the case where $G/A$ is isomorphic to $M_{22}$, we conclude that $G\cong A\times M_{22}$. This is impossible as $560\in\cd(G)$ but $M_{22}$ has no character of degree $560$. Therefore, $G/A$ is isomorphic to $M_{22}:2$.
\end{proof}
\section{Introduction}
Using an auxiliary flat fiducial metric, de~Rham, Gabadadze and Tolley (dRGT) first constructed a consistent interacting theory of a massive spin-2 graviton~\cite{deRham:2010kj}. This theory possesses a class of self-accelerating cosmological solutions where the massive graviton
potential plays the role of a cosmological constant
\cite{deRham:2010tw,Koyama:2011xz,Gumrukcuoglu:2011ew,Koyama:2011yg,Nieuwenhuizen:2011sq,
Berezhiani:2011mt,D'Amico:2011jj,Gratia:2012wt,Kobayashi:2012fz,
Volkov:2012cf,Volkov:2012zb,Motohashi:2012jd,Gratia:2013uza}.
The behavior of perturbations around these cosmological solutions is not fully understood. Initially,
there appeared to be several inconsistencies like coordinate dependence of the number of
propagating degrees of freedom \cite{Khosravi:2013axa} and related claims of existence of
strong coupling around particular solutions \cite{Gumrukcuoglu:2011zh}. These were
shown to be related to the existence of superluminally
propagating modes \cite{Motloch:2015gta}, which are a typical feature of isotropic perturbations around these
cosmological solutions. For a particular solution where
both the spacetime and fiducial metric are manifestly
homogeneous \cite{Gumrukcuoglu:2011zh}, anisotropic modes are superluminal as well. The Hamiltonian for the
isotropic modes around any self-accelerating cosmological solution is also unbounded from below
\cite{Khosravi:2013axa}. Finally, on specifically constructed alternate backgrounds,
perturbation characteristics have also been shown to be superluminal \cite{Deser:2012qx,Deser:2013eua,Deser:2014hga,Deser:2014fta,Deser:2015wta}.
In this paper we investigate the behavior of all linear metric perturbations around a general
self-accelerating vacuum dRGT solution, completing the analysis of Ref.~\cite{Motloch:2015gta}. As a typical solution lacks translation invariance,
it is not possible to employ the standard scalar-vector-tensor decomposition. However, thanks to the rotational invariance of the background and parity invariance of the theory, it is possible to decouple
the system into angular momentum and parity states using the Regge,
Wheeler~\cite{Regge:1957td}, and Zerilli~\cite{Zerilli:1970se} formalism. This formalism
was originally developed to study perturbations around the Schwarzschild metric in
general relativity (see \cite{DeFelice:2011ka, Motohashi:2011pw, Motohashi:2011ds, Kobayashi:2012kh,
Kobayashi:2014wsa, Ogawa:2015pea} for extensions in modified gravity theories).
For a given angular momentum and parity, the various components of the metric perturbations are
derivatively and non-derivatively coupled in a complicated constrained structure that reflects
the fact that only 5 spin modes of the massive graviton propagate. We present here an algorithm capable of
finding hidden constraints and characteristic curves for any set of linear partial differential equations
in 1+1 dimensions, which has
application beyond dRGT. Using this algorithm, we determine the characteristic
curves for all dRGT modes and identify their hyperbolic, parabolic or elliptic nature
for their potential joint solution from initial data.
The paper is organized as follows. In \S\ref{sec:dRGTIntro} we review the construction of
self-accelerated background solutions~\cite{Gratia:2012wt}, perturbation Lagrangian
around them~\cite{Motloch:2014nwa} and example vacuum solutions. In \S\ref{sec:methodology} we review the
Regge-Wheeler-Zerilli analysis, and provide a summary of the algorithm we use
for finding the characteristics. The Appendices contain a full explanation of the
algorithm (\S\ref{sec:AlgorithmAppendix}), decomposition techniques (\S\ref{sec:decomposition})
and crosscheck using an alternate method of auxiliary variables (\S\ref{ssec:oddal}).
In \S\ref{sec:Odd} and \S\ref{sec:Even} we then
investigate odd and even parity perturbations around dRGT cosmological solutions.
We discuss these results in \S\ref{sec:discuss}.
\vfill
\section{Self-accelerating solutions in massive gravity}
\label{sec:dRGTIntro}
In this section we provide a concise review of the dRGT theory (\S\ref{sec:dRGT}),
its
self-accelerating isotropic background solutions (\S\ref{sec:isotropic}), perturbations
in unitary gauge (\S\ref{sec:perts}), and specific vacuum background solutions (\S\ref{sec:vacuum}).
\subsection{dRGT theory}
\label{sec:dRGT}
The Lagrangian density for the dRGT \cite{deRham:2010kj} nonlinear theory of a massive spin-2 graviton
is given by:
\begin{equation}
\label{drgt}
{\cal L} = \sqrt{-g}\frac{M_{\rm Pl}^2}{2}\left[ R-m^2\sum_{k=0}^4 \frac{\beta_k}{k!} F_k\left(\boldsymbol{\gamma}\right)
\right],
\end{equation}
where $M_{\rm Pl}^2 = (8\pi G)^{-1}$ is the reduced Planck mass,
\begin{align}
F_0(\boldsymbol{\gamma}) & = 1, \nonumber\\
F_1(\boldsymbol{\gamma}) & = \tr{\boldsymbol{\gamma}}, \nonumber\\
F_2(\boldsymbol{\gamma}) & = \tr{\boldsymbol{\gamma}}^2 - \tr{\boldsymbol{\gamma}^2} , \\
F_3(\boldsymbol{\gamma}) & =\tr{\boldsymbol{\gamma}}^3 - 3 \tr{\boldsymbol{\gamma}} \tr{\boldsymbol{\gamma}^2} + 2 \tr{\boldsymbol{\gamma}^3} , \nonumber\\
F_4(\boldsymbol{\gamma}) &= \tr{\boldsymbol{\gamma}}^4 - 6 \tr{\boldsymbol{\gamma}}^2 \tr{\boldsymbol{\gamma}^2} + 3 \tr{\boldsymbol{\gamma}^2}^2 + 8 \tr{\boldsymbol{\gamma}} \tr{\boldsymbol{\gamma}^3}
- 6 \tr{\boldsymbol{\gamma}^4} ,
\nonumber
\end{align}
and $[\,]$ denotes the trace of the enclosed matrix. The matrix $\boldsymbol{\gamma}$ is the square root of the product of the inverse spacetime metric ${\bf g}^{-1}$ and a flat fiducial metric $\boldsymbol{\Sigma}$
\begin{equation}
\ul{\gamma}{\mu}{\alpha} \ul{\gamma}{\alpha}{\nu} = g^{\mu\alpha}\Sigma_{\alpha\nu} .
\label{eqn:gamma}
\end{equation}
$\boldsymbol{\Sigma}$
is itself related to the standard Minkowski metric $\boldsymbol{\eta}$ via a coordinate transformation
using {St\"{u}ckelberg} scalars $\phi^A$
\begin{equation}
\Sigma_{\mu\nu} = \eta_{A B} \partial_\mu\phi^A\partial_\nu\phi^B ,
\end{equation}
which restores diffeomorphism invariance to the theory. Where this transformation is not invertible,
the dRGT degrees of freedom encounter a determinant singularity \cite{Gratia:2013gka}.
Smooth continuation of
solutions on the other side of a determinant singularity is sometimes but not always possible
\cite{Motloch:2015gta}.
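Where the square root in Eq.~(\ref{eqn:gamma}) acts block-diagonally, each $2\times2$ block can be evaluated in closed form via the Cayley--Hamilton theorem. The sketch below is a minimal illustration (helper names ours) for real $2\times2$ matrices with positive real eigenvalues; it does not address the branch choices $\mu=\pm1$ or the degenerate cases associated with determinant singularities:

```python
import math

def sqrtm_2x2(M):
    """Principal square root of a 2x2 matrix M = [[a, b], [c, d]] with
    positive real eigenvalues, via Cayley-Hamilton:
    sqrt(M) = (M + s I) / t, with s = sqrt(det M), t = sqrt(tr M + 2 s)."""
    (a, b), (c, d) = M
    s = math.sqrt(a * d - b * c)        # sqrt of the determinant
    t = math.sqrt(a + d + 2.0 * s)      # sqrt of trace of sqrt(M), squared
    return [[(a + s) / t, b / t], [c / t, (d + s) / t]]
```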
The parameters of the dRGT theory are $\{\alpha_3,\alpha_4\}$, which
control the $\beta_k$ through
\begin{align}
\beta_0 &= -12 (1+ 2\alpha_3+2\alpha_4), \nonumber\\
\beta_1 &= 6(1 + 3 \alpha_3 + 4\alpha_4),\nonumber\\
\beta_2 &= -2(1+ 6 \alpha_3+12\alpha_4 ), \\
\beta_3 &= 6(\alpha_3+ 4\alpha_4), \nonumber\\
\beta_4 &= -24 \alpha_4,\nonumber
\end{align}
and the graviton mass $m$.
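As a direct transcription of these relations (a trivial helper, using nothing beyond the formulas above):

```python
def beta_coefficients(alpha3, alpha4):
    """Return [beta_0, ..., beta_4] of the dRGT potential for given (alpha3, alpha4)."""
    return [
        -12 * (1 + 2 * alpha3 + 2 * alpha4),
        6 * (1 + 3 * alpha3 + 4 * alpha4),
        -2 * (1 + 6 * alpha3 + 12 * alpha4),
        6 * (alpha3 + 4 * alpha4),
        -24 * alpha4,
    ]
```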
\subsection{Isotropic background solutions}
\label{sec:isotropic}
The dRGT theory possesses
solutions for
any isotropic spacetime metric \cite{Gratia:2012wt}
where the stress-energy associated with the graviton potential in Eq.~\eqref{drgt} behaves as a cosmological constant.
Given an isotropic line element,
\begin{equation}
{\dd}s^2 = -b^2(t,r) \dd t^2 + a^2(t,r) \big(\dd r^2 + r^2 \dd \Omega_2^2 \big),
\label{isotropicmetric}
\end{equation}
where $\dd \Omega_2^2$ is the line element on a 2-sphere, and isotropic
{St\"{u}ckelberg} fields,
\begin{eqnarray}
\phi^0 &=& f(t,r),\nonumber\\
\phi^i &=& g(t,r) \frac{x^i}{r} ,
\label{stuckyback}
\end{eqnarray}
this class of self-accelerating solutions requires
\begin{eqnarray}
g(t,r) =x_0 a(t,r)r.
\label{eqn:gsoln}
\end{eqnarray}
The constant $x_0$ solves the polynomial equation $P_1(x_0)=0$ with
\begin{equation}
P_1(x) \equiv 2(3-2x)+6(x-1)(x-3)\alpha_3+24(x-1)^2\alpha_4.
\end{equation}
Distinct self-accelerating {St\"{u}ckelberg} backgrounds represent different solutions of
\begin{equation}
\sqrt{X} = \frac{W}{x_0}+x_0,
\label{eqn:feqnsoln}
\end{equation}
where
\begin{align}
X & \equiv\Bigl(\frac{\dot{f}}{b}+\mu\frac{g'}{a}\Bigr)^2-\Bigl(\frac{\dot{g}}{b}+\mu\frac{f'}{a}\Bigr)^2, \nonumber\\
W & \equiv \frac{\mu}{ab} \( \dot f g' - \dot g f' \),
\label{eqn:XW}
\end{align}
with branches due to the matrix square root $\boldsymbol{\gamma}$ defined in Eq.~\eqref{eqn:gamma}
allowing $\mu\equiv \pm 1$. Here and throughout, we choose $\mu=1$ and overdots denote
derivatives with respect to $t$ whereas primes denote derivatives with respect to $r$.
Where $W=\pm\infty$ or $0$, or where $W$ is undefined because $f$ or $g$ is not
continuously differentiable, there exists a determinant singularity \cite{Gratia:2013gka}.
For any such solution the effective stress tensor due to the presence of the non-derivative graviton interactions takes the form of an effective cosmological constant
\begin{equation}
\label{effTmunu}
T_{\mu\nu} = - \Lambda M_{\rm Pl}^2 g_{\mu\nu},
\end{equation}
where
\begin{equation}
\Lambda = \frac{1}{2} m^2 P_0(x_0),
\end{equation}
with
\begin{align}
P_0(x) &= - 12 - 2 x(x-6) - 12(x-1)(x-2)\alpha_3
\nonumber\\&\qquad -24(x-1)^2\alpha_4 ,
\end{align}
defining its dependence on dRGT parameters.
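A small numerical sketch of this construction: since $P_1$ is (at most) quadratic in $x$, the self-accelerating branches $x_0$ and the resulting $\Lambda=m^2 P_0(x_0)/2$ follow from the quadratic formula. The helper names below are ours; the linear degenerate case (e.g.\ $\alpha_3=\alpha_4=0$, where $x_0=3/2$) is handled separately:

```python
import math

def P1(x, a3, a4):
    """P_1(x) = 2(3 - 2x) + 6(x-1)(x-3) a3 + 24 (x-1)^2 a4."""
    return 2 * (3 - 2 * x) + 6 * (x - 1) * (x - 3) * a3 + 24 * (x - 1) ** 2 * a4

def P0(x, a3, a4):
    """P_0(x) = -12 - 2x(x-6) - 12(x-1)(x-2) a3 - 24 (x-1)^2 a4."""
    return -12 - 2 * x * (x - 6) - 12 * (x - 1) * (x - 2) * a3 - 24 * (x - 1) ** 2 * a4

def self_accelerating_roots(a3, a4):
    """Real roots x0 of P1(x) = 0, written as A x^2 + B x + C = 0."""
    A = 6 * a3 + 24 * a4
    B = -4 - 24 * a3 - 48 * a4
    C = 6 + 18 * a3 + 24 * a4
    if A == 0:                          # linear degenerate case
        return [] if B == 0 else [-C / B]
    disc = B * B - 4 * A * C
    if disc < 0:
        return []
    s = math.sqrt(disc)
    return sorted([(-B - s) / (2 * A), (-B + s) / (2 * A)])

def effective_lambda(a3, a4, m2=1.0):
    """Lambda = m^2 P0(x0) / 2 on each self-accelerating branch."""
    return [0.5 * m2 * P0(x0, a3, a4) for x0 in self_accelerating_roots(a3, a4)]
```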
\subsection{Perturbation Lagrangian}
\label{sec:perts}
Ref.~\cite{Motloch:2014nwa} derived the covariant form for the quadratic Lagrangian for perturbations
around the isotropic self-accelerating solutions of the previous section. Here we
consider a specific gauge for the perturbations, called unitary gauge, in which the
{St\"{u}ckelberg} perturbations vanish
\begin{equation}
\phi^{A} = \bar \phi^{A} ,
\end{equation}
where bar denotes the background quantity. This can always be accomplished by an infinitesimal
transformation $x^\mu \rightarrow x^\mu + \xi^\mu$ which changes these scalar fields by
\begin{equation}
\delta \phi^A = \frac{\partial \bar\phi^A}{\partial x^\mu} \xi^\mu.
\label{eqn:unitary}
\end{equation}
Inverting this relation, a {St\"{u}ckelberg} fluctuation $\delta \phi^A$
can always be gauged away fixing $\xi^\mu$ entirely
as long as $\partial \bar\phi^A/\partial x^\mu$ is not
singular, i.e.\ away from a determinant singularity. If the background solution can be continued on the other side
of a determinant singularity, a new unitary gauge can be established there as well.
Notice the background {St\"{u}ckelberg} fields are in general nonzero and so this unitary condition refers to the fact that the perturbed degrees of freedom propagating on the background
come only from the metric
\begin{equation}
g_{\mu\nu} = \bar g_{\mu\nu}+
h_{\mu\nu} .
\end{equation}
The quadratic Lagrangian for the metric fluctuations $h_{\mu\nu}$ is then
\cite{Motloch:2014nwa}
\begin{eqnarray}
\mathcal{L}_2 &=& \mathcal{L}_{hh}^{\rm (EH)}+\mathcal{L}_{hh}^{(\Lambda)} +A \sqrt{-\bar g} M_{\rm Pl}^2 B^{\mu\nu\alpha\beta} h_{\mu\nu} h_{\alpha\beta},
\label{SimplifiedLagr}
\end{eqnarray}
where the Einstein-Hilbert piece
\begin{eqnarray}
\label{EinsteinHilbert}
\frac{ \mathcal{L}_{hh}^{\rm (EH)} }{\sqrt{-\bar g} M_{\rm Pl}^2 }&=&\left( \frac{1}{2}
h^{\mu\alpha}\lu{h}{\alpha}{\nu} - \frac{1}{4} h h^{\mu\nu} \right) \bar R_{\mu\nu}
\\
&& +
\left(\frac{1}{16} h^2 - \frac{1}{8} h_{\mu\nu} h^{\mu\nu} \right) \bar R -\frac{1}{8} h^{\mu\nu;\alpha} h_{\mu\nu;\alpha} \nonumber\\
&&
+ \frac{1}{4} h^{\mu\nu;\alpha} h_{\nu\alpha;\mu}
+\frac{1}{8} h_{;\alpha} h^{;\alpha}
-\frac{1}{4} \ul{h}{\mu\nu}{;\nu} h_{;\mu} \nonumber,
\end{eqnarray}
the effective background cosmological constant piece
\begin{eqnarray}
\frac{\mathcal{L}^{(\Lambda)}_{hh}}{\sqrt{-\bar g}M_{\rm Pl}^2} &=& \left(\frac{1}{4} h_{\mu\nu}
h^{\mu\nu} -\frac{1}{8} h^2 \right)\Lambda ,
\end{eqnarray}
and the dRGT potential piece
\begin{eqnarray}
B^{\mu\nu\alpha\beta}&=&
\frac{ [\bar{\chi}]}{8}\( \bar g^{\mu\nu} \bar g^{\alpha \beta}-\frac{1}{2} \bar g^{\mu\beta}\bar g^{\nu\alpha} - \frac{1}{2} \bar g^{\mu \alpha} \bar g^{\nu \beta} \)
\nonumber\\
&&+ \frac{1}{16} \(\bar g^{\mu\alpha} \bar \chi^{\nu \beta} + \bar g^{\nu\beta} \bar \chi^{\mu\alpha} + \bar g^{\mu \beta} \bar \chi^{\nu\alpha} + \bar g^{\nu \alpha} \bar \chi^{\mu\beta}\)
\nonumber\\
&&- \frac{1}{8}\(\bar g^{\mu\nu}\bar \chi^{\alpha\beta} + \bar g^{\alpha\beta}
\bar \chi^{\mu\nu}\) ,
\end{eqnarray}
with the normalization
\begin{equation}
A= \frac{x_0^2 P_1'(x_0)}{4}m^2 .
\end{equation}
Here $\bar R_{\mu\nu}$ is the usual Ricci tensor built out of the background metric,
$\bar R$ is its trace, and
\begin{equation}
\label{chi}
\bar \chi_{\mu\nu} = \frac{1}{x_0} \bar \gamma_{\mu\nu} - \bar g_{\mu\nu} ,
\end{equation}
whose only nonzero components are
\begin{eqnarray}
\bar \chi_{11} &=& \frac{ x_0^2 b^2 - \dot{ f}^2 + \dot{ g}^2}{x_0^2 +
W} , \nonumber\\
\bar \chi_{12} &=& \frac{ (\dot { g} g' - \dot { f} f')}{x_0^2 + W} , \nonumber\\
\bar \chi_{22} &=& - \frac{ x_0^2 a^2 + f'^2 - g'^2}{x_0^2 + W} .
\end{eqnarray}
From the background equations of motion (EOMs) it can be shown that they satisfy
\begin{equation}
\label{BackgroundEOM}
\bar \chi_{11} \bar \chi_{22} = \bar \chi_{12}^2 .
\end{equation}
To quadratic order, all covariant derivatives can be taken with respect to
the background metric.
\subsection{Vacuum solutions}
\label{sec:vacuum}
In the absence of matter, the effective cosmological constant for the background solution leads
to a de Sitter spacetime with an expansion rate $H=\sqrt{\Lambda/3}$.
Closed isotropic coordinates
\begin{equation}
\label{eqn:closeddS}
\dd s^2 = -\dd t^2 +\left[ \frac{ \cosh{(H t)} }{1 +(H r)^2/4} \right]^2 \left(\dd r^2 +
r^2 \dd\Omega_2^2\right) ,
\end{equation}
chart the entire spacetime, where $t \in (-\infty, \infty)$ and $r\in [0, \infty)$.
For the purposes of illuminating the causal structure of solutions using conformal diagrams,
it is also useful to introduce conformal coordinates
\begin{equation}
\label{conformal_metric}
\dd s^2 = \left( \frac{1}{H \sin \eta}\right)^2 \left(-\dd\eta^2 + \dd\chi^2 + \sin^2\chi \dd\Omega_2^2\right),
\end{equation}
where
\begin{eqnarray}
\sinh (H t) &=& - \cot \eta ,\nonumber\\
H r &=& 2 \tan(\chi/2) ,
\label{eqn:closedtr}
\end{eqnarray}
restricts $\eta \in (0, \pi)$ and $\chi \in [0, \pi]$.
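The coordinate map in Eq.~(\ref{eqn:closedtr}) and its inverse are straightforward to implement; the sketch below (helper names ours) picks the branch with $\eta\in(0,\pi)$:

```python
import math

def conformal_from_closed(t, r, H=1.0):
    """Map closed isotropic (t, r) to conformal (eta, chi):
    sinh(H t) = -cot(eta), H r = 2 tan(chi/2), with eta in (0, pi)."""
    eta = math.atan2(1.0, -math.sinh(H * t))   # branch with sin(eta) > 0
    chi = 2.0 * math.atan(H * r / 2.0)
    return eta, chi

def closed_from_conformal(eta, chi, H=1.0):
    """Inverse map back to closed isotropic coordinates."""
    t = math.asinh(-1.0 / math.tan(eta)) / H
    r = 2.0 * math.tan(chi / 2.0) / H
    return t, r
```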
Specific solutions are defined by the background temporal {St\"{u}ckelberg} field $f$.
Although our treatment is fully general, we will
illustrate our results using two classes of solutions, the so-called
``open solution'' of \cite{Gumrukcuoglu:2011ew}
\begin{equation}
\label{solMukohyama}
f= f_o(\eta,\chi) = \frac{x_0}{H} \cot\eta,
\end{equation}
and the family of solutions from Ref.~\cite{Koyama:2011yg}
\begin{eqnarray}
\label{solKoyama}
f=f_C(\eta,\chi) &=& \frac{x_0}{C H} \left( \ln \left\lvert \frac{C^2 (\cos \chi + \cos
\eta)}{\sin\eta(1-y)}\right\rvert - y \right),\nonumber\\
y &=& \sqrt{1 + C^2( \sin^2\chi/\sin^2\eta -1)},
\end{eqnarray}
where $C \in (0, 1]$ is a free parameter and $y \in [0,\infty)$. Properties of these background solutions
including their determinant singularities were extensively discussed in Ref.~\cite{Motloch:2015gta}.
\section{Methodology}
\label{sec:methodology}
Here we present our main analysis techniques. In \S\ref{sec:RW}, we decompose metric fluctuations into parity, angular momentum and spin components using the harmonic functions for tensors on the 2-sphere reviewed in \S\ref{sec:harmonics}. Parity and
angular momentum states obey decoupled EOMs as discussed in \S\ref{sec:eomchi}.
In \S\ref{sec:charsummary}, we show how to resolve hidden
constraints and determine the characteristics and appropriate boundary conditions for
derivatively coupled systems like the metric modes of dRGT. Details of this general
algorithm, which uses the Kronecker decomposition of a matrix pencil reviewed in
\S\ref{sec:KroneckerAppendix}, are given with pedagogical examples in Appendix \ref{sec:AlgorithmAppendix}.
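As a minimal illustration of the regularity test used in step 2(a) of \S\ref{sec:charsummary}: $\det(\mathbb{A}+\lambda\mathbb{B})$ is a polynomial of degree at most $n$ in $\lambda$, so the pencil is regular iff this determinant is nonzero at one of any $n+1$ distinct sample points. The following Python sketch (ours; it is not the full Kronecker decomposition) decides regularity exactly using rational arithmetic:

```python
from fractions import Fraction

def det(M):
    """Exact determinant by Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, sign, d = len(M), 1, Fraction(1)
    for k in range(n):
        pivot = next((i for i in range(k, n) if M[i][k] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != k:
            M[k], M[pivot] = M[pivot], M[k]
            sign = -sign
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= f * M[k][j]
        d *= M[k][k]
    return sign * d

def pencil_is_regular(A, B):
    """det(A + lam*B) has degree <= n in lam, so it vanishes identically
    iff it vanishes at n + 1 distinct sample points."""
    n = len(A)
    return any(det([[A[i][j] + lam * B[i][j] for j in range(n)] for i in range(n)]) != 0
               for lam in range(1, n + 2))
```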
\subsection{Regge-Wheeler-Zerilli decomposition}
\label{sec:RW}
The analysis of metric perturbations around isotropic dRGT vacuum solutions is more complicated than standard cosmological perturbation theory due to the background
{St\"{u}ckelberg} fields or more specifically the presence of a new background tensor $\bar \chi_{\mu\nu}$ in addition to the homogeneous and isotropic
metric $\bar g_{\mu\nu}$. In this case the normal modes of fluctuations are no longer eigenfunctions of
the three-dimensional Laplacian, and the usual decoupling of scalar, vector and tensor fluctuations does not apply.
While the dRGT background is generally no longer translationally invariant, it remains rotationally invariant and so the normal modes
are characterized by their angular momentum. In addition, the quadratic Lagrangian is parity invariant and so
the even and odd parity modes are decoupled. The analysis of metric perturbations under these conditions
follows the Regge-Wheeler-Zerilli (RWZ) analysis \cite{Regge:1957td,Zerilli:1970se} originally introduced
for the similarly inhomogeneous Schwarzschild metric.
Here the 10 metric fluctuations of the symmetric $h_{\mu\nu}$ are decomposed in spherical coordinates
$\{t,r,\theta,\phi \}$ and classified according to their transformation properties under rotation and parity.
This classification is reviewed in \S\ref{sec:harmonics} and implies the presence of conserved
quantum numbers for the total angular momentum $\ell$, azimuthal angular momentum $m$, and parity $E$ for even
and $B$ for odd,
as well as the spin $s$ of the various components:
\begin{eqnarray}
\label{RWExpansion}
h_{tt}&=& H_{0}^{\ell m}\, Y_{\ell m},\quad
h_{tr}=H_{1}^{\ell m}Y_{\ell m}, \quad
h_{rr}=H_{2}^{\ell m} Y_{\ell m},\nonumber\\
h_{ta}&=& h_{0}^{\ell m} Y^B_{\ell m,a}
+ \beta^{\ell m}Y^E_{\ell m,a},
\nonumber\\
h_{ra}&=& h_{1}^{\ell m}Y^B_{\ell m,a}
+ \alpha^{\ell m}Y^E_{\ell m,a},\nonumber\\
h_{ab}&=& h_{2}^{\ell m}Y^B_{\ell m,ab} +
G^{\ell m} Y^E_{\ell m,ab} + K^{\ell m} Y_{\ell m}\sigma_{ab} , \nonumber
\end{eqnarray}
where $a,b \in \{ \theta,\phi \}$, $\sigma_{ab}$ is the metric of the 2-sphere, and the summation over $\ell,m$ is implicit.
Here $\mathbb{Y}_{\ell m}^{X}$ with $X \in \{ E,B\}$ are the tensor spherical harmonics and depend on $\{ \theta,\phi \}$
whereas the fields or coefficients whose spin and parity are summarized in Table~\ref{tab:Fields} are functions of
$\{ t,r \}$. Note that the angular momentum states for a given spin $s$ are restricted to $\ell \ge s$.
We differ slightly from the original RWZ analysis by removing the spin-0 or trace piece of the rank-2 angular tensors
which better isolates their rotational properties.
\begin{center}
\begin{table}
\caption{Metric Modes. Here $a,b \in \{ \theta,\phi\}$.}
\begin{tabular}{cccccccc}
\hline\hline
``$E$" even \quad &$H_0$ & $H_1$ & $H_2$ & $K$ & $\beta$ & $\alpha$ &$G$ \\\hline
spin & 0 & 0 & 0 & 0 & 1 & 1 & 2 \\
$\mu\nu$ & $tt$ & $tr$ & $rr$ & $ab$ & $ta$ & $ra$ & $ab$ \\
\hline
\\
\hline\hline
``$B$" odd &--&--&--&--& $h_0$ & $h_1$ & $h_2$ \\\hline
spin &--&--&--&--& 1 & 1 & 2\\
$\mu\nu$ & -- & -- & -- & -- & $ta$ & $ra$ & $ab$\\
\hline\hline
\end{tabular}
\label{tab:Fields}
\end{table}
\end{center}
The symmetries of the background imply that groups of $(X,\ell, m)$ modes decouple and can be analyzed
independently. More specifically, given the quadratic Lagrangian density, we can immediately integrate over angles to decouple
the Lagrangian density in $\{ t, r\}$ into a sum over independent terms
\begin{eqnarray}
\int d\theta d\phi {\cal L}_2 \equiv M_{\rm Pl}^2 \sum_{X \ell m } \mathcal{L}_{X,\ell m}
\end{eqnarray}
with the help of the orthogonality relations in Eqs.~(\ref{eqn:orthonormality}) and (\ref{eqn:angularidentities}). Note that ${\cal L}_2$ involves covariant derivatives
on the 2-sphere, and so different spin states of a given $(X,\ell, m)$ are coupled.
\subsection{Equations of motion and singular points}
\label{sec:eomchi}
From the quadratic Lagrangian for each set of $(X,\ell, m)$ modes, we can derive the coupled
EOMs as usual. Isotropy of the background requires that the EOMs for all $m$
modes of a given $\ell$ and $X$ are the same.
The fields for $m \ne 0$ are complex, but we
will use the shorthand convention $h_1^2$ for $|h_1|^2$, etc., and suppress subscripts
$\ell$ and $m$ on the $E$ and $B$ Lagrangian densities and field variables from here on.
The spin components of a given angular momentum and parity obey
a rather complicated set of coupled EOMs.
Although kinetic terms for these modes come from the Einstein-Hilbert term, not the dRGT potential term,
the nature of the constraints differs crucially from general relativity. The lapse and shift perturbations
$H_0, H_1, \beta, h_0$ are still non-dynamical but their elimination becomes more complicated.
Furthermore, there is no remaining gauge freedom in dRGT which in general relativity
eliminates 4 more variables.
Naively, this would leave the dRGT modes with 6 remaining degrees of freedom rather than
the 2 of general relativity. However the special Boulware-Deser ghost-free structure of
the dRGT Lagrangian eliminates the 6th mode leaving 5 remaining degrees of freedom
to represent the spin states of the massive graviton. In the usual convention for
the polarization states these would be spin-2:
even $G$ and odd $h_2$; spin-1: even $\alpha$ and odd $h_1$; spin-0: even $K$ (with $H_2$ present but obeying a constraint).
We can make one further simplification to the EOMs by using
the property of the background solution \eqref{BackgroundEOM} to eliminate
\begin{equation}
\bar \chi_{22} = \frac{\bar \chi_{12}^2}{\bar \chi_{11}} .
\label{eqn:replacement}
\end{equation}
The disadvantage of this approach is that it cannot be applied at points
where $\bar \chi_{11} = 0$. However, if we encounter such a point, it is generally possible
to switch the chart of the background to pass through it as $\bar \chi_{11}$ is a component of a tensor not a
scalar.
Since our conclusions will be about coordinate invariant quantities such as
characteristic curves, they are then valid at all such spacetime points. It suffices to
show that we can find a chart where $\bar \chi_{12} \neq 0$ since this implies $\bar \chi_{11}
\neq 0$ through Eq.~(\ref{eqn:replacement}). We checked that it is not possible to have
$\bar \chi_{12} = 0$ in both closed isotropic slicing \eqref{eqn:closeddS} and flat isotropic
slicing anywhere besides $\eta = \pi/2$, $\chi = \pi/2$. In closed slicing at this
point $\dot{g} = g' = 0$, so this is a coordinate-invariant
determinant singularity. Here the background
solution itself is undefined. At each point where the background
solution is defined, our analysis as presented thus works in at least one background coordinate frame.
At points where $\bar \chi_{11}=0$, it is actually still possible to implement our
analysis directly, but the details would differ from those
presented below (see \S\ref{ssec:oddl2}).
\subsection{Constraints and characteristics}
\label{sec:charsummary}
As discussed in the previous section, the derivatively coupled EOMs for
the spin states of a given angular momentum and parity obey a complicated constraint
structure. These take the form of differential
equations and not algebraic relations with which the variables involved may be simply eliminated.
In some cases it is still possible to employ techniques
involving auxiliary variables, but
these must be introduced on a case-by-case basis and do not form a systematic means of proceeding.
For the purposes of counting degrees of freedom and
investigating the characteristics along which field information propagates, we use a systematic
method introduced in Appendix~\ref{sec:AlgorithmAppendix} based on augmented first order EOMs.
A summary of the method is as follows:
\begin{enumerate}
\item Reduce all EOMs to first order form by introducing auxiliary variables, e.g.~$u_t = \partial u/\partial t$, and appending their defining equations as additional EOMs. Cast the
EOMs in a matrix form as
\begin{equation}
\mathbb{A} \dot {\bf u}+ \mathbb{B} {\bf u} ' + \mathbb{C} {\bf u} = 0.
\label{eqn:vecform}
\end{equation}
\item Identify and complete the ``regular blocks'' of these equations, which evolve
${\bf u}$ uniquely and consistently, by incorporating hidden algebraic and derivative constraints.
\begin{enumerate}
\item If $\mathbb{A} + \lambda \mathbb{B}$ is invertible for some choice of $\lambda$ then Eq.~(\ref{eqn:vecform}) specifies the evolution of ${\bf u}$ in some suitable temporal coordinate. $\mathbb{A} + \lambda \mathbb{B}$ defines a regular pencil or block. Proceed to step 3.
\item If $\mathbb{A} + \lambda \mathbb{B}$ is singular, in addition to regular $\mu\times \mu$ blocks $\mathbb{R}_\mu$,
it contains overdetermined $(\mu+1)\times \mu$ blocks $\mathbb{L}^P_\mu$,
underdetermined $\mu \times (\mu+1)$ blocks $\mathbb{L}_\mu$ or both in its Kronecker decomposition (see \S\ref{sec:KroneckerAppendix}). Eliminate redundancies and add all missing algebraic
and derivative constraints from the overdetermined blocks to the EOMs. If constraints turn
all underdetermined blocks to regular blocks, proceed to step 3.
\item
If underdetermined blocks remain then solutions are not unique, often due to gauge freedom which can be fixed by addition of
gauge constraints. Add these as EOMs and repeat the previous step.
\end{enumerate}
\item Cast the regular blocks in Weierstrass form $\mathbb{R}_{\mu}(\Omega)$ and read off characteristics from their eigenvalues $\Omega$.
Derivative blocks operate on some linear combination of original fields ${\bf v} = \mathbb{Q}^{-1} {\bf u}$.
If a characteristic is real and the degeneracy or dimension of a block is 1, the block is hyperbolic;
if higher than 1, parabolic. If a characteristic is complex, the block is elliptic. If all blocks are hyperbolic, then the whole system is hyperbolic.
\end{enumerate}
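Steps 2(a) and 3 can be illustrated numerically on a small pencil. In the sketch below (the helper names are ours, not a published code), the test system is the first order form of the wave equation $\ddot u = c^2 u''$ with ${\bf u} = (u_t, u_r)$; with our sign conventions, the characteristic slopes $\Omega = dt/dr$ are the generalized eigenvalues solving $\det(\mathbb{A} - \Omega\, \mathbb{B}) = 0$ for the pencil of Eq.~(\ref{eqn:vecform}).

```python
import numpy as np
from scipy.linalg import eig

def pencil_regular(A, B, trials=8, tol=1e-10, seed=0):
    """Step 2(a): A + lam*B is a regular pencil iff det(A + lam*B)
    is nonzero for some lam; probe a few random values of lam."""
    rng = np.random.default_rng(seed)
    return any(abs(np.linalg.det(A + lam * B)) > tol
               for lam in rng.standard_normal(trials))

def characteristic_slopes(A, B):
    """Step 3: slopes Omega = dt/dr solve det(A - Omega*B) = 0,
    i.e. they are the generalized eigenvalues of the pencil (A, B)."""
    return eig(A, B, right=False)

def classify(Omega, degeneracy):
    """Hyperbolic if Omega is real with block dimension 1; parabolic
    if real with dimension > 1; elliptic if Omega is complex."""
    if abs(Omega.imag) > 1e-10:
        return "elliptic"
    return "hyperbolic" if degeneracy == 1 else "parabolic"

# Toy system: u_tt = c^2 u_rr reduced to first order in (u_t, u_r):
#   d/dt(u_t) - c^2 d/dr(u_r) = 0,   d/dt(u_r) - d/dr(u_t) = 0.
c = 2.0
A = np.eye(2)
B = np.array([[0.0, -c**2],
              [-1.0, 0.0]])

assert pencil_regular(A, B)
print(sorted(characteristic_slopes(A, B).real))  # slopes -1/c and +1/c
```

Each slope has multiplicity one here, so both blocks are hyperbolic: `classify(0.5 + 0j, 1)` returns `"hyperbolic"`, while the same real slope in a degeneracy-2 block would be classified parabolic.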
The classification of the system as hyperbolic, parabolic or elliptic has direct implications for the type of initial or boundary data required. In a hyperbolic system, fields are uniquely specified by the EOMs along
characteristics, given data on a surface that intersects the characteristics. If this surface is spacelike, then one
solves a Cauchy problem for the evolution of fields from initial conditions. For a coupled system, a well-posed Cauchy problem requires a joint surface that intersects all characteristics. The slope of the characteristic defines
the analog of lightcones, i.e.~the domain of dependence and influence. Characteristics of a hyperbolic system
also define curves across which the EOMs do not specify the field evolution.
Hence field discontinuities can occur on characteristics, if they occur in the initial data, and their speed of propagation is
given by the slope. We will call characteristics ``superluminal'' whenever
they are spacelike.
Of course actual discontinuities would be beyond the regime of validity of dRGT as an effective theory.
For an alternative discussion
of related issues, see \cite{Deser:2014fta, Babichev:2016hys}.
In a parabolic system, the EOMs contain derivatives in the direction orthogonal to the
characteristics which carries information across them. The
prototypical example is the heat diffusion equation where the characteristics are constant
time surfaces. Since the domain of dependence then involves all of the characteristics
``upstream'' from a given characteristic, usually one specifies consistent field data on a
given ``initial'' characteristic and marches forward across the ``downstream''
characteristics or domain of influence. The domain of dependence also spans the extent of
the characteristic, which is typically spacelike, and so requires spatial boundary
conditions as well.
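As a minimal illustration of this marching procedure, the following sketch evolves the prototype heat equation $u_t = u_{rr}$ with an explicit scheme (the grid parameters and initial profile are arbitrary choices for illustration):

```python
import numpy as np

# March the heat equation u_t = u_rr forward across its characteristics
# (constant-time surfaces): field data on an "initial" characteristic
# plus spatial boundary conditions at the ends of each characteristic.
nr = 51
r = np.linspace(0.0, 1.0, nr)
dr = r[1] - r[0]
dt = 0.25 * dr**2                    # explicit scheme stable for dt <= dr^2/2

u = np.exp(-100.0 * (r - 0.5)**2)    # data on the initial characteristic
for _ in range(400):
    u[1:-1] += dt / dr**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    u[0] = u[-1] = 0.0               # spatial boundary conditions

# Diffusion spreads and damps the profile on downstream characteristics.
assert 0.0 < u.max() < 1.0
```

Note that every grid point of the updated slice depends on the whole ``upstream'' slice, reflecting the parabolic domain of dependence described above.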
In an elliptic system, no real characteristics exist and so the domain
of dependence is the entire spacetime. Elliptic systems cannot be solved by marching
initial data forward with the EOMs.
The characteristic analysis is therefore a tool to study the nature of the boundary value problem in the classical theory.
It is a precursor to solving for field configurations either analytically or numerically.
If and only if all subsystems are hyperbolic and share
a joint non-characteristic surface, can the Cauchy problem for the system as a whole
be solved from data on this surface.
\section{Odd ``B'' modes}
\label{sec:Odd}
We begin the analysis of the propagating degrees of freedom, constraints and characteristics
of metric fluctuations around vacuum self-accelerating dRGT solutions with the odd parity modes.
Odd parity modes are simpler due to the smaller number of degrees of freedom associated
with them.
They also provide a useful cross check on our general EOM-based technique since
there is an alternate approach of introducing auxiliary fields into the Lagrangian,
which we explain in Appendix \ref{ssec:oddal}.
We first present the quadratic Lagrangian in \S\ref{ssec:oddLag} and explain how the method works for
the special case of $\ell = 1$ where only a single spin 1 mode propagates in \S\ref{ssec:oddl1}.
We study the general case $\ell \geq 2$ where there is an additional spin 2 mode
in \S\ref{ssec:oddl2}. We compare this analysis to the alternate approach of \S\ref{ssec:oddal}
in \S\ref{ssec:oddlalt}. A summary of the regular blocks and characteristics of both the
odd and even modes is given in
Table~\ref{tab:Characteristics}.
\subsection{Lagrangian}
\label{ssec:oddLag}
As discussed in \S\ref{sec:RW}, the normal mode decomposition of metric fluctuations decouples
the Lagrangian density into independent pieces for a given angular momentum $\{ \ell, m \}$ and parity state.
For each odd or $B$ set, the Lagrangian density in $\{t, r\}$ can be schematically written
\begin{eqnarray}
\mathcal{L}_{B} &=& \sum_{i=1}^{13} D_i(t,r,\ell) {\cal B}_{i}(h_{a_i},h_{b_i}),
\label{OddLagr}
\end{eqnarray}
where the $D_i$ coefficients depend on the background and the total angular momentum $\ell$ but not $m$. ${\cal B}_i$ represents a bilinear operator with at most one derivative in
$t$ or $r$ on each of the fields $h_{a_i}, h_{b_i} \in \{ h_0, h_1, h_2 \}$.
Explicit expressions for these
terms are provided in Eqs.~(\ref{OddLagrRepeat}) and (\ref{eqn:Ds}).
For example, ${\cal B}_{13} = h_0 \dot h_2$ comes from the term $h^{\mu\nu;\alpha}
h_{\nu\alpha;\mu}$ of Eq.~(\ref{EinsteinHilbert}) with $\alpha = t$ and $\mu, \nu$ angular
coordinates or $\mu = t$ and $\alpha, \nu$ angular. Focusing on the former case,
$h^{\mu\nu;\alpha}$ contains $\dot h_2 Y^B_{\ell
m,ab}$ and $h_{\nu\alpha;\mu}$ contains $h_0 \nabla_a Y^B_{\ell m, b}$. Since the covariant derivative on the
sphere $\nabla_a$ raises and lowers the spin weight, the orthogonality of
angular integrals \eqref{eqn:angularidentities} produces the coupling.
It is clear from the number of terms in Eq.~(\ref{OddLagr}) and the explicit form \eqref{eqn:Ds} for their coefficients
that just extracting the expected spin 1 and 2 degrees of freedom is difficult and finding their characteristics even more so.
Unlike in general relativity, no further simplifications are possible since we cannot eliminate
coupled modes utilizing gauge freedom of the theory (see \S\ref{sec:GR}).
\subsection{$\ell = 1$}
\label{ssec:oddl1}
Since only fields with $s \le \ell$ exist at a given $\ell$, the odd $\ell=1$ case is special in that the spin-$2$ field $h_2$ is
not present and we expect only one of the remaining fields $h_0, h_1$ to propagate after applying all the constraints.
Following our algorithm outlined in \S\ref{sec:RW} and detailed in
\S\ref{sec:AlgorithmAppendix},
we first rewrite
the two EOMs into a first order system by introducing four additional
fields $h_{0t}, h_{0r}, h_{1t}$, and $h_{1r}$ corresponding to derivatives indicated by the second subscript
and add their definitions to the EOMs, e.g.\
\begin{equation}
\dot h_0 - h_{0t} = 0,
\quad h_0' - h_{0r}=0.
\label{eqn:h0consistency}
\end{equation}
We then arrive at a set of six first order
differential equations which can be captured in the form \eqref{eqn:vecform}.
Instead of proceeding directly to the full
Kronecker decomposition, we can first look for all $\mathbb{L}_1^P$ overdetermined blocks by
noticing combinations of equations without temporal
or spatial derivatives and matching these together (see \S\ref{sec:AlgorithmAppendix}).
In fact just by inspection we know that the Kronecker structure of the equations contains at least two $\mathbb{L}_1^P$ overdetermined blocks corresponding to
Eq.~(\ref{eqn:h0consistency}) and its $h_1$ counterpart. In general, each $\mathbb{L}_1^P$ block hides one constraint.
Here these are the consistency relations
\begin{eqnarray}
\dot h_{0r} - h_{0t}' &=& 0, \nonumber\\
\dot h_{1r} - h_{1t}' &=& 0,
\end{eqnarray}
which we add to the EOMs.
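The extraction of such hidden constraints from an $\mathbb{L}_1^P$ pair can be automated symbolically. In the sketch below (with generic fields $u, v, w$ standing in for $h_0, h_{0t}, h_{0r}$), the constraint emerges from cross-differentiating the defining pair, exactly as above:

```python
import sympy as sp

t, r = sp.symbols('t r')
u, v, w = (sp.Function(n)(t, r) for n in ('u', 'v', 'w'))

# Defining equations of the auxiliary derivative fields, the analog of
# Eq. (h0consistency): u_t - v = 0 and u_r - w = 0.
e1 = sp.diff(u, t) - v
e2 = sp.diff(u, r) - w

# The L_1^P block hides one constraint: cross-differentiating cancels
# the mixed second derivative of u and leaves a first order relation,
# the analog of h0r_t - h0t_r = 0.
hidden = sp.expand(sp.diff(e1, r) - sp.diff(e2, t))
assert sp.simplify(hidden - (sp.diff(w, t) - sp.diff(v, r))) == 0
```

The same cross-differentiation applied to the $h_1$ pair yields the second consistency relation added to the EOMs above.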
At this point we have eight EOMs for six field variables, leading to an $8\times 6$ matrix
pencil \eqref{eqn:vecform}. The $\mathbb{L}_1^P$ discovery process also identifies a block associated with
\begin{eqnarray}
\label{OddKroneckerExample}
\dot \psi - \omega_1 &=& 0 ,\nonumber\\
\psi ' - \omega_2 &=& 0 ,
\end{eqnarray}
related to the overdetermined variable
\begin{equation}
\psi = h_{1t} - h_{0r} .
\end{equation}
In the formula above, $\omega_i$ are two linear combinations of the fields without any
derivatives; their particular form is not important. The block of equations
\eqref{OddKroneckerExample} contains a hidden constraint in a form of the first order
differential equation
\begin{equation}
\omega_1' - \dot \omega_2 = 0 .
\end{equation}
This equation is added to the investigated system, which is now described by a $9 \times 6$ pencil.
Its Kronecker form now contains
\begin{equation}
\{ \mathbb{L}_2, \mathbb{L}_0^P, 3\times \mathbb{L}^P_1 \} .
\end{equation}
In general, $\mathbb{L}_0^P$ structures represent algebraic constraints that contain no derivative terms.
Assuming that
$\bar \chi_{12} \neq 0$, we can use the constraint to
integrate out $h_{1t}$ completely
and at the same time remove one of the nine EOMs, which becomes redundant once
$h_{1t}$ is eliminated.
This operation turns the underdetermined
$\mathbb{L}_2$ block into a regular block $\mathbb{R}_2$
and the Kronecker form into the
$8 \times 5$ system
\begin{equation}
\{3 \times \mathbb{L}^P_1, \mathbb{R}_2\big(\tfrac{\bar \chi_{12}}{\bar \chi_{11}}\big) \}
\end{equation}
which contains no underdetermined blocks. Furthermore it contains no hidden constraints
since we have extracted one hidden relation from each of the three overdetermined $\mathbb{L}_1^P$ blocks.
Using the third step of the algorithm, we identify from $\mathbb{R}_2$
a single characteristic of degeneracy 2, defined as the integral curve
$t(r)$ of
\begin{equation}
\label{oddl1cha} \frac{dt}{dr} = - \frac{\bar \chi_{12}}{\bar \chi_{11}} .
\end{equation}
We interpret this as one physical degree of freedom (two pieces of initial data or
phase space degrees of freedom) corresponding to the odd parity of the spin-1
mode of the massive graviton.
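Tracing such a curve numerically is straightforward; the sketch below integrates the slope equation above with SciPy, using an arbitrary smooth stand-in for $\bar \chi_{12}/\bar \chi_{11}$ (in practice this ratio comes from the background solution):

```python
import numpy as np
from scipy.integrate import solve_ivp

def chi_ratio(t, r):
    """Arbitrary smooth stand-in for the background ratio
    chibar_12 / chibar_11; replace with the actual solution."""
    return 0.3 * np.cos(t) * np.tanh(r)

# Integrate the characteristic ODE dt/dr = -chibar_12/chibar_11
# outward in r from the initial point (t, r) = (1.0, 0.1).
sol = solve_ivp(lambda r, y: [-chi_ratio(y[0], r)],
                t_span=(0.1, 3.0), y0=[1.0],
                dense_output=True, rtol=1e-8)
assert sol.success
t_of_r = sol.sol          # the characteristic curve t(r) on [0.1, 3.0]
```

Repeating the integration from a grid of initial points yields a spacetime diagram of the characteristic family.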
Spacetime diagrams of the characteristic curves in the $(\eta, \chi)$ coordinates for the background
solutions \eqref{solMukohyama} and \eqref{solKoyama} for $\bar \chi_{\mu\nu}$
are plotted in
Figure~\ref{fig:characteristics} (thick blue lines). Regions of the de Sitter space are
divided into separate copies of the background solutions at the determinant singularity (red thick lines).
The $f_{C=1}$ case is the unique background solution where the
characteristics are everywhere luminal; it can be shown \cite{Motloch:2015gta} that all
other background solutions show regions of spacelike
characteristic curves around the poles $\chi = 0, \pi$.
Although the curves themselves coincide with those for isotropic or even $\ell=0$ perturbations
obtained previously in Ref.~\cite{Motloch:2015gta}, as will be shown with the current method
in \S\ref{sec:Even}, the boundary value problem here is very different.
For this odd $\ell=1$ mode, $\mathbb{R}_2$ indicates that the associated degree of freedom is
parabolic not hyperbolic. In this sense the boundary problem is similar to the
heat equation where one specifies field data on a spacelike characteristic surface
and uses the EOMs to march forward in time given
spatial boundary conditions at the ends of characteristics.
Note though that parabolic characteristics need not be spacelike.
For example in the $f_o$ solution, these characteristics are timelike in the inner diamond (Fig.~\ref{fig:characteristics}, left panel).
We shall see that in the even $\ell=0$ case, the system contains two hyperbolic phase space degrees of
freedom that propagate on the same curves. In this case, the EOMs do not evolve
fields off the characteristics and so field data must be specified
on a noncharacteristic surface.
\begin{figure*}
\center
\includegraphics[width = 0.99\textwidth]{figure.pdf}
\caption{Conformal diagrams of the characteristic curves for the $f_o$ background solution and two members of the $f_C$ family of solutions. Thick blue lines correspond to the new spin-0 and
spin-1 modes introduced in dRGT whereas thin solid and dashed lines
represent the luminal characteristics of the spin-2 modes. dRGT modes all share the same
repeated characteristics but come from both hyperbolic and parabolic blocks. Except in special
cases, all modes of the same parity and angular momentum are coupled by non-derivative
terms. Thick red lines represent determinant singularities.
}
\label{fig:characteristics}
\end{figure*}
Although the parabolic nature of the system is robust to field redefinitions, the field content of the overdetermined
and regular blocks is not.
In particular, we can mix fields from
overdetermined blocks to regular blocks since they are non-dynamical. We can also mix variables between regular blocks
of the same characteristic, but not of different characteristics,
as detailed in \S\ref{ssec:KroneckerAmbiguity}. For example, in the discussion
above, we are led to the assignment
\begin{equation}
{\bf v} = \left( h_0, h_1, \psi, h_{0r} {-} \tfrac{\bar \chi_{11}}{\bar \chi_{12}}h_{1r},
{-}\tfrac{\bar \chi_{11}^2}{\bar \chi_{12}^2}h_{1r} \right)^T \!\!\!\! ,
\end{equation}
where the first three variables correspond to the 3 overdetermined blocks, leaving the last two as
the nominal ``propagating'' degrees of freedom.
However, an equally valid
representation with the same Kronecker structure is for example
\begin{equation}
{\bf v} = \left( h_0, h_1, \psi,
h_{0r} {\scriptstyle -} \tfrac{\bar \chi_{11} }{\bar \chi_{12}}h_{1r}
{\scriptstyle -} \tfrac{b'}{b} h_0,
\,
{\scriptstyle -}\tfrac{\bar \chi_{11} }{\bar \chi_{12}} h_{0r}{\scriptstyle +}
\psi\right)^T \!\!\!\!.
\end{equation}
More usefully, it is possible to show that
for a particular choice of $\mathbb{P}, \mathbb{Q}$ the field variable $v_4$ corresponding to the first field
in the parabolic block completely decouples from the remaining four fields,
forming an autonomous equation
\begin{equation}
\label{Autonomous}
-\frac{\bar \chi_{12}}{\bar \chi_{11}} \dot v_4 + v_4' + C v_4 = 0
\end{equation}
for some $C(t,r)$ which will not be given here. This is in full agreement with the
alternative analysis of \S\ref{ssec:oddal}, which also finds that one of the fields obeys
an autonomous equation~\eqref{eqq} with the same characteristic.
In this special case, explicit solutions for $v_4$ may be obtained by integrating data
from a non-characteristic curve as {\eqref{Autonomous}} is itself a decoupled hyperbolic equation. Note that the EOM associated with
$v_5$ still remains coupled to $\dot v_4$ so its initial or boundary data cannot be given independently
of this solution.
\begin{center}
\begin{table}
\caption{Characteristics and multiplicity of regular blocks. }
\begin{tabular}{cccccc}
\hline\hline
$\, X \,$ & $\quad\ell\quad$ & $\mathbb{R}_1\big( \frac{\bar \chi_{12}}{\bar \chi_{11}}\big)$ & $\mathbb{R}_2\big( \frac{\bar \chi_{12}}{\bar \chi_{11}}\big)$
& $ \mathbb{R}_1\big(\frac{a}{b}\big)$ & $\mathbb{R}_1\big({\scriptstyle -}\frac{a}{b}\big)$ $\vphantom{\Big(}$ \\
\hline
$B$ & 0 &0 & 0 & 0 & 0 \\
$B$ & 1 & 0 & {1} & 0 & 0 \\
$B$ & $\ge 2$ & 0 & {1} & 1 & 1\\
$E$ & 0 &2 & 0 & 0 &0 \\
$E$ & 1 & 2 & 1 & 0 & 0 \\
$E$ & $\ge 2$ & 2 & 1 & 1 & 1 \\
\hline
\end{tabular}
\label{tab:Characteristics}
\end{table}
\end{center}
\subsection{$\ell \geq 2$}
\label{ssec:oddl2}
Although we have all three fields present at $\ell \geq 2$, the analysis is basically the same as for $\ell = 1$.
The difference is that we add three additional fields $h_2,h_{2t}, h_{2r}$,
and four equations: the EOM for $h_2$, the definitions of
$h_{2t}, h_{2r}$ and the constraint associated with this new $\mathbb{L}_1^P$ block
\begin{equation}
\dot h_{2r} - h_{2t}' = 0.
\end{equation}
The other structures in the system are the same as in the $\ell=1$ treatment. The $\mathbb{L}^P_1$ block
related to $\psi$ reveals an $\mathbb{L}^P_0$ block which allows us to integrate out $h_{1t}$.
With these constraints, the system is described by a $12\times 8$ Kronecker form
\begin{equation}
\{ 4\times \mathbb{L}^P_1, \mathbb{R}_2\big(\tfrac{\bar \chi_{12}}{\bar \chi_{11}}\big),
\mathbb{R}_1\big({\scriptstyle -}\tfrac{a}{b} \big),\mathbb{R}_1\big(\tfrac{a}{b} \big) \} .
\end{equation}
The first regular block is parabolic and already present at $\ell = 1$; we associate it with the
odd parity spin-1 mode of the graviton. The other two regular blocks
are hyperbolic with characteristic curves
\begin{equation}
\label{oddl2cha} \frac{dt}{dr} = \pm \frac{a}{b},
\end{equation}
which are luminal and directed radially inward or outward. These modes correspond to the
spin-2 graviton mode and necessarily contain the combination of fields
\begin{equation}
\label{OddSpin2Fields}
h_{2r} \pm
\frac{a}{b} h_{2t}, \qquad (\mbox{hyperbolic, luminal}).
\end{equation}
Luminality of these curves is expected as this mode is inherited
from general relativity which has the same kinetic structure as dRGT.
This association with the spin-2 mode \eqref{OddSpin2Fields} cannot be removed using the freedom in
performing the Kronecker decomposition, since fields in the
other regular blocks have different characteristics and the overdetermined blocks
do not contain either $h_{2t}$ or $h_{2r}$.
Moreover there are non-derivative couplings of these modes to the other modes
through $\mathbb{C}$ that also cannot in general be removed.
{\it Open solution.---}
It is instructive to examine the case of the $f_o$ background solution in detail since its
homogeneity in open slicing permits a traditional scalar-vector-tensor (SVT) analysis
there \cite{Gumrukcuoglu:2011zh}. In this slicing, the constant time surfaces coincide
with the $\bar \chi_{12}/\bar \chi_{11}$ characteristics. In fact, our result seems paradoxical
in that SVT normal modes of the Laplace operator should fully decouple from each other in
linear theory.
To directly compare these results, we reperform our analysis in open frame where $\bar \chi_{22} =
0$, a case that was excluded in our primary analysis but allowed by the technique itself
(see \S\ref{sec:eomchi}). This analysis therefore applies specifically to the open wedge
of de Sitter (upper right triangle of Fig.~\ref{fig:characteristics}, left panel) and
matches the domain investigated in Ref.~\cite{Gumrukcuoglu:2011zh}. We will skip
the details, as everything proceeds similarly to our main analysis; as was argued before, the
structure of the regular blocks is coordinate invariant and thus the same in the two analyses.
Using the freedom in the Kronecker form, we can in {this particular case} choose ${\bf v}$ such that the fields of a parabolic block
$\mathbb{R}_2$ and one of the $\mathbb{L}_1^P$ blocks
completely decouple from the fields of the remaining five blocks. The EOMs which govern the
corresponding fields can then be solved independently. In this sense the spin-1 vector mode does decouple
from the spin-2 tensor mode. However these fields then source those
in the remaining five blocks, in particular the spin-2 mode.
The resolution to the paradox is that solutions to the EOMs of the decoupled
blocks diverge at either origin or
the spatial infinity in ways that cannot be represented by the vector normal modes of the
Laplace operator.
In other words, the usual SVT normal mode analysis sets these modes to zero by boundary
conditions, eliminating the source to the tensor modes.
This one-way decoupling of the $\ell\ge 2$ odd parabolic block is not a general feature of the dRGT
self-accelerating solutions. More typically, the spin-1 and spin-2 variables mutually
source each other and no simplification of the full system of
EOMs is possible. Furthermore, the decoupling within the parabolic block that is possible for $\ell=1$
odd modes for all background solutions (see Eqs.~\eqref{Autonomous} or \eqref{eqq}) is typically not
possible for $\ell \ge 2$ odd modes.
\subsection{Comparison with alternative analysis}
\label{ssec:oddlalt}
In Appendix~\ref{ssec:oddal}, we present an alternative analysis of the odd modes.
Introducing auxiliary fields, we recast the odd Lagrangian as a second order system
for a new variable $\qa$ and $h_2$, with $h_0, h_1$ integrated out {for $\ell \ge 2$}.
As such, there are no hidden constraints between the new variables and the system can be
investigated by standard methods.
In particular, given the second order system EOMs, we perform a characteristic analysis by
searching for curves where discontinuities in the highest derivatives can occur. This
analysis agrees on the spacetime trajectories and total multiplicities of the
characteristics. For $\ell = 1$, integrating out $h_0$ leaves EOMs for $\qa$ and $h_1$ that
again confirm our main analysis.
These alternate analyses however fail to automatically find the distinction between parabolic
blocks and repeated hyperbolic blocks of the same characteristic, which requires retention
of the first order derivative structure.
Moreover a drawback of modifying the Lagrangian to resolve constraints
is that different types of constraints require different methods. In fact as we shall
see in the next section, the even modes present such a complex constrained system that it
is unclear how to proceed at the Lagrangian level. Our method provides an algorithmic
method of resolving hidden algebraic or differential constraints for arbitrarily complex systems at the EOM level.
On the other hand, the alternative analysis allows us to perform a Hamiltonian analysis of
the system, see \S\ref{ssec:oddl1Hamiltonian} for the particular case of $\ell=1$.
\section{Even ``E'' modes}
\label{sec:Even}
In this section we finish our analysis of the perturbations around vacuum
cosmological solutions of dRGT by finding characteristic curves for the even parity or ``E'' modes. We
start with the special cases $\ell=0$ and $\ell=1$, the former of which was previously
investigated in \cite{Motloch:2015gta} by the {St\"{u}ckelberg} method \cite{Wyman:2012iw}, and then proceed to the general
case with $\ell \geq 2$.
\subsection{Lagrangian}
\label{ssec:evenLag}
The even mode Lagrangian is of the form
\begin{eqnarray}
\mathcal{L}_{E} &=& \sum_{i=1}^{55} E_i(t,r,\ell) {\cal E}_{i}(e_{a_i},e_{b_i}),
\label{EvenLagr}
\end{eqnarray}
where like the odd modes $E_i$ are the coefficients of ${\cal E}_i$, a bilinear operator on
pairs of the seven $E$ modes
\begin{equation}
e_{a_i}, e_{b_i} \in \{ H_0, H_1, H_2,\alpha, \beta, K, G \}
\end{equation}
that contains at most one derivative on each field with respect to $t$ or $r$.
Given that there are 55 distinct terms, they will not be presented explicitly here.
\subsection{$\ell = 0$}
\label{ssec:evenl0}
For the isotropic modes, only the spin 0 fields, $K$ and $H_i$ where $i =
0,1,2$, remain in the Lagrangian. We then introduce eight first derivative fields $H_{it},
H_{ir}, K_t$ and $K_r$, along with their defining equations to have all EOMs manifestly first order. The defining equations naturally pair themselves into
$\mathbb{L}_1^P$ blocks in the Kronecker decomposition just like for the
odd modes. We can therefore
automatically add the hidden constraints corresponding to these four $\mathbb{L}_1^P$ blocks;
these take the form of consistency equations such as
\begin{equation}
\dot H_{0r} - H_{0t}' = 0 .
\end{equation}
After taking them into account, the Kronecker decomposition of the resulting $16 \times
12$ pencil reveals two additional $\mathbb{L}^P_1$ blocks related to $K_r, K_t$. These two
blocks hide two additional equations, which are then included into the analysis.
In the resulting $18 \times 12$ structure, there are two algebraic $\mathbb{L}_0^P$ constraints.
We can use these two constraints to integrate out $K_r$ and $H_0$ completely and at the same time
remove two redundant equations. Four of the remaining 16 equations turn
into algebraic $\mathbb{L}_0^P$ constraints, allowing us to remove $H_{1t}, H_{2t}, H_{0r}$ and $H_{0t}$
together with four equations.
Of the remaining 12 EOMs, two are redundant,
allowing us to reduce the number of EOMs for the remaining six fields down to 10.
The final system is
\begin{equation}
\left\{4\times \mathbb{L}_1^P, 2\times\mathbb{R}_1\big(\tfrac{\bar \chi_{12}}{\bar
\chi_{11}}\big)\right\} .
\end{equation}
All hidden constraints which can be derived from the four overdetermined blocks are
included and our analysis is thus finished.
The characteristic curves corresponding to the two regular blocks are described by the
same
slope,
\begin{equation}
\label{EvenIsotropicChar}
\frac{dt}{dr} = - \frac{\bar \chi_{12}}{\bar \chi_{11}} ,
\end{equation}
that was associated with the spin-1 odd modes.
However, unlike the odd modes, both regular blocks are hyperbolic,
allowing for solutions based on a set of initial data given on a non-characteristic surface.
Thanks to the large field transformation group associated with the Kronecker decomposition, one of the
hyperbolic blocks can be completely decoupled from the remainder of the blocks to describe
a field governed by an autonomous equation similar to Eq.~\eqref{Autonomous}.
This conclusion is in complete agreement with previous investigations of the isotropic modes
\cite{Wyman:2012iw,Motloch:2015gta}. The analyses there relied on solving
for {St\"{u}ckelberg} and metric perturbations in isotropic gauge. There the special combination
$\delta\Gamma=\delta g-x_0r\delta a$ obeys the same autonomous $\mathbb{R}_1$ hyperbolic
form. The remaining isotropic {St\"{u}ckelberg} perturbation $\delta f(t,r)$ also
satisfies a first order differential equation while the remaining metric fluctuations are governed by constraints.
Characteristic curves corresponding to both propagating variables
are exactly \eqref{EvenIsotropicChar}. The isotropic gauge analysis is thus completely consistent
with our unitary gauge analysis.
As with the alternative $\ell=1$ odd mode analysis of \S\ref{ssec:oddal}, in the variables of the isotropic gauge analysis, a Hamiltonian
analysis is tractable. Ref.\ \cite{Khosravi:2013axa} found that the Hamiltonian of this $\ell=0$ mode is unbounded from below.
A Hamiltonian analysis for our unitary gauge system is intractable but from these two examples we can at least conclude
that there is no direct relation between the existence of an $\mathbb{R}_1$ or $\mathbb{R}_2$ block and an unbounded Hamiltonian.
\subsection{$\ell = 1$}
\label{ssec:evenl1}
The analysis is very similar to the $\ell=0$ one but with the addition of the spin-1 fields $\alpha, \beta$.
As before, we introduce the 12 additional first derivative
fields and their defining equations. We again directly add the six consistency conditions
which correspond to the $\mathbb{L}_1^P$ blocks related to the defining relations.
In this $24 \times 18$ system, we then find three new $\mathbb{L}_1^P$ blocks
whose hidden constraints then reveal three $\mathbb{L}_0^P$ blocks. The latter allow us to
integrate out $K_r, \alpha_r$ and $H_0$ which then further reveals four algebraic constraints among the fields which
we use to remove $\beta_t, H_{2t}, H_{0t}$ and $H_{0r}$. As before, two of the remaining EOMs turn out to be redundant and can be dropped.
The final Kronecker decomposition for the
$18 \times 11$ system becomes
\begin{eqnarray}
\left\{7\times \mathbb{L}_1^P,2\times \mathbb{R}_1\big(\tfrac{\bar \chi_{12}}{\bar \chi_{11}}\big)
,\mathbb{R}_2\big(\tfrac{\bar \chi_{12}}{\bar \chi_{11}}\big)
\right\} .
\end{eqnarray}
There are no additional hidden constraints as all seven $\mathbb{L}_1^P$ constraints are
already included.
On top of the blocks already present at $\ell = 0$ there is an $\mathbb{R}_2$ block
that we also found in the analysis of the $\ell=1$ odd modes. This agrees with our
interpretation that they together represent the two parity states of the spin-1 polarization of
the massive graviton.
\subsection{$\ell \geq 2$}
\label{ssec:evenl2}
For $\ell \ge 2$ there is an additional spin-2 field $G$ but the analysis is basically the same as for $\ell = 1$.
We first add three more fields $G, G_t, G_r$, two more defining conditions and the hidden
consistency relation
\begin{equation}
\dot G_r - G_t' = 0.
\end{equation}
From there on the analysis follows exactly the same steps as the $\ell = 1$ analysis with the removal
of first $K_r, \alpha_r, H_0$ and then $\beta_t, H_{2t},H_{0t}, H_{0r}$ with constraints
and finally the dropping of two redundant EOMs. The final Kronecker structure
\begin{equation}
\big\{8\times \mathbb{L}_1^P,
2 \times \mathbb{R}_1\big(\tfrac{\bar \chi_{12}}{\bar \chi_{11}}\big),
\mathbb{R}_2\big(\tfrac{\bar \chi_{12}}{\bar \chi_{11}}\big),
\mathbb{R}_1\big(\tfrac{a}{b}\big), \mathbb{R}_1\big({\scriptstyle -}\tfrac{a}{b}\big)
\big\}
\end{equation}
contains the same regular components as $\ell=1$ corresponding to the spin-0 and spin-1 modes.
In addition there are two
new regular blocks with luminal characteristic curves
\begin{equation}
\frac{dt}{dr} = \pm \frac{a}{b}.
\end{equation}
This completely mirrors the odd modes, where the difference between $\ell \ge 2$ and $\ell =
1$ also consists of a new luminally propagating mode \eqref{oddl2cha}. As we mentioned earlier,
luminality of the mode agrees with our expectations. The fields associated with these
luminal blocks are
\begin{equation}
G_t \pm \frac{b}{a}G_r , \qquad (\mbox{hyperbolic, luminal}).
\end{equation}
These blocks are thus unambiguously related to the spin-2 field. Notice that in the $f_o$ solution
there are no spacelike curves that intersect the hyperbolic characteristics of all types whereas
in the $f_C$ solutions there are (see Fig.~\ref{fig:characteristics}). Nonetheless the joint degrees of freedom in the $f_C$ case are not hyperbolic due to the parabolic blocks.
For the even modes it is possible to perform a similar decoupling analysis as we performed
for the odd modes. Again, for particular solutions such as $f_o$ it is possible to
completely decouple the parabolic block $\mathbb{R}_2$ and one overdetermined block
$\mathbb{L}_1^P$ from the rest. The parabolic modes
diverge at either origin or spatial infinity and if not set to zero \cite{Gumrukcuoglu:2011zh} will act as sources to the
luminal modes. Again the parabolic decoupling does not happen for more general background solutions
and the spin-1 and spin-2 modes remain mutually coupled.
\section{Discussion}
\label{sec:discuss}
The dRGT theory of massive gravity presents interesting challenges for the study of metric
perturbations around its vacuum self-accelerating backgrounds. Although the background
spacetime remains homogeneous and isotropic, the presence of a second metric can break translational invariance and invalidate the standard scalar-vector-tensor decomposition. Furthermore,
the 10 metric variables are derivatively coupled and hide both differential and algebraic constraints
that permit just 5 independently propagating modes. In this paper we have developed and employed techniques to surmount these challenges.
Given the isotropy of the background and the parity invariance of the theory,
we use the Regge-Wheeler-Zerilli decomposition to decouple modes of different
angular momentum and parity. The equations of motion describe the propagation of the
coupled spin states of the massive graviton in the remaining radial dimension.
These equations of motion hide algebraic and differential constraints from the lapse, shift and
ghost-free construction of dRGT. In Appendix~\ref{sec:AlgorithmAppendix} we develop
an algorithm to find their hidden constraints and characteristic curves.
Using this technique, we find that the
new spin-0 as well as the even and odd spin-1 degrees of freedom all possess the same characteristic
curve
\begin{equation}
\frac{dt}{dr} = - \frac{\bar \chi_{12}}{\bar \chi_{11}} ,
\end{equation}
that depends only on the background solution, not on angular momentum or parity.
These characteristics always run tangent to
determinant singularities. On the other hand the two spin-2 degrees of freedom
propagate on luminal characteristics. This behavior of the tensor modes is
expected, because dRGT and general relativity share the same kinetic structure while they
differ in their constraint structure.
Different spin states require different initial and boundary conditions.
The scalar mode is hyperbolic and requires data on a surface that intersects all of
its characteristics.
For solutions like $f_o$ where there are no such surfaces that are spacelike, the initial value problem
is ill-posed. Moreover, the spin-2 modes are hyperbolic and a joint solution requires
a common surface that intersects all characteristics.
The two spin-1 modes propagate along the
same characteristic but each form a parabolic system, much like the heat equation. To
specify their evolution, we need to provide initial data on a characteristic surface and
supplement it with two boundary conditions.
These conclusions agree with a previous analysis of the isotropic modes $\ell = 0$
using a different method \cite{Motloch:2015gta,Khosravi:2013axa} as well as a different analysis
of the $\ell=1$ odd modes presented in \S\ref{ssec:oddal}.
In both of these cases, the Hamiltonian is unbounded from below.
Finally within a given angular momentum and parity set, various spin modes are still
coupled by non-derivative terms in ways that cannot be removed by field redefinitions
except in special cases. The presence of coupled degrees of freedom that are both
hyperbolic and parabolic in nature and propagate on different characteristics implies
that the metric modes of dRGT cannot be evolved as a simple Cauchy problem.
\smallskip
Central to these analyses is the algorithmic method, presented with numerous examples in
Appendix~\ref{sec:AlgorithmAppendix}, to find hidden constraints and characteristic curves of an arbitrary
system of linear partial differential equations in 1+1 dimensions. This method has
a wide range of uses beyond the dRGT theory.
The logic of the method is first to rewrite all the equations of
motion into a first order system of differential equations. The Kronecker decomposition of the
resulting matrix pencil provides a systematic means of extracting hidden constraints and
identifying residual gauge freedom for the purpose of identifying the regular system that defines
the unique, consistent evolution of fields. The generalized eigenvalues of the regular block
define characteristics. Their degeneracy and reality determines the hyperbolic,
parabolic and elliptic nature of the fields.
By identifying constraints hidden in derivatives of the
original equations of motion, these techniques should be useful in other systems where
the phase space degrees of freedom are reduced by constraints.
\acknowledgements
We thank Austin Joyce, Teruaki Suyama, Robert Wald and the organizers and participants of JGRG25 for useful discussions.
This work was supported by U.S.~Dept.\ of Energy
contract DE-FG02-13ER41958.
WH was additionally supported by
the Kavli Institute for Cosmological Physics at the University of
Chicago through grants NSF PHY-0114422 and NSF PHY-0551142 and NASA ATP NNX15AK22G.
PM was additionally supported by grants NSF PHY-1125897 and NSF PHY-1412261 and thanks the Perimeter Institute for
Theoretical Physics where part of this work was performed. Research at Perimeter Institute is supported by
the Government of Canada through Industry Canada and by the Province
of Ontario through the Ministry of Economic Development \& Innovation.
}
\appendix
\section{Hidden constraints and characteristics}
\label{sec:AlgorithmAppendix}
We present here the algorithm which we use to reveal hidden algebraic and differential
constraints and to determine the characteristic curves.
These are curves along which the equations of motion specify unique solutions for the
fields given their values at initial or boundary points.
In principle, our technique can be used for
any set of linear partial differential algebraic equations (PDAEs) in 1+1 dimensions.
Analogously to the well-studied case of ordinary differential algebraic equations (DAEs), PDAEs
are partial differential equations where some of the equations represent constraints.
Unlike DAEs, where the constraints are purely algebraic, for PDAEs they can be algebraic in one of the dimensions and
differential in the other dimension. Our method is thus well suited to analyze
perturbations around spherically symmetric solutions with complicated derivative couplings
and nonalgebraic constraints, such as dRGT. Despite its generality and applicability to a
wide range of problems, we are not aware of it having been published before in its entirety
(see \cite{Martinson:1999aaa} for discussion of PDAEs with algebraic constraints,
which provides pieces of our construction).
In \S\ref{sec:RW}, we provided an executive summary of the three-step technique which we elaborate on
here with a discussion of fields propagating in regular blocks and illustrative examples.
\subsection{First order reduction}
The first step is to rewrite all equations of motion as first order partial
differential equations by introducing additional field variables corresponding to field
derivatives of the next to highest order. With each new field, we add into the
investigated system an equation which defines this variable.
For example, if any of the
equations of motion contains $\ddot u$, we introduce an additional field $u_t$ and
supplement the equations of motion with
\begin{eqnarray}
\label{FirstConsistency}
u_t &=& \dot u .
\end{eqnarray}
We can choose to introduce fields in a
symmetric fashion regardless of whether they are required for the initial reduction, i.e.~introduce both $u_t$ and
\begin{eqnarray}
u_r &=& u' .
\end{eqnarray}
This implies a ``hidden" consistency relation
\begin{eqnarray}
u_t' -\dot u_r =0 ,
\end{eqnarray}
but we shall see that even if we do not add this consistency relation to the EOMs at the outset, it will be discovered
in the analysis.
We organize these equations into matrix form as
\begin{equation}
\mathbb{A} \dot {\bf u}+ \mathbb{B} {\bf u} ' + \mathbb{C} {\bf u} = 0.
\label{BasicEquation}
\end{equation}
This notation should not be confused with vectors and tensors in the spacetime or on the
2-sphere.
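As a concrete illustration of this matrix form (an added cross-check, anticipating the wave-equation example of this appendix), the symmetric first-order reduction of $\ddot f - f'' = 0$, including the consistency relation above, can be assembled and verified on an exact solution:

```python
import sympy as sp

t, r = sp.symbols('t r')
f = sp.sin(t - r)  # exact plane-wave solution of  ddot f - f'' = 0
u = sp.Matrix([f, sp.diff(f, t), sp.diff(f, r)])  # u = (f, f_t, f_r)

# A u_dot + B u' + C u = 0; rows: definition of f_t, definition of f_r,
# the wave EOM in first order form, and the hidden consistency relation
A = sp.Matrix([[1, 0, 0], [0, 0, 0], [0, 1, 0], [0, 0, 1]])
B = sp.Matrix([[0, 0, 0], [1, 0, 0], [0, 0, -1], [0, -1, 0]])
C = sp.Matrix([[0, -1, 0], [0, 0, -1], [0, 0, 0], [0, 0, 0]])

residual = sp.simplify(A * sp.diff(u, t) + B * sp.diff(u, r) + C * u)
assert residual == sp.zeros(4, 1)
```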
\subsection{Hidden constraints and Kronecker form}
The second step is to determine the independent propagating degrees of freedom
and form a set of equations specifying the unique evolution of a well-formed system.
If, after finding all algebraic and hidden constraints, the system remains underdetermined, then
the system is ill-formed.
Suppose $\mathbb{A}$ is invertible. Then we know that time evolution of the fields can
be specified by their values on spatial surfaces. The generalization of this concept
for singular $\mathbb{A}$
is that if $\mathbb{A} + \lambda \mathbb{B}$ is invertible for some choice of $\lambda$ then
there is some suitable temporal coordinate, e.g. $t' = t+\lambda r$, where evolution is again
defined. In this case, $\mathbb{A}, \mathbb{B}$ are said to form a regular pencil and
$\mathbb{A} + \lambda \mathbb{B}$ is composed entirely by regular blocks.
In such a case, we can proceed directly to the next step to find the characteristics or the preferred temporal
coordinates associated with the different fields in the regular block.
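Whether a given square pencil is regular can be decided mechanically: $\det(\mathbb{A} + \lambda \mathbb{B})$ must not vanish identically as a polynomial in $\lambda$. A short computer-algebra sketch (added here as an illustration), using two pencils that appear in the examples of this appendix:

```python
import sympy as sp

lam = sp.symbols('lam')

def is_regular(A, B):
    # a square pencil A + lam*B is regular iff det(A + lam*B)
    # is not identically zero as a polynomial in lam
    return sp.simplify((A + lam * B).det()) != 0

# heat equation, first reduction: the R_2(0) block of the examples
A_heat = sp.Matrix([[0, 0], [-1, 0]])
B_heat = sp.Matrix([[1, 0], [0, 1]])

# wave equation before the hidden constraint is added (singular)
A_wave = sp.Matrix([[1, 0, 0], [0, 0, 0], [0, 1, 0]])
B_wave = sp.Matrix([[0, 0, 0], [1, 0, 0], [0, 0, -1]])

assert is_regular(A_heat, B_heat)      # det = lam**2, not identically 0
assert not is_regular(A_wave, B_wave)  # det vanishes identically
```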
If $\mathbb{A} + \lambda \mathbb{B}$ is singular for any $\lambda$,
then the evolution naively looks ill-defined along any curve.
In general, a pencil can
be singular because the matrix system contains either over- or underdetermined blocks
or both. For underdetermined blocks, solutions naively are not unique. For
overdetermined blocks, solutions from arbitrary initial data can in principle be inconsistent. Overdetermined blocks thus hide consistency relations which, once exposed, can convert underdetermined
blocks into regular blocks that yield a unique and consistent solution.
We therefore look for consistency relations associated with the overdetermined blocks
to augment the EOMs. Before turning to the systematic approach, it is worthwhile
to discuss why constraints can be hidden in the overdetermined block. First recall the case
of a first order ODE. If some linear combination of
EOMs contains no time derivatives, then the remaining structure is either $0=0$ or there is an algebraic
relationship between the fields. In the former case, the system is trivially overconstrained
indicating that an equation is redundant and can be removed.
In the latter case, one can solve for one of the
fields and eliminate it from the system or keep all the fields but add the
derivative of the constraint equation to the EOMs. Note that the choice of which field to eliminate is somewhat
arbitrary and this will be related to a similar choice for the PDAE system.
In the case of a PDAE, the generalization is that there can also be equations that lack either temporal or spatial derivatives but not both. In this case, the ``algebraic" constraint
is really differential in the other dimension. It is then not straightforward to eliminate or
``integrate out" a field associated with the constraint. On the other hand, derivatives of the constraint can still
add an independent EOM that evolves the constraint consistently.
This is similar
to the presence of secondary constraints in a Hamiltonian analysis
or the differentiation index of a DAE.
The algorithmic way of proceeding is to utilize the Kronecker decomposition
of the singular pencil (see Appendix~\ref{sec:KroneckerAppendix}
for definitions and notation). In terms of our PDAE, this amounts to choosing a particular
linear combination of fields ${\bf v}$ or field redefinition and linear combination of EOMs that
exposes the regular, over and underdetermined blocks. More specifically given
appropriate invertible matrices $\mathbb{P}, \mathbb{Q}$
\begin{eqnarray}
{\bf v} &=& \mathbb{Q}^{-1} {\bf u} , \nonumber \\
\tilde \mathbb{A} &=& \mathbb{P}\, \mathbb{A}\, \mathbb{Q} , \nonumber\\
\tilde \mathbb{B} &=& \mathbb{P}\, \mathbb{B}\, \mathbb{Q} , \nonumber
\\
\tilde \mathbb{C} &=& \mathbb{P}\, \mathbb{C}\, \mathbb{Q} + \mathbb{P}\, \mathbb{A}\, \dot \mathbb{Q} + \mathbb{P}\, \mathbb{B}\, \mathbb{Q}' ,
\end{eqnarray}
we can rewrite Eq.~\eqref{BasicEquation} as
\begin{equation}
\tilde \mathbb{A}\, \dot {\bf v} + \tilde \mathbb{B}\, {\bf v} ' + \tilde \mathbb{C} \,{\bf v} = 0 ,
\label{eqn:vveceqn}
\end{equation}
with the matrix pencil $\tilde \mathbb{A} + \lambda \tilde \mathbb{B}$ in the Kronecker form. Notice that
$\mathbb{P}$ describes particular linear combinations of the equations of motion while $\mathbb{Q}$
performs linear field redefinitions. Because in general $\mathbb{Q}$ is a function of $\left\{ t, r \right\}$, the
linear combinations of the fields and equations corresponding to the individual Kronecker blocks
depend on the position in the spacetime.
In Kronecker form, the matrix pencil is composed of blocks.
Each $\mathbb{L}_\mu^P$ block is
overdetermined --- there are $\mu$ fields and $\mu+1$ equations.
Conversely,
each $\mathbb{L}_\mu$ block is underdetermined --- there are
$\mu + 1$ fields and only $\mu$ equations of
motion.
Each overdetermined block hides one constraint and if it is not already included in the EOMs,
we add it. In the special case that $\mu=0$, corresponding to a row of zeros in the Kronecker form,
the constraint is algebraic as it would be for an ODE. In that case, we either eliminate a field
or eliminate a redundant equation. If that resolves the only singular block, we repeat this step
and form the new regular pencil.
The more novel cases are those with $\mu \ge 1$ overdetermined blocks. The case of
$\mu=1$ is instructive and is the only relevant one for dRGT given its particular second
order structure.
Here the equations in the block take the form
\begin{eqnarray}
\dot v_i + c^j v_j &=& 0 ,\nonumber \\
v_i' + d^j v_j &=& 0 ,
\end{eqnarray}
for some coefficients $c^j, d^j$ where summation over repeated indices is implicit. We can subtract the time derivative
of the second equation from the spatial derivative of the first equation, again forming an
equation which is first order in the derivatives
\begin{equation}
c^j v_j' - d^j \dot v_j + ({c^j}{}' - \dot d^j)v_j = 0 .
\end{equation}
If this new EOM for the fields $v_j$ is not a linear combination of the existing ones, we add it to the list. For systems where only $\mu=1$ overdetermined blocks
exist, these constraints can be found by inspection rather than by formal Kronecker decomposition. They correspond to a linear combination of fields $v_i$ which obeys one equation with no time derivatives and another equation with
no spatial derivatives. In practice, to discover such combinations
it suffices to find all linear combinations of the
equations of motion which contain no spatial derivatives, $s^i
{\rm EOM}_i$, and no temporal derivatives, $t^i {\rm EOM}_i$, respectively. After we
discover all such combinations we can try to pair them in a way that
\begin{equation}
\frac{\partial}{\partial r}\( s^i {\rm EOM}_i\) - \frac{\partial}{\partial t}\( t^i {\rm EOM}_i\)
\end{equation}
contains no second order derivatives. This is then a valid first order EOM that can be added to the system.
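For systems with only $\mu=1$ overdetermined blocks this search can be automated: the combinations $s^i\,{\rm EOM}_i$ with no spatial derivatives are left null vectors of $\mathbb{B}$, and the combinations $t^i\,{\rm EOM}_i$ with no temporal derivatives are left null vectors of $\mathbb{A}$. The sketch below (an added illustration) recovers the hidden constraint of the first-order wave system treated in the examples of this appendix:

```python
import sympy as sp

t, r = sp.symbols('t r')
f, ft, fr = [sp.Function(n)(t, r) for n in ('f', 'f_t', 'f_r')]
u = sp.Matrix([f, ft, fr])

# EOM_1..EOM_3 of the first-order wave system: A u_dot + B u' + C u = 0
A = sp.Matrix([[1, 0, 0], [0, 0, 0], [0, 1, 0]])
B = sp.Matrix([[0, 0, 0], [1, 0, 0], [0, 0, -1]])
C = sp.Matrix([[0, -1, 0], [0, 0, -1], [0, 0, 0]])
EOM = A * sp.diff(u, t) + B * sp.diff(u, r) + C * u

s = B.T.nullspace()[0]  # no spatial derivatives -> EOM_1
w = A.T.nullspace()[0]  # no temporal derivatives -> EOM_2

# cross-differentiate and difference: second derivatives cancel,
# leaving the hidden first order constraint  f_r_dot - f_t' = 0
new_eom = sp.diff((s.T * EOM)[0], r) - sp.diff((w.T * EOM)[0], t)
assert sp.simplify(new_eom - (sp.diff(fr, t) - sp.diff(ft, r))) == 0
```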
A similar but more involved procedure applies to overdetermined blocks with $\mu>1$. Such structures
hide constraints between fields at higher order in derivatives and present opportunities to eliminate higher order
terms. In this case the combination of fields
$v_i$ which appears in the EOMs with no temporal derivatives is not the same as the
combination $v_j$ which appears in the EOMs with no spatial derivatives. Instead the two are connected
by a derivative chain through the $\mathbb{L}_\mu^P$ system of $\mu+1$ equations. In this case by taking
$\mu-k+1$ spatial derivatives and $k-1$ temporal derivatives of the $k$th equation, one can construct a combination
with no $(\mu+1)$th order derivatives. Like the $\mu=1$ system, this combination involves a constraint on the
system with $\mu$ derivatives. The complication is that to cast the system in first order form,
auxiliary fields with $\mu-1$ derivatives must be introduced into the system. Nonetheless, since this introduction
amounts to fields with no extra freedom associated with them, the constraint represents a new equation which
if not already in the EOM system is added to them. Since each $\mathbb{L}_\mu^P$ block is a $(\mu+1) \times \mu$ matrix
system, once this constraint is found it exhausts the extra information in the overdetermined block.
After including the new information from all overdetermined blocks, we place the augmented system in its
final Kronecker form. If there are no longer any underdetermined blocks, we proceed to the next step with
just the regular blocks. Overdetermined blocks remain but consistent evolution of their fields is now enforced in the
regular block.
If underdetermined blocks still remain, then the EOMs have no unique solution.
In physical systems this is often due to gauge freedom which has to be fixed. In these cases, gauge fixing provides
new constraints. If adding them to the EOMs and repeating this step
produces a Kronecker decomposition with only regular and overdetermined blocks then we can again proceed to the next step (see \S\ref{sec:GR} for an example).
\subsection{Characteristics}
The regular blocks in the final Kronecker decomposition determine the characteristic curves of the system. Consider the simplest regular block $\mathbb{R}_1(\Omega)$. It describes the dynamics of
a single degree of freedom $v_i$, described by the equation
\begin{equation}
v_i' -\Omega \dot v_i + c^j v_j = 0 .
\end{equation}
If $\Omega$ is real, this equation specifies the derivative of $v_i$ along the direction
\begin{equation}
\label{Characteristic}
\frac{dt}{dr} = -\Omega ,
\end{equation}
and so evolves the field along this direction. For this
reason the curve defined by Eq.~\eqref{Characteristic} is a characteristic curve.
When the characteristic curve is aligned with
the time coordinate, formally we would have to set $\Omega = \infty$. Instead, the
Kronecker decomposition of such a regular block $\mathbb{R}_1(\infty)$ is defined to be of the nilpotent form through Eq.~(\ref{eqn:Rnil}).
Regular blocks that are of dimension 1 and produce real characteristics are hyperbolic. Data on a
non-characteristic surface that intersects these curves defines a unique solution by
integrating their values along the characteristic curves. If this surface is spacelike
then the subsystem has a well-posed initial value or Cauchy problem. If all hyperbolic blocks
share a common spacelike non-characteristic surface then their joint Cauchy problem is
well-posed. If $\Omega$ is
complex, then the block is elliptic and requires solution by relaxation from values on all
boundaries.
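The classification can be read off from the generalized eigenvalues, i.e.\ the roots $\lambda = \Omega$ of $\det(\mathbb{A} + \lambda \mathbb{B}) = 0$. As an added illustration, the sketch below classifies the $2\times 2$ regular parts of the wave and Laplace systems from the examples of this appendix, distinguishing real distinct roots (hyperbolic) from a complex pair (elliptic):

```python
import sympy as sp

lam = sp.symbols('lam')

def characteristics(A, B):
    # roots lam = Omega of det(A + lam*B) = 0; each real root gives a
    # characteristic dt/dr = -Omega, complex roots signal elliptic blocks
    return sp.roots((A + lam * B).det(), lam)

# wave equation: regular part diag(lam - 1, lam + 1)  ->  hyperbolic
wave = characteristics(sp.Matrix([[-1, 0], [0, 1]]), sp.eye(2))
assert wave == {1: 1, -1: 1}

# Laplace equation: regular part [[1, lam], [lam, -1]]  ->  elliptic
lap = characteristics(sp.Matrix([[1, 0], [0, -1]]),
                      sp.Matrix([[0, 1], [1, 0]]))
assert lap == {sp.I: 1, -sp.I: 1}
```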
If the regular block is higher than dimension 1, then it is parabolic. The triangular form of $\mathbb{R}_i$ produces
a chain of equations
\begin{eqnarray}
\label{SymmetryEOM2}
v_1' - \Omega \dot v_1 + \cdots &=& 0 ,\nonumber\\
v_2' - \Omega \dot v_2 - \dot v_1 + \cdots &=& 0 ,\nonumber\\
&\vdots& \nonumber\\
v_{i}' - \Omega \dot v_{i} - \dot v_{i-1} + \cdots &=& 0 ,
\label{eqn:chain}
\end{eqnarray}
where the dots in the equations stand for nonderivative terms.
The first equation determines the $v_1$ characteristic through $\Omega$ just like the hyperbolic counterpart. The next variable
inherits the same characteristic but now supplies an evolution equation for $v_1$ off of the characteristic.
This pattern continues through the chain. Unlike the hyperbolic system
we can define data for the fields on a given characteristic and march
forwards across characteristics. On each characteristic, which is typically spacelike, information is communicated from one boundary to the other ``instantaneously" with respect to the marching direction. Thus conditions must typically be specified
at both boundaries. In this sense, a parabolic system is similar to an elliptic equation along the direction of the characteristic
while sharing the hyperbolic property of marching data but instead from one characteristic to another. The nilpotent case
where $\Omega=\infty$ takes the same form but with time and space switched.
The system as a
whole is hyperbolic if and only if all
regular blocks are hyperbolic.
Note that our analysis distinguishes between characteristics of two independent regular blocks $\{ \mathbb{R}_1(\Omega),\mathbb{R}_1(\Omega)\}$ that just happen to be the
same and degenerate characteristics of a single $\mathbb{R}_2(\Omega)$ regular block.
For example, the former could represent two decoupled wave equations with luminal characteristics which is clearly
a hyperbolic system as a whole.
In the literature, based on the association with a single second order system repeated characteristics
themselves are often used as the definition of a parabolic system (see
e.g.~\cite{Hoffman:2001aaa}) but this definition cannot fully
distinguish all the possibilities.
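The distinction can be made precise at the level of the pencil itself: both $\{\mathbb{R}_1(\Omega),\mathbb{R}_1(\Omega)\}$ and $\mathbb{R}_2(\Omega)$ have determinant $(\lambda-\Omega)^2$, but the rank of $\mathbb{A} + \Omega\,\mathbb{B}$, i.e.\ the geometric multiplicity of the repeated root, differs. A symbolic check (added illustration):

```python
import sympy as sp

lam, Om = sp.symbols('lam Omega')

# two decoupled R_1(Omega) blocks vs a single R_2(Omega) block
two_R1 = sp.diag(lam - Om, lam - Om)
R2 = sp.Matrix([[lam - Om, 0], [-1, lam - Om]])

# same determinant (lam - Omega)**2 ...
assert sp.expand(two_R1.det() - R2.det()) == 0
# ... but different rank at lam = Omega: full degeneracy for the
# decoupled hyperbolic pair, a Jordan-like chain for the parabolic R_2
assert two_R1.subs(lam, Om).rank() == 0
assert R2.subs(lam, Om).rank() == 1
```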
\subsection{Field assignment}
\label{ssec:KroneckerAmbiguity}
While the Kronecker decomposition of the matrix pencil is uniquely determined, the
matrices $\mathbb{P}, \mathbb{Q}$ themselves are not. Since ${\bf v}=\mathbb{Q}^{-1} {\bf u}$ determines a specific linear combination of the original variables
${\bf u}$ that can be associated with the various blocks, field assignment is not unique and so is not formally
a step in our technique. On the other hand these transformations are useful for finding field combinations where
$\tilde \mathbb{C}$ is as block diagonal as possible so that ${\bf v}$ is as decoupled as possible.
Formally there are further transformations $\tilde \mathbb{P}$, $\tilde \mathbb{Q}$ that obey the group multiplication
property
\begin{equation}
\tilde \mathbb{P} \mathbb{P} ( \mathbb{A} + \lambda \mathbb{B}) \mathbb{Q} \tilde \mathbb{Q} =
\tilde \mathbb{P} (\tilde \mathbb{A} + \lambda \tilde \mathbb{B}) \tilde \mathbb{Q} = \tilde \mathbb{A} + \lambda \tilde \mathbb{B},
\end{equation}
or symmetry that leaves the Kronecker form invariant.
There are two useful transformations $\tilde \mathbb{Q}$ that are worth noting.
First, $\tilde \mathbb{Q}$ can be chosen to add linear
combinations of fields in an overdetermined block to those in a regular block. For example, given a matrix pencil
in Kronecker form
\begin{equation}
\tilde \mathbb{A} + \lambda \tilde \mathbb{B} = \{ \mathbb{L}_1^P,\mathbb{R}_1(\Omega) \} = \begin{pmatrix}
1 & 0\\
\lambda & 0\\
0 & \lambda - \Omega
\end{pmatrix} ,
\end{equation}
we have $\tilde \mathbb{P}(\tilde \mathbb{A} + \lambda \tilde \mathbb{B})\tilde \mathbb{Q} = \tilde \mathbb{A} + \lambda \tilde \mathbb{B}$ for all
\begin{eqnarray}
\tilde \mathbb{P} &=&\begin{pmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
-C\Omega & C & 1
\end{pmatrix},\quad
\tilde \mathbb{Q} = \begin{pmatrix}
1 & 0\\
-C & 1
\end{pmatrix} ,
\end{eqnarray}
where $C(t,r)$ is an arbitrary real function. Given that the two columns correspond to $v_1,
v_2$ for $C = 0$, we now find that the regular block can correspond to any field combination
$\tilde v_2 = v_2 + C v_1$. In general variables in overdetermined blocks may always be added to regular blocks.
When there are two regular blocks with the same $\Omega$ or characteristic curve, we can
perform an additional transformation which keeps the Kronecker form invariant. Let us assume
the Kronecker decomposition reveals two regular blocks $\{ \mathbb{R}_i(\Omega), \mathbb{R}_j(\Omega)\}$.
Each block represents a derivative chain of the form (\ref{eqn:chain}). For clarity, let us assign
the $v$ variables associated with the first as $x_1,\ldots,x_i$ and to the second as $y_1,\ldots,y_j$.
Starting from $y_1$ and combining it with $x_{k+1}$, offset in its own chain by any $\max(0,i-j) \le k < i$, we can take
sequential linear combinations
\begin{eqnarray}
\tilde x_{k+1} &=& x_{k+1} + C y_{1},\nonumber\\
&\vdots& \nonumber\\
\tilde x_{i} &= & x_{i} + C y_{i-k},
\end{eqnarray}
where again $C(t,r)$ is an arbitrary real function. The evolution equations for $\tilde x_i$ are still of the
form \eqref{eqn:chain}. The Kronecker structure is thus unchanged by this operation despite the field
redefinition. The corresponding
$\tilde \mathbb{P}, \tilde \mathbb{Q}$ can easily be derived using these linear combinations.
Notice that we can have $i = j$ and $k=0$, where we take
linear combinations of the whole blocks, and also $x= y$, in which case we perform field
redefinitions within a single regular block by sequentially adding lower fields in the chain to higher fields. Conversely,
fields in regular blocks with different characteristics $\Omega_i\ne \Omega_j$ cannot in
general be mixed.
\subsection{Examples}
We now illustrate the procedure with several examples. We begin with the canonical examples from second order
linear PDEs: the wave, heat and Laplace equations. We then give an example of an underdetermined system: gravitational
waves in general relativity where the gauge is left unspecified. Finally we provide examples where hidden constraints
reduce the number of propagating degrees of freedom or the order of derivatives in a coupled set of EOMs by eliminating phase space degrees of freedom.
\subsubsection{Wave equation}
\label{sec:wave}
First, let us apply the above algorithm to the wave equation
\begin{equation} \ddot f- f'' = 0, \label{waveeq} \end{equation}
whose Lagrangian in its simplest form is
\begin{equation} \mathcal{L} = \dot f^2 - f'^2. \label{Lwaves} \end{equation}
Since complicated Lagrangians often hide simpler ones due to the presence of constraints,
let us illustrate our algorithm with an alternate form
\begin{equation}
\mathcal{L} = \dot f^2 - 4 f'h + 4 h^2 . \label{Lwaveu}
\end{equation}
Obviously, in this case we can directly read off the EOM for $h$
\begin{equation}
h = \frac{f'}{2} ,
\end{equation}
integrate $h$ out of the Lagrangian, and recover the standard form of \eqref{Lwaves}.
Our analysis below basically does the same, but through the algorithm above.
The equations of motion for the given Lagrangian \eqref{Lwaveu} read
\begin{eqnarray}
\ddot f - 2 h' &=& 0 , \nonumber\\
-f' + 2 h &=& 0 .
\end{eqnarray}
We can reduce this system to first order form by
introducing $f_t, f_r$ through
\begin{eqnarray}
\dot f - f_t &=& 0 , \nonumber \\
f' - f_r &=& 0 ,
\end{eqnarray}
which we append to the EOMs written in terms of these fields
\begin{eqnarray}
\dot f_t - 2h' &=& 0 , \nonumber\\
-f_r + 2 h &=& 0.
\end{eqnarray}
The matrix pencil for the fields ${\bf u} = (f,f_t,f_r,h)^T$ is
\begin{equation}
\mathbb{A}+\lambda\mathbb{B}=\left(
\begin{array}{c c c c}
1 & 0 & 0 &0 \\
\lambda & 0 &0 &0 \\
0 & 1 &0 &-2\lambda \\
0 & 0 & 0 &0 \\
\end{array}
\right).
\end{equation}
The presence of a row of zeros indicates an $\mathbb{L}_0^P$ overdetermined structure and hence a constraint.
In this case it is a simple algebraic constraint $h=f_r/2$, consistent with our
earlier discussion. Eliminating $h$ with the constraint
we are left with
\begin{eqnarray}
{\rm EOM}_1&:&\ \dot f - f_t = 0 , \nonumber\\
{\rm EOM}_2&:&\ f' - f_r = 0 , \nonumber \\
{\rm EOM}_3&:&\ \dot f_t - f_r' = 0 .
\end{eqnarray}
If we started with the usual form of the wave equation~\eqref{waveeq},
we would arrive directly at this set of equations after step 1.
Now for the field vector ${\bf u} = (f,f_t,f_r)^T$ the matrix pencil is
\begin{equation}
\mathbb{A}+\lambda\mathbb{B}=\left(
\begin{array}{c c c }
1 & 0 &0 \\
\lambda & 0 &0 \\
0 & 1 &- \lambda \\
\end{array}
\right).
\end{equation}
This is a singular pencil representing the fact that there is no explicit evolution equation for $f_r$.
However, in addition to the one underdetermined block represented by the third row, there is one overdetermined block specified by the first column.
A simple rearrangement of rows, vectors and sign conventions would place
this in Kronecker form with one $\mathbb{L}_1$ and one $\mathbb{L}_1^P$ block,
but this is not necessary to see that there is a hidden constraint.
There is only one linear combination of these EOMs which has no spatial derivatives, EOM$_1$ itself,
and one which has no temporal derivatives, EOM$_2$ itself.
Matching these two together is straightforward and is accomplished by differencing the complementary derivatives,
generating a consistency constraint
\begin{equation}
{\rm EOM}_4:\ \partial_r ({\rm EOM}_{1} ) - \partial_t ({\rm EOM}_2) = \dot f_r - f_t' = 0.
\end{equation}
Given that there is only one $\mathbb{L}_1^P$ block the addition of this independent equation completes the system
and supplies the missing evolution equation for $f_r$.
Adding the constraint to the EOMs, we now have the pencil
\begin{equation}
\mathbb{A}+\lambda\mathbb{B}=\left(
\begin{array}{c c c }
1 & 0 &0 \\
\lambda & 0 &0 \\
0 & 1 &-\lambda \\
0 &-\lambda & 1 \\
\end{array}
\right).
\end{equation}
This pencil has the original $\mathbb{L}_1^P$ block in the first column but instead of an underdetermined $\mathbb{L}_1$ block we now have a $2 \times 2$ block that contains only regular pieces. With
\begin{eqnarray}
\mathbb{P} &=& \begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & -\frac{1}{2} & -\frac{1}{2}\\
0 & 0 & -\frac{1}{2} & \frac{1}{2}
\end{pmatrix} , \quad
\mathbb{Q} = \begin{pmatrix}
1 & 0 & 0\\
0 & 1 & -1\\
0 & 1 & 1
\end{pmatrix} ,
\end{eqnarray}
we can put the pencil into its
Kronecker form
\begin{eqnarray}
\tilde \mathbb{A} + \lambda \tilde \mathbb{B} &=& \{ \mathbb{L}_1^P, \mathbb{R}_1(1),\mathbb{R}_1(-1) \}\nonumber\\
&=&
\begin{pmatrix}
1 & 0 & 0\\
\lambda & 0 & 0\\
0 & \lambda- 1& 0\\
0 & 0 & \lambda + 1
\end{pmatrix} .
\end{eqnarray}
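The decomposition can be verified by multiplying out $\mathbb{P}(\mathbb{A}+\lambda\mathbb{B})\mathbb{Q}$ with the matrices above; a symbolic cross-check (added illustration):

```python
import sympy as sp

lam = sp.symbols('lam')

# augmented wave pencil for u = (f, f_t, f_r)
M = sp.Matrix([[1, 0, 0],
               [lam, 0, 0],
               [0, 1, -lam],
               [0, -lam, 1]])

P = sp.Matrix([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, sp.Rational(-1, 2), sp.Rational(-1, 2)],
               [0, 0, sp.Rational(-1, 2), sp.Rational(1, 2)]])
Q = sp.Matrix([[1, 0, 0],
               [0, 1, -1],
               [0, 1, 1]])

# expected Kronecker form {L_1^P, R_1(1), R_1(-1)}
kron = sp.Matrix([[1, 0, 0],
                  [lam, 0, 0],
                  [0, lam - 1, 0],
                  [0, 0, lam + 1]])

assert sp.expand(P * M * Q) == kron
```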
In agreement with expectations, the regular blocks possess two luminal characteristics
\begin{equation}
\frac{dt}{dr} = \pm 1 ,
\end{equation}
with field content ${\bf v} = [f,(f_r + f_t)/2,(f_r-f_t)/2]^T$. Each block is hyperbolic
in the sense that we can propagate one boundary condition along the characteristic curve.
In the overdetermined block we have equations for $\dot f$ and $f'$ that can be integrated
self-consistently on any curve given $f_r\pm f_t$. In fact, the appearance of the fields
in the regular block is not unique. Since we have implicitly integrated out $f$, we could
add an arbitrary mixture of it back into the dynamical fields, $f_r \pm f_t + c_\pm f$,
which mathematically does not change the Kronecker structure or the
characteristics. This illustrates the fact that $\mathbb{P}$, $\mathbb{Q}$, and ${\bf v}$ are not
unique even though the counting of the degrees of freedom and the identification of their
characteristics are.
\subsubsection{Heat equation}
As the next example, consider the heat equation
\begin{equation}
\dot f - f'' = 0 ,
\end{equation}
where $f$ is usually associated with temperature.
Step 1 is the reduction to a first order system. We can either choose to just introduce $f_r=f'$ or
also introduce $f_t=\dot f$ to obtain a symmetric set.
Let us start with the first case.
Here the field vector is ${\bf u} = ( f, f_r)^T$, the EOMs are
\begin{eqnarray}
{\rm EOM}_1 &:&\ f' - f_r = 0 ,\nonumber \\
{\rm EOM}_2 &:&\ f_r' - \dot f = 0 ,
\end{eqnarray}
and the corresponding matrix pencil is already in Kronecker form
\begin{equation}
\mathbb{A}+ \lambda\mathbb{B} = \mathbb{R}_2(0)= \left(
\begin{array}{c c }
\lambda & 0 \\
-1 & \lambda \\
\end{array}
\right),
\end{equation}
with a single characteristic curve
\begin{equation}
\frac{dt}{dr} = 0 .
\end{equation}
The repeated characteristics in the block also represent the well-known fact that the heat
equation is parabolic. The characteristics are constant-time slices and instead of defining initial conditions
on a non-characteristic surface, one specifies them on an initial time slice. The second equation, which is the original
EOM, then propagates
this information forward in time. To fully define the system, we also require two spatial boundary conditions since
information propagates instantaneously across the time slice.
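This prescription, initial data on one characteristic (constant-time) slice marched forward with two boundary conditions per slice, is exactly how the heat equation is treated numerically. A minimal explicit finite-difference sketch (added illustration; the grid parameters are arbitrary choices):

```python
import math

# f_dot = f'' on x in [0, 1]: data on the t = 0 characteristic slice,
# marched forward with two boundary conditions on every slice
n = 50
dx = 1.0 / n
dt = 0.4 * dx * dx                   # explicit scheme needs dt <= dx^2 / 2
f = [math.sin(math.pi * i * dx) for i in range(n + 1)]

t = 0.0
while t < 0.1:
    interior = [f[i] + dt * (f[i + 1] - 2.0 * f[i] + f[i - 1]) / dx ** 2
                for i in range(1, n)]
    f = [0.0] + interior + [0.0]     # the two boundary conditions
    t += dt

exact = math.exp(-math.pi ** 2 * t)  # sin(pi x) mode decays as exp(-pi^2 t)
assert abs(f[n // 2] - exact) < 1e-2
```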
Now consider the second case where we introduced $f_t$ as a second auxiliary field. In that
case the EOMs are
\begin{eqnarray}
{\rm EOM}_1&:&\ \dot f - f_t = 0 , \nonumber\\
{\rm EOM}_2&:&\ f' - f_r = 0 , \nonumber\\
{\rm EOM}_3&:&\ f_t - f_r' = 0 .
\end{eqnarray}
With the field vector ${\bf u} = (f, f_t, f_r)^T$,
the matrix pencil is
\begin{equation}
\mathbb{A}+\lambda\mathbb{B}=\left(
\begin{array}{c c c }
1 & 0 & 0\\
\lambda & 0 & 0 \\
0 & 0 & -\lambda \\
\end{array}
\right).
\end{equation}
This corresponds to an overdetermined block in the first column $\mathbb{L}_1^P$, an underdetermined $\mathbb{L}_0$ block in the
second column and a regular block in the third.
Again the overdetermined block hides the same consistency constraint as the wave equation
\begin{equation}
{\rm EOM_4}:\ f_t' - \dot f_r = 0
\end{equation}
and completes the underdetermined block to a larger regular block
\begin{equation}
\mathbb{A}+\lambda\mathbb{B}=\left(
\begin{array}{c c c }
1 & 0 & 0\\
\lambda & 0 & 0 \\
0 & 0 & -\lambda \\
0 & \lambda & -1
\end{array}
\right).
\end{equation}
The Kronecker form for this pencil is achieved by choosing
\begin{eqnarray}
\mathbb{P} &=& \begin{pmatrix}
1& 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & -1 & 0\\
0 & 0 & 0 & 1
\end{pmatrix} , \quad
\mathbb{Q} = \begin{pmatrix}
1 & 0 & 0\\
0 & 0 & 1\\
0 & 1 & 0
\end{pmatrix} ,
\end{eqnarray}
and reads
\begin{eqnarray}
\tilde \mathbb{A} + \lambda \tilde \mathbb{B} = \{\mathbb{L}_1^P,\mathbb{R}_2(0)\}= \begin{pmatrix}
1 & 0 & 0\\
\lambda & 0 & 0\\
0 & \lambda & 0 \\
0 & -1 & \lambda
\end{pmatrix} .
\end{eqnarray}
In this method we recover the same regular block and characteristic curves
as before but now associated with $(f_r,f_t)$.
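The transformation to Kronecker form can be verified by direct matrix multiplication; a minimal sympy sketch:

```python
import sympy as sp

lam = sp.symbols('lambda')
# Augmented pencil for u = (f, f_t, f_r) with the hidden constraint EOM_4 added
M = sp.Matrix([[1, 0, 0], [lam, 0, 0], [0, 0, -lam], [0, lam, -1]])
P = sp.diag(1, 1, -1, 1)
Q = sp.Matrix([[1, 0, 0], [0, 0, 1], [0, 1, 0]])
kron = P * M * Q
print(kron)  # the {L_1^P, R_2(0)} form quoted in the text
```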
\subsubsection{Laplace equation}
The Laplace equation
\begin{equation}
\ddot f + f'' = 0 ,
\end{equation}
is the canonical example of a system with no real characteristics in the regular block.
For consistency of our notation we keep labeling the coordinates $(t, r)$, though as we shall see
such a case does not have an initial value formulation and hence is physically associated with
problems in two spatial dimensions.
Since the only difference with the wave equation is a change in sign of $f''$, we skip directly to step 2
with the hidden constraint added
\begin{eqnarray}
{\rm EOM}_1&:&\ \dot f - f_t = 0 ,\nonumber \\
{\rm EOM}_2&:&\ f' - f_r = 0 ,\nonumber \\
{\rm EOM}_3&:&\ \dot f_t + f_r' = 0 ,\nonumber \\
{\rm EOM}_4&:&\ f_t' - \dot f_r = 0 ,
\end{eqnarray}
which has the pencil for ${\bf u} = (f,f_t,f_r)^T$
\begin{eqnarray}
\mathbb{A} + \lambda \mathbb{B} &=& \begin{pmatrix}
1 & 0 & 0\\
\lambda & 0 & 0\\
0 & 1 & \lambda\\
0 & \lambda & -1
\end{pmatrix} .
\end{eqnarray}
In this case, we explicitly show how to construct $\mathbb{P}$ and $\mathbb{Q}$ to highlight
how a set of real but coupled first order PDEs can lack real characteristics.
Since the first two rows are already in the correct form, we can focus our
attention on the $2 \times 2$ lower right subblock
\begin{equation}
\mathbb{A}_2+\lambda\mathbb{B}_2= \begin{pmatrix}
1 & 0\\
0 & -1
\end{pmatrix} +
\lambda \begin{pmatrix}
0 & 1 \\
1 & 0
\end{pmatrix} .
\end{equation}
The associated Weierstrass form has the identity $\mathbb{I}_2$ for the $\tilde \mathbb{B}_2$ matrix so we switch
the rows by multiplying on the left with
\begin{equation}
\mathbb{S}_2 = \begin{pmatrix}
0 & 1\\
1 & 0
\end{pmatrix} .
\end{equation}
Since
\begin{equation}
\mathbb{S}_2 \mathbb{A}_2 =
\begin{pmatrix}
0 & -1\\
1 & 0
\end{pmatrix}
\label{eqn:RA}
\end{equation}
is diagonalizable, it can be placed into Jordan form with its eigenvectors. We can immediately see that the
eigenvalues of Eq.~(\ref{eqn:RA}) are imaginary and the eigenvector matrix $\mathbb{Q}_2$ is complex. Explicitly the
$2\times 2$
block comes into canonical form with
\begin{eqnarray}
\tilde \mathbb{A}_2&=&\mathbb{Q}_2^{-1} \mathbb{S}_2 \mathbb{A}_2 \mathbb{Q}_2 = \mathbb{P}_2 \mathbb{A}_2 \mathbb{Q}_2 , \nonumber\\
\tilde \mathbb{B}_2 &=&\mathbb{Q}_2^{-1} \mathbb{S}_2 \mathbb{B}_2 \mathbb{Q}_2 = \mathbb{Q}_2^{-1} \mathbb{I}_2 \mathbb{Q}_2 = \mathbb{I}_2,
\end{eqnarray}
so that
putting the blocks together, we have
\begin{equation}
\mathbb{P} = \begin{pmatrix}
1& 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & i\\
0 & 0 & 1 & -i
\end{pmatrix} , \quad
\mathbb{Q} = \left( \begin{array}{rrr}
1 & 0 & 0 \vphantom{\Big[}\\
0 & -\tfrac{i}{2} & \tfrac{i}{2} \vphantom{\Big[} \\
0 & \tfrac{1}{2} & \tfrac{1}{2} \vphantom{\Big[}
\end{array} \right),
\end{equation}
and
\begin{eqnarray}
\tilde\mathbb{A}+\lambda\tilde\mathbb{B}&=& \{ \mathbb{L}_1^P, \mathbb{R}_1(i), \mathbb{R}_1(-i) \} \nonumber\\ &=& \left(
\begin{array}{c c c }
1 & 0 & 0\\
\lambda & 0 & 0 \\
0 & \lambda-i &0 \\
0 & 0 & \lambda+i
\end{array}
\right).
\end{eqnarray}
Notice the absence of real characteristic curves, which is a well-known feature of the Laplace equation. There are no preferred paths of information propagation, and so the
solution at each spacetime point influences the solution at all other points. Thus the Laplace equation cannot
be solved by integrating initial data along characteristics, as information from all boundaries determines the solution.
An attempt to solve the system as a Cauchy problem is ill-posed since the normal modes grow exponentially;
typical initial data will blow up at future time infinity unless boundary conditions are enforced there.
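The imaginary generalized eigenvalues can also be confirmed numerically; a small numpy check on the $2\times 2$ subblock:

```python
import numpy as np

# S2 * A2 from the lower-right subblock of the Laplace pencil
S2A2 = np.array([[0.0, -1.0],
                 [1.0,  0.0]])
evals = np.linalg.eigvals(S2A2)
# Eigenvalues are +-i: no real characteristics, hence no Cauchy evolution
print(np.sort(evals.imag))  # [-1.  1.]
```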
\subsubsection{GR and gauge freedom}
\label{sec:GR}
Next we investigate gravitational waves in general relativity around a Minkowski
background in spherical coordinates
\begin{equation}
\label{eqn:Mink}
\dd s^2 = -\dd t^2 + \left(\dd r^2 + r^2 \dd\Omega_2^2\right) .
\end{equation}
The expansion of the Einstein-Hilbert action
is given by \eqref{EinsteinHilbert}.
Therefore our RWZ analysis for dRGT also applies to this case if we set
$A=\Lambda=0$ and $a=b=1$. In particular Eq.~\eqref{OddLagr} for the odd modes reduces to
\begin{eqnarray}
\label{MinkowskiLagr}
\mathcal{L}_{B}^{\rm GR} &=&
\frac{1}{4} ( \dot h_1 - h_0' )^2
+\frac{1}{r} h_0\dot h_1+\frac{ \ell (\ell+1)}{4r^2} h_0^2
\\
&&
-\frac{ (\ell-1)(\ell+2)}{4r^2} h_1^2+\frac{1}{16r^2} \mk{ \dot h_2^2 - h_2'^2 + \f{2}{r^2} h_2^2 }
\nonumber\\
&&+\frac{ \sqrt{(\ell-1) (\ell+2)} }{4r^2} \mk{ h_0 \dot h_2 - h_1 h_2' + \f{2}{r} h_1 h_2 }.
\nonumber
\end{eqnarray}
The three equations of motion can be written as first order differential equations if we
introduce six additional fields $h_{it}, h_{ir}, i \in \{0, 1, 2\}$, and their defining
equations
\begin{eqnarray}
\dot h_i - h_{it} &=& 0 , \nonumber\\
h_i ' - h_{ir} &=& 0 .
\end{eqnarray}
Each of these is an $\mathbb{L}^P_1$ overdetermined block and they together contain three hidden
consistency equations
\begin{equation}
\dot h_{ir} - h_{it}' = 0 .
\end{equation}
There is an additional $\mathbb{L}^P_1$ block in the Kronecker
decomposition; however, its hidden constraint is a tautology. We have thus incorporated all of the
additional constraints.
The Kronecker decomposition then reads
\begin{equation}
\tilde \mathbb{A} + \lambda \tilde \mathbb{B} =\{\mathbb{L}_2, 4\times \mathbb{L}_1^P, \mathbb{R}_1(1), \mathbb{R}_1(-1)\} .
\end{equation}
The presence of the underdetermined $\mathbb{L}_2$ block shows the equations of motion are not
sufficient to determine uniquely the evolution of all fields.
This is
not surprising as the general relativity Lagrangian \eqref{MinkowskiLagr} is
diffeomorphism invariant and contains a gauge symmetry
\begin{eqnarray}
h_0(t,r) &\rightarrow& h_0(t,r) + \dot \Lambda(t,r) , \nonumber\\
h_1(t,r) &\rightarrow& h_1(t,r) + \Lambda'(t,r) - \frac2{r}
\Lambda(t,r) , \nonumber\\
h_2(t,r) &\rightarrow& h_2(t,r) + 2\Lambda(t,r) ,
\end{eqnarray}
where $\Lambda(t,r)$ is an arbitrary function. Because in our analysis we
did not fix this gauge freedom, the redundant modes appear in the final Kronecker
decomposition.
The next step is to remove this gauge freedom. As an example, we choose $h_2 = 0$.
This gauge constraint when added to the system
as an EOM is formally an $\mathbb{L}_0^P$ overdetermined block. Following our algorithm
and eliminating $h_2$ from the system reveals two additional algebraic constraints
\begin{equation}
h_{2t} = h_{2r} = 0,
\end{equation}
coming from one $\mathbb{L}_1^P$ block which turned into two $\mathbb{L}_0^P$ blocks upon
integrating $h_2$ out. Eliminating these constraints is the same as erasing
$h_2$ from the original higher order EOM system at the outset. Notice that
gauge fixing after obtaining the EOMs still retains an equation of motion associated with
$h_2$; we comment on this subtlety below.
After eliminating these three variables,
the Kronecker system is
\begin{equation}
\tilde \mathbb{A} + \lambda \tilde \mathbb{B} =\{\mathbb{L}_2, 2\times\mathbb{L}_0^P,3\times \mathbb{L}_1^P\} .
\label{eqn:GRintermediate}
\end{equation}
The $\mathbb{L}_0^P$ blocks represent algebraic constraints that can be used to complete the
underdetermined block to a regular block. In the
previous Kronecker decomposition these equations correspond to the $\mathbb{R}_1(\pm 1)$
blocks which now both turn into
the same algebraic constraint $\mathbb{L}_0^P$
\begin{equation}
h_{0t} - h_{1r} = 0 ,
\label{GaugeFixedEOM}
\end{equation}
which means one of them is redundant.
Eliminating $h_{0t}$ and the redundant constraint turns the previously underdetermined $\mathbb{L}_2$ block into two regular $1\times 1$ blocks. At this point, no underdetermined
blocks remain, all the information in the overdetermined blocks is extracted and the analysis
is finished; the final decomposition reads
\begin{equation}
\tilde \mathbb{A} + \lambda \tilde \mathbb{B} = \{3\times \mathbb{L}_1^P,
\mathbb{R}_1(1), \mathbb{R}_1(-1)\} .
\end{equation}
The two regular blocks have luminal characteristics, as expected.
Finally, this example also illustrates a subtlety about gauge fixing. Gauge fixing can always be safely performed at the equations of motion level.
Notice though that if the $h_2=0$ gauge were fixed directly at the Lagrangian level, we would never
vary with respect to it and would lose an equation of motion.
Gauge fixing directly in the Lagrangian should only be performed if the equation of motion that is
lost is redundant. In cases where it is not,
the system of remaining
EOMs is incomplete and does not fully describe the physical system (see \S III\,C of
\cite{Motloch:2014nwa} for examples).
In the case considered here, we arrive at the correct answer even when
we fix the gauge through setting $h_2 \rightarrow 0$ in the Lagrangian. This is because
the information contained in the gauge-fixed $h_2$ EOM is
exactly Eq.~\eqref{GaugeFixedEOM}. The $h_2$ EOM is in fact responsible for the redundancy of the identical $\mathbb{L}_0^P$ blocks in
Eq.~(\ref{eqn:GRintermediate}). More generally, one can set a field to zero by using
gauge freedom if its gauge transformation does not
involve derivatives of the gauge function.\footnote{We thank Teruaki Suyama for discussion
on this point.} For this reason, the gauge fixing to unitary gauge in the dRGT quadratic Lagrangian using
Eq.~(\ref{eqn:unitary}) is a valid procedure which does not lose information in the {St\"{u}ckelberg} EOMs.
\subsubsection{Propagating vs derivatively-constrained fields}
\label{sec:proderi}
A general quadratic Lagrangian for two fields with maximally two derivatives
typically propagates two degrees of freedom. However, as we show with the following example,
this counting can be mistaken due to hidden derivative constraints. The Lagrangian for dRGT described in
the main text is a more advanced case of the same phenomenon.
Let us investigate the Lagrangian
\begin{equation}
\label{SpecialLagr}
\mathcal{L} = \(q_0' + q_1' + \dot q_0\)^2 - 2 q_0^2 + 8 q_1^2 .
\end{equation}
The two equations of motion contain second derivatives, therefore we introduce four
additional fields $q_{0r}, q_{0t}, q_{1r}, q_{1t}$, where as before the subscript denotes
which derivative is taken, as well as the definitional EOMs
\begin{eqnarray}
\label{ExampleE1}
{\rm EOM}_1&:& \dot q_{0} -q_{0t}= 0,\nonumber\\
{\rm EOM}_2&:& q_{0}' -q_{0r}= 0,\nonumber\\
{\rm EOM}_3&:& \dot q_{1} -q_{1t}= 0,\nonumber\\
{\rm EOM}_4&:& q_{1}' -q_{1r}= 0,
\end{eqnarray}
associated with them.
As usual these definitions provide $2\times \mathbb{L}_1^P$ blocks that hide the consistency
constraints
\begin{eqnarray}
\label{ExampleE2}
{\rm EOM}_5&:& q_{0t}'-\dot q_{0r} = 0,\nonumber\\
{\rm EOM}_6&:& q_{1t}'-\dot q_{1r} = 0.
\end{eqnarray}
Finally the original equations of motion in the first order variables can be written as
\begin{eqnarray}
\label{ExampleE3}
{\rm EOM}_7&:& 2q_0 + q_{0r}' + q_{1r}'+2 \dot q_{0r} + \dot q_{0t} + \dot q_{1r} = 0,\nonumber\\
{\rm EOM}_8&:& 8q_1 - q_{0r}'-q_{1r}'-\dot q_{0r} = 0 .
\end{eqnarray}
This system of equations has the structure
\begin{equation}
\tilde \mathbb{A} + \lambda \tilde \mathbb{B} =
\{ \mathbb{L}_2, 3\times \mathbb{L}_1^P \}
\end{equation}
and so hides an additional $\mathbb{L}_1^P$ constraint associated with
\begin{equation}
Q=q_{0r}+q_{1r}+q_{0t},
\label{eqn:EOMQ}
\end{equation}
exactly the combination that appears squared in Eq.~\eqref{SpecialLagr}.
Equating its mixed derivatives gives
\begin{eqnarray}
\label{ExampleE4}
&&\partial_t\({\rm EOM}_8 - {\rm EOM}_5\) + \partial_r\({\rm EOM}_7 + {\rm EOM}_8\) =
\nonumber\\
&&\phantom{\partial_t({\rm EOM}_2} 2\(q_{0r} + 4 q_{1r} + 4 q_{1t}\) =0.
\end{eqnarray}
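The combination of derivatives in Eq.~(\ref{ExampleE4}) can be reproduced symbolically. A sketch using sympy (our own check, treating each first-order field as an independent function of $t$ and $r$):

```python
import sympy as sp

t, r = sp.symbols('t r')
q0, q1, q0t, q0r, q1t, q1r = [sp.Function(n)(t, r)
                              for n in ('q0', 'q1', 'q0t', 'q0r', 'q1t', 'q1r')]
EOM5 = sp.diff(q0t, r) - sp.diff(q0r, t)
EOM7 = (2*q0 + sp.diff(q0r, r) + sp.diff(q1r, r) + 2*sp.diff(q0r, t)
        + sp.diff(q0t, t) + sp.diff(q1r, t))
EOM8 = 8*q1 - sp.diff(q0r, r) - sp.diff(q1r, r) - sp.diff(q0r, t)
# All mixed second derivatives cancel in this combination
combo = sp.expand(sp.diff(EOM8 - EOM5, t) + sp.diff(EOM7 + EOM8, r))
# Impose the definitional EOMs: q0' = q0r, q1' = q1r, q1dot = q1t
combo = combo.subs({sp.diff(q0, r): q0r, sp.diff(q1, r): q1r,
                    sp.diff(q1, t): q1t})
print(sp.factor(combo))  # = 2*(q0r + 4*q1r + 4*q1t)
```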
This algebraic constraint allows us to integrate out $q_{1t}$ and convert the $\mathbb{L}_2$ block
into regular blocks. The final Kronecker decomposition of the 8 equations and 5 variables reads
\begin{equation}
\label{SpecialKronecker}
\tilde \mathbb{A} + \lambda \tilde \mathbb{B} = \{3\times \mathbb{L}_1^P,
\mathbb{R}_1(-2), \mathbb{R}_1\big({-}\tfrac{2}{3}\big)\} .
\end{equation}
Notice that the system contains two regular blocks indicating just a single
propagating degree of freedom. Two initial conditions on a joint noncharacteristic
surface supply sufficient information to determine uniquely the evolution of the
system, despite the form of the Lagrangian \eqref{SpecialLagr} which contains two fields
with second derivatives. The specific construction above leads to the field combinations
in the various blocks
\begin{equation}
{\bf v} = \{q_0,q_1,Q,\tfrac{3}{4} q_{0r}-\tfrac{3}{2} q_{1r} , -\tfrac{3}{4} q_{0r} -\tfrac{3}{2} q_{1 r} \}^T.
\end{equation}
For the comparison that follows, it is useful to recall that the full EOMs can be constructed
as $\tilde\mathbb{A} \dot {\bf v} + \tilde \mathbb{B} {\bf v}' + \tilde \mathbb{C} {\bf v}=0$, where here
\begin{equation}
\tilde \mathbb{C} =
\begin{pmatrix}
0 & 0 & -1 &\frac{1}{3} &-1 \\
0 & 0 & 0 & -\frac{2}{3} & \frac{2}{3} \\
0 & 0 & 0 & -\frac{1}{6} & -\frac{1}{2} \\
0 & 0 & 0 &\frac{1}{3} & \frac{1}{3} \\
2& 8& 0 & 0 & 0 \\
0 & -8 & 0 & 0 & 0 \\
0&-12 & 0 & 0 & 0 \\
0 &4 & 0 & 0 & 0
\end{pmatrix} .
\label{eqn:Cexample}
\end{equation}
This example also illustrates the alternative analysis in Appendix~\ref{ssec:oddal} in a simpler setting. The Lagrangian \eqref{SpecialLagr} is equivalent to
\begin{equation} \label{SpecialLagrQ} \mathcal{L} = -Q^2 + 2Q \(q_0' + q_1' + \dot q_0\) - 2 q_0^2 + 8 q_1^2, \end{equation}
as we can recover \eqref{SpecialLagr} by plugging in the EOM for $Q$, i.e.~Eq.~(\ref{eqn:EOMQ}). After integration by parts, the $q_0$ and $q_1$ EOMs become constraints
\begin{eqnarray}
q_0 &=& -\frac{1}{2} (Q' + \dot Q), \nonumber\\
q_1 &=& \frac{1}{8} Q',
\label{eqn:integrateout}
\end{eqnarray}
which are themselves linear combinations of the overdetermined EOMs for $Q$
associated with the $\mathbb{L}_1^P$ block given by the 5th and 6th rows of Eq.~(\ref{eqn:Cexample}).
We can then rewrite \eqref{SpecialLagrQ} as
\begin{equation} \mathcal{L} = \f{1}{2}\dot Q^2 + \dot Q Q' + \f{3}{8} Q'^2 - Q^2 ,\end{equation}
which gives an EOM
\begin{equation}
\label{ExampleE5}
\ddot Q + 2\dot Q' + \f{3}{4} Q'' + 2Q = 0 ,\end{equation}
that is of course compatible with the original form (\ref{eqn:EOMQ}) once Eq.~(\ref{eqn:integrateout})
is backsubstituted. The two methods are thus equivalent despite the fact that $Q$ is considered
an overdetermined variable in one and a propagating variable in the other.
Combining the EOM (\ref{ExampleE5}) with the two consistency conditions $d\dot Q = \ddot Q \, dt + \dot Q' \, dr$ and $dQ'=\dot Q' \, dt + Q'' \, dr$, we obtain
\begin{equation} \label{EOMmat}
\begin{pmatrix}
1 & 2 & \f{3}{4} \\
dt & dr & 0 \\
0 & dt & dr
\end{pmatrix}
\begin{pmatrix}
\ddot Q \\ \dot Q' \\ Q''
\end{pmatrix}
=
\begin{pmatrix}
-2Q \\ d\dot Q \\ dQ'
\end{pmatrix} .
\end{equation}
The $dt/dr$ values for which the matrix on the left hand side cannot be inverted define the characteristic curves:
\begin{equation} \f{dt}{dr} = 2,~ \f{2}{3} , \end{equation}
which are consistent with those from the Kronecker form \eqref{SpecialKronecker}.
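The determinant condition is simple to reproduce; a sympy sketch solving for the slopes:

```python
import sympy as sp

dt, dr = sp.symbols('dt dr', positive=True)
M = sp.Matrix([[1, 2, sp.Rational(3, 4)],
               [dt, dr, 0],
               [0, dt, dr]])
# Characteristic slopes are the dt/dr for which M is singular
slopes = sorted(sp.simplify(s / dr) for s in sp.solve(M.det(), dt))
print(slopes)  # [2/3, 2]
```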
\subsubsection{Constrained higher order systems}
Our Kronecker analysis also assists in identifying cases where the EOMs appear to be of higher order
and require extra initial data or degrees of freedom but due to hidden constraints are really of a lower order \cite{Chen:2012au,Zumalacarregui:2013pma}.
For example, Ref.~\cite{Langlois:2015cwa} illustrates this phenomenon with the
coupled higher order DAE system for the fields $\{ \phi, q \}$
\begin{eqnarray}
a \ddddot \phi - k_0 \ddot \phi + b \dddot q - c\ddot q - v \phi &=&0, \nonumber\\
k_1 \ddot q + b \dddot \phi + c \ddot \phi + w q &=& 0,
\label{eqn:higherorder}
\end{eqnarray}
where $\{ a, b, c, k_0, k_1, v, w \}$ are constants.
Since these equations are ODEs in time, our ``matrix pencil'' is $\mathbb{A} +\lambda \mathbb{B}=\mathbb{A}$. Kronecker blocks in this case
can have no $\lambda$ and thus only
contain $\mathbb{L}_0$, $\mathbb{L}_0^P$ and $\mathbb{R}_1(\infty)$. Note that the Kronecker decomposition of $\mathbb{A} + \lambda \mathbb{C}$ is
sometimes also used to decouple, solve or study the stability of fields in blocks of the DAE, but we will not address that use here.
We perform our first order reduction
\begin{eqnarray}
&& u_1 \equiv q , \nonumber\\
{\rm EOM}_1&:& u_2 = \dot u_1 (= \dot q) ,\nonumber\\
{\rm EOM}_2&:& u_3 = \dot u_2 (= \ddot q) , \nonumber\\
&& u_4 \equiv \phi ,\nonumber\\
{\rm EOM}_3&:&u_5 = \dot u_4 (= \dot \phi) , \nonumber\\
{\rm EOM}_4&:&u_6 = \dot u_5 (=\ddot \phi) , \nonumber\\
{\rm EOM}_5&:&u_7 = \dot u_6 (= \dddot \phi) ,
\label{eqn:uidefs}
\end{eqnarray}
so that the original EOMs (\ref{eqn:higherorder}) are
\begin{eqnarray}
{\rm EOM}_6&:& a \dot u_7 - k_0 u_6 + b \dot u_3 - c u_3 - v u_4 = 0, \nonumber\\
{\rm EOM}_7&:&k_1 u_3 + b u_7 + c u_6 + w u_1 = 0 .
\end{eqnarray}
We therefore start with a $7 \times 7$ system with 5 defining equations and 2 original EOMs.
The original Kronecker structure is
\begin{eqnarray}
\{ \mathbb{L}_0, \mathbb{L}_0^P, 6 \times \mathbb{R}_1(\infty) \}.
\end{eqnarray}
We can see immediately that the $\mathbb{L}_0^P$ block is associated with EOM$_7$
which contains no derivatives.
We use this to eliminate the highest order derivative
term $u_7$ assuming $b\ne 0$
\begin{eqnarray}
u_7 &=& -\frac{k_1}{b} u_3 - \frac{c}{b}u_6 - \frac{w}{b} u_1,
\end{eqnarray}
which turns EOM$_6$ into
\begin{eqnarray}
(b^2 - a k_1) \mk{\dot u_3 -\frac{c}{b} u_3} &=& \mk{b k_0 - \frac{a}{b} c^2 } u_6 + b v u_4
\nonumber\\
&& - \frac{a}{b} c w u_1 + a w u_2 ,
\end{eqnarray}
after using the definitional EOMs (\ref{eqn:uidefs}).
For generic parameters, the resulting $6\times 6$ system is regular since this supplies the evolution equation for
$u_3$. As a whole, the evolution of the
6 fields (3 phase space DOFs) is uniquely specified by initial values.
The special case is when $b^2 - a k_1=0$. The system is singular since
there is no evolution equation for $u_3$. This represents a column of zeros in the
$\mathbb{A}$ matrix or equivalently an $\mathbb{L}_0$ underdetermined structure, but at the same
time we gain a constraint from EOM$_{6}$. So the $6\times 6$ system is
\begin{eqnarray}
\{\mathbb{L}_0, \mathbb{L}_0^P, 5 \times \mathbb{R}_1(\infty) \}.
\end{eqnarray}
We can resolve the constraint by solving for the next highest derivative if $k_0 k_1-c^2\ne 0$
\begin{eqnarray}
u_6 = \frac{ c w u_1 - b w u_2 -k_1 v u_4}{k_0 k_1-c^2},
\label{eqn:u6}
\end{eqnarray}
which brings EOM$_4$ to
\begin{eqnarray}
{\rm EOM}_4&:& \dot u_5 = \frac{ c w u_1 - b w u_2 -k_1 v u_4}{k_0 k_1-c^2}.
\end{eqnarray}
The constraint (\ref{eqn:u6}) also gives $\dot u_6$ which converts EOM$_5$ to another constraint
\begin{eqnarray}
u_3= k_1 \frac{c v u_4 + b v u_5 - k_0 w u_1}{k_0 k_1^2 - c^2 k_1 - b^2 w}.
\end{eqnarray}
Eliminating $u_3$ by assuming the denominator does not vanish brings EOM$_2$ to
\begin{eqnarray}
{\rm EOM}_2&:& \dot u_2 = k_1 \frac{c v u_4 + b v u_5 - k_0 w u_1}{k_0 k_1^2 - c^2 k_1 - b^2 w} .
\end{eqnarray}
The system is now a
\begin{equation}
\{ 4 \times \mathbb{R}_1(\infty) \}
\end{equation}
regular system of EOM$_{1-4}$ containing 4 hyperbolic blocks. All of the highest derivative fields
have been eliminated by constraints hidden in the original EOMs. We can of course also rewrite this as two coupled
second order differential equations for $u_1$ and $u_4$, i.e.~the original $\phi$ and $q$. The utility of this approach
for higher order systems is its algorithmic method of discovering constraints, which in this case could also have been found by
inspection.
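The substitution chain in the special case $b^2 = a k_1$ can be verified symbolically. A sympy sketch of the constraint resolution (variable names are ours):

```python
import sympy as sp

b, c, k0, k1, v, w = sp.symbols('b c k0 k1 v w')
u1, u2, u3, u4, u5 = sp.symbols('u1:6')
# Constraint on u6 obtained from EOM_6 when b^2 - a*k1 = 0
u6 = (c*w*u1 - b*w*u2 - k1*v*u4) / (k0*k1 - c**2)
# u7 from EOM_7; du6/dt from the constraint using u1dot = u2, u2dot = u3, u4dot = u5
u7 = -(k1*u3 + c*u6 + w*u1) / b
u6dot = (c*w*u2 - b*w*u3 - k1*v*u5) / (k0*k1 - c**2)
# EOM_5 (u7 = du6/dt) then fixes u3 algebraically
u3_sol = sp.solve(sp.Eq(u7, u6dot), u3)[0]
expected = k1*(c*v*u4 + b*v*u5 - k0*w*u1) / (k0*k1**2 - c**2*k1 - b**2*w)
print(sp.simplify(u3_sol - expected))  # 0
```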
\section{Decomposition techniques}
\label{sec:decomposition}
Here we review the decomposition techniques for regular or singular matrix pencils in \S\ref{sec:KroneckerAppendix} and for tensors on the 2-sphere in \S\ref{sec:harmonics}.
In the main text and Appendix~\ref{sec:AlgorithmAppendix},
the former is used to block diagonalize and characterize the derivative structure of a set of partial differential equations of motion. The latter is used to decouple the parity and angular
momentum modes of metric fluctuations.
\subsection{Kronecker decomposition of matrix pencil}
\label{sec:KroneckerAppendix}
Given two matrices $\mathbb{A}, \mathbb{B}$ of the same dimensions, their linear combination $\mathbb{A} +
\lambda \mathbb{B}$ is called a matrix pencil.
If there exists a $\lambda$ for which $\mathbb{A} + \lambda \mathbb{B}$ is invertible, then the pencil is called regular
and can be placed into a block diagonal Weierstrass form~\cite{Kagstrom:1983} of $r$ regular subblocks with invertible matrices $\mathbb{P}$ and $\mathbb{Q}$
\begin{eqnarray}
{\mathbb{P}(\mathbb{A} + \lambda \mathbb{B})\mathbb{Q} =} \left\{ \mathbb{R}_{\mu_1}, \ldots,
\mathbb{R}_{\mu_r} \right\} . \nonumber
\end{eqnarray}
As a shorthand convention, we denote a block diagonal concatenation of matrices $\diag\{\mathbb{A}_1,\ldots,\mathbb{A}_n\}$ as just the
list of its subblocks $\{ \mathbb{A}_1,\ldots, \mathbb{A}_n \}$.
Here each regular subblock $\mathbb{R}_\mu$ is a $\mu \times \mu$ matrix pencil
defined by the generalized eigenvalue $\Omega$. For finite $\Omega$
\begin{equation}
\mathbb{R}_\mu(\Omega) \equiv \lambda \mathbb{I}_\mu - \mathbb{J}_\mu(\Omega),
\end{equation}
where $\mathbb{I}_\mu$ is a $\mu \times \mu$ identity matrix and
$\mathbb{J}_\mu(\Omega)$ is a $\mu \times \mu$ lower Jordan block of the form
\begin{equation}
\mathbb{J}_\mu(\Omega) =\left(
\begin{array}{cccc}
\Omega & & & \\
1 & \Omega & & \\
& \ddots & \ddots & \\
& & 1 & \Omega
\end{array}
\right) ,
\label{eqn:Jordan}
\end{equation}
whereas for the special case $\Omega=\infty$
\begin{equation}
\mathbb{R}_\mu(\infty) \equiv \lambda \mathbb{N}_\mu - \mathbb{I}_\mu.
\label{eqn:Rnil}
\end{equation}
Here $\mathbb{N}_\mu=\mathbb{J}_\mu(0)$ is a nilpotent lower Jordan matrix with $\Omega=0$.
If the matrix pencil is singular, it can still be cast into Kronecker form
\cite{VanDooren:1979aaa} with invertible matrices $\mathbb{P}$ and $\mathbb{Q}$
\begin{eqnarray}
\mathbb{P}(\mathbb{A} + \lambda \mathbb{B})\mathbb{Q} &=&
\big\{\mathbb{L}_{\mu_1},\ldots, \mathbb{L}_{\mu_u}, \mathbb{L}_{\mu_1}^P, \ldots, \mathbb{L}_{\mu_o}^P, \nonumber\\
&& \mathbb{R}_{\mu_1}, \ldots, \mathbb{R}_{\mu_r}\big\},
\end{eqnarray}
where $u$ is the number of ``underdetermined'' $\mu \times (\mu + 1)$ pencils of the form
\begin{equation}
\mathbb{L}_\mu=\begin{pmatrix}
\lambda & 1 & &\\
& \ddots & \ddots &\\
& & \lambda & 1
\end{pmatrix} ,
\end{equation}
and $o$ is the number of ``overdetermined'' $(\mu + 1) \times \mu$ pencils of the pertransposed $\mathbb{L}_\mu$ form,
\begin{equation}
\mathbb{L}_\mu^P =\begin{pmatrix}
1 & &\\
\lambda & \ddots &\\
& \ddots & 1 \\
& & \lambda
\end{pmatrix} .
\end{equation}
The degenerate cases of $\mathbb{L}_0$ and $\mathbb{L}_0^P$ are formally $0 \times 1$ and $1 \times 0$ matrices
which stand for a column or row of zeros in the block diagonal form respectively.
Finally, as a shorthand convention, we denote for example
\begin{equation}
\{ 4\times \mathbb{L}_1 \} = \{ \mathbb{L}_1,\mathbb{L}_1,\mathbb{L}_1,\mathbb{L}_1 \},
\end{equation}
if there are repeated identical block structures.
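For concreteness, these blocks are straightforward to assemble numerically. A sketch using numpy and scipy (the constructors $R$, $L$, $LP$ are our own helpers, not a library API), evaluating a pencil at a sample value of $\lambda$:

```python
import numpy as np
from scipy.linalg import block_diag

def R(mu, omega, lam):
    """Regular block R_mu(omega) = lam*I - J_mu(omega), evaluated at lam."""
    J = omega * np.eye(mu) + np.eye(mu, k=-1)  # lower Jordan block
    return lam * np.eye(mu) - J

def L(mu, lam):
    """Underdetermined mu x (mu+1) block L_mu."""
    return lam * np.eye(mu, mu + 1) + np.eye(mu, mu + 1, k=1)

def LP(mu, lam):
    """Overdetermined (mu+1) x mu block L_mu^P (pertranspose of L_mu)."""
    return np.eye(mu + 1, mu) + lam * np.eye(mu + 1, mu, k=-1)

lam = 0.5
pencil = block_diag(L(2, lam), LP(1, lam), R(2, 0.0, lam))
print(pencil.shape)  # (6, 6)
```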
\subsection{Angular harmonics}
\label{sec:harmonics}
The normal modes or harmonic functions for tensorial fields on the 2-sphere are classified by their transformation properties
under a general rotation defined by Euler angles.
Following Ref.~\cite{Goldberg} (see also \cite{Okamoto:2003zw}), we can decompose any trace free totally symmetric tensor
of rank $s$
on the 2-sphere into its spin $\pm s$ components ${}_{\pm s} f(\vh{n})$ as
\begin{equation}
T_{a_1\dots a_s}=({}_sf) \bar{\vc{m}}_{a_1}\dots\bar{\vc{m}}_{a_s}
+({}_{-s}f )\vc{m}_{a_1}\dots\vc{m}_{a_s},
\label{Eqn:RankSBoth}
\end{equation}
where the covariant complex unit vectors on the sphere
\begin{equation}
{ \vc{m}}_a =\frac{1}{\sqrt{2}} \left(
\begin{array}{c}
1 \\
i\sin\theta\\
\end{array}
\right),
\quad
{\bar{\vc{m}}}_a =\frac{1}{\sqrt{2}} \left(
\begin{array}{c}
1 \\
-i\sin\theta\\
\end{array}
\right),
\end{equation}
obey the conjugate orthonormality property
\begin{eqnarray}
\vc{m}_a \vc{m}^a = \bar{\vc{m}}_a \bar{\vc{m}}^a = 0 ,\qquad \vc{m}_a \bar{\vc{m}}^a = 1.
\label{Eqn:Orthonormality}
\end{eqnarray}
Angular indices are raised and lowered by the metric $\sigma_{ab}$ on the 2-sphere
and the antisymmetric Levi-Civita tensor $\epsilon_{ab}$ converts the real and imaginary parts
\begin{eqnarray}
\epsilon_{a}^{\hphantom{a}b} \vc{m}_b &=& i \vc{m}_a, \nonumber\\
\epsilon_{a}^{\hphantom{a}b} \bar{\vc{m}}_b &=& - i \bar{\vc{m}}_a.
\end{eqnarray}
Explicitly,
\begin{equation}
\sigma_{ab} = \left(
\begin{array}{cc}
1 & 0\\
0 & \sin^2 \theta\\
\end{array}
\right) ,\quad
\epsilon_{ab} = \left(
\begin{array}{cc}
0 & \sin\theta \\
-\sin\theta & 0\\
\end{array}
\right) .
\end{equation}
Note that $\vc{m}_a \bar{\vc{m}}_b + \bar{\vc{m}}_a \vc{m}_b = \sigma_{ab}$.
In this Appendix, we employ $\vh{n}$ to denote the radial unit vector specified by
the angular coordinates $\{ \theta,\phi \}$ and integrals over
$d\vh{n}$ as integrals over angles on the 2-sphere. A right-handed rotation of the coordinate axis
around $\vh{n}$ by $\psi$ changes the spin functions by a phase $e^{-i s\psi}$.
These definitions apply to $s=0$ scalar functions as well but note that
Eq.~(\ref{Eqn:RankSBoth}) implies the convention $T = {}_0 f + {}_{-0} f = 2 {}_0 f$. In this case the complete
set of modes for $T$ are the spherical harmonics $Y_{\ell m}$.
The spin-$s$ functions can likewise be decomposed into multipole moments based on their
transformation properties under the remaining Euler angles, i.e.~a rotation of the pole of the spherical coordinates.
The normal modes are generalizations of spherical harmonics called spin spherical harmonics
\cite{Goldberg}
that obey
the orthonormality property
\begin{equation}
\int d\vh{n} ( {}_s Y_{\ell' m'}^* )( {}_s Y_{\ell m} )= \delta_{\ell \ell'}\delta_{m m'},
\end{equation}
the conjugation property
\begin{equation}
{}_s Y_{\ell m}^* = (-1)^{m+s} {}_{-s}Y_{\ell (-m)},
\end{equation}
and the parity property
\begin{equation}
{}_s Y_{\ell m}(\vh{n}) = (-1)^{\ell} {}_{-s}Y_{\ell m}(-\vh{n}) ,
\end{equation}
where $\ell \ge s$ and $-\ell \le m \le \ell$.
Rotation of the coordinate origin mixes the $m$ moments of a given angular momentum $\ell$.
Thus the $s\ge 1$ tensor eigenstates of a given angular momentum $(\ell,m)$ with even parity $X=E$ and odd parity $X=B$
are given by
\begin{equation}
Y^X_{\ell m,a_1 \ldots a_s} ={}_s f^X_{\ell m} \bar{\vc{m}}_{a_1}\ldots \bar{\vc{m}}_{a_s} + {}_{-s} f^X_{\ell m} \vc{m}_{a_1} \ldots \vc{m}_{a_s},
\end{equation}
where the spin functions are
\begin{eqnarray}
{}_s f^E_{\ell m} &=& -i( {}_s f^B_{\ell m}) = \frac{{}_{s}Y_{\ell m}}{\sqrt{2} } (-1)^s, \nonumber\\
{}_{-s} f^E_{\ell m}&=& i( {}_{-s} f^B_{\ell m}) = \frac{{}_{-s}Y_{\ell m}}{\sqrt{2} }.
\end{eqnarray}
By virtue of the analogous spin relations above,
the tensors satisfy the orthonormality relation
\begin{eqnarray}
\int d\vh{n} Y^{X*}_{\ell m,a_1 \ldots a_s} Y^{X',a_1 \ldots a_s}_{\ell' m'} = \delta_{\ell\ell'}\delta_{m m'}\delta_{X X'}
\label{eqn:orthonormality}
\end{eqnarray}
and the conjugation relation
\begin{eqnarray}
Y^{X*}_{\ell m,a_1 \ldots a_s} = (-1)^m Y^{X}_{\ell (-m),a_1 \ldots a_s} ,
\end{eqnarray}
where $X \in E,B$.
Covariant differentiation of these tensors raises and lowers the spin weights according to the
ladder operators $\;\raise1.0pt\hbox{$'$}\hskip-6pt\partial,\baredth$ \cite{Goldberg}
\begin{eqnarray}
\nabla_b T_{a_1\ldots a_s} &=& -\bar{\vc{m}}_{a_1}\cdots\bar{\vc{m}}_{a_s}
\frac{\bar{\vc{m}}_b \;\raise1.0pt\hbox{$'$}\hskip-6pt\partial
+\vc{m}_b \baredth }{\sqrt{2}}
{}_sf \nonumber\\
&&- \vc{m}_{a_1}\cdots\vc{m}_{a_s}
\frac{\bar{\vc{m}}_b \;\raise1.0pt\hbox{$'$}\hskip-6pt\partial
+\vc{m}_b \baredth }{\sqrt{2}}{}_{-s}f .
\label{Eqn:CovDerivFinal}
\end{eqnarray}
In particular, their action on the spin harmonics gives
\begin{eqnarray}
\;\raise1.0pt\hbox{$'$}\hskip-6pt\partial {}_s Y_{\ell m} &=& \sqrt{(\ell-s)(\ell + s +1)} \, {}_{s+1} Y_{\ell m} ,\nonumber\\
\baredth {}_s Y_{\ell m} &=&- \sqrt{(\ell+s)(\ell - s +1)}\, {}_{s-1} Y_{\ell m}.
\end{eqnarray}
To make a connection with the RWZ literature, we can use Eq.~(\ref{Eqn:CovDerivFinal})
to relate the covariant derivative of the scalar harmonics to the vector harmonics
\begin{eqnarray}
Y^E_{\ell m,a} = \frac{ \nabla_a Y_{\ell m} }{\sqrt{\ell(\ell+1)}}, \quad
Y^B_{\ell m,a} = \frac{\epsilon_{ba} \nabla^b Y_{\ell m}}{\sqrt{\ell(\ell+1)}} ,
\end{eqnarray}
and likewise the second derivative to the rank-2 tensor harmonics \cite{Kamionkowski:1996ks}
\begin{eqnarray}
Y^E_{\ell m,ab} &=& \sqrt{2\frac{(\ell-2)!}{(\ell+2)!}}
\(\nabla_a \nabla_b -
\frac{1}{2}\sigma_{ab} \nabla_c \nabla^c\)Y_{\ell m}, \nonumber\\
Y^B_{\ell m,ab} &=& \sqrt{\frac{1}{2}\frac{(\ell-2)!}{(\ell+2)!}}
\(\epsilon^c_{\phantom{c}b}\nabla_a \nabla_c +
\epsilon^c_{\phantom{c}a}\nabla_b \nabla_c \)Y_{\ell m}\nonumber .
\end{eqnarray}
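As a consistency check of the $\sqrt{\ell(\ell+1)}$ normalization, the $s=0$ case of the first identity in Eq.~(\ref{eqn:angularidentities}) gives $\int d\vh{n}\, (\nabla_a Y_{\ell m}^*)(\nabla^a Y_{\ell m}) = \ell(\ell+1)$; a sympy sketch for $Y_{10}$:

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')
ell = 1
Y10 = sp.sqrt(sp.Rational(3, 4) / sp.pi) * sp.cos(theta)  # scalar harmonic Y_{10}
# |nabla Y|^2 with the 2-sphere metric sigma = diag(1, sin^2 theta)
grad2 = sp.diff(Y10, theta)**2 + sp.diff(Y10, phi)**2 / sp.sin(theta)**2
norm = sp.integrate(grad2 * sp.sin(theta), (theta, 0, sp.pi), (phi, 0, 2*sp.pi))
print(norm)  # 2, i.e. ell*(ell+1) for ell = 1
```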
Eq.~(\ref{Eqn:CovDerivFinal}) also provides identities for integrals of scalar contractions of
covariant derivatives of tensors over the 2-sphere
\begingroup
\allowdisplaybreaks[1]
\begin{eqnarray}
&& \int d\vh{n}(\nabla_b Y^{X*}_{\ell m,a_1 \ldots a_s})( \nabla^b Y^{X',a_1 \ldots a_s}_{\ell' m'} )
\nonumber\\*
&&\qquad = \delta_{\ell\ell'}\delta_{m m'}\delta_{X X'}[\ell(\ell+1)-s^2] ,\nonumber\\
&& \int d\vh{n}(\nabla_{a_s} Y^{X*}_{\ell m,a_1 \ldots a_{s-1}}) Y^{X',a_1 \ldots a_s}_{\ell' m'}
\nonumber\\*
&&\qquad = \delta_{\ell\ell'}\delta_{m m'} \delta_{X X'} \sqrt{\frac{(\ell-s+1)(\ell+s)}{2}},
\nonumber\\
&& \int d\vh{n}(\nabla_{b} Y^{X*}_{\ell m,a_1 \ldots a_{s}})( \nabla^{a_s} Y^{X',a_1 \ldots a_{s-1}b}_{\ell' m'} )
\nonumber\\*
&&\qquad = \delta_{\ell\ell'}\delta_{m m'}\delta_{X X'}\frac{(\ell-s)(\ell+s+1) }{2},
\nonumber\\
&& \int d\vh{n}(\nabla_bY^{X*}_{\ell m,a_1 \ldots a_{s}})( \nabla_c Y^{X',a_1 \ldots a_s b c}_{\ell' m'} )
\nonumber\\*
&&\qquad = -\delta_{\ell\ell'}\delta_{m m'}\delta_{X X'}\sqrt{\frac{(\ell -s)(\ell+s+1)}{2}}
\nonumber\\*
&&\qquad \quad \times \sqrt{\frac{(\ell+s+2)(\ell-s-1)}{2}} ,
\label{eqn:angularidentities}
\end{eqnarray}
\endgroup
which are used in the main text to determine how the various spin components are coupled through the equations
of motion.
\vfill
\begin{widetext}
\section{Alternative odd analysis}
\label{ssec:oddal}
In this appendix we highlight the difference between the odd mode analysis in \S\ref{ssec:oddl2}
and an alternative analysis employing a technique of auxiliary fields, which is commonly
used in the literature~\cite{DeFelice:2011ka, Motohashi:2011pw, Motohashi:2011ds,
Kobayashi:2012kh, Ogawa:2015pea}.
See also Appendix~\ref{sec:proderi} for a simpler example of the general technique.
The general $B$ mode Lagrangian \eqref{OddLagr} is given explicitly by
\begin{eqnarray}
\mathcal{L}_{B} &=& D_1 h_0^2 + D_2 h_1^2 + D_3 h_2^2 + D_4 h_0 h_1
+ D_5 h_0 h_2
+ D_6 h_1 h_2+ D_7 (\dot h_1 - h_0')^2 + D_8 \dot h_2^2 + D_9
h_2'^2\nonumber\\
&&
+ D_{10} h_1 h_0' + D_{11} h_1 h_2' + D_{12} h_0 \dot h_1 +
D_{13} h_0 \dot h_2 ,
\label{OddLagrRepeat}
\end{eqnarray}
where the $D_i$ coefficients in terms of the background metric and {St\"{u}ckelberg} fields are
\begin{eqnarray}
D_1 &=&\frac{a^2 \left[ 3 r^2 \dot a^2-2 r b'
b+ A r^2 b^2 \bar \chi_{22}+ \ell
(\ell+1)b^2\ \right]
+2 r a' a b \left(b-rb'\right)
+ r^2 a'^2 b^2-\Lambda r^2 a^4
b^2}{4 r^2 a^3 b^3}
, \nonumber\\
D_2 &=&\frac{- a^2 \left[ r^2 \dot a^2-2 r b'
b+(\ell-1)(\ell+2) b^2\right]+2 r a' a b \left(r
b'+b\right)+ r^2 a'^2 b^2+ r^2 a^4 \left(\Lambda b^2+ A \bar \chi_{11}\right)}{4 r^2
a^5 b}
, \nonumber\\
D_3 &=&\frac{-a^2 \left[4 r^2 \dot a^2+b^2
\left(A r^2 \bar \chi_{22}-2\right)\right]+4 r^2
a'^2 b^2+8 r a' a b^2+ r^2 a^4 \left(2 \Lambda b^2+ A \bar \chi_{11}\right)}{16 r^4
a^5 b}
,
\nonumber\\
D_4 &=&-\frac{ 4 \dot a (r a'+a)
+A r a^2 \bar \chi_{12}}{2 r
a^3 b}
, \quad
D_5 =-\frac{\sqrt{(\ell-1)(\ell+2)} \dot a}{2 r^2 a^2 b}
, \quad
D_6 =\frac{\sqrt{(\ell-1)(\ell+2)} \left(r a'+a\right)b}{2
r^3 a^4}
, \nonumber\\
D_7 &=&\frac{1}{4 a b}
, \quad
D_8 =\frac{1}{16 r^2 a b}
, \quad
D_9 =-\frac{ b}{16 r^2 a^3}
, \quad
D_{10} =\frac{\dot a}{a^2 b}
, \quad
D_{11} =-\frac{\sqrt{(\ell-1)(\ell+2)} b}{4 r^2 a^3}
, \quad
D_{12} =\frac{ \left(r a'+a\right)}{r a^2 b}
, \nonumber\\
D_{13} &=&\frac{\sqrt{(\ell-1)(\ell+2)}}{4 r^2 a b}
.
\label{eqn:Ds}
\end{eqnarray}
Parts proportional to $A$ are contributions from the dRGT potential term in the
quadratic Lagrangian \eqref{SimplifiedLagr}. Naturally, this potential term affects only the coefficients of nonderivative
terms $D_1, D_2, D_3$, and $D_4$. The remaining parts of the coefficients $D_i$ are
inherited from the Einstein-Hilbert Lagrangian and the effective cosmological constant
$\Lambda$.
While we cannot integrate out either $h_0$ or $h_1$
from the Lagrangian \eqref{OddLagr} directly, the peculiar structure $(h_0' - \dot
h_1)^2$ enables us to introduce an auxiliary field and obtain a dynamically
equivalent unconstrained
Lagrangian with only two dynamical field variables.
The first step is to complete the square of derivative terms of $h_0$ and $h_1$ in Eq.~\eqref{OddLagr} as
\begin{equation}
\mathcal{L}_{B} = D_7 \( \dot h_1 - h_0' + \frac{D_{12}}{2 D_7}h_0 - \frac{D_{10}}{2
D_7} h_1\)^2+\cdots .
\label{csqLagr}
\end{equation}
Next, we introduce an auxiliary field $\qa$ and consider the Lagrangian
\begin{equation}
\mathcal{L}_{B} = -\f{\qa^2}{D_7} + 2 \qa \(\dot h_1 - h_0' + \frac{D_{12}}{2 D_7}h_0 - \frac{D_{10}}{2 D_7} h_1\) +\cdots.
\label{QLagr}
\end{equation}
Clearly, the equation of motion for $\qa$ is given by
\begin{equation}
\label{qRelation}
\qa = D_7 \mk{ \dot h_1 - h_0' + \frac{D_{12}}{2 D_7}h_0 - \frac{D_{10}}{2 D_7} h_1 } ,
\end{equation}
and we recover Eq.~\eqref{csqLagr} by plugging Eq.~\eqref{qRelation} back into Eq.~\eqref{QLagr}.
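This auxiliary-field step is elementary and can be confirmed with a short computer-algebra check (an illustrative sketch, not part of the derivation; the symbol `X` abbreviates the bracketed combination of fields in Eq.~\eqref{QLagr}, treated as a single independent quantity):

```python
import sympy as sp

# X abbreviates (\dot h_1 - h_0' + D12/(2 D7) h0 - D10/(2 D7) h1),
# treated here as one independent symbol.
D7, X, q = sp.symbols('D7 X q', nonzero=True)

L_aux = -q**2 / D7 + 2 * q * X             # auxiliary-field form, Eq. (QLagr)
q_sol = sp.solve(sp.diff(L_aux, q), q)[0]  # equation of motion for q

assert sp.simplify(q_sol - D7 * X) == 0                    # Eq. (qRelation)
assert sp.simplify(L_aux.subs(q, q_sol) - D7 * X**2) == 0  # completed square
```

Substituting the solution back thus reproduces the $D_7 X^2$ term of Eq.~\eqref{csqLagr} exactly.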
In this way, the problematic $(\dot h_1 - h_0')^2$ term in Eq.~\eqref{OddLagr} can be effectively hidden inside the
terms of Eq.~\eqref{QLagr}, while the remaining derivatives on $h_0, h_1$ can be moved
onto $\qa$ through integration by parts. After all the algebra, we get a
Lagrangian without derivatives on $h_0, h_1$
\begin{eqnarray}
\label{AfterHayatoTrick}
\mathcal{L}_{B} &=& \hat D_1 h_0^2 + \hat D_2 h_1^2 + D_3 h_2^2 +
\hat D_4 h_0 h_1 + D_5 h_0 h_2
+ D_6 h_1 h_2+ D_8 \dot h_2^2 + D_9 h_2'^2 + D_{11} h_1 h_2' + D_{13} h_0 \dot h_2 \label{Loddq}
\\
&&~ - \f{1}{D_7} \qa^2 + \f{D_{12}}{D_7} \qa h_0 - \f{D_{10}}{D_7} \qa h_1 - 2\dot \qa h_1 + 2 \qa'
h_0 , \nonumber
\end{eqnarray}
where the equality holds up to boundary terms generated by integration by parts.
These integrations by parts shift the coefficients to
\begin{eqnarray}
\hat D_1 &=& D_1 - \frac{D_{12}^2}{4D_7} - \frac{D_{12}'}{2}
= \f{(\ell-1)(\ell+2) + A r^2 \bar\chi_{22} }{4 r^2 a b}, \nonumber\\
\hat D_2 &=& D_2 - \frac{D_{10}^2}{4D_7} - \frac{\dot D_{10}}{2}
= \f{-(\ell-1)(\ell+2)b^2 + A r^2 a^2 \bar\chi_{11} }{4 r^2 a^3 b} ,\nonumber\\
\hat D_4 &=& D_4 + \frac{D_{12}D_{10}}{2D_7} = - \f{ A \bar\chi_{12} }{2 ab} .
\label{hatDs}
\end{eqnarray}
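As a consistency check, the $\hat D_4$ relation, which involves no derivatives of the $D_i$, can be verified directly from Eq.~\eqref{eqn:Ds} (a computer-algebra sketch; the $\hat D_1$ and $\hat D_2$ relations additionally contain $D_{12}'$ and $\dot D_{10}$ and are not checked here):

```python
import sympy as sp

# Symbols: aa = a, ad = \dot a, ap = a', chi12 = \bar\chi_{12}.
r, aa, b, ad, ap, A, chi12 = sp.symbols('r aa b ad ap A chi12', nonzero=True)

# Coefficients copied from Eq. (eqn:Ds):
D4  = -(4 * ad * (r * ap + aa) + A * r * aa**2 * chi12) / (2 * r * aa**3 * b)
D7  = 1 / (4 * aa * b)
D10 = ad / (aa**2 * b)
D12 = (r * ap + aa) / (r * aa**2 * b)

# hat D4 = D4 + D12 D10 / (2 D7) should reduce to -A chi12 / (2 a b):
D4hat = sp.simplify(D4 + D12 * D10 / (2 * D7))
assert sp.simplify(D4hat + A * chi12 / (2 * aa * b)) == 0
```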
Notice that the case of $\ell=1$ is special;
for this reason, we consider $\ell =1$ and $\ell \ge 2$ separately.
\subsection{Odd $\ell \ge 2$ EOMs}
For modes with $\ell \ge 2$,
we can integrate $h_0, h_1$ out by using their equations of motion.
The end result reads
\begin{eqnarray}
\label{HayatosSolution}
h_0 &=& -\frac{\hat D_4 [-2 \dot \qa + D_6 h_2 + D_{11} h_2' - \f{D_{10}}{D_7}\qa] -
2 \hat D_2 \left[2 \qa' + D_{13} \dot h_2 + D_5 h_2 + \f{D_{12}}{D_7}\qa \right]}{\hat D_4^2 - 4 \hat
D_1 \hat D_2} ,
\nonumber\\
h_1 &=& - \frac{2 \hat D_1 [ 2 \dot \qa - D_6 h_2 - D_{11} h_2' + \f{D_{10}}{D_7}\qa]
+ \hat D_4 \left[2 \qa' + D_{13}\dot h_2 +D_5 h_2 + \f{D_{12}}{D_7}\qa \right]}{\hat D_4^2 - 4 \hat
D_1 \hat D_2} .
\end{eqnarray}
The coefficient in the denominator is
\begin{eqnarray}
\hat D_4^2 - 4 \hat D_1 \hat D_2 = \frac{(\ell-1)(\ell+2)}{4 r^4 a^4}
\left[
(\ell-1)(\ell+2) + A r^2 a^2 \Tr \bar \chi\right] , \label{h0h1det}
\end{eqnarray}
and is typically nonzero for $\ell \geq 2$.
However, there are positions in spacetime
where \eqref{h0h1det} vanishes and we cannot solve for $h_0, h_1$ through
Eq.~\eqref{HayatosSolution}.
Because $\Tr \bar \chi$ is an invariant quantity, we cannot avoid this problem by
going into another slicing of the background spacetime as we did in our main analysis.
Outside of these problematic points, by plugging solutions \eqref{HayatosSolution} into
the Lagrangian \eqref{AfterHayatoTrick} it is possible to obtain an unconstrained
Lagrangian with only two degrees of freedom $\qa, h_2$ and with no more than second
derivatives. This Lagrangian then leads to two second order equations of motion for
$\qa,h_2$. Characteristic curves for these EOMs can be then found in a standard way
\cite{Hoffman:2001aaa} by focusing only on the second derivative terms in the two equations of
motion and determining where the EOMs fail to determine their values given the lower derivatives.
Similarly to \eqref{EOMmat}, requiring four consistency conditions such as
\begin{equation}
d(\qa') = \qa'' dr + \dot \qa' dt,
\end{equation}
where the left hand side is assumed to be continuous, leads to a linear system of six equations
for the six unknown second derivatives $\qa'', \dot \qa', \ddot \qa, h_2'', \dot h_2', \ddot
h_2$. For general values of the infinitesimal displacement vectors $dt, dr$ this system
has a unique solution. For special ratios $dt/dr$ which correspond to the characteristic
curves the system allows for multiple solutions and the highest derivatives are not
uniquely defined. Characteristic curves obtained this way agree with the curves obtained
by our main analysis.
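The degeneracy criterion just described can be illustrated on a textbook example (a toy sketch using the wave equation $\ddot u = c^2 u''$ rather than the actual $(\qa, h_2)$ system, whose coefficients are too unwieldy to reproduce here):

```python
import sympy as sp

c = sp.symbols('c', positive=True)
dt, dr = sp.symbols('dt dr')

# Unknown second derivatives ordered as (u_tt, u_tr, u_rr).
# Row 1: the PDE u_tt - c^2 u_rr = 0.
# Rows 2-3: consistency conditions d(u_t) = u_tt dt + u_tr dr and
#           d(u_r) = u_tr dt + u_rr dr along a displacement (dt, dr).
M = sp.Matrix([[1,  0,  -c**2],
               [dt, dr,  0   ],
               [0,  dt,  dr  ]])

det = sp.expand(M.det())
assert det == dr**2 - c**2 * dt**2

# The system fails to fix the second derivatives exactly on the
# characteristic slopes dr/dt = +/- c:
assert set(sp.solve(det, dr)) == {c * dt, -c * dt}
```

For the $(\qa,h_2)$ system the same construction yields a $6\times 6$ linear system whose degenerate displacement ratios reproduce the characteristics of the main analysis.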
Note that this alternative procedure does not on its own distinguish a pair of $\mathbb{R}_1$ subsystems
that share characteristics from the $\mathbb{R}_2$ system identified in our main analysis. Likewise,
since the discontinuity identified here is only in the highest derivatives, the analysis
does not address the chained derivatives in the $\mathbb{R}_2$ block that
link characteristics.
\subsection{Odd $\ell=1$ EOMs}
\label{sec:oddl1app}
For $\ell=1$, there is no spin-2 $h_2$ mode
so that Eq.~\eqref{Loddq} becomes
\begin{align}
\label{Loddl1q}
\mathcal L_{B,\ell=1} =& \hat D_1 h_0^2 + \hat D_2 h_1^2 + \hat D_4 h_0 h_1 - \f{\qa^2}{D_7} + \f{D_{12}}{D_7} \qa h_0
- \f{D_{10}}{D_7} \qa h_1 - 2 \dot \qa h_1 + 2 \qa' h_0.
\end{align}
Note that the coefficients $\hat D_1,\hat D_2,\hat D_4$
in Eq.~\eqref{hatDs} are proportional to $m^2$ for $\ell=1$, which makes the following analysis different from general relativity.
Variation of Eq.~\eqref{Loddl1q} with respect to $h_0, h_1, \qa$ yields
\begin{align}
2 \hat D_1 h_0 + \hat D_4 h_1 + \f{D_{12}}{D_7} \qa + 2\qa' &=0, \label{ell1eoms1} \\
\hat D_4 h_0 + 2 \hat D_2 h_1 - \f{D_{10}}{D_7} \qa - 2\dot \qa &=0, \label{ell1eoms2}\\
- 2 \qa + D_{12} h_0 - D_{10} h_1 + 2 D_7 \dot h_1 - 2 D_7 h_0' &=0. \label{ell1eoms3}
\end{align}
Since $\hat D_4^2-4\hat D_1\hat D_2=0$ for $\ell=1$ from Eq.~(\ref{h0h1det}), we cannot solve Eqs.~\eqref{ell1eoms1} and \eqref{ell1eoms2} for $h_0$
and $h_1$. Instead
we first solve \eqref{ell1eoms1} for $h_0$:
\begin{equation} h_0 = -\f{1}{2\hat D_1} \mk{ \hat D_4 h_1 + \f{D_{12}}{D_7}\qa + 2\qa' } . \label{eqh0} \end{equation}
Plugging Eq.~\eqref{eqh0} into Eq.~\eqref{ell1eoms2}, we obtain an autonomous equation
\begin{equation} \dot \qa + \f{\hat D_4}{2 \hat D_1} \qa' + \f{1}{2 D_7} \mk{ D_{10} + \f{\hat D_4 D_{12}}{2\hat D_1} } \qa = 0 . \label{eqq} \end{equation}
Here, by virtue of $\hat D_4^2-4\hat D_1\hat D_2=0$, the $h_1$ term drops out.
Finally, from Eq.~\eqref{ell1eoms3} we obtain
\begin{align}
&\dot h_1 + \f{\hat D_4}{2\hat D_1} h_1' + \kk{ \mk{\f{\hat D_4}{2\hat D_1}}' - \f{1}{2D_7} \mk{D_{10} + \f{\hat D_4 D_{12}}{2\hat D_1} } } h_1
= - \mk{\f{ \qa'}{\hat D_1}}' + \kk{ \f{1}{D_7} + \f{D_{12}^2}{4D_1D_7^2} - \mk{\f{D_{12}}{2D_1D_7}}' } \qa. \label{eqh1}
\end{align}
Note that the source term on the right-hand side is written in terms of $\qa$. Therefore,
given background evolution, we can first solve Eq.~\eqref{eqq} for $\qa(t,r)$, plug it into
Eq.~\eqref{eqh1} to solve for $h_1(t,r)$, and then Eq.~\eqref{eqh0} gives $h_0(t,r)$. As we
have two first-order differential equations, we require two initial conditions to solve
the system. It is straightforward from Eqs.~\eqref{eqq} and \eqref{eqh1} to check that
the characteristic curves corresponding to
these two equations are the same as those uncovered in our main analysis.
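The cancellation of $h_1$ noted below Eq.~\eqref{eqq} is a purely algebraic consequence of $\hat D_4^2 - 4\hat D_1\hat D_2 = 0$ and can be confirmed symbolically (a sketch in which the derivatives $\qa'$ and $\dot\qa$ are treated as independent symbols `qp` and `qd`):

```python
import sympy as sp

D1h, D4h, D7, D10, D12 = sp.symbols('D1h D4h D7 D10 D12', nonzero=True)
h1, q, qp, qd = sp.symbols('h1 q qp qd')   # qp = q', qd = \dot q

# ell = 1 degeneracy: D4h^2 - 4 D1h D2h = 0  =>  D2h = D4h^2 / (4 D1h).
D2h = D4h**2 / (4 * D1h)

h0 = -(D4h * h1 + D12 / D7 * q + 2 * qp) / (2 * D1h)    # Eq. (eqh0)
eom2 = D4h * h0 + 2 * D2h * h1 - D10 / D7 * q - 2 * qd  # Eq. (ell1eoms2)

# h1 drops out, and what remains is (-2) times the autonomous Eq. (eqq):
eqq = qd + D4h / (2 * D1h) * qp \
    + (D10 + D4h * D12 / (2 * D1h)) / (2 * D7) * q
assert sp.simplify(sp.diff(eom2, h1)) == 0
assert sp.simplify(eom2 + 2 * eqq) == 0
```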
A structurally similar set of equations was uncovered for the two $\ell=0$
modes in isotropic gauge~\cite{Wyman:2012iw, Motloch:2015gta}. There one of the isotropic modes $\delta
\Gamma$ formed an autonomous equation; this mode then sourced the second
isotropic mode $\delta f$. Unlike here, both these equations were manifestly first order,
with $\delta \Gamma$ sourcing $\delta f$ through a term without any derivatives. In the
present analysis we see $h_1$ sourced by up to second derivatives of $\qa$. Because of
this derivative sourcing, this system is an $\mathbb{R}_2$ parabolic block whereas $\ell = 0$
is a $2\times \mathbb{R}_1$ pair of hyperbolic blocks.
\subsection{Odd $\ell=1$ Hamiltonian analysis}
\label{ssec:oddl1Hamiltonian}
The $\ell=1$ odd Lagrangian is simple enough to also perform the Hamiltonian analysis.
From \eqref{Loddl1q}, the canonical momenta for $\qa$, $h_0$, $h_1$ are given by
\begin{equation} p_q = -2h_1, \quad p_0 = 0, \quad p_1 = 0, \end{equation}
and yield three primary constraints:
\begin{equation} \phi_q = p_q + 2h_1, \quad \phi_0 = p_0, \quad \phi_1= p_1. \end{equation}
The only nonvanishing Poisson bracket between them is
\begin{equation} \{ \phi_q,\phi_1 \} =2. \end{equation}
The total Hamiltonian density is given by
\begin{align}
{\cal H}_T =& \dot \qa p_q + \dot h_0 p_0 + \dot h_1 p_1 - {\cal L}_{B,\ell=1} + \mu_q \phi_q + \mu_0 \phi_0 + \mu_1 \phi_1, \notag\\
=& - \hat D_1 h_0^2 - \hat D_2 h_1^2 - \hat D_4 h_0 h_1 +\f{1}{D_7} \qa^2 - \f{D_{12}}{D_7} \qa h_0
+ \f{D_{10}}{D_7} \qa h_1 - 2\qa' h_0 + \mu_q \phi_q + \mu_0 \phi_0 + \mu_1 \phi_1 ,
\end{align}
where $\mu_q$, $\mu_0$, $\mu_1$ are Lagrange multipliers.
The consistency conditions are then given by
\begin{align}
&0\approx \dot \phi_q = \{ \phi_q,{\cal H}_T \} = -\f{2}{D_7}\qa + \f{D_{12}}{D_7}h_0 - \f{D_{10}}{D_7}h_1 + 2\mu_1,\notag\\
&0\approx \dot \phi_0 = \{ \phi_0,{\cal H}_T \} = 2\hat D_1h_0 + \hat D_4h_1 + \f{D_{12}}{D_7}\qa + 2\qa', \notag\\
&0\approx \dot \phi_1 = \{ \phi_1,{\cal H}_T \} = 2\hat D_2h_1 + \hat D_4h_0 - \f{D_{10}}{D_7}\qa - 2\mu_q.
\end{align}
From $\dot \phi_q\approx 0$ and $\dot \phi_1\approx 0$ we can solve for $\mu_1$ and $\mu_q$,
while from $\dot \phi_0\approx 0$ we obtain a secondary constraint
\begin{equation} \phi_2 = 2\hat D_1h_0 + \hat D_4h_1 + \f{D_{12}}{D_7}\qa + 2\qa'. \end{equation}
Poisson brackets of $\phi_2$ with the remaining constraints are
\begin{align}
&\{ \phi_2, \phi_q \} = \f{D_{12}}{D_7} + 2\{ \qa',p_q \}, \notag\\
&\{ \phi_2, \phi_0 \} = 2\hat D_1, \notag\\
&\{ \phi_2, \phi_1 \} = \hat D_4.
\end{align}
Therefore, the consistency condition of $\phi_2$ gives a relation between $\mu_q$, $\mu_0$, $\mu_1$
and does not generate a further constraint.
The determinant of the Poisson brackets between constraints is given by
\begin{equation} \det \{\phi_i ,\phi_j\} = 16\hat D_1^2. \end{equation}
So long as $\hat D_1 = A \bar \chi_{22}/(4 a b) \not = 0$, all four constraints are second class.
Therefore, the number of initial conditions we need is $3\times 2-4=2$, which is consistent with two
first-order EOMs for $\qa$ and $h_1$ obtained above.
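The quoted determinant can be checked by assembling the antisymmetric matrix of Poisson brackets (a sketch; the bracket $\{\phi_2,\phi_q\}$, which contains the distributional piece $2\{\qa',p_q\}$, is kept as an unspecified symbol `c`, since it turns out not to affect the determinant):

```python
import sympy as sp

D1h, D4h, c = sp.symbols('D1h D4h c')  # c stands for {phi_2, phi_q}

# Antisymmetric bracket matrix, ordered (phi_q, phi_0, phi_1, phi_2), built
# from {phi_q, phi_1} = 2, {phi_2, phi_0} = 2 D1h, {phi_2, phi_1} = D4h:
P = sp.Matrix([[ 0,  0,      2,   -c     ],
               [ 0,  0,      0,   -2*D1h ],
               [-2,  0,      0,   -D4h   ],
               [ c,  2*D1h,  D4h,  0     ]])

assert P.T == -P                          # antisymmetry
assert sp.expand(P.det()) == 16 * D1h**2  # independent of D4h and c
```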
Finally by using the constraints $\phi_i=0$, we can express
\begin{equation}
h_1 = -\frac{p_q}{2}, \quad h_0 = \frac{\hat D_4 p_q/2 - D_{12}q/D_7 -2 q'}{2\hat D_1} ,
\end{equation}
and rewrite the Hamiltonian on the constrained surface in terms of $q, p_q$
\begin{equation}
{\cal H}_T = - \f{(2\hat D_1 D_{10} + \hat D_4 D_{12})q + 2\hat D_4 D_7 q'}{4\hat D_1D_7} p_q
+ \f{D_{12}^2 + 4\hat D_1 D_7}{4\hat D_1 D_7^2} q^2
+ \f{D_{12}}{\hat D_1 D_7} q q' + \f{1}{\hat D_1} q'^2 .
\end{equation}
The term quadratic in
$p_q$ vanishes because $\hat D_4^2 - 4 \hat D_1 \hat D_2 = 0$.
This Hamiltonian is linear in $p_q$ and thus unbounded from below.
\end{widetext}
\section{Introduction and statement of main results}
The main purpose of this work is to study uniform regularity estimates
for a family of elliptic operators $\{\mathcal{L}_\varep, \varep>0\}$,
arising in the theory of homogenization, with rapidly oscillating periodic coefficients.
We establish sharp $W^{1,p}$ estimates, Lipschitz estimates, and nontangential
maximal function estimates, which are uniform in the parameter $\varep$,
on solutions with Neumann boundary conditions.
Specifically, we consider
\begin{equation}\label{operator}
\mathcal{L}_\varep
=-\frac{\partial}{\partial x_i}\left[
a_{ij}^{\alpha\beta}\left(\frac{x}{\varep}\right)
\frac{\partial}{\partial x_j}\right]
=-\text{\rm div}\left[ A\left(\frac{x}{\varep}\right)\nabla \right],
\end{equation}
where $\varep>0$.
We assume that the coefficient matrix
$A(y)=\big(a_{ij}^{\alpha\beta} (y)\big)$ with $1\le i,j\le d$ and $1\le \alpha, \beta\le m$
is real and satisfies the ellipticity condition
\begin{equation}\label{ellipticity}
\mu |\xi|^2 \le a_{ij}^{\alpha\beta} (y) \xi_i^\alpha \xi_j^\beta \le \frac{1}{\mu} |\xi|^2
\quad \text{ for } y\in \brd \text{ and } \xi=(\xi_i^\alpha)\in \mathbb{R}^{dm},
\end{equation}
where $\mu>0$, the periodicity condition
\begin{equation}\label{periodicity}
A(y+z)=A(y) \quad \text{ for } y\in \mathbb{R}^{d} \text{ and }
z\in \mathbb{Z}^{d},
\end{equation}
and the smoothness condition
\begin{equation}\label{smoothness}
| A(x)-A(y)| \le \tau |x-y|^\lambda
\quad \text{ for some } \lambda \in (0,1) \text{ and } \tau \ge 0.
\end{equation}
We will say $A\in \Lambda (\mu, \lambda,\tau)$ if $A=A(y)$ satisfies conditions
(\ref{ellipticity}), (\ref{periodicity}) and (\ref{smoothness}).
Let $f\in L^2(\Omega)$ and $g\in W^{-1/2,2}(\partial\Omega)$.
Consider the Neumann boundary value problem
\begin{equation}\label{Neumann-problem-1}
\left\{
\aligned
\mathcal{L}_\varep (u_\varep) & =\text{\rm div} (f)& & \text{ in }\Omega,\\
\frac{\partial u_\varep}{\partial\nu_\varep} & = g-n\cdot f & & \text{ on } \partial\Omega,
\endaligned
\right.
\end{equation}
where
\begin{equation}\label{conormal}
\left( \frac{\partial u_\varep}{\partial\nu_\varep}\right)^\alpha
=n_i (x) a_{ij}^{\alpha\beta}\big(\frac{x}{\varep}\big)
\frac{\partial u^\beta_\varep}{\partial x_j}
\end{equation}
denotes the conormal derivative associated with $\mathcal{L}_\varep$ and
$n=(n_1, \dots, n_d)$ is the outward unit normal to $\partial\Omega$.
Assume that $\int_\Omega u_\varep =0$.
It is known from the theory of homogenization that
under the assumptions (\ref{ellipticity})-(\ref{periodicity}),
$u_\varep \to u_0$ weakly in $W^{1,2}(\Omega)$ as $\varep\to 0$, where
$\mathcal{L}_0 (u_0)=\text{\rm div}(f)$
in $\Omega$ and $\frac{\partial u_0}{\partial \nu_0}=g-n\cdot f
$ on $\partial\Omega$.
Moreover, the homogenized operator $\mathcal{L}_0$ is an elliptic operator
with constant coefficients satisfying (\ref{ellipticity})
and depending only on the matrix $A$ (see e.g. \cite{bensoussan-1978}).
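Although the present paper treats systems with Neumann data, the basic homogenization phenomenon is easy to exhibit numerically in the simplest one-dimensional scalar setting with Dirichlet data, where the homogenized coefficient is the harmonic mean of $a$ (an illustrative sketch only; the coefficient $a(y)=2+\sin 2\pi y$ is an arbitrary smooth choice):

```python
import numpy as np

# 1D toy model: for -(a(x/eps) u')' = 1 on (0,1) with u(0) = u(1) = 0,
# the homogenized equation has the constant coefficient given by the
# harmonic mean of the 1-periodic coefficient a.
a = lambda y: 2.0 + np.sin(2.0 * np.pi * y)

y = np.linspace(0.0, 1.0, 100001)
a_hom = 1.0 / np.trapz(1.0 / a(y), y)        # harmonic mean over one cell

def solve(eps, n=400001):
    """Solve the eps-problem via the exact flux relation a(x/eps) u' = C - x."""
    x = np.linspace(0.0, 1.0, n)
    g = 1.0 / a(x / eps)
    C = np.trapz(x * g, x) / np.trapz(g, x)  # fixes u(1) = 0
    du = (C - x) * g
    u = np.concatenate(([0.0],
                        np.cumsum(0.5 * (du[1:] + du[:-1]) * np.diff(x))))
    return x, u

errors = {}
for eps in (0.13, 0.013):
    x, u = solve(eps)
    u0 = x * (1.0 - x) / (2.0 * a_hom)       # homogenized solution
    errors[eps] = np.max(np.abs(u - u0))
print(errors)  # sup-norm error shrinks roughly linearly in eps
```

The sup-norm distance between $u_\varep$ and the homogenized solution decreases roughly linearly in $\varep$, consistent with the $O(\varep)$ convergence rates discussed below.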
In this paper we shall be interested in sharp regularity estimates of $u_\varep$,
which are uniform in the parameter $\varep$,
assuming that the data are in $L^p$ or Besov or H\"older spaces.
The following three theorems are the main results of the paper.
Note that the symmetry condition $A^*=A$, i.e.,
\begin{equation}\label{symmetry}
a_{ij}^{\alpha\beta} (y) =a_{ji}^{\beta\alpha} (y) \quad \text{ for } 1\le i,j\le d
\text{ and } 1\le \alpha, \beta \le m,
\end{equation}
is also imposed in Theorems \ref{Lipschitz-estimate-theorem} and \ref{maximal-function-theorem}.
\begin{thm}[{$W^{1,p}$ estimates}]
\label{W-1-p-theorem}
Suppose $A\in \Lambda(\mu,\lambda,\tau)$ and $1<p<\infty$.
Let $\Omega$ be a bounded $C^{1,\alpha}$ domain for some $0<\alpha<1$.
Let $g=(g^\beta)\in B^{-1/p, p}(\partial\Omega)$,
$f=(f_j^\beta)\in L^p(\Omega)$ and $F=(F^\beta)\in L^q(\Omega)$,
where $q=\frac{pd}{p+d}$ for $p>\frac{d}{d-1}$ and $q>1$ for
$1<p\le \frac{d}{d-1}$.
Then, if $F$ and $g$
satisfy the compatibility condition
$\int_\Omega F^\beta
+<g^\beta,1>=0$ for $1\le \beta\le m$, the weak solutions
to
\begin{equation}\label{W-1-p}
\left\{ \aligned
\mathcal{L}_\varep (u_\varep) &=\text{\rm div} (f) +F
& & \text{ in } \Omega,\\
\frac{\partial u_\varep}{\partial\nu_\varep} & =g-n\cdot f
& & \text{ on } \partial\Omega,\\
u_\varep & \in W^{1, p}(\Omega)
\endaligned
\right.
\end{equation}
satisfy the estimate
\begin{equation}\label{W-1-p-estimate}
\| \nabla u_\varep\|_{L^p(\Omega)}
\le C\, \left\{ \| f\|_{L^p(\Omega)} + \|F\|_{L^q(\Omega)}
+\| g\|_{B^{-1/p, p}(\partial\Omega)}\right\},
\end{equation}
where $C>0$ depends only on $d$, $m$, $p$, $q$,
$\mu$, $\lambda$, $\tau$ and $\Omega$.
\end{thm}
\begin{thm}[Lipschitz estimates]
\label{Lipschitz-estimate-theorem}
Suppose that $A\in \Lambda(\mu,\lambda,\tau)$ and $A^*=A$.
Let $\Omega$ be a bounded $C^{1,\alpha}$ domain, $0<\eta<\alpha<1$ and $q>d$.
Then, for any $g\in C^{\eta} (\partial\Omega)$ and $F\in L^q(\Omega)$ with
$\int_\Omega F +\int_{\partial\Omega} g=0$,
the weak solutions to
\begin{equation}\label{Neumann-problem-3}
\left\{
\aligned
\mathcal{L}_\varep (u_\varep) & =F& & \text{ in }\Omega,\\
\frac{\partial u_\varep}{\partial\nu_\varep} & = g & & \text{ on } \partial\Omega,\\
|\nabla u_\varep| & \in L^\infty(\Omega),
\endaligned
\right.
\end{equation}
satisfy the estimate
\begin{equation}\label{Lipschitz-estimate}
\|\nabla u_\varep\|_{L^\infty(\Omega)}
\le C \big\{ \| g\|_{C^{\eta}(\partial\Omega)} +\| F\|_{L^q(\Omega)}\big\},
\end{equation}
where $C>0$ depends only on $d$, $m$,
$\eta$, $q$, $\mu$, $\lambda$, $\tau$ and $\Omega$.
\end{thm}
\begin{thm}[Nontangential maximal function estimates]
\label{maximal-function-theorem}
Suppose that $A\in \Lambda (\mu,\lambda,\tau)$ and $A=A^*$.
Let $\Omega$ be a bounded $C^{1,\alpha}$ domain
and $1<p<\infty$.
Then, for any $g\in L^p(\partial\Omega)$ with mean value zero,
the weak solutions to
\begin{equation}\label{Neumann-problem-2}
\left\{
\aligned
\mathcal{L}_\varep (u_\varep) & =0& & \text{ in }\Omega,\\
\frac{\partial u_\varep}{\partial\nu_\varep} & = g & & \text{ on } \partial\Omega,\\
(\nabla u_\varep)^*& \in L^p(\partial\Omega), & &
\endaligned
\right.
\end{equation}
satisfy the estimate
\begin{equation}\label{maximal-function-estimate}
\|(\nabla u_\varep)^*\|_{L^p(\partial\Omega)}
+\| \nabla u_\varep\|_{L^q(\Omega)}\le C \, \| g\|_{L^p(\partial\Omega)},
\end{equation}
where $q=\frac{pd}{d-1}$ and
$C>0$ depends only on $d$, $m$, $p$, $\mu$, $\lambda$, $\tau$ and $\Omega$.
\end{thm}
A few remarks on notation are in order.
In Theorem \ref{W-1-p-theorem}, $B^{-1/p, p}(\partial\Omega)$
is the dual of the Besov space $B^{1/p, p^\prime}(\partial\Omega)$
on $\partial\Omega$,
where $p^\prime=\frac{p}{p-1}$,
and $<g^\beta, 1>$ denotes the action of $g^\beta$ on the function $1$.
By a weak solution $u$ to (\ref{W-1-p}), we mean that $u\in W^{1,p}(\Omega)$ and satisfies
\begin{equation}\label{weak-formulation}
\int_\Omega a_{ij}^{\alpha\beta}
\left(\frac{x}{\varep}\right) \frac{\partial u_\varep^\beta}
{\partial x_j}
\cdot \frac{\partial \varphi^\alpha}{\partial x_i}\, dx
=\int_\Omega \left\{ -f_i^\alpha
\frac{\partial \varphi^\alpha}{\partial x_i}
+F^\alpha\varphi^\alpha \right\}\, dx
+<g^\alpha, \varphi^\alpha>,
\end{equation}
for any $\varphi =(\varphi^\alpha)\in C_0^1 (\br^d)$.
In Theorem \ref{maximal-function-theorem} we have used
$(\nabla u_\varep)^*$ to denote the nontangential maximal function of $\nabla u_\varep$.
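Here and in what follows, the nontangential maximal function is defined in the standard way: for $Q\in\partial\Omega$,
\begin{equation*}
(\nabla u_\varep)^* (Q)
=\sup \big\{ |\nabla u_\varep (x)|:\
x\in \Omega \text{ and } |x-Q|\le C_0\, \text{\rm dist} (x, \partial\Omega)\big\},
\end{equation*}
where $C_0>1$ is a suitably large aperture constant depending only on $\Omega$.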
We point out that the Lipschitz estimate in Theorem \ref{Lipschitz-estimate-theorem}
is sharp. Even with $C^\infty$ data, one cannot expect higher order uniform estimates
of $u_\varep$, as $\nabla u_\varep$ is known to converge to $\nabla u_0$ only weakly.
As a result, the use of nontangential maximal functions in Theorem \ref{maximal-function-theorem}
to describe the sharp regularity of solutions with $L^p$ Neumann data appears to be natural and
necessary.
Also note that under the conditions (\ref{ellipticity}) and
(\ref{smoothness}), the existence and uniqueness (modulo additive
constants) of solutions to (\ref{W-1-p}), (\ref{Neumann-problem-3})
and (\ref{Neumann-problem-2}) with sharp regularity
estimates are more or less well known
(see e.g. \cite{Agmon-1959,Agmon-1964,Taylor-tools}).
What is new here is that with the additional periodicity assumption (\ref{periodicity}),
the constants $C$ in the regularity estimates (\ref{W-1-p-estimate}), (\ref{Lipschitz-estimate})
and (\ref{maximal-function-estimate}) are independent of
$\varep$.
In the case of the Dirichlet boundary condition $u_\varep =g$ on $\partial\Omega$
with $g\in B^{1/p^\prime, p}(\partial\Omega)$ or $g\in C^{1,\eta}(\partial\Omega)$,
results analogous to Theorems \ref{W-1-p-theorem} and \ref{Lipschitz-estimate-theorem}
were established by Avellaneda and Lin in \cite{AL-1987,AL-1991}
for $C^{1,\alpha}$ domains (without the assumption $A^*=A$). They also obtained the
nontangential maximal function estimate $\|(u_\varep)^*\|_{L^p(\partial\Omega)}
\le C\| g\|_{L^p(\partial\Omega)}$ for solutions of $\mathcal{L}_\varep(u_\varep)=0$ in $\Omega$
(the case $m=1$ was given in \cite{AL-1987-ho}).
As it was noted in \cite{AL-1987}, uniform regularity estimates,
in addition to being of independent interest, have applications
to homogenization of boundary control of distributed systems \cite{Lions-1985-IMA,
Lions-1988-SIAM, AL-1989-ho}.
Furthermore, they can be used to estimate convergence rates of $u_\varep\to u_0$ as
$\varep\to 0$. In particular, it was proved in \cite{AL-1987} that
$\| u_\varep-u_0\|_{L^\infty(\Omega)} =O(\varep)$,
if $\mathcal{L}_\varep (u_\varep)=\text{\rm div}(f)$ in $\Omega$,
$u_\varep =g$ on $\partial\Omega$, and $f,g$ are in certain function spaces.
Extending the Lipschitz estimate (\ref{Lipschitz-estimate})
to solutions with Neumann boundary conditions has been a longstanding
open problem.
The main reason why it is more difficult to deal with solutions
with Neumann boundary
conditions in Theorem \ref{Lipschitz-estimate-theorem} than solutions with
Dirichlet boundary conditions in \cite{AL-1987,AL-1991}
is that now the boundary conditions in (\ref{Neumann-problem-3})
are $\varep$-dependent,
which causes new difficulties in the estimation of the appropriate boundary correctors.
We have overcome this difficulty, in the presence of symmetry,
thanks to the Rellich estimates obtained in \cite{Kenig-Shen-1, Kenig-Shen-2}.
Neumann boundary conditions are important in applications
of homogenization (see e.g. \cite{bensoussan-1978, Jikov-1994,Lions-1988-SIAM, Oleinik-1992}).
The uniform estimates we establish in this paper can be used to study
convergence problems for solutions $u_\varep$, eigenfunctions and
eigenvalues with Neumann boundary conditions.
As an example,
let $w_\varep (x) =u_\varep (x)-u_0(x) -\varep \chi (\frac{x}{\varep}) \nabla u_0 (x)$,
where $\chi$ denotes the matrix of correctors for $\mathcal{L}_\varep$ in $\br^d$.
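For the reader's convenience, we recall that the correctors $\chi=(\chi_j^\beta)$ are, as usual, defined by the cell problem
\begin{equation*}
\mathcal{L}_1\big(\chi_j^\beta + x_j e^\beta\big)=0 \ \text{ in } \br^d,
\qquad \chi_j^\beta \ \text{ is 1-periodic},
\qquad \int_{[0,1)^d} \chi_j^\beta\, dy=0,
\end{equation*}
where $e^\beta$ denotes the vector with $1$ in the $\beta^{\text{th}}$ position and $0$ elsewhere.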
It can be shown that $w_\varep=w_\varep^{(1)} +w_\varep^{(2)}$, where
$\| \nabla w_\varep^{(1)}\|_{L^p(\Omega)} \le C_p \, \varep \|\nabla^2 u_0\|_{L^p(\Omega)}$
for any $1<p<\infty$, and $|\nabla w_\varep^{(2)} (x)|\text{\rm dist}(x, \partial\Omega)
\le C\varep \|\nabla u_0\|_{L^\infty(\partial\Omega)}$ for any $x\in \Omega$.
We will return to this in a forthcoming publication.
Let $N_\varep(x,y)$ denote the matrix of Neumann functions
for $\mathcal{L}_\varep$
in $\Omega$ (see Section 5).
As a consequence of our uniform H\"older and Lipschitz estimates, we obtain the following
bounds,
\begin{equation}\label{Neumann-function-bound-0}
\aligned
|N_\varep (x,y)| & \le \frac{C}{|x-y|^{d-2}},\\
|\nabla_x N_\varep (x,y)|+|\nabla_y N_\varep (x,y)| &\le \frac{C}{|x-y|^{d-1}},\\
|\nabla_x\nabla_y N_\varep (x,y)| &\le \frac{C}{|x-y|^d},
\endaligned
\end{equation}
for $d\ge 3$ (see Section 8).
In view of the work of Avellaneda and Lin on homogenization
of Poisson's kernel \cite{AL-1989-ho},
we remark that the techniques we develop in this paper
may also be used to establish asymptotics of
$ N_\varep (x,y)$.
This line of research,
together with the convergence results mentioned above,
will be developed in a forthcoming paper.
We should mention that the case $p=2$ in Theorem \ref{maximal-function-theorem}
is contained in \cite{Kenig-Shen-2}.
In fact, for the elliptic system $\mathcal{L}_\varep (u_\varep)=0$
in a bounded Lipschitz domain $\Omega$,
the Neumann problem
with the uniform estimate $\|(\nabla u_\varep)^*\|_{L^p(\partial\Omega)}
\le C\|\frac{\partial u_\varep}{\partial\nu_\varep}\|_{L^p(\partial\Omega)}$
and
the Dirichlet problem with the estimate $\|(u_\varep)^*\|_{L^p(\partial\Omega)}
\le C\| u_\varep\|_{L^p(\partial\Omega)}$, as well as the so-called regularity problem
with the estimate
$\| (\nabla u_\varep)^*\|_{L^p(\partial\Omega)}\le C\| \nabla_{tan} u_\varep\|_{L^p(\partial\Omega)}$,
were solved recently by Kenig and Shen in \cite{Kenig-Shen-2} for $p$ close to $2$
(see \cite{Kenig-book} for references on boundary value problems
in Lipschitz domains for elliptic equations with constant coefficients).
The results in \cite{Kenig-Shen-2} are proved
under the assumption that $A\in \Lambda(\mu, \lambda, \tau)$ and $A^*=A$,
by the method of layer potentials.
In the case of a single equation ($m=1$),
the $L^p$ solvabilities of Neumann, Dirichlet and regularity problems
in Lipschitz domains with uniform nontangential
maximal function estimates were established in \cite{Kenig-Shen-1}
for the sharp ranges of $p$'s
(the result for Dirichlet problem in Lipschitz domains
was obtained earlier by B. Dahlberg \cite{Dahlberg-personal},
using a different approach; see the appendix to \cite{Kenig-Shen-1}
for Dahlberg's proof).
The results in \cite{Kenig-Shen-1, Kenig-Shen-2} rely on uniform
Rellich estimates $\|\frac{\partial u_\varep}{\partial\nu_\varep}\|_{L^2(\partial\Omega)}
\approx \|\nabla_{tan} u_\varep\|_{L^2(\partial\Omega)}$
for solutions of $\mathcal{L}_\varep (u_\varep)=0$ in a Lipschitz domain $\Omega$.
We point out that one of the key steps in the proof of Theorem \ref{Lipschitz-estimate-theorem}
uses the Rellich estimate $\|\nabla u_\varep\|_{L^2(\partial\Omega)}
\le C\|\frac{\partial u_\varep}{\partial\nu_\varep}\|_{L^2(\partial\Omega)}$
in a crucial way.
We now describe the key ideas in the proofs of our main results.
To show Theorem \ref{W-1-p-theorem}, we first establish the uniform
boundary H\"older estimate for local solutions,
\begin{equation}\label{local-Holder-estimate-0}
\| u_\varep\|_{C^{0,\gamma}(B(Q, \rho)\cap\Omega)}
\le C \rho^{-\gamma}
\left(\average_{B(Q,2\rho)\cap\Omega}
|u_\varep|^2 \, dx\right)^{1/2},
\end{equation}
for any $\gamma\in (0,1)$,
where $\mathcal{L}_\varep (u_\varep)=0$ in $B(Q,3\rho)\cap\Omega$
and $\frac{\partial u_\varep}{\partial\nu_\varep}=0$
on $B(Q,3\rho)\cap\partial\Omega$ for some $Q\in \partial\Omega$ and $0<\rho<c$.
The proof of (\ref{local-Holder-estimate-0}) uses a compactness method,
which was developed by Avellaneda and Lin
in \cite{AL-1987,AL-1989-II,AL-1989-ho} for homogenization problems,
with basic ideas originating from the
regularity theory in the calculus of variations
and minimal surfaces.
As in the case of Dirichlet boundary condition, boundary correctors
are not needed for H\"older estimates with Neumann boundary condition.
From (\ref{local-Holder-estimate-0}) one may deduce the weak
reverse H\"older inequality,
\begin{equation}\label{reverse-Holder-0}
\left(\average_{B(Q, \rho)\cap\Omega} |\nabla u_\varep|^p\, dx\right)^{1/p}
\le C_p
\left(\average_{B(Q, 2\rho)\cap\Omega} |\nabla u_\varep|^2 \, dx\right)^{1/2}
\end{equation}
for any $p>2$. By \cite{Geng} this implies that $\|\nabla u_\varep\|_{L^p(\Omega)}
\le C\| f\|_{L^p(\Omega)}$ for $p>2$, if $\mathcal{L}_\varep (u_\varep)
=\text{\rm div}(f)$ in $\Omega$ and $\frac{\partial u_\varep}{\partial\nu_\varep}
=-n\cdot f$ on $\partial\Omega$.
The rest of Theorem \ref{W-1-p-theorem} follows by some duality arguments.
The proof of Theorem \ref{Lipschitz-estimate-theorem} is much more difficult than
that of Theorem \ref{W-1-p-theorem}. Assume that $0\in \partial\Omega$.
After a simple rescaling, the heart of the matter here is to establish the uniform boundary
Lipschitz estimate for local solutions,
\begin{equation}\label{Lipschitz-estimate-0}
\|\nabla u_\varep\|_{L^\infty (B(0, 1)\cap\Omega)}
\le C\big\{ \| u_\varep\|_{L^\infty (B(0, 2)\cap \Omega)}
+\| g\|_{C^\eta(B(0,2)\cap\partial\Omega)}\big\},
\end{equation}
for some $\eta>0$, where $\mathcal{L}_\varep (u_\varep)=0$
in $B(0, 3)\cap \Omega$ and $\frac{\partial u_\varep}{\partial\nu_\varep}
=g$ on $B(0,3)\cap\partial\Omega$.
This problem has been open for more than 20 years, ever since
the same estimate was established in \cite{AL-1987} for
local solutions with the Dirichlet boundary condition
$u_\varep=0$ in $B(0,3)\cap\partial\Omega$.
Our proof of (\ref{Lipschitz-estimate-0})
also uses the compactness method mentioned
above. However, as in the case of the Dirichlet boundary condition,
one needs to introduce suitable boundary correctors in order to fully
take advantage of the fact that
solutions of the homogenized system are in $C^{1,\eta} (B(0,2)\cap \Omega)$.
A major technical breakthrough of this paper
is the introduction and estimates of
such correctors $\Phi_\varep =(\Phi_{\varep, j}^{\alpha\beta})$, where
for each $1\le j\le d$ and $1\le \beta\le m$, $\Phi_{\varep, j}^\beta
=(\Phi_{\varep, j}^{1\beta}, \dots, \Phi_{\varep, j}^{m\beta})$ is the solution to
the Neumann problem
\begin{equation}\label{corrector-0}
\left\{
\begin{aligned}
\mathcal{L}_\varep (\Phi_{\varep, j}^\beta) & =0 &\qquad & \text{ in } \Omega,\\
\frac{\partial}{\partial \nu_\varep} \big( \Phi^\beta_{\varep, j}\big)
& =\frac{\partial }{\partial\nu_0} \big( P_j^\beta\big) & \qquad &\text{ on } \partial\Omega,\\
\Phi_{\varep, j}^\beta (0) & =0.
\end{aligned}
\right.
\end{equation}
Here $P_j^\beta =x_j (0, \dots, 1, \dots, 0)$
with $1$ in the $\beta^{th}$ position
and $\frac{\partial w}{\partial \nu_0}$
denotes the conormal derivative of $w$ associated with the homogenized
operator $\mathcal{L}_0$.
Note that by the boundary H\"older estimate,
$\Phi_{\varep, j}^{\alpha\beta} (x) \to x_j\delta_{\alpha\beta}$
uniformly in $\Omega$ as $\varep\to 0$.
To carry out an elaborate compactness scheme in a similar fashion to that in \cite{AL-1987},
one needs to prove the uniform Lipschitz estimate for the solution of (\ref{corrector-0}),
\begin{equation}\label{corrector-Lipschitz}
\| \nabla \Phi_\varep\|_{L^\infty(\Omega)}\le C.
\end{equation}
The proof of (\ref{corrector-Lipschitz}) relies on two crucial observations. First, one can use
Rellich estimates as well as boundary H\"older estimates to show that
\begin{equation}\label{Neumann-estimate-0}
\int_{\partial\Omega} |\nabla_y \big\{
N_\varep (x,y)-N_\varep (z,y)\big\}|\, d\sigma (y) \le C,
\end{equation}
where $|x-z|\le c\, \text{\rm dist}(x, \partial\Omega)$.
Secondly, if $w_\varep(x)=\Phi_\varep (x) -x I -\varep \chi (x/\varep)$, then
$\frac{\partial w_\varep}{\partial \nu_\varep}$ can be represented as a sum of tangential
derivatives of $g_{ij}$ with $\|g_{ij}\|_{L^\infty(\partial\Omega)}
\le C\varep$.
Since $\mathcal{L}_\varep (w_\varep)=0$ in $\Omega$,
it follows from these observations as well as interior estimates that
$|\nabla w_\varep (x)|\le C\varep [\text{\rm dist} (x,\partial\Omega)]^{-1}$.
This gives the estimate $|\nabla \Phi_\varep (x)|\le C$, if
$\text{dist}(x, \partial\Omega)> \varep$.
The remaining case $\text{dist}(x,\partial\Omega)\le \varep$ follows
by a blow-up argument.
See Section 7 for details.
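The blow-up step just mentioned may be sketched heuristically as follows (a rough outline only; the rescaled data bound and the local $L^2$ bound on the rescaled solution, which come from the boundary H\"older estimates, are taken for granted here, and the full argument is in Section 7):

```latex
% Heuristic sketch of the blow-up argument for dist(x,\partial\Omega)\le\varep.
% Rescale: for fixed j, \beta set
u(y)=\varep^{-1}\,\Phi_{\varep, j}^{\beta}(\varep y),
\qquad\text{so that}\qquad
\mathcal{L}_1 (u)=0 \ \text{ in } \varep^{-1}\Omega,
\quad
\frac{\partial u}{\partial\nu_1}
=\Big(\frac{\partial}{\partial\nu_0}\big(P_j^\beta\big)\Big)(\varep y)
\ \text{ on } \varep^{-1}\partial\Omega.
% The rescaled problem has \varep=1, so the classical (non-uniform in \varep)
% boundary Lipschitz estimates for operators with Holder continuous
% coefficients in C^{1,\alpha_0} domains give |\nabla u|\le C near
% \varep^{-1}\partial\Omega.  Since \nabla\Phi_\varep (x)=(\nabla u)(x/\varep),
% this yields |\nabla\Phi_\varep (x)|\le C when dist(x,\partial\Omega)\le\varep.
```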
We note that the symmetry condition $A^*=A$ is only needed
for using the Rellich estimates.
With the Lipschitz estimate in Theorem \ref{Lipschitz-estimate-theorem}
at our disposal, Theorem \ref{maximal-function-theorem}
for $p>2$ follows from the case $p=2$ (established in \cite{Kenig-Shen-2} for
Lipschitz domains), by a real variable method originating in \cite{Caffarelli-1998} and
further developed in \cite{Shen-2005-bounds,
Shen-2006-ne,Shen-2007-boundary}.
The case $1<p<2$ is handled by establishing an $L^1$ estimate for solutions with
boundary data in the Hardy space $H^1(\partial\Omega)$ and then interpolating
it with the $L^2$ estimates,
as in the case of the Laplacian \cite{Dahlberg-Kenig-1987} (see Section 9).
In view of the Lipschitz estimates in \cite{AL-1987}
for local solutions with Dirichlet boundary condition and
the $L^2$ estimates in \cite{Kenig-Shen-2},
a similar approach also solves the $L^p$ regularity
problem with the estimate $\|(\nabla u_\varep)^*\|_{L^p(\partial\Omega)}
\le C\| \nabla_{tan} u_\varep\|_{L^p(\partial\Omega)}$ in a $C^{1,\alpha}$ domain $\Omega$
for all $1<p<\infty$ (see Section 10).
We further note that the same approach works
equally well for the exterior domain
$\Omega_-=\br^d\setminus \overline{\Omega}$ and gives the solvabilities
of the $L^p$ Neumann and regularity problems in $\Omega_-$.
Consequently, as in the case of the Laplacian
on a Lipschitz domain \cite{Verchota-1984,Dahlberg-Kenig-1987},
one may use the $L^p$ estimates in $\Omega$ and $\Omega_-$ and
the method of layer potentials to show that
solutions to the $L^p$ Neumann and regularity problems in $C^{1,\alpha}$
domains may be represented
by single layer potentials with density functions that are uniformly bounded
in $L^p$.
Similarly, the solutions to the $L^p$ Dirichlet problem may be represented
by double layer potentials with uniformly $L^p$ bounded density functions
(see Section 11).
The summation convention will be used throughout the paper.
Finally we remark that
we shall make little effort to distinguish vector-valued functions or function spaces
from their real-valued counterparts.
This should be clear from the context.
\section{Homogenization and weak convergence}
Let $\mathcal{L}_\varep =-\text{\rm div}(A(x/\varep)\nabla)$ with
matrix $A(y)$ satisfying (\ref{ellipticity})-(\ref{periodicity}).
For each $1\le j\le d$ and $1\le\beta\le m$, let
$\chi_j^\beta =(\chi_j^{1\beta }, \dots, \chi_j^{m\beta})$ be the solution
of the following cell problem:
\begin{equation}\label{cell-problem}
\left\{
\aligned
& \mathcal{L}_1 (\chi_j^\beta)=-\mathcal{L}_1 (P^\beta_j) \quad \text{ in }\brd,\\
&\chi_j^\beta (y) \text{ is periodic with respect to }\mathbb{Z}^d,\\
& \int_{[0,1]^d}
\chi_j^\beta \, dy =0,
\endaligned
\right.
\end{equation}
where $P_j^\beta =P_j^\beta (y)
=y_j(0, \dots, 1, \dots, 0)$ with $1$ in the $\beta^{th}$ position.
The matrix $\chi=\chi(y) =(\chi_j^{\alpha\beta}(y))$ with
$1\le j\le d$ and $1\le \alpha, \beta\le m$ is called the matrix of
correctors for $\{ \mathcal{L}_\varep\}$.
With the summation convention
the first equation in (\ref{cell-problem}) may be written
as
\begin{equation}\label{corrector-equation}
\frac{\partial}{\partial y_i}
\left[ a_{ij}^{\alpha\beta}
+a_{i\ell}^{\alpha\gamma}
\frac{\partial}{\partial y_\ell}\left( \chi_j^{\gamma\beta}\right)\right]=0
\quad
\text{ in }\brd.
\end{equation}
Let $\hat{A} =(\hat{a}_{ij}^{\alpha\beta})$, where $1\le i, j\le d$, $1\le \alpha, \beta\le m$ and
\begin{equation}
\label{homogenized-coefficient}
\hat{a}_{ij}^{\alpha\beta}
=\int_{[0,1]^d}
\left[ a_{ij}^{\alpha\beta}
+a_{i\ell}^{\alpha\gamma}
\frac{\partial}{\partial y_\ell}\left( \chi_j^{\gamma\beta}\right)\right]
\, dy.
\end{equation}
Then $\mathcal{L}_0=-\text{div}(\hat{A}\nabla)$ is the so-called
homogenized operator associated with
$\{ \mathcal{L}_\varep\}$ (see \cite{bensoussan-1978}).
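As a quick illustration (the classical one-dimensional example, which is not needed in the sequel), in the scalar case $d=m=1$ the cell problem and the formula for $\hat{A}$ can be solved in closed form:

```latex
% One-dimensional scalar case d=m=1: the cell problem reduces to an ODE
% on the unit cell,
\frac{d}{dy}\left[ a(y)\left(1+\frac{d\chi}{dy}\right)\right]=0,
% so a(y)(1+\chi'(y)) is constant; by the definition of the homogenized
% coefficient this constant equals \hat{a}.  Integrating
% 1+\chi'=\hat{a}/a over [0,1] and using the periodicity of \chi
% (so that \int_0^1 \chi'\,dy=0) gives the harmonic mean:
\hat{a}=\left(\int_0^1 \frac{dy}{a(y)}\right)^{-1}.
```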
We need the following homogenization result.
\begin{lemma}
\label{lemma-2.1}
Let $\Omega$ be a bounded Lipschitz domain in $\brd$ and
$$
\text{\rm div}\left[A_k\left({x}/\varep_k\right)
\nabla u_k \right]=f\in W_0^{-1,2}(\Omega) \quad \text{ in } \Omega,
$$
where $\varep_k\to 0$ and the matrix $A_k(y)$
satisfies (\ref{ellipticity})-(\ref{periodicity}).
Suppose that $u_k \to u_0$ strongly in $L^2(\Omega)$,
$\nabla u_k\to \nabla u_0$ weakly in $L^2(\Omega)$ and
$A_k \big({x}/{\varep_k}\big) \nabla u_k$ converges weakly in $L^2(\Omega)$.
Also assume that the constant matrix $\hat{A}_k$, defined by (\ref{homogenized-coefficient})
(with $A$ replaced by $A_k$), converges to $A^0$.
Then
$$
A_k \left({x}/{\varep_k}\right)\nabla u_k \to A^0 \nabla u_0 \
\text{ weakly in } L^2(\Omega)
$$
and $\text{\rm div}(A^0\nabla u_0)=f$ in $\Omega$.
\end{lemma}
\begin{proof}
If $A_k$ is independent of $k$, this is a classical result in the theory of
homogenization (see e.g. \cite{bensoussan-1978} or \cite{Chechkin-2007}).
The general case may be proved by the same energy method. We give a proof here
for the sake of completeness.
Let $A_k =(a_{ij,k}^{\alpha\beta})$,
$\hat{A}_k =(\hat{a}_{ij,k}^{\alpha\beta})$ and
$A^0=(b_{ij}^{\alpha\beta})$.
Suppose that
\begin{equation}\label{definition-of-p}
a_{i\ell, k}^{\alpha\gamma} (x/\varep_k) \frac{\partial u_k^\gamma}{\partial x_\ell}
\to p_i^\alpha (x) \quad \text{ weakly in } L^2(\Omega).
\end{equation}
Clearly, $\text{\rm div}(P)=f$ in $\Omega$, where $P=(p_i^\alpha)$.
For $1\le j, \ell\le d$, $1\le \beta\le m$ and $k=1,2, \dots$, write
\begin{equation}\label{div-curl}
\aligned
& a_{i\ell, k}^{\alpha\gamma}
(x/\varep_k) \frac{\partial u_k^\gamma}{\partial x_\ell}
\cdot \frac{\partial }{\partial x_i}
\left\{ \varep_k \chi_{j,k}^{*\alpha\beta} (x/\varep_k)
+x_j \delta_{\alpha\beta}\right\}\\
& \qquad =\frac{\partial u_k^\gamma}{\partial x_\ell}
\cdot
a_{i\ell, k}^{\alpha\gamma}
\frac{\partial }{\partial x_i}
\left\{ \varep_k \chi_{j,k}^{*\alpha\beta} (x/\varep_k)
+x_j \delta_{\alpha\beta}\right\},
\endaligned
\end{equation}
where $\chi_k^* =(\chi_{j,k}^{*\alpha\beta})$ denotes the matrix of correctors for
$(\mathcal{L}_\varep^k)^*$, the adjoint operator of $\mathcal{L}_\varep^k=-\text{div}
(A_k(x/\varep)\nabla )$.
By taking the weak limits on both sides of (\ref{div-curl}) and
using a compensated compactness argument (see e.g. Lemma 5.1 in \cite{Chechkin-2007}),
we obtain
$$
\aligned
& p_i^\alpha (x) \cdot
\int_{[0,1]^d}
\left\{ \frac{\partial }{\partial y_i}
\left[ \chi_{j,k}^{*\alpha\beta} (y)\right]
+\delta_{ij}\delta_{\alpha\beta}\right\}\, dy\\
&\qquad
=\frac{\partial u_0^\gamma}{\partial x_\ell}
\cdot \lim_{k\to\infty}
\int_{[0,1]^d}
a_{i\ell, k}^{\alpha\gamma}
\left\{\frac{\partial}{\partial y_i}
\left[ \chi_{j,k}^{*\alpha\beta}(y)\right]
+\delta_{ij}\delta_{\alpha\beta}\right\}\, dy.
\endaligned
$$
Since
$$
\int_{[0,1]^d}
a_{i\ell, k}^{\alpha\gamma} (y)
\frac{\partial}{\partial y_i}
\left\{ \chi_{j,k}^{*\alpha\beta}(y)\right\} dy
=\int_{[0,1]^d}
a_{ji, k}^{\beta\alpha} (y)\frac{\partial}{\partial y_i}
\left\{ \chi_{\ell, k}^{\alpha\gamma} (y) \right\} dy
$$
(see e.g. \cite[p.122]{bensoussan-1978}),
it follows that
$$
\aligned
p_j^\beta (x)
&=\frac{\partial u_0^\gamma}{\partial x_\ell} \cdot
\lim_{k\to\infty}
\int_{[0,1]^d}
\left\{
a_{j\ell, k}^{\beta\gamma} (y)
+a_{ji, k}^{\beta\alpha}
\frac{\partial}{\partial y_i}
\big[ \chi_{\ell, k}^{\alpha\gamma}(y)\big]\right\} dy\\
&=\frac{\partial u_0^\gamma}{\partial x_\ell}\cdot
\lim_{k\to\infty} \hat{a}^{\beta\gamma}_{j\ell, k}\\
&=b_{j\ell}^{\beta\gamma} \cdot\frac{\partial u_0^\gamma}{\partial x_\ell}.
\endaligned
$$
In view of (\ref{definition-of-p}) this finishes the proof.
\end{proof}
Let $\psi:\mathbb{R}^{d-1}\to \mathbb{R}$ be a $C^{1,\alpha_0}$ function such that
\begin{equation}\label{psi}
\psi (0)=|\nabla\psi (0)|=0
\quad
\text{ and }\quad
\|\nabla \psi\|_{C^{\alpha_0}(\mathbb{R}^{d-1})}
\le M_0,
\end{equation}
where $\alpha_0\in (0,1)$ and $M_0>0$ will be fixed throughout the paper.
For $r>0$, let
\begin{equation}
\label{definition-of-D}
\aligned
& D(r)=D(r, \psi) = \big\{ (x^\prime, x_d)\in\brd: \
|x^\prime|<r \text{ and } \psi(x^\prime)<x_d<\psi(x^\prime) +r \big\},\\
&\widetilde{D}(r)=\widetilde{D}(r, \psi) = \big\{ (x^\prime, x_d)\in\brd: \
|x^\prime|<r \text{ and } \psi(x^\prime)-r<x_d<\psi(x^\prime) +r \big\},\\
& \Delta (r)
=\Delta(r, \psi) =
\big\{ (x^\prime, \psi(x^\prime))\in\br^d: |x^\prime|<r \big\}.
\endaligned
\end{equation}
\begin{lemma}\label{lemma-2.2}
Let $\{\psi_k\}$ be a sequence of $C^{1,\alpha_0}$ functions satisfying (\ref{psi}).
Suppose that
$\psi_k \to \psi_0$ in $C^1(|x^\prime|<r)$ and
$\{ \| v_k\|_{L^2(D(r, \psi_k))}\}$ is bounded.
Then there exist a subsequence, which we still denote by $\{ v_k\}$, and
$v_0\in L^2(D(r,\psi_0))$ such that
$v_k \to v_0 $ weakly in $
L^2(\Omega)$ for any $\Omega\subset\subset D(r, \psi_0)$.
\end{lemma}
\begin{proof}
Let $w_k(x^\prime, x_d)=v_k(x^\prime, x_d+\psi_k(x^\prime))$, defined
on
$$
D(r,0)=\{ (x^\prime, x_d):\
|x^\prime|<r \text{ and } 0<x_d<r\}.
$$
Since $\{ w_k\}$ is bounded in $L^2(D(r, 0))$, there exists a subsequence, which
we still denote by $\{ w_k\}$, such that
$w_k \to w_0$ weakly in $L^2(D(r,0))$.
Let $v_0 (x^\prime, x_d)=w_0 (x^\prime, x_d-\psi_0(x^\prime))$.
It is not hard to verify that
$v_k\to v_0$ weakly in $L^2(\Omega)$ if $\Omega\subset\subset D(r, \psi_0)$.
\end{proof}
The following theorem plays an important role in our compactness argument for the
Neumann problem. Note that (\ref{Neumann-problem-k}) is the weak formulation of
$\text{\rm div}\big( A_k(x/\varep_k)\nabla u_k\big)=0$
in $D(r, \psi_k)$ and $\frac{\partial u_k}{\partial \nu_\varep^k}=g_k$
on $\Delta(r, \psi_k)$.
\begin{thm}\label{compactness-theorem}
Let $\{ A_k(y)\}$ be a sequence of matrices satisfying
(\ref{ellipticity})-(\ref{periodicity}) and $\{ \psi_k\}$ a sequence of $C^{1,\alpha_0}$
functions satisfying (\ref{psi}).
Suppose that
\begin{equation}\label{Neumann-problem-k}
\int_{D(r, \psi_k)}
A_k(x/\varep_k)\nabla u_k \cdot \nabla \varphi\, dx
=\int_{\Delta(r, \psi_k)}
g_k \cdot \varphi\, d\sigma
\end{equation}
for any $\varphi\in C_0^1(\widetilde{D}(r, \psi_k))$, where
$\varep_k\to 0$ and
\begin{equation}\label{compactness-condition}
\| u_k \|_{W^{1,2}(D(r, \psi_k))}+
\| g_k \|_{L^2(\Delta (r, \psi_k))} \le C.
\end{equation}
Then there exist subsequences of $\{ \psi_k\}$, $\{ u_k\}$ and $\{ g_k\}$, which we still denote by
the same notation, a function $\psi_0$ satisfying (\ref{psi}),
$g_0\in L^2(\Delta(r,\psi_0))$, $u_0\in W^{1,2}(D(r, \psi_0))$, and a constant
matrix $A^0$ such that
\begin{equation}\label{compactness-conclusion}
\left\{
\aligned
&\psi_k \to \psi_0 \text{ in } C^1(|x^\prime|<r),\\
& g_k(x^\prime, \psi_k(x^\prime)) \to g_0 (x^\prime, \psi_0 (x^\prime))
\quad \text{ weakly in } L^2 (|x^\prime|< r),\\
& u_k(x^\prime, x_d-\psi_k(x^\prime))
\to u_0 (x^\prime, x_d-\psi_0(x^\prime))
\quad \text{ strongly in } L^2(D(r,0)),
\endaligned
\right.
\end{equation}
and
\begin{equation}\label{compactness-conclusion-1}
\int_{D(r, \psi_0)}
A^0 \nabla u_0 \cdot \nabla \varphi\, dx
=\int_{\Delta(r,\psi_0)}
g_0 \cdot \varphi\, d\sigma
\end{equation}
for any $\varphi\in C_0^1(\widetilde{D}(r, \psi_0))$.
Moreover, the matrix $A^0$, as the limit of a subsequence
of $\{\hat{A}_k\}$,
satisfies the condition
(\ref{ellipticity}).
\end{thm}
\begin{proof}
We first note that (\ref{compactness-conclusion})
follows directly from (\ref{compactness-condition}) by passing to subsequences.
To prove (\ref{compactness-conclusion-1}), we fix $\varphi\in C_0^1
(\widetilde{D}(r, \psi_0))$.
Clearly, if $k$ is sufficiently large, $\varphi\in C_0^1
(\widetilde{D}(r, \psi_k))$.
It is also easy to check that
$$
\int_{\Delta(r, \psi_k)}
g_k \cdot \varphi\, d\sigma
\to \int_{\Delta(r, \psi_0)} g_0 \cdot \varphi\, d\sigma.
$$
By passing to a subsequence we may assume that
$\hat{A}_k \to A^0$.
Thus it suffices to show that
\begin{equation}\label{compactness-1}
\int_{D(r, \psi_k)}
A_k (x/\varep_k)\nabla u_k\cdot \nabla\varphi \, dx
\to
\int_{D(r, \psi_0)}
A^0 \nabla u_0 \cdot \nabla\varphi \, dx.
\end{equation}
In view of Lemma \ref{lemma-2.2} we may assume that $\{ u_k\}$,
$\nabla u_k$, and $A_k(x/\varep_k)\nabla u_k$ converge weakly in $L^2(\Omega)$
for any $\Omega\subset\subset D(r,\psi_0)$.
As a result, $\{ u_k\}$ also converges strongly in $L^2(\Omega)$.
Now, given any $\delta>0$, we may choose a Lipschitz domain $\Omega$ such that
$\overline{\Omega}\subset D(r, \psi_0)$,
\begin{equation}\label{compactness-2}
\big|\int_{D(r,\psi_0)\setminus \Omega}
A^0\nabla u_0\cdot \nabla \varphi\, dx\big|<\delta/3
\end{equation}
and
\begin{equation}\label{compactness-3}
\big|\int_{D(r,\psi_k)\setminus \Omega}
{A}_k (x/\varep_k)\nabla u_k\cdot \nabla \varphi\, dx\big|<\delta/3
\end{equation}
for $k$ sufficiently large. Thus (\ref{compactness-1}) would follow if we can show that
\begin{equation}\label{compactness-4}
\int_\Omega
A_k (x/\varep_k) \nabla u_k \cdot \nabla \varphi\, dx
\to
\int_\Omega
A^0 \nabla u_0 \cdot \nabla \varphi\, dx.
\end{equation}
This, however, is a direct consequence of Lemma \ref{lemma-2.1},
since
$\text{\rm div}(A_k(x/\varep_k)\nabla u_k)=0$ in $\Omega$
by (\ref{Neumann-problem-k}).
\end{proof}
We end this section with the uniform interior gradient estimate, established
in \cite{AL-1987} by
Avellaneda and Lin,
for solutions of $\mathcal{L}_\varep (u_\varep)=0$.
For a ball $B=B(x,r)$ in $\brd$,
we let $\rho B=B(x,\rho r)$.
We will use $\average_E f$ to denote $\frac{1}{|E|}\int_E f$, the average of $f$ over $E$.
\begin{thm}\label{interior-estimate-theorem}
Let $A \in \Lambda(\mu,\lambda,\tau)$.
Suppose that $\mathcal{L}_\varep (u_\varep)=0$ in $2B$.
Then
\begin{equation}\label{interior-estimate}
\sup_{B} |\nabla u_\varep|
\le C
\left(\average_{2B}
|\nabla u_\varep|^2\, dx \right)^{1/2},
\end{equation}
where $C$ depends only on $d$, $m$, $\mu$, $\lambda$ and $\tau$.
\end{thm}
\section{Boundary H\"older estimates}
The goal of this section is to establish uniform
boundary H\"older estimates
for $\mathcal{L}_\varep$ under Neumann boundary condition.
Throughout this section we assume that
$A\in \Lambda (\mu, \lambda,\tau)$.
\begin{thm}\label{boundary-holder-theorem}
Let $\Omega$ be a bounded $C^{1,\alpha_0}$ domain.
Let $p>0$ and $\gamma\in (0,1)$.
Suppose that $\mathcal{L}_\varep (u_\varep)=0$ in $B(Q, r)\cap\Omega$
and $\frac{\partial u_\varep}{\partial\nu_\varep} =g$
on $B(Q,r)\cap\partial\Omega$
for some $Q\in \partial\Omega$ and $0<r<r_0$.
Then
\begin{equation}\label{local-size-estimate}
\sup_{B(Q, r/2)\cap\Omega} |u_\varep|
\le C\left\{
\left(
\average_{B(Q,r)\cap\Omega} |u_\varep|^p \, dx \right)^{1/p}
+ r \| g\|_{L^\infty (B(Q,r)\cap\partial\Omega)}\right\},
\end{equation}
and for $x,y\in B(Q, r/2)\cap \Omega$,
\begin{equation}\label{local-holder-estimate}
|u_\varep (x)-u_\varep (y)|
\le
C\left(\frac{|x-y|}{r}\right)^\gamma
\left\{ \left(\average_{B(Q,r)\cap\Omega} |u_\varep|^p\, dx \right)^{1/p}
+ r \| g\|_{L^\infty (B(Q,r)\cap\partial\Omega)}\right\},
\end{equation}
where $r_0>0$ depends only on $\Omega$, and $C>0$ depends only
on $d$, $m$, $\mu$, $\lambda$, $\tau$, $p$, $\gamma$ and $\Omega$.
\end{thm}
Let $D(\rho,\psi)$ and $\Delta (\rho,\psi)$ be defined by (\ref{definition-of-D}).
By a change of the coordinate system it will suffice
to establish the following.
\begin{thm}\label{boundary-holder-theorem-local}
Let $\gamma\in (0,1)$.
Suppose that $\mathcal{L}_\varep (u_\varep)=0$ in $D(\rho)$ and
$\frac{\partial u_\varep}{\partial \nu_\varep} =g$ on $\Delta(\rho)$
for some $\rho>0$.
Then for any $x,y\in D(\rho/2)$,
\begin{equation}
\label{boundary-holder-estimate}
|u_\varep (x)-u_\varep (y)|
\le C \left(\frac{|x-y|}{\rho}\right)^\gamma
\left\{
\left(\average_{D(\rho)}
|u_\varep |^2\right)^{1/2}
+\rho \| g\|_{L^\infty (\Delta (\rho))}\right\},
\end{equation}
where $D(\rho)=D(\rho, \psi)$, $\Delta(\rho)=\Delta(\rho, \psi)$, and
$C >0$ depends only on $d$, $m$, $\mu$, $\lambda$,
$\tau$, $\gamma$ and $(\alpha_0, M_0)$ in (\ref{psi}).
\end{thm}
The proof of Theorem \ref{boundary-holder-theorem-local}
uses the compactness method developed in \cite{AL-1987, AL-1989-II,
AL-1989-ho} for homogenization problems.
We begin with the well-known Caccioppoli inequality,
\begin{equation}\label{Cacciopoli}
\int_{D(s\rho)}
|\nabla u_\varep|^2\, dx
\le \frac{C}{(t-s)^2 \rho^2}
\int_{D(t\rho)} |u_\varep|^2\, dx
+C \rho \| g\|_{L^2(\Delta(\rho))}^2,
\end{equation}
where $0<s<t<1$, $\mathcal{L}_\varep(u_\varep)=0$ in $D(\rho)$ and
$\frac{\partial u_\varep}{\partial\nu_\varep}=g$ on $\Delta(\rho)$.
The periodicity of $A$ is not needed here.
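For orientation, we recall the standard computation behind (\ref{Cacciopoli}); indices are suppressed and the choice of cutoff is the usual one:

```latex
% Standard derivation of the Caccioppoli inequality (indices suppressed).
% Take a cutoff \eta\in C_0^1(\widetilde{D}(t\rho)) with \eta=1 on D(s\rho)
% and |\nabla\eta|\le C/((t-s)\rho), and test the weak formulation with
% \varphi=\eta^2 u_\varep:
\int \eta^2\, A(x/\varep)\nabla u_\varep\cdot\nabla u_\varep\, dx
=-2\int \eta\, A(x/\varep)\nabla u_\varep\cdot u_\varep\,\nabla\eta\, dx
+\int_{\Delta(t\rho)} g\cdot \eta^2 u_\varep\, d\sigma .
% Ellipticity bounds the left-hand side below by \mu\int\eta^2|\nabla u_\varep|^2;
% the first term on the right is absorbed by the Cauchy inequality with \delta,
% and the boundary term is handled by Cauchy--Schwarz together with a trace
% inequality, which produces the term C\rho\,\|g\|^2_{L^2(\Delta(\rho))}.
```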
For a function $u$ defined on $S$, we will use $(\overline{u})_S$
(and $\average_S$) to denote its
average over $S$.
\begin{lemma}\label{Step-3.1-lemma}
Fix $\beta\in (0,1)$. There exist $\varep_0>0$ and $\theta\in (0,1)$, depending
only on $d$, $m$, $\mu$, $\lambda$, $\tau$, $\beta$ and $(\alpha_0, M_0)$, such that
\begin{equation}
\label{estimate-3.1}
\average_{D(\theta)}
|u_\varep -(\overline{u_\varep})_{D(\theta)}|^2
\le \theta^{2\beta},
\end{equation}
whenever $\varep<\varep_0$, $\mathcal{L}_\varep( u_\varep)=0$ in $D(1)$,
$\frac{\partial u_\varep}{\partial \nu_\varep} =g$ on $\Delta (1)$,
$$
\| g\|_{L^\infty (\Delta (1))}\le 1 \quad \text{ and }\quad
\average_{D(1)}
|u_\varep -(\overline{u_\varep})_{D(1)} |^2
\le 1.
$$
\end{lemma}
\begin{proof}
Let $\mathcal{L}_0=-\text{\rm div} (A^0\nabla)$, where
$A^0$ is a constant matrix satisfying (\ref{ellipticity}).
Let $\beta^\prime =(1+\beta)/2$.
By boundary H\"older estimates for solutions of elliptic systems with
constant coefficients,
\begin{equation}
\label{constant-3.1}
\average_{D(r)}
|w - (\overline{w})_{D(r)}|^2
\le C_0 r^{2\beta^\prime}
\quad \quad \text{ for } 0<r<\frac{1}{4},
\end{equation}
whenever $\mathcal{L}_0 (w)=0$ in $D(1/2)$,
$\frac{\partial w}{\partial \nu_0}=g$ on $\Delta (1/2)$,
\begin{equation}\label{3.1.0}
\| g\|_{L^\infty(\Delta(1/2))}\le 1 \qquad
\text{ and } \qquad
\int_{D(1/2)} |w|^2 \le |D(1)|,
\end{equation}
where $C_0$ depends only on $d$, $m$, $\beta$,
$\mu$ and $(\alpha_0, M_0)$.
Next we choose $\theta\in (0, 1/4)$
so small that $2C_0 \theta^{2\beta^\prime}\le \theta^{2\beta}$.
We shall show by contradiction that for this $\theta$,
there exists $\varep_0>0$, depending only
on $d$, $m$, $\mu$, $\lambda$, $\tau$, $\beta$ and $(\alpha_0, M_0)$, such that
(\ref{estimate-3.1}) holds if $0<\varep<\varep_0$ and
$u_\varep$ satisfies the conditions in
Lemma \ref{Step-3.1-lemma}.
To this end let us suppose that there exist sequences $\{ \varep_k\}$,
$\{ A_k \}$,
$\{ u_{\varep_k}\}$, $\{ g_k\}$ and $\{ \psi_k\}$ such that
$\varep_k \to 0$, $A_k \in \Lambda(\mu, \lambda, \tau)$,
$\psi_k$ satisfies (\ref{psi}),
\begin{equation}\label{3.1.1}
\left\{
\aligned
\mathcal{L}^k_{\varep_k} (u_{\varep_k}) & =0 & & \text{ in } D_k (1),\\
\frac{\partial u_{\varep_k}}{\partial\nu_{\varep_k}} & =g_k
& & \text{ on }\Delta_k(1),
\endaligned
\right.
\end{equation}
\begin{equation}\label{3.1.2}
\| g_k \|_{L^\infty(\Delta_k (1))} \le 1, \qquad
\average_{D_k(1)}
|u_{\varep_k} -(\overline{u_{\varep_k}})_{D_k(1)}|^2\le 1
\end{equation}
and
\begin{equation}\label{3.1.3}
\average_{D_k (\theta)}
|u_{\varep_k} -(\overline{u_{\varep_k}})_{D_k(\theta)}|^2
> \theta^{2\beta},
\end{equation}
where $\mathcal{L}_{\varep_k}^k
=-\text{\rm div} \big(A_k(x/\varep_k)\nabla\big)$,
$D_k(r) =D(r, \psi_k)$ and $\Delta_k (r)=\Delta(r, \psi_k)$.
By subtracting a constant we may assume that $(\overline{u_{\varep_k}})_{D_k(1)} =0$.
Thus it follows from (\ref{3.1.2}) and Caccioppoli's inequality
(\ref{Cacciopoli}) that the norm of
$ u_{\varep_k}$ in $W^{1,2}(D_k(1/2)) $ is uniformly bounded.
In view of Theorem \ref{compactness-theorem}, by passing to subsequences, we may assume that
\begin{equation}\label{3.1.4}
\left\{
\aligned
&\psi_k \to \psi_0 \quad \text{ in }C^1(|x^\prime|<1),\\
&g_k(x^\prime, \psi_k (x^\prime))\to g_0(x^\prime, \psi_0(x^\prime)) \quad\text{ weakly
in } L^2(|x^\prime|<1),\\
& u_{\varep_k}(x^\prime, x_d-\psi_k (x^\prime))\to u_0 (x^\prime, x_d-\psi_0(x^\prime))
\quad \text{ strongly in } L^2(D(1/2,0)),
\endaligned
\right.
\end{equation}
and
\begin{equation}\label{3.1.5}
\left\{
\aligned
& \text{div}(A^0\nabla u_0)=0 & \quad & \text{ in } D(1/2, \psi_0),\\
& \frac{\partial u_0}{\partial \nu_0} =g_0 & \quad& \text{ on } \Delta(1/2, \psi_0),
\endaligned
\right.
\end{equation}
where $A^0$ is a constant matrix satisfying (\ref{ellipticity}).
Using (\ref{3.1.4}) one may verify that
$$
|D_k(r)|\to |D_0(r)|,\ \
\| g_0\|_{L^\infty(\Delta(1, \psi_0))}\le 1,\ \
(\overline{u_{\varep_k}})_{D_k(r)}\to (\overline{u_0})_{D_0(r)}
$$
and
\begin{equation}\label{3.1.6}
\int_{D_k(r)}
|u_{\varep_k}
-(\overline{u_{\varep_k}})_{D_k(r)}|^2
\to
\int_{D_0(r)}
|u_0
-(\overline{u_0})_{D_0(r)}|^2
\end{equation}
for any $r\in (0,1]$, where $D_0(r)=D(r, \psi_0)$.
It follows that
\begin{equation}\label{3.1.6-1}
\aligned
\average_{D_0(1)} |u_0|^2 & \le 1,\\
\average_{D_0 (\theta)}
|u_{0} -(\overline{u_{0}})_{D_0(\theta)}|^2
& \ge \theta^{2\beta}.
\endaligned
\end{equation}
In view of (\ref{constant-3.1})-(\ref{3.1.0}) and (\ref{3.1.6-1}) we obtain
$\theta^{2\beta}\le C_0 \theta^{2\beta^\prime}$.
This contradicts $2C_0\theta^{2\beta^\prime}
\le \theta^{2\beta}$.
\end{proof}
\begin{lemma}\label{Step-3.2-lemma}
Fix $\beta\in (0,1)$. Let $\varep_0$, $\theta$ be the constants
given by Lemma \ref{Step-3.1-lemma}.
Suppose that $\mathcal{L}_\varep (u_\varep) =0$
in $D(1, \psi)$ and $\frac{\partial u_\varep}{\partial\nu_\varep}=g$ on $\Delta(1,\psi)$.
Then, if $\varep< \theta^{k-1}\varep_0$ for some $k\ge 1$,
\begin{equation}\label{3.2.1}
\average_{D(\theta^k, \psi)}
|u_\varep -(\overline{u_\varep})_{D(\theta^k, \psi)}|^2
\le \theta^{2k\beta} J^2,
\end{equation}
where
$$
J=\max \left\{
\left(\average_{D(1, \psi)}
|u_\varep -(\overline{u_\varep})_{D(1, \psi)}|^2\right)^{1/2},\
\| g\|_{L^\infty(\Delta(1,\psi))}\right\}.
$$
\end{lemma}
\begin{proof}
The lemma is proved by induction on $k$.
Note that the case $k=1$ is given by Lemma \ref{Step-3.1-lemma}.
Assume now that the lemma holds for some $k\ge 1$.
Let $\varep < \theta^k \varep_0$. We apply Lemma \ref{Step-3.1-lemma}
to $w(x)=u_\varep(\theta^k x)$ in $D(1, \psi_k)$, where
$\psi_k(x)=\theta^{-k}\psi(\theta^k x)$.
Since $\mathcal{L}_{\varep/\theta^k} (w)=0$ in $D(1,\psi_k)$,
this gives
$$
\aligned
&
\average_{D(\theta^{k+1}, \psi)}
|u_\varep- (\overline{u_\varep})_{D(\theta^{k+1}, \psi)}|^2\\
& \quad =
\average_{D(\theta, \psi_k)}
|w- (\overline{w})_{D(\theta, \psi_k)}|^2\\
&\quad
\le \theta^{2\beta}
\max \left\{\average_{D(1, \psi_k)}
|w-(\overline{w})_{D(1, \psi_k)}|^2,\
\theta^{2k} \| g\|^2_\infty\right\}\\
&\quad
= \theta^{2\beta}
\max \left\{\average_{D(\theta^k, \psi)}
|u_\varep-(\overline{u_\varep})_{D(\theta^k, \psi)}|^2,\
\theta^{2k} \| g\|^2_\infty\right\}\\
&\quad
\le \theta^{2(k+1)\beta} J^2,
\endaligned
$$
where $\| g\|_\infty
=\| g\|_{L^\infty(\Delta(1,\psi))}$ and the last step follows by the
induction assumption.
Here we also have used the fact that
$\|\nabla\psi_k\|_{C^{\alpha_0}(\mathbb{R}^{d-1})}
\le \|\nabla \psi\|_{C^{\alpha_0}(\mathbb{R}^{d-1})} \le M_0$.
\end{proof}
\noindent{\bf Proof of Theorem \ref{boundary-holder-theorem-local}}.
By rescaling we may assume that $\rho=1$. We may also assume that
$\varep< \varep_0$, since the case $\varep\ge\varep_0$ follows directly from
classical regularity theory. We may further assume that
$$
\| g\|_{L^\infty(\Delta(1))}\le 1
\quad\text{ and } \quad
\int_{D(1)} |u_\varep|^2\le 1.
$$
Under these assumptions we will show that
\begin{equation}\label{3.3.1}
\average_{D(r)}
|u_\varep-(\overline{u_\varep})_{D(r)} |^2 \le C r^{2\beta}
\end{equation}
for any $r\in (0,1/4)$.
The desired estimate (\ref{boundary-holder-estimate}) with $p=2$ follows from
the interior estimate (\ref{interior-estimate}) and (\ref{3.3.1}),
using Campanato's characterization of H\"older spaces (see e.g. \cite{Giaquinta}).
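For the reader's convenience, we recall the form of Campanato's characterization used here (stated for domains satisfying an interior measure density condition, which $C^{1,\alpha_0}$ domains do; see \cite{Giaquinta}):

```latex
% Campanato's characterization of Holder spaces: if D satisfies
% |B(x,r)\cap D|\ge c\, r^d for all x\in D and 0<r<\mathrm{diam}(D),
% then for 0<\beta<1,
[u]_{C^{0,\beta}(D)}
\le C \sup_{x\in D,\ 0<r<\mathrm{diam}(D)}
r^{-\beta}
\left( \average_{B(x,r)\cap D}
|u-(\overline{u})_{B(x,r)\cap D}|^2 \right)^{1/2},
% with the reverse inequality immediate.  Thus the decay of the L^2 mean
% oscillation at all points and scales yields the C^{0,\beta} bound.
```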
To prove (\ref{3.3.1}) we first consider the case $r\ge (\varep/\varep_0)$.
Choose $k\ge 0$ so that $\theta^{k+1}\le r< \theta^{k}$.
Then $\varep \le \varep_0 r <\varep_0\theta^k$.
It follows from Lemma \ref{Step-3.2-lemma} that
$$
\aligned
& \average_{D(r)}
|u_\varep -(\overline{u_\varep})_{D(r)}|^2
\le C\average_{D(\theta^k)}
|u_\varep -(\overline{u_\varep})_{D(\theta^k)}|^2
\\
&\quad\quad
\le C\theta^{2k\beta}
\le Cr^{2\beta}.
\endaligned
$$
Next suppose that $r<(\varep/\varep_0)$.
Let $w(x)=u_\varep (\varep x)$. Then
$\mathcal{L}_1 (w)=0$ in $D(\varep_0^{-1}, \psi_\varep)$, where
$\psi_\varep (x^\prime)=\varep^{-1} \psi(\varep x^\prime)$.
By the classical regularity we obtain
$$
\aligned
&\average_{D(r,\psi)} |u_\varep -(\overline{u_\varep})_{D(r,\psi)}|^2
=\average_{D(\frac{r}{\varep}, \psi_\varep)}
|w-(\overline{w})_{D(\frac{r}{\varep}, \psi_\varep)}|^2\\
&\le C\left(\frac{r}{\varep}\right)^{2\beta}
\max \left\{
\average_{D(\frac{1}{\varep_0}, \psi_\varep)}
|w-(\overline{w})_{D(\frac{1}{\varep_0}, \psi_\varep)}|^2,\
\varep^2 \| g\|^2_\infty\right\}\\
& = C\left(\frac{r}{\varep}\right)^{2\beta}
\max \left\{
\average_{D(\frac{\varep}{\varep_0}, \psi)}
|u_\varep-(\overline{u_\varep})_{D(\frac{\varep}{\varep_0}, \psi)}|^2,\
\varep^2 \| g\|^2_\infty\right\}\\
&\le C\left(\frac{r}{\varep}\right)^{2\beta}
\left(\frac{\varep}{\varep_0}\right)^{2\beta}
=C\varep_0^{-2\beta} r^{2\beta},
\endaligned
$$
where the last inequality follows from the previous case $r=(\varep/\varep_0)$.
This finishes the proof of (\ref{3.3.1}) and thus of Theorem \ref{boundary-holder-theorem-local}.
\qed
We are now in a position to give the proof of Theorem \ref{boundary-holder-theorem}.
\noindent{\bf Proof of Theorem \ref{boundary-holder-theorem}.}
By rescaling we may assume that $r=1$.
The case $p=2$ follows directly from Theorem \ref{boundary-holder-theorem-local}.
To handle the case $0<p<2$, we note that by a simple covering argument,
estimate (\ref{local-size-estimate}) for $p=2$ gives
\begin{equation}\label{3.3.2}
\sup_{B(Q,s)\cap \Omega} |u_\varep|
\le
C\left\{ \frac{1}{(t-s)^d}
\left(\average_{B(Q,t)\cap\Omega} |u_\varep|^2\right)^{1/2}
+ \| g\|_{L^\infty(B(Q, 1)\cap\partial\Omega)}\right\},
\end{equation}
where $(1/4)<s<t<1$. By a convexity argument (see e.g. \cite[p.173]{Fefferman-Stein-1972}),
estimate (\ref{3.3.2})
implies that for any $p>0$,
\begin{equation}\label{3.3.3}
\left(\average_{B(Q,1/2)\cap\Omega} |u_\varep|^2\right)^{1/2}
\le
C_p\left\{
\left(\average_{B(Q,1)\cap\Omega} |u_\varep|^p\right)^{1/p}
+ \| g\|_{L^\infty(B(Q, 1)\cap\partial\Omega)}\right\}.
\end{equation}
The case $0<p<2$ now follows from estimate (\ref{3.3.3}) and the case $p=2$.
\qed
\section{Proof of Theorem \ref{W-1-p-theorem}}\label{section-4}
Under conditions (\ref{ellipticity}) and (\ref{smoothness}),
weak solutions to (\ref{W-1-p}) exist and are unique, up to
an additive constant, provided that the data
satisfy the necessary condition
$\int_\Omega F^\beta+<g^\beta, 1>=0$ for $1\le \beta\le m$.
In this section we will show that the weak solutions satisfy
the uniform $W^{1,p}$ estimate in Theorem \ref{W-1-p-theorem}.
Our starting point is the following
theorem established by J. Geng in \cite{Geng}, using a real variable
method originating in \cite{Caffarelli-1998} and further developed
in \cite{Shen-2005-bounds, Shen-2006-ne, Shen-2007-boundary}.
\begin{thm}\label{Geng-theorem}
Let $p>2$ and $\Omega$ be a bounded Lipschitz domain. Let $\mathcal{L}=-\text{\rm div}(A (x)\nabla)$
be an elliptic operator with coefficients satisfying (\ref{ellipticity}).
Suppose that
\begin{equation}\label{reverse-holder}
\left\{ \average_{B\cap\Omega}
|\nabla u|^p\right\}^{1/p}
\le C_0
\left\{ \average_{2B\cap\Omega}
|\nabla u|^2\right\}^{1/2},
\end{equation}
whenever $u\in W^{1,2} (3B\cap\Omega)$,
$\mathcal{L} (u) =0$ in $3B\cap\Omega$,
and $\frac{\partial u}{\partial\nu} =0$ on $3B\cap \partial\Omega$.
Here $B=B(Q, r)$ is a ball with the property
that $0<r<r_0$ and either $Q\in \partial\Omega$ or
$B(Q,3r)\subset \Omega$.
Then, for any $f\in L^p (\Omega)$, the unique (up to constants) $W^{1,2}$ solution to
\begin{equation}
\left\{
\aligned
\mathcal{L} (u) & =\text{\rm div} (f) & \quad & \text{ in } \Omega,\\
\frac{\partial u}{\partial\nu}
&=-n\cdot f &\quad & \text{ on } \partial\Omega,
\endaligned
\right.
\end{equation}
satisfies the estimate
\begin{equation}\label{4.1.2}
\| \nabla u\|_{L^p(\Omega)} \le C_p \| f\|_{L^p(\Omega)},
\end{equation}
where $C_p$ depends only on $d$, $m$, $p$,
$\mu$, $r_0$, $\Omega$ and the constant $C_0$ in (\ref{reverse-holder}).
\end{thm}
Now suppose that $A\in \Lambda (\mu, \lambda, \tau)$ and $p>2$, and
let $\Omega$ be a bounded $C^{1,\alpha_0}$ domain.
Suppose that $\mathcal{L}_\varep (u_\varep)=0$ in $3B\cap\Omega$ and
$\frac{\partial u_\varep}{\partial\nu_\varep}=0$
on $3B\cap\partial\Omega$.
If $3B\subset \Omega$, the weak reverse H\"older
inequality (\ref{reverse-holder}) for $u_\varep$ follows
from the interior estimate (\ref{interior-estimate}).
Suppose that $Q\in \partial\Omega$ and $B=B(Q,r)$.
We may use the interior estimate
and boundary H\"older estimate (\ref{local-holder-estimate})
to obtain
\begin{equation}\label{4.1.3}
\aligned
|\nabla u_\varep (x)|
&\le C \delta(x)^{-1}\left(\average_{B(x,c\delta (x))}
|u_\varep (y)- u_\varep (x)|^2\, dy\right)^{1/2}\\
&\le C_\gamma
\left(\frac{r}{\delta (x)}\right)^{\gamma}
\left(\average_{B(Q,2r)\cap \Omega}
|\nabla u_\varep|^2\, dy\right)^{1/2}
\endaligned
\end{equation}
for any $\gamma\in (0,1)$ and $x\in B(Q,r)\cap \Omega$,
where $\delta (x)=\text{\rm dist}(x, \partial\Omega)$.
Choose $\gamma\in (0,1)$ so that $p\gamma<1$.
It is easy to see that (\ref{4.1.3}) implies
$$
\left(\average_{B\cap\Omega}
|\nabla u_\varep|^p\right)^{1/p}
\le C_p
\left(\average_{2B\cap\Omega}
|\nabla u_\varep|^2\right)^{1/2}.
$$
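The passage from the pointwise bound to the $L^p$ average rests only on the elementary computation below (a sketch; the constants $c$, $C$ are generic and depend on the character of $\Omega$):

```latex
% Why p\gamma<1 suffices: integrate the pointwise bound in x.
% For a C^{1,\alpha_0} (in particular Lipschitz) domain, the level sets
% \{x\in B(Q,r)\cap\Omega:\ t<\delta(x)<2t\} have measure at most C r^{d-1} t,
% and summing over dyadic t gives
\int_{B(Q,r)\cap\Omega}\left(\frac{r}{\delta(x)}\right)^{p\gamma} dx
\le C\, r^{d-1}\int_0^{Cr}\left(\frac{r}{t}\right)^{p\gamma} dt
\le C\, r^d,
% where the t-integral converges at t=0 precisely because p\gamma<1.
% Dividing by |B(Q,r)\cap\Omega|\approx r^d yields the stated weak reverse
% Holder inequality.
```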
In view of Theorem \ref{Geng-theorem}
we have proved Theorem \ref{W-1-p-theorem}
for the case $p>2$, $g=0$ and $F=0$.
\begin{lemma}\label{lemma-4.1}
Suppose $A\in \Lambda(\mu, \lambda,\tau)$.
Let $f\in L^p(\Omega)$, where
$\Omega$ is a bounded $C^{1,\alpha_0}$ domain and $1<p<\infty$.
Let $u_\varep\in W^{1,p}(\Omega)$ be a weak solution
to $\mathcal{L}_\varep (u_\varep)=\text{\rm div}(f)$ in $\Omega$ and
$\frac{\partial u_\varep}{\partial\nu_\varep} =-n\cdot f$ on $\partial\Omega$.
Then $\|\nabla u_\varep\|_{L^p(\Omega)}
\le C_p\, \| f\|_{L^p(\Omega)}$.
\end{lemma}
\begin{proof}
The case $p>2$ was proved above.
Suppose that $1<p<2$.
Let $g\in C^\infty_0(\Omega)$ and $v_\varep$ be a weak solution of
$\mathcal{L}_\varep^* (v_\varep)=\text{div} (g)$
and $\frac{\partial v_\varep}{\partial\nu^*_{\varep}}
=0$ on $\partial\Omega$,
where $\mathcal{L}_\varep^*$ denotes the adjoint of $\mathcal{L}_\varep$.
Since $A^*\in \Lambda(\mu, \lambda, \tau)$ and $p^\prime>2$,
we have $\|\nabla v_\varep\|_{L^{p^\prime}(\Omega)} \le C \| g\|_{L^{p^\prime}(\Omega)}$.
Also, note that
\begin{equation}\label{4.1.1}
\int_\Omega
f_i^\alpha \cdot \frac{\partial v_\varep^\alpha}{\partial x_i}\, dx
=\int_\Omega a_{ij}^{\alpha\beta} \left(\frac{x}{\varep}\right)
\frac{\partial u_\varep^\beta}{\partial x_j} \cdot\frac{\partial v_\varep^\alpha}{\partial x_i}\, dx
=\int_\Omega g_i^\alpha \cdot \frac{\partial u_\varep^\alpha}{\partial x_i}\,dx,
\end{equation}
where $f=(f_i^\alpha)$ and $g=(g_i^\alpha)$.
The estimate $\|\nabla u_\varep\|_{L^p(\Omega)}
\le C\| f\|_{L^p(\Omega)}$ now follows from (\ref{4.1.1}) by duality.
\end{proof}
\begin{lemma}\label{lemma-4.2}
Suppose that $A\in \Lambda(\mu, \lambda, \tau)$.
Let $g=(g^\alpha)\in B^{-1/p, p}(\partial\Omega)$, where
$\Omega$ is a bounded $C^{1, \alpha_0}$ domain, $1<p<\infty$ and
$<g^\alpha,1>=0$. Let $u_\varep\in W^{1, p}(\Omega)$ be a weak solution to
$\mathcal{L}_\varep (u_\varep)=0$ in $\Omega$
and $\frac{\partial u_\varep}{\partial \nu_\varep}
=g$ on $\partial\Omega$.
Then $\|\nabla u_\varep\|_{L^p(\Omega)}\le C_p\, \| g\|_{B^{-1/p, p}(\partial\Omega)}$.
\end{lemma}
\begin{proof}
Let $f\in C_0^\infty(\Omega)$ and $v_\varep$ be a weak solution to
$\mathcal{L}_\varep^* (v_\varep)=\text{\rm div}(f)$ in $\Omega$ and $\frac{\partial v_\varep}
{\partial\nu^*_\varep}=0$ on $\partial\Omega$.
Since $A^*\in \Lambda(\mu, \lambda,\tau)$, by Lemma \ref{lemma-4.1},
we have $\|\nabla v_\varep\|_{L^{p^\prime}(\Omega)} \le C\, \| f\|_{L^{p^\prime}(\Omega)}$.
Note that
\begin{equation}\label{4.2.1}
\int_\Omega f_i^\alpha \cdot \frac{\partial u_\varep^\alpha}{\partial x_i}\, dx
=-\int_\Omega a_{ij}^{\alpha\beta}
\left(\frac{x}{\varep}\right) \frac{\partial u_\varep^\beta}{\partial x_j}
\cdot \frac{\partial v_\varep^\alpha}{\partial x_i}\, dx
=-<g, v_\varep>.
\end{equation}
Let $E$ be the average of $v_\varep$ over $\Omega$. Then
\begin{equation}\label{4.2.2}
\aligned
\big| <g, v_\varep>\big|
& =\big| <g, v_\varep-E>\big|
\le \| g\|_{B^{-1/p, p}(\partial\Omega)}
\| v_\varep -E\|_{B^{1/p, p^\prime}(\partial\Omega)}\\
&\le C\, \| g\|_{B^{-1/p, p}(\partial\Omega)} \| v_\varep-E\|_{W^{1,p^\prime}(\Omega)} \\
& \le C\, \| g\|_{B^{-1/p, p}(\partial\Omega)}
\|\nabla v_\varep\|_{L^{p^\prime}(\Omega)}\\
& \le C
\| g\|_{B^{-1/p, p}(\partial\Omega)} \| f\|_{L^{p^\prime} (\Omega)},
\endaligned
\end{equation}
where we have used a trace theorem for the second inequality
and Poincar\'e inequality for the third.
The estimate $\|\nabla u_\varep\|_{L^p(\Omega)}
\le C\, \| g\|_{B^{-1/p, p}(\partial\Omega)}$
follows from (\ref{4.2.1})-(\ref{4.2.2}) by duality.
\end{proof}
Let $1<q<d$ and $\frac{1}{p}=\frac{1}{q}-\frac{1}{d}$.
In the proof of the next lemma, we will need the following Sobolev inequality
\begin{equation}\label{Sobolev}
\left(\int_\Omega |u|^p\, dx \right)^{1/p}
\le C\left(\int_\Omega |\nabla u|^q \, dx \right)^{1/q},
\end{equation}
where $u\in W^{1,q}(\Omega)$ and $\int_{\partial\Omega} u =0$.
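We remark that in the duality arguments below, (\ref{Sobolev}) is applied with dual exponents: since $\frac{1}{p}=\frac{1}{q}-\frac{1}{d}$,
$$
\frac{1}{q^\prime}=1-\frac{1}{q}=1-\frac{1}{p}-\frac{1}{d}=\frac{1}{p^\prime}-\frac{1}{d},
$$
so that (\ref{Sobolev}), with $(q,p)$ replaced by $(p^\prime, q^\prime)$, yields
$\| u\|_{L^{q^\prime}(\Omega)}\le C\,\|\nabla u\|_{L^{p^\prime}(\Omega)}$
whenever $u\in W^{1,p^\prime}(\Omega)$ and $\int_{\partial\Omega} u=0$.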
\begin{lemma}\label{lemma-4.4}
Suppose that $A\in \Lambda(\mu, \lambda,\tau)$.
Let $F\in L^q(\Omega)$, where $1<q<d$ and $\Omega$ is a bounded $C^{1, \alpha_0}$
domain.
Let $u_\varep\in W^{1,p}(\Omega)$ be a weak solution to
$\mathcal{L}_\varep (u_\varep)=F$ in $\Omega$ and $\frac{\partial u_\varep}{\partial\nu_\varep}
=-b$ on $\partial\Omega$, where $\frac{1}{p}=\frac{1}{q}-\frac{1}{d}$
and $b=\frac{1}{|\partial\Omega|}\int_\Omega F$.
Then $\|\nabla u_\varep\|_{L^p(\Omega)} \le C\, \| F\|_{L^q(\Omega)}$.
\end{lemma}
\begin{proof}
Let $f\in C_0^\infty(\Omega)$ and $v_\varep$ be a weak solution to
$(\mathcal{L}_\varep)^* (v_\varep)=\text{\rm div} (f)$ in $\Omega$
and $\frac{\partial v_\varep}{\partial\nu^*_\varep}=0$ on $\partial\Omega$.
By Lemma \ref{lemma-4.1}, we have $\|\nabla v_\varep\|_{L^{p^\prime}(\Omega)}
\le C\, \| f\|_{L^{p^\prime}(\Omega)}$.
Note that
\begin{equation}\label{4.4.1}
\aligned
\int_\Omega \frac{\partial u_\varep^\alpha}{\partial x_i} \cdot f_i^\alpha\, dx
& =-\int_\Omega a_{ij}^{\alpha\beta}\left(\frac{x}{\varep}\right) \frac{\partial u_\varep^\beta}
{\partial x_j}\cdot \frac{\partial v_\varep^\alpha}{\partial x_i}\, dx\\
&=-\int_\Omega F\cdot v_\varep\, dx
+\int_{\partial\Omega} b\cdot v_\varep\, d\sigma\\
&=-\int_\Omega F (v_\varep -E)\, dx,
\endaligned
\end{equation}
where $E$ is the average of $v_\varep$ over $\partial\Omega$.
It follows from (\ref{4.4.1}) and Sobolev inequality (\ref{Sobolev}) that
$$
\aligned
\big|\int_\Omega \frac{\partial u_\varep^\alpha}{\partial x_i} \cdot f_i^\alpha\, dx\big|
& \le \|F\|_{L^q(\Omega)}
\| v_\varep -E\|_{L^{q^\prime}(\Omega)}\\
& \le C \| F\|_{L^q(\Omega)} \|\nabla v_\varep\|_{L^{p^\prime}(\Omega)}\\
&\le C \| F\|_{L^q(\Omega)}\| f\|_{L^{p^\prime}(\Omega)}.
\endaligned
$$
By duality this gives $\|\nabla u_\varep\|_{L^p(\Omega)} \le C\, \|F\|_{L^q(\Omega)}$.
\end{proof}
\noindent{\bf Proof of Theorem \ref{W-1-p-theorem}.}
Let $v_\varep$ be a weak solution to
$\mathcal{L}_\varep (v_\varep) =\text{\rm div} (f)$ in $\Omega$
and $\frac{\partial v_\varep}{\partial \nu_\varep} = -n\cdot f$ on
$\partial\Omega$.
Let $w_\varep$ be a weak solution to
$\mathcal{L}_\varep (w_\varep) =F$ in $\Omega$ and
$\frac{\partial w_\varep}{\partial \nu_\varep} = -b$ on
$\partial\Omega$, where $b=\frac{1}{|\partial\Omega|}\int_\Omega F$.
Finally, let $h_\varep =u_\varep -v_\varep-w_\varep$. Then
$\mathcal{L}_\varep (h_\varep) =0$ in $\Omega$ and
$\frac{\partial h_\varep}{\partial\nu_\varep}
=g+b$ on $\partial\Omega$.
It follows from Lemmas \ref{lemma-4.1}, \ref{lemma-4.2} and \ref{lemma-4.4} that
$$
\aligned
\|\nabla u_\varep\|_{L^p(\Omega)}
& \le \|\nabla v_\varep\|_{L^p(\Omega)}
+\| \nabla w_\varep\|_{L^p(\Omega)} +\|\nabla h_\varep\|_{L^p(\Omega)}\\
& \le C\,
\left\{ \| f\|_{L^p(\Omega)}
+\| F\|_{L^q(\Omega)}
+\| g\|_{B^{-1/p, p}(\partial\Omega)}\right\},
\endaligned
$$
where $q=\frac{pd}{p+d}$ for $p>\frac{d}{d-1}$, and $q>1$ for $1<p\le \frac{d}{d-1}$.
This completes the proof.
\qed
\section{A matrix of Neumann functions}
Let $\Gamma_\varep (x,y)
=\big(\Gamma_{A,\varep}^{\alpha\beta} (x,y)\big)_{m\times m}$
denote the matrix of fundamental solutions of $\mathcal{L}_\varep$ in $\br^d$,
with pole at $y$. Under the assumption $A\in \Lambda(\mu,\lambda, \tau)$,
one may use the interior estimate (\ref{interior-estimate}) to show that
for $d\ge 3$,
\begin{equation}\label{fundamental-estimate-1}
|\Gamma_\varep (x,y)|\le C|x-y|^{2-d}
\end{equation}
and
\begin{equation}\label{fundamental-estimate-2}
|\nabla_x\Gamma_\varep (x,y)| +|\nabla_y \Gamma_\varep (x,y)|\le C |x-y|^{1-d},
\end{equation}
where
$C$ depends only on $d$, $m$, $\mu$, $\lambda$ and $\tau$
(see e.g. \cite{Hofmann-Kim-2007}; the size estimate (\ref{fundamental-estimate-1})
also follows from \cite{ERS}).
Let $V_\varep (x,y) =\big(V_{A,\varep}^{\alpha\beta}(x,y)\big)_{m\times m}$,
where for each $y\in \Omega$,
$V_\varep^\beta
(x,y)=\big(V_{A,\varep}^{1\beta}(x,y),\dots, V_{A,\varep}^{m\beta}(x,y)\big)$ solves
\begin{equation}\label{definition-of-V}
\left\{
\aligned
\mathcal{L}_\varep \big( V_\varep^\beta (\cdot, y)\big) & =0 \qquad\text{ in } \Omega,\\
\frac{\partial}{\partial \nu_\varep}
\big\{ V_\varep^\beta (\cdot, y)\big\} & =\frac{\partial}{\partial\nu_\varep}
\big\{ \Gamma_\varep^\beta (\cdot, y)\big\} +\frac{e^\beta}{|\partial\Omega|} \qquad
\text{ on }\partial\Omega,\\
\int_{\partial\Omega} V_\varep^{\beta} (x,y)\, d\sigma (x)
& =\int_{\partial\Omega}\Gamma_\varep^{\beta} (x,y)\, d\sigma (x),
\endaligned
\right.
\end{equation}
where $\Gamma_\varep^\beta (x,y)=(\Gamma_{A,\varep}^{1\beta}(x,y),
\dots, \Gamma_{A,\varep}^{m\beta}
(x,y))$
and $e^\beta =(0, \dots, 1, \dots, 0)$ with $1$ in the $\beta^{th}$ position.
We now define
\begin{equation}\label{definition-of-N}
N_\varep (x,y)=\big( N^{\alpha\beta}_{A,\varep} (x,y)\big)_{m\times m}
=\Gamma_\varep (x,y) -V_\varep (x,y),
\end{equation}
for $x,y\in \Omega$.
Note that, if $N_\varep^\beta (x,y) =\Gamma_\varep^\beta (x,y)-V_\varep^\beta (x,y)$,
\begin{equation}\label{equation-for-N}
\left\{
\aligned
\mathcal{L}_\varep \big\{ N^\beta_\varep (\cdot, y)\} &
=e^\beta\delta_y(x) \qquad\text{ in } \Omega,\\
\frac{\partial}{\partial\nu_\varep} \big\{ N^\beta_\varep (\cdot, y)\big\}
& =-e^\beta |\partial\Omega|^{-1} \qquad \text{ on } \partial\Omega,\\
\int_{\partial\Omega} N^\beta_\varep (x,y)\, d\sigma (x) & =0,
\endaligned
\right.
\end{equation}
where $\delta_y (x)$ denotes the Dirac delta function with pole at $y$.
We will call $N_\varep (x,y)$ the matrix of Neumann functions for
$\mathcal{L}_\varep$ in $\Omega$.
\begin{lemma}\label{symmetry-lemma}
For any $x,y\in \Omega$, we have
\begin{equation}\label{Neumann-symmetry}
N_{A,\varep}^{\alpha\beta} (x,y)
=N_{A^*, \varep}^{\beta\alpha}(y,x),
\end{equation}
where $A^*$ denotes the adjoint of $A$.
\end{lemma}
\begin{proof}
Note that
\begin{equation}\label{fundamental-symmetry}
\Gamma_{A,\varep}^{\alpha\beta} (x,y)
=\Gamma_{A^*, \varep}^{\beta\alpha} (y,x),
\qquad \text{ for any } x,y\in \Omega.
\end{equation}
Using Green's representation formula for $\mathcal{L}_\varep$ on $\Omega$,
together with (\ref{definition-of-V}) and (\ref{equation-for-N}), one may show that
$$
\aligned
&
V_{A,\varep}^{\alpha\beta} (x,y)
+\Gamma_{A, \varep}^{\alpha\beta}(x,y)
-\frac{1}{|\partial\Omega|}
\int_{\partial\Omega}
\left\{ \Gamma_{A,\varep}^{\alpha\beta} (z,y)
+\Gamma_{A^*, \varep}^{\beta\alpha} (z,x)\right\}\, d\sigma (z)\\
&
=\int_\Omega a_{ij}^{\gamma\delta} \left(\frac{z}{\varep}\right)
\frac{\partial}{\partial z_i}
\bigg\{ \Gamma_{A^*, \varep}^{\gamma\alpha}(z,x)\bigg\}
\cdot \frac{\partial}{\partial z_j}
\bigg\{ \Gamma_{A, \varep}^{\delta\beta}(z,y)\bigg\}\, dz\\
&\qquad
-
\int_\Omega a_{ij}^{\gamma\delta} \left(\frac{z}{\varep}\right)
\frac{\partial}{\partial z_i}
\bigg\{ V_{A^*, \varep}^{\gamma\alpha}(z,x)\bigg\}
\cdot \frac{\partial}{\partial z_j}
\bigg\{ V_{A, \varep}^{\delta\beta}(z,y)\bigg\}\, dz.
\endaligned
$$
This gives $V_{A,\varep}^{\alpha\beta}(x,y)
=V_{A^*, \varep}^{\beta\alpha}(y,x)$ and hence (\ref{Neumann-symmetry}).
\end{proof}
\begin{thm}\label{Neumann-theorem-5.2}
Let $\Omega$ be a bounded $C^{1,\alpha_0}$ domain and
$A\in \Lambda(\mu, \lambda, \tau)$.
Let $x_0,y_0, z_0\in \Omega$ be such that $|x_0-z_0|<(1/4)|x_0-y_0|$.
Then for any $\gamma\in (0,1)$,
\begin{equation}\label{5.2}
\left\{
\average_{B(y_0,\rho/4)\cap\Omega}
\big|\nabla_y\big\{ N_\varep (x_0,y) -N_\varep (z_0,y)\big\} |^2\, dy\right\}^{1/2}
\le C \rho^{1-d} \left(\frac{|x_0-z_0|}{\rho}\right)^{\gamma},
\end{equation}
where $\rho=|x_0-y_0|$ and
$C$ depends only on $\mu$, $\lambda$, $\tau$, $\gamma$ and $\Omega$.
\end{thm}
\begin{proof}
Let $f\in C_0^\infty (B(y_0,\rho/2)\cap\Omega)$ and $\int_\Omega f=0$.
Let
$$
u_\varep (x)
=\int_\Omega N_\varep (x,y) f(y)\, dy.
$$
Then $\mathcal{L}_\varep (u_\varep)=f$ in $\Omega$ and $\frac{\partial u_\varep}{\partial\nu_\varep}
=0$ on $\partial\Omega$. Since $\mathcal{L}_\varep (u_\varep)
=0$ in $B(x_0,\rho/2)\cap\Omega$, it follows
from the boundary H\"older estimate (\ref{boundary-holder-estimate})
and interior estimates
that
\begin{equation}\label{5.2.1}
|u_\varep (x_0)-u_\varep (z_0)|
\le C
\left(\frac{|x_0-z_0|}{\rho}\right)^{\gamma}\cdot\rho
\cdot \left\{
\average_{B(x_0, \rho/2)\cap\Omega}
|\nabla u_\varep |^2\right\}^{1/2}.
\end{equation}
Let $E$ be the average of $u_\varep$ over $B(y_0,\rho/2)\cap\Omega$.
Note that by (\ref{ellipticity}),
\begin{equation}\label{5.2.2}
\aligned
\mu \int_\Omega |\nabla u_\varep|^2\, dx
& \le \left| \int_\Omega f\cdot u_\varep\, dx \right|
=\left|\int_{B(y_0, \rho/2)\cap\Omega}
f\cdot (u_\varep -E)\, dx\right|\\
& \le \| f\|_{L^2(\Omega)}
\| u_\varep -E\|_{L^2(B(y_0,\rho/2)\cap\Omega)}\\
& \le C \rho \| f\|_{L^2(\Omega)}
\| \nabla u_\varep\|_{L^2(B(y_0,\rho/2)\cap\Omega)},
\endaligned
\end{equation}
where we have used the Cauchy and Poincar\'e inequalities.
Hence, $\|\nabla u_\varep\|_{L^2(\Omega)}
\le C\rho \|f\|_{L^2(\Omega)}$.
This, together with (\ref{5.2.1}), gives
$$
|u_\varep (x_0)-u_\varep (z_0)|
\le C\rho^{2-\frac{d}{2}} \left(\frac{|x_0-z_0|}{\rho}\right)^\gamma
\| f\|_{L^2(\Omega)}.
$$
By duality this implies that
\begin{equation}\label{5.2.3}
\left\{ \int_{B(y_0,\rho/2)\cap\Omega}
\big| W(y) -C_{x_0,z_0}\big|^2\, dy\right\}^{1/2}
\le C\rho^{2-\frac{d}{2}}
\left(\frac{|x_0-z_0|}{\rho}\right)^\gamma,
\end{equation}
where $W(y)=N_\varep (x_0,y)-N_\varep (z_0,y)$ and $C_{x_0,z_0}$
is the average of $W$
over $B(y_0,\rho/2)\cap\Omega$.
In view of (\ref{Neumann-symmetry}), the transpose matrix $W^*$ satisfies
$(\mathcal{L}_\varep)^* (W^*)=0$ in $B(y_0, \rho/2)\cap\Omega$
and $\frac{\partial }{\partial \nu_\varep^*}\{ W^*\} =0$ on $\partial\Omega$, where
$\frac{\partial}{\partial\nu^*_\varep}$ denotes the conormal derivative
associated with $(\mathcal{L}_\varep)^*$.
The estimate (\ref{5.2}) now follows from (\ref{5.2.3}) by
Caccioppoli's inequality (\ref{Cacciopoli}).
\end{proof}
\begin{lemma}\label{lemma-5.3}
Let $V_\varep(x,y)$ be defined by (\ref{definition-of-V}).
Suppose $d\ge 3$. Then for any $x,y\in \Omega$,
\begin{equation}\label{estimate-5.3}
|V_\varep (x,y)|\le C\big[ \delta (x)\big]^{\frac{2-d}{2}}
\big[\delta(y)\big]^{\frac{2-d}{2}},
\end{equation}
where $\delta (x)=\text{\rm dist}(x,\partial\Omega)$.
\end{lemma}
\begin{proof}
We begin by fixing $y\in\Omega$ and $1\le \beta\le m$.
Let $u_\varep (x)=V_\varep (x,y)$.
In view of (\ref{definition-of-V}) we have
$$
\|\nabla u_\varep \|_{L^2(\Omega)}
\le C \|\frac{\partial u_\varep}{\partial\nu_\varep}\|_{W^{-1/2,2} (\partial\Omega)}
\le C \|\frac{\partial u_\varep}{\partial\nu_\varep}\|_{L^p(\partial\Omega)},
$$
where $p=\frac{2(d-1)}{d}$. Note that by (\ref{fundamental-estimate-2}),
$$
\aligned
\|\frac{\partial u_\varep}{\partial\nu_\varep}\|_{L^p(\partial\Omega)}
& \le C \left\{ \int_{\partial\Omega}
\frac{d\sigma (x)}{|x-y|^{p(d-1)}}\right\}^{1/p}
+ C|\partial\Omega|^{\frac{1}{p}-1}\\
& \le C\big[\delta (y)\big]^{\frac{2-d}{2}}.
\endaligned
$$
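The exponent in the last step may be checked directly: since $p(d-1)=\frac{2(d-1)^2}{d}>d-1$, the surface integral is bounded by $C\big[\delta(y)\big]^{(d-1)-p(d-1)}$, and
$$
\frac{(d-1)-p(d-1)}{p}
=(d-1)\left(\frac{1}{p}-1\right)
=(d-1)\cdot\frac{2-d}{2(d-1)}
=\frac{2-d}{2}.
$$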
Thus we have proved that
$$
\| \nabla u_\varep\|_{L^2(\Omega)}
\le C \big[ \delta (y)\big]^{\frac{2-d}{2}}.
$$
Now, by the interior estimates and the Sobolev inequality (\ref{Sobolev}),
$$
\aligned
|u_\varep (x)|
&\le C\left\{ \frac{1}{[\delta (x)]^d}
\int_{B(x,\delta(x)/2)}
|u_\varep (z)|^{2^*} dz \right\}^{1/2^*}\\
&\le C \big[ \delta (x)\big]^{\frac{2-d}{2}}
\left\{ \left(\int_\Omega |\nabla u_\varep|^2\, dx \right)^{1/2}
+|\Omega|^{\frac{1}{2^*}}
\big| \average_{\partial\Omega} u_\varep d\sigma\big| \right\}\\
&\le
C \big[ \delta (x)\big]^{\frac{2-d}{2}}
\left\{ \big[ \delta (y)\big]^{\frac{2-d}{2}}
+|\Omega|^{\frac{1}{2^*}}
\big| \average_{\partial\Omega} \Gamma_\varep (z,y) d\sigma (z)\big| \right\}\\
&\le
C \big[ \delta (x)\big]^{\frac{2-d}{2}}
\big[ \delta (y)\big]^{\frac{2-d}{2}},
\endaligned
$$
where $2^*=\frac{2d}{d-2}$.
\end{proof}
\begin{thm}\label{Neumann-function-theorem}
Let $\Omega$ be a bounded $C^{1,\alpha_0}$ domain in $\brd$, $d\ge 3$.
Suppose that $A\in \Lambda (\mu, \lambda, \tau)$.
Then
\begin{equation}\label{Neumann-size-estimate}
|N_\varep (x,y)|\le C |x-y|^{2-d}
\end{equation}
and for any $\gamma\in (0,1)$,
\begin{equation}\label{Neumann-holder-estimate}
\aligned
|N_\varep (x,y)-N_\varep (z,y)| & \le \frac{C_\gamma |x-z|^\gamma}{|x-y|^{d-2+\gamma}},\\
|N_\varep (y,x)-N_\varep (y,z)| & \le \frac{C_\gamma |x-z|^\gamma}{|x-y|^{d-2+\gamma}},
\endaligned
\end{equation}
where $|x-z|<(1/4)|x-y|$.
\end{thm}
\begin{proof}
By Theorem \ref{boundary-holder-theorem}
we only need to establish the size estimate (\ref{Neumann-size-estimate}).
To this end we first note that by Lemma \ref{lemma-5.3},
\begin{equation}\label{5.4.1}
|N_\varep (x,y)|\le C\big\{ |x-y|^{2-d}
+\big[\delta(x)\big]^{2-d}
+\big[\delta(y)\big]^{2-d}\big\}.
\end{equation}
Next, let $\rho=|x-y|$. It follows from Theorem \ref{boundary-holder-theorem}
and (\ref{5.4.1}) that
\begin{equation}\label{5.4.2}
\aligned
|N_\varep (x,y)|
& \le C \left\{\left\{
\average_{B(x,\rho/4)\cap\Omega} |N_\varep (z,y)|^p \, dz\right\}^{1/p}
+\frac{\rho}{|\partial\Omega|}\right\}\\
& \le C
\big\{
|x-y|^{2-d}
+\big[\delta (y)\big]^{2-d}\big\},
\endaligned
\end{equation}
where we have chosen $p$ so that $p(d-2)<1$.
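Indeed, the only nontrivial term in the second inequality of (\ref{5.4.2}) is the average of $\big[\delta(z)\big]^{2-d}$; since $p(d-2)<1$,
$$
\left\{\average_{B(x,\rho/4)\cap\Omega}
\big[\delta (z)\big]^{-p(d-2)}\, dz\right\}^{1/p}
\le C\rho^{2-d},
$$
as $[\delta(z)]^{-p(d-2)}$ is then locally integrable up to the boundary.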
With estimate (\ref{5.4.2}) at our disposal,
another application of Theorem \ref{boundary-holder-theorem} gives
$$
\aligned
|N_\varep (x,y)|
&\le C\left\{ \left\{\average_{B(y,\rho/4)\cap \Omega}
|N_\varep (x,z)|^p\, dz\right\}^{1/p}
+\rho^{2-d}\right\}\\
&\le C|x-y|^{2-d}.
\endaligned
$$
This finishes the proof.
\end{proof}
\begin{remark}\label{remark-5.0}
{\rm
If $m=1$ and $d\ge 3$, the size estimate (\ref{Neumann-size-estimate}) and H\"older estimate
(\ref{Neumann-holder-estimate})
for some $\gamma>0$ were established in \cite{Kenig-Pipher-1993}
for divergence form elliptic operators with bounded measurable coefficients in
bounded star-like Lipschitz domains.
}
\end{remark}
\begin{remark}\label{remark-5.2}
{\rm
Suppose that $d\ge 3$.
The matrix of Neumann functions for the exterior domain $\Omega_-=\br^d\setminus \overline{\Omega}$
may be constructed in a similar fashion. Indeed, let $N_\varep^- (x,y)
=\Gamma_\varep (x,y)-V_\varep^-(x,y)$, where $V_\varep^-(x,y)$ is chosen so that
for each $y\in \Omega_-$,
\begin{equation}\label{5.6.1}
\left\{
\aligned
\mathcal{L}_\varep \big\{ N_\varep^- (\cdot, y)\big\} & =\delta_y (x) I \quad \text{ in }\Omega_-,\\
\frac{\partial}{\partial \nu_\varep} \big\{ N_\varep^- (\cdot, y)\big\}
&=0 \quad \text{ on }\partial\Omega,\\
N_\varep^- (x,y) & =O(|x-y|^{2-d}) \quad \text{ as } |x|\to\infty,
\endaligned
\right.
\end{equation}
where $I$ is the $m\times m$ identity matrix.
The estimates in Theorem \ref{Neumann-function-theorem} continue to hold
for $N_\varep^-(x,y)$.
}
\end{remark}
\begin{remark}\label{remark-5.1}
{\rm
If $d=2$, the matrix of Neumann functions may be defined as follows.
Choose $B(0,R)$ such that $\Omega\subset B(0,R/2)$. Let $G_\varep (x,y)$ be
the Green's function for $\mathcal{L}_\varep$ in $ B(0,R)$.
Define $N_\varep (x,y)=G_\varep (x,y)-V_\varep (x,y)$, where $V_\varep (x,y)$ is the solution
to (\ref{definition-of-V}), but with $\Gamma_\varep (x,y)$ replaced by $G_\varep (x,y)$.
Theorem \ref{Neumann-theorem-5.2} continues to hold for $d=2$.
One may modify the argument in the proof of Lemma \ref{lemma-5.3}
to show that
$$
|V_\varep (x,y)|\le C_\gamma \big[\delta(x)\big]^{-\gamma} \big[\delta(y)\big]^{-\gamma},
$$
for any $\gamma>0$. In view of the proof of Theorem \ref{Neumann-function-theorem}
and the estimate
$|G_\varep (x,y)|\le C \big\{ 1+\big|\ln |x-y|\big|\big\}$ in \cite{AL-1987},
this gives
$
|N_\varep (x,y)|\le C_\gamma|x-y|^{-\gamma}$
for any $\gamma>0$.
}
\end{remark}
\section{Correctors for Neumann boundary conditions}
Let $\Phi_\varep =(\Phi_{\varep,j}^{\alpha\beta})$,
where for each $1\le j\le d$ and $1\le \beta\le m$,
$\Phi_{\varep,j}^\beta =(\Phi_{\varep,j}^{1\beta}, \dots, \Phi_{\varep, j}^{m \beta})$
is a solution to the Neumann problem
\begin{equation}\label{Phi}
\left\{
\aligned
\mathcal{L}_\varep \big( \Phi_{\varep,j}^\beta\big) & =0 &\quad & \text{ in } \Omega,\\
\frac{\partial}{\partial\nu_\varep}
\big(\Phi_{\varep,j}^\beta\big) & =\frac{\partial}{\partial \nu_0} \big( P_j^\beta\big) & \quad
&\text{ on } \partial \Omega.
\endaligned
\right.
\end{equation}
Here $P_j^\beta =P_j^\beta (x)=x_j (0,\dots, 1, \dots, 0)$ with $1$ in the $\beta^{th}$
position.
In the study of boundary estimates for Neumann boundary conditions,
the function $\Phi_\varep (x)-x$
plays a role similar to that of $\varep \chi (\frac{x}{\varep})$ in the interior
estimates. The goal of this section is to prove the following uniform Lipschitz estimate of
$\Phi_\varep$.
\begin{thm}\label{corrector-theorem}
Let $\Omega$ be a $C^{1,\alpha_0}$ domain.
Suppose that $A\in \Lambda (\mu,\lambda, \tau)$ and $A^*=A$.
Then
\begin{equation}\label{corrector-estimate}
\|\nabla \Phi_\varep\|_{L^\infty(\Omega)} \le C,
\end{equation}
where $C$ depends only on $d$, $m$, $\mu$, $\lambda$, $\tau$ and $\Omega$.
\end{thm}
Our proof of Theorem \ref{corrector-theorem} uses the uniform $L^2$ Rellich
estimate for the Neumann problem:
\begin{equation}\label{Rellich-estimate}
\int_{\partial\Omega} |\nabla u_\varep|^2\, d\sigma
\le C\int_{\partial\Omega}
\big|\frac{\partial u_\varep}{\partial \nu_\varep}\big|^2\, d\sigma,
\end{equation}
for solutions of $\mathcal{L}_\varep (u_\varep)=0$ in $\Omega$.
We mention that (\ref{Rellich-estimate}), as well as the uniform $L^2$ Rellich estimate
for the regularity of the Dirichlet problem:
\begin{equation}\label{Rellich-estimate-1}
\int_{\partial\Omega} |\nabla u_\varep|^2\, d\sigma
\le C\int_{\partial\Omega}
\big|\nabla_{\rm tan} u_\varep\big|^2\, d\sigma,
\end{equation}
was established by Kenig and Shen
in \cite{Kenig-Shen-2} under the assumption that $\Omega$ is Lipschitz,
$A\in \Lambda (\mu, \lambda,\tau)$
and $A=A^*$ (also see \cite{Kenig-Shen-1} for the case of the elliptic equation).
The constant $C$ in (\ref{Rellich-estimate})-(\ref{Rellich-estimate-1})
depends only on $d$, $m$,
$\mu$, $\lambda$, $\tau$ and the Lipschitz character of $\Omega$.
\begin{lemma}\label{6.2-lemma}
Let $\Omega$ and $\mathcal{L}$ satisfy the same assumptions as in Theorem \ref{corrector-theorem}.
Suppose that $\mathcal{L}_\varep (u_\varep)=0$ in $\Omega$,
$\frac{\partial u_\varep}{\partial\nu_\varep}=g$ on $\partial\Omega$, and
$$
g=\sum_{i,j}
\left( n_i \frac{\partial}{\partial x_j}-
n_j\frac{\partial}{\partial x_i}\right) g_{ij},
$$
where $g_{ij}\in C^1(\partial\Omega)$ and
$n=(n_1, \dots, n_d)$ denotes the unit outward normal to $\partial\Omega$.
Then
\begin{equation}\label{6.2.1}
|\nabla u_\varep (x)|
\le \frac{C}{\delta (x)}
\sum_{i,j} \| g_{ij}\|_{L^\infty (\partial \Omega)},
\end{equation}
for any $x\in \Omega$, where $\delta(x)=\text{\rm dist}(x,\partial\Omega)$.
\end{lemma}
\begin{proof}
By the interior estimate (\ref{interior-estimate}) we only need to show that
\begin{equation}\label{6.2.2-1}
|u_\varep (x)-u_\varep (z)|
\le C \sum_{i,j} \| g_{ij}\|_{L^\infty (\partial \Omega)},
\end{equation}
where $|x-z|\le cr$ and $r=\delta (x)$.
Let $N_\varep (x,y)$ denote the matrix of Neumann functions for $\mathcal{L}_\varep$
on $\Omega$.
Note that
$$
\aligned
u_\varep (x)-u_\varep (z)
&=\int_{\partial\Omega}
\big\{ N_\varep (x,y)-N_\varep (z,y)\big\} g(y)\, d\sigma(y)\\
&=
\int_{\partial\Omega}
\big\{ N_\varep (x,y)-N_\varep (z,y)\big\}
\sum_{i,j}
\left( n_i \frac{\partial}{\partial y_j}-
n_j\frac{\partial}{\partial y_i}\right) g_{ij} (y)
\, d\sigma (y)\\
& =-\sum_{i,j}
\int_{\partial\Omega}
\left( n_i \frac{\partial}{\partial y_j}-
n_j\frac{\partial}{\partial y_i}\right)
\big\{ N_\varep (x,y)-N_\varep (z,y)\big\}
\cdot g_{ij}(y)\, d\sigma (y),
\endaligned
$$
where we have used the fact that $n_i\frac{\partial}{\partial y_j}
-n_j\frac{\partial}{\partial y_i}$ is a tangential derivative
on $\partial\Omega$.
Consequently it suffices to show that
\begin{equation}\label{6.2.2}
\int_{\partial\Omega}
\big| \nabla _y \big\{ N_\varep (x,y)-N_\varep (z,y)\big\} \big|\,
d\sigma (y) \le C,
\end{equation}
if $|x-z|\le cr$ and $r=\delta(x)$.
Let $Q\in \partial\Omega$ be such that $|x-Q|=\text{\rm dist}(x, \partial\Omega)$.
By translation and rotation we may assume that $Q=0$ and
$$
\aligned
&\Omega\cap \{ (x^\prime, x_d): \ |x^\prime|<8cr \text{ and } |x_d|<8cr\}\\
&\quad=\big\{ (x^\prime, x_d): \ |x^\prime|<8cr \text{ and }
\psi(x^\prime)<x_d <8cr\}
\endaligned
$$
where $\psi(0)=|\nabla \psi (0)|=0$ and $c$ is sufficiently small.
To establish (\ref{6.2.2}) we will show that
\begin{equation}\label{6.2.3}
\int_{|y|\le cr}
\big| \nabla _y \big\{ N_\varep (x,y)-N_\varep (z,y)\big\} \big|\,
d\sigma (y) \le C,
\end{equation}
and there exists $\beta>0$ such that for $cr<\rho<r_0 $,
\begin{equation}\label{6.2.4}
\int_{ |y-P|\le c\rho}
\big| \nabla _y \big\{ N_\varep (x,y)-N_\varep (z,y)\big\} \big|\,
d\sigma (y) \le C\left(\frac{r}{\rho}\right)^\beta,
\end{equation}
where $P\in\partial\Omega$ and $|P|=\rho$.
The estimate (\ref{6.2.2})
follows from (\ref{6.2.3}) and (\ref{6.2.4})
by a simple covering argument.
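We briefly indicate the covering argument: at each dyadic scale $\rho_k=2^k cr$ with $cr<\rho_k<r_0$, the set $\{ y\in\partial\Omega:\ \rho_k\le |y|<\rho_{k+1}\}$ is covered by a bounded number of surface balls of radius $c\rho_k$, while the remaining portion of $\partial\Omega$ is covered by finitely many balls of radius comparable to $r_0$. Hence, by (\ref{6.2.3}) and (\ref{6.2.4}),
$$
\int_{\partial\Omega}
\big|\nabla_y\big\{ N_\varep (x,y)-N_\varep (z,y)\big\}\big|\, d\sigma (y)
\le C+C\sum_{k\ge 0} 2^{-k\beta}\le C.
$$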
To see (\ref{6.2.3}) we let
$$
S(t)=\big\{ (x^\prime, x_d): \ |x^\prime|<t \text{ and } \psi(x^\prime)
<x_d< \psi(x^\prime) +ct\big\}.
$$
Note that by Cauchy inequality, for $t\in (cr,2cr)$,
\begin{equation}\label{6.2.5}
\aligned
& \left\{ \int_{|y|\le cr}
\big| \nabla _y \big\{ N_\varep (x,y)-N_\varep (z,y)\big\} \big|\,
d\sigma (y)\right\}^2\\
&\qquad
\le Cr^{d-1}
\int_{\partial S(t)}
\left|
\nabla_y
\big\{ N_\varep (x,y)-N_\varep (z,y)\big\} \right|^2\, d\sigma (y)\\
&\qquad
\le Cr^{d-1}
\int_{\partial S(t)}
\left|
\frac{\partial }{\partial \nu^*_\varep}
\big\{ N_\varep (x,y)-N_\varep (z,y)\big\} \right|^2\, d\sigma (y),
\endaligned
\end{equation}
where we have used the Rellich estimate (\ref{Rellich-estimate})
for the last inequality.
Since
$$
\frac{\partial }{\partial \nu^*_\varep (y)}
\big\{ N_\varep (x,y)-N_\varep (z,y)\big\} =0 \quad \text{ on }\partial\Omega,
$$
we may integrate both sides of (\ref{6.2.5}) in $t$ over $(cr, 2cr)$ to obtain
\begin{equation}\label{6.2.6}
\aligned
& \left\{ \int_{|y|\le cr}
\big| \nabla _y \big\{ N_\varep (x,y)-N_\varep (z,y)\big\} \big|\,
d\sigma (y)\right\}^2 \\
&\qquad
\le Cr^{d-2}
\int_{ S(2cr)}
\left|
\nabla_y
\big\{ N_\varep (x,y)-N_\varep (z,y)\big\} \right|^2\, dy.
\endaligned
\end{equation}
The desired estimate (\ref{6.2.3}) now follows from estimate
(\ref{5.2}).
The proof of (\ref{6.2.4}) is similar to that of (\ref{6.2.3}).
Indeed, an analogous argument gives
$$
\aligned
& \left\{ \int_{|y-P|\le c\rho}
\left|
\nabla_y
\big\{ N_\varep (x,y)-N_\varep (z,y)\big\} \right|\, d\sigma(y)\right\}^2 \\
&\qquad
\le C\rho^{d-2}
\int_{|y-P|\le 2c\rho}
\left|
\nabla_y
\big\{ N_\varep (x,y)-N_\varep (z,y)\big\} \right|^2\, dy\\
&\qquad
\le C \left(\frac{r}{\rho}\right)^{2\gamma}.
\endaligned
$$
This completes the proof.
\end{proof}
Let $\Psi_\varep =\big(\Psi_{\varep,j}^{\alpha\beta} (x)\big)$,
where $1\le j\le d$, $1\le\alpha, \beta\le m$ and
\begin{equation}\label{Psi}
\Psi_{\varep,j}^{\alpha\beta}(x)
=\Phi_{\varep,j}^{\alpha\beta} (x)
-x_j \delta_{\alpha\beta} -\varep \chi_j^{\alpha\beta} \left(\frac{x}{\varep}\right).
\end{equation}
\begin{lemma}\label{6.3-lemma}
Suppose that
$\Omega$ and $\mathcal{L}$ satisfy the same conditions as in Theorem \ref{corrector-theorem}.
Then
\begin{equation}\label{6.3.1}
|\nabla \Psi_\varep (x)|\le \frac{C\varep}{\delta(x)}
\qquad \text{ for any } x\in\Omega.
\end{equation}
\end{lemma}
\begin{proof}
Fix $1\le \ell \le d$ and $1\le \gamma \le m$.
Let $w=(w^1, \dots, w^m)=(\Psi_{\varep,\ell}^{1\gamma}, \dots, \Psi_{\varep,\ell}^{m\gamma})$.
Note that $\mathcal{L}_\varep (w)=0$ in $\Omega$.
In view of Lemma \ref{6.2-lemma} it suffices to show that
there exists $ g_{ij}\in C^1(\partial\Omega)$ such that
\begin{equation}\label{6.3.2}
\left\{
\aligned
& \frac{\partial w}{\partial \nu_{\varep}}
=\sum_{i,j}
\left(n_i \frac{\partial}{\partial x_j}
-n_j \frac{\partial}{\partial x_i}\right) g_{ij},\\
& \| g_{ij}\|_{L^\infty(\partial\Omega)}\le C\varep.
\endaligned
\right.
\end{equation}
To this end we observe that by the definition of $\Phi_{\varep,j}^{\alpha\beta}$ in (\ref{Phi}),
$$
\aligned
\left( \frac{\partial w}{\partial\nu_\varep}\right)^\alpha
& =n_i a_{ij}^{\alpha\beta} \left(\frac{x}{\varep}\right)
\frac{\partial }{\partial x_j}
\left\{ \Phi_{\varep,\ell}^{\beta\gamma}\right\}
-n_i a_{ij}^{\alpha\beta} \left(\frac{x}{\varep}\right)
\frac{\partial}{\partial x_j}
\left\{ x_\ell \delta_{\beta\gamma}
+\varep \chi_\ell^{\beta\gamma} \left(\frac{x}{\varep}\right) \right\}\\
&
=n_i \hat{a}_{ij}^{\alpha\beta}
\frac{\partial}{\partial x_j}
\big\{ x_\ell \delta_{\beta\gamma}\big\}
-
n_i a_{ij}^{\alpha\beta} \left(\frac{x}{\varep}\right)
\frac{\partial}{\partial x_j}
\left\{ x_\ell \delta_{\beta\gamma}
+\varep \chi_\ell^{\beta\gamma} \left(\frac{x}{\varep}\right)\right\}\\
&=
n_i \hat{a}^{\alpha\gamma}_{i\ell}
-n_i a_{ij}^{\alpha\beta}
\left(\frac{x}{\varep}\right)
\left\{ \delta_{j\ell}\delta_{\beta\gamma}
+\frac{\partial \chi_\ell^{\beta\gamma}}{\partial x_j} \left(\frac{x}{\varep}\right)\right\},
\endaligned
$$
where $\hat{a}_{ij}^{\alpha\beta}$ are the homogenized coefficients defined
by (\ref{homogenized-coefficient}).
Let
\begin{equation}\label{6.3.3}
H_{i\ell}^{\alpha\gamma} (y)
=\hat{a}^{\alpha\gamma}_{i\ell}
-a_{ij}^{\alpha\beta} (y)
\left\{
\delta_{j\ell}\delta_{\beta\gamma}
+\frac{\partial \chi_\ell^{\beta\gamma}}{\partial y_j} (y)\right\}.
\end{equation}
It follows from the definition of $\hat{a}_{i\ell}^{\alpha\gamma}$ that
$$
\int_{[0,1]^d} H_{i\ell}^{\alpha\gamma} (y)\, dy =0.
$$
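This mean-value property is simply the definition of the homogenized coefficients: recalling the standard formula from (\ref{homogenized-coefficient}),
$$
\hat{a}_{i\ell}^{\alpha\gamma}
=\int_{[0,1]^d} a_{ij}^{\alpha\beta}(y)
\left\{ \delta_{j\ell}\delta_{\beta\gamma}
+\frac{\partial \chi_\ell^{\beta\gamma}}{\partial y_j}(y)\right\} dy,
$$
integrating (\ref{6.3.3}) over $[0,1]^d$ gives $\int_{[0,1]^d} H_{i\ell}^{\alpha\gamma}\, dy=0$.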
Thus we may solve the Poisson equation on $[0,1]^d$ with periodic boundary
conditions,
\begin{equation}\label{6.3.4}
\left\{
\aligned
& \Delta U_{i\ell}^{\alpha\gamma} =H_{i\ell}^{\alpha\gamma} \quad \text{ in } \brd,\\
& U_{i\ell}^{\alpha\gamma} (y) \text{ is periodic with respect to } \mathbb{Z}^d.
\endaligned
\right.
\end{equation}
Since $A(y)$ and $\nabla\chi (y)$ are H\"older continuous,
$\nabla^2 U_{i\ell}^{\alpha\gamma}$ is H\"older continuous.
In particular, we have $\|\nabla U_{i\ell}^{\alpha\gamma}\|_\infty
\le C$, where $C$ depends only on $\mu$, $\lambda$ and $\tau$.
Now let
$$
F_{i\ell k}^{\alpha\gamma} (y)
=\frac{\partial }{\partial y_k} \bigg\{ U_{i\ell}^{\alpha\gamma} (y)\bigg\}.
$$
Then
$$
H_{i\ell}^{\alpha\gamma} (y)=\frac{\partial}{\partial y_k}
\bigg\{ F_{i\ell k}^{\alpha\gamma} (y)\bigg\}
$$
and hence
\begin{equation}\label{6.3.5}
\aligned
\left(\frac{\partial w}{\partial \nu_\varep}\right)^\alpha
&=n_i (x) H_{i\ell}^{\alpha\gamma}
\left(\frac{x}{\varep}\right)\\
&=
n_i(x) \frac{\partial}{\partial x_k}
\bigg\{\varep
F_{i\ell k}^{\alpha \gamma} \left(\frac{x}{\varep}\right)\bigg\}.
\endaligned
\end{equation}
We claim that
\begin{equation}\label{6.3.6}
\frac{\partial }{\partial y_i}
\bigg\{ F_{i\ell k}^{\alpha\gamma} (y)\bigg\}
=0.
\end{equation}
Assume the claim is true. We may then write
\begin{equation}\label{6.3.7}
\left(\frac{\partial w}{\partial \nu_\varep}\right)^\alpha
=
n_i(x) \frac{\partial}{\partial x_k}
\bigg\{\varep
F_{i\ell k}^{\alpha \gamma} \left(\frac{x}{\varep}\right)\bigg\}
-
n_k(x) \frac{\partial}{\partial x_i}
\bigg\{\varep
F_{i\ell k}^{\alpha \gamma} \left(\frac{x}{\varep}\right)\bigg\}
\qquad \text{ on }\partial\Omega.
\end{equation}
Since $\| \varep F_{i\ell k}^{\alpha\gamma} (x/\varep)\|_\infty \le C\varep$,
we obtain the desired (\ref{6.3.2}).
Finally, to show (\ref{6.3.6}), we observe that
$$
\frac{\partial}{\partial y_i}
\bigg\{ H_{i\ell}^{\alpha\gamma}(y)\bigg\}=0 \qquad \text{ in }\brd,
$$
which follows directly from (\ref{corrector-equation}).
In view of (\ref{6.3.4}) this implies that
$\frac{\partial}{\partial y_i}
\big\{U_{i\ell}^{\alpha\gamma} (y)\big\}$ is harmonic in $\brd$.
Since it is also periodic, we may deduce that
$\frac{\partial}{\partial y_i}
\big\{U_{i\ell}^{\alpha\gamma} (y)\big\}$ is constant.
As a result,
$$
\frac{\partial }{\partial y_i}
\bigg\{ F_{i\ell k}^{\alpha\gamma} (y)\bigg\}
=\frac{\partial^2}{\partial y_k\partial y_i}
\bigg\{ U_{i\ell}^{\alpha \gamma}(y)\bigg\}
=0 \qquad \text{ in }\brd.
$$
This completes the proof of Lemma \ref{6.3-lemma}.
\end{proof}
\noindent{\bf Proof of Theorem \ref{corrector-theorem}.}
It follows from (\ref{Psi}) and (\ref{6.3.1}) that
\begin{equation}\label{6.4.1}
|\nabla \Phi_\varep (x)| \le C +\frac{C\varep}{\delta (x)}
\qquad \text{ for any }x\in \Omega.
\end{equation}
This implies that $|\nabla \Phi_\varep (x)|\le C$ if $\delta(x)\ge c\varep$.
To estimate $|\nabla \Phi_\varep (x)|$ for $x$ with $\delta (x)\le c\varep$,
we use a standard blow-up argument.
Fix $j$ and $\beta$.
Let $w(x)=\varep^{-1}\Phi_{\varep,j}^\beta (\varep x)$.
Then $\mathcal{L}_1 (w)=0$ and
$$
\frac{\partial w}{\partial\nu_1}
=\frac{\partial \Phi_{\varep, j}^\beta}{\partial \nu_\varep}
(\varep x)
=n_i(\varep x) \hat{a}_{ij}^{\alpha\beta}.
$$
Since $\Omega$ is a $C^{1, \alpha_0}$ domain, its normal $n(x)$ is H\"older
continuous.
Thus, by the classical regularity results for
the Neumann problem with data in H\"older spaces,
\begin{equation}\label{6.4.2}
\|\nabla \Phi_\varep\|_{L^\infty (B(Q,\varep)\cap \Omega)}
\le
C+ C\left\{ \frac{1}{\varep^d}
\int_{B(Q,2\varep)\cap\Omega}
|\nabla \Phi_\varep|^p \, dx \right\}^{1/p}
\end{equation}
for any $p>0$, where $Q\in \partial\Omega$ and $C$ depends only on $d$, $m$,
$p$, $\mu$, $\lambda$,
$\tau$ and $\Omega$.
We remark that estimate (\ref{6.4.2}) with $p=2$ is well known and the case $0<p<2$
follows from the case $p=2$ by a convexity argument.
Finally, it follows from (\ref{6.4.1}) and (\ref{6.4.2}) with $p<1$ that
$$
\|\nabla \Phi_\varep\|_{L^\infty(B(Q,\varep)\cap\Omega)}
\le C.
$$
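We remark that the last step uses only the fact that $[\delta(x)]^{-p}$ is integrable when $p<1$: by (\ref{6.4.1}),
$$
\frac{1}{\varep^d}\int_{B(Q,2\varep)\cap\Omega}
|\nabla \Phi_\varep|^p\, dx
\le \frac{C}{\varep^d}\int_{B(Q,2\varep)\cap\Omega}
\left\{ 1+\left(\frac{\varep}{\delta(x)}\right)^p\right\} dx
\le C,
$$
since $\int_{B(Q,2\varep)\cap\Omega} [\delta(x)]^{-p}\, dx \le C\varep^{d-p}$ for $0<p<1$.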
This finishes the proof of Theorem \ref{corrector-theorem}.
\qed
\begin{remark}\label{remark-6.1}
{\rm Fix $\eta\in C_0^\infty(\mathbb{R}^{d-1})$ so that $\eta(x^\prime)=1$ for $|x^\prime|\le 2$
and $\eta(x^\prime)=0$ for $|x^\prime|\ge 3$.
For any function $\psi$ satisfying the condition (\ref{psi}),
we may construct a bounded $C^{1, \alpha_0}$ domain $\Omega_\psi$
in $\brd$ with the following property,
\begin{equation}\label{6.6.1}
\aligned
& D_{\psi\eta} (4)\subset \Omega_\psi\subset \big\{ (x^\prime, x_d): \ |x^\prime|<8
\text{ and } |x_d|<8(M_0+1)\big\},\\
&\big\{ (x^\prime, (\psi\eta)(x^\prime)): \ |x^\prime|<4\big\}
\subset \partial\Omega_\psi.
\endaligned
\end{equation}
Clearly, the domain $\Omega_\psi$ can be constructed in such a way that
$\Omega_\psi\setminus \{ (x^\prime, (\psi\eta)(x^\prime)): \ |x^\prime|\le 4\}$
depends only on $M_0$.
Let $\Phi_\varep(x)=\Phi_\varep(x, \Omega_\psi, A)$ be the matrix of functions satisfying
(\ref{Phi}) with $\Omega=\Omega_\psi$ and $\Phi_\varep (0)=0$.
It follows from Theorem \ref{corrector-theorem} that
$\|\nabla \Phi_\varep\|_{L^\infty (\Omega_\psi)} \le C$, where
$C$ depends only on $d$, $m$, $\mu$, $\lambda$, $\tau$ and $(\alpha_0, M_0)$.
}
\end{remark}
\section{Boundary Lipschitz estimates}
In this section we establish the uniform boundary Lipschitz estimate
under the assumption that $A\in \Lambda(\mu,\lambda,\tau)$ and
$A^*=A$.
\begin{thm}\label{boundary-Lipschitz-theorem}
Let $\Omega$ be a bounded $C^{1,\alpha_0}$ domain.
Suppose that $A\in \Lambda(\mu, \lambda,\tau)$ and $A^*=A$.
Let $\mathcal{L}_\varep (u_\varep)=0$ in $B(Q,\rho)\cap \Omega$
and $\frac{\partial u_\varep}{\partial\nu_\varep}=g
$ on $B(Q,\rho)\cap \partial\Omega$ for some $Q\in \partial\Omega$
and $0<\rho<c$.
Assume that $g\in C^\eta (B(Q, \rho)\cap\partial\Omega)$ for some $\eta\in (0,\alpha_0)$.
Then
\begin{equation}\label{boundary-Lipschitz-estimate}
\| \nabla u_\varep\|_{L^\infty (B(Q,\rho/2)\cap\Omega)}
\le C\left\{
\rho^{-1} \| u_\varep\|_{L^\infty (B(Q, \rho)\cap\Omega)}
+\| g\|_{C^\eta (B(Q, \rho)\cap \partial\Omega)}\right\},
\end{equation}
where $c=c(\Omega)>0$ and
$C$ depends only on $d$, $m$, $\mu$, $\lambda$, $\tau$, $\eta$ and $\Omega$.
\end{thm}
Let $D(\rho)=D(\rho, \psi)$ and $\Delta (\rho)=\Delta (\rho, \psi)$
be defined by (\ref{definition-of-D}) with $\psi\in C^{1, \alpha_0}(\br^{d-1})$,
$\psi(0)=|\nabla\psi(0)|=0$ and $\| \nabla \psi\|_{C^{\alpha_0}(\br^{d-1})}\le M_0$.
We will use $\| g\|_{C^{0,\eta}(K)}$ to denote
$$
\inf\big\{ M:
\, |g(x)-g(y)|\le M |x-y|^\eta \text{ for all } x,y \in K\big\}.
$$
\begin{lemma}\label{lemma-7.3}
Let $0<\eta<\alpha_0$ and
$\kappa=(1/4)\eta$.
Let $\Phi_\varep =\Phi_\varep (x, \Omega_\psi, A)$
be defined as in Remark \ref{remark-6.1}.
There exist constants $\varep_0>0$, $\theta\in (0,1)$ and $C_0>0$, depending only on
$d$, $m$, $\mu$, $\lambda$, $\tau$, $\eta$ and $(\alpha_0, M_0)$, such that
\begin{equation}\label{estimate-7.3.1}
\| u_\varep -<\Phi_\varep, \mathbf{B}_\varep >\|_{L^\infty (D(\theta))}
\le
\theta^{1+\kappa},
\end{equation}
for some $\mathbf{B}_\varep =(b_{\varep, j}^\beta)\in \mathbb{R}^{dm}$ with
the property that
$$
|\mathbf{B}_\varep |\le C_0\theta^{-1}
\| u_\varep\|_{L^\infty (D(\theta))}
\text{ and } <n(0)\hat{A}, \mathbf{B}_\varep>=
n_i(0)\hat{a}_{ij}^{\alpha\beta} b_{\varep, j}^\beta
=0,
$$
whenever
$$
\varep<\varep_0, \quad
\mathcal{L}_\varep (u_\varep)=0 \text{ in } D(1),\quad
\frac{\partial u_\varep}{\partial\nu_\varep} =g \text{ on }
\Delta (1), \quad u_\varep (0)=0,
$$
and
\begin{equation}\label{estimate-7.3.2}
\| g\|_{C^{0,\eta} (\Delta(1))} \le 1, \quad g(0)=0,
\quad
\| u_\varep\|_{L^\infty (D(1))}\le 1.
\end{equation}
\end{lemma}
\begin{proof}
Let $\mathcal{L}_0 =-\text{\rm div} (A^0\nabla )$,
where $A^0 =(\hat{a}^{\alpha\beta}_{ij})$
is a constant $m\times m$ matrix satisfying (\ref{ellipticity}).
By boundary H\"older estimates for gradients of solutions to elliptic systems with constant
coefficients in $C^{1,\alpha_0}$ domains,
\begin{equation}\label{7.3.3}
\aligned
& \| w-<x,(\overline{\nabla w})_{D(r)}> \|_{L^\infty (D(r))}\\
&\qquad
\le C_1 r^{1+2\kappa} \left\{ \| g\|_{C^\eta (\Delta (1/2))}
+
\| w\|_{L^\infty (D(1/2))}\right\},
\endaligned
\end{equation}
for any $r\in (0,1/4)$, whenever $\mathcal{L}_0 (w)=0$ in $D(1/2)$,
$\frac{\partial w}{\partial\nu_0}=g$ on $\Delta(1/2)$ and $w(0)=0$.
The constant $C_1$ in (\ref{7.3.3})
depends only on $d$, $m$, $\mu$, $\eta$ and $(\alpha_0, M_0)$.
Observe that if
$$
g(0)=<n(0)A^0, (\nabla w)(0)>=0,
$$
then
$
\| g\|_{C^\eta (\Delta (1/2))} \le C\| g\|_{C^{0,\eta} (\Delta(1/2))}
$
and
\begin{equation}\label{7.3.3.1}
\aligned
& |<n(0)A^0, (\overline{\nabla w})_{D(r)}>|
=|<n(0)A^0, (\overline{\nabla w})_{D(r)}-(\nabla w) (0)>|\\
&\qquad \le Cr^{2\kappa} \left\{ \| g\|_{C^{0,\eta} (\Delta (1/2))}
+
\| w\|_{L^\infty (D(1/2))}\right\}.
\endaligned
\end{equation}
Consequently, if we let $\mathbf{B}_0 =(b_{0,j}^\beta)\in \mathbb{R}^{dm}$ with
\begin{equation}\label{7.3.3.2}
b_{0,j}^\beta
=\left(\overline{\frac{\partial w^\beta}{\partial x_j}}\right)_{D(r)}
-n_j(0) h^{\beta\gamma} n_i (0) \hat{a}_{i\ell}^{\gamma \alpha}
\left(\overline{\frac{\partial w^\alpha}{\partial x_\ell}}\right)_{D(r)},
\end{equation}
where $(h^{\alpha\beta})_{m\times m}$ is the inverse matrix
of $(n_i(0)n_j(0)\hat{a}_{ij}^{\alpha\beta})_{m\times m}$, then
\begin{equation}\label{7.3.3.3}
\| w-<x,\mathbf{B}_0 > \|_{L^\infty (D(r))}
\le C_2 r^{1+2\kappa},
\end{equation}
for any $r\in (0,1/4)$, provided that $\mathcal{L}_0 (w)=0$ in $D(1/2)$,
$\frac{\partial w}{\partial\nu_0}=g$ on $\Delta(1/2)$, $w(0)=0$,
\begin{equation}\label{7.3.4}
\| g\|_{C^{0,\eta} (\Delta (1/2))}\le 1, \quad g(0)=0
\quad \text{ and } \quad
\| w\|_{L^\infty (D(1/2))}\le 1,
\end{equation}
where $C_2$ depends only on $d$, $m$, $\mu$, $\eta$ and $(\alpha_0, M_0)$.
Next we choose $\theta\in (0,1/4)$ so small that $2C_2\theta^{\kappa}
\le 1$.
We shall show by contradiction that for this $\theta$, there exists
$\varep_0>0$, depending only on $d$, $m$,
$\mu$, $\lambda$, $\tau$, $\eta$ and $(\alpha_0, M_0)$,
such that estimate (\ref{estimate-7.3.1})
holds with
\begin{equation}\label{7.3.3.5}
b_{\varep,j}^\beta
=\left(\overline{\frac{\partial u_\varep^\beta}{\partial x_j}}\right)_{D(\theta)}
-n_j(0) h^{\beta\gamma} n_i (0) \hat{a}_{i\ell}^{\gamma \alpha}
\left(\overline{\frac{\partial u_\varep^\alpha}{\partial x_\ell}}\right)_{D(\theta)},
\end{equation}
if $0<\varep<\varep_0$ and $u_\varep$ satisfies the conditions in Lemma \ref{lemma-7.3}.
We recall that
$(\hat{a}_{ij}^{\alpha\beta})$
in (\ref{7.3.3.5})
is the homogenized matrix given by (\ref{homogenized-coefficient}).
It is easy to verify that $n_i(0) \hat{a}_{ij}^{\alpha\beta} b_{\varep, j}^\beta =0$.
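Indeed, by (\ref{7.3.3.5}) and the fact that $(h^{\alpha\beta})$ is the inverse of
$(n_i(0)n_j(0)\hat{a}_{ij}^{\alpha\beta})$,
$$
\aligned
n_i(0) \hat{a}_{ij}^{\alpha\beta} b_{\varep, j}^\beta
&=n_i(0)\hat{a}_{ij}^{\alpha\beta}
\left(\overline{\frac{\partial u_\varep^\beta}{\partial x_j}}\right)_{D(\theta)}
-\big[ n_i(0)\hat{a}_{ij}^{\alpha\beta} n_j(0)\big] h^{\beta\gamma}
n_k(0) \hat{a}_{k\ell}^{\gamma \sigma}
\left(\overline{\frac{\partial u_\varep^\sigma}{\partial x_\ell}}\right)_{D(\theta)}\\
&=n_i(0)\hat{a}_{ij}^{\alpha\beta}
\left(\overline{\frac{\partial u_\varep^\beta}{\partial x_j}}\right)_{D(\theta)}
-n_k(0)\hat{a}_{k\ell}^{\alpha\sigma}
\left(\overline{\frac{\partial u_\varep^\sigma}{\partial x_\ell}}\right)_{D(\theta)}
=0.
\endaligned
$$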
Also, by the divergence theorem, $|\mathbf{B}_\varep|
\le C_0 \theta^{-1} \| u_\varep\|_{L^\infty(D(\theta))}$.
To show (\ref{estimate-7.3.1}) by contradiction,
let us suppose that there exist sequences
$\{ \varep_k\}$, $\{ A^k\} $, $\{ u_{\varep_k}\}$, $\{ g_k\}$ and $\psi_k$ such that
$\varep_k\to 0$, $A^k\in \Lambda (\mu, \lambda,\tau)$, $\psi_k$ satisfies (\ref{psi}),
\begin{equation}\label{7.3.5}
\left\{
\aligned
\mathcal{L}_{\varep_k}^k (u_{\varep_k}) & =0 &\quad & \text{ in }D_k(1),\\
\frac{\partial u_{\varep_k}}{\partial\nu_{\varep_k}} & =g_k& \quad & \text{ on } \Delta_k(1),\\
u_{\varep_k} (0) &=g_k (0) =0,
\endaligned
\right.
\end{equation}
\begin{equation}\label{7.3.6}
\| g_k\|_{C^{0,\eta} (\Delta_k (1))}
\le 1, \qquad \| u_{\varep_k}\|_{L^\infty(D_k(1))}
\le 1,
\end{equation}
and
\begin{equation}\label{7.3.7}
\| u_{\varep_k} -<\Phi_{\varep_k}^k, \mathbf{B}^k_\varep>
\|_{L^\infty (D_k (\theta))}
> \theta^{1+\kappa},
\end{equation}
where $D_k (r) =D(r, \psi_k)$,
$\Delta_k (r)=\Delta (r, \psi_k)$,
$\Phi^k_{\varep_k}=\Phi_{\varep_k} (x, \Omega_{\psi_k}, A^k)$
and $\mathbf{B}_\varep^k$ is given by (\ref{7.3.3.5}).
By passing to subsequences we may assume that as $k\to \infty$,
\begin{equation}\label{7.3.8}
\aligned
\hat{A}^k & \to A^0,\\
\psi_k & \to \psi_0 \quad \text{ in } C^1 (|x^\prime|< 4),\\
g_k(x^\prime, \psi_k(x^\prime))
& \to g_0 (x^\prime, \psi_0 (x^\prime)) \quad
\text{ in } C(|x^\prime|<1).\\
\endaligned
\end{equation}
Since $\| u_{\varep_k}\|_{C^\eta (D(1/2,\psi_k))} +\| \Phi_{\varep_k}^k\|_{C^\eta
(D(1/2, \psi_k))}\le C$ by
Theorem \ref{boundary-holder-theorem}, again by passing
to subsequences, we may also assume that
\begin{equation}\label{7.3.9}
\aligned
& u_{\varep_k} (x^\prime, x_d-\psi_k(x^\prime))
\to u_0 (x^\prime, x_d-\psi_0 (x^\prime))
\quad \text{ uniformly on } D(1/2, 0),\\
& R_{\varep_k}^k (x^\prime, x_d-\psi_k (x^\prime))
\quad\text{ converges uniformly on } D(1/2, 0),
\endaligned
\end{equation}
where $R_{\varep_k}^k (x) =\Phi_{\varep_k}^k (x) -x$.
Furthermore, in view of Theorem \ref{compactness-theorem}, we may assume that
$\mathcal{L}_0 (u_0)=0$ in $D(1/2, \psi_0)$ and $\frac{\partial u_0}{\partial\nu_0}=
g_0$ on $\Delta (1/2, \psi_0)$, where $\mathcal{L}_0
=-\text{\rm div} (A^0\nabla )$.
Note that by Lemma \ref{6.3-lemma},
$R_{\varep_k}^k (x^\prime, x_d-\psi_k (x^\prime))$ must converge to a constant.
Since $R_{\varep_k}^k (0)=0$, we deduce that
$R_{\varep_k}^k (x^\prime, x_d-\psi_k (x^\prime))$ converges uniformly to $0$ on $D(1/2,0)$.
Thus, in view of (\ref{7.3.6})-(\ref{7.3.9}), we may conclude that
$ u_0(0)=g_0(0)=0$,
\begin{equation}\label{7.3.10}
\| g_0\|_{C^{0,\eta}(\Delta(1/2, \psi_0))} \le 1, \quad \quad
\| u_0\|_{L^\infty(D(1/2, \psi_0))} \le 1
\end{equation}
and
\begin{equation}\label{7.3.11}
\| u_0 -<x, \mathbf{B}_0>
\|_{L^\infty (D (\theta, \psi_0))}
\ge \theta^{1+\kappa}.
\end{equation}
This, however, contradicts (\ref{7.3.3.3})-(\ref{7.3.4}).
\end{proof}
\begin{remark}\label{remark-7.3}
{\rm
Let $w =<\Phi_\varep ,\mathbf{B}_\varep>$; that is,
$w^\alpha =\Phi_{\varep, j}^{\alpha\beta} (x) b_{\varep,j}^\beta$,
where $\Phi_\varep$ and $\mathbf{B}_\varep$ are given
by Lemma \ref{lemma-7.3}. Then $\mathcal{L}_\varep (w)=0$ and $\frac{\partial w}
{\partial\nu_\varep} =n_i(x)\hat{a}_{ij}^{\alpha\beta}b_{\varep, j}^\beta$.
In particular, we have $w(0)=0$ and
$\frac{\partial w}{\partial\nu_\varep} (0)=0$.
Also, note that in Lemma \ref{lemma-7.3},
one may choose any $\theta\in (0,\theta_1)$, where $2C_2\theta_1^\kappa=1$.
These observations are important to the proof of the next lemma.
}
\end{remark}
\begin{lemma}\label{lemma-7.4}
Let $\kappa$, $\varep_0$, $\theta$ be the constants
given by Lemma \ref{lemma-7.3}.
Suppose that $\mathcal{L}_\varep (u_\varep) =0$ in $D(1, \psi)$,
$\frac{\partial u_\varep}{\partial\nu_\varep} =g$ on $\Delta(1, \psi)$
and $u_\varep (0)=g(0)=0$.
Assume that $\varep<\theta^{\ell -1}\varep_0$ for some $\ell\ge 1$.
Then there exist $\mathbf{B}_\varep^j \in \mathbb{R}^{dm}$ for $j=0, 1, \dots, \ell-1$, such that
$$
<n(0)\hat{A}, \mathbf{B}_\varep^j>=0, \quad
|\mathbf{B}_\varep^j|\le C J
$$
and
\begin{equation}\label{7.4.2}
\| u_\varep -\sum_{j=0}^{\ell -1}
\theta^{\kappa j}
< \Pi_\varep^j, \mathbf{B}_\varep^j >\|_{L^\infty(D(\theta^\ell, \psi))}
\le \theta^{\ell(1+\kappa)} J,
\end{equation}
where
$$
\aligned
& \Pi_\varep^j(x) =\theta^{j}\Phi_{\frac{\varep}{\theta^j}}
(\theta^{-j}x, \Omega_{\psi_j}, A),\\
& J=\max \left\{
\| g\|_{C^{0,\eta} (\Delta(1, \psi))}, \| u_\varep\|_{L^\infty(D(1, \psi))}\right\}
\endaligned
$$
and $\psi_j (x^\prime)
=\theta^{-j}\psi(\theta^j x^\prime)$.
\end{lemma}
\begin{proof}
The lemma is proved by an induction argument on $\ell$.
The case $\ell=1$ follows by applying Lemma \ref{lemma-7.3} to $u_\varep/J$.
Suppose now that Lemma \ref{lemma-7.4} holds for some $\ell\ge 1$.
Let $\varep<\theta^\ell \varep_0$.
Consider the function
$$
w(x)=\theta^{-\ell}
\left\{ u_\varep (\theta^\ell x)-\sum_{j=0}^{\ell-1}
\theta^{\kappa j} <\Pi_\varep^j (\theta^\ell x), \mathbf{B}_\varep^j >\right\}
$$
on $D(1, \psi_\ell)$. Note that
$\mathcal{L}_{\frac{\varep}{\theta^\ell}} (w)=0$ in $D(1, \psi_\ell)$,
$w(0)=0$ and by the induction assumption,
\begin{equation}\label{7.4.2.1}
\| w\|_{L^\infty (D(1, \psi_\ell))}
\le \theta^{\ell \kappa}J.
\end{equation}
Let
$$
h(x)=\frac{\partial w}{\partial \nu_{\frac{\varep}{\theta^\ell}}} (x)
\qquad \text{ on } \Delta (1, \psi_\ell).
$$
Then
\begin{equation}\label{7.4.3}
h(x)=g(\theta^\ell x)
-\sum_{j=0}^{\ell-1}
\theta^{\kappa j} < n(\theta^\ell x ) \hat{A}, \mathbf{B}_\varep^j>,
\end{equation}
where $n$ denotes the unit outward normal to $\Delta(1, \psi)$.
It follows that $h(0)=0$. Since $\varep \theta^{-\ell}<\varep_0$,
we may then apply the estimate for the case $\ell=1$ to obtain
\begin{equation}\label{7.4.4}
\aligned
&
\| w-<\Phi_{\frac{\varep}{\theta^\ell}} (x, \Omega_{\psi_\ell}, A),
\mathbf{B}_{\frac{\varep}{\theta^\ell}}>\|_{L^\infty (D(\theta, \psi_\ell))}\\
&\qquad
\le \theta^{1+\kappa}
\max \left\{ \| h\|_{C^{0,\eta}(\Delta (1, \psi_\ell))},
\| w\|_{L^\infty (D(1, \psi_\ell))}\right\},
\endaligned
\end{equation}
where $\mathbf{B}_{\frac{\varep}{\theta^\ell}}\in \mathbb{R}^{dm}$ satisfies the conditions
$<n(0)\hat{A}, \mathbf{B}_{\frac{\varep}{\theta^\ell}}>=0$ and
\begin{equation}\label{7.4.4.1}
|\mathbf{B}_{\frac{\varep}{\theta^\ell}}|\le C\max
\left\{ \| h\|_{C^{0,\eta}(\Delta(1, \psi_\ell))},
\| w\|_{L^\infty(D(1, \psi_\ell))}\right\}.
\end{equation}
It follows that
\begin{equation}\label{7.4.5}
\aligned
&
\| u_\varep (x)-\sum_{j=0}^{\ell-1}
\theta^{\kappa j} <\Pi_\varep^j (x), \mathbf{B}_\varep^j>
-\theta^\ell <\Phi_{\frac{\varep}{\theta^\ell}} (\theta^{-\ell} x, \Omega_{\psi_\ell}, A),
\mathbf{B}_{\frac{\varep}{\theta^\ell}}>\|_{L^\infty (D(\theta^{\ell +1}, \psi))}\\
&\qquad
\le \theta^{\ell +1+\kappa}
\max \left\{ \| h\|_{C^{0,\eta}(\Delta(1, \psi_\ell))},
\| w\|_{L^\infty (D(1, \psi_\ell))}\right\}.
\endaligned
\end{equation}
To estimate the right hand side of (\ref{7.4.5}), we observe that
$$
\aligned
\| h\|_{C^{0,\eta}(\Delta (1, \psi_\ell))}
&\le \theta^{\ell \eta} \| g\|_{C^{0,\eta} (\Delta(1, \psi))}
+\sum_{j=0}^{\ell-1} \theta^{\kappa j} \cdot CJ \cdot
\theta^{\ell \eta} \| n\|_{C^{0, \eta}(\Delta(1, \psi))}\\
& \le
\theta^{4\ell \kappa} J \left\{ 1+\frac{C\| n\|_{C^{0,\eta}(\Delta(1, \psi))}}{1-\theta^\kappa}\right\},
\endaligned
$$
since $\eta=4\kappa$.
Since $0<\eta<\alpha_0$,
by making an initial dilation of $x$, if necessary, we may assume that
$\|n\|_{C^{0,\eta}(\Delta (1,\psi))}$ is small so that
\begin{equation}\label{7.4.6}
\theta^{\kappa}
\left\{ 1+\frac{C\| n\|_{C^{0,\eta}(\Delta(1, \psi))}}{1-\theta^\kappa}\right\}
\le 1.
\end{equation}
This implies that
\begin{equation}\label{7.4.7}
\| h\|_{C^{0,\eta}(\Delta (1, \psi_\ell))}
\le \theta^{\ell \kappa} J.
\end{equation}
This, together with (\ref{7.4.2.1}) and (\ref{7.4.5}), gives
\begin{equation}\label{7.4.8}
\| u_\varep -\sum_{j=0}^{\ell}
\theta^{\kappa j} <\Pi_\varep^j, \mathbf{B}_\varep^j>
\|_{L^\infty (D(\theta^{\ell +1}, \psi))}
\le \theta^{(\ell +1)(1+\kappa)} J,
\end{equation}
where we have chosen
$\mathbf{B}_\varep^\ell =\theta^{-\ell\kappa} \mathbf{B}_{\frac{\varep}{\theta^\ell}}$.
Finally,
in view of (\ref{7.4.4.1}), (\ref{7.4.2.1}) and (\ref{7.4.7}), we have
$|\mathbf{B}_\varep^\ell |\le C J$.
This completes the induction argument.
\end{proof}
\begin{lemma}\label{lemma-7.6}
Suppose that $\mathcal{L}_\varep (u_\varep) =0$ in $D(1)$
and $\frac{\partial u_\varep}{\partial\nu_\varep} =g$
on $\Delta(1)$.
Then
\begin{equation}\label{7.6.1}
\int_{D(\rho)}
|\nabla u_\varep|^2\, dx
\le C \rho^d \left\{ \| u_\varep\|_{L^\infty (D(1))}^2
+\| g\|_{C^\eta(\Delta(1))}^2 \right\},
\end{equation}
for any $0<\rho<(1/2)$, where $C$ depends only on $d$, $m$, $\mu$, $\lambda$,
$\tau$, $\eta$ and $(M_0, \alpha_0)$.
\end{lemma}
\begin{proof}
By subtracting a constant we may assume that $u_\varep (0)=0$.
We may also assume that $g(0)=0$. To see this, consider
$$
v^\alpha_\varep (x)
=u_\varep^\alpha (x)-\Phi_{\varep, j}^{\alpha\beta} (x) n_j(0) b^\beta,
$$
where $(b^\beta)\in \br^{m}$ solves the linear system
$n_i(0)n_j(0)\hat{a}^{\alpha\beta}_{ij} b^\beta
=g^\alpha (0)$.
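We note that this system is uniquely solvable: the ellipticity condition
(\ref{ellipticity}) for the homogenized coefficients implies that
the $m\times m$ matrix $(n_i(0)n_j(0)\hat{a}_{ij}^{\alpha\beta})$
is positive definite, and $b^\beta =h^{\beta\alpha} g^\alpha (0)$,
where $(h^{\alpha\beta})$ denotes its inverse, as in the proof of
Lemma \ref{lemma-7.3}.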
Then $\mathcal{L}_\varep (v_\varep)=0$ in $D(1)$, $v_\varep (0)=0$ and
$$
\left(\frac{\partial v_\varep}{\partial\nu_\varep}\right)^\alpha (x)
=g^\alpha (x) -n_i(x)\hat{a}_{ij}^{\alpha\beta} n_j(0)b^\beta
\qquad \text{ on } \Delta (1).
$$
Thus $\frac{\partial v_\varep}{\partial\nu_\varep} (0)=0$.
Since $\|\Phi_\varep \|_{L^\infty(D(1))}
+\|\nabla \Phi_\varep\|_{L^\infty(D(1))}\le C$, the desired estimate for $u_\varep$
follows from the corresponding estimate for $v_\varep$.
Under the assumption that $u_\varep (0)=g(0)=0$, we will show that
\begin{equation}\label{7.6.2}
\| u_\varep\|_{L^\infty(D(\rho))}
\le C
\rho \left\{ \| u_\varep\|_{L^\infty (D(1))}
+\| g\|_{C^\eta(\Delta(1))} \right\},
\end{equation}
for any $0<\rho<(1/2)$.
Estimate (\ref{7.6.1}) follows from (\ref{7.6.2})
by Caccioppoli's inequality (\ref{Cacciopoli}).
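More precisely, a standard Caccioppoli-type argument with Neumann data
(testing the equation against $u_\varep \varphi^2$, where $\varphi$ is a cut-off function
adapted to $D(2\rho)$) yields
$$
\int_{D(\rho)} |\nabla u_\varep|^2\, dx
\le \frac{C}{\rho^2}\int_{D(2\rho)} |u_\varep|^2\, dx
+C\rho^{d-1} \| g\|_{L^\infty(\Delta(2\rho))} \| u_\varep\|_{L^\infty (D(2\rho))},
$$
and each term on the right is bounded by
$C\rho^d \big\{ \| u_\varep\|_{L^\infty(D(1))} +\| g\|_{C^\eta(\Delta(1))}\big\}^2$
in view of (\ref{7.6.2})
(for $\rho<1/4$; the remaining range $1/4\le \rho<1/2$ is handled by adjusting constants).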
Let $\kappa$, $\varep_0$, $\theta$ be the constants given by Lemma \ref{lemma-7.3}.
Let $0<\varep<\theta\varep_0$ (the case $\varep\ge \theta\varep_0$
follows from the classical regularity estimates).
Suppose that
$$
\theta^{i+1} \le \frac{\varep}{\varep_0} <\theta^i
\qquad \text { for some } i\ge 1.
$$
Let $\rho\in (0, 1/2)$. We first consider the case $\frac{\varep}{\varep_0}
\le \rho<\theta$.
Then $\theta^{\ell+1}\le \rho<\theta^\ell$ for some $\ell=1,\dots, i$.
It follows that
\begin{equation}\label{7.6.3}
\aligned
&\| u_\varep\|_{L^\infty (D(\rho))}
\le \| u_\varep\|_{L^\infty (D(\theta^\ell))}\\
& \le \| u_\varep -\sum_{j=0}^{\ell-1} \theta^{\kappa j}
<\Pi_\varep^j, \mathbf{B}_\varep^j >\|_{L^\infty (D(\theta^\ell))}
+\sum_{j=0}^{\ell-1} \theta^{\kappa j} |\mathbf{B}_\varep^j|
\| \Pi_\varep^j\|_{L^\infty (D(\theta^\ell))}\\
&\le \theta^{\ell (1+\kappa)} J
+CJ \sum_{j=0}^{\ell-1} \theta^{\kappa j}\|\Pi_\varep^j\|_{L^\infty (D(\theta^\ell))},
\endaligned
\end{equation}
where $J=\max \big\{ \| g\|_{C^{0,\eta}(\Delta(1))}, \| u_\varep\|_{L^\infty(D(1))}\big\}$ and
we have used Lemma \ref{lemma-7.4}.
Recall that $\Pi_\varep^j (x)=\theta^j \Phi_{\frac{\varep}{\theta^j}} (\theta^{-j} x,
\Omega_{\psi_j}, A)$.
By Remark \ref{remark-6.1} we have
$\Pi_\varep^j (0)=0$ and $\|\nabla \Pi_\varep^j \|_{L^\infty(D(\theta^j))}\le C$. Hence,
$$
\| \Pi_\varep^j \|_{L^\infty(D(\theta^\ell))} \le C\theta^\ell.
$$
This, together with (\ref{7.6.3}), gives $\| u_\varep\|_{L^\infty(D(\rho))}
\le C\rho J$ for any $\frac{\varep}{\varep_0}\le \rho<\frac12$ (the case $\theta\le \rho<(1/2)$ is
trivial).
To treat the case $0<\rho<\frac{\varep}{\varep_0}$, we use a blow-up argument.
Let $w(x)=\varep^{-1} u_\varep (\varep x)$.
Then $\mathcal{L}_1 (w) =0$ in $D(2\varep_0^{-1}, \psi_\varep)$ and
$\frac{\partial w}{\partial\nu_1} (x) =g(\varep x)$ on
$\Delta (2\varep_0^{-1}, \psi_\varep)$,
where $\psi_\varep (x^\prime) =\varep^{-1}\psi(\varep x^\prime)$.
By the classical regularity estimate,
$$
\|\nabla w\|_{L^\infty (D(\frac{1}{\varep_0}, \psi_\varep))}
\le C\left\{
\| w\|_{L^\infty(D(\frac{2}{\varep_0}, \psi_\varep))}
+\|\frac{\partial w}{\partial\nu_1}\|_{C^\eta (\Delta (\frac{2}{\varep_0}, \psi_\varep))}\right\}.
$$
It follows that
$$
\|\nabla u_\varep\|_{L^\infty (D(\frac{\varep}{\varep_0}))}
\le C
\left\{
\varep^{-1}
\| u_\varep\|_{L^\infty(D(\frac{2\varep}{\varep_0}))}
+\| g\|_{C^\eta (\Delta(1))}\right\}
\le C J,
$$
where we have used the estimate (\ref{7.6.2}) with $\rho=\frac{2\varep}{\varep_0}$
for the last inequality.
Finally, since $u_\varep (0)=0$, for $0<\rho<\frac{\varep}{\varep_0}$, we obtain
$$
\| u_\varep\|_{L^\infty (D(\rho))}
\le C\rho \|\nabla u_\varep\|_{L^\infty(D(\frac{\varep}{\varep_0}))}
\le C\rho J.
$$
This completes the proof of (\ref{7.6.2}).
\end{proof}
\noindent{\bf Proof of Theorem \ref{boundary-Lipschitz-theorem}.}
By rescaling we may assume that $\rho=1$.
By a change of the coordinate system, we may deduce from Lemma \ref{lemma-7.6} that
if $P\in \partial\Omega$, $|P-Q|<\frac12$ and $0<r<\frac14$,
$$
\int_{B(P, r)\cap\Omega}
|\nabla u_\varep|^2\, dx
\le C r^d
\left\{ \| u_\varep\|^2_{L^\infty (B(Q,1)\cap \Omega)}
+\| g\|^2_{C^\eta(B(Q, 1)\cap\partial\Omega)}\right\},
$$
where $C$ depends only on $d$, $m$, $\mu$, $\lambda$, $\tau$, $\eta$ and $\Omega$.
This, together with the interior estimate (\ref{interior-estimate}),
implies that
$$
\|\nabla u_\varep\|_{L^\infty(B(Q,\frac12)\cap \Omega)}
\le C \left\{
\| u_\varep\|_{L^\infty (B(Q,1)\cap \Omega)}
+\| g\|_{C^\eta(B(Q, 1)\cap\partial\Omega)}\right\}.
$$
The proof of Theorem \ref{boundary-Lipschitz-theorem} is now complete.
\qed
\section{Proof of Theorem \ref{Lipschitz-estimate-theorem}}
Under the condition $A\in \Lambda (\mu, \lambda, \tau)$,
we have proved in Section 5 that
\begin{equation}\label{8.1}
|N_\varep (x,y)|\le \frac{C}{|x-y|^{d-2}} \qquad \text{ if } d\ge 3.
\end{equation}
With the additional assumption $A^*=A$, we may use Theorem \ref{boundary-Lipschitz-theorem}
to show that for $d\ge 3$,
\begin{equation}\label{8.2}
\aligned
|\nabla_x N_\varep (x,y)|+|\nabla_y N_\varep (x,y)|
& \le \frac{C}{|x-y|^{d-1}},\\
|\nabla_x\nabla_y N_\varep (x,y)| &\le \frac{C}{|x-y|^d}.
\endaligned
\end{equation}
If $d=2$, one obtains $|N_\varep (x,y)|\le C_\gamma |x-y|^{-\gamma}$
and $|\nabla_x N_\varep (x,y)| +|\nabla_y N_\varep (x,y)|\le
C_\gamma |x-y|^{-1-\gamma}$ for any $\gamma>0$ (this is not sharp, but sufficient for
the proof of Theorem \ref{Lipschitz-estimate-theorem}).
Now, given $F\in L^q(\Omega)$ for some $q>d$, let
$$
v_\varep (x)=\int_\Omega N_\varep (x,y) F(y)\, dy.
$$
Then $\mathcal{L}_\varep (v_\varep)=F$ in $\Omega$ and
$\frac{\partial v_\varep}{\partial\nu_\varep}=-\frac{1}{|\partial\Omega|}\int_\Omega F$
on $\partial\Omega$.
Furthermore, it follows from pointwise estimates on $|\nabla_x N_\varep (x,y)|$ that
$\|\nabla v_\varep\|_{L^\infty(\Omega)}\le C \, \| F\|_{L^q(\Omega)}$.
Thus, by subtracting $v_\varep$ from $u_\varep$, we may assume that $F=0$
in Theorem \ref{Lipschitz-estimate-theorem}.
In this case we may deduce
from Theorems \ref{boundary-Lipschitz-theorem} and \ref{boundary-holder-theorem} that
for $Q\in \partial\Omega$,
\begin{equation}\label{8.3}
\|\nabla u_\varep\|_{L^\infty (B(Q, \rho/2)\cap \Omega)}
\le C\left\{
\left(\average_{B(Q,\rho)\cap\Omega}|\nabla u_\varep|^2\right)^{1/2}
+\| g\|_{C^\eta (\Delta (Q, \rho))}\right\},
\end{equation}
where $C$ depends only on $d$, $m$,
$\mu$, $\lambda$, $\tau$, $\eta$ and $\Omega$.
Since $\|\nabla u_\varep\|_{L^2(\Omega)} \le C\| g\|_{L^2(\partial\Omega)}$,
the estimate $\| \nabla u_\varep\|_{L^\infty(\Omega)}
\le C\| g\|_{C^\eta (\partial\Omega)}$
follows from (\ref{8.3}) and the interior estimate (\ref{interior-estimate})
by a covering argument.
\section{Proof of Theorem \ref{maximal-function-theorem}}
As we mentioned in Section 1, the case $p=2$ is proved in \cite{Kenig-Shen-2}
(for Lipschitz domains).
To handle the case $p>2$, we need the following weak reverse H\"older inequality.
\begin{lemma}\label{lemma-9.1}
Let $\Omega$ be a bounded $C^{1,\alpha_0}$ domain.
Suppose that $A\in \Lambda(\mu, \lambda, \tau)$ and $A^*=A$.
Then, for $Q\in \partial\Omega$ and $0<r<r_0$,
\begin{equation}\label{9.1.1}
\sup_{B(Q,r)\cap\partial\Omega}
(\nabla u_\varep)^*
\le C\left\{ \average_{B(Q,2r)\cap\partial\Omega} |(\nabla u_\varep)^*|^2\, d\sigma\right\}^{1/2} ,
\end{equation}
where $u_\varep\in W^{1,2}(B(Q,3r)\cap\Omega)$
is a weak solution to $\mathcal{L}_\varep (u_\varep)=0$ in $B(Q,3r)\cap\Omega$
with either $\frac{\partial u_\varep}{\partial\nu_\varep}=0$ or $u_\varep=0$
on $B(Q,3r)\cap \partial\Omega$.
\end{lemma}
\begin{proof}
Recall that the nontangential maximal function $(\nabla u_\varep)^*$ is defined by
$$
(\nabla u_\varep)^*(P)
=\sup \big\{
|\nabla u_\varep (x)|:\
x\in\Omega \text{ and } |x-P|< C_0 \, \text{dist}(x, \partial\Omega)\big\},
$$
for $P\in \partial\Omega$,
where $C_0=C(\Omega)>1$ is sufficiently large.
Note that $$
(\nabla u_\varep)^*(P)= \max\big\{
\mathcal{M}_{r,1} (\nabla u_\varep)(P), \mathcal{M}_{r,2} (\nabla u_\varep)(P)\big\},
$$
where
$$
\aligned
& \mathcal{M}_{r,1} (\nabla u_\varep) (P)
=\sup \big\{
|\nabla u_\varep (x)|:\
x\in\Omega, \ |x-P|\le c_0r \text{ and } |x-P|< C_0 \, \text{dist}(x, \partial\Omega)\big\},\\
& \mathcal{M}_{r,2} (\nabla u_\varep) (P)
=\sup \big\{
|\nabla u_\varep (x)|:\
x\in\Omega, \ |x-P|>c_0r \text{ and } |x-P|< C_0 \, \text{dist}(x, \partial\Omega)\big\},
\endaligned
$$
and $c_0=c(\Omega)>0$ is sufficiently small.
Using the interior estimate (\ref{interior-estimate}), it is easy to see that
$\sup_{B(Q,r)\cap\partial\Omega} \mathcal{M}_{r,2} (\nabla u_\varep)$ is
bounded by the right hand side of (\ref{9.1.1}).
To estimate $\mathcal{M}_{r,1}(\nabla u_\varep)$,
we observe that
\begin{equation}\label{9.1.2}
\aligned
\sup_{B(Q,r)\cap\partial\Omega} \mathcal{M}_{r,1} (\nabla u_\varep)
& \le \sup_{B(Q,3r/2)\cap\Omega} |\nabla u_\varep|\\
&\le C\, \left\{ \average_{B(Q,2r)\cap\Omega} |\nabla u_\varep|^2\, dx \right\}^{1/2} \\
&\le C\, \left\{ \average_{B(Q,2r)\cap\partial\Omega}
|(\nabla u_\varep)^*|^2\, d\sigma\right\}^{1/2}.
\endaligned
\end{equation}
We point out that the second inequality in (\ref{9.1.2}) follows from
the boundary Lipschitz estimate. For the Neumann condition $\frac{\partial u_\varep}
{\partial\nu_\varep}=0$ on $B(Q,3r)\cap\partial\Omega$, the estimate was given
by Theorem \ref{boundary-Lipschitz-theorem}, while the case of Dirichlet condition
follows from Theorem 2 in \cite[p.805]{AL-1987}.
\end{proof}
\begin{lemma}\label{lemma-9.2}
Suppose that $A\in\Lambda (\mu, \lambda, \tau)$ and $A^*=A$.
Let $p>2$ and $\Omega$ be a bounded Lipschitz domain.
Assume that
\begin{equation}\label{9.2.1}
\left(\average_{B(Q,r)\cap\partial\Omega} |(\nabla u_\varep)^*|^p\, d\sigma\right)^{1/p}
\le
C\, \left(\average_{B(Q,2r)\cap\partial\Omega} |(\nabla u_\varep)^*|^2
\, d\sigma\right)^{1/2},
\end{equation}
whenever $u_\varep \in W^{1,2}(B(Q,3r)\cap \Omega)$
is a weak solution to $\mathcal{L}_\varep (u_\varep)=0$ in $B(Q, 3r)\cap\Omega$
and $\frac{\partial u_\varep}{\partial\nu_\varep} =0$ on $B(Q,3r)\cap\partial\Omega$
for some $Q\in \partial\Omega$ and $0<r<r_0$.
Then the weak solutions to $\mathcal{L}_\varep(u_\varep)=0$ in $\Omega$
and $\frac{\partial u_\varep}{\partial\nu_\varep}=g\in L^p(\partial\Omega)$
satisfy the estimate $\|(\nabla u_\varep)^*\|_{L^p(\partial\Omega)}
\le C\, \| g\|_{L^p(\partial\Omega)}$.
\end{lemma}
\begin{proof} This follows by a real variable argument originating
in \cite{Caffarelli-1998} and
further developed in \cite{Shen-2005-bounds, Shen-2006-ne, Shen-2007-boundary}.
In \cite{Kim-Shen} the argument was used to prove that for any given $p>2$ and
Lipschitz domain $\Omega$, the solvability of the Neumann problem for Laplace's equation
$\Delta u=0$ in $\Omega$ with $L^p$ boundary data is equivalent to a weak reverse H\"older inequality,
similar to (\ref{9.2.1}).
With the solvability of the $L^2$ Neumann problem for $\mathcal{L}_\varep (u_\varep)
=0$ \cite{Kenig-Shen-2} and interior estimate (\ref{interior-estimate}), the proof of the sufficiency
of the weak reverse H\"older inequality in \cite[pp.1819-1821]{Kim-Shen} extends directly
to the present case. We omit the details.
\end{proof}
It follows from Lemmas \ref{lemma-9.1} and \ref{lemma-9.2} that
Theorem \ref{maximal-function-theorem} holds for $p>2$.
To handle the case $1<p<2$, as in the case of
Laplace's equation \cite{Dahlberg-Kenig-1987},
one considers the solutions of the $L^2$ Neumann problem
with atomic data $\frac{\partial u_\varep}{\partial \nu_\varep} =a$, where
$\int_{\partial\Omega} a=0$,
supp$(a)\subset B(Q,r)\cap\partial\Omega$ for some $Q\in \partial\Omega$ and
$0<r<r_0$, and $\|a\|_{L^\infty(\partial\Omega)}\le r^{1-d}$.
One needs to show that
\begin{equation}\label{9.3.1}
\int_{\partial\Omega} (\nabla u_\varep)^*\, d\sigma \le C.
\end{equation}
The case $1<p<2$ follows from (\ref{9.3.1}) by interpolation.
To prove (\ref{9.3.1}), one first uses the H\"older inequality and the $L^2$
estimate $\|(\nabla u_\varep)^*\|_{L^2(\partial\Omega)}\le C\, \| a\|_{L^2(\partial\Omega)}
\le C r^{\frac{1-d}{2}}$ to see that
\begin{equation}\label{9.3.2}
\int_{B(Q, Cr)\cap\partial\Omega} (\nabla u_\varep)^*\, d\sigma \le C.
\end{equation}
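In fact, since $\sigma\big( B(Q, Cr)\cap\partial\Omega\big)\le Cr^{d-1}$,
the Cauchy-Schwarz inequality gives
$$
\int_{B(Q, Cr)\cap\partial\Omega} (\nabla u_\varep)^*\, d\sigma
\le C r^{\frac{d-1}{2}}
\left(\int_{\partial\Omega} |(\nabla u_\varep)^*|^2\, d\sigma \right)^{1/2}
\le C r^{\frac{d-1}{2}}\cdot r^{\frac{1-d}{2}} =C.
$$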
Next, to estimate $(\nabla u_\varep)^*$ on $\partial\Omega\setminus B(Q, Cr)$,
we show that
\begin{equation}\label{9.3.3}
\int_{B(P_0, c\rho)\cap\partial\Omega}
(\nabla u_\varep)^*\, d\sigma \le C\left(\frac{r}{\rho}\right)^\gamma,
\end{equation}
for some $\gamma>0$, where $\rho=|P_0-Q|\ge Cr$. Note that
\begin{equation}\label{9.3.4}
u_\varep (x)
=b +\int_{B(Q,r)\cap\partial\Omega}
\big\{ N_\varep (x,y)-N_\varep (x,Q)\big\} a(y)\, d\sigma (y)
\end{equation}
for some $b\in \br^m$. It follows that
\begin{equation}\label{9.3.5}
|\nabla u_\varep (x)|
\le C
\average_{B(Q,r)\cap\partial\Omega}
\big| \nabla_x \big\{ N_\varep (x,y)-N_\varep (x,Q)\big\}\big|\, d\sigma (y).
\end{equation}
Hence, if $z\in \Omega$ and
$c\rho\le |z-P|<C_0\delta(z)$ for some $P\in B(P_0, c\rho)\cap
\partial\Omega$,
$$
\aligned
|\nabla u_\varep (z)|
&\le C\left(\average_{B(z, c\delta(z))} |\nabla u_\varep (x)|^2\, dx\right)^{1/2}\\
& \le C \average_{B(Q,r)\cap\partial\Omega}
\left(\average_{B(z,c\delta (z))} |\nabla_x \big\{
N_\varep (x,y)-N_\varep (x,Q)\big\}|^2\, dx \right)^{1/2} d\sigma (y)\\
&\le C\rho^{1-d} \left(\frac{r}{\rho}\right)^\gamma,
\endaligned
$$
where $\delta (z)=\text{dist}(z, \partial\Omega)$
and
we have used the interior estimate, Minkowski's inequality and Theorem \ref{Neumann-theorem-5.2}.
This implies that
\begin{equation}\label{9.3.6}
\int_{B(P_0, c\rho)\cap \partial\Omega}
\mathcal{M}_{\rho, 2} (\nabla u_\varep)\, d\sigma \le C\left(\frac{r}{\rho}\right)^\gamma.
\end{equation}
Finally, to estimate $\mathcal{M}_{\rho,1} (\nabla u_\varep)$, we note that the $L^2$ nontangential
maximal function estimate, together with an integration argument, gives
\begin{equation}\label{9.3.7}
\int_{B(P_0, c\rho)\cap\partial\Omega}
|\mathcal{M}_{\rho,1} (\nabla u_\varep)|^2\, d\sigma
\le \frac{C}{\rho}
\int_{B(P_0, 2c\rho)\cap\Omega} |\nabla u_\varep|^2\, dx,
\end{equation}
(see \cite{Dahlberg-Kenig-1987} for the case of Laplace's equation).
It follows by H\"older inequality that
\begin{equation}\label{9.3.8}
\aligned
\int_{B(P_0, c\rho)\cap\partial\Omega}
\mathcal{M}_{\rho,1} (\nabla u_\varep)\, d\sigma
& \le C\rho^{d-1} \left(\average_{B(P_0, 2c\rho)\cap\Omega} |\nabla u_\varep|^2\, dx\right)^{1/2}\\
&\le C\left(\frac{r}{\rho}\right)^\gamma,
\endaligned
\end{equation}
where the last inequality follows from (\ref{9.3.5}) and Theorem \ref{Neumann-theorem-5.2}.
In view of (\ref{9.3.6}) and (\ref{9.3.8}), we have proved (\ref{9.3.3}). The desired
estimate
$$
\int_{\partial\Omega\setminus B(Q, Cr)}
(\nabla u_\varep)^*\, d\sigma \le C
$$
follows from (\ref{9.3.3}) by a simple covering argument.
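In fact, writing $\partial\Omega\setminus B(Q, Cr)=\bigcup_{k\ge 0}\Sigma_k$ with
$\Sigma_k =\big\{ P\in \partial\Omega:\ 2^k Cr\le |P-Q|<2^{k+1} Cr\big\}$,
each $\Sigma_k$ may be covered by surface balls $B(P_0, c\rho)\cap\partial\Omega$
with $P_0\in \Sigma_k$ and $\rho=|P_0-Q|\approx 2^k r$, the number of balls
being bounded independently of $k$ and $r$.
Hence, by (\ref{9.3.3}),
$$
\int_{\partial\Omega\setminus B(Q, Cr)} (\nabla u_\varep)^*\, d\sigma
\le C\sum_{k\ge 0} 2^{-k\gamma} \le C.
$$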
This completes the proof of (\ref{9.3.1}) and hence of Theorem \ref{maximal-function-theorem}.
\qed
\begin{remark}\label{remark-9.0}
{\rm
The estimate $\|\nabla u_\varep\|_{L^q(\Omega)} \le C \| g\|_{L^p(\partial\Omega)}$
with $q=\frac{pd}{d-1}$ in Theorem \ref{maximal-function-theorem}
follows from Theorem \ref{W-1-p-theorem},
using the fact that $L^p(\partial\Omega)\subset B^{-\frac{1}{q}, q}(\partial\Omega)$.
The estimate also follows from the observation that $\|w\|_{L^q(\Omega)}
\le C\| (w)^*\|_{L^p(\partial\Omega)}$ for any $w$
in a Lipschitz domain $\Omega$. To see this, we note that
\begin{equation}\label{9.0.1}
|w(x)|\le C\int_{\partial\Omega}
\frac{ (w)^* (Q)}{|x-Q|^{d-1}}\, d\sigma (Q).
\end{equation}
By a duality argument, it then suffices to show that the operator
$$
I_1( f) (x) =\int_{\Omega} \frac{ f(y)}{|x-y|^{d-1}} \, dy
$$ is bounded from $L^{q^\prime}(\Omega)$ to $L^{p^\prime}(\partial\Omega)$.
This may be proved by using fractional and singular integral estimates
(see e.g. \cite[p.712]{Shen-2006-ne}).
}
\end{remark}
\begin{remark}\label{remark-9.1}
{\rm Suppose that $d \ge 3$. For $g\in L^p(\partial\Omega)$,
consider the $L^p$ Neumann problem in the exterior domain
$\Omega_-=\br^d\setminus \overline{\Omega}$,
\begin{equation}\label{exterior-Neumann}
\left\{\aligned
\mathcal{L}_\varep (u_\varep) & =0 \quad \text{ in } \Omega_-,\\
\frac{\partial u_\varep}{\partial \nu_\varep} & =g \quad \text{ on } \partial\Omega,\\
(\nabla u_\varep)^* & \in L^p(\partial\Omega) \text{ and }
u_\varep (x) =O(|x|^{2-d}) \quad \text{ as } |x|\to\infty.
\endaligned
\right.
\end{equation}
It follows from \cite{Kenig-Shen-2} that if $p=2$ and $\Omega$ is a bounded Lipschitz domain with connected
boundary, the unique solution to (\ref{exterior-Neumann}) satisfies
the estimate $\| (\nabla u_\varep)^*\|_{L^2(\partial\Omega)}
\le C\, \| g\|_{L^2(\partial\Omega)}$
(if $\partial\Omega$ is not connected, the data $g$ needs to satisfy some compatibility
conditions).
A careful inspection of Theorem \ref{maximal-function-theorem}
shows that the $L^2$ results extend to $L^p$ for $1<p<\infty$,
if $\Omega$ is a bounded $C^{1, \alpha}$
domain.
}
\end{remark}
\section{$L^p$ Regularity problem}
In this section we outline the proof of the following.
\begin{thm}\label{regularity-theorem}
Suppose that $A\in \Lambda (\mu, \lambda, \tau)$ and $A^*=A$.
Let $\Omega$ be a bounded $C^{1, \alpha}$ domain with connected boundary and $1<p<\infty$.
Then, for any $f\in W^{1,p}(\partial\Omega)$, the unique solution to
$\mathcal{L}_\varep (u_\varep) =0$ in $\Omega$, $u_\varep =f$ on $\partial\Omega$
and $(\nabla u_\varep)^*\in L^p(\partial\Omega)$ satisfies the estimate
\begin{equation}\label{10.1}
\|(\nabla u_\varep)^*\|_{L^p(\partial\Omega)}
\le C\, \| \nabla_{tan} f\|_{L^p(\partial\Omega)},
\end{equation}
where $C$ depends only on $d$, $m$, $p$, $\mu$, $\lambda$, $\tau$ and $\Omega$.
\end{thm}
The case $p=2$ was proved in \cite{Kenig-Shen-2} for Lipschitz domains.
The case $p>2$ follows from Lemma \ref{lemma-9.1}
and the following analog of Lemma
\ref{lemma-9.2}.
\begin{lemma}\label{lemma-10.2}
Suppose that $A\in\Lambda (\mu, \lambda, \tau)$ and $A^*=A$.
Let $p>2$ and $\Omega$ be a bounded Lipschitz domain with connected boundary.
Assume that
\begin{equation}\label{10.2.1}
\left(\average_{B(Q,r)\cap\partial\Omega} |(\nabla u_\varep)^*|^p\, d\sigma \right)^{1/p}
\le
C_0\, \left(\average_{B(Q,2r)\cap\partial\Omega} |(\nabla u_\varep)^*|^2\, d\sigma \right)^{1/2},
\end{equation}
whenever $u_\varep \in W^{1,2}(B(Q,3r)\cap \Omega)$
is a weak solution to $\mathcal{L}_\varep (u_\varep)=0$ in $B(Q, 3r)\cap\Omega$
and $u_\varep=0$ on $B(Q,3r)\cap\partial\Omega$
for some $Q\in \partial\Omega$ and $0<r<r_0$.
Then the weak solution to $\mathcal{L}_\varep(u_\varep)=0$ in $\Omega$
and $u_\varep =f\in W^{1,p}(\partial\Omega)$
satisfies the estimate $\|(\nabla u_\varep)^*\|_{L^p(\partial\Omega)}
\le C\, \| \nabla_{tan}f \|_{L^p(\partial\Omega)}$,
where $C$ depends only on $d$, $m$, $p$, $\mu$, $\lambda$, $\tau$, $r_0$,
$C_0$ and $\Omega$.
\end{lemma}
The proof of Lemma \ref{lemma-10.2} is similar to that of Lemma \ref{lemma-9.2}.
We refer the reader to \cite{Kilty-Shen-regularity} where a similar statement was proved
for elliptic equations with constant coefficients.
To handle the case $1<p<2$, we follow the approach for Laplace's equation in Lipschitz domains
\cite{Dahlberg-Kenig-1987} and consider $L^2$ solutions
with Dirichlet data $u_\varep =a$, where supp$(a)\subset B(Q,r)\cap\partial\Omega$
for some $Q\in \partial\Omega$ and $0<r<r_0$, and $\|\nabla_{tan} a\|_{L^\infty(\partial\Omega)}
\le r^{1-d}$. By interpolation it suffices to show estimate (\ref{9.3.1}).
Note that $|a|\le C r^{2-d}$. Using the estimates on Green's functions in \cite{AL-1987}, one has
\begin{equation}\label{10.2.2}
|\nabla u_\varep (x)|\le \frac{Cr}{|x-Q|^d} \qquad \text{ if } |x-Q|\ge Cr.
\end{equation}
Estimate (\ref{9.3.1})
follows easily from the $L^2$ estimate $\|(\nabla u_\varep)^*\|_{L^2(\partial\Omega)}
\le C\|\nabla_{tan} a\|_{L^2(\partial\Omega)}$ and (\ref{10.2.2}).
\begin{remark}\label{remark-10.1}
{\rm
One may also consider the $L^p$ regularity problem for the exterior domain:
given $f\in W^{1,p}(\partial\Omega)$, find a solution $u_\varep$ to $\mathcal{L}_\varep
(u_\varep) =0$ in $\Omega_-$ such that $u_\varep=f$ on $\partial\Omega$,
$(\nabla u_\varep)^*\in L^p(\partial\Omega)$ and $u_\varep (x)=O(|x|^{2-d})$
as $|x|\to\infty$.
It follows from \cite{Kenig-Shen-2} that if $\Omega$ is a bounded
Lipschitz domain in $\br^d$, $d\ge 3$,
then the unique solution to the $L^2$ regularity problem in $\Omega_-$
satisfies the estimate $\|(\nabla u_\varep)^*\|_{L^2(\partial\Omega)}
\le C\, \| \nabla_{tan} f\|_{W^{1,2}(\partial\Omega)}$.
An inspection of Theorem \ref{regularity-theorem} shows that
the $L^2$ result extends to $L^p$ for $1<p<\infty$,
if $\Omega$ is a $C^{1,\alpha}$ domain.
}
\end{remark}
\section{Representation by layer potentials}
For $f\in L^p(\partial\Omega)$, the single layer potential $u_\varep =\mathcal{S}_\varep(f)$ and
double layer potential $w_\varep=\mathcal{D}_\varep (f)$ for the operator
$\mathcal{L}_\varep$ in $\Omega$ are defined by
\begin{equation}\label{layer-potential}
\aligned
u_\varep^\alpha (x) &=\int_{\partial\Omega}
\Gamma_{A,\varep}^{\alpha\beta} (x,y) f^\beta (y)\, d\sigma (y),\\
w^\alpha_\varep (x) &=\int_{\partial\Omega}
\left( \frac{\partial}{\partial \nu_{\varep}^*}
\big\{ \Gamma_{A^*,\varep}^\alpha (y, x)\big\}\right)^\beta f^\beta (y)\, d\sigma (y),
\endaligned
\end{equation}
where $\Gamma_{A,\varep}(x,y)$ and $\Gamma_{A^*,\varep} (x,y)=(\Gamma_{A, \varep} (y,x))^*$
are the fundamental
solutions for $\mathcal{L}_\varep$ and
$(\mathcal{L}_\varep)^*$ respectively.
Both $\mathcal{S}_\varep (f)$ and $\mathcal{D}_\varep (f)$
are solutions of $\mathcal{L}_\varep (u)=0$
in $\br^d\setminus \partial\Omega$. Under the assumptions that
$A\in \Lambda (\mu, \lambda,\tau)$ and $\Omega$ is a bounded Lipschitz domain,
it was proved in \cite{Kenig-Shen-2} that for $1<p<\infty$,
$$
\| \big(\nabla \mathcal{S}_\varep (f)\big)^*\|_{L^p(\partial\Omega)}
+\| \big( \mathcal{D}_\varep (f)\big)^*\|_{L^p(\partial\Omega)}
\le C_p \| f\|_{L^p(\partial\Omega)},
$$
where $C_p$ depends only on $d$, $m$, $\mu$, $\lambda$, $\tau$, $p$ and
the Lipschitz character of $\Omega$.
Furthermore, $(\nabla u_\varep)_\pm (P)$ exists for a.e. $P\in \partial\Omega$,
$\left(\frac{\partial u_\varep}{\partial \nu_\varep}\right)_\pm
=(\pm \frac12 I + \mathcal{K}_{A, \varep}) (f)$ and
$ (w_\varep)_\pm = (\mp\frac12 I +\mathcal{K}_{A^*, \varep}^*) (f)$,
where $\mathcal{K}_{A^*,\varep}^* $ is the adjoint of $\mathcal{K}_{A^*, \varep}$.
Here $(u)_\pm$ denotes the nontangential limits on $\partial\Omega$ of $u$, taken
from $\Omega$ and $\Omega_-$ respectively.
Let $L^p_0 (\partial\Omega, \br^m)$ denote the space of functions in $L^p(\partial\Omega,\br^m)$
with mean value zero.
\begin{thm}\label{layer-potential-theorem}
Let $\Omega$ be a bounded $C^{1,\alpha}$ domain in $\br^d$, $d\ge 3$
with connected boundary.
Suppose that $A\in \Lambda(\mu, \lambda, \tau)$ and $A^*=A$.
Then, for $1<p<\infty$,
\begin{equation}\label{11.1}
\aligned
\frac12 I +\mathcal{K}_{A, \varep} &:
L_0^p(\partial\Omega, \br^m) \to L_0^p(\partial\Omega, \br^m),\\
-\frac12 I +\mathcal{K}^*_{A^*, \varep} &:
L^p(\partial\Omega, \br^m) \to L^p(\partial\Omega, \br^m),\\
\mathcal{S}_\varep &: L^p(\partial\Omega, \br^m)\to
W^{1,p}(\partial\Omega, \br^m),
\endaligned
\end{equation}
are invertible and the operator norms of their inverses are bounded by
a constant independent of $\varep$.
\end{thm}
\begin{proof}
The case $p=2$ was proved in \cite{Kenig-Shen-2} for Lipschitz domains.
If $\Omega$ is $C^{1,\alpha}$, the results for $p\neq 2$ follow from the solvabilities of
the $L^p$ Neumann and regularity problems with uniform estimates in $\Omega$ and $\Omega_-$
(see Theorem \ref{maximal-function-theorem}, Theorem \ref{regularity-theorem},
Remarks \ref{remark-9.1} and \ref{remark-10.1}).
\end{proof}
As a corollary, solutions to the $L^p$ Dirichlet, Neumann and regularity problems for
$\mathcal{L}_\varep (u_\varep)=0$ may be represented
by layer potentials with uniformly $L^p$ bounded density functions.
This shows that the classical method of integral equations applies to the
elliptic system $\mathcal{L}_\varep (u_\varep)=0$.
\begin{thm}\label{representation-theorem} Let $1<p<\infty$.
Under the same assumptions on $A$ and $\Omega$
as in Theorem \ref{layer-potential-theorem}, the following holds.
(i) For $g\in L^p(\partial\Omega)$, the solution
to the $L^p$ Dirichlet problem in $\Omega$ with $u_\varep=g$ on $\partial\Omega$ is given
by $u_\varep =\mathcal{D}_\varep (h_\varep)$ with $\|h_\varep\|_{L^p(\partial\Omega)}
\le C_p\|g\|_{L^p(\partial\Omega)}$.
(ii) For $g\in L^p(\partial\Omega)$, the solution to the $L^p$ Neumann problem in $\Omega$
with $\frac{\partial u_\varep}{\partial\nu_\varep} =g$ on $\partial\Omega$
is given by
$u_\varep =\mathcal{S}_\varep (h_\varep)$ with $\|h_\varep\|_{L^p(\partial\Omega)}
\le C_p\| g\|_{L^p(\partial\Omega)}$.
(iii)
For $g\in W^{1,p}(\partial\Omega)$,
the solution to the $L^p$ regularity problem in $\Omega$ with $u_\varep =g$ on $\partial\Omega$
is given by
$u_\varep =\mathcal{S}_\varep (h_\varep)$ with $\|h_\varep\|_{L^p(\partial\Omega)}
\le C_p\| g\|_{L^p(\partial\Omega)}$.
\end{thm}
\newcommand{\sect}[1]{\setcounter{equation}{0}\section{#1}}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\newcommand{\vs}[1]{\rule[- #1 mm]{0mm}{#1 mm}}
\newcommand{\hs}[1]{\hspace{#1mm}}
\newcommand{\mb}[1]{\hs{5}\mbox{#1}\hs{5}}
\newcommand{\begin{eqnarray}}{\begin{eqnarray}}
\newcommand{\end{eqnarray}}{\end{eqnarray}}
\newcommand{\wt}[1]{\widetilde{#1}}
\newcommand{\und}[1]{\underline{#1}}
\newcommand{\ov}[1]{\overline{#1}}
\newcommand{\sm}[2]{\frac{\mbox{\footnotesize #1}\vs{-2}}
{\vs{-2}\mbox{\footnotesize #2}}}
\newcommand{\partial}{\partial}
\newcommand{\epsilon}{\epsilon}
\newcommand{\mbox{\rule{0.2mm}{2.8mm}\hspace{-1.5mm} R}}{\mbox{\rule{0.2mm}{2.8mm}\hspace{-1.5mm} R}}
\newcommand{Z\hspace{-2mm}Z}{Z\hspace{-2mm}Z}
\newcommand{{\cal D}}{{\cal D}}
\newcommand{{\cal G}}{{\cal G}}
\newcommand{{\cal K}}{{\cal K}}
\newcommand{{\cal W}}{{\cal W}}
\newcommand{\vec{J}}{\vec{J}}
\newcommand{\vec{\lambda}}{\vec{\lambda}}
\newcommand{\vec{\sigma}}{\vec{\sigma}}
\newcommand{\vec{\tau}}{\vec{\tau}}
\newcommand{\vec{W}}{\vec{W}}
\newcommand{\stackrel{\otimes}{,}}{\stackrel{\otimes}{,}}
\newcommand{\NP}[1]{Nucl.\ Phys.\ {\bf #1}}
\newcommand{\PL}[1]{Phys.\ Lett.\ {\bf #1}}
\newcommand{\NC}[1]{Nuovo Cimento {\bf #1}}
\newcommand{\CMP}[1]{Comm.\ Math.\ Phys.\ {\bf #1}}
\newcommand{\PR}[1]{Phys.\ Rev.\ {\bf #1}}
\newcommand{\PRL}[1]{Phys.\ Rev.\ Lett.\ {\bf #1}}
\newcommand{\MPL}[1]{Mod.\ Phys.\ Lett.\ {\bf #1}}
\newcommand{\BLMS}[1]{Bull.\ London Math.\ Soc.\ {\bf #1}}
\newcommand{\IJMP}[1]{Int.\ Jour.\ of\ Mod.\ Phys.\ {\bf #1}}
\newcommand{\JMP}[1]{Jour.\ of\ Math.\ Phys.\ {\bf #1}}
\newcommand{\LMP}[1]{Lett.\ in\ Math.\ Phys.\ {\bf #1}}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\newpage
\setcounter{page}{0}
\pagestyle{empty}
\vs{30}
\begin{center}
{\LARGE {\bf $N=1,2$ Super-NLS Hierarchies}}\\
[.25cm]
{\LARGE {\bf as Super-KP Coset Reductions.}}\\[1cm]
\vs{8}
{\large {F. Toppan}}\\
\quad \\
{\em Laboratoire de Physique Th\'eorique ENSLAPP
\footnote{URA 14-36 du CNRS, associ\'ee \`a l'Ecole Normale
Sup\'erieure de
Lyon, et au Laboratoire d'Annecy-le-Vieux de Physique des Particules
(IN2P3-CNRS).},}\\
{\em ENS Lyon, 46 all\'ee d'Italie,} \\
{\em F-69364 Lyon Cedex 07, France.}\\
{E-mail: [email protected]}
\end{center}
\vs{8}
\centerline{ {\bf Abstract}}
\indent
We define consistent finite-superfields reductions of the $N=1,2$
super-KP hierarchies via the coset approach we already developed for
reducing the bosonic KP-hierarchy (generating e.g. the NLS hierarchy
from the $sl(2)/U(1)-{\cal KM}$ coset). We work in a manifestly
supersymmetric
framework and illustrate our method by treating explicitly the
$N=1,2$
super-NLS hierarchies.
W.r.t. the bosonic case the ordinary covariant derivative
is now replaced by a spinorial one
containing a spin ${\textstyle {1\over 2}}$ superfield.
Each coset reduction is associated to a rational super-${\cal W}$ algebra
encoding a non-linear super-${\cal W}_\infty$ algebra structure.
In the $N=2$ case two conjugate sets of superLax operators,
equations of motion and infinite hamiltonians in involution are
derived.
Modified hierarchies are obtained from the original ones via
free-field mappings (just as an m-NLS equation arises by representing
the $sl(2)-{\cal KM}$ algebra through the classical Wakimoto
free-fields).
\vfill
\rightline{{\small E}N{\large S}{\Large L}{\large A}P{\small
P}-L-467/94}
\rightline{April 1994}
\newpage
\pagestyle{plain}
\renewcommand{\thefootnote}{\arabic{footnote}}
\setcounter{footnote}{0}
\sect{Introduction}
\indent
The hierarchy of integrable equations leading to a solitonic
behaviour has been a widely studied subject since the fundamental
work of Gardner, Greene, Kruskal and Miura \cite{GGKM} concerning the
KdV equation. In more recent times it has
particularly attracted physicists' attention, especially in connection
with matrix models, which are a sort of effective description for
two-dimensional gravity.
In such a context the partition functions of matrix models satisfy
the
so-called Virasoro-${\cal W}$ constraints and can be expressed in terms of
the $\tau$-functions of the hierarchies of classical integrable
equations (for a review on that, see e.g. \cite{mar} and references
therein).\par
Classifying all possible hierarchies is therefore
a very attractive problem for both mathematical and physical
reasons.
Working in the formalism of pseudo-differential operators (PDO) such
a problem can be formalized as follows: determining
all possible algebraic constraints, consistent with the KP flows, on
the infinite fields entering the KP operator (reduction procedure).
Apart from the well-known solutions of Drinfeld-Sokolov type
\cite{DS}, which can be expressed in terms of purely differential Lax
operators, in the literature other solutions, called
multi-fields KP reductions, have been obtained
\cite{{ara},{bonora},{bonora2}}. Their basic features can be stated
as follows: in the dispersionless limit they give rise
to a Lax operator fractional in the momentum $p$. Moreover, the
algebra of Virasoro-${\cal W}$ constraints turns out to be a ${\cal W}_\infty$
algebra.\par
Inspired by the works \cite{yuwu} (see also \cite{bakas}), in a
previous paper
\cite{toppan} we have shown how such reductions can be derived via a
coset construction, involving a factorization of a Kac-Moody
subalgebra out of a given Kac-Moody
or polynomial ${\cal W}$ algebra. In our framework we have immediately at
disposal a Poisson-brackets structure providing a (multi)hamiltonian
dynamics. Furthermore, the non-linear ${\cal W}_\infty$ algebra can be
compactly interpreted as
a finite rational ${\cal W}$ algebra (to our knowledge, the notion of
rational ${\cal W} $ algebra has been first introduced in \cite{feher}; in
\cite{DFSTR} it has been shown that rational ${\cal W}$ algebras appear in
the somehow different context of coset construction; a detailed
analysis of classical rational ${\cal W}$ algebras and their quantum
deformations has been given in \cite{feher2}).\par
Even if we have not yet attempted to give a general formal proof, the
examples worked out so far strongly suggest that to each coset is
associated a corresponding KP reduction; there exists indeed a
well-defined procedure telling how to associate to a given coset a
possible KP reduction. In the absence of a general theorem the
consistency of the derived reduced operator with the
KP flows should be explicitly checked, leading to lenghty but
straightforward computations. No counterexample has been found so
far.\par
A point should be made clear: in our framework we do not need to
introduce
Dirac's brackets since we do not impose hamiltonian reductions; due
to that we are able to derive modified hierarchies
via free-field mappings provided by (the classical version of) the
Wakimoto representation \cite{waki} and its generalizations.\par
In this paper we address the problem of extending the previous
bosonic construction to the $N=1,2$ supersymmetric cases, leading to
consistent coset
reductions of the super-KP hierarchy.
The fundamental reference we will follow concerning the definition of
the KP hierarchy for odd-graded derivative is \cite{manin}.
Supersymmetric integrable hierarchies have a vast literature, see
among others e.g. \cite{others}
and
for the $N=2$ case \cite{N2}.\par
We will work in a manifestly supersymmetric formalism and illustrate
our procedure by explicitly showing
the coset derivation of the $N=1,2$ super-NLS equations. Due to the
above considerations our framework can be straightforwardly applied
to derive more complicated coset theories. \par
The basic difference with respect to the bosonic case lies in the
fact that now
the subalgebra we will factor out is generated by spin
${\textstyle{1\over 2}}$
supercurrents which enter a spinorial, fermionic, covariant
derivative.\par
The supercurrents algebra generating the $N=1$ super-NLS theory
involves two oppositely charged fermionic superfields of spin
${{\textstyle{1\over 2}}}$.
The coset construction can be performed as in the bosonic case
leading
to a non-linear super-${\cal W}$ algebra involving an infinite number of
primary superfields, one for each integral or half integral value of
the spin $s\geq 1$.
Such a superalgebra can be regarded as a rational super-${\cal W}$ algebra.
It guarantees the existence of a consistent reduction of the super-KP
hierarchy, which in its turn implies the integrability of the
super-NLS equation.
The totally new feature with respect to the bosonic case, i.e. that
the subsector of
fermionic superfields only appears in the reduced super-KP operator,
will be fully discussed.\par
The first two fermionic superfields in the coset super-algebra are
the first two
(fermionic) hamiltonian densities of the super-NLS equation. Two
compatible super-Poisson brackets are derived as in the bosonic case.
A superWakimoto representation for the supercurrents algebra enables
us to introduce the
associated modified super-NLS hierarchy.\par
Our results concerning the $N=1$ super-NLS equation should be
compared with that of \cite{das} (and \cite{roe}). The equation we
derive coincide with the one
analyzed in \cite{das}. While the coefficients in \cite{das} were
suitably chosen in order to provide integrability, in our case they
are automatically furnished by the coset construction. We remark here
that the Lax operator given in
\cite{das} is of matricial type, while our super-KP reduced operator
is of scalar type. More comments on the connection of the two
approaches will be given in the text.\par
For what concerns the $N=2$ case we will make use of the formalism,
already
developped in \cite{IT} for Toda theories, based on chiral and
antichiral superfields. They are equivalent to $N=1$ superfields,
which allows us to reduce the $N=2$ case to the previous one. Any
object in the $N=2$ theory (namely superfields,
covariant derivatives, hamiltonians, Lax operators) has its chirally
conjugated counterpart.\par
The scheme of the paper is the following: in the next two sections
the bosonic construction is reviewed in detail and the basic
structures which are used
also in the super-case are discussed. We would like to point out that
some of the results here presented are new. Next, the $N=1$ formalism
is introduced and the definition of super-algebra cosets is given.
The $N=1$ super-NLS hierarchy
is analyzed. The last two sections are devoted to introducing the
formalism and extending the results to the $N=2$ case.
\sect{The coset reduction of the bosonic KP hierarchy.}
\indent
In this section we summarize the basic results of \cite{toppan}
concerning the coset reduction of the bosonic KP hierarchy.\par
Let us state the problem first: the KP hierarchy (we follow
\cite{dickey}
and the conventions there introduced)
is defined through the pseudodifferential Lax operator
\begin{eqnarray}
L &=& \partial + \sum_{i=0}^{\infty} U_i\partial^{-i}
\label{kp}
\end{eqnarray}
where the $U_i$ are an infinite set of fields depending on the
spatial coordinate $x$ and the time parameters $t_k$. Let us denote
as ${L^k}_+$ the purely differential part of the $k$-th power of the
$L$ operator; an infinite set of differential equations, or flows,
for the fields $U_i$ is introduced via the equations
\begin{eqnarray}
{\partial L\over \partial t_k}& = &[ {L^k}_+,L]
\label{flows}
\end{eqnarray}
The quantities
\begin{eqnarray}
F_k &=& <L^k>
\label{first}
\end{eqnarray}
are first integrals of motion for the flows (\ref{flows}). Here the
symbol
$<A>$ denotes the integral of the residue ($<A>=\int dw a_{-1} (w)$)
for the
generic pseudodifferential operator
$A = ...+ a_{-1} \partial^{-1} +... $.\par
An infinite set of compatible Poisson brackets structures can be
introduced, leading to a (multi)-hamiltonian structure for the flows
(\ref{flows}). The first integrals of motion are hamiltonians in
involution with respect to
all Poisson brackets. \par
The flows (\ref{flows}) involve an infinite set of fields. The
reduction procedure
of the KP hierarchy consists in introducing algebraic constraints on
such fields, so that only a finite number of them remain
independent.
Such constraints must be compatible with the flows (\ref{flows}). As
a final result one gets a hierarchy of integrable differential
equations involving
a finite number of fields only.\par
The canonical way to perform a reduction consists in imposing the
constraint
\begin{eqnarray}
L^n ={L^n}_+
\label{kdvred}
\end{eqnarray}
which tells that the $n$-th power of $L$ is a purely differential
operator, for a given positive integer $n=2,3,...$ . Such reductions
lead to generalized KdV hierarchies: for $n=2$ one gets the KdV
equation, for $n=3$ the Boussinesq one and so on. The hamiltonian
structure for such reduced hierarchies is induced from the
hamiltonian structure of the original unreduced KP. These hierarchies
are the ones originally described by Drinfeld-Sokolov \cite{DS}.
Under the Poisson brackets structure the fields entering $L^n$
satisfy a classical
finite non-linear ${\cal W}$ algebra (of polynomial type).\par
In the limit of dispersionless Lax equation (which is taken by
assuming the fields not depending on the spatial coordinate $x$) and
in the Fourier-transformed basis (the operator $\partial \equiv p$,
$p$ the momentum) the Lax operator $L^n$ is
just given by a polynomial in $p$ of order $n$.\par
The set of reductions given by the constraint (\ref{kdvred}) does not
exhaust the set of all possible reductions compatible with the flows
of KP.
Indeed in the literature other consistent reductions have been
discussed
(see e.g. \cite{ara},\cite{bonora}). They are called multi-fields
reductions
of KP. In the language of \cite{bonora2} they are labelled by two
positive integers $p,q$ and called generalized ($p,q$) KdV
hierarchies. For this new class of reductions there exists no integer
$n$ such that the constraint (\ref{kdvred}) holds. As a basic feature
of this new class, a non-linear $W_\infty$
algebra is associated to each reduction, instead of just the
polynomial ${\cal W}$ algebra associated to the standard Drinfeld-Sokolov
reductions.\par
In \cite{toppan} we have shown, working out explicitly some examples,
that this new set of reductions can be derived from factoring a
Kac-Moody subalgebra
out of a given Kac-Moody or polynomial ${\cal W}$ algebra (coset
construction).
Furthermore, the structure of non-linear ${\cal W}_\infty$ algebra
associated to such a coset is encoded in an underlying structure of
finite rational ${\cal W}$ algebra
(since the notion of rational ${\cal W}$ algebra has been fully explained
in \cite{{DFSTR},{toppan}}, we will not discuss it here).
Even if we do not have a formal proof that any coset
factorization
determines its corresponding KP reduction, we believe this statement
to be true.
Indeed, for any example of coset worked out so far we were able to
find its associated KP-reduced hierarchy.\par
Before going ahead, let us impose the constraint $U_0\equiv 0$ in (\ref{flows})
and let us discuss the first two flows for $k=1,2$. We get
respectively
\begin{eqnarray}
{\partial\over\partial t_1 } U_j &=& U_j' \nonumber\\
{\partial\over\partial t_2 } U_j &=& U_j '' + 2 U_{j+1} ' - 2
\sum_{r=1}^{j-1} (-1)^r\left( \begin{array}{c} j-1\\ r
\end{array}\right)
U_{j-r} \partial^r U_1
\label{eqmo}
\end{eqnarray}
(from now on we use the standard convention of denoting
the spatial derivative with a prime and the time derivative with a
dot if no confusion concerning the flow arises)
for any $j=1,2,...$ .\par
The first flow is trivial, while the second one provides a set of
non-linear
equations. \par
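For illustration (a direct specialization of the formula above), the second flow for $j=1,2$ reads
```latex
\begin{eqnarray}
{\partial\over\partial t_2 } U_1 &=& U_1'' + 2 U_2' \nonumber\\
{\partial\over\partial t_2 } U_2 &=& U_2'' + 2 U_3' + 2 U_1\, U_1' \nonumber
\end{eqnarray}
```
since for $j=1$ the sum is empty, while for $j=2$ the single $r=1$ term contributes $+2U_1 U_1'$.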
For later convenience (and in order to derive the KP reduction we are
going
to discuss from an underlying coset algebra which provides the
hamiltonian structure)
let us
introduce at this point a covariant derivative $\cal D$ (whose
precise definition will be given later), acting on covariant fields
with definite charge.
An important point is that the covariant derivative satisfies the
same rules, in particular the Leibniz rule, as the ordinary
derivative and coincides with the latter one when acting upon
chargeless fields.\footnote{The following
discussion will be limited to covariant derivatives defined for an
abelian $U(1)-{\cal KM}$ algebra, even if the non-abelian case can be
considered on the same footing as well.} At a formal level, the formulas
giving the action of covariant derivatives on covariant fields look
the same as those involving ordinary derivatives.
An example of that is the following important commutation rule
\begin{eqnarray}
{\cal D}^{-k} f&=& f{\cal D}^{-k} +\sum_{r=1}^\infty(-1)^r
\left( \begin{array}{c} k+r-1\\ r \end{array}\right)
f^{(r)}
{\cal D}^{-k-r}
\label{comrul}
\end{eqnarray}
(here $f^{(r)} \equiv {\cal D}^r f $ and $k$ is a positive
integer).\par
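As an aside (our own sanity check, not part of the original derivation), the rule (\ref{comrul}) can be tested numerically in the simplest setting where ${\cal D}$ acts as the ordinary derivative: taking $f=e^{\lambda x}$ and applying both sides to $e^{\mu x}$, the symbol $\partial^{-k}$ acts by multiplication, and the rule reduces to the binomial expansion of $(\lambda+\mu)^{-k}$, convergent for $|\lambda|<|\mu|$:

```python
from math import comb

# LHS: partial^{-k} applied to e^{(lam+mu)x}, i.e. multiplication by (lam+mu)^{-k}
def lhs(k, lam, mu):
    return (lam + mu) ** (-k)

# RHS of the commutation rule applied to e^{mu x}, with f = e^{lam x};
# the r = 0 term corresponds to f * partial^{-k}
def rhs(k, lam, mu, nterms=80):
    return sum((-1) ** r * comb(k + r - 1, r) * lam ** r * mu ** (-k - r)
               for r in range(nterms))

for k in (1, 2, 3):
    assert abs(lhs(k, 0.3, 1.7) - rhs(k, 0.3, 1.7)) < 1e-12
```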
A consistent reduced version of the KP hierarchy can be expressed as
the Lax
operator
\begin{eqnarray}
L &=& {\cal D} + J_- {\cal D}^{-1} J_+\equiv \partial +J_-\cdot {\cal
D}^{-1}J_+\label{nls}
\end{eqnarray}
Let us introduce the composite fields $V_n = J_- \cdot {\cal D}^n
J_+$. The reduction (\ref{nls}) implies the identification
\begin{eqnarray}
U_{n} &=& (-1)^{n-1}V_{n-1}, \quad\quad n=1,2,...\label{subst}
\end{eqnarray}
where the $U_n$'s are the fields appearing in (\ref{kp}). It can be
easily checked
that the above position is indeed a reduction, namely
that is consistent with the flows (\ref{flows}); this statement is
proved as follows: at first one should notice that, due to the
properties of the covariant derivative, an
algebraic relation holds
\begin{eqnarray}
V_{p+1} \cdot V_{0} &=& V_{0} \cdot \partial V_{p} + (V_1-
\partial V_{0}) V_{p}
\label{alg}
\end{eqnarray}
which allows one to express algebraically the fields $V_p$, for $p\geq
2$, in terms of the fundamental ones $V_0 $ and $V_1$. Due to
standard properties of the Newton binomial, the equations for $j > 2$
in the flows
(\ref{eqmo}, $b$) are compatible with the algebraic relation
(\ref{alg}) after
taking into account the substitutions (\ref{subst}). \par
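The relation (\ref{alg}) can also be verified mechanically. The following sketch (ours, for illustration only) realizes the covariant derivative as ${\cal D}\Phi_q = \partial\Phi_q + 2qJ_0\Phi_q$ on a field of charge $q$, as defined later in this section, and checks the relation symbolically for the first few values of $p$:

```python
import sympy as sp

x = sp.symbols('x')
j0, jp, jm = (sp.Function(n)(x) for n in ('J0', 'Jp', 'Jm'))

def D(f, q):
    # covariant derivative on a field of U(1) charge q
    return sp.diff(f, x) + 2 * q * j0 * f

# b[n] = D^n J_+  (J_+ has charge +1, preserved by D)
b = [jp]
for n in range(4):
    b.append(D(b[-1], +1))

# chargeless bilinears V_n = J_- D^n J_+
V = [jm * bn for bn in b]

for p in range(3):
    rel = V[p+1]*V[0] - (V[0]*sp.diff(V[p], x) + (V[1] - sp.diff(V[0], x))*V[p])
    assert sp.simplify(sp.expand(rel)) == 0
```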
So far we have discussed the reduction of the KP hierarchy at a
purely algebraic
level, without mentioning any hamiltonian structure. Up to now the
introduction of a covariant derivative was not effective since,
as we have already remarked, covariant and ordinary derivatives play
the same role
as long as only the algebra is concerned. The introduction of a covariant
derivative is at least a very convenient tool to make contact with
the hamiltonian dynamics and it proves to be crucial for regarding
the (\ref{nls}) reduction as a coset
construction.\par
Let us assume the fields $J_\pm (x), J_0(x)$ to satisfy the $sl(2)$
Kac-Moody algebra
\begin{eqnarray}
\{J_+(z),J_-(w)\} &=& \partial_w\delta(z-w) - 2 J_0 (w) \delta(z-w)
\equiv
{\cal D} (w)\delta(z-w) \nonumber \\
\{J_0(z), J_\pm (w)\} &=& \pm J_\pm (w) \delta (z-w)
\nonumber \\
\{J_0 (z),J_0(w)\} &=& {-\textstyle{1\over
2}}\partial_w\delta(z-w)\nonumber\\
\{J_\pm(z),J_\pm(w)\} &=& 0
\label{kmalg}
\end{eqnarray}
the covariant derivative $\cal D$ is defined acting on covariant
fields $\Phi_q$ of definite charge $q$ as
\begin{eqnarray}
{\cal D}\Phi_q &=& (\partial +2q J_0)\Phi_q
\end{eqnarray}
The property of covariance for the field $\Phi_q$ is defined
through the relation
\begin{eqnarray}
\{ J_0 (z), \Phi_q (w) \} &=& q \Phi_q(w)\delta (z-w)
\end{eqnarray}
As its name suggests, the covariant derivative maps covariant fields
of charge $q$ into new covariant fields having the same charge.
In particular $J_\pm $ are covariant fields with respect to $J_0$ and
have charge $\pm 1$ respectively, so that
\begin{eqnarray}
{\cal D} J_\pm&= & \partial J_\pm \pm 2 J_0 \cdot J_\pm
\end{eqnarray}
The algebraic relations (\ref{kmalg}) of the $sl(2)$-Kac-Moody can be
seen as a
first Poisson bracket structure (denoted as $ \{\cdot, \cdot \}_1$)
for the reduced (\ref{nls}) hierarchy. It is a trivial check indeed
to show that the
first two integrals of motion $F_{1,2}$ (\ref{first}) are
proportional to $H_{1,2}$:
\begin{eqnarray}
H_1&=&\int (J_-\cdot J_+)\nonumber\\
H_2&=& -\int (J_-\cdot {\cal D}J_+)
\label{hami}
\end{eqnarray}
which are hamiltonians in involution with respect to the
(\ref{kmalg}) Poisson brackets; $H_{1,2}$ reproduce respectively the
first and the second flow of (\ref{eqmo}) under the substitution
(\ref{subst}):
\begin{eqnarray}
{\partial\over\partial t_1 } V_n &=& \{ H_1, V_n\} = V_n '
\nonumber\\
{\partial\over\partial t_2 } V_n &=& \{ H_2, V_n\} =
V_n '' -2V_{n+1} ' -2
\sum_{r=1}^n \left( \begin{array}{c} n\\ r \end{array}\right)
V_{n-r} \partial^rV_0
\label{eqmo2}
\end{eqnarray}
\par
for $n=0,1,...$ .\par
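The compatibility between the flows (\ref{eqmo}) and (\ref{eqmo2}) under the identification (\ref{subst}) can be checked mechanically as well. In the sketch below (ours, with the $V_n$ treated as independent functions of $x$) we verify that $(-1)^{j-1}$ times the right-hand side of the second flow in (\ref{eqmo}), rewritten via $U_n=(-1)^{n-1}V_{n-1}$, reproduces the right-hand side of (\ref{eqmo2}) with $n=j-1$:

```python
import sympy as sp
from math import comb

x = sp.symbols('x')
V = [sp.Function('V%d' % n)(x) for n in range(8)]

def eqmo_rhs(j):
    # second KP flow for U_j, with the substitution U_n = (-1)**(n-1) * V_{n-1}
    U = lambda n: (-1) ** (n - 1) * V[n - 1]
    s = sum((-1) ** r * comb(j - 1, r) * U(j - r) * sp.diff(U(1), x, r)
            for r in range(1, j))
    return sp.diff(U(j), x, 2) + 2 * sp.diff(U(j + 1), x) - 2 * s

def eqmo2_rhs(n):
    # second flow for V_n generated by H_2
    s = sum(comb(n, r) * V[n - r] * sp.diff(V[0], x, r) for r in range(1, n + 1))
    return sp.diff(V[n], x, 2) - 2 * sp.diff(V[n + 1], x) - 2 * s

for j in range(1, 6):
    assert sp.simplify(sp.expand((-1) ** (j - 1) * eqmo_rhs(j)
                                 - eqmo2_rhs(j - 1))) == 0
```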
Our framework allows us to accommodate a second compatible Poisson
brackets structure which is given by
\begin{eqnarray}
\{ J_-(z), J_- (w)\}_2&=& 0\nonumber\\
\{ J_+ (z), J_+ (w) \}_2 &=& -\delta (z-w) (J_+)^2 (w)\nonumber\\
\{ J_+ (z), J_- (w) \}_2 &=& {{\cal D}_w}^2 \delta (z-w) +\delta
(z-w) J_+(w)J_-(w)
\end{eqnarray}
To understand the above relations, one should notice that they are
obtained
from the corresponding relations for the first Poisson brackets
structure (\ref{kmalg}) after taking into account the substitutions
\begin{eqnarray}
J_- &\mapsto& J_-\nonumber \\
J_+ &\mapsto &-{\cal D}J_+
\end{eqnarray}
The compatibility of the first and second Poisson brackets simply means
that the following equality is satisfied
\begin{eqnarray}
{\dot f} &=& \{ H_1, f\}_2 = \{ H_2, f\}_1
\end{eqnarray}
\par
The composite fields $V_n$ entering (\ref{subst}) are by construction
chargeless, i.e. they have vanishing Poisson brackets with respect to
$J_0$
\begin{eqnarray}
\{ J_0 (z), V_n (w) \} &=& 0
\label{comm}
\end{eqnarray}
They constitute a linearly independent basis for the composite
chargeless bilinear fields (bilinear invariants); namely any such
field can be obtained as a linear combination of the $V_n$ fields and
ordinary derivatives acting on them. Under the first Poisson brackets
structure the fields $V_n$ form a closed
non-linear algebra. The only finite subset which is closed with
respect to this algebra is given by $V_0$ itself: as soon as every
other field is added, one needs the whole infinite set of fields to
close the algebra. These bilinear invariants therefore provide the
reduction (\ref{nls}) with the structure of a non-linear $W_\infty$
algebra. Since however the $V_n$ fields, even if linearly
independent,
are not algebraically independent due to relations like (\ref{alg}),
the
non-linear ${\cal W}_\infty$ algebra structure can be regarded as
encoded in the more compact structure of finite rational ${\cal W}$
algebra.
For more details and for the explicit expression of such rational
algebra see \cite{toppan}.
\par
The fact that the fields $V_n$ have vanishing Poisson brackets with
respect to the $U(1)-{\cal KM}$ subalgebra of the Kac-Moody $sl(2)$
means that we have found the explicit link between our KP-reduction
(\ref{nls}) and the coset
factorization.\par
In \cite{toppan} another such reduction was considered in full
detail; it was associated to the Lax operator
\begin{eqnarray}
{\tilde L}& =& {\cal D}^2 + T + W_-\cdot {\cal D}^{-1} W_+
\label{op}
\end{eqnarray}
This operator does not have the form of a KP operator; it is however
possible to introduce the uniquely defined ``square root'' $ {\tilde
L}^{{1\over 2}}$ of ${\tilde L}= {\tilde L}^{{1\over 2}} \cdot
{\tilde L}^{{1\over 2}}$, which is of KP-type (${\tilde L}^{{1\over
2}}= {\cal D} +... $). The fields $T, W_\pm$ entering (\ref{op}) are
respectively a
chargeless stress-energy tensor and two (opposite charged) bosonic
spin ${\textstyle{3\over 2}}$ fields;
the charge being defined with respect to an $U(1)-{\cal KM} $
current $J$ entering the covariant derivative. The fields $J, T,
W_\pm$ form a closed algebra which is nothing else that the
non-linear Polyakov-Bershadski
${\cal W}$ algebra \cite{polya}. It plays
the same role as the first Poisson brackets structure, leading to a
hamiltonian dynamics for
the flow associated to the Lax operator (\ref{op}), just like the
$sl(2)-{\cal KM}$ algebra in the previous case. The same steps as
before can be repeated in this case too.\par
In general, starting from a given coset algebra, it is quite easy to
guess the form of the reduced KP Lax operator via a simple Ansatz; the
following
steps should be performed: at first the Kac-Moody currents of the
factorized subalgebra should be accommodated into a single covariant
derivative, then with the help of dimensional considerations one
should identify the $U_n$ fields of (\ref{kp}) with invariants
constructed out of covariant fields, the original ones in the algebra
as well as the covariant derivatives applied on them. The only
difficulty left consists in explicitly checking the consistency of
such reduction
with the KP flow, as well as its link with the hamiltonian dynamics
provided by the algebra itself.\par
We close this section with a remark: in the limit of the
dispersionless Lax equation,
and taking into account that $J_0$ is a constant ($\equiv \alpha$)
with respect to any flow
due to the relations (\ref{comm}), the reduced operators (\ref{nls})
and (\ref{op}) are respectively given by
\begin{eqnarray}
L&\rightarrow& p + {\lambda \over p+ \alpha} \nonumber\\
{\tilde L}& \rightarrow & p^2 + t +{\lambda \over p+\alpha}
\end{eqnarray}
with $\alpha ,\lambda$ and $ t $ constants.\par
It is remarkable that the reductions associated to rational ${\cal W}$
algebras lead,
in the dispersionless limit, to Lax operators fractional in $p$
(this is always true in any case of coset construction), while the
Drinfeld-Sokolov reductions associated to polynomial ${\cal W}$ algebras
lead to Lax operators polynomial in $p$.
\section{From NLS to a modified NLS hierarchy via Wakimoto
representation of the $sl(2)-{\cal KM}$
algebra.}
\indent
In this section we study more closely the hierarchy associated
to the reduced KP operator (\ref{nls}). We show that it coincides
with the two-component formulation of the NLS hierarchy. In terms of
the second hamiltonian $H_2 = -\int (J_- {\cal D } J_+)$ we get
indeed the following equations
\begin{eqnarray}
{\dot {J_\pm}}&= & \{ J_\pm , H_2\}_1=\pm {\cal D}^2 J_\pm \pm 2
(J_+ J_-)J_\pm
\label{nls2}\end{eqnarray}
This is the coupled system associated to the NLS equation. Due to the
results
mentioned
in the previous section it is consistent to set $J_0\equiv 0$, which
further implies ${\cal D}^2 J_\pm = {J_\pm}'' $.
Next, the standard NLS equation is recovered by letting the time
be imaginary. Such a ``Wick rotation'' allows making the
identification
\begin{eqnarray}
{J_-}^\star &=& J_+ = u
\end{eqnarray}
We obtain finally
\begin{eqnarray}
i {\dot u} &=& u'' + 2 u|u|^2
\end{eqnarray}
which is the NLS equation in its standard form \cite{fadtak}.\par
At this point we should recall that equivalent integrable equations
can arise in two different ways: either because they are associated
to different hamiltonians
belonging to the same hierarchy of hamiltonians in involution, or
because there exists a mapping between them. This is the case
concerning the relation between
KdV and m-KdV equations, the latter being the equation involving the
free field
$\varphi$, which is related via the Miura transformation to the $v$ field
satisfying the KdV equation; for the KdV Lax operator this reads as
follows
\begin{eqnarray}
\partial^2 + v& =&(\partial-\varphi)(\partial + \varphi ) =
\partial^2 +\varphi ' - \varphi^2
\end{eqnarray}
Generalizations of this construction hold for any hierarchy of
Drinfeld-Sokolov type.\par
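For completeness we recall how this works at the level of the
equations of motion: with $v = \varphi ' - \varphi^2$ one directly
verifies the identity
\begin{eqnarray}
{\dot v} - v''' - 6 v v' &=& (\partial - 2\varphi )({\dot \varphi} -
\varphi ''' + 6 \varphi^2 \varphi ')
\end{eqnarray}
so that every solution of the m-KdV equation ${\dot \varphi} =
\varphi ''' - 6\varphi^2 \varphi '$ is mapped into a solution of the
KdV equation ${\dot v} = v''' + 6 v v'$ (in this particular
normalization of the two equations).\par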
The framework we developed in the previous section is particularly
useful
for describing the analogue free-fields mappings in the case of coset
reductions.
There exists indeed a standard free field representation of the
$sl(2)-{\cal KM}$
algebra which is given by the (classical) Wakimoto representation
\cite{waki}. It is realized
in terms of the weight $1$ field ${\nu} $\footnote{ in the standard
notation for the quantum Wakimoto representation $\nu \equiv \partial
\phi$, where $\phi$ is the fundamental field satisfying the OPE $\phi
(z)\phi(w)\sim \log (z-w)$.}
and the bosonic $\beta-\gamma$ system
of weight $(1,0)$, satisfying the algebra
\begin{eqnarray}
\{\beta (z) , \gamma (w) \} &=& -\{ \gamma (z) , \beta (w) \} =
\delta (z-w)\nonumber\\
\{\nu (z), \nu (w) \} &=& \partial_w \delta (z-w)
\end{eqnarray}
(any other Poisson bracket is vanishing).\par
The $sl(2)-{\cal KM}$ algebra given in (\ref{alg}) is reproduced
through
the identifications
\begin{eqnarray}
J_+&=&\beta\nonumber\\
J_0 &=& -\beta \gamma + {i\over {\sqrt 2}}\nu \nonumber\\
J_- &=& \beta \gamma^2 -i{\sqrt 2} \gamma \nu +\partial\nu
\end{eqnarray}
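As a simple consistency check, the charge of $J_+$ follows
immediately from the above identifications and the free-field
brackets:
\begin{eqnarray}
\{ J_0 (z), J_+ (w) \} &=& -\beta (z) \{ \gamma (z), \beta (w) \} =
\delta (z-w) J_+ (w)\nonumber
\end{eqnarray}
(the $\nu$ field has vanishing brackets with $\beta$); the remaining
$sl(2)-{\cal KM}$ relations can be checked in the same way.\par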
Representing the hamiltonian $H_2 $ in terms of the Wakimoto fields,
one can derive the coupled system
\begin{eqnarray}
{\dot {\beta }}&= & \{\beta, H_2\}_1 = \beta '' + 2\beta^2\gamma '
-2\beta^3\gamma^2\nonumber\\
{\dot {\gamma}} &=& \{\gamma , H_2\}_1=\gamma '' -2\gamma^2\beta '
-2\gamma^3\beta^2
\label{mnls}
\end{eqnarray}
(we used here the consistent constraint $J_0 = 0$ to get rid of the
field $\nu$ in the above equations). The fields $\beta , \gamma$
enter the above system in a symmetric way, and we can forget about
the different weights
we originally assigned to them when defining the Wakimoto
representation. If we indeed let the spatial coordinate be imaginary
($\partial_x \mapsto i\partial_x $), it is consistent to set
\begin{eqnarray}
\gamma^\star&= &\beta = \lambda
\end{eqnarray}
so that the final result is
\begin{eqnarray}
{\dot {\lambda}}& =& -\lambda '' + 2\lambda
(i\lambda\partial\lambda^\star -
|\lambda|^4 )
\label{redmnls}
\end{eqnarray}
\par
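Let us spell out the intermediate step: under $\partial_x \mapsto
i\partial_x$ the first equation of (\ref{mnls}) becomes
\begin{eqnarray}
{\dot \beta} &=& -\beta '' + 2i \beta^2 \partial \gamma - 2 \beta^3
\gamma^2\nonumber
\end{eqnarray}
and the substitution $\beta = \lambda$, $\gamma = \lambda^\star$
reproduces precisely (\ref{redmnls}).\par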
It is a remarkable fact that the modified NLS system (\ref{mnls})
should be regarded
as some sort of dual version of the original NLS system (\ref{nls2}).
In the latter case the reduction to the single component NLS equation
is done by assuming the time being imaginary, while in the m-NLS case
this is provided by assuming the space being imaginary. \par
The construction discussed here can be trivially extended to any
coset arising from a generic Kac-Moody algebra. The free-fields
analogue of the Wakimoto representation is in this case provided by
(the classical version of) the results of ref. \cite{GMMOS}.\par
Let us finally stress the point that in our approach to the KP-coset
reduction
the connection with the free-fields representation is particularly
explicit,
since we did not need to introduce any Dirac brackets arising from
the constraint
$J_0\equiv 0$: in our framework all computations are performed using
the original Poisson brackets structure.
\section{The coset derivation of the $N=1$ super-NLS equation.}
\indent
In this section we will set up a manifestly supersymmetric framework
to derive via
coset construction $N=1$ supersymmetric integrable hierarchies. There
are two basic motivations for doing that. The first concerns of
course the construction of
superintegrable hierarchies, which are interesting by their own, and
have been widely studied in the literature (see e.g.
\cite{{others},{N2}}). The second motivation lies in better
understanding the coset construction itself. Before any attempt of
classifying
the cosets and before giving general formal proofs of their link with
the hierarchies, it is interesting to investigate how they look in
the case of superalgebras.\par
It should be kept in mind that even though our discussion will concern
the super-NLS hierarchy only, this example is in no respect special.
The same approach discussed here
can be straightforwardly applied to derive other supersymmetric coset
hierarchies: it is enough to apply the machinery
developed here to any given coset algebra.
The advantage of discussing the super-NLS case lies in its technical
simplicity. \par
The super-NLS case is however not a merely academic exercise, and it is
interesting
to compare our results with that of \cite{{roe},{das}}. In \cite{roe}
two distinct supersymmetrizations, one of these involving a free
parameter, of the NLS equation have been proposed. It is stated that
both
lead to an integrable hierarchy. In \cite{das} manifestly
supersymmetric NLS equations have been investigated. It has been
shown that applying on such equations conventional tests of
integrability only the supersymmetric system without any free
parameter is selected. Moreover there exists a discrepancy in the
coefficients with respect to \cite{roe}. The coset construction we
are going to discuss will automatically provide the super-NLS
integrable system
of ref. \cite{das} with the same coefficients (therefore supporting
the statement of \cite{das} that a misprint occurs in \cite{roe}).
Our coset construction implies
that associated to such a system there exists a non-linear
super-${\cal W}_\infty$ algebra
involving an infinite series of primary bosonic (of integral
dimension $h=1,2, ... $) and fermionic (of half-integral dimension $
h={\textstyle{3\over 2}}, {\textstyle{5\over 2}}, ...$) $N=1$
superfields. Such super-${\cal W}_\infty$
algebra can be regarded as a rational super-${\cal W}$ algebra. The
existence of this
non-linear super-${\cal W}_\infty$ algebra is already an indication of the
integrability properties of our super-NLS system. This statement is
made precise by associating to
the coset a
consistent reduction of the super-KP hierarchy. Our Lax operator is
different from the one discussed in \cite{das}.
\par
Let us fix now our conventions concerning the superspace. We denote
with capital
letters the $N=1$ supercoordinates ($X\equiv (x, \theta )$, with $x$
and $\theta
$ real, respectively bosonic and grassmann, variables). The
supersymmetric spinor
derivative is given by
\begin{eqnarray}
D \equiv D_X &=& {\partial\over \partial\theta} +\theta
{\partial\over \partial x}
\end{eqnarray}
With the above definition $ {D_X}^2 ={\textstyle{\partial\over
\partial x}}$. \par
The supersymmetric delta-function $\Delta (X,Y)$ is a fermionic
object
\begin{eqnarray}
\Delta (X,Y) &=& \delta (x-y) (\theta -\eta)
\end{eqnarray}
It satisfies the relations
\begin{eqnarray}
\Delta (X,Y) &=& -\Delta (Y,X) \nonumber\\
D_X\Delta (X,Y) &=& - D_Y\Delta (X,Y)
\end{eqnarray}
Our convention for the integration over the grassmann variable is
\begin{eqnarray}
\int d\theta \cdot \theta &=& -1
\end{eqnarray}
For any given superfield $F(X)$ we get then
\begin{eqnarray}
\int dY \Delta (X, Y )F(Y) &=& F(X)
\end{eqnarray}
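It is instructive to check this explicitly: writing $F(Y) = f(y) +
\eta g(y)$ we get
\begin{eqnarray}
\int dY \Delta (X,Y) F(Y) &=& \int dy d\eta \, \delta (x-y) (\theta
-\eta ) ( f(y) + \eta g(y) )\nonumber\\
&=& \int dy \, \delta (x-y) ( f(y) + \theta g(y) ) = F(X)\nonumber
\end{eqnarray}
where the rule $\int d\eta \cdot \eta = -1$ has been used.\par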
As in the bosonic case, the (super)-line integral over a total
derivative gives a vanishing result.
The canonical dimensions $d$ are respectively given by
\begin{eqnarray}
d(D) &=& d(\Delta) = -d(\theta) = -2d(x) ={\textstyle {1\over 2}}
\end{eqnarray}
The role which in the bosonic case is played by the ordinary
derivative is now
played by the spinor derivative of dimension $d={\textstyle {1\over
2}}$. This makes it plausible that covariant spinor derivatives
should now be constructed in terms of
spin ${{\textstyle {1\over 2}}}$ fermionic superfields. An example of
supersymmetric rational ${\cal W}$ algebra involving such kind of
derivatives has indeed been given in \cite{DFRS}. \par
The $N=1$ counterpart of the $U(1)-{\cal KM}$ current $J_0(z)$ should
be expressed by the fermionic superfield $\Psi_0(X)= \psi_0 (x) +
\theta J_0(x)$, satisfying the super-Poisson brackets
relation\footnote{
we recall that super-Poisson brackets are symmetric when taken
between odd elements, antisymmetric otherwise.}
\begin{eqnarray}
\{ \Psi_0 (X), \Psi_0 (Y) \} &=& D_Y \Delta (X,Y)
\label{zerosusyalg}
\end{eqnarray}
which implies, at the level of components
\begin{eqnarray}
\{\psi_0(x),\psi_0(y)\}&=&-\delta (x-y)\nonumber\\
\{J_0(x),J_0(y)\}&=&-\partial_y\delta (x-y)
\end{eqnarray}
Super-covariant fields and the supercovariant derivative can now be
introduced
through
\begin{eqnarray}
\{ \Psi_0 (X), \Phi_q (Y) \} &=& q\Delta (X,Y) \Phi_q (Y)\nonumber\\
{\cal D}\Phi_q &=& D\Phi _q + q \Psi_0 \Phi_q
\end{eqnarray}
$\Phi_q$ is a covariant superfield (either bosonic or fermionic).\par
We are now in the position to discuss the algebra providing the first
(super)-Poisson brackets structure for the super-NLS equation. As
suggested in \cite{das}, the component fields should be accommodated
in two fermionic spin ${\textstyle{1\over 2}}$
superfields $\Psi_\pm = \psi_\pm + \theta J_\pm $. With the above
choice one can identify the bosonic components $J_\pm$ with the
analogous fields we already
encountered in the bosonic case. The relevant algebra can therefore
be simply guessed
to be the supersymmetric analogue of the $sl(2)-{\cal KM}$ algebra,
introduced through the relations
\begin{eqnarray}
\{ \Psi_0 (X), \Psi_\pm (Y) \} &=& \pm \Delta (X,Y) \Psi_\pm
(Y)\nonumber \\
\{ \Psi_+ (X), \Psi_- (Y) \} &=& {\cal D}_Y \Delta (X,Y) = D_Y \Delta
(X,Y)
+ \Delta (X,Y) \Psi_0(Y)
\label{susyalg}
\end{eqnarray}
One indeed recovers the algebra (\ref{kmalg}) by setting all the
component spin
${\textstyle{1\over 2}}$ fermionic fields equal to $0$.\par
We can define, just like in the bosonic case, the composite
superfields $V_n (X)$,
where
\begin{eqnarray}
V_n &=& \Psi_- {\cal D}^{n} \Psi_+ \quad\quad n=0,1,2,...
\label{superinv}
\end{eqnarray}
By construction they have vanishing Poisson brackets with respect to
$\Psi_0 $:
\begin{eqnarray}
\{ \Psi_0 (X), V_n (Y) \} &=& 0
\end{eqnarray}
The superfields $V_n$ are respectively bosonic for even values of $n$
and fermionic for odd values. They play the same role as the
corresponding fields
in the purely bosonic case: they constitute a basis of linearly
independent superfields for the chargeless composite superfields. The
super Poisson brackets (\ref{zerosusyalg},\ref{susyalg}) provide such a
basis of fields with the structure of a non-linear super-${\cal W}_\infty$
algebra that will be discussed later in more detail. \par
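To make the correspondence with the purely bosonic case explicit,
notice that setting all the fermionic component fields equal to zero
one gets for instance
\begin{eqnarray}
V_0 &=& \psi_-\psi_+ + \theta ( J_- \psi_+ - \psi_- J_+ )
\rightarrow 0\nonumber\\
V_1 &=& \Psi_- {\cal D}\Psi_+ \rightarrow \theta \, J_- J_+\nonumber
\end{eqnarray}
so that the $\theta$-component of $V_1$ reproduces the first bosonic
invariant $J_- J_+$, while the superfield $V_0$ survives only through
its fermionic components.\par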
In order to associate to the coset algebra a hamiltonian dynamics,
as we did in the bosonic case, we proceed as follows: we recall that
the superfields $V_n$
have positive dimensions $d(V_n) = {\textstyle { n+2\over 2}}$; then
we look for
all possible hamiltonian densities of a given dimension that one can
algebraically construct out of the superfields $V_n$ and the
covariant derivative (of dimension ${\textstyle {1\over 2}}$) acting
upon them. For any given
dimension only a finite number of such combinations are allowed.
Since we are now working in a manifestly supersymmetric framework and
the (super)-line integral is fermionic, the hamiltonian densities
must be fermionic of half-integral dimension. The first two
possible hamiltonian densities at dimension ${\textstyle {3\over 2}}$
and ${\textstyle {5\over 2 }}$ respectively are just given by $V_1$
and $V_3$. The latter is indeed the unique, up to a total derivative,
chargeless $d ={\textstyle{5\over 2}}$ object.\par
It can now easily be checked that $H_{1,2}$ given by
\begin{eqnarray}
H_1 &=& \int dX V_1 (X) = \int dX (\Psi_- \cdot {\cal
D}\Psi_+)\nonumber\\
H_2 &=& \int dX V_3 (X) = \int dX (\Psi_-\cdot {\cal D}^3 \Psi_+)
\label{superhami}
\end{eqnarray}
have vanishing Poisson brackets among themselves
with respect to (\ref{zerosusyalg},\ref{susyalg}) and can therefore
be regarded as hamiltonians in involution. Two compatible flows are
defined through
\begin{eqnarray}
{\partial \over \partial t_1 } \Psi_\pm &=& \{ H_1, \Psi_\pm \} =
{\cal D}^2 \Psi_\pm
\nonumber\\
{\partial \over \partial t_2 } \Psi_\pm &=& \{ H_2, \Psi_\pm \} =
\pm {\cal D}^4 \Psi_\pm \mp \Psi_\pm { D}( \Psi_\mp {\cal D} \Psi_\pm
)
\label{superNLS}
\end{eqnarray}
The latter equation is the $N=1$ supersymmetric version of the
two-component NLS system. As in the bosonic case, if we let the time
$t_2$ be imaginary we can consistently set
\begin{eqnarray}
\Psi_+ = {\Psi_-}^\star = \Psi
\nonumber
\end{eqnarray}
to get the super-NLS equation
\begin{eqnarray}
i{\dot \Psi} &=& \Psi^{(4)} -\Psi D(\Psi^\star \Psi^{(1)})
\end{eqnarray}
(in order to simplify the notation from now on the symbol
$A^{(n)} \equiv {\cal D}^{n} A $ will be used). \par
Since ${\dot \Psi_0}=0$, it is consistent to set $\Psi_0 =0$; the
above equation then
leads to the following system in component fields ($\Psi= \phi
+\theta q$,
$\phi$ fermionic and $q$ bosonic):
\begin{eqnarray}
i{\dot \phi} &=& \phi_{xx} + \phi ( \phi^\star \phi_x - q^\star q
)\nonumber \\
i{\dot q } &=& q_{xx} - (q q^\star) q + (\phi {\phi_x}^\star
-\phi_x\phi^\star) q
+(\phi \phi^\star)q_x
\end{eqnarray}
As already stated, this equation coincides with the integrable
super-NLS equation of ref. \cite{das}.\par
The supersymmetric character of the above equations is guaranteed by
the invariance of the hamiltonians $H_{1,2}$ under the
transformations
\begin{eqnarray}
\delta \Psi_\pm &=& \pm\varepsilon {\cal D}\Psi_\pm
\end{eqnarray}
where $\varepsilon $ is a grassmann parameter.\par
The existence of a bihamiltonian structure is derived as in the
bosonic case. The second super-Poisson brackets structure is given
by
\begin{eqnarray}
\{ \Psi_- (X), \Psi_- (Y) \}_2 &=& 0\nonumber\\
\{\Psi_- (X), \Psi_+ (Y) \}_2 &=& \Delta^{(3)}
-\Delta^{(1)}\Psi_-\Psi_+ +\Delta {\Psi_-}^{(1)}\Psi_+\nonumber\\
\{\Psi_+ (X), \Psi_+ (Y) \}_2 &=& \Delta^{(2)} \Psi_+{\Psi_+}^{(1)}
-\Delta^{(1)}\Psi_+{\Psi_+}^{(2)} +\Delta
{\Psi_+}^{(1)}{\Psi_+}^{(2)}
\end{eqnarray}
(The superfields on the right hand side are evaluated at $Y$ and
$\Delta^{(n)}=
{{\cal D}_Y}^{n} \Delta (X,Y) $).\par
This second Poisson brackets structure is derived from the first one
after the substitutions
\begin{eqnarray}
\Psi_- &\mapsto & \Psi_-
\nonumber\\
\Psi_+ &\mapsto& {\cal D}^{2} \Psi_+
\end{eqnarray}
are taken into account. \par
The compatibility of the two Poisson brackets structures is ensured,
like in the bosonic case, by the relation
\begin{eqnarray}
{{d F\over d t}} &=& \{ H_1, F\}_2 = \{ H_2, F\}_1
\end{eqnarray}
Precisely as in the bosonic case, the two hamiltonians $H_{1,2}$ are
the first two
of an infinite series of hamiltonians mutually in involution. This
statement will be justified later when we show how to
associate to the system (\ref{superNLS}) a reduction of the super-KP
hierarchy.\par
A comment is in order. The algebra (\ref{zerosusyalg},\ref{susyalg})
is the simplest possible
algebra realized in terms of supercurrents and allowing a Kac-Moody
coset construction. There is another very simple supercurrent
algebra, which is
realized by just coupling to the $\Psi_0$ superfield two bosonic
superfields $\Phi_\pm$ (instead of two fermionic ones) of dimension
${\textstyle{1\over 2}}$. The expression of this algebra
looks like (\ref{susyalg}) but now one has to take into account the
antisymmetric
property when exchanging $\Phi_\pm$ in the super-Poisson brackets. If
we define the charges being the super-line integral over the
supercurrents (as $H=\int dX \Psi_0$, $E_\pm = \int dX \Psi_\pm$),
then the algebra (\ref{zerosusyalg},\ref{susyalg}) generates
a global $sl(2)$ algebra for the charges, while the algebra
determined by $\Psi_0$ and the bosonic supercurrents is promoted to
the global superalgebra $osp(1|2)$ (with generators $H$, $F_\pm =
\int dX \Phi_\pm$). In \cite{das} a zero-curvature
formulation for the system (\ref{superNLS}) was found; it is based on
the
$sl(2)$ algebra. The authors
claimed to be unable to derive an analogous formulation starting from
$osp(1|2)$.
The reason is simply that the latter is associated with a radically
different system, the dynamics being in this case defined for the
bosonic $\Phi_\pm $ superfields.
The fact that the dynamics differs from the fermionic case can be
immediately
seen using the following argument: an invariant composite superfield
$W_0 \cdot W_1 $ ($W_n = \Phi_-{\cal D}^n \Phi_+$) is allowed
to enter the second hamiltonian density of dimension
${\textstyle{5\over 2}}$, while the corresponding composite
superfield $V_0\cdot V_1$ vanishes in the fermionic case due to the
antisymmetry
of $\Psi_\pm$. Our coset construction can be performed for this
bosonic case as well, leading to an interesting superintegrable
system, which of course has nothing to do with the
supersymmetrization of the NLS equation, since the component bosonic
fields have spin ${\textstyle{1\over 2}}$ and not $1$. It is likely
that for such a system the zero-curvature formulation would be based
on the superalgebra $osp(1|2)$. We leave a detailed discussion of it
for a further
publication.
\section{Comments on the non-linear super-${\cal W}_\infty$ coset
algebra.}
\indent
Let us make some more comments here concerning the non-linear
super-${\cal W}_\infty$ algebra structure of the coset algebra. Its linear
generators are the superfields
$V_n$, $n$ non-negative integer, defined in (\ref{superinv}). The
superfields are
bosonic for even values of $n$, fermionic for odd values. The set
$\{V_0, V_1\}$
constitutes a finite super-algebra, given by the Poisson brackets
\begin{eqnarray}
\{ V_0 (X), V_0(Y) \} &=& -\Delta (X,Y) (DV_0 +2V_1)(Y)\nonumber\\
\{ V_0 (X), V_1(Y) \} &=& \Delta^{(2)} (X,Y)
V_0(Y) +\Delta^{(1)}(X,Y) V_1(Y) -\Delta (X,Y) DV_1 (Y)\nonumber\\
\{ V_1 (X), V_1(Y) \} &=& -2\Delta^{(2)} (X,Y) V_1(Y) -\Delta (X,Y)
{D_Y}^2V_1(Y)
\label{supcos}
\end{eqnarray}
In terms of component fields it is given by two bosons of spin $1$
and $2$
respectively, and two spin ${\textstyle {3\over 2}}$ fermions. It is
the maximal
finite subalgebra of the coset superalgebra: as soon as any other
superfield is added
to $V_0,V_1$, the whole set of fields $V_n$ is needed to close the
algebra, giving the coset the structure of a super-${\cal W}_\infty$
algebra. Moreover such an algebra closes in a non-linear way.
\par
Using the techniques developed in \cite{DFSTR} it is possible to
show the existence of an equivalent basis for expressing our
super-${\cal W}_\infty$ algebra,
given by the infinite set of superfields $W_h (X)$, which are primary
with
conformal dimension $h$ with respect to the stress-energy tensor
(having vanishing central charge) $T(X) \equiv W_{\textstyle{3\over
2}}(X)$.
To any integral value of $h$ ($h=1,2,..$) is associated a bosonic
primary superfield;
to any half-integral value ($h ={\textstyle{3\over 2}},
{\textstyle{5\over 2}},... $) a fermionic one. \par
The condition of being primary means that the superfields $W_h$
satisfy the relation
\begin{eqnarray}
\{ T(X), W_h(Y) \} &=& -{h}\Delta^{(2)}(X,Y) W_h(Y) +{1\over 2}
\Delta^{(1)}(X,Y) DW_h (Y)-\Delta (X,Y)
D^2 W_h (Y)\nonumber\\
\end{eqnarray}
We have at the lowest orders
\begin{eqnarray}
W_1 &=& V_0 =\Psi_- \Psi_+\nonumber \\
T &=& V_1 + {\textstyle{1\over 2 } }DV_0 = \nonumber\\
&=&
{\textstyle{1\over 2 } }
{\cal D} {\Psi_-}\cdot\Psi_++
{\textstyle{1\over 2 } }
\Psi_-\cdot {\cal D}{\Psi_+}\nonumber\\
W_2 &=& 3V_2 + DT -{\textstyle {3\over 2}} \partial V_0 =
\nonumber\\
&=& \Psi_-\cdot
{\cal D}^2{\Psi_+} +{\cal D}{\Psi_-}\cdot {\cal D} {\Psi_+}
-{\cal D}^2{\Psi_-}\cdot {\Psi_+}
\end{eqnarray}
We wish finally to make some comments on the rational character of
the above defined super-${\cal W}_\infty$ algebra: the whole set of
algebraic relations can be
expressed just in terms of a closed rational super-${\cal W}$ algebra
involving $4$ superfields, as the following reasoning shows: let us
introduce the superfields
\begin{eqnarray}
\Lambda_p &=_{def}& {\cal D} \Psi_- \cdot {\cal D}^{p+1}
\Psi_+\nonumber
\end{eqnarray}
then
\begin{eqnarray}
\Lambda_p &=& D V_{p+1} - V_{p+2}
\label{lambdadef}
\end{eqnarray}
Due to standard properties of the covariant derivative we can write
down for the superfields $\Lambda_p$ the analogue of the relation
(\ref{alg}) of the bosonic case:
\begin{eqnarray}
\Lambda_0 \Lambda_{p+1} &=& \Lambda_0 D\Lambda_p + (\Lambda_1 -
D\Lambda_0)\Lambda_p
\label{ratio2}
\end{eqnarray}
which implies that the $\Lambda_p$ are rational functions of
$\Lambda_{0,1}$, which
in their turn are determined by $V_i$, $i=0,1,2,3$. \\
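For instance, at the lowest order the relation (\ref{ratio2}) gives
\begin{eqnarray}
\Lambda_2 &=& D\Lambda_1 + {\Lambda_0}^{-1} (\Lambda_1 - D\Lambda_0 )
\Lambda_1\nonumber
\end{eqnarray}
and by iteration every $\Lambda_p$, $p\geq 2$, is expressed as a
rational function of $\Lambda_0$, $\Lambda_1$ and their
derivatives.\par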
Inverting the relation (\ref{lambdadef}) we can express any higher
field $V_{p+1}$ in terms of $V_p$, $\Lambda_{p-1}$. As a consequence
of this we have
the (rational) closure of the superalgebra on the superfields
$V_0,V_1, V_2, V_3$. \par
We remark that it is now not possible, as in the bosonic case, to
determine higher
order superfields $V_p$ from the formula (\ref{ratio2}) by simply
inserting $V_0$, $V_p$ in place of $\Lambda_0$, $\Lambda_p$: this is
due to the fact that
due to the fact that
any product $V_0\cdot V_{p+1} $ identically vanishes since it is
proportional to a squared fermion (${\Psi_-}^2=0$). That is the
reason why four superfields are
necessary to produce a finite rational algebra and not just two as
one would
have naively expected.
\section{The $N=1$ superWakimoto representation and the modified
super-NLS equation.}
\indent
In this very short section we will repeat the construction of section
3, furnishing the $N=1$ super-Wakimoto representation of the
(\ref{zerosusyalg},\ref{susyalg}) algebra
and associating to the super-NLS equation (\ref{superNLS}) its
modified version.\par
The classical super-Wakimoto representation is realized in terms of
three
free superfields, denoted as $B, C, N$:
\begin{eqnarray}
B(X) &=& b(x) + \theta \beta (x)\nonumber\\
C (X) &=& \gamma (x) + \theta c (x)\nonumber\\
N(X)& =& \mu (x) + \theta \nu (x)
\end{eqnarray}
$B, N $ are assumed to be fermionic of dimension ${\textstyle {1\over
2}}$,
while $C$ is assumed to be a $0$-dimensional bosonic superfield
coupled to $B$.
At the level of components we have in particular the already
encountered
bosonic $\beta - \gamma$ system of weight $(1,0)$, plus now a
fermionic $b-c$
system of weight $({\textstyle {1\over 2}},{\textstyle {1\over
2}})$.\par
The free superfields super-Poisson brackets are given by
\begin{eqnarray}
\{ B (X), C (Y) \} &=&
\{ C(X), B(Y) \} =\Delta (X,Y)\nonumber \\
\{ N (X), N (Y) \} &=& D_Y \Delta (X,Y)
\end{eqnarray}
The (\ref{zerosusyalg},\ref{susyalg}) superalgebra is reproduced in
terms of the superfields $B,C,N$
through the identifications
\begin{eqnarray}
\Psi_+ &=& B\nonumber\\
\Psi_0 &=& - B C + N\nonumber
\\
\Psi _- &=& -{1\over 2} B C^2 + C N - DC
\label{superwak}
\end{eqnarray}
Representing $H_2$ in (\ref{superhami}) via the above system we get
an evolution
equation for $B,C$. As in the bosonic case the $N$ superfield can be
expressed
through $B,C$ by setting $\Psi_0=0$. Finally, by letting the space
being imaginary it is consistent to further set
\begin{eqnarray}
({\cal D }B)^\star &=& C
\end{eqnarray}
which implies
\begin{eqnarray}
\beta (x) &=& \gamma (x); \quad\quad b' (x) = c(x)
\end{eqnarray}
At the end we arrive at the supersymmetric generalization of eq.
(\ref{redmnls}), which is given by
\begin{eqnarray}
{\dot B} &=& - D^4B + B ({D}(C^\star D C ) - {\textstyle{1\over 2 }
}|C|^4 )
\end{eqnarray}
\section{Integrable properties of the $N=1$ super-NLS equation: the
super-KP reduction.}
\indent
We have already discussed the indications of integrability associated
to the super-NLS equation arising from its bihamiltonian structure.
Moreover we are aware of the results of \cite{das} concerning the
integrability. In this section we will show that the equation
(\ref{superNLS}) deserves the name of super-NLS
hierarchy by explicitly associating to it a reduction of the super-KP
operator.
Before doing that let us spend a few words on the supersymmetric (with
graded derivative) version of the KP hierarchy. The standard
reference we follow in this case is \cite{manin}.\par
The super-KP operator is given by
\begin{eqnarray}
L &=& D +\sum_{i=0}^\infty U_i (X) D^{-i}
\end{eqnarray}
where now $D$ is the fermionic derivative and the $U_i$'s are
superfields. For even values of $i$ they are fermionic, for odd
values bosonic.
In the following we will be interested only in the flows associated
to even (bosonic) times. For a discussion concerning odd-time flows
see e.g. \cite{ramos}.
The even-time flows are defined through
\begin{eqnarray}
{\partial L \over \partial t_k} &=& [ {L^{2k}}_+, L]
\end{eqnarray}
where ${L^r}_+$ denotes the purely differential part of $L^r$. The
above flows
provide a set of equations for the infinite series of superfields
$U_i$. To derive such equations we recall that $D^{-1} = D
\partial^{-1}$ and that the commutation rule (\ref{comrul}) can be
employed.\par
If we set the constraint
\begin{eqnarray}
DU_0 + 2 U_1 &=& 0
\end{eqnarray}
then ${L^2}_+ = D^2=\partial$ and the first flow is trivial. With the
above constraint we get
\begin{eqnarray}
{L^4}_+ &=& D^4 +FD +B
\end{eqnarray}
where
\begin{eqnarray}
F &=& 2 DU_1\nonumber\\
B &=& 4U_3 + 2 DU_2 -6 U_1U_1
\end{eqnarray}
The second flow ($k=2$) is non-trivial and provides the following set
of equations
\begin{eqnarray}
{\partial {U_{2n}}\over \partial t_2} &=& {U_{2n}}^{(4)} + 2
{U_{2n+2}}^{(2)} +F {U_{2n}}^{(1)} + 2 F
U_{2n+1}-U_{2n-1}B^{(1)}+\nonumber\\
&& \sum_{r=1}^{n-1} (-1)^{r+1}
\left( \begin{array}{c} n-1\\ r \end{array}\right)
(U_{2n-2r} B^{(2r)} +U_{2n-2r-1}B^{(2r+1)})+\nonumber\\
&& \sum_{r=1}^n (-1)^r
\left( \begin{array}{c} n\\ r \end{array}\right)
U_{2n-2r+1} F^{(2r)}
\end{eqnarray}
for the fermionic superfields, and
\begin{eqnarray}
{\partial {U_{2n-1}}\over \partial t_2} &=& {U_{2n-1}}^{(4)} +
2{U_{2n+1}}^{(2)} +F{U_{2n-1}}^{(1)} - F^{(1)} U_{2n-1} +\nonumber\\
&& \sum_{r=1}^{n-1} (-1)^{r+1}
\left( \begin{array}{c} n-1\\ r \end{array}\right)
( U_{2n-2r-1}B^{(2r)} + U_{2n-2r-1}F^{(2r+1)}
+U_{2n-2r}F^{(2r)})\nonumber\\
\end{eqnarray}
for the bosonic ones.\par
In order to define the reduced super-KP operator we compare these
flows with the set of equations
\begin{eqnarray}
{\dot V_n} &=& \{ V_n, H_2\}
\end{eqnarray}
for the superfields $V_n = \Psi_- {\cal D}^n \Psi_+ $ introduced in
(\ref{superinv}), provided by the hamiltonian $H_2$ given in
(\ref{superhami}), with respect to the
(\ref{zerosusyalg},\ref{susyalg}) Poisson brackets structure.\par
We get the following equations, for respectively fermionic and
bosonic superfields
\begin{eqnarray}
{\partial V_{2n+1} \over \partial t_2 } &=& \partial^2 V_{2n+1} -
2\partial V_{2n+3} - V_{2n+1}\partial V_0 -V_{2n}\partial V_1
+\nonumber\\
&& \sum_{k=0}^{n-1}
\left( \begin{array}{c} n\\ k \end{array}\right)
(V_{2k+1} \partial^{n-k} DV_1 -V_{2k} \partial^{n-k+1}V_1)
\end{eqnarray}
and
\begin{eqnarray}
{\partial V_{2n} \over \partial t_2 } &=& \partial^2 V_{2n} -
2\partial V_{2n+2} -V_{2n}\partial V_0 +\nonumber\\
&& \sum_{k=0}^{n-1}
\left( \begin{array}{c} n\\ k \end{array}\right)
V_{2k} \partial^{n-k} DV_1
\end{eqnarray}
In order to produce a consistent super-KP reduction we must be able
to fit the above equations in the corresponding equations for the
$U_i$ superfields. This cannot be done, or at least we were unable
to do it, for the whole set of $V_n$ superfields. However the
following considerations can be made: we remark that
the equations of motion for bosonic superfields (labelled by an even
integer)
involve on the right hand side bosonic superfields only. It is
therefore
consistent with the dynamics to set all the bosonic superfields
$V_{2n}\equiv 0$.
We argue that this constraint should be imposed in order to proceed
to the right
supersymmetrization of the NLS hierarchy: indeed the corresponding
generators of the coset algebra in the bosonic case are given by the
$J_-{\cal D}^nJ_+$ fields, which implies having a single bosonic
field for each integral value of the spin
($n+2$). In the supersymmetric theory one expects that
the fermionic counterparts should be associated to such fields: for
each half-integral value of the spin one should have a single
fermionic field. The set of superfields $V_n$, $n=0,1,2,...$ is in
this respect highly redundant: it provides two bosons and two
fermions respectively for each integer and half-integer spin value
$s\geq {\textstyle{3\over 2}}$, plus a single spin $1$ bosonic field
arising from $V_0$ which plays no role in the NLS hierarchy.
To get rid of this redundancy, a constraint which kills the extra
degrees of freedom should be imposed. A constraint which allows doing
so is provided by setting
\begin{eqnarray}
V_{2n} &=& 0 \quad\quad {\rm for} \quad n=0,1,2,\ldots
\label{boscon}
\end{eqnarray}
The consistency of this constraint with the dynamics, which we
have just pointed out, is remarkable.\par
After taking (\ref{boscon}) into account, the equation for the
fermionic superfields $V_{2n-1}$ is reduced to
\begin{eqnarray}
{\dot V}_{2n-1} &=&
\partial^2 V_{2n-1} - 2\partial V_{2n+1} +\sum_{k=0}^{n-1}
\left( \begin{array}{c} n-1\\ k \end{array}\right)
V_{2k-1} \partial^{n-k} DV_1
\end{eqnarray}
It is immediately checked at this point that a consistent reduction
of the super-KP hierarchy is recovered by setting
\begin{eqnarray}
U_{2n-1} &=& 0 \nonumber\\
U_{2n} &=& {\textstyle{1\over 2}} (-1)^n V_{2n-1}\quad\quad {\rm for}\quad
n=1,2,\ldots
\end{eqnarray}
The corresponding reduced super-KP operator can be compactly written
as
\begin{eqnarray}
L &=& D +{\textstyle{1\over 2}} \Psi_- {\cal D}^{-2} {\Psi_+}^{(1)}
\end{eqnarray}
with ${\Psi_+}^{(1)}={\cal D} \Psi_+$.\par
The integrability properties of the super-NLS hierarchy are
established by the existence of such a Lax operator.
\section{The $N=2$ formalism.}
\indent
Let us introduce here the framework and conventions for working in a
manifestly
supersymmetric $N=2$ formalism.\par
The $N=2$ superspace is parametrized by the bosonic $x$ coordinate
and two
grassmann variables $\theta, {\overline\theta}$. A generic superfield
is then expanded as
\begin{eqnarray}
\Phi(X) &=& \phi (x) +\theta f(x) +{\overline \theta} {\overline
f}(x) + \theta{\overline\theta} g(x)
\label{n2super}
\end{eqnarray}
The $N=1$ case is recovered when letting
$\theta={\overline\theta}$.\par
Two spinor derivatives $ {\tilde D}, {\overline D} $ are defined as
\begin{eqnarray}
{\tilde D} &=& {\partial \over \partial\theta} +{\overline \theta }
\partial_x\nonumber\\
{\overline D} &=&{\partial \over \partial{\overline\theta}} +{ \theta
} \partial_x
\end{eqnarray}
They satisfy the relations
\begin{eqnarray}
{\tilde D}^2 = {\overline D}^2 &=& 0\nonumber\\
\{ {\tilde D}, {\overline D} \} &=& 2\partial_x
\end{eqnarray}
It is convenient (we return to this point later) to describe the
$N=2$ theory in terms of constrained superfields, namely the chiral
(${\tilde \Psi}$) and antichiral (${\overline \Psi}$) superfields,
defined respectively by
\begin{eqnarray}
{\overline D} {\tilde \Psi} &=& 0
\nonumber\\
{\tilde D} {\overline \Psi} &=& 0
\end{eqnarray}
Due to the above relations, the derived superfields
${\tilde D}\Phi $ and ${\overline D}\Phi$ are respectively antichiral
and chiral superfields.\par
The condition of chirality implies the following expansions in
component fields
\begin{eqnarray}
{\tilde A} &=& a(x) + \theta \alpha (x) + \theta{\overline \theta}
a(x)'\nonumber\\
{\overline B} &=& b(x) +{\overline \theta} {\beta}(x)
-\theta{\overline\theta} b(x)'
\end{eqnarray}
and the derived superfields are
\begin{eqnarray}
{\tilde D} {\tilde A} &=& \alpha (x) + 2{\overline\theta} a(x)'
-\theta{\overline\theta} a(x)''\nonumber\\
{\overline D} {\overline B} &=& \beta (x) + 2{\theta} b(x)'
+\theta{\overline\theta} b(x)''
\end{eqnarray}
It is remarkable that chiral and antichiral superfields
can be expressed as $N=1$ superfields in relation with the
superspaces
\begin{eqnarray}
{\hat X} &=& ({\hat x} = x+\theta{\overline \theta},
\theta)\nonumber\\
{\check X} &=& ({\check x} = x-\theta{\overline \theta},{\overline
\theta})
\end{eqnarray}
respectively.\par
Moreover if we introduce the $N=1$ spinor derivative $D$ as
\begin{eqnarray}
D&=& D_X ={\partial\over\partial\theta} + 2\theta {\partial\over
\partial x}
\label{newder}
\end{eqnarray}
(allowing for a factor of $2$ difference with respect to the
convention used in the previous sections), then we can write the
derived superfields as
\begin{eqnarray}
{\tilde D} {\tilde A} &\equiv& D {\tilde A}|_{\check X}\nonumber\\
{\overline D} {\overline B} &\equiv& D {\overline B}|_{\hat X}
\end{eqnarray}
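Note that, with the convention (\ref{newder}) and the standard
Grassmann rules $\{ {\partial\over\partial\theta},\theta \}=1$,
$\theta^2=0$, the derivative $D$ squares to twice the bosonic
derivative,
\begin{eqnarray}
D^2 &=& {\textstyle{1\over 2}}\{ D, D\} = 2\left\{
{\partial\over\partial\theta},\theta\right\}\partial_x = 2\partial_x,
\end{eqnarray}
consistently with the relation $\{ {\tilde D}, {\overline D} \} =
2\partial_x$ in the $N=1$ limit $\theta={\overline\theta}$.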
The existence of the $N=1$ superfield representation for chiral and
antichiral
superfields is particularly useful for our purposes because it allows
defining the $N=2$ supersymmetric theory in terms of the $N=1$
superfield formalism developed in the previous sections. In
particular we can define
super-Poisson brackets structures as done before: they will depend on
the $N=1$ supersymmetric
delta-function already encountered (and the (\ref{newder}) derivative
acting on it).\par
The supersymmetric line integrals for chiral and antichiral
superfields are given respectively by
\begin{eqnarray}
d{\hat X} &\equiv& d X\quad\quad\quad X=(x,\theta)\nonumber\\
d{\check X} &\equiv&d{\overline X}\quad\quad\quad{\overline X}
=(x,{\overline\theta})
\end{eqnarray}
The two equivalence relations are due to the fact that the term
proportional to ${\theta{\overline\theta}}$ is a total derivative for
both chiral and antichiral superfields.\par
Let us now say a few more words about the use of (anti-)chiral
superfields to describe $N=2$ theories. The dynamics of a real
superfield $\Psi $ can always be
recovered from the dynamics of two conjugated
chiral and antichiral superfields
$ {\tilde \Psi}$, $ {\overline \Psi}$ (we recall that the $g(x)$
field appearing in (\ref{n2super}) is just an auxiliary field,
dynamically determined in terms of
the component fields $\phi , f, {\overline f}$).\par
It turns out that the $N=2$ dynamics can be expressed by using two
conjugated sets of equations of motion for chiral and antichiral
superfields. Such equations of motion are defined in terms of
conjugated (anti-)chiral hamiltonians whose combination gives a
single real hamiltonian. For integrable systems the
dynamics can also be expressed through two chirally conjugated Lax
operators
whose combination provide a single real Lax operator. Further details
concerning
such construction can be found in (\cite{IT}). In the following we
will define
the $N=2$ super-NLS equation in terms of these two conjugated sets of
chiral
superfields.
\section{The $N=2$ super-NLS hierarchy.}
\indent
Let us introduce now the $N=2$ super-NLS hierarchy, extending to this
case
the procedure already worked out for the bosonic and $N=1$ NLS
theories. \par
According to the discussion developed in the previous section, it is
clear
that now we should define our $N=2$ hierarchy by ``doubling" the
number of superfields
of the $N=1$ case: we should look for two (chiral and antichiral)
covariant derivatives defined in terms of the spin
${\textstyle{1\over 2}}$ superfields ${\tilde \Psi_0}, {\overline
\Psi_0}$. Moreover we should have two
sets of opposite charged (anti-)chiral superfields
${\tilde\Psi}_\pm$, ${\overline\Psi}_\pm$. These two sets of
superfields should be seen as chirally conjugated. \par
Let us define now the two conjugated covariant derivatives: we
introduce first the conjugate spin ${\textstyle{1\over 2}}$
superfields ${\tilde\Psi}_0, {\overline\Psi}_0$. There is a freedom
in choosing the normalization
condition for their super-Poisson brackets algebra. Let us fix it by
assuming
\begin{eqnarray}
\{ {\tilde\Psi}_0(X),{\tilde\Psi}_0(Y)\}&=& \{
{\overline\Psi}_0(X),{\overline\Psi}_0(Y)\} = D_Y\Delta
(X,Y)\nonumber\\
\{{\tilde\Psi}_0(X),{\overline\Psi}_0(Y)\}&=&
0
\label{superu1}
\end{eqnarray}
with $D_Y$ given by (\ref{newder}).\par
Next, the notion of covariant superfield can be introduced: $V$ is
said to be covariant with charges $({\tilde q},{\overline q})$ if it
satisfies the relations
\begin{eqnarray}
\{ {\tilde\Psi_0}(X),{V(Y)}\}&=& {\tilde q}\Delta (X,Y) V(Y)
\end{eqnarray}
and an analogous one with ${\tilde \cdot}\mapsto {\overline\cdot}$
.\par
The covariant derivative ${\cal D}$, mapping covariant superfields of
charges $({\tilde q},{\overline q})$ into superfields of the same
charge, is in this case
given by
\begin{eqnarray}
{\cal D} V &=& (D +{\tilde q}{\tilde \Psi_0} +{\overline q}{\overline
\Psi_0}) V
\end{eqnarray}
At this point we have all the ingredients to define the complete
supercurrents
algebra involving
${\tilde\Psi}_0,{\tilde\Psi}_\pm,{\overline\Psi}_0,{\overline\Psi}_\pm$
which allows us to define the $N=2$ super-NLS theory. After a little
inspection one realizes that this can be achieved by simply
postulating that such an algebra is given by two separate copies of
the $N=1$ algebra (\ref{zerosusyalg},\ref{susyalg}).
A fundamental point is that now, in order to recover the non-trivial
equations of motion which involve together chiral and antichiral
superfields, the two
$N=1$ supercurrents algebras should mix chiral and antichiral
superfields.\par
We can assume the two copies being given by (${\tilde\Psi}_- ,
{\overline\Psi}_0,{\overline\Psi}_+$)
and (${\overline\Psi}_-, {\tilde\Psi}_0, {\tilde\Psi}_+$), with the
following charges for the
${\tilde\Psi}_\pm,{\overline\Psi}_\pm$ superfields
\begin{eqnarray}
{\tilde\Psi}_-&\equiv& (0,-1)\nonumber\\
{\overline\Psi}_+&\equiv& (0,1)\nonumber\\
{\overline\Psi}_-&\equiv& (-1,0)\nonumber\\
{\tilde\Psi}_+ &\equiv& (1,0)
\end{eqnarray}
The complete algebra is given by
\begin{eqnarray}
\{{\overline\Psi}_0 (X), {\overline\Psi}_+(Y) \} &=& \Delta
(X,Y){\overline \Psi}_+(Y) \nonumber\\
\{{\overline\Psi}_0 (X), {\tilde\Psi}_-(Y) \} &=& -\Delta
(X,Y){\tilde \Psi}_-(Y) \nonumber\\
\{{\overline\Psi}_+ (X), {\tilde\Psi}_-(Y) \} &=&
(D_Y -{\overline\Psi}_0(Y) )
\Delta (X,Y) = {\cal D}_Y \Delta (X,Y)\nonumber\\
\{{\tilde\Psi}_0 (X), {\tilde\Psi}_+(Y) \} &=& \Delta (X,Y){\tilde
\Psi}_+(Y) \nonumber\\
\{{\tilde\Psi}_0 (X), {\overline\Psi}_-(Y) \} &=& -\Delta
(X,Y){\overline \Psi}_-(Y) \nonumber\\
\{{\tilde\Psi}_+ (X), {\overline\Psi}_-(Y) \} &=&
(D_Y -{\tilde\Psi}_0 (Y) )
\Delta (X,Y)= {\cal D}_Y \Delta (X,Y)
\label{susyalg2}
\end{eqnarray}
together with (\ref{superu1}).
All other super-Poisson brackets vanish.\par
There exists of course a superWakimoto representation, provided by
two sets of
chirally conjugated superfields:
the bosonic superfields
${\hat C}, {\check C}$ of weight $0$, and the fermionic ones ${\hat
B}, {\check B}, {\hat N}, {\check N} $ of weight ${\textstyle{1\over
2}}$. The $B$'s and $C$'s
superfields generate two coupled systems. \par
The superalgebra of the free Wakimoto
superfields is just provided by
\begin{eqnarray}
\{ {\hat B}(X),{\hat C}(Y)\}&=&\Delta (X,Y)\nonumber\\
\{ {\hat C}(X),{\hat B}(Y)\}&=&\Delta (X,Y)\nonumber\\
\{ {\hat N}(X),{\hat N}(Y)\}&=&D_Y\Delta (X,Y)
\end{eqnarray}
and an equivalent relation with ${\hat\cdot}\mapsto{\check\cdot}$.
The superfields identifications are the same as in (\ref{superwak}):
\begin{eqnarray}
{\overline \Psi}_+ &=& {\hat B}\nonumber\\
{\overline \Psi}_0 &=& -{\hat B}{\hat C} + {\hat B}\nonumber\\
{\tilde \Psi}_- &=& -{\textstyle{1\over 2}} {\hat B}{\hat C}^2 +{\hat
C}{\hat N}- D{\hat C}
\end{eqnarray}
and the analogous relations involving the second set of
superfields.\par
Inspired by the $N=1$ results we can define at this point our
dynamics
as determined by the two conjugated sets of (anti-)chiral
hamiltonians in involution. The first two (${\tilde H}_{1,2}$ and the
conjugates ${\overline H}_{1,2}$) are given by
\begin{eqnarray}
{\tilde H}_1 &=& \int d{ X}{\tilde{\cal H}}_1 =\int dX ({\tilde
\Psi}_- {{\cal D}}{\overline \Psi}_+)\nonumber\\
{\tilde H}_2 &=& \int dX {\tilde {\cal H}}_2 =\int d{ X} ({\tilde
\Psi}_-
{{\cal D}}^3{\overline \Psi}_+)
\end{eqnarray}
and
\begin{eqnarray}
{\overline H}_1 &=& \int d{\overline X}{\overline {\cal H}}_1 =\int
d{\overline X}( {\overline \Psi}_- {{\cal D}}{\tilde
\Psi}_+)\nonumber\\
{\overline H}_2 &=& \int d{\overline X}{\overline {\cal H}}_2 =\int
d{\overline X} ({\overline \Psi}_-
{{\cal D}}^3{\tilde \Psi}_+)
\end{eqnarray}
The real hamiltonians are given by
\begin{eqnarray}
H_{1,2} &=& {\tilde H}_{1,2} + {\overline H}_{1,2}
\end{eqnarray}
They are invariant under the $N=2$ supersymmetry transformations
\begin{eqnarray}
\delta {\tilde\Psi}_\pm &=& \pm\varepsilon {{\cal D}}{\overline
\Psi}_\pm\nonumber\\
\delta {\overline\Psi}_\pm &=& \pm{\overline \varepsilon} {{\cal
D}}{\tilde \Psi}_\pm
\end{eqnarray}
Moreover the hamiltonian densities ${\tilde {\cal H}}_j,
{\overline{\cal H}}_j$
have by construction vanishing Poisson brackets with respect to the
subalgebra
generators ${\tilde\Psi}_0,{\overline\Psi}_0$, namely they are in the
commutant.
\par
The equations of motion are introduced through the following
equations
\begin{eqnarray}
{\partial\over\partial t_j } { F} &=& \{ H_j, F\}
\end{eqnarray}
After using the algebraic relations (\ref{superu1},\ref{susyalg2}),
and taking into account that we can consistently set
\begin{eqnarray}
{\tilde\Psi}_0={\overline\Psi}_0 &=& 0
\end{eqnarray}
we get the flows:
\begin{eqnarray}
{\partial\over\partial t_1 } {\tilde\Psi}_\pm &=& {\tilde\Psi}_\pm
'\nonumber\\
{\partial\over\partial t_1 }{\overline\Psi}_\pm &=&
{\overline\Psi}_\pm '
\end{eqnarray}
and
\begin{eqnarray}
{\partial\over\partial t_2} {\tilde\Psi}_\pm &=& \pm
{\tilde\Psi}_\pm'' \mp
{\tilde\Psi}_\pm {\overline D} ({\overline\Psi}_\mp {\tilde D}
{\tilde \Psi}_\pm )\nonumber\\
{\partial\over\partial t_2 }{\overline\Psi}_\pm &=& \pm
{\overline\Psi}_\pm'' \mp
{\overline\Psi}_\pm {\tilde D} ({\tilde\Psi}_\mp {\overline D}
{\overline \Psi}_\pm )
\end{eqnarray}
The second flow provides the two-component $N=2$ super-NLS
equation.\par
Notice that the chirality condition is respected by the equations of
motion as it should be.\par
On the right hand side chiral and antichiral superfields are coupled
together in the non-linear term. This ensures that the theory has
the genuine feature of a non-trivial
$N=2$ supersymmetry.
The $N=1$ equation is recovered
by assuming $\theta={\overline\theta}$ which implies
${\tilde\Psi}_\pm ={\overline\Psi}_\pm$.\par
It is clear that one can straightforwardly repeat the same steps
as done in the $N=1$ construction. The same structures appear in this
case as well. Let us recall them briefly:\par
i) existence of a compatible bihamiltonian structure relating the
first two hamiltonians.\par
ii) $N=2$ generalization of the modified super-NLS equation arising
by the superWakimoto representation for the algebra
(\ref{superu1},\ref{susyalg2}).\par
iii) existence of the (coset) $N=2$ non-linear super-${\cal W}_\infty$
algebra, promoted to be a finite rational super-${\cal W}$ algebra. It is
linearly generated by the chargeless superfields
\begin{eqnarray}
V_{2n} &=& {\tilde \Psi}_- {\cal D}^{2n}
{\overline {\Psi}}_+\nonumber\\
V_{2n+1} &=&{\tilde \Psi}_- {\cal D}^{2n+1}
{\overline {\Psi}}_+\nonumber\\
W_{2n} &=& {\overline\Psi}_- {\cal D}^{2n}
{\tilde{\Psi}}_+\nonumber\\
W_{2n+1} &=&{\overline \Psi}_- {\cal D}^{2n+1}
{\tilde {\Psi}}_+
\end{eqnarray}
The fermionic superfields $V_{2n+1}$, $W_{2n+1}$ have half-integral
spin ${\textstyle {2n+3\over 2}}$. When evaluated at
${\tilde\Psi}_0={\overline\Psi}_0=0$ they are
respectively chiral and antichiral, and can be expressed as
\begin{eqnarray}
V_{2n+1} &=&{\tilde \Psi}_-(2\partial )^n {\overline D}
{\overline {\Psi}}_+\nonumber\\
W_{2n+1} &=&{\overline \Psi}_- (2\partial )^n{\tilde D}
{\tilde {\Psi}}_+
\end{eqnarray}
The bosonic superfields $V_{2n}$, $W_{2n}$ of spin $n+1$
do not have a definite chirality. Notice that, as it should be, our
$N=2$ super-${\cal W}$ algebra admits a ``doubled" number of superfields
with respect to the $N=1$ case.\par
iv) existence of a dynamically consistent constraint, which allows
setting
the bosonic superfields $V_{2n}$, $W_{2n}$ equal to zero.
This implies in its turn a ``reduced dynamics" involving only the
chiral and antichiral fermionic
superfields; such a dynamics is particularly important because
it gives rise to a consistent reduction of the $N=2$
super-KP hierarchy provided by the two conjugate Lax operators of
definite chirality.
\par
These two conjugate Lax operators are given by
\begin{eqnarray}
{\tilde L} &=& {{\cal D}} + {\tilde\Psi}_- {\cal D}^{-2}
{{\overline\Psi}_+}^{(1)}
\nonumber\\
{\overline L} &=& {{\cal D}} + {\overline\Psi}_- {\cal D}^{-2}
{{\tilde\Psi}_+}^{(1)}
\end{eqnarray}
where
\begin{eqnarray}
{{\overline\Psi}_+}^{(1)}&=& {{\cal D}}{\overline\Psi}_+\nonumber\\
{{\tilde\Psi}_+}^{(1)}&=& {{\cal D}}{\tilde\Psi}_+
\end{eqnarray}
${\tilde L} $ is chiral, ${\overline L}$ antichiral.\par
Once expanded, they are expressed in terms of the $V_{2n+1}$,
$W_{2n+1}$ superfields respectively, which are invariants under the
$N=2$ Kac-Moody superalgebra (\ref{superu1}):
\begin{eqnarray}
{\tilde L} &=& {\tilde{ D}} +\sum_{k=0}^\infty (-1)^kV_{ 2k+1}
\partial^{-k}\nonumber\\
{\overline L} &=& {\overline{ D}} +\sum_{k=0}^\infty (-1)^kW_{ 2k+1}
\partial^{-k}
\label{supkpred}
\end{eqnarray}
(we have replaced the covariant derivative with the standard one,
which is allowed when ${\tilde L}$, ${\overline L}$ act on chargeless
superfields).\par
The dynamics for the $V_{2n+1}$, $W_{2n+1}$ superfields derived in
terms of flows of the super-KP reduced operator (\ref{supkpred})
coincides with the
just mentioned ``reduced dynamics" of $V_{2n+1}$, $W_{2n+1}$ arising
from the hamiltonian formulation.
{}~\quad\\
\vfill
{\Large {\bf Conclusions}}
\indent
In this paper we have presented a method to derive what we may call
(in analogy with the bosonic case) multi-superfield reductions of the
super-KP hierarchy, which are a further generalization of the
commonly studied generalized super-KdV hierarchies.\par
In the particular example here considered we obtained some new
results
concerning the form of the super-Lax operator, the connection with a
super${\cal W}_\infty$ algebra, the link with the modified super-NLS
equation, etc.\par
According to our ``coset method'' the multi-field reductions are
obtained from cosets of (in this case super) Kac-Moody algebras.\par
We would like to say a few words about the coset method and why it
deserves further investigation: it provides a nice algebraic
interpretation of the Poisson brackets structures of the theories
involved; moreover, it could furnish an algebraic classification of
the multi-field (super) KP reductions, if the attractive hypothesis
that they are all associated with cosets proves to be correct. Since
our method makes use of covariant derivatives and is not based on a
hamiltonian reduction (and consequently on Dirac brackets), it admits
a nice free-fields interpretation and a mapping to modified
hierarchies, as explained in the paper.
This could prove useful when discussing quantization (it is indeed
tempting to repeat our procedure for, say, the $q$-deformed affine
$sl(2)$ algebra).\par
In order to attack the most important point, concerning the
classification of the (super) KP reductions, some preliminary results
will be needed: we can mention, for instance, understanding the coset
method in the light of the AKS scheme, making explicit the connection
between the (unreduced) KP hierarchy Poisson brackets structure and
those coming from the cosets, and computing the associated
$r$-matrices with methods like those developed in \cite{rmat}. Such
results are needed for a
formal proof of the statement that any coset gives rise to a
KP-reduction.
We will address all these points in forthcoming papers.
{}~\\~\\
\noindent
{\large{\bf Acknowledgements}}
{}~\\~\\
I wish to acknowledge useful discussions
with L. Feher and P. Sorba.
{}~\\
{}~\\
\section{\label{level1}Introduction}
It is well-known that in General Relativity there are, in principle, space-times where time travel is possible, that is, space-times containing trajectories that form a loop in time, such that an observer who follows them could return to their own past \cite{thorne}. These loops are called Closed Timelike Curves (CTC). There is a close relationship between time travel and the possibility of achieving speeds greater than the speed of light in vacuum (superluminal velocities). Traveling between two points at superluminal velocity and then making the return trip at superluminal velocity in a different Lorentz frame allows one, in principle, to return to the origin before having even left \cite{hawking}.
The existence of CTCs presents both logical problems (such as the well-known grandfather paradox) and theoretical ones \cite{lobo}. From the theoretical point of view, the presence of CTCs might be seen as an incompleteness of General Relativity itself: the evolution of a space-time with CTCs lacks a clear and consistent causal structure that can be described by General Relativity or any other accepted theory. Certain conditions of realism not necessarily inherent to General Relativity (related, for example, to the type of matter (energy conditions) or to the asymptotic behavior of space-time) are usually imposed to prevent the existence of CTCs and thus maintain the causal structure \cite{thorne, curiel, mallary, hawking, wald}. It should be noted that the main interest in the study of CTCs lies in the search for physical mechanisms that prevent their creation \cite{thorne}, such as the chronology protection conjecture proposed by Hawking \cite{hawking}. In fact, the most promising route of research comes from the combination of the theory of General Relativity and quantum field theory in curved space-times, which could help to understand some aspects of quantum gravity \cite{thorne}. However, only a full theory of quantum gravity could finally close this open problem, by confirming or refuting Hawking's conjecture.
In physical problems of this nature, due to the difficulty (or impossibility) of observing the phenomenon itself, the use of simulations, both in classical and quantum setups, might be interesting. Using classical means, processes such as superluminal motion \cite{clerici} or the formation of an event horizon in a white hole \cite{philbin} can be simulated. The use of quantum simulators has recently yielded results on physical processes whose direct observation is difficult or dubious, such as the simulation of a traversable wormhole \cite{sabworm}, space-times in which superluminal trips are allowed and even CTCs \cite{sabexot}, Hawking radiation \cite{nation}, magnetic monopoles \cite{ray} or tachyonic particles \cite{lee}. The nature of each problem makes it necessary to use different types of systems to perform the simulations.
In this paper we are interested in the analysis of the possible mechanisms responsible for preventing the existence of CTCs. In the absence of a full theory of quantum gravity, an experimental simulator including quantum effects might shed light on this open issue. Many space-times that allow the existence of CTCs have been proposed, each of them with more or less reasonable physical properties \cite{thorne}. We will focus only on the recent proposal by Mallary \textit{et al.} \cite{mallary}, a space-time consisting of a wire of matter of infinite length that can be moved at relativistic speeds, whose line element is given by
\begin{equation}
\ ds^{2}=-Fc^{2}dt^{2}+\frac{1}{F}dr^{2}+dz^{2}+r^{2}d\phi^{2},
\label{eq1}
\end{equation}
where
\begin{equation}
\ F=\cases{1+\left ( \frac{1}{r} -\frac{1}{R}\right )^{n} &if $r\leq R$ \\ 1 &if $ r> R$\\},
\label{eq2}
\end{equation}
where the radius $R$ is an arbitrary positive constant and $n \geq 2$. The term $F$ ensures that the radius of the wire is finite and presents a singularity at $r = 0$ (which leads to an infinite mass per unit length). This metric violates the cosmic censorship hypothesis, since it lacks an event horizon: the $\frac{1}{F}$ factor of the radial coordinate never becomes infinite. It satisfies the weak, null and strong energy conditions. However, it does not satisfy the dominant energy condition \cite{curiel}. Although similar metrics can be considered which fulfill the dominant energy condition, as well as others having a finite size, the study of CTCs in these metrics is more involved and will not be addressed here \cite{mallary}.
We propose the experimental simulation of photon paths in the space-time described by the metric (\ref{eq1}) that give rise to CTCs. For this we will consider two essentially different systems: a classical one and a quantum one. As a classical system we will consider the signal observed by the scattering of a light front on an inclined surface \cite{clerici}, and as a quantum system we will use a superconducting circuit, specifically an \textit{array} of SQUIDs \cite{simoen, lahten, sabworm}.
In both cases, simplified versions of (\ref{eq1}) are assumed, where the paths followed by the photons take place in a single spatial dimension, that is, they are constrained to a $1+1D$ section of the full space-time. Then we can generally use:
\begin{equation}
\ ds^{2}=-c^{2}\left ( \rho ,t \right )dt^{2}+d\sigma ^{2},
\label{eq3}
\end{equation}
where $\rho$ and $\sigma$ are arbitrary coordinates (if $\rho$ does not coincide with $\sigma$, it is taken as an additional parameter). In this way we have Minkowski-like space-times with an effective light speed that depends on the spatio-temporal coordinates. Then, in order to implement the space-time (\ref{eq1}), a $1+1D$ section is taken from the full $3+1D$ space-time, so that two of the coordinates are ignored, obtaining a dimensionally reduced space-time with an expression of the form (\ref{eq3}). For axial trajectories along the $z$ axis ($\rho = r$; $\sigma = z$) the metric of the dimensionally reduced space-time is finally:
\begin{equation}
\ ds_{z}^{2}= -c_{v}^{2}Fdt^{2}+dz^{2}
\label{eq4} \\
\end{equation}
where $c_{v}$ is the speed of light in vacuum. We see in Eq. (\ref{eq4}) that we have an $r$-dependent effective speed of light, $c_{z}=c_{v}\sqrt{F}$.
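As a simple numerical illustration of this effective speed (a sketch of our own, not part of the proposal in \cite{mallary}; units with $c_{v}=1$, $R=1$ and $n=2$ are assumed, and the function names are ours):

```python
import math

def F(r, R=1.0, n=2):
    """Metric function of Eq. (2): F = 1 + (1/r - 1/R)^n for r <= R,
    and F = 1 outside the wire (r > R). r = 0 is a singularity."""
    if r <= 0:
        raise ValueError("r must be positive")
    return (1.0 + (1.0 / r - 1.0 / R) ** n) if r <= R else 1.0

def c_z(r, c_v=1.0, R=1.0, n=2):
    """Effective axial speed of light of Eq. (4): c_z = c_v * sqrt(F)."""
    return c_v * math.sqrt(F(r, R, n))

print(c_z(0.1))  # close to the wire: strongly superluminal (w.r.t. c_v = 1)
print(c_z(1.0))  # at r = R the metric is continuous: c_z = c_v
print(c_z(2.0))  # outside the wire: flat space-time, c_z = c_v
```

Close to the wire the effective speed grows without bound, while the metric matches flat space-time continuously at $r=R$.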
The structure of the paper is the following. In Section \ref{level2} we will briefly describe the particular CTC proposed in \cite{mallary}. Then we will consider in Section \ref{level3} the quantum simulation of the spacetime where these CTCs arise and discuss the restrictions that appear when we try to implement the CTC. Finally, we will see in Section \ref{level4} that these mechanisms are absent in a classical setup, where we can actually propose a realistic analogue of a CTC. We conclude in Section \ref{level5} with a summary of our results.
\section{\label{level2}Existence and features of CTCs}
Before trying to implement a simulation of CTCs, we will briefly summarize the conditions necessary for a CTC to be produced in the space-time (\ref{eq1}), following the more detailed description of \cite{mallary}. Two separate parallel wires will be needed, at a distance $d$ ($2R < d \ll L$, where $L$ is a physical distance traveled along the $z$ axis), one of which moves at relativistic speed in the direction of the $z$ axis, as shown in Figure~\ref{fig1}. In this way, the wires do not interact gravitationally, and are separated by an empty space-time. Two reference frames at rest at great distances from both wires, denoted $S^{Lab}$ and $S^{\beta}$, will be considered for the lower and upper wire, respectively. Then the upper wire will undergo a boost $\beta = \frac{v}{c_{v}} < 1$ in the direction of the $z$ axis. A photon which follows the path described in Figure~\ref{fig1} travels through regions where its speed is greater than the speed of light in vacuum, depending on the distance from the wire, as deduced from (\ref{eq4}).
\begin{figure}[!]
\includegraphics[width=\columnwidth]{fig1.eps}
\caption{\label{fig1} Diagram of the CTC in the space-time formed by two wires that move with a certain relative speed. $S^{Lab}$ represents the observing reference frame at rest with respect to the lower wire, and $S^{\beta}$ is a reference frame at rest with respect to the upper wire, which has a boost $\beta$ relative to $S^{Lab}$. Both wires have infinite length along the longitudinal axis.}
\end{figure}
We assume that the photon makes a trip along the lower wire (at rest with respect to the laboratory) and the same trip back along the upper wire, in the direction opposite to the boost, with a velocity $c_{z} = c_{v}\sqrt{F} \geq c_{v}$, where the radial distances traveled will be neglected (since $d \ll L$).
It is shown in \cite{mallary} that the total time for the round trip can be negative if:
\begin{equation}
\ \beta > \frac{2}{\sqrt{F}+\frac{1}{\sqrt{F}}}.
\label{eq17}
\end{equation}
Thus, (\ref{eq17}) is the CTC condition as described in \cite{mallary}, without the need to explicitly compute the metric of the space-time containing the CTC (we refer to the original paper for further explanation). This scenario can be generalized to the case where in the first wire (at rest with respect to the laboratory frame $S^{Lab}$) the coordinate speed of light is $c_{v}\sqrt{F_{1}}$ and in the second wire (at rest with respect to the boosted frame $S^{\beta}$) the coordinate speed of light is $c_{v}\sqrt{F_{2}}$. In this way, we find that the CTC condition takes the general form
\begin{equation}
\ \beta > \frac{1+\frac{\sqrt{F_{2}}}{\sqrt{F_{1}}}}{\frac{1}{\sqrt{F_{1}}}+\sqrt{F_{2}}}
\label{eq18}
\end{equation}
which clearly reduces to (\ref{eq17}) when $F_{1}=F_{2}$.
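Conditions (\ref{eq17}) and (\ref{eq18}) are easy to check numerically; the following sketch (our own, with illustrative values of $F_{1,2}$) verifies the reduction and shows that the required boost is always subluminal whenever $F>1$:

```python
import math

def beta_threshold(F1, F2):
    """Minimal boost for a CTC, Eq. (18):
    beta > (1 + sqrt(F2/F1)) / (1/sqrt(F1) + sqrt(F2))."""
    return (1.0 + math.sqrt(F2 / F1)) / (1.0 / math.sqrt(F1) + math.sqrt(F2))

def beta_threshold_equal(F):
    """Special case F1 = F2 = F, Eq. (17): beta > 2 / (sqrt(F) + 1/sqrt(F))."""
    return 2.0 / (math.sqrt(F) + 1.0 / math.sqrt(F))

# Eq. (18) reduces to Eq. (17) for F1 = F2:
for F in (1.5, 4.0, 100.0):
    assert abs(beta_threshold(F, F) - beta_threshold_equal(F)) < 1e-12

# The threshold is subluminal for any F > 1, approaching 1 as F -> 1+
# (so no CTC is possible in flat space-time, where F = 1):
print(beta_threshold_equal(4.0))    # -> 0.8
print(beta_threshold_equal(100.0))  # 2/(10 + 0.1), roughly 0.198
```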
\section{\label{level3}Quantum simulation of CTCs}
Our aim is to simulate the path described in Figure~\ref{fig1}, trying to reach the CTC condition (\ref{eq17}). For the quantum simulation of the space-time described by (\ref{eq4}) we exploit the conformal invariance of the Klein-Gordon equation for a scalar field in $1+1D$ \cite{birrel}. Essentially, this is the case for an electromagnetic wave in an open transmission line with an array of dc superconducting quantum interference devices (dc-SQUIDs) embedded in it \cite{lahten, simoen}. For our purposes, a SQUID can be considered as a tunable Josephson junction (JJ), namely a nonlinear inductance which can be controlled by an external magnetic flux. (For more details on the physics of JJs and SQUIDs in the context of modern quantum technologies, see for instance \cite{simoen,reviewnori}.) In such a setup, the propagation speed of a microwave quantum electromagnetic field along the transmission line is given by $c = \frac{1}{\sqrt{LC}}$, where $L$ and $C$ are the inductance and capacitance per unit length, respectively. Since the number of SQUIDs embedded in the transmission line is large enough, we can consider that $L = L_{s}$ and $C = C_{s}$, where $L_{s}$ and $C_{s}$ are the inductance and capacitance of a single SQUID. Note that, actually, the capacitance and inductance per unit length are $\frac{C_{s}}{\epsilon}$ and $\frac{L_{s}}{\epsilon}$, respectively, where $\epsilon$ is the size of the SQUID; this does not affect the results, since $\epsilon^{2}$ gets absorbed in the definition of $c_{0}$ (see below). If the SQUID area is small enough, its self-inductance can be neglected. Each SQUID has two JJs but, considering that both have the same critical current ($I_{c}$), it can be treated as a single Josephson junction whose inductance is given by
\begin{equation}
\ L_{s}\left ( \phi _{ext} \right )= \frac{\phi _{0}}{4\pi I_{c}\cos \frac{\pi \phi _{ext}}{\phi _{0}}\cos \psi }
\label{eq30},
\end{equation}
where $\phi_{0} = \frac{h}{2e}$ is the flux quantum, $\phi_{ext}$ is the external magnetic flux threading the SQUID and $\psi$ is the phase difference along the SQUID, which we will take in the weak-signal limit $\psi = 0$. In this way, the speed of light in the transmission line is
\begin{equation}
\ c^{2}\left ( \phi _{ext} \right )= \frac{1}{L_{s}C_{s}}= \frac{1}{\frac{\phi _{0}}{4\pi I_{c}}C_{s}}\cos \frac{\pi \phi _{ext}}{\phi _{0}}= c_{0}^{2}\cos \frac{\pi \phi _{ext}}{\phi _{0}}
\label{eq31}
\end{equation}
where $c_{0}=c(\phi_{ext}=0)=\frac{1}{\sqrt{L_{s}(\phi_{ext}=0)C_{s}}}$ is the speed of light in the transmission line in the absence of external magnetic flux. To modify the velocity (\ref{eq31}) along the SQUID array, a magnetic flux $\phi_{ext}$ is applied, with a time and space dependence suitable to emulate the section of space-time of interest. First, we divide this magnetic flux into two components, such that
\begin{equation}
\ \phi _{ext}\left ( r,t \right )= \phi _{ext}^{DC}+\phi _{ext}^{AC}\left ( r,t \right )
\label{eq32}
\end{equation}
As shown in \cite{sabexot}, we get:
\begin{equation}
\ c^{2}\left ( \phi _{ext} \right )=c^{2}\left ( \phi _{ext}^{DC} \right )\tilde{c}^{2}\left ( \phi _{ext} \right ),
\label{eq33}
\end{equation}
where
\begin{eqnarray}
\ c^{2}\left ( \phi _{ext}^{DC} \right )&=c_{0}^{2}\cos \frac{\pi \phi _{ext}^{DC}}{\phi_{0}}
\label{eq34} \\
\ \tilde{c}^{2}\left ( \phi _{ext}\right )&=\sec \frac{\pi \phi _{ext}^{DC}}{\phi_{0}}\cos \frac{\pi \phi _{ext}}{\phi _{0}},
\label{eq35}
\end{eqnarray}
under the restriction $\frac{\pi \phi_{ext}^{DC}}{\phi_{0}},\,\frac{\pi \phi_{ext}}{\phi_{0}} \in \left[ -\frac{\pi}{2},\frac{\pi}{2} \right]$. For the simulation of the spacetime in (\ref{eq4}), we first set an equivalence between the squared speed of light in vacuum $c_{v}^{2}$ and $c^{2}(\phi_{ext}^{DC})$, such that
\begin{equation}
\ c_{v}^{2}\sim c_{0}^{2}\cos \frac{\pi \phi _{ext}^{DC}}{\phi _{0}}
\label{eq36}
\end{equation}
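As a quick numerical sanity check, Eq. (\ref{eq31}) and the DC/AC factorisation of Eqs. (\ref{eq33})-(\ref{eq35}) can be sketched as follows (a toy computation in units where $c_{0}=\phi_{0}=1$; the helper names are ours and not part of any experimental toolchain):

```python
import math

def c_squared(phi_ext, c0=1.0, phi0=1.0):
    """Squared propagation speed along the SQUID array, Eq. (31)."""
    return c0**2 * math.cos(math.pi * phi_ext / phi0)

def c_squared_factored(phi_ext, phi_dc, c0=1.0, phi0=1.0):
    """Same speed via the DC/AC factorisation, Eqs. (33)-(35)."""
    c2_dc = c0**2 * math.cos(math.pi * phi_dc / phi0)      # Eq. (34)
    c2_tilde = (math.cos(math.pi * phi_ext / phi0)
                / math.cos(math.pi * phi_dc / phi0))        # Eq. (35)
    return c2_dc * c2_tilde

# Arbitrary working point: phi_ext = phi_DC + phi_AC.
phi_dc, phi_ac = 0.30, 0.10
phi = phi_dc + phi_ac
assert abs(c_squared(phi) - c_squared_factored(phi, phi_dc)) < 1e-12
```

The assertion simply confirms that the factorised form reproduces Eq. (\ref{eq31}) at an arbitrary working point.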
In this way, setting a constant magnetic flux $\phi_{ext}^{DC}$ fixes the simulated flat-spacetime speed of light, which might be significantly smaller than the actual value of the speed of light in vacuum $c_{v}$ and than the speed of light in the transmission line in the absence of magnetic flux $c_{0}$. In summary, we replace the actual speed of light by a different, virtual one, and we assume that the latter plays the same role as the real speed of light but in a virtual universe, setting a virtual causal structure. The superluminal motion obtained is referred to this virtual speed of light, but it is always subluminal with respect to the real speed of light. In this way, we can build the analogue of a CTC with respect to the virtual speed of light, without sending any physical object back in time in any sense. This will be necessary to simulate superluminal velocities in the superconducting circuit.
Secondly, the $AC$ component of the magnetic flux $\phi_{ext}^{AC}(r,t)$ will be used to simulate a spatiotemporal profile for the speed of light such that:
\begin{equation}
\ F=\sec \frac{\pi \phi _{ext}^{DC}}{\phi _{0}}\cos \frac{\pi \phi _{ext}}{\phi _{0}}
\label{eq37}
\end{equation}
Thus, we will need the following profiles for the magnetic fluxes:
\begin{equation}
\ \frac{\pi \phi _{ext}^{AC}\left ( r,t \right )}{\phi _{0}}=\arccos \left (F\cos \frac{\pi \phi _{ext}^{DC}}{\phi _{0}} \right )-\frac{\pi \phi _{ext}^{DC}}{\phi _{0}}
\label{eq39}
\end{equation}
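A hedged sketch of how the required AC profile of Eq. (\ref{eq39}) can be computed and checked against Eq. (\ref{eq37}) (angles are expressed as $\pi\phi/\phi_{0}$; function names are illustrative only):

```python
import math

def ac_flux(F, d):
    """pi*phi_AC/phi_0 needed for a target profile value F, Eq. (39).
    d = pi*phi_DC/phi_0 is the DC working point; requires |F cos d| <= 1."""
    return math.acos(F * math.cos(d)) - d

def profile(phi_ext_angle, d):
    """F recovered from the total flux, Eq. (37); angles are pi*phi/phi_0."""
    return math.cos(phi_ext_angle) / math.cos(d)

d = 0.3 * math.pi          # hypothetical DC bias, pi*phi_DC/phi_0
F = 1.2                    # target value of the profile
# Round trip: applying the AC flux of Eq. (39) reproduces the target F.
assert abs(profile(d + ac_flux(F, d), d) - F) < 1e-12
```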
Since the path takes place at a constant radial distance, the coordinate speed of light will be identical and constant for each reference system. However, we can only simulate an effective speed of light for the laboratory system $S^{lab}$. For the path in the wire at rest, this is immediate, since the simulated speed of light will simply be $c_{z}=c_{v}\sqrt{F}$. The magnitude of $c_{z}$ is limited by the magnetic flux value $\phi_{ext}=\frac{\phi_{0}}{2}$; values close to this limit would cause quantum fluctuations in the superconductor phase $\psi$ due to the array impedance, invalidating the approximation made \cite{simoen}. Considering the simulable limit $\phi_{ext}=0.45\phi_{0}$ and (\ref{eq39}), we obtain an upper value of $c_{z}\sim 2.5\,c_{v}$ \cite{sabworm}. For the path in the upper wire, we are not able to boost a transmission line up to relativistic speeds. Thus, we directly state what would be the metric of the wire described by (\ref{eq1}) in motion along the direction of increasing $z$ ($z>0$) with a certain velocity $v$ (where $r$ is a parameter), that is, the metric observed from rest in the distance (flat spacetime) of a moving wire.
We consider two reference systems, one $ S $ with coordinates ($ z, t $) static with respect to the wire in motion and another $ S '$ with coordinates ($ z', t '$) moving with the wire. The relations between the coordinates of both reference systems are given in the standard way:
\begin{eqnarray}
\ t'&= \gamma \left ( t-\frac{v}{c_{v}^{2}}z \right )
\label{eq43} \\
\ z'&=\gamma \left ( z-vt \right ),
\label{eq44}
\end{eqnarray}
where $ \gamma=1/\sqrt{1-\frac{v^{2}}{c_{v}^{2}}} $ is the usual Lorentz factor. Using this, we find:
\begin{equation}
\ ds^{2}=-\gamma ^{2}F\left ( c_{v}dt-\beta dz \right )^{2}+\gamma ^{2}\left ( dz-\beta c_{v}dt \right )^{2}.
\label{eq48}
\end{equation}
Eq. (\ref{eq48}) possesses two families of null geodesics:
\begin{eqnarray}
\ dz&=c_{v}\frac{\beta +\sqrt{F}}{1+\sqrt{F}\beta }dt\underset{\beta =0,F=1}{\rightarrow}dz=c_{v}dt
\label{eq49} \\
\ dz&=c_{v}\frac{\beta -\sqrt{F}}{1-\sqrt{F}\beta }dt\underset{\beta =0,F=1}{\rightarrow}dz=-c_{v}dt,
\label{eq50}
\end{eqnarray}
where, by considering the flat-spacetime limit ($F = 1$) for a wire at rest ($\beta = 0$), we see that (\ref {eq49}) corresponds to the path of a photon in the same direction as the motion of the wire while (\ref {eq50}) corresponds to the opposite direction. In the latter case, we have that the speed of light is:
\begin{equation}
\ c_{z}^{\beta }=c_{v}\frac{\sqrt{F}-\beta }{1-\sqrt{F}\beta },
\label{eq41}
\end{equation}
which can be negative (and thus reverse the time direction, a necessary but not sufficient condition for a CTC) if
\begin{equation}
\sqrt {F} \beta> 1.\label{CTCcondition2}
\end{equation}
We analyze the possibility of achieving the condition (\ref{CTCcondition2}) and the more restrictive (\ref{eq17}) in Figure~\ref{fig2}. For a given coordinate speed of light in the wire at rest with respect to the laboratory, a corresponding range of $\beta$ in the boosted wire would give rise to a CTC, which in turn translates into a range of values of $c_{z}^{\beta}$ that are compatible with a CTC. As can be seen in Figure~\ref{fig2}, $c_{z}^{\beta}$ is always negative in the CTC region. However, we are not able to simulate an effective negative speed of light by means of Eqs. (\ref{eq34}) and (\ref{eq35}). Therefore, a fundamental restriction appears in this quantum setup, preventing us from generating a CTC.
\begin{figure}[!]
\includegraphics[width=\textwidth]{fig2.eps}
\caption{\label{fig2} Simulated speed of light in the upper wire $c_{z}^{\beta}$ vs simulated $\beta$ and speed of light in the bottom wire $c_z$. The points under the red curve correspond to negative time for the $BC$ path in the upper wire in the laboratory coordinate system, while the points fulfilling the CTC condition are under the light green line. In both cases, $c_{z}^{\beta}$ is negative and therefore out of experimental reach. The black arrow corresponds to a particular example.}
\end{figure}
Interestingly, defining:
\begin{eqnarray}
\ c_{p}&=c_{v}\frac{\sqrt{F}\left ( 1-\beta ^{2} \right )}{1-F\beta ^{2}}
\label{eq54} \\
\ v&=c_{v}\frac{\beta \left ( F-1 \right )}{1-F\beta ^{2}},
\label{eq55}
\end{eqnarray}
the metric (\ref{eq48}) can be rewritten as:
\begin{equation}
\ ds^{2}=-\left ( c_{p}^{2}-v^{2} \right )dt^{2}+2vdtdz+dz^{2},
\label{eq53}
\end{equation}
which is the well-known metric of a pulse travelling at speed $v$ with background speed $c_p$ in the comoving frame. The latter has been used to simulate a black hole, since this is also the Schwarzschild metric in Gullstrand-Painlev\`e coordinates. The proposed experimental design is the same as the one explained above, but with an additional conducting line, in which a current pulse with velocity $v$ is generated, producing a magnetic flux bias; $v$ is limited by the propagation velocity of the unbiased SQUIDs, i.e. $c_{0}=c(\phi_{ext}=0)$ \cite{nation}. Thus, one might think of generating an electromagnetic pulse with the velocity $v$ necessary to generate a CTC. However, the analysis of the null geodesics shows that the negative-time trajectories would require $v>c_p$, which immediately implies $\beta>1/\sqrt{F}$ and thus negative $c_p$ (Figure~\ref{fig3}). Thus, we face the same restriction as before, due to the inability of simulating negative speeds of light with this setup. It is worth noting that, since $c_{p}$ appears squared in Eq. (\ref{eq53}), we can consider the absolute value of (\ref{eq54}). In this way, we can bypass the problem of simulating a negative light propagation velocity, although we still face the issue of generating a current pulse of negative velocity. Interestingly, the boundary between positive-time and negative-time trajectories is the point $c_{p}^{2}=v^{2}$, which is exactly the condition for the appearance of a horizon in the black-hole interpretation of the metric (\ref{eq53}).
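The coincidence between the CTC boundary $\sqrt{F}\beta=1$ and the horizon condition $c_{p}^{2}=v^{2}$ can be checked numerically from Eqs. (\ref{eq54})-(\ref{eq55}); a sketch in units $c_{v}=1$ (helper names are ours):

```python
import math

def pulse_params(F, beta, c_v=1.0):
    """Background speed c_p and pulse speed v, Eqs. (54)-(55)."""
    den = 1.0 - F * beta**2
    c_p = c_v * math.sqrt(F) * (1.0 - beta**2) / den
    v = c_v * beta * (F - 1.0) / den
    return c_p, v

# Algebraically c_p**2 - v**2 = c_v**2 (F - beta**2) / (1 - F*beta**2),
# so its sign flips exactly where sqrt(F)*beta crosses 1.
c_p, v = pulse_params(4.0, 0.4)   # sqrt(F)*beta = 0.8 < 1: no horizon
assert c_p**2 > v**2
c_p, v = pulse_params(4.0, 0.6)   # sqrt(F)*beta = 1.2 > 1: CTC regime
assert c_p**2 < v**2 and c_p < 0  # and c_p turns negative, as in Fig. 3
```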
In order to further illuminate the quantum origin of the restrictions preventing us from simulating a CTC, we show in the next section a setup using classical light where the above issues are not present.
\begin{figure}[!]
\includegraphics[width=\columnwidth]{fig3.eps}
\caption{\label{fig3} Light propagation and pulse velocities for the particular case $\beta=0.6$. The horizontal black dashed line corresponds approximately to the maximum simulable value when reducing the background speed of light using $DC$ magnetic fluxes; note that both $c_{p}$ and $v$ depend on $c_{v}$, where $c_{v}^{2}\sim c_{0}^{2}\cos \frac{\pi \phi _{ext}^{DC}}{\phi _{0}}$. The absolute value of $c_{p}$ is represented only when $c_{p}$ is negative.}
\end{figure}
\section{\label{level4}Classical simulation of CTCs}
Given the impossibility of proposing an effective simulation of a CTC in the quantum system considered, we try to follow the same steps in a classical setup. For this we consider the experiment realized by Clerici et al. \cite{clerici}. This system consists of a light source that emits a wave front impinging on a surface at an angle $\theta$, in such a way that the point of intersection of the wave front with the surface moves at a speed $v=\frac{c_{v}}{\sin{\theta}}$. This intersection point is visible due to the scattering off the surface itself, so we will call it the scattering source. Clearly, the scattering source can reach superluminal velocities $v>c_{v}$. However, this does not pose a problem, because it is not a physical source as such, but a mere kinematical phenomenon \cite{french,gauthier}. Considering the concrete experimental design of Figure~\ref{fig4}A, the velocity of the scattering source observed by the camera along the $x$ axis is given by
\begin{equation}
\ v_{x}^{0}=\frac{c_{v}}{1-\cot \theta }
\label{eq56}
\end{equation}
where $ 0< \theta < \frac{\pi}{4} $ for negative velocities and $ \frac{\pi}{4} < \theta < \frac{\pi}{2} $ for positive velocities (see Figure~\ref{fig4}B), with a singularity at $ \theta = \frac{\pi}{4} $. This behavior resembles that of the effective speed of a photon moving against the direction of motion of a moving wire (\ref{eq41}). Thus, we make the equivalence $ c_{z}^{\beta}=v_{x}^{0} $, and then the incident angle will be given by:
\begin{equation}
\ \theta =\rm{arccot} \left [ 1- \left ( \frac{1-\sqrt{F}\beta }{\sqrt{F}-\beta } \right )\right ],
\label{eq57}
\end{equation}
which is defined for $\sim 27^{\circ}<\theta<90^{\circ}$, when considering the values of $F$ and $\beta$. Therefore, it includes the singularity $\theta=\frac{\pi}{4}$, which corresponds to the negative-time boundary $\beta=\frac{1}{\sqrt{F}}$.
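The equivalence $v_{x}^{0}=c_{z}^{\beta}$ behind Eq. (\ref{eq57}) can be verified numerically (a sketch with illustrative function names; $\mathrm{arccot}$ is taken with range $(0,\pi)$ via \texttt{atan2}):

```python
import math

def v_x0(theta, c_v=1.0):
    """Scattering-source speed seen by the camera, Eq. (56)."""
    return c_v / (1.0 - 1.0 / math.tan(theta))

def incidence_angle(F, beta):
    """Incidence angle reproducing the boosted-wire light speed, Eq. (57)."""
    cot = 1.0 - (1.0 - math.sqrt(F) * beta) / (math.sqrt(F) - beta)
    return math.atan2(1.0, cot)   # arccot with range (0, pi)

def c_z_beta(F, beta, c_v=1.0):
    """Effective light speed against the wire motion, Eq. (41)."""
    return c_v * (math.sqrt(F) - beta) / (1.0 - math.sqrt(F) * beta)

F, beta = 4.0, 0.6                       # sqrt(F)*beta > 1: CTC regime
theta = incidence_angle(F, beta)
assert abs(v_x0(theta) - c_z_beta(F, beta)) < 1e-9
assert theta < math.pi / 4               # negative-velocity branch
```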
\begin{figure}[!]
\includegraphics[width=0.70\columnwidth]{fig4.eps}
\caption{\label{fig4} Classical simulation of a CTC. A) Outline of the experimental design \cite{clerici}. B) Speed of the scattering source along the $x$ axis observed by the camera for different angles. C) Sequential diagram (from top to bottom) of the simulation of a CTC. (Left) Arrangement of the scattering surfaces and the evolution of the image along the surfaces. In the first scheme, the angle of incidence of the wavefront with respect to each surface is made explicit; the wedges of dashed lines represent an angle of $45^{\circ}$. (Right) Diagram of the path of a CTC made by a fictitious rocket, obtained from captures of the video provided by \cite{mallary} (https://www.youtube.com/watch?v=ub6PGaygVwA). Note that the first capture does not represent any simulation: the arrival of the rocket cannot be simulated with this setup.}
\end{figure}
The ability to obtain negative and superluminal light velocities enables the simulation of a CTC in this setup. For this, two surfaces are joined to each other with an inclination with respect to the incident wave front, as can be seen in Figure~\ref{fig4}C. Flat surfaces can be used, since the speeds of light are always constant. The first surface is arranged at an angle $\theta_{1}>45^{\circ}$ and the second one at an angle $\theta_{2}<45^{\circ}$, in such a way that they have positive and negative speeds, respectively. Note that in both cases we have superluminal speeds $\left|v\right|>c_{v}$. The first surface is matched with the initial path in the wire at rest, while the second surface represents the subsequent path in the boosted wire. The lengths of the surfaces along the $x$ axis can be normalized to unit distance, so that the first surface covers $x\in\left[0,\frac{1}{2}\right]$ and the second surface $x\in\left[\frac{1}{2},1\right]$. Therefore, the CTC is given by:
\begin{equation}
\ v_{x}^{0}=\frac{c_{v}}{1-\cot \theta }, \cases {\theta_{1} =\rm{arccot}\left [ 1- \frac{1}{\sqrt{F_{1}} } \right ], & $x\in \left [ 0,\frac{1}{2} \right ]$\\ \theta_{2} =\rm{arccot}\left [ 1- \left ( \frac{1-\sqrt{F_{2}}\beta }{\sqrt{F_{2}}-\beta } \right ) \right ], & $x\in \left [\frac{1}{2},1 \right ]$.}\label{eq60}
\end{equation}
where $\theta_{2}$ has been obtained from (\ref{eq57}) and $\theta_{1}$ simply by making the equivalence $v_{x}^{0}=c_{z}=c_{v}\sqrt{F_{1}}$. Note that in (\ref{eq60}) it has been taken into account that the paths in the wire at rest and in the moving wire can be carried out at different distances from the central singularities. It suffices to impose the condition (\ref{eq18}) to set the values of $\theta_{1}$ and $\theta_{2}$ and simulate a CTC. The two velocities in (\ref{eq60}) must be of the same magnitude and opposite sign for the path to be traversed correctly.
In Figure~\ref{fig4}C an intuitive scheme of the simulation is represented and compared with the curve proposed by Mallary et al. \cite{mallary} for a fictitious rocket. When the wave front hits the surfaces, two images (scattering sources) are observed at each end, moving towards the junction of both surfaces with the speeds determined by (\ref{eq60}). The image appearing on the left corresponds to the path of the rocket in the wire at rest, and the image appearing on the right to the rocket moving in negative time (the rocket appears shaded). Both rockets meet at the point where the rocket passes from the wire at rest to the moving wire (in this simulation, the point at which the images are annihilated, in the language proposed by Clerici et al. \cite{clerici}). In this case, only one leg of a hypothetical infinite loop would have been simulated; the initial arrival of the rocket is not considered.
\section{\label{level5}Conclusions}
We have analyzed possible experimental simulations of CTCs in the space-time recently proposed in \cite{mallary}. Note that we are not considering a real CTC but a simulation of it, based on an apparent superluminal motion in a flat space-time, which is enough to create a CTC, as Hawking noted \cite{hawking}. We have proposed a classical simulation, based on a recent experiment \cite{clerici} with superluminal optical scattering sources. However, when attempting to propose an analogue quantum simulation by means of a SQUID array, fundamental restrictions appear, preventing us from simulating negative-time trajectories and thus CTCs. This suggests that these restrictions are of quantum origin and therefore that they might represent in some way an analogue of the chronology protection mechanism proposed by Hawking \cite{hawking}. It is worth noting that this analogue of the chronology protection mechanism appears as a technical limitation of the particular analogue setup considered and not as a general feature, as expected in a simulation.
Paraphrasing Hawking, we might say that it seems that there is a Chronology Protection Agency which prevents the appearance of closed timelike curves and so makes the universe safe for historians even in simulations in analogue systems.
\section*{Acknowledgements}
CS has received financial support through the Postdoctoral Junior Leader Fellowship Programme from “la Caixa” Banking Foundation (code LCF/BQ/LR18/11640005) and from Fundaci\'on General CSIC (ComFuturo Programme).
\section*{References}
\section{Introduction}
Conventional spoken dialogue systems (SDS) require a substantial amount of hand-crafted rules to achieve good interaction with users.
The large amount of required engineering limits the scalability of these systems to settings with new or multiple domains. Recently, statistical approaches have been studied that allow natural, efficient and more diverse interaction with users without depending on pre-defined rules~\citep{young2013pomdp,gavsic2014incremental,henderson2014robust}.
Natural language generation (NLG) is an essential component of an SDS. Given a semantic representation (SR) consisting of a dialogue act and a set of slot-value pairs, the generator should produce natural language containing the desired information.
Traditionally NLG was based on templates \citep{cheyer2014method}, which produce grammatically-correct sentences that contain all desired information. However, the lack of variation of these sentences made these systems seem tedious and monotonic.
\textit{Trainable generators} \citep{langkilde1998generation,stent2004trainable} can generate several sentences for the same SR, but the dependence on pre-defined operations limits their potential. Corpus-based approaches \citep{oh2000stochastic,mairesse2011controlling} learn to generate natural language directly from data without pre-defined rules. However, they usually require alignment between the sentence and the SR.
Recently, Wen et al.~\shortcite{wensclstm15} proposed an RNN-based approach, which outperformed previous methods on several metrics. However, the generated sentences often did not include all desired attributes.
The variational autoencoder~\citep{journals/corr/KingmaW13} enabled for the first time the generation of complicated, high-dimensional data such as images.
The conditional variational autoencoder (CVAE)~\citep{sohn2015learning}, firstly proposed for image generation, has a similar structure to the VAE with an additional dependency on a condition.
Recently, the CVAE has been applied to dialogue systems \citep{serban2017hierarchical,Shen2017ACV,ZhaoZE17} using the previous dialogue turns as the condition. However, their output was not required to contain specific information.
In this paper, we improve RNN-based generators by adapting the CVAE to the difficult task of cross-domain NLG.
Due to the additional latent information encoded by the CVAE, our model outperformed the SCLSTM at conveying all information. Furthermore, our model reaches better results when the training data is limited.
\section{Model Description}
\subsection{Variational Autoencoder}
The VAE is a generative latent variable model. It uses a neural network (NN) to generate $\hat{x}$ from a latent variable $z$, which is sampled from the prior $p_{\theta}(z)$. The VAE is trained such that $\hat{x}$ is a sample of the distribution $p_{D}(x)$ from which the training data was collected. Generative latent variable models have the form $p_{\theta}(x)=\int_{z}p_{\theta}(x|z)p_{\theta}(z) dz$. In a VAE, an NN called the decoder models $p_{\theta}(x|z)$ and would ideally be trained to maximize the likelihood $p_{\theta}(x)$ given by the above integral.
Since this is intractable, the VAE uses another NN, called the encoder, to model $q_{\phi}(z|x)$ which should approximate the posterior $p_{\theta}(z|x)$. The NNs in the VAE are trained to maximise the variational lower bound (VLB) to $\log p_{\theta}(x)$, which is given by:
\begin{equation}
\begin{aligned}
L_{VAE}(\theta, \phi; x) = -KL(q_{\phi}(z|x)||p_{\theta}(z)) \\
+ E_{q_{\phi}(z|x)}[\log p_{\theta}(x|z)]
\label{eq:vae}
\end{aligned}
\end{equation}
The first term is the KL-divergence between the approximated posterior and the prior, which encourages similarity between the two distributions. The second term is the likelihood of the data given samples from the approximated posterior. The CVAE has a similar structure, but the prior is modelled by another NN, called the prior network. The prior network is conditioned on $c$. The new objective function can now be written as:
\begin{multline}
L_{CVAE}(\theta, \phi; x, c) = -KL(q_{\phi}(z|x, c)||p_{\theta}(z|c)) \\
+ E_{q_{\phi}(z|x,c)}[\log p_{\theta}(x|z,c)]
\end{multline}
When generating data, the encoder is not used and $z$ is sampled from $p_{\theta}(z|c)$.
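For diagonal Gaussians, both terms needed at training time have simple forms: the KL term is available in closed form and $z$ is drawn via the reparameterisation trick. A minimal pure-Python sketch (helper names are ours; a real implementation would use an autodiff framework such as PyTorch):

```python
import math, random

def kl_diag_gauss(mu_q, sig_q, mu_p, sig_p):
    """KL( N(mu_q, diag sig_q^2) || N(mu_p, diag sig_p^2) ): the first
    term of the (C)VAE objective, with posterior q and prior p."""
    return sum(
        math.log(sp / sq) + (sq**2 + (mq - mp)**2) / (2.0 * sp**2) - 0.5
        for mq, sq, mp, sp in zip(mu_q, sig_q, mu_p, sig_p)
    )

def reparameterize(mu, sig, rng=random):
    """Draw z = mu + sig * eps with eps ~ N(0, I), which keeps the
    sampling step differentiable w.r.t. (mu, sig) under autodiff."""
    return [m + s * rng.gauss(0.0, 1.0) for m, s in zip(mu, sig)]

# The KL term vanishes iff posterior and prior coincide.
assert kl_diag_gauss([0.0, 1.0], [1.0, 0.5], [0.0, 1.0], [1.0, 0.5]) == 0.0
assert kl_diag_gauss([0.5], [1.0], [0.0], [1.0]) > 0.0
```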
\subsection{Semantically Conditioned VAE}
\begin{figure}
\includegraphics[width=\linewidth]{scvae.png}
\caption{Semantically Conditioned Variational Autoencoder with a semantic representation (SR) as the condition. $x$ is the system response with words $w_{1:N}$. $x_{D}$, $x_{A}$ and $x_{S}$ are labels for the domain, the dialogue act (DA) and the slots of $x$.}
\label{fig:scvae}
\vspace{-0.5em}
\end{figure}
The structure of our model is depicted in Fig.~\ref{fig:scvae}, which, conditioned on an SR, generates the system's word-level response $x$. An SR consists of three components: the domain, a dialogue act and a set of slot-value pairs. \textit{Slots} are attributes required to appear in $x$ (e.g. a hotel's \textit{area}). A \textit{slot} can have a \textit{value}; the two are then called a \textit{slot-value} pair (e.g. \textit{area}=\textit{north}). $x$ is \textit{delexicalised}, which means that slot values are replaced by corresponding slot tokens. The condition $c$ of our model is the SR represented as two 1-hot vectors for the domain and the dialogue act, as well as a binary vector for the slots.
During training, $x$ is first passed through a single-layer bi-directional LSTM, the output of which is concatenated with $c$ and passed to the recognition network. The recognition network parametrises a Gaussian distribution $\mathcal{N}(\mu_{post}, \sigma_{post})$, which is the posterior. The prior network only has $c$ as its input and parametrises a Gaussian distribution $\mathcal{N}(\mu_{prior}, \sigma_{prior})$, which is the prior. Both networks are fully-connected (FC) NNs with one and two layers respectively. During training, $z$ is sampled from the posterior. When the model is used for generation, $z$ is sampled from the prior. The decoder is an SCLSTM~\citep{wensclstm15} using $z$ as its initial hidden state and initial cell vector. The first input to the SCLSTM is a start-of-sentence (sos) token and the model generates words until it outputs an end-of-sentence (eos) token.
\subsection{Optimization}
When the decoder in the CVAE is powerful on its own, the model tends to ignore the latent variable $z$, and the encoder fails to encode enough information into $z$. Regularization methods can therefore be introduced to push the encoder towards learning a good representation of the latent variable $z$.
Since the KL-component of the VLB does not contribute towards learning a meaningful $z$, gradually increasing its weight from $0$ to $1$ during training helps to encode a better representation in $z$. This method is termed \textit{KL-annealing} \citep{476cadd89dec4e0ab01d9dc59e1222c7}.
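A linear ramp of this kind is a one-liner; the 5000-step ramp below matches the setting used in our experiments, though the function name is illustrative:

```python
def kl_weight(step, ramp_steps=5000):
    """Linear KL-annealing schedule: the KL weight grows from 0 to 1
    over the first ramp_steps mini-batch updates, then stays at 1."""
    return min(1.0, step / ramp_steps)

assert kl_weight(0) == 0.0
assert kl_weight(2500) == 0.5
assert kl_weight(10000) == 1.0
```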
In addition, inspired by \citep{ZhaoZE17}, we introduce a regularization method using another NN which is trained to use $z$ to recover the condition $c$. The NN is split into three separate FC NNs of one layer each, which independently recover the \textit{domain}, \textit{dialogue-act} and \textit{slots} components of $c$. The objective of our model can be written as:
\begin{multline}
L_{SCVAE}(\theta, \phi;x, c) = L_{CVAE}(\theta, \phi;x, c) \\
+ E_{q_{\phi}(z|x, c)}[\log p(x_{D}|z)+\log p(x_{A}|z)+ \\
\log \prod_{i=1}^{|S|} p(x_{S_{i}}|z)]
\end{multline}
where $x_{D}$ is the domain label, $x_{A}$ is the dialogue act label and $x_{S_{i}}$ are the slot labels with $|S|$ slots in the SR.
In the proposed model, the CVAE learns to encode information about both the sentence and the SR into $z$.
Using $z$ as its initial state, the decoder is better at generating sentences with desired attributes. In section \ref{sec:z} a visualization of the latent space demonstrates that a semantically meaningful representation for $z$ was learned.
\begin{table*}[tp]
\centering
\caption{The statistics of the cross-domain dataset}
\label{tab:data}
\resizebox{\textwidth}{!}{%
\begin{tabular}{ccccc}
\hline
& Restaurant & Hotel & Television & Laptop \\ \hline\hline
\# of examples & 3114/1039/1039 & 3223/1075/1075 & 4221/1407/1407 & 7944/2649/2649 \\ \hline
dialogue acts & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}reqmore, goodbye, select, confirm, request, \\ inform, inform\_only, inform\_count, inform\_no\_match\end{tabular}} & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}compare, recommend, inform\_all, \\ suggest, inform\_no\_info, 9 acts as left\end{tabular}} \\ \hline
shared slots & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}name, type, area, near, price,\\ phone, address, postcode, pricerange\end{tabular}} & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}name, type, price,\\ family, pricerange,\end{tabular}} \\ \hline
specific slots & \begin{tabular}[c]{@{}c@{}}food,\\ goodformeal,\\ kids-allowed\end{tabular} & \begin{tabular}[c]{@{}c@{}}hasinternet,\\ acceptscards,\\ dogs-allowed\end{tabular} & \begin{tabular}[c]{@{}c@{}}screensizerange, ecorating, \\ hdmiport, hasusbport, audio,\\ accessories, color, screensize,\\ resolution, powerconsumption\end{tabular} & \begin{tabular}[c]{@{}c@{}}isforbusinesscomputing.\\ warranty, battery, design,\\ batteryrating, weightrange,\\ utility, platform, driverange,\\ dimension, memory, processor\end{tabular} \\ \hline
\end{tabular}%
}
\end{table*}
\section{Dataset and Setup}
The proposed model is used for an SDS that provides information about restaurants, hotels, televisions and laptops. It is trained on a dataset \cite{wenmultinlg16}, which consists of sentences with corresponding semantic representations.
Table~\ref{tab:data} shows statistics about the corpus which was split into a training, validation and testing set according to a 3:1:1 split.
The dataset contains 14 different system dialogue acts.
The television and laptop domains are much more complex than other domains. There are around 7k and 13k different SRs possible for the TV and the laptop domain respectively. For the restaurant and hotel domains only 248 and 164 unique SRs are possible.
This imbalance makes the NLG task more difficult.
The generators were implemented using the PyTorch Library~\citep{paszke2017automatic}.
The size of decoder SCLSTM and thus of the latent variable was set to 128.
KL-annealing was used, with the weight of the KL-loss reaching $1$ after 5k mini-batch updates.
The slot error rate (ERR), used in \citep{oh2000stochastic,thwsjy15}, is the metric that measures the model's ability to convey the desired information.
ERR is defined as: $(p+q)/N$, where $N$ is the number of slots in the SR, $p$ and $q$ are the number of missing and redundant slots in the generated sentence.
The BLEU-4 metric and perplexity (PPL) are also reported. The baseline SCLSTM is optimized, which has shown to outperform template-based methods and trainable generators \cite{wensclstm15}.
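A simplified token-level sketch of the ERR computation (assuming delexicalised slot tokens marked with a hypothetical \texttt{SLOT\_} prefix; repeated required slots are not penalised in this toy version):

```python
def slot_error_rate(required_slots, generated):
    """ERR = (p + q) / N: p missing and q redundant slot tokens in the
    delexicalised output, with N slots in the semantic representation."""
    produced = [tok for tok in generated.split() if tok.startswith("SLOT_")]
    missing = sum(1 for s in required_slots if s not in produced)
    redundant = sum(1 for tok in produced if tok not in required_slots)
    return (missing + redundant) / len(required_slots)

sr = ["SLOT_NAME", "SLOT_AREA", "SLOT_PRICERANGE"]
out = "SLOT_NAME is a nice place in the SLOT_AREA of town"
assert abs(slot_error_rate(sr, out) - 1 / 3) < 1e-12  # one missing slot
```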
NLG often uses the over-generation and reranking paradigm \cite{oh2000stochastic}. The SCVAE can generate multiple sentences by sampling multiple $z$, while the SCLSTM has to sample different words from the output distribution.
In our experiments ten sentences are generated per SR. Table~\ref{tab:example} in the appendix shows one SR in each domain with five illustrative sentences generated by our model.
\section{Experimental Results}
\subsection{Visualization of Latent Variable $z$} \label{sec:z}
\begin{figure}[tp]
\centering
\includegraphics[width=\linewidth]{pca.png}
\caption{2D-projection of $z$ for each data point in the test set, with two different colouring-schemes.}
\label{fig:pca}
\end{figure}
2D-projections of $z$ for each data point in the test set are shown in Fig.~\ref{fig:pca}, by using PCA for dimensionality reduction.
In Fig.~\ref{fig:pca}a, data points of the restaurant, hotel, TV and laptop domain are marked as blue, green, red and yellow respectively.
As can be seen, data points from the laptop domain are contained within four distinct clusters. In addition,
there is a large overlap of the TV and laptop domains, which is not surprising as they share all dialogue acts (DAs).
Similarly, there is overlap of the restaurant and hotel domains.
In Fig.~\ref{fig:pca}b, the eight most frequent DAs are color-coded. \texttt{recommend}, depicted as green, has a similar distribution to the laptop domain in Fig.~\ref{fig:pca}a, since \texttt{recommend} happens mostly in the laptop domain.
This suggests that our model learns to map similar SRs into close regions within the latent space. Therefore, $z$ contains meaningful information in regards to the domain, DAs and slots.
\subsection{Empirical Comparison}
\begin{table}[tp]
\centering
\caption{Comparison between SCVAE and SCLSTM. Both are trained with full dataset and tested on individual domains}
\label{tab:indiv}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{l|l|ccccc}
\hline
Metrics & Method & Restaurant & Hotel & TV & Laptop & Overall \\ \hline\hline
\multirow{2}{*}{ERR(\%)} & SCLSTM & 2.978 & 1.666 & 4.076 & 2.599 & 2.964 \\
& SCVAE & \textbf{2.823} & \textbf{1.528} & \textbf{2.819} & \textbf{1.841} & \textbf{2.148} \\ \hline
\multirow{2}{*}{BLEU} & SCLSTM & 0.529 & 0.642 & 0.475 & 0.439 & 0.476 \\
& SCVAE & \textbf{0.540} & \textbf{0.652} & \textbf{0.478} & \textbf{0.442} & \textbf{0.478} \\ \hline
\multirow{2}{*}{PPL} & SCLSTM & 2.654 & 3.229 & 3.365 & 3.941 & 3.556 \\
& SCVAE & \textbf{2.649} & \textbf{3.159} & \textbf{3.337} & \textbf{3.919} & \textbf{3.528} \\ \hline
\end{tabular}%
}
\end{table}
\subsubsection{Cross-domain Training}
Table~\ref{tab:indiv} shows the comparison between SCVAE and SCLSTM. Both are trained on the full cross-domain dataset, and tested on the four domains individually. The SCVAE outperforms the SCLSTM on all metrics. For the highly complex TV and laptop domains, the SCVAE leads to dramatic improvements in ERR. This shows that the additional sentence level conditioning through $z$ helps to convey all desired attributes.
\subsubsection{Limited Training Data}
Fig.~\ref{fig:percent} shows BLEU and ERR results when the SCVAE and SCLSTM are trained on varying amounts of data. The SCVAE has a lower ERR than the SCLSTM across all amounts of training data. For very small amounts of data, the SCVAE outperforms the SCLSTM even more clearly.
In addition, our model consistently achieves better results on the BLEU metric.
\begin{figure}[bp]
\centering
\includegraphics[width=.875\linewidth]{percent.png}
\caption{Comparison between SCVAE and SCLSTM with limited training data.}
\label{fig:percent}
\end{figure}
\subsubsection{K-Shot Learning}
For the K-shot learning experiments, we trained the model using all training examples from three domains and only 300 examples from the target domain\footnote{600 examples were used for laptop as target domain.}. The target domain is the domain we test on. As seen from Table~\ref{tab:shot}, the SCVAE outperforms the SCLSTM in all domains except hotel. This might be because the hotel domain is the simplest and the model does not need to rely on the knowledge from other domains. The SCVAE strongly outperforms the SCLSTM for the complex TV and laptop domains where the number of distinct SRs is large. This suggests that the SCVAE is better at transferring knowledge between domains.
\begin{table}[tp]
\centering
\caption{Comparison between SCVAE and SCLSTM in K-shot learning}
\label{tab:shot}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{l|l|cccc}
\hline
Metrics & Method & Restaurant & Hotel & TV & Laptop \\ \hline\hline
\multirow{2}{*}{ERR(\%)} & SCLSTM & 13.039 & \textbf{5.366} & 24.497 & 27.587 \\
& SCVAE & \textbf{10.329} & 6.182 & \textbf{20.590} & \textbf{20.864} \\ \hline
\multirow{2}{*}{BLEU} & SCLSTM & \textbf{0.462} & 0.578 & 0.382 & 0.379 \\
& SCVAE & 0.458 & \textbf{0.579} & \textbf{0.397} & \textbf{0.393} \\ \hline
\multirow{2}{*}{PPL} & SCLSTM & 3.649 & 4.861 & 5.171 & 6.469 \\
& SCVAE & \textbf{3.575} & \textbf{4.800} & \textbf{5.092} & \textbf{6.364} \\ \hline
\end{tabular}%
}
\end{table}
\section{Conclusion}
In this paper, we propose a semantically conditioned variational autoencoder (SCVAE) for natural language generation. The SCVAE encodes information about both the semantic representation and the sentence into a latent variable $z$. Due to a newly proposed regularization method, the latent variable $z$ contains semantically meaningful information.
Therefore, conditioning on $z$ leads to a strong improvement in generating sentences with all desired attributes. In an extensive comparison, the SCVAE outperforms the SCLSTM on a range of metrics when trained on different amounts of data and for K-shot learning. In particular, when testing the ability to convey all desired information within complex domains, the SCVAE shows significantly better results.
\section*{Acknowledgments}
Bo-Hsiang Tseng is supported by Cambridge Trust and the Ministry of Education, Taiwan.
This research was partly funded by the EPSRC grant EP/M018946/1 Open Domain Statistical Spoken Dialogue Systems. Florian Kreyssig is supported by the Studienstiftung des Deutschen Volkes. Pawe{\l} Budzianowski is supported by the EPSRC and Toshiba Research Europe Ltd.
\section{Introduction}
The killer application of wireless networks has evolved from real-time voice communication to on-demand multimedia content delivery (e.g., video), which requires a nearly $100$-fold increase in the per-user throughput, from tens of kb/s to $1$ Mb/s. Luckily, the pre-availability of such content allows for leveraging storage opportunities at users in a proactive manner, thereby reducing the amount of necessary data transmission during periods of high network utilization.
A widely adopted information theoretic model for a caching system (e.g., see \cite{maddah2014fundamental,ji2016fundamental,shariatpanahi2016multi,pedarsani2016online}) comprises two phases.
The \emph{placement phase} refers to the operation during low network utilization, when users are not requesting any content. During this phase, the cache memories of users are filled by a central server proactively. When each user directly stores a subset of bits, the placement phase is called \emph{uncoded}. The placement phase is called \emph{centralized} if the server knows the identity of the users in the system and coordinates the placement of the content based on this information. On the other hand, the placement without such a coordination among the users is called \emph{decentralized}.
The transmission stage when users request their desired content is termed \emph{delivery phase}. By utilizing the content stored in their caches during the placement phase, users aim to reconstruct their desired content from the signals they receive. The sources of such signals may differ depending on the context and network topology. In this work, we focus on the device-to-device (D2D) caching scenario, in which the signals available during the delivery phase are generated merely by the users themselves, whereas the central server remains inactive.
A coded caching strategy was proposed by Maddah-Ali and Niesen (MAN) \cite{maddah2014fundamental}. Their model consists of users with caches and of a server which is in charge of the distribution of content to users through an error-free shared-link, during both the placement and delivery phases. This seminal work showed that a \emph{global caching gain} is possible by utilizing multicasting linear combinations during the delivery phase, whereas the previous work on caching \cite{baev2008approximation,almeroth1996use,dan1996dynamic,korupolu2001placement,meyerson2001web,dowdy1982comparative} aimed to benefit from the \emph{local caching gain}, omitting the multicasting opportunities.
By observing that some MAN linear combinations are redundant, the authors of~\cite{yu2018exact} proposed an improved scheme, which is information theoretically optimal (i.e., it achieves a lower bound on the minimum possible load of the shared link) under the constraint of uncoded cache placement. It was proved in~\cite{yu2017characterizing} that uncoded cache placement is optimal in general within a factor of $2$, i.e., even when more involved (coded) cache placement schemes are allowed.
The work \cite{maddah2014fundamental} has attracted a lot of attention and led to numerous extensions, e.g., decentralized caching \cite{maddah2015decentralized}, device-to-device (D2D) caching \cite{ji2013fundamental,ji2016fundamental,ibrahim2018device}, caching on file selection networks \cite{lim2017information}, caching with nonuniform demands
\cite{niesen2017coded,ji2017order,zhang2015coded}, multi-server\cite{shariatpanahi2016multi}, online caching \cite{pedarsani2016online} to name some.
The D2D caching problem was originally considered in \cite{ji2013fundamental,ji2016fundamental,ibrahim2018device}, where users are allowed to communicate with each other. By extending the caching scheme in \cite{maddah2014fundamental} to the D2D scenario, a global caching gain can also be achieved. It was proved in \cite{ji2013fundamental} and \cite{ji2016fundamental} that the proposed D2D caching scheme is order optimal within a constant when the memory size is not small.
Particularly, the D2D caching setting with uncoded placement considered in this work is closely related to the distributed computing \cite{li2018fundamental,ji2018fundamental,li2018wireless,li2017scalable,reisizadeh2017coded,yu2017optimally,kiamari2017heterogeneous,bitar2017minimizing} and data-shuffling problems \cite{wan2018fundamental,song2017pliable}. The coded distributed computing setting can be interpreted as a symmetric D2D caching setting with multiple requests, whereas the coded data shuffling problem can be viewed as a D2D caching problem with additional constraints on the placement.
\subsection{Our Contributions}
Our main contributions in this paper are:
\begin{enumerate}
\item
Based on the D2D achievable caching scheme in~\cite{ji2016fundamental}, with $K$ the number of users and $N$ the number of files, for $N\geq K$ and the shared-link caching scheme in~\cite{yu2018exact} for $N<K$,
we propose a novel achievable scheme for the D2D caching problem, which is shown to be optimal within a factor of $2$ under the constraint of uncoded placement, in terms of the average transmitted load for uniformly random demands and the worst-case transmitted load among all possible demands.
\item
For each user, if any bit of its demanded file not already in its cache can be recovered from its cache content and a transmitted packet of a single other user, we say that the delivery phase is \emph{one-shot}.
Under the constraint of uncoded placement and one-shot delivery, we can divide the D2D caching problem into $K$ shared-link models. Under the above constraints, we then use the index coding acyclic converse bound in~\cite[Corollary 1]{onthecapacityindex} to lower bound the total load transmitted in the $K$ shared-link models. By leveraging the connection among the $K$ shared-link models, we propose a novel way to use the index coding acyclic converse bound compared to the method used for the single shared-link model in~\cite{wan2016optimality,wan2016caching,yu2018exact}. With this converse bound, we prove that the proposed achievable scheme is exactly optimal under the constraint of uncoded placement and one-shot delivery, in terms of the average transmitted load and the worst-case transmitted load among all possible demands.
\item
Lastly, inspired by the distributed computing problem with \emph{straggler}s (see e.g. \cite{speedingUpML} for a distributed linear computation scenario), where straggling servers fail to finish their computational tasks on time,
we focus on a novel D2D caching system where, during the delivery phase, each user may be inactive with some probability and the inactivity event of each user is not known by the other users. User inactivity may occur due to several reasons such as broken communication links, users moving out of the network, users going off-line to save power, etc. For this setting, a non-one-shot delivery scheme would be very fragile: since requested bits are decoded from the transmissions of multiple users (e.g., from a set of full-rank linear combinations collected from the signals of different users, as in linear network coding), a missing user may affect the decoding of many packets through a sort of catastrophic error-propagation effect.
Instead, we can directly extend the proposed optimal one-shot delivery phase to this problem by using the MDS precoding proposed in \cite{speedingUpML}, which promotes robustness against random unidentified user inactivity.
\end{enumerate}
The rest of this paper is organized as follows. We provide a precise definition of our model and an overview of previous results on D2D and shared-link caching scenarios in Section \ref{sec:sys}. We formally define the load-memory trade-off problem and give a summary of our results in Section \ref{sec:RMtrade-off}. The proposed caching scheme is presented in Section \ref{sec:achiev}. We demonstrate its optimality under the constraint of one-shot delivery through a matching converse in Section \ref{sec:Converse}. We treat the problem of random user inactivity by proposing an extension of the presented scheme in Section \ref{sec:loadouta}. We corroborate our results with computer simulations also by providing numerical comparisons with the existing bounds in Section \ref{sec:Numerical}.
\section{Problem Setting and Related Results}\label{sec:sys}
In this section, we define our notations and network model and present previous results which are closely related to the problem we consider in the current work.
\subsection{Notation}
\label{sec:model:notation}
$|\cdot|$ is used to represent the cardinality of a set or the length of a file in bits;
we let
$\mathcal{A}\setminus\mathcal{B}:=\left\{ x\in{\cal A}\,|\,x\notin\mathcal{B}\right\}$,
$[a:b:c]:=\{a,a+b,a+2b,...,c\}$,
$[a:c] = [a:1:c]$ and $[n]=[1:n]$;
the bit-wise XOR operation between binary vectors is indicated by $\oplus$;
for two integers $x$ and $y$, if $x<y$ or $x\leq0$, we let $\binom{x}{y}=0$; $\mathbb{1\{\cdot\}}$ denotes the indicator function.
\subsection{D2D Caching Problem Setting}
\label{sub:problem setting}
\begin{figure}
\centering{}
\scalebox{0.85}{\includegraphics{fig1}}
\caption{System model for cache-aided D2D network where users broadcast to all the other users using the bits in their memories stored from the central server during the placement phase. Solid and dotted lines indicate operation during placement and delivery phases, respectively.}
\label{fig:scheme}
\end{figure}
We consider a D$2$D network composed of $K$ users, each of which is able to receive all the other users' transmissions (see Fig. \ref{fig:scheme}). We assume a collision avoidance protocol whereby, when a user transmits, all the others stay quiet and listen (e.g., this can be implemented in a practical wireless network using CSMA, as in
the IEEE 802.11 standard). Users make requests from a fixed file database of $N$ files $\boldsymbol{W} := (W_1,\dots, W_N)$ each with a length of $F$ bits. We assume that the requests are known to all users via some control channel.
Since the amount of bits necessary to notify the requests is much less than the actual requested data delivery, the overhead incurred by the request broadcasting is neglected (as in virtually all papers on coded caching appeared in the literature, e.g., \cite{maddah2014fundamental,ji2016fundamental,shariatpanahi2016multi,pedarsani2016online}). Every user has a memory of $MF$ bits, $M < N$, at its disposal. The system operation can be divided into two phases, namely, into the \emph{placement} and \emph{delivery} phases.
During the placement phase users have access to a central server containing the database $\boldsymbol{W}$. In this work, we only consider the caching problem with uncoded cache placement,
where each user $k$ directly stores $MF$ bits of $N$ files in its memory. For the sake of simplicity, we do not repeat this constraint in the rest of the paper.
Since the placement is uncoded, we can divide each file $q$, $q \in [N]$, into subfiles $W_{q}=\{W_{q,{\cal V}}:{\cal V}\subseteq [K]\}$, where $W_{q,{\cal V}}$ represents the set of bits of file $q$ exclusively cached by users in ${\cal V}$.
We denote the indices of the stored bits at user $k$ by $\mathcal{M}_k$. For convenience, we denote the cache placement of the whole system by $\boldsymbol{\mathcal{M}} := (\mathcal{M}_1,\dots,\mathcal{M}_K)$. We assume that, at the end of this phase, each bit of the database is available in at least one of the users' caches, which implies that $MK \geq N$ must hold.
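To make the uncoded placement concrete, the following sketch (function and variable names are ours) builds the symmetric subfile structure $W_q=\{W_{q,{\cal V}}\}$ of the placement in \cite{maddah2014fundamental}, which is also used by the achievable scheme of Section \ref{sec:achiev}, assuming $t=KM/N$ is an integer:

```python
from itertools import combinations

def man_placement(K, N, M):
    """Symmetric uncoded placement: with t = K*M/N an integer, split each
    file q into C(K, t) equal subfiles W_{q,V}, one per t-subset V of the
    users; user k caches exactly the subfiles with k in V."""
    assert (K * M) % N == 0, "sketch assumes t = K*M/N is an integer"
    t = K * M // N
    subsets = list(combinations(range(1, K + 1), t))
    # cache contents: user k -> list of subfile labels (file q, subset V)
    caches = {k: [(q, V) for q in range(1, N + 1) for V in subsets if k in V]
              for k in range(1, K + 1)}
    return caches, subsets

# K = 4 users, N = 4 files, M = 2  =>  t = 2 and C(4, 2) = 6 subfiles per file
caches, subsets = man_placement(K=4, N=4, M=2)
```

With $F$ divisible by $\binom{K}{t}$, each label $(q,{\cal V})$ stands for a subfile of $F/\binom{K}{t}$ bits, so each user stores $N\binom{K-1}{t-1}F/\binom{K}{t} = MF$ bits, as required by the memory constraint.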
During the delivery phase, each user demands one file. We define the \textit{demand} vector $\boldsymbol{d}:=\left(d_1,\dots,d_K\right)$, with $d_k \in [N]$ denoting user $k$'s requested file index. The set of all possible demands is denoted by $\mathcal{D}$, so that $\mathcal{D}=[N]^K$.
Given the demand information, each user $k$ generates a codeword $X_k$ of length $R_k F$ bits and broadcasts it to other users, where $R_k$ indicates the load of user $k$. For a given subset of users $\mathcal{T} \subseteq [K]$, we let $X_{\mathcal{T}}$ denote the ensemble of codewords broadcasted by these users. From the stored bits in $\mathcal{M}_k$ and the received codewords $X_{[K]\backslash k}$, each user $k$ attempts to recover its desired file $W_{d_k}$.
In this work we concentrate on the special case of \emph{one-shot delivery}, which we formally define in the following.
\begin{definition}[One-shot delivery]
If each user $k \in [K]$ can decode any bit of its requested file not already in its own cache
from its cache and the transmission of a single other user, we say that the delivery phase is {\em one-shot}. Denoting by $W^{k,i}_{d_k}$ the block of bits needed by user $k$ and recovered from the transmission of user $i$, such that
\[ H(W^{k,i}_{d_k} | X_i, {\cal M}_k) = 0 \]
indicating that $W^{k,i}_{d_k}$ is a deterministic function of $X_i$ and ${\cal M}_k$, the one-shot condition implies that
\[ (W_{d_k} \setminus {\cal M}_k) \subseteq \bigcup_{i \in [K]\setminus \{k\}} W^{k,i}_{d_k}. \]
In addition, we also define $W^{k,i}_{d_k,{\cal V}}$ as the block of bits needed by user $k$ and recovered from the transmission of user $i$, which are exclusively cached by users in ${\cal V}$. Hence, we have for each user $k\in[K]$
\[ \bigcup_{{\cal V}\subseteq ([K]\setminus \{k\}): i\in {\cal V}} W^{k,i}_{d_k,{\cal V}}=W^{k,i}_{d_k}, \forall i\in [K]\setminus \{k\}.\]
\end{definition}
\begin{remark}\label{rem:one-shot background}
The \emph{one-shot} terminology is often used in settings related to interference channels. To the best of our knowledge, the only work which explicitly emphasized one-shot delivery in the caching setting before the present work is \cite{naderializadeh2017fundamental}.
\end{remark}
Letting $R = \sum_{k=1}^{K} R_k$, we say that a communication load $R$ is \textit{achievable} for a demand $\boldsymbol{d}$ and placement $\boldsymbol{\mathcal{M}}$, with $\abs{\mathcal{M}_k}=M,\, \forall k \in [K]$, if there exists an ensemble of codewords $X_{[K]}$ of size $RF$ such that each user $k$ can reconstruct its requested file $W_{d_k}$. We let $R^*(\boldsymbol{d},\boldsymbol{\mathcal{M}})$ indicate the minimum achievable load given $\boldsymbol{d}$ and $\boldsymbol{\mathcal{M}}$.
We also define $R^*_{\textup{o}}(\boldsymbol{d},\boldsymbol{\mathcal{M}})$ as the minimum achievable load given $\boldsymbol{d}$ and $\boldsymbol{\mathcal{M}}$ under the constraint of one-shot delivery.
We consider independent and equally likely user demands, i.e., $\boldsymbol{d}$ is uniformly distributed on $\mathcal{D}$. Given a placement $\boldsymbol{\mathcal{M}}$, the average load $R^*_{\textup{ave}}(\boldsymbol{\mathcal{M}})$ is defined as the expected minimum achievable load under this distribution of requests:
\begin{equation*}
R^*_{\textup{ave}}(\boldsymbol{\mathcal{M}})=\mathbb{E}_{\boldsymbol{d}}[ R^*(\boldsymbol{d},\boldsymbol{\mathcal{M}})].
\end{equation*}
We define $R^*_{\textup{ave}}$ as the minimum achievable average load:
\begin{equation*}
R^*_{\textup{ave}}=\min_{\substack{\boldsymbol{\mathcal{M}}}} R^*_{\textup{ave}}(\boldsymbol{\mathcal{M}}).
\end{equation*}
Similarly, we define $R^*_{\textup{ave, o}}$ as the minimum average load under the constraint of one-shot delivery.
Furthermore, for a given placement $\boldsymbol{\mathcal{M}}$, the peak load $R_{\textup{worst}}^*(\boldsymbol{\mathcal{M}})$ is defined as
\begin{equation*}
R_{\textup{worst}}^*(\boldsymbol{\mathcal{M}})=\max_{\boldsymbol{d}} R^*(\boldsymbol{d},\boldsymbol{\mathcal{M}}).
\end{equation*}
In addition, we define $R^*_{\textup{worst}}$ as the minimum achievable peak load:
\begin{equation*}
R_{\textup{worst}}^*=\min_{\boldsymbol{\mathcal{M}}} R_{\textup{worst}}^*(\boldsymbol{\mathcal{M}}).
\end{equation*}
Correspondingly, we define $R^*_{\textup{worst, o}}$ as the minimum peak load under the constraint of one-shot delivery.
Further, for a demand $\boldsymbol{d}$, we let $N_{\textup{e}}(\boldsymbol{d})$ denote the number of distinct indices in $\boldsymbol{d}$. In addition,
we let $\boldsymbol{d}_{\backslash\{k\}}$ and $N_{\textup{e}}(\boldsymbol{d}_{\backslash\{k\}})$ stand for the demand vector of users $[K]\backslash\{k\}$ and the number of distinct files requested by all users but user $k$, respectively.
As in \cite{tian2016symmetry,yu2018exact}, we group the demand vectors in $\mathcal{D}$ according to the frequency of common entries that they have. Towards this end, for a demand $\boldsymbol{d}$, we stack in a vector of length $N$ the number of appearances of each request in descending order, and denote it by $\boldsymbol{s}(\boldsymbol{d})$.
We refer to this vector as the \textit{composition} of $\boldsymbol{d}$. Clearly, $\sum_{n=1}^{N}s_n(\boldsymbol{d})=K$. By $\mathcal{S}$ we denote the set of all possible compositions. We denote the set of demand vectors with the same composition $\boldsymbol{s} \in \mathcal{S}$ by $\mathcal{D}_{\boldsymbol{s}}$. We refer to these subsets as \textit{type}s. Obviously, they are disjoint and $\bigcup\limits_{\boldsymbol{s}\in \mathcal{S}} \mathcal{D}_{\boldsymbol{s}} = \mathcal{D}$. For instance, for $N=3$ and $K=5$, one has $\mathcal{S} = \{(5,0,0), (4,1,0), (3,2,0), (3,1,1), (2,2,1)\}$ and $\mathcal{D}_{\boldsymbol{s}} = \{(1,1,1,1,1), (2,2,2,2,2), (3,3,3,3,3)\}$ when $\boldsymbol{s} = (5,0,0)$.
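The composition and the type grouping can be sketched as follows (helper names are ours); for $N=3$, $K=5$ it reproduces the five types listed above:

```python
from collections import Counter
from itertools import product

def composition(d, N):
    """s(d): numbers of appearances of each requested file in demand d,
    sorted in descending order and zero-padded to length N."""
    counts = sorted(Counter(d).values(), reverse=True)
    return tuple(counts + [0] * (N - len(counts)))

def types(N, K):
    """Partition all demands in [N]^K into types D_s, keyed by composition s."""
    groups = {}
    for d in product(range(1, N + 1), repeat=K):
        groups.setdefault(composition(d, N), []).append(d)
    return groups

groups = types(N=3, K=5)  # five types, partitioning all 3^5 demand vectors
```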
\subsection{Previous Results on the Device-to-device Coded Caching Problem}\label{sub:D2DRelated}
The seminal work on D2D coded caching \cite{ji2016fundamental} showed that for a demand $\boldsymbol{d}$ the load
\begin{align}
R_{\textup{Ji}} = \min \left\{\frac{K-t}{t},N_{\textup{e}}(\boldsymbol{d})\right\}
\end{align}
is achievable for $t=\frac{KM}{N}\in [K]$. Moreover, for non-integer $t$ with $1 < t < K$, the lower convex envelope of these points is achievable.
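As a quick numerical illustration (the function name is ours), the achievable load above can be evaluated for integer $t$:

```python
def ji_load(K, N, M, Ne):
    """Achievable D2D load of Ji et al.: min{(K - t)/t, Ne(d)} for
    integer t = K*M/N in [K]."""
    assert (K * M) % N == 0, "sketch assumes t = K*M/N is an integer"
    t = K * M // N
    assert 1 <= t <= K
    return min((K - t) / t, Ne)

# K = N = 4 and a demand with Ne = 4 distinct files: the load drops from
# 3 at M = 1 (t = 1) down to 0 at M = 4 (t = 4, everything cached)
loads = [ji_load(4, 4, M, Ne=4) for M in (1, 2, 3, 4)]
```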
By cut-set arguments, the authors also showed that the minimum peak load is lower bounded as
\begin{align}
R_{\textup{worst}}^* \geq \max\left\{\max_{\ell \in \{1, 2, \cdots, \min\{K, N\}\}} \left(\ell - \frac{\ell}{\lfloor\frac{N}{\ell}\rfloor}M\right),
\frac{K-t}{K-1} \times \mathbb{1}\{K>1, N > 1\}\right\}.
\end{align}
Later in \cite{sengupta2015beyond}, the lower bound was tightened with the help of Han's Inequality (cf. \cite[Theorem 17.6.1]{cover2006elements}) to:
\begin{align}
R_{\textup{worst}}^* \geq & \max_{s \in [K], \ell \in [\lceil N/s \rceil]}\left\{\frac{N-sM-\left(\frac{\mu}{s+\mu}\right)\left(N-\ell s\right)^+}{\ell \left(\frac{K-s}{K}\right)}\right\}, \label{eq:LBSengupta}
\end{align}
with $\mu = \left( \min \left( \lceil N/\ell \rceil, K \right) - s \right)$, $\forall s,\ell$.
These lower bounds are more general than our lower bound presented in Section \ref{sec:Converse}, in the sense that they are neither restricted to uncoded placement nor to one-shot delivery.
\subsection{Previous Results on the Shared-link Coded Caching Problem}\label{sub:sharedLinkRelated}
In this subsection, we shortly sketch the shared-link model \cite{maddah2014fundamental}, and state the capacity results for the case of uncoded cache placement \cite{yu2018exact,wan2016optimality}, which are essential for appreciating our results for the D2D model.
In the shared-link model (also referred to as the \emph{bottleneck} model), a server with $N$ files is connected to $K$ users through an error-free channel. Each file is composed of $F$ bits and each user is provided with a local cache of size $MF$ bits.
For uncoded placement, the minimum average and worst-case loads are given as follows \cite{yu2018exact}:
\begin{theorem}\label{teoSL}
For a server based shared-link coded caching scenario with a database of $N$ files and $K$ users each with a cache of size $M$, the following average load $R^{\textup{sl*}}_{\textup{ave}}$ under the constraint of uncoded cache placement, is optimal
\begin{equation}
R^{\textup{sl*}}_{\textup{ave}}=\mathbb{E}_{\boldsymbol{d}}\left[ \frac{\binom{K}{t+1}-\binom{K-N_{\textup{e}}(\boldsymbol{d})}{t+1}}{\binom{K}{t}}\right],\label{eq:averageworstcaseSL}
\end{equation}
with $t=\frac{KM}{N}\in [K]$, where $\boldsymbol{d}$ is uniformly distributed over $\mathcal{D}=[N]^K$. When $t\notin [K]$, $R^{\textup{sl*}}_{\textup{ave}}$ corresponds to the lower convex envelope of its values at $t\in [K]$.
\end{theorem}
\begin{corollary}
\label{corrSL}
For a server based shared-link coded-caching scenario with a database of $N$ files and $K$ users each with a cache of size $M$, the following peak load $R^{\textup{sl*}}_{\textup{worst}}$ under the constraint of uncoded cache placement, is optimal
\begin{align}
R^{\textup{sl*}}_{\textup{worst}}= \frac{\binom{K}{t+1}-\binom{K-\min\{K,N\}}{t+1}}{\binom{K}{t}},
\label{eq:worstCaseloadSL}
\end{align}
with $t=\frac{KM}{N}\in [K]$. When $t\notin [K]$, $R^{\textup{sl*}}_{\textup{worst}}$ corresponds to the lower convex envelope of its values at $t\in [K]$.
\end{corollary}
Notice that for the case of $N_{\textup{e}}(\boldsymbol{d}) = K$, i.e., when every user demands a distinct file, the negative terms in the above expressions disappear. The achievability for this case was in fact already presented in the seminal paper by Maddah-Ali and Niesen \cite{maddah2014fundamental} and its optimality was proven in \cite{wan2016optimality}.
The above mentioned loads are achieved by applying the caching scheme in \cite{yu2018exact} for each demand $\boldsymbol{d}$. The achieved load by this scheme for a given demand $\boldsymbol{d}$ is given as
\begin{align}\label{eq:singleSL}
R^{\textup{sl*}}(\boldsymbol{d},\boldsymbol{\boldsymbol{\mathcal{M}}_\textup{MAN}})=\frac{\binom{K}{t+1}-\binom{K-N_{\textup{e}}(\boldsymbol{d})}{t+1}}{\binom{K}{t}},
\end{align}
where $\boldsymbol{\mathcal{M}}_\textup{MAN}$ refers to the symmetric placement which was originally presented in \cite{maddah2014fundamental}.
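The per-demand load in \eqref{eq:singleSL} is straightforward to evaluate numerically. The sketch below (helper names are ours) uses the convention of Section \ref{sec:model:notation} that $\binom{x}{y}=0$ when $x<y$:

```python
from math import comb

def binom(x, y):
    # convention: C(x, y) = 0 when x < y (and, for safety, when y < 0)
    return comb(x, y) if 0 <= y <= x else 0

def shared_link_load(K, N, M, Ne):
    """Optimal shared-link load under uncoded placement (Yu et al.) for a
    demand with Ne distinct files and integer t = K*M/N."""
    t = K * M // N
    return (binom(K, t + 1) - binom(K - Ne, t + 1)) / binom(K, t)
```

For $N_{\textup{e}}(\boldsymbol{d})=K$ the negative term vanishes and the expression reduces to the MAN load $\frac{K-t}{t+1}$; e.g., $K=N=4$, $M=1$, $N_{\textup{e}}=4$ gives $6/4=1.5$.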
\subsection{Graphical Converse Bound for the Shared-link Coded Caching Problem}
\label{sub:shared-link converse}
As shown in~\cite{wan2016optimality,wan2016caching}, the acyclic index coding converse bound proposed in~\cite[Corollary 1]{onthecapacityindex} can be used to lower bound the broadcast load for the shared-link caching problem with uncoded cache placement. In the delivery phase, with the knowledge of the uncoded cache placement and demand vector $\mathbf{d}$, we can generate a directed graph.
For each sub-file demanded by each user, we can generate a node in the graph. There is a directed edge from node $i$ to node $j$, if and only if the user who demands the sub-file represented by node $j$ caches the sub-file represented by node $i$. If the subgraph over a set of nodes ${\cal J}$ does not contain a directed cycle, assuming the set of sub-files corresponding to this set of nodes is ${\cal S}_{{\cal J}}$ and the length of each sub-file $i$ is $L(i)$,
the broadcast load (denoted by $R^{\textup{sl}*}(\mathbf{d})$) is lower bounded by,
\begin{align}
R^{\textup{sl}*}(\mathbf{d}) \geq \sum_{i\in {\cal S}_{{\cal J}}} L(i).\label{eq:general acyclic bound}
\end{align}
The authors in~\cite{wan2016optimality,wan2016caching} proposed a way to choose the maximal acyclic sets in the graph. We choose $N_{\textup{e}}(\mathbf{d})$ users with different demands. The chosen user set
is denoted by ${\cal C}=\{c_{1},c_{2},...,c_{N_{\textup{e}}(\mathbf{d})}\}$ where $c_{i}\in[K]$.
Each time, we consider a permutation of ${\cal C}$, denoted by $\mathbf{u}=(u_{1},u_{2},...,u_{N_{\textup{e}}(\mathbf{d})})$. It was proved in~\cite[Lemma 1]{wan2016optimality} that the following set of sub-files is acyclic, $\big(W_{d_{u_{i}},{\cal V}_{i}} :
{\cal V}_{i}\subseteq[K]\backslash \{u_{1},\ldots,u_{i}\},
\ i\in[N_{\textup{e}}(\mathbf{d})]
\big)$.
By using~\eqref{eq:general acyclic bound}, we have
\begin{align}
R^{\textup{sl}*}(\mathbf{d})
&\geq \sum_{i\in[N_{\textup{e}}(\mathbf{d})]} \sum_{{\cal V}_{i}\subseteq[K]\backslash\{u_{1},\ldots,u_{i}\}} \frac{ |W_{d_{u_{i}},{\cal V}_{i}}| }{F}.
\label{eq:original uncycle}
\end{align}
Considering all the possible sets of the $N_{\textup{e}}(\mathbf{d})$ users with different demands and all the permutations, we sum all the inequalities in form of~\eqref{eq:original uncycle} to derive a converse bound of $R^{\textup{sl}*}(\mathbf{d})$ in terms of the lengths of sub-files. The next step is to consider all the possible demands and use the Fourier-Motzkin algorithm \cite{el2011network} to eliminate all the sub-files with the constraints of cache size and file size. Following this approach, we can derive a tight converse bound for the shared-link model.
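The acyclic structure behind \eqref{eq:original uncycle} can also be checked programmatically. The sketch below (all names are ours) assumes the users in $\mathbf{u}$ request distinct files, builds one node per subfile $W_{d_{u_i},{\cal V}_i}$ with ${\cal V}_i\subseteq[K]\backslash\{u_1,\ldots,u_i\}$, adds a directed edge $a\to b$ whenever the demander of $b$ caches $a$, and verifies that no directed cycle exists:

```python
from itertools import combinations

def subsets_of(users):
    users = list(users)
    return [frozenset(c) for r in range(len(users) + 1)
            for c in combinations(users, r)]

def has_cycle(nodes, succ):
    """Recursive 3-color DFS cycle detection on a directed graph."""
    color = {v: 0 for v in nodes}  # 0 = unvisited, 1 = on stack, 2 = done
    def dfs(v):
        color[v] = 1
        for w in succ[v]:
            if color[w] == 1 or (color[w] == 0 and dfs(w)):
                return True
        color[v] = 2
        return False
    return any(color[v] == 0 and dfs(v) for v in nodes)

def lemma1_set_is_acyclic(K, u):
    """Node (u_i, V_i) models subfile W_{d_{u_i}, V_i}; since u_i is its
    demander, edge a -> b exists iff b's demander belongs to a's subset."""
    nodes = [(ui, V)
             for i, ui in enumerate(u)
             for V in subsets_of(set(range(1, K + 1)) - set(u[:i + 1]))]
    succ = {a: [b for b in nodes if b[0] in a[1]] for a in nodes}
    return not has_cycle(nodes, succ)
```

Intuitively, any ${\cal V}_j$ excludes $u_1,\ldots,u_j$, so edges only point from earlier to strictly later positions of the permutation, which rules out directed cycles.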
\section{Main Results}\label{sec:RMtrade-off}
In this section we state the main results of this work. In the following theorem, we characterize the exact memory-average load trade-off under the constraint of one-shot delivery. The achievable scheme is introduced in Section~\ref{sec:achiev} and the converse bound is proved in Section~\ref{sec:Converse}.
\begin{theorem}[Average load]\label{teo}
For a D2D caching scenario with a database of $N$ files and $K$ users each with a cache of size $M$, the following average load under the constraint of uncoded placement and one-shot delivery with uniform demand distribution, is optimal
\begin{equation}
R^*_{\textup{ave, o}}=\mathbb{E}_{\boldsymbol{d}}\left[ \frac{\binom{K-1}{t}-\frac{1}{K}\sum_{k=1}^K\binom{K-1-N_{\textup{e}}(\boldsymbol{d}_{\backslash\{k\}})}{t}}{\binom{K-1}{t-1}}\right]\label{eq:averageworstcase}
\end{equation}
with $t=\frac{KM}{N} \in [K]$, where $\boldsymbol{d}$ is uniformly distributed over $\mathcal{D}=[N]^K$. Additionally, $R^*_{\textup{ave, o}}$ corresponds to the lower convex envelope of its values at $t\in [K]$, when $t\notin [K]$.
\end{theorem}
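For small instances, the optimal average load in \eqref{eq:averageworstcase} can be evaluated by brute-force enumeration over $\mathcal{D}=[N]^K$. The sketch below (names of our choosing) also illustrates the factor-$2$ relation to the shared-link load for fully distinct demands:

```python
from math import comb
from itertools import product

def binom(x, y):
    return comb(x, y) if 0 <= y <= x else 0

def d2d_load(d, K, t):
    """Load inside the expectation of the theorem, for integer t = K*M/N."""
    inner = sum(binom(K - 1 - len(set(d[:k] + d[k + 1:])), t)
                for k in range(K)) / K
    return (binom(K - 1, t) - inner) / binom(K - 1, t - 1)

def avg_load(K, N, M):
    """Average over uniform demands in [N]^K (feasible only for small K, N)."""
    assert (K * M) % N == 0
    t = K * M // N
    demands = list(product(range(1, N + 1), repeat=K))
    return sum(d2d_load(d, K, t) for d in demands) / len(demands)
```

For $K=N=3$ and $M=1$ ($t=1$), the fully distinct demand $(1,2,3)$ gives load $2$, exactly twice the corresponding shared-link load of $1$ from \eqref{eq:singleSL}.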
We can also extend the above result to worst-case transmitted load as shown in the following corollary.
\begin{corollary}[Worst-case load]
\label{corr}
For a D2D caching scenario with a database of $N$ files and $K$ users each with a cache of size $M$, the following peak load $R^*_{\textup{worst, o}}$ under the constraint of uncoded placement and one-shot delivery, is optimal
\begin{align}
R^*_{\textup{worst, o}} &= \max_{\boldsymbol{d}} \frac{\binom{K-1}{t}-\frac{1}{K}\sum_{k=1}^K\binom{K-1-N_{\textup{e}}(\boldsymbol{d}_{\backslash\{k\}})}{t}}{\binom{K-1}{t-1}}\label{eq:worseImplicit}\\
&= \begin{cases} \frac{\binom{K-1}{t}}{\binom{K-1}{t-1}} & K \leq N \\
\frac{\binom{K-1}{t}-\frac{2N-K}{K}\binom{K-N}{t}-\frac{2(K-N)}{K}\binom{K-1-N}{t}}{\binom{K-1}{t-1}} & \textup{otherwise}\\
\frac{\binom{K-1}{t}-\binom{K-1-N}{t}}{\binom{K-1}{t-1}} & K \geq 2N
\end{cases}\label{eq:worstCaseload}
\end{align}
with $t = \frac{KM}{N} \in [K]$. Additionally, $R^*_{\textup{worst, o}}$ corresponds to the lower convex envelope of its values at $t\in [K]$, when $t\notin [K]$.
\end{corollary}
\begin{IEEEproof}
The load stated in \eqref{eq:worseImplicit} can be achieved by the scheme presented in Section~\ref{sec:achiev} and its optimality is proved in Section~\ref{sec:Converse}.
To explicitly characterize the worst-case demand which gives \eqref{eq:worstCaseload}, first recall that the binomial coefficient ${n \choose m}$ is strictly increasing in $n$.
For $K \geq 2N$, if every file is demanded by at least $2$ users, then every user $k$ satisfies $N_{\textup{e}}(\boldsymbol{d}_{\backslash\{k\}}) = N$, which is the maximum possible value of $N_{\textup{e}}(\boldsymbol{d}_{\backslash\{k\}})$ for all $k \in [K]$. Hence, such a demand maximizes the load.
For $K < 2N$, however, it is not possible to have $N_{\textup{e}}(\boldsymbol{d}_{\backslash\{k\}}) = N$ for all users. We call a user $k$ a \emph{unique demander} if it is the only user requesting a file. Depending on whether a user $k$ is the unique demander of a file or not, notice that $N_{\textup{e}}(\boldsymbol{d}_{\backslash\{k\}}) = N_{\textup{e}}(\boldsymbol{d}) - 1$ or $N_{\textup{e}}(\boldsymbol{d}_{\backslash\{k\}}) = N_{\textup{e}}(\boldsymbol{d})$, respectively. By the monotonicity of the binomial coefficient, a worst case demand must have the maximum possible number of different demands, i.e., $N_{\textup{e}}(\boldsymbol{d}) = \min\{N,K\}$. Hence, $N_{\textup{e}}(\boldsymbol{d}) = N$ for $N \leq K < 2N$ and $N_{\textup{e}}(\boldsymbol{d}) = K$ for $K \leq N$ must hold. This already proves the case where $K \leq N$.
For $N < K < 2N$, the worst-case demand vector should satisfy $N_{\textup{e}}(\boldsymbol{d}) = N$ as argued above. This implies that for a user $k$ either $N_{\textup{e}}(\boldsymbol{d}_{\backslash\{k\}}) = N - 1$ or $N_{\textup{e}}(\boldsymbol{d}_{\backslash\{k\}}) = N$ should hold. It follows that minimizing the number of unique demanders would maximize the load. For a worst-case demand vector $\boldsymbol{d}$ with the minimum number of unique demanders which satisfies $N_{\textup{e}}(\boldsymbol{d}) = N$, each file cannot be demanded by more than two users. Thus there are $K-N$ files each of which is demanded by two users while each of the remaining $2N-K$ files is demanded by exactly one user, to satisfy $\sum_{n=1}^{N}s_n(\boldsymbol{d})=K$ (i.e., there are $K$ requests).
Thus, we prove the case where $N < K < 2N$.
\end{IEEEproof}
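The case analysis above can be verified by brute force for small instances (all function names are ours):

```python
from math import comb
from itertools import product

def binom(x, y):
    return comb(x, y) if 0 <= y <= x else 0

def d2d_load(d, K, t):
    """Per-demand load of the corollary's inner expression."""
    inner = sum(binom(K - 1 - len(set(d[:k] + d[k + 1:])), t)
                for k in range(K)) / K
    return (binom(K - 1, t) - inner) / binom(K - 1, t - 1)

def worst_closed_form(K, N, t):
    """The three-case closed-form expression of the corollary."""
    c = binom(K - 1, t - 1)
    if K <= N:
        return binom(K - 1, t) / c
    if K >= 2 * N:
        return (binom(K - 1, t) - binom(K - 1 - N, t)) / c
    return (binom(K - 1, t) - (2 * N - K) / K * binom(K - N, t)
            - 2 * (K - N) / K * binom(K - 1 - N, t)) / c

def worst_brute(K, N, t):
    """Exhaustive maximization over all demands in [N]^K."""
    return max(d2d_load(d, K, t) for d in product(range(1, N + 1), repeat=K))
```

For example, $K=3$, $N=2$, $t=1$ falls in the middle case and both expressions give $5/3$, attained by the demand $(1,1,2)$ with one unique demander.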
\begin{remark}
\label{rem:symmetry in file-splitting}
As we will present in Section \ref{sec:achiev} and further discuss in Remark \ref{rem:Ksharedlink}, our achievable scheme is in fact composed of $K$ shared-link sub-systems, where the $i^{\textup{th}}$ sub-system has the parameter $N_{\textup{e}}(\boldsymbol{d}_{\backslash\{i\}})$ in place of $N_{\textup{e}}(\boldsymbol{d})$. Our scheme is symmetric in the placement phase and in the file-splitting step of the delivery phase. The optimality of the symmetric placement \cite{maddah2014fundamental} was already shown for the shared-link model in \cite{wan2016caching,wan2016optimality,yu2018exact} under the constraint of uncoded placement. This symmetry is intuitively plausible, as the placement phase takes place before the users reveal their demands, so an asymmetric placement cannot lead to a better peak load.
However, the file-splitting step occurs after the users have made their demands known to the other users. Interestingly, it turns out that the proposed caching scheme with a symmetric file-splitting step achieves the lower bound shown in Section \ref{sec:Converse}, even though the $K$ shared-link sub-systems may not share the same value of $N_{\textup{e}}(\boldsymbol{d}_{\backslash\{i\}})$.
\end{remark}
\begin{remark}
\label{rem:difference to shared-link}
There are two main differences between the graphical converse bounds in~\cite{wan2016optimality,wan2016caching} for the shared-link model and the ones in Theorem~\ref{teo} and Corollary~\ref{corr} for the D2D model. On the one hand, D2D caching with one-shot delivery can be divided into $K$ shared-link models. The converse for the D2D model leverages the connection between these $K$ shared-link models, while in~\cite{wan2016optimality,wan2016caching} only a single shared-link model needs to be considered. As explained in Remark~\ref{rem:improvement}, ignoring this connection may loosen the converse bound.
On the other hand, in the shared-link caching problem, a sub-file demanded by multiple users is treated as a single sub-file. In the D2D caching problem, however, the two sub-pieces $W^{k_1,i}_{q,{\cal V}}$ and $W^{k_2,i}_{q,{\cal V}}$, where $d_{k_1}=d_{k_2}=q$ and $k_1, k_2 \notin {\cal V}$, which represent the parts of sub-file $W_{q,{\cal V}}$ decoded by users $k_1$ and $k_2$, respectively, from the transmission of user $i$, are treated as two (dependent) sub-pieces.
\end{remark}
By comparing the load achieved by our proposed scheme with the minimum achievable load of the shared-link model, we obtain the following order optimality result.
\begin{theorem}[Order optimality]\label{thm:order optimality}
For a D2D caching scenario with a database of $N$ files and $K$ users, each with a cache size of $M$, the proposed achievable average and worst-case transmitted loads in~\eqref{eq:averageworstcase} and~\eqref{eq:worstCaseload} are order optimal within a factor of $2$.
\end{theorem}
\begin{IEEEproof}
We only show the order optimality for the average case; the same result can be proven for the worst case by following similar steps.
First, notice that the load of a server holding the whole library and serving the users' demands cannot be higher than the sum-load of the transmissions from the users' caches; that is, $R^*_{\textup{ave}} \geq R^{\textup{sl*}}_{\textup{ave}}$. Furthermore, we have $R^{\textup{sl*}}_{\textup{ave}} \geq \frac{t}{t+1} R^*_{\textup{ave, o}}$ by the following:
\begin{align}
\frac{t}{t+1} R^*_{\textup{ave, o}} &= \mathbb{E}_{\boldsymbol{d}}\left[ \frac{\frac{1}{t+1}\binom{K-1}{t}-\frac{1}{K}\frac{1}{t+1}\sum_{i=1}^K\binom{K-1-N_{\textup{e}}(\boldsymbol{d}_{\backslash\{i\}})}{t}}{\frac{1}{t}\binom{K-1}{t-1}}\right]\nonumber\\
&= \mathbb{E}_{\boldsymbol{d}}\left[ \frac{\frac{1}{K}\binom{K}{t+1}-\frac{1}{K}\sum_{i=1}^K\frac{1}{K-N_{\textup{e}}(\boldsymbol{d}_{\backslash\{i\}})}\binom{K-N_{\textup{e}}(\boldsymbol{d}_{\backslash\{i\}})}{t+1}}{\frac{1}{K}\binom{K}{t}}\right]\nonumber\\
&\leq \mathbb{E}_{\boldsymbol{d}}\left[ \frac{\binom{K}{t+1}-\min_{i}\frac{K}{K-N_{\textup{e}}(\boldsymbol{d}_{\backslash\{i\}})}\binom{K-N_{\textup{e}}(\boldsymbol{d}_{\backslash\{i\}})}{t+1}}{\binom{K}{t}}\right]\nonumber\\
&\leq \mathbb{E}_{\boldsymbol{d}}\left[ \frac{\binom{K}{t+1}-\binom{K-N_{\textup{e}}(\boldsymbol{d})}{t+1}}{\binom{K}{t}}\right]\label{eq:orderoptIneq}\\
&= R^{\textup{sl*}}_{\textup{ave}},\nonumber
\end{align}
where \eqref{eq:orderoptIneq} is due to $1 \leq N_{\textup{e}}(\boldsymbol{d}_{\backslash\{i\}}) \leq N_{\textup{e}}(\boldsymbol{d})$ for all $i \in [K]$.
Therefore, we see that $R^*_{\textup{ave, o}} \geq R^*_{\textup{ave}} \geq \frac{t}{t+1} R^*_{\textup{ave, o}} \geq \frac{1}{2} R^*_{\textup{ave, o}}$, which concludes the proof.
\end{IEEEproof}
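The chain of inequalities above holds demand by demand, before taking the expectation. The following plain-Python sketch checks this exhaustively for small hypothetical parameters; the function \texttt{sl\_load} writes out the exact shared-link load of \cite{yu2018exact}, which we assume matches \eqref{eq:singleSL}.

```python
from itertools import product
from math import comb

def Ne(d):
    # number of distinct files requested in the demand vector d
    return len(set(d))

def d2d_load(d, K, t):
    # achievable one-shot D2D load for demand d, as in eq. (single)
    s = sum(comb(K - 1 - Ne(d[:i] + d[i + 1:]), t) for i in range(K))
    return (comb(K - 1, t) - s / K) / comb(K - 1, t - 1)

def sl_load(d, K, t):
    # exact shared-link load under uncoded placement (Yu et al.)
    return (comb(K, t + 1) - comb(K - Ne(d), t + 1)) / comb(K, t)

K, N = 4, 3
violations = [
    (t, d)
    for t in range(1, K + 1)
    for d in product(range(1, N + 1), repeat=K)
    if t / (t + 1) * d2d_load(d, K, t) > sl_load(d, K, t) + 1e-9
]
```

No demand violates the bound; for $\boldsymbol{d}=(1,2,1,1)$ the D2D load evaluates to $11/12$, matching the example in Section~\ref{sec:achiev}.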
\section{A Novel Achievable D2D Coded Caching Scheme}\label{sec:achiev}
In this section, we present a caching scheme that achieves the loads stated in Theorem \ref{teo} and Corollary \ref{corr}. To this end, we show that for any demand vector $\boldsymbol{d}$ the proposed scheme achieves the load
\begin{align}\label{eq:single}
R^*(\boldsymbol{d},\boldsymbol{\mathcal{M}}_\textup{MAN})=\frac{\binom{K-1}{t}-\frac{1}{K}\sum_{i=1}^{K}\binom{K-1-N_{\textup{e}}(\boldsymbol{d}_{\backslash\{i\}})}{t}}{\binom{K-1}{t-1}},
\end{align}
where $\boldsymbol{\mathcal{M}}_\textup{MAN}$ refers to the symmetric placement originally presented in \cite{maddah2014fundamental}. This immediately proves the achievability of the average and worst-case loads given in Theorem \ref{teo} and Corollary \ref{corr}, respectively. In Subsection \ref{sub:achiev}, we present our achievable scheme and provide a simple example illustrating how the idea of exploiting common demands \cite{yu2018exact} is incorporated into the D2D setting. In Remark~\ref{rem:Ksharedlink}, we discuss our approach of decomposing the D2D model into $K$ shared-link models.
\subsection{Achievability of $R^*(\boldsymbol{d},\boldsymbol{\mathcal{M}}_\textup{MAN})$}\label{sub:achiev}
In the following, we present the proposed caching scheme for integer values of $t \in [K]$. For non-integer values of $t$, memory sharing schemes \cite{maddah2014fundamental,maddah2015decentralized,ji2016fundamental} can be used to achieve the lower convex envelope of the achievable points for $t$ integer.
\subsubsection{Placement phase}
Our placement phase is based on the MAN placement \cite{maddah2014fundamental}, where each file $W_q$ is divided into $\binom{K}{t}$ disjoint sub-files denoted by $W_{q, {\cal V}}$ where ${\cal V} \subseteq [K]$ and $|{\cal V}| = t$. During the placement phase, each user $k$ caches all bits in each sub-file $W_{q, {\cal V}}$ if $k \in {\cal V}$. As there are $\binom{K-1}{t-1}$ sub-files for each file where $k \in {\cal V}$ and each sub-file is composed of $F/\binom{K}{t}$ bits, each user caches $NFt/K = MF$ bits, fulfilling the memory constraint.
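As a sanity check, the MAN placement and its memory accounting can be enumerated directly. This plain-Python sketch uses the hypothetical parameters $N=2$, $K=4$, $t=2$, which match the example later in this subsection.

```python
from itertools import combinations
from math import comb

def man_placement(N, K, t):
    # cache of user k: all sub-files W_{q,V} with |V| = t and k in V
    caches = {k: [] for k in range(1, K + 1)}
    for q in range(1, N + 1):
        for V in combinations(range(1, K + 1), t):
            for k in V:
                caches[k].append((q, V))
    return caches

N, K, t = 2, 4, 2
caches = man_placement(N, K, t)
```

Each user stores $N\binom{K-1}{t-1}$ sub-files of $F/\binom{K}{t}$ bits each, i.e., $NFt/K = MF$ bits, matching the memory constraint.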
\subsubsection{Delivery phase}
The delivery phase starts with the \emph{file-splitting} step: Each sub-file is divided into $t$ equal length disjoint sub-pieces of $F/ t \binom{K}{t}$ bits which are denoted by $W_{q, {\cal V}, i}$, where $i\in {\cal V}$.
Subsequently, each user $i$ selects any subset of $N_{\textup{e}}(\boldsymbol{d}_{\backslash\{i\}})$ users from $[K]\backslash\{i\}$, denoted by $\mathcal{U}^i=\{u_1^i,...,u^i_{N_{\textup{e}}(\boldsymbol{d}_{\backslash\{i\}})}\}$, which request $N_{\textup{e}}(\boldsymbol{d}_{\backslash\{i\}})$ distinct files. Extending the nomenclature in \cite{yu2018exact}, we refer to these users as \textit{leading demanders of user $i$}.
Let us now fix a user $i$ and consider an arbitrary subset $\mathcal{A}^i\subseteq [K]\backslash\{i\}$ of $t$ users. Each user $k\in \mathcal{A}^i$ needs the sub-piece $W_{d_k, \{\mathcal{A}^i \cup \{i\}\}\backslash\{k\}, i}$, which is cached by all the other users in $\mathcal{A}^i$ and by user $i$. Precisely, all users in the set $\mathcal{A}^i$ want to retrieve these sub-pieces $W_{d_k, \{\mathcal{A}^i \cup \{i\}\}\backslash\{k\}, i}$ from the transmissions of user $i$. By letting user $i$ broadcast the codeword
\begin{equation}\label{eq:Broadcasts}
Y_{\mathcal{A}^i}^i:=\underset{k\in \mathcal{A}^i}{{\bigoplus}}W_{d_k, \{\mathcal{A}^i \cup \{i\}\}\backslash\{k\}, i},
\end{equation}
this sub-piece retrieval can be accomplished, as each user $k \in {\cal A}^i$ has all the sub-pieces on the RHS of \eqref{eq:Broadcasts}, except for $W_{d_k, \{\mathcal{A}^i \cup \{i\}\}\backslash\{k\}, i}$.
We let each user $i$ broadcast the binary sums that are useful for at least one of its leading demanders. That is, each user $i$ broadcasts $Y^i_{\mathcal{A}^i}$ for all subsets $\mathcal{A}^i$ that satisfy $\mathcal{A}^i\cap\mathcal{U}^i\neq\emptyset$, i.e., $X_i = \{Y^i_{\mathcal{A}^i}\}_{\mathcal{A}^i\cap\mathcal{U}^i\neq\emptyset}$. For each user $i \in [K]$, the size of the broadcasted codewords amounts to $\binom{K-1}{t}-\binom{K-1-N_{\textup{e}}(\boldsymbol{d}_{\backslash\{i\}})}{t}$ times the size of a sub-piece; summing these over all $i \in [K]$ results in the load stated in \eqref{eq:single}.
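This codeword bookkeeping can be simulated directly. In the plain-Python sketch below, users are indexed $1,\ldots,K$ and the leading demanders are picked greedily by index, which is one valid choice; for the demand $\boldsymbol{d}=(1,2,1,1)$ of the example later in this subsection, the per-user counts are $(3,2,3,3)$, i.e., $11$ codewords in total.

```python
from itertools import combinations
from math import comb

def Ne(vals):
    # number of distinct files in a demand (sub-)vector
    return len(set(vals))

def sent_codewords(i, d, K, t):
    # t-subsets A of [K]\{i} whose codeword Y^i_A user i broadcasts,
    # i.e., those intersecting a set of leading demanders U^i
    others = [k for k in range(1, K + 1) if k != i]
    U, seen = [], set()
    for k in others:           # greedy choice of leading demanders
        if d[k - 1] not in seen:
            seen.add(d[k - 1])
            U.append(k)
    return [A for A in combinations(others, t) if set(A) & set(U)]

K, t, d = 4, 2, (1, 2, 1, 1)
counts = [len(sent_codewords(i, d, K, t)) for i in range(1, K + 1)]
```

The counts agree with the formula $\binom{K-1}{t}-\binom{K-1-N_{\textup{e}}(\boldsymbol{d}_{\backslash\{i\}})}{t}$ for every user $i$, independently of which leading demanders are chosen.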
We now show that each user $k \in [K]$ is able to recover its desired sub-pieces. When $k$ is a leading demander of a user $i$, i.e., $k\in \mathcal{U}^i$, it can decode any sub-piece $W_{d_k, \mathcal{B}^k \cup \{i\},i}$, for any ${\cal B}^k \subseteq [K]\backslash\{i,k\}$ with $|{\cal B}^k| = t-1$, from $Y^i_{{\cal B}^k \cup \{k\}}$, which is broadcast by user $i$, by performing
\begin{equation}\label{eq:decodeBroadcast}
W_{d_k, \mathcal{B}^k \cup \{i\},i} = \left(\underset{x\in \mathcal{B}^k}{{\bigoplus}}W_{d_k, \{\mathcal{B}^k \cup \{i,k\}\}\backslash\{x\}, i}\right) \bigoplus Y^i_{{\cal B}^k\cup \{k\}}
\end{equation}
as can be seen from \eqref{eq:Broadcasts}.
However, when $k \notin \mathcal{U}^i$, not all of the corresponding codewords
$Y^i_{{\cal B}^k \cup \{k\}}$ for its required sub-pieces $W_{d_k, \mathcal{B}^k\cup \{i\},i}$ are directly broadcast by user $i$. User $k$ can still decode its desired sub-pieces by generating the missing codewords from the codewords it received from user $i$. To show this, we first reformulate Lemma $1$ from \cite{yu2018exact}, applied to the codewords broadcast by a user $i$.
\begin{lemma}[Lemma 1 in \cite{yu2018exact}]
\label{dec_l}
Given a user $i$, the demand vector of the remaining users $\boldsymbol{d}_{\backslash\{i\}}$, and a set of leading demanders $\mathcal{U}^i$, for any subset $\mathcal{C}^i\subseteq [K]\backslash\{i\}$ that includes $\mathcal{U}^i$, let $\mathcal{V}^i_{\textup{F}}$ be the family of all subsets $\mathcal{V}^i$ of $\mathcal{C}^i$ such that each requested file in $\boldsymbol{d}_{\backslash\{i\}}$ is requested by exactly one user in $\mathcal{V}^i$.
The following equation holds:
\begin{equation*}
\underset{\mathcal{V}^i\in\mathcal{V}^i_{\textup{F}}}{\Large{\bigoplus}} Y^i_{\mathcal{C}^i\backslash\mathcal{V}^i}=0.
\end{equation*}
\end{lemma}
Let us now consider any subset $\mathcal{A}^i$ of $t$ non-leading demanders of user $i$. Lemma \ref{dec_l} implies that the codeword $Y^i_{\mathcal{A}^i}$ can be directly computed from the broadcasted codewords by the following equation:
\begin{equation}\label{eq:lemmaRecNonleader}
Y^i_{\mathcal{A}^i}=\underset{\mathcal{V}^i\in\mathcal{V}^i_{\textup{F}}\backslash \{\mathcal{U}^i\}}{\Large{\bigoplus}} Y^i_{\mathcal{C}^i\backslash\mathcal{V}^i},
\end{equation}
where $\mathcal{C}^i=\mathcal{A}^i\cup \,\mathcal{U}^i$,
because all codewords on the RHS of the above equation are directly broadcast by user $i$. Thus, each user $k \notin \mathcal{U}^i$ can obtain the value $Y^i_{\mathcal{A}^i}$ for any subset $\mathcal{A}^i$ of $t$ users, and is able to decode its requested sub-pieces.
For each $i \in [K]\backslash\{k\}$, user $k$ decodes its desired sub-piece by following either one of the above strategies, depending on whether it is a leading demander of $i$ or not.
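The cancellation property behind this decoding can be simulated with symbolic sub-pieces: modeling bitwise XOR as the symmetric difference of label sets, Lemma \ref{dec_l} and \eqref{eq:lemmaRecNonleader} can be checked on a hypothetical instance (plain-Python sketch with $\boldsymbol{d}=(1,2,1,1)$, user $i=2$, and ${\cal U}^2=\{1\}$).

```python
def Y(i, A, d):
    # codeword Y^i_A of eq. (Broadcasts): XOR over GF(2) of the sub-pieces
    # W_{d_k, (A ∪ {i}) \ {k}, i}; XOR is modeled as symmetric set difference
    out = set()
    for k in A:
        out ^= {(d[k - 1], frozenset(set(A) | {i}) - {k}, i)}
    return out

d = (1, 2, 1, 1)
# the three codewords of user 2 XOR to zero (Lemma), so the spared codeword
# Y^2_{{3,4}} is the XOR of the two broadcast ones (eq. lemmaRecNonleader)
zero = Y(2, (1, 3), d) ^ Y(2, (1, 4), d) ^ Y(2, (3, 4), d)
spared = Y(2, (1, 3), d) ^ Y(2, (1, 4), d)
```

This symbolic model is faithful because identical sub-piece labels cancel in pairs, exactly as equal bit strings do under XOR.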
In the following, we provide a short demonstration of the above presented ideas.
\paragraph*{An example}\label{para:ex}
Let us consider the case when $N=2, K=4, M=1, t=KM/N=2$ and $\boldsymbol{d} = (1,2,1,1)$. Notice that $N_{\textup{e}}(\boldsymbol{d}_{\backslash\{2\}})=1$ and $N_{\textup{e}}(\boldsymbol{d}_{\backslash\{i\}})=2\; \text{for}\,\, i \in \{1,3,4\}.$ Each file is divided into $\binom{4}{2} = 6$ sub-files and users cache the following sub-files for each $i \in \{1,2\}$:
\begin{align*}
{\cal M}_1 = \:\:\:& \lbrace W_{i,\{1,2\}}, W_{i,\{1,3\}},\, W_{i,\{1,4\}} \rbrace\\
{\cal M}_2 = \:\:\:& \lbrace W_{i,\{1,2\}}, W_{i,\{2,3\}},\, W_{i,\{2,4\}} \rbrace\\
{\cal M}_3 = \:\:\:& \lbrace W_{i,\{1,3\}}, W_{i,\{2,3\}},\, W_{i,\{3,4\}} \rbrace\\
{\cal M}_4 = \:\:\:& \lbrace W_{i,\{1,4\}}, W_{i,\{2,4\}},\, W_{i,\{3,4\}} \rbrace
\end{align*}
and need the following missing sub-files:
\begin{align*}
W_1 \backslash {\cal M}_1 = \:\:\:& \lbrace W_{1,\{2,3\}}, W_{1,\{2,4\}},\, W_{1,\{3,4\}} \rbrace \\
W_2 \backslash {\cal M}_2 = \:\:\:& \lbrace W_{2,\{1,3\}}, W_{2,\{1,4\}},\, W_{2,\{3,4\}} \rbrace \\
W_1 \backslash {\cal M}_3 = \:\:\:& \lbrace W_{1,\{1,2\}}, W_{1,\{1,4\}},\, W_{1,\{2,4\}} \rbrace \\
W_1 \backslash {\cal M}_4 = \:\:\:& \lbrace W_{1,\{1,2\}}, W_{1,\{1,3\}},\, W_{1,\{2,3\}} \rbrace.
\end{align*}
After splitting the sub-files into $2$ equal length sub-pieces, users $1,3,4$ transmit the following codewords, as can be seen from \eqref{eq:Broadcasts}:
\begin{align*}
X_1= \lbrace Y^1_{\{2,3\}} = \:\:\:& W_{2,\{1,3\},1} \Large{\oplus} W_{1,\{1,2\},1},\, Y^1_{\{2,4\}} = W_{2,\{1,4\},1} \Large{\oplus} W_{1,\{1,2\},1},\, Y^1_{\{3,4\}} = W_{1,\{1,3\},1} \Large{\oplus} W_{1,\{1,4\},1}\rbrace\\
X_3= \lbrace Y^3_{\{1,2\}} = \:\:\:& W_{1,\{2,3\},3} \Large{\oplus} W_{2,\{1,3\},3},\, Y^3_{\{1,4\}} = W_{1,\{1,3\},3} \Large{\oplus} W_{1,\{3,4\},3},\, Y^3_{\{2,4\}} = W_{2,\{3,4\},3} \Large{\oplus} W_{1,\{2,3\},3}\rbrace\\
X_4= \lbrace Y^4_{\{1,2\}} = \:\:\:& W_{1,\{2,4\},4} \Large{\oplus} W_{2,\{1,4\},4},\, Y^4_{\{1,3\}} = W_{1,\{1,4\},4} \Large{\oplus} W_{1,\{3,4\},4},\, Y^4_{\{2,3\}} = W_{2,\{3,4\},4} \Large{\oplus} W_{1,\{2,4\},4}\rbrace.
\end{align*}
Notice that for these users, there exists no subset ${\cal A}^i \subseteq [K]\backslash\{i\}$ with $|{\cal A}^i|=t=2$ which satisfies ${\cal U}^i \cap {\cal A}^i = \emptyset$. However, depending on the choice of ${\cal U}^2$, user 2 can find $\binom{K-1-N_{\textup{e}}(\boldsymbol{d}_{\backslash\{2\}})}{t}=1$ subset ${\cal A}^2$ with ${\cal U}^2 \cap {\cal A}^2 = \emptyset$. Such an ${\cal A}^2$ is $\{3,4\}$, $\{1,4\}$, or $\{1,3\}$ for ${\cal U}^2= \{1\}$, ${\cal U}^2 = \{3\}$, or ${\cal U}^2 = \{4\}$, respectively.
Picking user $1$ as its leading demander, i.e., $\mathcal{U}^2 = \{1\}$, user $2$ only transmits
\begin{align*}
X_2= \lbrace Y^2_{\{1,3\}} = \:\:\:& W_{1,\{1,2\},2} \Large{\oplus} W_{1,\{2,3\},2},\, Y^2_{\{1,4\}} = W_{1,\{1,2\},2} \Large{\oplus} W_{1,\{2,4\},2} \rbrace,
\end{align*}
sparing the codeword $Y^2_{\{3,4\}} = W_{1,\{2,3\},2} \Large{\oplus} W_{1,\{2,4\},2}$. As mentioned before, the choice of the leading demanders is arbitrary, and any one of $Y^2_{\{1,3\}},\, Y^2_{\{1,4\}},\, Y^2_{\{3,4\}}$ can be chosen as the superfluous codeword. In fact, any one of these codewords can be obtained by summing the other two, since $Y^2_{\{1,3\}} \Large{\oplus} Y^2_{\{1,4\}} \Large{\oplus} Y^2_{\{3,4\}} = 0$ (cf. \eqref{eq:lemmaRecNonleader}).
From the broadcasted codewords, all users can decode all their missing sub-pieces by using the sub-pieces in their caches as side-information, by performing \eqref{eq:decodeBroadcast}.
As each sub-piece is composed of $F/t\binom{K}{t} = F/12$ bits and as $3 \times 3 + 1 \times 2 = 11$ codewords of such size are broadcasted, our scheme achieves a load of $11/12$, which could be directly calculated by \eqref{eq:single}.
\begin{remark}\label{rem:Ksharedlink}
Notice that a user $i$ generates its codewords exclusively from the sub-pieces $W_{q, {\cal V}, i}$ and there exist $\binom{K-1}{t-1}$ such sub-pieces in its cache.
In addition, for any $k \in [K]\backslash\{i\}$, we have $W_{q, {\cal V}, i} \cap W_{q, {\cal B}, k} = \emptyset$ for any ${\cal V}, {\cal B} \subseteq [K]$, $|{\cal V}|=|{\cal B}|=t$, $i\in {\cal V}$, $k\in {\cal B}$. That is to say, users generate their codewords based on non-overlapping libraries of size $N\binom{K-1}{t-1}\frac{F}{t\binom{K}{t}} = NF/K$ bits.
Also, observe that the cache of a user $k \neq i$ contains $\binom{K-2}{t-2}$ such $W_{q,{\cal V},i}$ sub-pieces, which amounts to $N\binom{K-2}{t-2}\frac{F}{t\binom{K}{t}} = \frac{N(t-1)F}{(K-1)K}$ bits. Recall that a sub-piece $W_{q,{\cal V},i}$ is shared among $t-1$ users other than $i$.
Therefore, the proposed scheme is in fact composed of $K$ shared-link models, each with $N$ files of size $F' = F/K$ bits and $K' = K - 1$ users with caches of size $M' = \frac{N(t-1)}{K-1}$ units each. The corresponding parameter for each model is $t' = \frac{K'M'}{N} = t-1$. Summing the loads \eqref{eq:singleSL} of the $K$ shared-link sub-systems, with the shared-link system parameters $F$, $K$, $M$, $t$, $N_{\textup{e}}(\boldsymbol{d})$ replaced by
$F'$, $K'$, $M'$, $t'$, and $N_{\textup{e}}(\boldsymbol{d}_{\backslash\{i\}})$ for sub-system $i$, respectively, we obtain \eqref{eq:single}.
\end{remark}
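The decomposition above can be verified numerically: summing the shared-link loads of the $K$ sub-systems, each weighted by $F'/F = 1/K$, recovers \eqref{eq:single} exactly. In this plain-Python sketch, \texttt{sl\_load} writes out the shared-link load, which we assume matches \eqref{eq:singleSL}.

```python
from itertools import product
from math import comb

def Ne(vals):
    # number of distinct files in a demand (sub-)vector
    return len(set(vals))

def d2d_load(d, K, t):
    # D2D one-shot load of eq. (single)
    s = sum(comb(K - 1 - Ne(d[:i] + d[i + 1:]), t) for i in range(K))
    return (comb(K - 1, t) - s / K) / comb(K - 1, t - 1)

def sl_load(ne, Kp, tp):
    # shared-link load for Kp users, cache parameter tp, ne distinct demands
    return (comb(Kp, tp + 1) - comb(Kp - ne, tp + 1)) / comb(Kp, tp)

K, N, t = 4, 3, 2
max_gap = max(
    abs(
        sum(sl_load(Ne(d[:i] + d[i + 1:]), K - 1, t - 1) for i in range(K)) / K
        - d2d_load(d, K, t)
    )
    for d in product(range(1, N + 1), repeat=K)
)
```

The gap is zero for every demand vector, confirming that the $K$ sub-systems with parameters $K'=K-1$, $t'=t-1$, $F'=F/K$ exactly compose the D2D load.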
\begin{remark}\label{rem:JiConnection}
When each user requests a distinct file ($N_{\textup{e}}(\boldsymbol{d})=K$), our proposed scheme corresponds to the one presented in \cite{ji2016fundamental}. The potential improvement of our scheme when $N_{\textup{e}}(\boldsymbol{d}) < K$ hinges on identifying the possible linear dependencies among the codewords generated by a user.
\end{remark}
\section{Converse Bound under the Constraint of One-Shot Delivery}\label{sec:Converse}
In this section we prove the converse bound under the constraint of one-shot delivery given in Theorem~\ref{teo}. Under the constraint of one-shot delivery, we can divide each sub-file $W_{q,{\cal V}}$ into sub-pieces. Recall that $W^{k,i}_{d_k,{\cal V}}$ represents the bits of $W_{d_k}$ decoded by user $k$ from $X_{i}$.
Under the constraint of one-shot delivery, we can divide the D2D caching problem into $K$ shared-link models. In the $i^{\textrm{th}}$ shared-link model where $i\in [K]$, user $i$ transmits $X_i$ such that each user $k\in [K]\setminus \{i\}$ can recover $W^{k,i}_{d_k,{\cal V}}$ for all ${\cal V}\subseteq ([K]\setminus \{k\})$ where $i\in {\cal V}$.
\subsection{Converse Bound for $R^*_{\textup{o}}(\mathbf{d},\boldsymbol{\mathcal{M}})$}
\label{sub:converse for R(d,M)}
Fix a demand vector $\mathbf{d}$ and a cache placement $\boldsymbol{\mathcal{M}}$. We first focus on the shared-link model where user $i\in[K]$ broadcasts.
Consider a permutation of $[K]\setminus \{i\}$, denoted by $\mathbf{u}=(u_{1},u_{2},...,u_{K-1})$.
For a given permutation $\mathbf{u}$ and demand vector $\mathbf{d}$,
we define a new vector $\mathbf{f}(\mathbf{u},\mathbf{d})$ obtained by successive pruning of the vector $\mathbf{u}$
by iterating the following steps: let $\mathbf{f}^0= \mathbf{u}$ (initial state), and for each $\ell =1,2,\ldots$,
let $\mathbf{f}^{\ell}$ be the vector obtained from $\mathbf{f}^{\ell-1}$ by removing all elements $f^{\ell-1}_j$ (the $j^{\textrm{th}}$ element of $\mathbf{f}^{\ell-1}$) with $ j > \ell$ such that $d_{f^{\ell-1}_j} = d_{f^{\ell-1}_{\ell}}$. We
stop when there are no more elements to remove, and call the resulting vector $\mathbf{f}(\mathbf{u},\mathbf{d})$.
In other words, $\mathbf{f}(\mathbf{u},\mathbf{d})$ is obtained from $\mathbf{u}$ and $\mathbf{d}$ by removing from $\mathbf{u}$, for each demanded file, all the users demanding
such file except the user in the leftmost position of $\mathbf{u}$.
For example, if $\mathbf{u}=(2,3,5,4)$, $\mathbf{d}=(1,2,2,3,3)$, we have $d_{u_1}=d_{u_2}=2$ and $d_{u_3}=d_{u_4}=3$, and thus $\mathbf{f}(\mathbf{u},\mathbf{d})=(u_1, u_3)=(2,5)$. It can be seen that $\mathbf{f}(\mathbf{u},\mathbf{d})$ contains $N_{\textup{e}}(\boldsymbol{d}_{\backslash\{i\}})$ elements.
For each $j\in [1:N_{\textup{e}}(\boldsymbol{d}_{\backslash\{i\}})]$, we define $f_j(\mathbf{u},\mathbf{d})$ as the $j^{\textrm{th}}$ element of $\mathbf{f}(\mathbf{u},\mathbf{d})$.
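The pruning procedure amounts to keeping, for each demanded file, the leftmost user of $\mathbf{u}$ requesting it; a plain-Python sketch reproducing the example above:

```python
def prune(u, d):
    # f(u, d): scan u left to right and keep a user only if its demanded
    # file has not yet been demanded by an earlier kept user
    f, seen = [], set()
    for k in u:
        if d[k] not in seen:
            seen.add(d[k])
            f.append(k)
    return f

# example from the text: u = (2,3,5,4) and d = (1,2,2,3,3) give f = (2,5)
d = {k + 1: dk for k, dk in enumerate((1, 2, 2, 3, 3))}
f = prune((2, 3, 5, 4), d)
```

As stated in the text, $\mathbf{f}(\mathbf{u},\mathbf{d})$ has one element per file demanded by the users in $\mathbf{u}$.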
For the permutation $\mathbf{u}$, we can choose a set of sub-pieces, $\big(W^{f_j(\mathbf{u},\mathbf{d}),i}_{d_{f_j(\mathbf{u},\mathbf{d})},{\cal V}_{j}} :
{\cal V}_{j}\subseteq[K]\backslash \{f_1(\mathbf{u},\mathbf{d}),\ldots,f_j(\mathbf{u},\mathbf{d})\}, \ i\in {\cal V}_j,
\ j\in[N_{\textup{e}}(\boldsymbol{d}_{\backslash\{i\}})]
\big)$.
By a similar proof as~\cite[Lemma 1]{wan2016optimality} (as used in Section~\ref{sub:shared-link converse} of this paper), we have the following lemma.
\begin{lemma}
\label{lem:acyclic}
For each permutation of $[K]\setminus \{i\}$, denoted by $\mathbf{u}=(u_{1},u_{2},...,u_{K-1})$, we have
\begin{align}
H(X_i)
&\geq \sum_{j\in[N_{\textup{e}}(\boldsymbol{d}_{\backslash\{i\}})]} \sum_{ {\cal V}_{j}\subseteq[K]\backslash \{f_1(\mathbf{u},\mathbf{d}),\ldots,f_j(\mathbf{u},\mathbf{d})\}: i\in {\cal V}_j} |W^{f_j(\mathbf{u},\mathbf{d}),i}_{d_{f_j(\mathbf{u},\mathbf{d})},{\cal V}_{j}}|.
\label{eq:converse of Xk}
\end{align}
\end{lemma}
\begin{IEEEproof}
In the $i^{\textrm{th}}$ shared-link model, for each permutation $\mathbf{u}$ of $[K]\setminus \{i\}$, we can generate a directed graph. Each sub-piece $W^{f_j(\mathbf{u},\mathbf{d}),i}_{d_{f_j(\mathbf{u},\mathbf{d})},{\cal V}_{j}}$, where $j\in[N_{\textup{e}}(\boldsymbol{d}_{\backslash\{i\}})]$, ${\cal V}_{j}\subseteq[K]\backslash \{f_1(\mathbf{u},\mathbf{d}),\ldots,f_j(\mathbf{u},\mathbf{d})\}$ and $i\in {\cal V}_j$, is represented by a node in the graph, demanded by user $f_j(\mathbf{u},\mathbf{d})$. There is a directed edge from node $j_1$ to node $j_2$ if and
only if the user who demands the sub-piece represented by node $j_2$ caches the sub-piece represented
by node $j_1$.
We say that sub-pieces $W^{f_j(\mathbf{u},\mathbf{d}),i}_{d_{f_j(\mathbf{u},\mathbf{d})},{\cal V}_{j}}$ for all ${\cal V}_{j}\subseteq[K]\backslash \{f_1(\mathbf{u},\mathbf{d}),\ldots,f_j(\mathbf{u},\mathbf{d})\}$ where $i\in {\cal V}_j$, are in level $j$.
It is easy to see that the user demanding each sub-piece in level $j$
knows neither the sub-pieces in the same level nor the sub-pieces in the
higher levels. As a result, the directed graph contains no subset of nodes forming a directed cycle. Hence, by the acyclic index coding converse bound in~\cite[Corollary 1]{onthecapacityindex}, we have~\eqref{eq:converse of Xk}.
\end{IEEEproof}
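The acyclicity argument can also be checked mechanically: listing the nodes by level for every permutation $\mathbf{u}$ and drawing an edge from node $a$ to node $b$ whenever the demander of $b$ caches the sub-piece of $a$, every edge must climb to a strictly higher level. A plain-Python sketch for the hypothetical instance $K=4$, $t=2$, $i=1$, $\boldsymbol{d}=(1,2,1,1)$:

```python
from itertools import combinations, permutations

def nodes_for(u, d, i, K, t):
    # f(u,d) keeps the leftmost user of u per demanded file; a level-j node is a
    # sub-piece with cache set V_j ⊆ [K]\{f_1..f_j}, i ∈ V_j, |V_j| = t
    f, seen = [], set()
    for k in u:
        if d[k - 1] not in seen:
            seen.add(d[k - 1])
            f.append(k)
    out = []
    for j, fj in enumerate(f):
        avail = set(range(1, K + 1)) - set(f[: j + 1]) - {i}
        for rest in combinations(sorted(avail), t - 1):
            out.append((j, fj, frozenset(rest) | {i}))
    return out

K, t, i, d = 4, 2, 1, (1, 2, 1, 1)
others = [k for k in range(1, K + 1) if k != i]
edge_levels_ok = True
node_counts = []
for u in permutations(others):
    V = nodes_for(u, d, i, K, t)
    node_counts.append(len(V))
    for ja, fa, Va in V:
        for jb, fb, Vb in V:
            if fb in Va and jb <= ja:  # an edge a -> b that fails to climb levels
                edge_levels_ok = False
```

Since every edge increases the level, a topological order exists and no directed cycle can occur.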
Considering all the permutations of $[K]\setminus \{i\}$ and all $i\in [K]$, we sum the inequalities in the form of~\eqref{eq:converse of Xk} to obtain
\begin{align}
(K-1)! \big(H(X_1)+\ldots+H(X_K) \big) \geq \sum_{k\in[K]} \sum_{{\cal V}\subseteq [K]\setminus \{k\}} \sum_{i\in {\cal V}} a^{k,i}_{{\cal V}} |W^{k,i}_{d_k,{\cal V}}|, \label{eq:summing all Xk}
\end{align}
where $a^{k,i}_{{\cal V}}$ represents the coefficient of $|W^{k,i}_{d_k,{\cal V}}|$ in the sum. In Appendix~\ref{sec:proof of same coeff}, we prove the following lemma.
\begin{lemma}
\label{lem:same coeff}
$a^{k,i_1}_{{\cal V}}=a^{k,i_2}_{{\cal V}}$, for each $i_1,i_2 \in {\cal V}$.
\end{lemma}
From Lemma~\ref{lem:same coeff}, we define $a^{k}_{{\cal V}}=\frac{ a^{k,i}_{{\cal V}}}{(K-1)! } $ for all $i\in {\cal V}$. Hence, from~\eqref{eq:summing all Xk} we have
\begin{subequations}
\begin{align}
R^*_{\textup{o}}(\mathbf{d},\boldsymbol{\mathcal{M}})F\geq
\big(H(X_1)+\ldots+H(X_K) \big) &\geq \frac{1}{(K-1)! } \sum_{k\in[K]} \sum_{{\cal V}\subseteq [K]\setminus \{k\}} \sum_{i\in {\cal V}} a^{k,i}_{{\cal V}} |W^{k,i}_{d_k,{\cal V}}| \label{eq:sum to sub-file 1} \\
& \geq \sum_{k\in[K]} \sum_{{\cal V}\subseteq [K]\setminus \{k\}} a^{k}_{{\cal V}} |W_{d_k,{\cal V}}|\label{eq:sum to sub-file}
\end{align}
\end{subequations}
where in~\eqref{eq:sum to sub-file} we used
\begin{align}
\sum_{i\in {\cal V}}|W^{k,i}_{d_k,{\cal V}}|\geq |W_{d_k,{\cal V}}|.\label{eq:sum to sub-file 3}
\end{align}
\begin{remark}\label{rem:improvement}
To derive the converse bound under the constraint of uncoded cache placement in~\cite{yu2018exact,wan2016caching}, the authors consider all the demands and all the permutations and sum the inequalities together. By symmetry, it can easily be checked that, in the resulting sum, the coefficients of sub-files known by the same number of users are the same. However, in our problem, notice that~\eqref{eq:sum to sub-file 1} and~\eqref{eq:sum to sub-file 3} only hold for one demand. So we must consider one demand at a time and ensure that the coefficients of $H(X_k)$, $k\in[K]$, are the same; meanwhile, for each demand, we must also ensure that the coefficients in Lemma~\ref{lem:same coeff} are the same.
However, for each demand, the $K$ shared-link models are not symmetric. If we used the choice of acyclic sets in~\cite{yu2018exact,wan2016caching} for each of the $K$ shared-link models, we could not ensure that, for a given demand, the coefficients are symmetric.
\end{remark}
\subsection{Converse Bound for $R^*_{\textup{ave, o}}$}
\label{sub:average converse}
We focus on a demand type $\boldsymbol{s}$. For each demand vector $\mathbf{d} \in {\cal D}_{\boldsymbol{s}}$, we lower bound $R^*_{\textup{o}}(\mathbf{d},\boldsymbol{\mathcal{M}})$ as in~\eqref{eq:sum to sub-file}. Considering all the demands in ${\cal D}_{\boldsymbol{s}}$, we then sum the inequalities in the form of~\eqref{eq:sum to sub-file},
\begin{align}
\sum_{\mathbf{d} \in {\cal D}_{\boldsymbol{s}}} R^*_{\textup{o}}(\mathbf{d},\boldsymbol{\mathcal{M}})F\geq \sum_{q\in[N]} \sum_{{\cal V}\subseteq [K]} b_{q,{\cal V}} |W_{q,{\cal V}}|\label{eq:demand type}
\end{align}
where $b_{q,{\cal V}}$ represents the coefficient of $|W_{q,{\cal V}}|$. By the symmetry, it can be seen that $b_{q_1,{\cal V}_1}=b_{q_2,{\cal V}_2}$ if $|{\cal V}_1|=|{\cal V}_2|$. So we let $b_{t}:=b_{q,{\cal V}} $ for each $q\in [N]$ and ${\cal V}\subseteq [K]$ where $|{\cal V}|=t$. Hence, from~\eqref{eq:demand type} we get
\begin{align}
|{\cal D}_{\boldsymbol{s}}|F \mathbb{E}_{\boldsymbol{d}\in {\cal D}_{\boldsymbol{s}}}[ R^*_{\textup{o}}(\boldsymbol{d},\boldsymbol{\mathcal{M}})]=\sum_{\mathbf{d} \in {\cal D}_{\boldsymbol{s}}} R^*_{\textup{o}}(\mathbf{d},\boldsymbol{\mathcal{M}})F\geq \sum_{t\in [0:K]} b_{t} x_t \label{eq:bt xt}
\end{align}
where we define
\begin{align}
x_t:= \sum_{q\in [N]} \sum_{{\cal V} \subseteq [K]:|{\cal V}|=t} |W_{q,{\cal V}}|.\label{eq:definition of xt}
\end{align}
Notice that each sub-file $W_{q,{\cal V}}$ demanded by a user is transmitted as $|{\cal V}|$ sub-pieces, each of which is known by $|{\cal V}|$ users. Now focus on an integer $t\in [0:K]$. We compute $b_{t}$ in the next two steps:
\begin{enumerate}
\item
$x_t$ is the sum of $N\binom{K}{t}$ sub-files known by $t$ users.
So in the sum expression~\eqref{eq:bt xt}, we obtain $b_{t} x_t$ from a sum of $tN\binom{K}{t}b_t$ terms of sub-pieces known by $t$ users.
\item
In~\eqref{eq:converse of Xk}, there are $\binom{K-2}{t-1}+\binom{K-3}{t-1}+\dots+\binom{K-N_{\textup{e}}(\boldsymbol{d}_{\backslash\{i\}})-1}{t-1}=\binom{K-1}{t}-\binom{K-N_{\textup{e}}(\boldsymbol{d}_{\backslash\{i\}})-1}{t}$ terms of sub-pieces known by $t$ users. Hence,~\eqref{eq:sum to sub-file 1} contains $\sum_{i\in [K]} \Big(\binom{K-1}{t}-\binom{K-N_{\textup{e}}(\boldsymbol{d}_{\backslash\{i\}})-1}{t}\Big)$ terms of sub-pieces known by $t$ users. So the sum of~\eqref{eq:sum to sub-file 1} over all the $|{\cal D}_{\boldsymbol{s}}|$ demand vectors contains $|{\cal D}_{\boldsymbol{s}}| \sum_{i\in [K]} \Big( \binom{K-1}{t}-\binom{K-N_{\textup{e}}(\boldsymbol{d}_{\backslash\{i\}})-1}{t}\Big)$ terms of sub-pieces known by $t$ users.
\end{enumerate}
Combining Steps 1) and 2), we can see
\begin{align}
tN\binom{K}{t}b_t= |{\cal D}_{\boldsymbol{s}}| \left( \sum_{i\in [K]} \binom{K-1}{t}-\binom{K-N_{\textup{e}}(\boldsymbol{d}_{\backslash\{i\}})-1}{t}\right)\label{eq:step 1 2}
\end{align}
and thus
\begin{align}
b_t=\frac{|{\cal D}_{\boldsymbol{s}}| \Big( \sum_{i\in [K]} \binom{K-1}{t}-\binom{K-N_{\textup{e}}(\boldsymbol{d}_{\backslash\{i\}})-1}{t}\Big)}{tN\binom{K}{t}}.\label{eq:bt}
\end{align}
We take~\eqref{eq:bt} into~\eqref{eq:bt xt} to obtain
\begin{subequations}
\begin{align}
\mathbb{E}_{\boldsymbol{d}\in {\cal D}_{\boldsymbol{s}}}[ R^*_{\textup{o}}(\boldsymbol{d},\boldsymbol{\mathcal{M}})] &\geq \sum_{t\in [0:K]} \frac{b_{t} x_t}{|{\cal D}_{\boldsymbol{s}}|F} \label{eq:take bt into xt 1}\\
&= \sum_{t\in [0:K]} \frac{\Big( \sum_{i\in [K]} \binom{K-1}{t}-\binom{K-N_{\textup{e}}(\boldsymbol{d}_{\backslash\{i\}})-1}{t}\Big) x_t}{tN\binom{K}{t} F} \label{eq:take bt into xt 2}\\
&=\sum_{t\in [0:K]} \frac{\Big( \sum_{i\in [K]} \binom{K-1}{t}-\binom{K-N_{\textup{e}}(\boldsymbol{d}_{\backslash\{i\}})-1}{t}\Big) x_t}{K\binom{K-1}{t-1} NF} \label{eq:take bt into xt 3}\\
&=\sum_{t\in [0:K]} \frac{ \binom{K-1}{t}- \frac{1}{K} \sum_{i\in [K]}\binom{K-N_{\textup{e}}(\boldsymbol{d}_{\backslash\{i\}})-1}{t} }{\binom{K-1}{t-1} NF}x_t \label{eq:take bt into xt 4}.
\end{align}
\end{subequations}
We also have the file size constraint
\begin{align}
\sum_{t\in [0:K]} x_t=NF, \label{eq:file size}
\end{align}
and the cache size constraint
\begin{align}
\sum_{t\in [1:K]} t x_t \leq KMF. \label{eq:cache size}
\end{align}
Note that the set of values of $N_{\textup{e}}(\boldsymbol{d}_{\backslash\{i\}})$ for all $i \in [K]$ is the same for all demand vectors $\boldsymbol{d}$ with given composition $\boldsymbol{s}$. We let $r_{t,\boldsymbol{s}}:=\frac{ \binom{K-1}{t}- \frac{1}{K} \sum_{i\in [K]}\binom{K-N_{\textup{e}}(\boldsymbol{d}_{\backslash\{i\}})-1}{t} }{\binom{K-1}{t-1} NF}$, where $\boldsymbol{d}\in {\cal D}_{\boldsymbol{s}}$. Similar to~\cite{yu2018exact}, we can lower bound~\eqref{eq:take bt into xt 4} using Jensen's inequality and the
monotonicity of $\textrm{Conv}(r_{t,\boldsymbol{s}})$,
\begin{align}
\mathbb{E}_{\boldsymbol{d}\in {\cal D}_{\boldsymbol{s}}}[ R^*_{\textup{o}}(\boldsymbol{d},\boldsymbol{\mathcal{M}})]\geq \textrm{Conv}(r_{t,\boldsymbol{s}}).
\end{align}
So we have
\begin{align}
\min_{\boldsymbol{\mathcal{M}}}\mathbb{E}_{\boldsymbol{d}\in {\cal D}_{\boldsymbol{s}}}[ R^*_{\textup{o}}(\boldsymbol{d},\boldsymbol{\mathcal{M}})]&\geq \min_{\boldsymbol{\mathcal{M}}} \textrm{Conv}(r_{t,\boldsymbol{s}})=\textrm{Conv}(r_{t,\boldsymbol{s}}).\label{eq:conv}
\end{align}
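For $r_{t,\boldsymbol{s}}$ to be well defined, the coefficient in \eqref{eq:take bt into xt 4} must depend on $\boldsymbol{d}$ only through its type. This can be checked exhaustively for small parameters; in the plain-Python sketch below, \texttt{r\_num} is the numerator of $r_{t,\boldsymbol{s}}$ up to the constant $NF$.

```python
from itertools import product
from math import comb
from collections import Counter

def Ne(vals):
    # number of distinct files in a demand (sub-)vector
    return len(set(vals))

def r_num(d, K, t):
    # C(K-1,t) - (1/K) * sum_i C(K-1-Ne(d\{i}), t)
    return comb(K - 1, t) - sum(
        comb(K - 1 - Ne(d[:i] + d[i + 1:]), t) for i in range(K)
    ) / K

K, N = 4, 3
by_type = {}
for d in product(range(1, N + 1), repeat=K):
    s = tuple(sorted(Counter(d).values()))  # demand type: multiset of request counts
    for t in range(1, K + 1):
        by_type.setdefault((s, t), set()).add(round(r_num(d, K, t), 9))
```

Every (type, $t$) pair collects a single value, confirming that the multiset of $N_{\textup{e}}(\boldsymbol{d}_{\backslash\{i\}})$, $i\in[K]$, is the same for all demands of a given type.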
Considering all the demand types and from~\eqref{eq:conv}, we have
\begin{align}
R^*_{\textup{ave, o}}&\geq \mathbb{E}_{\boldsymbol{s}}\left[ \min_{\boldsymbol{\mathcal{M}}} \mathbb{E}_{\boldsymbol{d}\in {\cal D}_{\boldsymbol{s}}}[ R^*_{\textup{o}}(\boldsymbol{d},\boldsymbol{\mathcal{M}})] \right]\geq \mathbb{E}_{\boldsymbol{s}}[\textrm{Conv}(r_{t,\boldsymbol{s}})].\label{eq:consider ave}
\end{align}
Since $r_{t,\boldsymbol{s}}$ is convex, we can exchange the order of the expectation and $\textrm{Conv}$ in~\eqref{eq:consider ave}. This proves the converse bound in Theorem~\ref{teo}.
\begin{remark}
We can also prove the converse bound in Theorem~\ref{teo} from the constraints in~\eqref{eq:take bt into xt 4},~\eqref{eq:file size} and~\eqref{eq:cache size} by Fourier-Motzkin elimination, as was done in~\cite{wan2016optimality,wan2016caching}.
\end{remark}
\subsection{Converse Bound for $R^*_{\textup{worst, o}}$}
\label{sub:worse-case converse}
We can directly extend the proof for the average load in Section~\ref{sub:average converse} to the worst-case load as follows.
\begin{align}
R^*_{\textup{worst, o}} = \min_{\boldsymbol{\mathcal{M}}} \max_{\boldsymbol{d}} R^*_{\textup{o}}(\boldsymbol{d},\boldsymbol{\mathcal{M}}) &\geq \min_{\boldsymbol{\mathcal{M}}} \max_{\boldsymbol{s}\in \mathcal{S}} \mathbb{E}_{\boldsymbol{d}\in {\cal D}_{\boldsymbol{s}}}[ R^*_{\textup{o}}(\boldsymbol{d},\boldsymbol{\mathcal{M}})]\nonumber\\
&\geq \max_{\boldsymbol{s}\in \mathcal{S}} \min_{\boldsymbol{\mathcal{M}}}\mathbb{E}_{\boldsymbol{d}\in {\cal D}_{\boldsymbol{s}}}[ R^*_{\textup{o}}(\boldsymbol{d},\boldsymbol{\mathcal{M}})] \nonumber\\
&\geq \max_{\boldsymbol{s}\in \mathcal{S}} \textrm{Conv}(r_{t,\boldsymbol{s}})\label{eq:convPeak}
\end{align}
where \eqref{eq:convPeak} follows from \eqref{eq:conv}.
Thus we can prove the converse bound in \eqref{eq:worseImplicit}.
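The lower convex envelope $\textrm{Conv}(\cdot)$ used in both converse bounds can be computed mechanically from finitely many corner points. The following sketch (ours, not part of the proof; the sample points are arbitrary placeholders) extracts the corner points of the lower convex envelope via the lower hull of Andrew's monotone chain:

```python
def lower_convex_envelope(points):
    """Corner points of the lower convex envelope of (M, R) pairs
    (lower hull of Andrew's monotone chain)."""
    pts = sorted(points)
    hull = []
    for p in pts:
        # Pop the last point while it lies on or above the chord to p.
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

# The point (2, 2) lies above the chord from (1, 1) to (3, 0) and is discarded.
corners = lower_convex_envelope([(0, 3), (1, 1), (2, 2), (3, 0)])
```

Linear interpolation between consecutive returned corner points then evaluates the envelope at any intermediate memory size.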
\section{Load-outage trade-off in the presence of random user activity}\label{sec:loadouta}
In this section we consider the more realistic scenario of random activity among the users. We also focus on $M=N t/K$ where $t\in [0:K]$.
We assume that each user may be inactive during the delivery phase, independently with probability $p$, and denote by $I$ the realization of the number of inactive users. We also assume that
the inactivity of one user is not known to the other users during the delivery phase.
To promote successful decoding in the presence of such inactive users, we propose to use MDS coding. A codeword encoded with an $(m,n)$-MDS code can be perfectly reconstructed from any $m$ out of the $n$ MDS-coded blocks, at the penalty of an increase in the code block length by a factor of $n/m$. In the sequel, we elaborate on the coding procedure and the choice of the values $m, n$. We will also state the resulting outage probability $P_{out}$ of the proposed design, i.e., the probability that some active user cannot decode its desired file.
Towards this end, we recall that for the D2D caching problem in~\cite{ji2016fundamental}, our proposed caching scheme in Section~\ref{sec:achiev} divides each file into $t \binom{K}{t}$ sub-pieces and lets each user cache $t \binom{K-1}{t-1}$ sub-pieces of each file. The proposed one-shot delivery scheme allows users to decode exactly $\binom{K-2}{t-1}$ missing sub-pieces of their requested file from each of the other $K-1$ users. However, in the presence of user inactivity, an active user cannot receive all the $(K-1)\binom{K-2}{t-1}$ missing sub-pieces; instead, it receives only $(K-I-1)\binom{K-2}{t-1}$ sub-pieces in total from the remaining $K-I-1$ active users.
Notice that the user inactivity events happen during the delivery phase and cannot be predicted during the placement phase; however, the inactivity probability $p$ is known to everyone during the placement.
Hence, for the D2D caching problem with user inactivity, during the placement phase we fix an integer $a\in [0:K-1]$ and divide each file into $t \binom{K-1}{t-1} + (K-1-a)\binom{K-2}{t-1}$ non-overlapping and equal-length parts, which are then encoded by a $\big(t \binom{K-1}{t-1} + (K-1-a)\binom{K-2}{t-1},t \binom{K}{t}\big)$-MDS code. Hence, for each file $W_i$ where $i\in [N]$, we have $t \binom{K}{t}$ coded sub-pieces, each of which has $\frac{F}{t \binom{K-1}{t-1} + (K-1-a)\binom{K-2}{t-1}}$ bits. For each set ${\cal V}\subseteq [K]$ where $|{\cal V}|=t+1$ and each user $k\in {\cal V}$, there is one coded sub-piece $W^{\prime}_{i,{\cal V},k}$ cached by the users in ${\cal V}$.
During the delivery phase, we use the proposed one-shot delivery scheme in Section~\ref{sec:achiev}.
Hence, an active user caches $t \binom{K-1}{t-1}$ coded sub-pieces of its desired file and receives $(K-1-I)\binom{K-2}{t-1}$ coded sub-pieces of its desired file during the delivery phase from the other active users. It can be seen that if $a\geq I$, each active user can recover its desired file. However, increasing $a$ increases both the load and the required cache size by a factor of $\frac{t \binom{K}{t}}{t \binom{K-1}{t-1} + (K-1-a)\binom{K-2}{t-1}}$, while simultaneously decreasing the outage probability. Hence, there is a tradeoff between the choice of $a$ and the outage probability.
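The counting above can be sanity-checked by brute force (our own check, not part of the paper; the sample parameters are arbitrary): an active user holds $t\binom{K-1}{t-1}$ cached coded sub-pieces and collects $(K-1-I)\binom{K-2}{t-1}$ more during delivery, so the MDS reconstruction threshold of $t\binom{K-1}{t-1}+(K-1-a)\binom{K-2}{t-1}$ coded sub-pieces is met exactly when $I\leq a$:

```python
from math import comb

def decodable(K, t, a, I):
    """True iff an active user collects at least the MDS threshold of coded sub-pieces."""
    collected = t * comb(K - 1, t - 1) + (K - 1 - I) * comb(K - 2, t - 1)
    threshold = t * comb(K - 1, t - 1) + (K - 1 - a) * comb(K - 2, t - 1)
    return collected >= threshold

# Decoding succeeds exactly when the number of inactive users I is at most a.
assert all(decodable(10, 3, a, I) == (I <= a)
           for a in range(10 - 1) for I in range(10 + 1))
```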
For each $a\in [0:K-1]$, the outage probability of the network is given as:
\begin{align}
P_{out} = \mathrm{P} \{a < I\}= \sum_{i^{\prime} = a+1}^{K} \binom{K}{i^{\prime} }p^{i^{\prime} } (1-p)^{K-i^{\prime} }.\label{eq:outage}
\end{align}
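The binomial tail in~\eqref{eq:outage} can be evaluated directly; the snippet below (ours; the sample values are arbitrary) also checks the two extreme cases $a=0$ and $a=K-1$:

```python
from math import comb

def p_out(K, p, a):
    """Outage probability P{I > a} for I ~ Binomial(K, p), as in the displayed sum."""
    return sum(comb(K, i) * p**i * (1 - p)**(K - i) for i in range(a + 1, K + 1))

# a = 0: outage unless all users are active; a = K-1: outage only if all are inactive.
assert abs(p_out(5, 0.1, 0) - (1 - 0.9**5)) < 1e-12
assert abs(p_out(5, 0.1, 4) - 0.1**5) < 1e-15
```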
Hence, we have the following results.
\begin{theorem}[Average load with user inactivity]\label{thm:inact}
For a D2D caching problem with user inactivity probability $p$, and with a database of $N$ files and $K$ users, the following memory and average load tradeoff points, under a uniform demand distribution, are achievable
\begin{align}
(M,R_{\textup{ave,inact}})=\Big(\frac{N t}{K} \frac{t \binom{K}{t}}{t \binom{K-1}{t-1} + (K-1-a)\binom{K-2}{t-1}}, \frac{t \binom{K}{t}}{t \binom{K-1}{t-1} + (K-1-a)\binom{K-2}{t-1}}R^*_{\textup{ave, o}}\Big), \label{eq:averageinact}
\end{align}
with an outage probability in~\eqref{eq:outage} for each $a\in[0:K-1]$ and each $t\in [0:K]$,
where $R^*_{\textup{ave, o}}$ is given in~\eqref{eq:averageworstcase}.
For other memory sizes, the memory and average load tradeoff can be obtained by the lower convex envelope of the above corner points.
\end{theorem}
\begin{theorem}[Worst-case load with user inactivity]\label{thm:worst inact}
For a D2D caching problem with user inactivity probability $p$, and with a database of $N$ files and $K$ users, where $t\in [0:K]$, the following memory and worst-case load tradeoff points are achievable
\begin{align}
(M,R_{\textup{worst,inact}}) = \Big(\frac{N t}{K} \frac{t \binom{K}{t}}{t \binom{K-1}{t-1} + (K-1-a)\binom{K-2}{t-1}}, \frac{t \binom{K}{t}}{t \binom{K-1}{t-1} + (K-1-a)\binom{K-2}{t-1}}R^*_{\textup{worst, o}}\Big), \label{eq:worstcaseinact}
\end{align}
with an outage probability in~\eqref{eq:outage} for each $a\in[0:K-1]$ and each $t\in [0:K]$,
where $R^*_{\textup{worst, o}}$ is given in~\eqref{eq:worstCaseload}.
For other memory sizes, the memory and worst-case load tradeoff can be obtained by the lower convex envelope of the above corner points.
\end{theorem}
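Both theorems rescale the memory and the load of the one-shot scheme by the same MDS expansion factor. A quick check (ours; the sample parameters are arbitrary) confirms that the factor equals $1$ at $a=0$, i.e., no redundancy, and is non-decreasing in $a$:

```python
from math import comb

def mds_factor(K, t, a):
    """Expansion factor t*C(K,t) / (t*C(K-1,t-1) + (K-1-a)*C(K-2,t-1))."""
    return t * comb(K, t) / (t * comb(K - 1, t - 1) + (K - 1 - a) * comb(K - 2, t - 1))

# a = 0 reproduces the original file-splitting, hence no expansion:
assert abs(mds_factor(6, 2, 0) - 1.0) < 1e-12
# The factor is non-decreasing in a (more redundancy, larger load and memory):
assert all(mds_factor(6, 2, a) <= mds_factor(6, 2, a + 1) for a in range(4))
```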
Notice that if we do not use a one-shot delivery scheme, each user has to jointly decode each of its desired sub-pieces from the packets received from different users. Since the inactivity events of other users are not known to each user, it is hard to design such a non-one-shot delivery scheme.
\section{Numerical Evaluations}\label{sec:Numerical}
In this section, we compare the load-memory trade-off of the presented scheme with the bounds from related works and evaluate the load-outage trade-off when user inactivity is taken into account (cf. Section \ref{sec:loadouta}).
In Fig.~\ref{fig:N10K20}, we consider the D2D caching problem in~\cite{ji2016fundamental} with $N=10$, $K=30$ and compare the load achieved by the presented one-shot scheme with the achievable load in \cite{ji2016fundamental} (cf. Subsection \ref{sub:D2DRelated}) and with the minimum achievable load for the shared-link model \cite{yu2018exact} (cf. Subsection \ref{sub:sharedLinkRelated}). When the minimum peak load is considered, we also provide the converse bounds in \cite{ji2016fundamental,sengupta2015beyond} (cf. Subsection \ref{sub:D2DRelated}). It can be seen that the proposed D2D caching scheme outperforms the one in~\cite{ji2016fundamental}.
\begin{figure}
\centerline{\scalebox{0.7}{\input{fig2a.tex}}\scalebox{0.7}{\input{fig2b.tex}}}
\caption{\small Consider the D2D caching problem in~\cite{ji2016fundamental} with $N=10$ and $K=30$. The left figure shows the tradeoff between the memory size and the worst-case load; the right figure shows the tradeoff between the memory size and the average load under a uniform demand distribution. }
\label{fig:N10K20}
\end{figure}
In Fig.~\ref{fig:loadCacheOutage51015}, we consider the D2D caching problem with user inactivity, where $N=50$, $K=100$, and $p=0.1$. We see that for small cache sizes, increasing the load by a factor of $2$ is sufficient to drive the outage probability to $P_{out}\approx 10^{-5}$. As the cache size grows, the load increase necessary to achieve this outage diminishes.
\begin{figure}
\centerline{\scalebox{0.7}{\input{fig3a.tex}}\scalebox{0.7}{\input{fig3b.tex}}}
\caption{\small Consider the D2D caching problem with user inactivity, where $N=50$, $K=100$, and $p=0.1$. The left figure shows the tradeoff between the memory size and the worst-case load; the right figure shows the tradeoff between the memory size and the average load under a uniform demand distribution. }
\label{fig:loadCacheOutage51015}
\end{figure}
\section{Conclusions}
In this work, we completely characterized the load-memory
trade-off for cache-aided D2D networks under the constraint of one-shot delivery, when the placement phase is restricted to be uncoded and centralized. We presented a caching scheme and
proved its exact optimality in terms of both average and peak loads. Furthermore, we showed that the achieved load is optimal within a factor of $2$ when the constraint of one-shot delivery is removed. Lastly, we extended the proposed one-shot delivery scheme such that the enhanced scheme is robust against random user inactivity.
\appendices
\section{Proof of Lemma~\ref{lem:same coeff}}
\label{sec:proof of same coeff}
We assume that the set of users in $[K]\setminus\{k\}$ who have same demand as user $k$ is ${\cal S}$.
Let us first focus on $W^{k,i_1}_{d_k,{\cal V}}$. Consider a permutation of $[K]\setminus \{i_1\}$, denoted by $\mathbf{u}$: if the position of user $k$ in $\mathbf{u}$ is before the position of each user in the set $\big(({\cal S}\cup {\cal V})\setminus \{i_1,i_2\}\big )\cup \{i_2\}$, which is a subset of $[K]\setminus \{i_1\}$ with cardinality $|({\cal S}\cup {\cal V})\setminus \{i_1,i_2\}|+1$, then $|W^{k,i_1}_{d_k,{\cal V}}|$ appears in the inequality of the form~\eqref{eq:converse of Xk} for this permutation.
Similarly, let us then focus on $W^{k,i_2}_{d_k,{\cal V}}$. Consider a permutation of $[K]\setminus \{i_2\}$, denoted by $\mathbf{u}^{\prime}$: if the position of user $k$ in $\mathbf{u}^{\prime}$ is before the position of each user in the set $\big(({\cal S}\cup {\cal V})\setminus \{i_1,i_2\}\big )\cup \{i_1\}$, which is a subset of $[K]\setminus \{i_2\}$ with cardinality $|({\cal S}\cup{\cal V})\setminus \{i_1,i_2\}|+1$, then $|W^{k,i_2}_{d_k,{\cal V}}|$ appears in the inequality of the form~\eqref{eq:converse of Xk} for this permutation.
Hence, in~\eqref{eq:summing all Xk} the coefficient of $|W^{k,i_1}_{d_k,{\cal V}}|$ is equal to that of $|W^{k,i_2}_{d_k,{\cal V}}|$,
i.e., $a^{k,i_1}_{{\cal V}}=a^{k,i_2}_{{\cal V}}$.
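The symmetry argument can also be brute-forced on a small instance (our own check; the parameters below are arbitrary): the number of permutations in which user $k$ precedes every member of the relevant subset of cardinality $|({\cal S}\cup {\cal V})\setminus \{i_1,i_2\}|+1$ is the same whether the excluded user is $i_1$ or $i_2$.

```python
from itertools import permutations

def count_k_first(ground, k, before_set):
    """Number of permutations of `ground` in which k precedes every member of before_set."""
    count = 0
    for perm in permutations(ground):
        pos = {u: idx for idx, u in enumerate(perm)}
        if all(pos[k] < pos[u] for u in before_set):
            count += 1
    return count

# Small instance: K = 5, k = 1, i1 = 2, i2 = 3, (S ∪ V) \ {i1, i2} = {4}.
c1 = count_k_first([u for u in range(1, 6) if u != 2], 1, {3, 4})
c2 = count_k_first([u for u in range(1, 6) if u != 3], 1, {2, 4})
assert c1 == c2 == 24 // 3  # k is first among 3 designated users in 1/3 of the 4! orders
```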
\bibliographystyle{IEEEtran}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\makeatletter
\@addtoreset{equation}{section}
\makeatother
\date{empty}
\pagestyle{plain}
\begin{document}
\begin{titlepage}
\null
\begin{flushright}
June, 2014
\end{flushright}
\vskip 0.5cm
\begin{center}
{\Large \bf BPS States in Supersymmetric Chiral Models \\
\vskip 0.3cm
with Higher Derivative Terms
}
\vskip 1.1cm
\normalsize
\renewcommand\thefootnote{\alph{footnote}}
{\large
Muneto Nitta$^{\dagger}$\footnote{nitta(at)phys-h.keio.ac.jp}
and Shin Sasaki$^\ddagger$\footnote{shin-s(at)kitasato-u.ac.jp}
}
\vskip 0.7cm
{\it
$^\dagger$
Department of Physics, and Research and Education Center for Natural Sciences, \\
\vskip -0.2cm
Keio University, Hiyoshi 4-1-1, Yokohama, Kanagawa 223-8521, Japan
\vskip 0.1cm
$^\ddagger$
Department of Physics, Kitasato University \\
\vskip -0.2cm
Sagamihara 252-0373, Japan
}
\vskip 0.5cm
\begin{abstract}
We study the higher derivative chiral models with four supercharges
and Bogomol'nyi-Prasad-Sommerfield (BPS) states in these models.
The off-shell Lagrangian generically includes higher powers of the
auxiliary fields $F$, which gives rise to distinct on-shell branches associated with the solutions to
the auxiliary field equation.
We point out that the model admits a supersymmetric completion of
arbitrary higher derivative bosonic models of a
single complex scalar field and
an arbitrary scalar potential can be
introduced even without superpotentials.
As an example, we present
a supersymmetric extension of the
Faddeev-Skyrme model
without four time derivatives,
in contrast to the previously proposed supersymmetric
Faddeev-Skyrme-like model
containing four time derivatives.
In general, higher derivative terms together with a superpotential
result in deformed scalar potentials.
We find that higher derivative corrections to 1/2 BPS domain
walls and 1/2 BPS lumps are exactly canceled out while the 1/4 BPS lumps (as compact baby Skyrmions) depend on a
characteristic feature of the higher derivative models.
We also find a new 1/4 BPS condition for domain
wall junctions which generically receives higher derivative corrections.
\end{abstract}
\end{center}
\end{titlepage}
\newpage
\setcounter{footnote}{0}
\renewcommand\thefootnote{\arabic{footnote}}
\pagenumbering{arabic}
\section{Introduction}
Low energy dynamics of field theories
can be described by only light fields such as
Nambu-Goldstone modes
when one integrates out
massive modes.
The low-energy effective theories are
usually organized as derivative expansions;
thereby, they inevitably contain higher derivative terms of fields.
Chiral perturbation theory is such a theory
describing low-energy pion dynamics in QCD with
a chiral symmetry breaking \cite{Leutwyler:1993iq}.
The Skyrme model \cite{Sk}, which is a non-linear sigma model with fourth
order derivative terms,
belongs to this class.
Supergravity, as a low-energy effective theory of string theory,
should contain higher derivative correction terms \cite{Polchinski:1998rq}.
Other examples include world-volume effective actions of solitonic objects such as
topological solitons in field theories
and D-branes in string theories \cite{FrTs}.
The effective theory of a D-brane is described by
the Dirac-Born-Infeld (DBI) action \cite{DiBoIn}
containing an infinite number of derivatives.
Higher derivative field theories
are also useful in other areas of physics.
In the cosmological context, higher derivative theories are proposed for
inflation models such as the K-inflation \cite{ArDaMu}
and the Galileon inflation \cite{NiRaTr}.
These higher derivative models are known to
admit characteristic soliton solutions such
as k-defects \cite{Ba}, compactons \cite{Adam:2009px,AdRoSaGuWe} and so on.
On the other hand, supersymmetry is
one of the most important tools in modern high energy physics.
Phenomenologically, it has been considered the most promising candidate
for solving the naturalness problem of the Standard Model;
it also plays an important role
in controlling quantum corrections
in supersymmetric field theories, leading
to exact results for low-energy dynamics \cite{Seiberg:1994rs}.
When one constructs low-energy effective theories
in supersymmetric field theories,
one is required to consider higher derivative corrections
in a supersymmetric manner.
It is, however, not so easy to construct a supersymmetric
completion of general higher derivative theories.
Off-shell superfield formalisms are useful to write down actions of supersymmetric higher derivative models.
In particular, the four-dimensional $\mathcal{N} = 1$ superfield formalism
that incorporates the chiral superfield $\Phi$ is a simple starting point.
It is, however,
known that not all the off-shell supersymmetric higher derivative models exhibit good physical
properties.
Off-shell formulations of
higher derivative terms often encounter
an auxiliary field problem;
chiral superfields with space-time derivatives
(e.g. $\partial_m \Phi$) sometimes introduce derivative interactions of
the auxiliary field $F$.
Consequently, the auxiliary fields become dynamical.
It is hard to eliminate them, and
the on-shell structure of the action
is not obvious.
For instance, the chiral Lagrangian of QCD contains
the Wess-Zumino-Witten (WZW) term
to reproduce the quantum anomaly at low energy.
However, a supersymmetric completion
of the WZW term
proposed in Ref.~\cite{Nemeschansky:1984cd}
suffers from this auxiliary field problem
\cite{Gates:1995fx,Nitta:2001rh}.
It was proposed in Ref.~\cite{Gates:1998si} that
a supersymmetric WZW term in superspace
can be constructed
without the auxiliary field problem
if the number of chiral superfields is doubled.\footnote{
The actual form of the WZW term was derived
in Refs.~\cite{Gates:2000rp,Banin:2006db}
and includes a K\"ahler tensor discussed in
the next section.
}
The auxiliary field problem
becomes even more severe once a superpotential is introduced,
so that one cannot introduce a scalar potential in this way.
Nevertheless,
supersymmetric higher derivative models of which the building blocks are the chiral
superfields are studied in various contexts.
Among other things, the chiral models studied in Refs.~\cite{AdQuSaGuWe, KhLeOv}
provide a good grounding for studying supersymmetric
higher derivative theories.
In this model, the auxiliary fields are not accompanied by the
space-time derivatives and therefore they can be
eliminated by their equations
of motion.
In principle, it is possible to write down the explicit on-shell
actions of the models.
In particular, the scalar potential that shows
up after eliminating the auxiliary fields becomes more transparent
\cite{SaYaYo}.
The coupling of higher derivative chiral models to supergravity was also achieved in this type of model
\cite{KoLeOv, FaKe}.
A supersymmetric DBI action was constructed in Ref.~\cite{RoTs}.
Other examples include
a supersymmetric completion of the $P(X,\varphi)$ model \cite{KhLeOv},
the supersymmetric Galileon inflation models \cite{KhLeOv2}
and models for the ghost condensation \cite{KoLeOv2}.
The same structure appears in quantum effective actions \cite{Buchbinder:1994iw, Kuzenko:2014ypa}.
A higher derivative supersymmetric
${\mathbb C}P^1$ model
free from the auxiliary field problem
was also considered previously
as a supersymmetric extension
\cite{BeNeSc,Fr} of the Faddeev-Skyrme model \cite{Faddeev:1996zj} and a
supersymmetric baby Skyrme model \cite{Adam:2011hj,AdQuSaGuWe}.
The formalism in Refs.~\cite{AdQuSaGuWe, KhLeOv} has been also applied
to the construction of manifestly supersymmetric
higher derivative corrections to supersymmetric
nonlinear realizations \cite{Nitta:2014fca}.
In the former half of this paper,
we study higher derivative chiral models
developed in Refs.~\cite{AdQuSaGuWe, KhLeOv}
in the superfield formalism,
where higher derivative terms can be introduced
as a tensor with two holomorphic and symmetric indices
and two anti-holomorphic and symmetric indices.
We find a surprising fact that has
been overlooked in past studies on the supersymmetric
completions of various higher derivative models.
The model with a single chiral superfield
admits a supersymmetric extension of {\it arbitrary} bosonic models that consist of a single complex scalar field.
As an example, we present
a supersymmetric extension of the
Faddeev-Skyrme model \cite{Faddeev:1996zj}.
The bosonic part of this model does
not contain four time derivatives.
This is in contrast to the
previously proposed supersymmetric extension
\cite{BeNeSc,Fr} of the Faddeev-Skyrme model
that contains an additional four derivative term
that includes four time derivatives.
Moreover, we point out
that an arbitrary scalar potential can be
introduced even without the superpotential.
We further work out the
higher derivative chiral models with superpotentials.
The resulting on-shell Lagrangians are highly non-linear.
We study perturbative analysis revealing
the possibility of ghost kinetic term and
deformations of the scalar potential.
Meanwhile, Bogomol'nyi-Prasad-Sommerfield (BPS)
topological solitons play important roles
in the study of non-perturbative dynamics of
supersymmetric field theories
since they break and preserve a fraction of supersymmetry, belong to short supermultiplets,
and consequently are stable against
quantum corrections \cite{Witten:1978mh}.
When a BPS soliton preserves $p/q$ of supersymmetry,
it is called a $p/q$ BPS soliton.
For instance, Yang-Mills instantons,
BPS monopoles, vortices, lumps and
domain walls \cite{Dvali:1996xe} are of $1/2$ BPS
and composite solitons such as
domain wall junctions are of 1/4 BPS
in theories with four supercharges \cite{GiTo,Oda:1999az,NaNiSa}
and eight supercharges \cite{Eto:2005cp}
(see Refs.~\cite{Shifman:2007ce,Eto:2006pg,Eto:2005sw}
as a review for a fraction of supersymmetry for BPS states).
BPS solitons remain important
in supersymmetric
field theories with higher derivative terms.
Prime examples of such
solitons include 1/2 BPS lumps in supersymmetric
${\mathbb C}P^1$ models with a
four-derivative term \cite{Eto:2012qda},
supersymmetric baby Skyrmions, which are compactons \cite{Adam:2011hj,AdQuSaGuWe},
and BPS compactons in K-field theories
\cite{AdQuSaGuWe2,AdQuSaGuWe3}.
The higher derivative ${\mathbb C}P^1$ model
in Ref.~\cite{Eto:2012qda}
appears as the effective theory of
a 1/2 BPS non-Abelian vortex
\cite{Hanany:2003hp}
in supersymmetric theories with eight supercharges.
Then, the 1/2 BPS lumps in the vortex
correspond to Yang-Mills instantons in the bulk
\cite{Eto:2004rz}.
While a few examples of BPS solitons in
higher derivative supersymmetric theories
have been studied thus far,
a systematic study of BPS solitons
in such theories is needed.
In the latter half of this paper, we give a general framework to
examine BPS states in supersymmetric higher derivative
chiral models. Our framework not only reproduces,
in a unified manner, a few remarkable previous studies of
the BPS bounds in the supersymmetric higher
derivative models admitting BPS baby Skyrmions \cite{Adam:2011hj,AdQuSaGuWe},
BPS compactons \cite{AdQuSaGuWe2,AdQuSaGuWe3},
and BPS lumps \cite{Eto:2012qda},
but also includes the more general cases with several new BPS states;
1/2 BPS domain walls,
1/4 BPS domain wall junctions,
1/2 and 1/4 BPS lumps and baby Skyrmions.
In particular, we find that the
BPS baby Skyrmions found in Ref.~\cite{AdQuSaGuWe}
are 1/4 BPS states.
We show that 1/2 BPS domain walls
and 1/2 BPS lumps do not receive
higher derivative corrections
while 1/4 BPS domain wall junctions do.
The organization of this paper is as follows.
In Sec.~\ref{sec:hdc}, we introduce the supersymmetric higher derivative
chiral model with four supercharges.
We write down the equation of motion for the auxiliary fields
and analyze the structure of the on-shell Lagrangians.
In particular, we introduce the superpotential and the deformation of
the scalar potential caused by the higher derivative terms is discussed.
We then examine BPS states that preserve 1/2 and 1/4 of the
original supersymmetry in subsequent sections.
The 1/2 BPS domain wall and 1/4 BPS
domain wall junctions
are studied in Sec.~\ref{sec:wall},
and 1/2 BPS and 1/4 BPS lumps are studied in Sec.~\ref{sec:lump}.
Section \ref{sec:conc} is devoted to conclusions and
discussion. Notations and conventions of superfields are found in
Appendix \ref{sec:notation}.
\section{Higher derivative chiral models}\label{sec:hdc}
In the first subsection, we present
general higher derivative chiral models
with multiple chiral superfields.
In the second subsection,
we further work out the models with
a single chiral superfield without and with
a superpotential.
\subsection{General chiral models}
We consider four-dimensional $\mathcal{N} = 1$
supersymmetric higher derivative chiral models that have specific
properties.
The Lagrangian consists of chiral superfields $\Phi^i$ $(i=1, \cdots,
N)$, for which the component expansion in the chiral base $y^m =
x^m + i \theta \sigma^m \bar{\theta}$ is
\begin{align}
\Phi^i (y,\theta) = \varphi^i (y)
+ \theta \psi^i (y) + \theta^2 F^i(y),
\end{align}
where $\varphi^i$ is the complex scalar field, $\psi^i$ is the Weyl
fermion and $F^i$ is the complex auxiliary field.
The notations and conventions of the chiral superfield are found in
Appendix
\ref{sec:notation}.
The supersymmetric Lagrangian with
higher derivative terms is given by
\begin{align}
\mathcal{L} =& \ \int \! d^4 \theta \ K (\Phi^i, \Phi^{\dagger \bar{j}})
+ \frac{1}{16} \int \! d^4 \theta \
\Lambda_{ik\bar{j} \bar{l}}
(\Phi,
\Phi^{\dagger})
D^{\alpha} \Phi^i
D_{\alpha} \Phi^k \bar{D}_{\dot{\alpha}} \Phi^{\dagger \bar j}
\bar{D}^{\dot{\alpha}} \Phi^{\dagger \bar{l}}
\notag \\
& + \left(\int \! d^2 \theta \ W(\Phi^i) + {\rm h.c.}\right)
\label{eq:Lagrangian}
\end{align}
where $K$ is the K\"ahler potential
and
$W$ is a superpotential as usual.
Higher derivative terms are produced by
the second term proportional to
$\Lambda_{ik\bar{j} \bar{l}}$,
which is
a $(2,2)$ K\"ahler tensor
symmetric in holomorphic and anti-holomorphic indices,
of which the components are
functions of $\Phi^i$ and $\Phi^{\dagger \bar{i}}$
(admitting space-time derivatives acting on them).\footnote{
This tensor term was obtained
in Ref.~\cite{Banin:2006db}
as a part of the supersymmetric Wess-Zumino-Witten term.
}
As we will see, the most important
feature
of this model is that the auxiliary fields never become
dynamical; the equation of motion for the auxiliary fields is an algebraic equation.
Now we examine the component structure of the model \eqref{eq:Lagrangian}.
The fourth derivative part of the Lagrangian \eqref{eq:Lagrangian} has
an essential property. This term is evaluated as
\begin{align}
D^{\alpha} \Phi^i D_{\alpha} \Phi^k \bar{D}_{\dot{\alpha}}
\Phi^{\dagger\bar{j}} \bar{D}^{\dot{\alpha}}
\Phi^{\dagger\bar{l}}
=& \ 16 \theta^2 \bar{\theta}^2
\left[
\frac{}{}
(\partial_m \varphi^i \partial^m \varphi^k) (\partial_n
\bar{\varphi}^{\bar{j}} \partial^n \bar{\varphi}^{\bar{l}})
- 2
\partial_m \varphi^i F^k
\partial^m \bar{\varphi}^{\bar{j}} \bar{F}^{\bar{l}}
+ F^i \bar{F}^{\bar{j}} F^k \bar{F}^{\bar{l}}
\right]
+ I_f,
\label{eq:4th}
\end{align}
where $I_f$ stands for terms that contain
fermion fields.
Since the bosonic part
of the right hand side of \eqref{eq:4th} saturates
the Grassmann coordinate $\theta^2 \bar{\theta}^2$,
only the lowest component of the tensor $\Lambda_{ik\bar{j} \bar{l}}$
contributes to the bosonic part of the Lagrangian.
Therefore the bosonic part of the Lagrangian \eqref{eq:Lagrangian} is
\begin{align}
\mathcal{L}_b =& \
\frac{\partial^2 K}{\partial \varphi^i \partial \bar{\varphi}^{\bar{j}}}
(- \partial_m \varphi^i \partial^m \bar{\varphi}^{\bar{j}} + F^i
\bar{F}^{\bar{j}} )
+ \frac{\partial W}{\partial \varphi^i} F^i + \frac{\partial
\bar{W}}{\partial \bar{\varphi}^{\bar{j}}} \bar{F}^{\bar{j}}
\notag \\
& + \Lambda_{ik\bar{j} \bar{l}} (\varphi, \bar{\varphi})
\left[
\frac{}{}
(\partial_m \varphi^i \partial^m \varphi^k) (\partial_n
\bar{\varphi}^{\bar{j}} \partial^n \bar{\varphi}^{\bar{l}})
- 2
\partial_m \varphi^i F^k
\partial^m \bar{\varphi}^{\bar{j}} \bar{F}^{\bar{l}}
+ F^i \bar{F}^{\bar{j}} F^k \bar{F}^{\bar{l}}
\right].
\label{eq:comLagrangian}
\end{align}
This Lagrangian exhibits a higher derivative model that has the following properties:
(I) the higher derivative terms are governed by the tensor
$\Lambda_{ik\bar{j} \bar{l}}$, and (II) the model is manifestly
(off-shell) supersymmetric and K\"ahler invariant provided that $K$ and $W$
are scalars and $\Lambda_{ik\bar{j} \bar{l}}$ is a tensor.
Among other things, the auxiliary fields do not have a space-time
derivative\footnote{This is true only for the purely bosonic terms.
There are derivative interactions of the auxiliary fields in the
fermionic contributions $I_f$ \cite{KhLeOv}.
They are irrelevant when classical configurations of fields are concerned.
}
and they are eliminated by the following equation of motion:
\begin{align}
\frac{\partial^2 K}{\partial \varphi^i \partial \bar{\varphi}^{\bar{j}}}
F^i - 2 \partial_m \varphi^i F^k \Lambda_{ik\bar{j} \bar{l}}
\partial^m \bar{\varphi}^{\bar{l}} +
2 \Lambda_{ik\bar{j} \bar{l}} F^i F^k \bar{F}^{\bar{l}}
+ \frac{\partial \bar{W}}{\partial \bar{\varphi}^{\bar{j}}}
= 0. \label{eq:af-eom}
\end{align}
This is an algebraic equation and is, in principle, solvable.
However, the equation \eqref{eq:af-eom} is a simultaneous cubic
equation, and it is hard to find explicit solutions $F^i$.
We comment that when $W=0$, $F^i = 0$ is always a solution.
In this case, the on-shell Lagrangian becomes
\begin{align}
\mathcal{L}_b = - \frac{\partial^2 K}{\partial \varphi^i \partial
\bar{\varphi}^{\bar{j}}} \partial_m \varphi^i \partial^m
\bar{\varphi}^{\bar{j}}
+ \Lambda_{i k \bar{j} \bar{l}}
(\partial_m \varphi^i \partial^m \varphi^k) (\partial_n \bar{\varphi}^{\bar{j}}
\partial^n \bar{\varphi}^{\bar{l}}).
\end{align}
In general, there are solutions other than $F^i = 0$,
which we will show explicitly for models with one component field.
\subsection{Chiral models of one component}
Now we consider the single chiral superfield $\Phi$ for simplicity.
The equation of motion for the auxiliary field becomes
\begin{eqnarray}
\begin{aligned}
& K_{\varphi \bar{\varphi}} F
- 2 F
\left(
\partial_m \varphi \partial^m \bar{\varphi} - F \bar{F}
\right) \Lambda (\varphi, \bar{\varphi})
+ \frac{\partial \bar{W}}{\partial
\bar{\varphi}} = 0. \label{eq:af-eom2}
\end{aligned}
\label{eq:aux_eq}
\end{eqnarray}
Here $K_{\varphi \bar{\varphi}} = \frac{\partial^2 K}{\partial \varphi
\partial \bar{\varphi}}$.
We solve the equation \eqref{eq:aux_eq} in the $W=0$ and $W\not=0$ cases separately.
\subsubsection{$W = 0$ case}
When there is no superpotential, the equation for the auxiliary field becomes
\begin{align}
K_{\varphi \bar{\varphi}} F
- 2 F
\left(
\partial_m \varphi \partial^m \bar{\varphi} - F \bar{F}
\right) \Lambda
= 0.
\label{eq:eom_auxiliary_no_supot}
\end{align}
Then the solutions are found to be
\begin{align}
F
=& \ 0,
\label{eq:first_sol}
\\
F \bar{F} =& \ - \frac{K_{\varphi \bar{\varphi}}}{2 \Lambda} + \partial_m \varphi \partial^m
\bar{\varphi}.
\label{eq:second_sol}
\end{align}
There are
two different on-shell branches associated with
the solutions \eqref{eq:first_sol} and \eqref{eq:second_sol}.
For the first solution \eqref{eq:first_sol}, the bosonic part of the on-shell Lagrangian is
\begin{align}
\mathcal{L}_{1b} = - K_{\varphi \bar{\varphi}} \partial_m \varphi \partial^m \bar{\varphi} +
(\partial_m \varphi \partial^m \varphi) (\partial_n \bar{\varphi}
\partial^n \bar{\varphi}) \Lambda.
\label{eq:W0_osl1}
\end{align}
The first term is the ordinary kinetic term and the second term contains
higher derivative correction terms. We call this the canonical branch.
An example of the model is the $\mathcal{N} = 1$ supersymmetric DBI
action for the world-volume theory of a single D3-brane.
The corresponding K\"ahler metric is canonical, $K_{\varphi \bar{\varphi}} = 1$, and
the function $\Lambda$
is given by \cite{RoTs}
\begin{align}
\Lambda =
\frac{1}{
1 + A + \sqrt{(1 + A)^2 - B}
}, \quad
A = \partial_m \Phi \partial^m \Phi^{\dagger}, \quad
B = \partial_m \Phi \partial^m \Phi \partial_n \Phi^{\dagger} \partial^n \Phi^{\dagger}.
\end{align}
Other examples include
a supersymmetric completion of the $P(X,\varphi)$ model \cite{KhLeOv},
the supersymmetric Galileon inflation models \cite{KhLeOv2}
and models for the ghost condensation \cite{KoLeOv2}.
Another example of $\Lambda$ that has been overlooked in the
literature \cite{Adam:2011hj, AdQuSaGuWe, BeNeSc, Fr}
is
\begin{align}
\Lambda = \kappa (\partial_m \Phi \partial^m \Phi \partial_n
\Phi^{\dagger} \partial^n \Phi^{\dagger})^{-1}
\frac{1}{(1 + \Phi \Phi^{\dagger})^4}
\left[
(\partial_m \Phi^{\dagger} \partial^m \Phi)^2
-
\partial_m \Phi \partial^m \Phi \partial_n \Phi^{\dagger} \partial^n \Phi^{\dagger}
\right],
\end{align}
where $\kappa$ is a parameter.
Then, with the Fubini-Study metric $K_{\varphi \bar{\varphi}} =
\frac{1}{(1 + |\varphi|^2)^2}$ for the $\mathbb{C}P^1$ model,
the bosonic part of the Lagrangian becomes
\begin{align}
\mathcal{L}_{1b} = - \frac{\partial_m \varphi \partial^m
\bar{\varphi}}{(1 + |\varphi|^2)^2}
+ \kappa \frac{(\partial_m \varphi \partial^m \bar{\varphi})^2 -
|\partial_m \varphi \partial^m \varphi|^2}{(1 + |\varphi|^2)^4}.
\label{eq:FS}
\end{align}
This is nothing but the Faddeev-Skyrme model
\cite{Faddeev:1996zj}.
Previous attempts to construct
an ${\cal N}=1$ supersymmetric extension
of the Faddeev-Skyrme model concluded that
one needs an extra four-derivative term
containing four time derivatives \cite{BeNeSc, Fr},
while the Lagrangian in Eq.~\eqref{eq:FS} does not contain such a term.
It was discussed in Ref.~\cite{Fr} that such a term destabilizes Hopfions (knot solitons).
Therefore, the Lagrangian \eqref{eq:Lagrangian} provides an $\mathcal{N} =1$ supersymmetric extension
of the Faddeev-Skyrme model
without four time derivatives, which is expected to give
stable Hopfions.
More generally, since the function $\Lambda$ is completely arbitrary, one can construct
a supersymmetric extension of {\it any} bosonic model that consists of a
complex scalar field $\varphi$.
More surprisingly, we further point out
that it is also possible to introduce an arbitrary scalar potential
$V(\varphi,\bar{\varphi})$
even without superpotentials,
by choosing $\Lambda$ as
\begin{align}
\Lambda = - (\partial_m \Phi \partial^m \Phi \partial_n
\Phi^{\dagger} \partial^n \Phi^{\dagger})^{-1} V(\Phi,\Phi^\dagger) .
\end{align}
However, as we will clarify later,
superpotentials play an important role when one considers BPS
solutions.
On the other hand, for the second solution \eqref{eq:second_sol},
the bosonic part of the on-shell Lagrangian is
\begin{align}
\mathcal{L}_{2b} = \left(
|\partial_m \varphi \partial^m \varphi |^2
- (\partial_m \varphi \partial^m \bar{\varphi})^2
\right) \Lambda - \frac{(K_{\varphi \bar{\varphi}})^2}{4 \Lambda}.
\label{eq:W0_osl2}
\end{align}
In this branch, the canonical kinetic term disappears.\footnote{
When $\Lambda$ is chosen as $\Lambda = - \left(
|\partial_m \varphi \partial^m \varphi |^2
- (\partial_m \varphi \partial^m \bar{\varphi})^2
\right)^{-1} \partial_n \varphi \partial^n \bar{\varphi}$, the canonical
kinetic term is recovered.
However quite non-linear higher derivative terms remain in the
Lagrangian due to the factor $1/\Lambda$.
This possibility was discussed in the context of higher derivative
supergravity models \cite{KoLeOv}.
}
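The algebra behind Eq.~\eqref{eq:W0_osl2} is straightforward to check. A minimal sympy sketch, treating the invariants $X = \partial_m \varphi \partial^m \bar{\varphi}$ (real) and $t = \partial_m \varphi \partial^m \varphi$ as independent symbols and substituting $F \bar{F}$ from Eq.~\eqref{eq:second_sol} into the bosonic terms $-K_{\varphi \bar{\varphi}} X + \Lambda\, t \bar{t} + F \bar{F} (-K_{\varphi \bar{\varphi}} + 2 \Lambda X) - 3 (F \bar{F})^2 \Lambda$:

```python
import sympy as sp

# invariants as commuting symbols: X = d_m phi d^m phi-bar (real),
# t = d_m phi d^m phi (complex), K = Kahler metric, L = Lambda
X, K, L = sp.symbols('X K Lambda', real=True)
t = sp.symbols('t')
tt = t * sp.conjugate(t)                 # |d_m phi d^m phi|^2

FFbar = -K/(2*L) + X                     # second solution, eq:second_sol

# bosonic Lagrangian with F F-bar inserted
Lb = -K*X + L*tt + FFbar*(-K + 2*L*X) - 3*L*FFbar**2

# the non-canonical branch Lagrangian, eq:W0_osl2
target = (tt - X**2)*L - K**2/(4*L)
assert sp.simplify(sp.expand(Lb - target)) == 0
```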
This model was first studied in
Ref.~\cite{AdQuSaGuWe} where supersymmetric
extensions of the baby Skyrme model are discussed.
We note that the second branch \eqref{eq:W0_osl2}
does not have the smooth limit to the canonical theory ($\Lambda \to 0$).
Therefore we call this the non-canonical branch.
Since $F \bar{F}$ should be positive semi-definite,
the second solution \eqref{eq:second_sol} is consistent only in the region
\begin{align}
- \frac{K_{\varphi \bar{\varphi}}}{2 \Lambda} + \partial_m \varphi
\partial^m \bar{\varphi} \ge 0.
\label{eq:consistency}
\end{align}
We comment on the last term in Eq.~\eqref{eq:W0_osl2}.
The term
$(K_{\varphi \bar{\varphi}})^2/4 \Lambda$ can be regarded as a scalar potential term since it survives
even when the function $\Lambda$ does not depend on the derivatives of the fields.
For a vacuum configuration, the condition \eqref{eq:consistency}
implies $\Lambda < 0$ for the positive definite K\"ahler metric
$K_{\varphi \bar{\varphi}} > 0$.
Then the scalar potential at a vacuum becomes negative
even for the manifestly supersymmetric construction of the model.
One resolution of this puzzle is the existence of
ghosts, i.e., fields with a kinetic term of the wrong sign.
However it is not obvious whether ghosts exist or not
since there is no kinetic term in the Lagrangian \eqref{eq:W0_osl2} and
no consistent free theory is defined.
In that case, $K$ loses its meaning as the K\"ahler potential, and
it is the function $K$ that determines the sign of the potential energy.
When $K_{\varphi \bar{\varphi}}$ is negative, $\Lambda$ and the scalar potential
become positive.
Actually, choosing the functions of $K$ and $\Lambda$ appropriately, one can
construct scalar potentials that have desired properties
\cite{AdQuSaGuWe}.
\subsubsection{$W \neq 0$ case}
When $W\not=0$, one eliminates $\bar{F}$ in \eqref{eq:aux_eq} and obtains
the equation for the auxiliary field $F$:
\begin{eqnarray}
2 \Lambda (\varphi, \bar{\varphi}) \frac{\partial W}{\partial \varphi} F^3
+
\frac{\partial \bar{W}}{\partial \bar{\varphi}}
\left(
K_{\varphi \bar{\varphi}} - 2 \Lambda (\varphi, \bar{\varphi}) \partial_m \varphi \partial^m \bar{\varphi}
\right) F
+
\left(
\frac{\partial \bar{W}}{\partial \bar{\varphi}}
\right)^2 = 0.
\label{eq:aux_eom}
\end{eqnarray}
When there are no higher derivative corrections, $\Lambda = 0$,
one recovers the ordinary $F$-term solution $F = - \frac{1}{K_{\varphi \bar{\varphi}}}
\frac{\partial \bar{W}}{\partial \bar{\varphi}}$.
Since Eq.~\eqref{eq:aux_eom} is a cubic algebraic equation,
its solutions are obtained by Cardano's method \cite{SaYaYo},
\begin{eqnarray}
& & F = \omega^k
\sqrt[3]{
- \frac{q}{2}
+ \sqrt{\left(\frac{q}{2}\right)^2 + \left(\frac{p}{3}\right)^3}}
+ \omega^{3-k}
\sqrt[3]{
- \frac{q}{2}
- \sqrt{\left(\frac{q}{2}\right)^2 + \left(\frac{p}{3}\right)^3}},
\notag \\
& & k = 0,1,2, \ \omega^3 = 1,
\label{eq:aux_3sol}
\end{eqnarray}
where $\omega$ is a cube root of unity and $p$ and $q$ are given by
\begin{eqnarray}
p &=& \frac{1}{2 \Lambda (\varphi, \bar{\varphi})}
\left(
\frac{\partial W}{\partial \varphi}
\right)^{-1}
\left(
\frac{\partial \bar{W}}{\partial \bar{\varphi}}
\right)
\left(
K_{\varphi \bar{\varphi}} - 2 \Lambda (\varphi, \bar{\varphi}) \partial_m \varphi \partial^m \bar{\varphi}
\right), \\
q &=& \frac{1}{2 \Lambda (\varphi, \bar{\varphi})}
\left(
\frac{\partial W}{\partial \varphi}
\right)^{-1}
\left(
\frac{\partial \bar{W}}{\partial \bar{\varphi}}
\right)^2.
\end{eqnarray}
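As a sanity check on Eq.~\eqref{eq:aux_3sol}, one can verify numerically that all three branches solve the depressed cubic $F^3 + p F + q = 0$. A minimal Python sketch (the two cube roots must be paired so that their product equals $-p/3$, the standard constraint in Cardano's method; the sample values of $p$ and $q$ are arbitrary):

```python
import cmath

def cardano_roots(p, q):
    """Three roots of F^3 + p F + q = 0 via Cardano's formula."""
    omega = cmath.exp(2j * cmath.pi / 3)      # primitive cube root of unity
    disc = cmath.sqrt((q / 2) ** 2 + (p / 3) ** 3)
    u = complex(-q / 2 + disc) ** (1 / 3)     # principal cube root
    if abs(u) < 1e-30:                        # degenerate branch: use the other sign
        u = complex(-q / 2 - disc) ** (1 / 3)
    v = -p / (3 * u)                          # fixed by the constraint u v = -p/3
    return [omega ** k * u + omega ** (3 - k) * v for k in (0, 1, 2)]

# generic complex sample for (p, q)
p, q = 1.3 - 0.4j, -0.7 + 0.2j
for F in cardano_roots(p, q):
    assert abs(F ** 3 + p * F + q) < 1e-9
```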
The on-shell Lagrangian is obtained by substituting the solutions of the
auxiliary field into the Lagrangian \eqref{eq:comLagrangian},
\begin{align}
\mathcal{L}_{b} =& - \frac{\partial^2 K}{\partial \varphi \partial \bar{\varphi}}
\partial_m \varphi \partial^m \bar{\varphi}
+ (\partial_m \varphi \partial^m \varphi) (\partial_n \bar{\varphi}
\partial^n \bar{\varphi}) \Lambda
\notag \\
&
+ \tilde{F} \bar{\tilde{F}} (- K_{\varphi \bar{\varphi}} + 2 \Lambda \partial_m \varphi \partial^m
\bar{\varphi}) - 3 (\tilde{F} \bar{\tilde{F}})^2 \Lambda,
\label{eq:on-shell_Lagrangian}
\end{align}
where $\tilde{F}$ ($\bar{\tilde{F}}$) is one of the solutions in
\eqref{eq:aux_3sol}.
Therefore, there are three different on-shell branches in this model.
We note that although the model is corrected by higher derivative terms,
supersymmetry requires correction terms in the scalar potential that
do not contain derivatives.
In particular, the scalar potential of the model is
calculated to be
\begin{align}
V (\varphi) = |\tilde{F}|^2 (K_{\varphi \bar{\varphi}} + 3 \Lambda^{(0)} |\tilde{F}|^2).
\label{eq:potential}
\end{align}
Here $\Lambda^{(0)}$ is the function $\Lambda$ evaluated at $\partial_m \varphi
= 0$.
We note that even for the manifestly supersymmetric Lagrangian
\eqref{eq:Lagrangian} with the positive K\"ahler metric $K_{\varphi
\bar{\varphi}}$, a negative scalar potential is possible when
$\Lambda^{(0)} < 0$.
Again, this fact would be an indication of ghost states in the theory.
As we will see below, the on-shell Lagrangian potentially includes ghost states.
Now we examine the structure of the on-shell Lagrangians in each
branch. To see the effects of superpotentials,
we write down the explicit on-shell component Lagrangian.
In particular, we examine the relation between the positive
definiteness of the scalar potential and the ghost states.
A similar analysis about the scalar potential
was performed in the context of the
four-dimensional $\mathcal{N} = 1$ supergravity \cite{KoLeOv},
in which negative potentials are not problematic.
On the other hand,
negative potentials could be problematic
for the rigid supersymmetric case,
on which we focus here.
We note that when $W \not= 0$,
a solution $F=0$ is not allowed.
We first consider
the canonical branch
where the solution of the auxiliary field \eqref{eq:aux_3sol} has
the smooth limit $\Lambda \to 0$ \cite{SaYaYo}.
We look for a perturbative expression of the Lagrangian for small $\Lambda$.
The solution of the auxiliary field is expanded as
\begin{align}
F = F_0 + \alpha F_1 + \alpha^2 F_2 + \cdots
\end{align}
where $\alpha$ is a bookkeeping parameter for the small-$\Lambda$ expansion and
$F_0$ is the solution at $\alpha = 0$ ($\Lambda = 0$). This is given by
\begin{align}
F_0 = - (K_{\varphi \bar{\varphi}})^{-1} \bar{W}'.
\end{align}
Here $W' = \frac{\partial W}{\partial \varphi}$ and $\bar{W}'$ is the
complex conjugate of $W'$.
The explicit forms of $F_1$ and $F_2$ are obtained iteratively. They are
found to be
\begin{align}
F_1 =& \ \frac{2 \Lambda \bar{W}'}{(K_{\varphi \bar{\varphi}})^2}
\left[
\frac{W' \bar{W}'}{(K_{\varphi \bar{\varphi}})^2}
- \partial_m \varphi \partial^m \bar{\varphi}
\right], \\
F_2 =& \ - \frac{4 \Lambda^2 \bar{W}'}{(K_{\varphi \bar{\varphi}})^7}
\left( \bar{W}' W' - (K_{\varphi \bar{\varphi}})^2 \partial_m \varphi \partial^m \bar{\varphi} \right)
\left\{
3 W' \bar{W}' - (K_{\varphi \bar{\varphi}})^2 \partial_m \varphi
\partial^m \bar{\varphi}
\right\}.
\end{align}
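One can verify this expansion order by order with a computer algebra system. A minimal sympy sketch, treating $W'$, $\bar{W}'$, $K_{\varphi \bar{\varphi}}$ and $\partial_m \varphi \partial^m \bar{\varphi}$ as independent commuting symbols and checking through first order in $\Lambda$:

```python
import sympy as sp

# symbols: a = bookkeeping parameter, L = Lambda, K = Kahler metric,
# Wp = W', Wb = bar W', X = d_m phi d^m phi-bar
a, L, K, Wp, Wb, X = sp.symbols('a Lambda K Wp Wb X')

F0 = -Wb / K
F1 = 2*L*Wb/K**2 * (Wp*Wb/K**2 - X)
F = F0 + a*F1

# auxiliary-field equation (eq:aux_eom), with Lambda -> a*Lambda for bookkeeping
eq = sp.expand(2*a*L*Wp*F**3 + Wb*(K - 2*a*L*X)*F + Wb**2)

assert sp.simplify(eq.coeff(a, 0)) == 0   # O(1): F0 solves the Lambda = 0 equation
assert sp.simplify(eq.coeff(a, 1)) == 0   # O(Lambda): F1 is the correct correction
```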
Then, substituting these solutions for the auxiliary field $F$ into the
Lagrangian \eqref{eq:on-shell_Lagrangian}, we obtain the
on-shell Lagrangian (we take $\alpha = 1$ for simplicity):
\begin{align}
\mathcal{L}_b =& - K_{\varphi \bar{\varphi}} \partial_m \varphi \partial^m \bar{\varphi}
- \frac{2 \Lambda V_0}{K_{\varphi \bar{\varphi}}} \partial_m \varphi \partial^m \bar{\varphi}
+ \frac{8 \Lambda^2 V_0^2}{(K_{\varphi \bar{\varphi}})^3} \partial_m \varphi \partial^m \bar{\varphi}
\notag \\
&
+ \Lambda |\partial_m \varphi \partial^m \varphi|^2
- \frac{4 V_0 \Lambda^2}{(K_{\varphi \bar{\varphi}})^2} (\partial_m \varphi \partial^m \bar{\varphi})^2
\notag \\
&
- V_0
+ \frac{\Lambda V_0^2}{(K_{\varphi \bar{\varphi}})^2}
- \frac{4 \Lambda^2 V_0^3}{(K_{\varphi \bar{\varphi}})^4}
+ \mathcal{O} (\alpha^4).
\end{align}
Here $V_0 = \frac{1}{K_{\varphi \bar{\varphi}}} | W' |^2$ is the ordinary scalar potential in the
supersymmetric chiral models.
We note that the scalar potential is deformed by the non-zero
$\Lambda$ and the vacuum structure clearly depends on the structure of
the function $\Lambda$.
The examples of the deformed scalar potentials are found in Fig.~\ref{fig:potential}.
\begin{figure}[t]
\begin{center}
\subfigure[$\Lambda = 1$]
{
\includegraphics[scale=.5]{pot1.eps}
}
\subfigure[$\Lambda = \varphi \bar{\varphi}$]
{
\includegraphics[scale=.5]{pot2.eps}
}
\subfigure[$\Lambda = (\varphi \bar{\varphi})^{-1}$]
{
\includegraphics[scale=.5]{pot3.eps}
}
\end{center}
\caption{
Examples of the deformed potentials $V(|\varphi|)$ for $K_{\varphi
\bar{\varphi}}$, $W = \Phi - \frac{1}{3} \Phi^3$.
The upper (blue) lines represent the undeformed potentials,
while the lower (red) lines are the deformed ones. The figures correspond to the $k=0$ solution.
}
\label{fig:potential}
\end{figure}
The Lagrangian contains an infinite number of the higher
derivative terms that are induced by non-zero $\Lambda$ and $W$.
The structure of the derivative terms is completely determined by supersymmetry.
We point out that even the canonical kinetic term is deformed by $\Lambda$.
Up to $\mathcal{O} (\Lambda^2)$, it is given by
\begin{align}
\mathcal{L}_K = -
\left[
K_{\varphi \bar{\varphi}} + \frac{2 \Lambda V_0}{K_{\varphi \bar{\varphi}}}
- \frac{8 \Lambda^2 V_0^2}{(K_{\varphi \bar{\varphi}})^3}
\right]
\partial_m \varphi \partial^m \bar{\varphi}
+ \mathcal{O} (\Lambda^3).
\label{eq:lag-pot}
\end{align}
Since $\Lambda$ is an arbitrary function, the sign of the kinetic
term can be flipped even for the positive
definite K\"ahler metric
$K_{\varphi \bar{\varphi}}$. If the sign of the kinetic term flips, ghost
states appear in the theory \cite{AnDuGh}.
In that case, the model exhibits an instability caused by the higher
derivatives. This is why the potential
\eqref{eq:potential} can fail to be positive semi-definite even in supersymmetric theories.
The sign of the kinetic term depends on the explicit forms of the functions $\Lambda$ and $W$.
Although it is important and interesting,
we do not pursue the (non-)existence of the ghost states in this paper.
We also note that the metric of the
target space of the nonlinear sigma model
in the Lagrangian (\ref{eq:lag-pot})
does not have to be K\"ahler anymore
even though it is ${\cal N}=1$ supersymmetric.
Next, we study the effect of the superpotential in the non-canonical branch
associated with the solution \eqref{eq:second_sol}.
Since we cannot take the $\Lambda \to 0$ limit, we consider the small $W$ perturbation
around $W = 0$.
The solution of the auxiliary field is expanded as
\begin{align}
F = F_0' + \beta F_1' + \beta^2 F_2' + \cdots,
\end{align}
where $\beta$ is a parameter associated with the small $W$ expansion and
\begin{align}
F_0' = \sqrt{ - \frac{K_{\varphi \bar{\varphi}}}{2 \Lambda} + \partial_m \varphi \partial^m \bar{\varphi}}.
\end{align}
Here we choose a real solution for $F_0'$. Using the $U(1)_R$ symmetry, we
take the superpotential to be real and positive.
Then, the solutions $F_1'$ and $F_2'$ are found to be
\begin{align}
F_1' =& \ - \frac{W'}{4 \Lambda}
\left(
- \frac{K_{\varphi \bar{\varphi}}}{2 \Lambda} + \partial_m \varphi \partial^m \bar{\varphi}
\right)^{-1}, \\
F_2' =& \ - \frac{3 (W')^2}{32 \Lambda^2}
\left(
- \frac{K_{\varphi \bar{\varphi}}}{2 \Lambda} + \partial_m \varphi \partial^m \bar{\varphi}
\right)^{- \frac{5}{2}}.
\end{align}
The on-shell Lagrangian is
\begin{align}
\mathcal{L}_b =& \
\left(
|\partial_m \varphi \partial^m \varphi |^2 - (\partial_m \varphi
\partial^m \bar{\varphi})^2
\right) \Lambda - \frac{(K_{\varphi \bar{\varphi}})^2}{4 \Lambda}
\notag \\
& \ - 2 (K_{\varphi \bar{\varphi}} V_0)^{\frac{1}{2}}
\left(
- \frac{K_{\varphi \bar{\varphi}}}{2 \Lambda} + \partial_m \varphi \partial^m \bar{\varphi}
\right)^{\frac{1}{2}}
- \frac{K_{\varphi \bar{\varphi}} V_0}{16 \Lambda}
\left(
- \frac{K_{\varphi \bar{\varphi}}}{2 \Lambda} + \partial_m \varphi \partial^m \bar{\varphi}
\right)^{-1}
+ \mathcal{O} (\beta^3).
\end{align}
We can observe that the scalar potential $(K_{\varphi
\bar{\varphi}})^2/4\Lambda$ is deformed by the superpotential $W$.
Finally, a comment is in order.
We started from the four-dimensional theory.
However, the lower dimensional models, such as
three-dimensional $\mathcal{N} =2$
and two-dimensional $\mathcal{N} =(2,2)$
theories can be easily obtained by
the dimensional reduction.
Actually, the $W=0$ case corresponds to
the three-dimensional $\mathcal{N} = 2$ models discussed in Ref.~\cite{AdQuSaGuWe}.
\section{BPS domain walls and their junction}\label{sec:wall}
In this and the next sections,
we study BPS configurations in the supersymmetric higher
derivative chiral models discussed in the previous section.
Since we consider models with scalar fields, we focus on the BPS domain walls and lumps in the following.
BPS equations that preserve a fraction of supersymmetry are obtained
from the condition that the on-shell supersymmetry transformation of the
fermion vanishes $\delta_{\xi}^{\text{on}} \psi_{\alpha} = 0$.
Here $\delta^{\text{on}}_{\xi}$ represents the on-shell supersymmetry
transformation by parameters $\xi_{\alpha}$, $\bar{\xi}^{\dot{\alpha}}$.
The off-shell supersymmetry transformation $\delta^{\text{off}}_{\xi}$ of the fermion is given by
\begin{equation}
\delta_{\xi}^{\text{off}} \psi_{\alpha}
= i \sqrt{2} (\sigma^m)_{\alpha \dot{\alpha}}
\bar{\xi}^{\dot{\alpha}} \partial_m \varphi
+ \sqrt{2} \xi_{\alpha} F.
\label{eq:SUSY_transformation}
\end{equation}
By substituting a solution of the auxiliary field $F$ into
$\delta^{\text{off}}_{\xi} \psi_{\alpha} = 0$ and assuming
a specific field configuration together with appropriate Killing
spinor conditions on $\xi_{\alpha}$, $\bar{\xi}^{\dot{\alpha}}$, we find
corresponding on-shell BPS equations.
Since there are several branches associated with the solutions for $F$ in our
model, we study each branch separately.
\subsection{1/2 BPS domain walls}
When a scalar field model with an ordinary canonical kinetic term has a potential with
several
vacua, there is a domain wall solution that interpolates
between these vacua.
We look for 1/2 BPS domain wall solutions in the higher derivative model \eqref{eq:Lagrangian}.
We consider domain wall configurations of the complex scalar field $\varphi$.
Namely, the field depends on only one direction, $\varphi = \varphi(x^1)$.
We first consider the case in which the superpotential exists.
In this case, the solution $F=0$ is not allowed. Therefore, we
generically consider the $F \not=0$ branch.
The Killing spinor condition for the 1/2 BPS domain wall configuration is \cite{Dvali:1996xe}
\begin{eqnarray}
\xi_{\alpha} = - i e^{i \eta} (\sigma^1)_{\alpha \dot{\alpha}}
\bar{\xi}^{\dot{\alpha}}.
\end{eqnarray}
Here $\eta$ is a phase factor.
Then the off-shell BPS equation is
\begin{eqnarray}
\partial_1 \varphi = e^{i \eta } F.
\label{eq:BPS_domain_wall_os}
\end{eqnarray}
By plugging a solution in \eqref{eq:aux_3sol} into the right
hand side of the equation \eqref{eq:BPS_domain_wall_os} and arranging the
resulting condition by $\partial_1 \varphi$, we obtain the on-shell BPS condition.
Here, instead, we use the equation of motion for the auxiliary
field $F$ in order to exhibit a property common to all three solutions \eqref{eq:aux_3sol}.
Substituting the BPS condition \eqref{eq:BPS_domain_wall_os} into the equation of motion for $F$, we obtain
\begin{align}
K_{\varphi \bar{\varphi}} e^{-i\eta} \partial_1 \varphi +
\left\{
- 2 e^{-i \eta} \partial_1 \varphi \cdot \partial_1 \varphi \partial_1
\bar{\varphi}
+ 2 e^{-2i\eta} (\partial_1 \varphi)^2 e^{i\eta} \partial_1 \bar{\varphi}
\right\}
\Lambda
+ \frac{\partial \bar{W}}{\partial \bar{\varphi}}
= 0.
\end{align}
The higher derivative terms including $\Lambda$ cancel out and we obtain
the on-shell BPS equation
\begin{align}
K_{\varphi \bar{\varphi}} \partial_1 \varphi + e^{i\eta} \frac{\partial \bar{W}}{\partial
\bar{\varphi}} = 0.
\label{eq:BPS_domain_wall}
\end{align}
Equation~\eqref{eq:BPS_domain_wall} is
nothing but the ordinary BPS condition (without higher derivative terms) for the domain wall.
This result suggests that
even though there are three different on-shell branches in the model, the BPS
domain wall cannot distinguish them.
Furthermore,
the on-shell energy density of the domain wall is evaluated as
\begin{align}
\mathcal{E} =& \ K_{\varphi \bar{\varphi}} |\partial_1 \varphi |^2 - |\partial_1 \varphi |^4 \Lambda
- | \partial_1 \varphi |^2 (- K_{\varphi \bar{\varphi}} + 2 \Lambda |\partial_1 \varphi |^2) +
3 |\partial_1 \varphi |^4 \Lambda
\notag \\
=& - e^{-i\eta} \partial_1 W + h.c.
\end{align}
The last expression gives the tension of the ordinary BPS domain wall.
Therefore we conclude that all the higher derivative corrections
to the solutions and energy are canceled out in the BPS domain walls.
This is a consequence of the fact that the configuration depends on only one direction.
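Both the $\Lambda$-cancellation in the energy density and the BPS equation itself are easy to check symbolically. A minimal sympy sketch; for the worked example we assume the canonical metric $K_{\varphi \bar{\varphi}} = 1$, the superpotential $W = \Phi - \frac{1}{3} \Phi^3$ of Fig.~\ref{fig:potential}, and the phase choice $e^{i\eta} = -1$, for which the wall profile is the familiar kink $\varphi = \tanh x^1$:

```python
import sympy as sp

# 1) Lambda-independence of the wall energy density,
#    E = K u - L u^2 - u (-K + 2 L u) + 3 L u^2, with u = |d_1 phi|^2
K, L, u = sp.symbols('K Lambda u', positive=True)
E = K*u - L*u**2 - u*(-K + 2*L*u) + 3*L*u**2
assert sp.simplify(E - 2*K*u) == 0            # all Lambda terms cancel

# 2) worked example: K = 1, W = phi - phi^3/3, e^{i eta} = -1 (assumptions).
#    The BPS equation (eq:BPS_domain_wall) reduces to d phi/dx = 1 - phi^2,
#    solved by the kink phi = tanh x.
x = sp.symbols('x', real=True)
phi = sp.tanh(x)
assert sp.simplify(sp.diff(phi, x) - (1 - phi**2)) == 0

# 3) tension: integral of 2 |phi'|^2 equals 2 [W(1) - W(-1)] = 8/3,
#    computed via the antiderivative G with G' = 2 (1 - tanh^2 x)^2
G = 2*(sp.tanh(x) - sp.tanh(x)**3/3)
assert sp.simplify(sp.diff(G, x) - 2*(1 - sp.tanh(x)**2)**2) == 0
tension = sp.limit(G, x, sp.oo) - sp.limit(G, x, -sp.oo)
assert tension == sp.Rational(8, 3)
```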
It is easy to confirm that the solutions to the BPS condition
\eqref{eq:BPS_domain_wall_os} together with the equation of motion for
the auxiliary field \eqref{eq:BPS_domain_wall} satisfy the full equation
of motion for the scalar field
\footnote{We have assumed that $\Lambda$ does not depend on
second or higher space-time derivatives of $\varphi$.}
\begin{align}
& - \frac{\partial^3 K}{\partial \varphi \partial^2 \bar{\varphi}}
( |\partial_m \varphi|^2 - |F|^2) + \frac{\partial^2 \bar{W}}{\partial
\bar{\varphi}^2} \bar{F}
+ \left[
|\partial_m \varphi \partial^m \varphi |^2 - 2 |F|^2 |\partial_m
\varphi|^2 + |F|^4
\right] \frac{\partial \Lambda}{\partial \bar{\varphi}}
\notag \\
& - \partial_m
\left[
- K_{\varphi \bar{\varphi}}
\partial^m \varphi
+ 2 \Lambda ( (\partial_n \varphi)^2 \partial^m \bar{\varphi} - |F|^2
\partial^m \varphi)
+
\left\{
|\partial_n \varphi \partial^n \varphi|^2 - 2 |F|^2 |\partial_n
\varphi|^2 + |F|^4
\right\} \frac{\partial \Lambda}{\partial (\partial_m \bar{\varphi})}
\right] = 0.
\label{eq:varphi_eom}
\end{align}
Next, we consider the case in which $W = 0$. Even for this case,
there is the scalar potential $(K_{\varphi \bar{\varphi}})^2/4 \Lambda$
in the Lagrangian \eqref{eq:W0_osl2}. This branch corresponds to the
$F\not=0$ solution \eqref{eq:second_sol}.
Substituting the off-shell BPS condition \eqref{eq:BPS_domain_wall_os} into the equation of motion for
the auxiliary field \eqref{eq:eom_auxiliary_no_supot} and assuming
$F\not=0$, the on-shell BPS condition becomes
\begin{align}
K_{\varphi \bar{\varphi}} = 0.
\label{eq:NC_dw}
\end{align}
This condition never provides the domain wall equation.
When there is no ghost, Eq.~\eqref{eq:NC_dw}
is just a vacuum condition of the scalar potential
$(K_{\varphi \bar{\varphi}})^2/4 \Lambda$.
Therefore,
although there is a scalar
potential in the non-canonical branch, superpotentials are necessary for
1/2 BPS domain wall solutions.
We also note that the 1/2 BPS domain wall solution to the equation
\eqref{eq:BPS_domain_wall} interpolates between ``vacua'' specified by the
superpotential, $W' = 0$, as its tension indicates.
We stress that the condition $W' = 0$ does not always imply vacua of the
scalar potential especially in the non-canonical branch.
The BPS domain walls remain intact even for the deformation of the scalar potential.
Although there are other vacua that originate from the singularity of
the function $\Lambda$ (see Fig.~\ref{fig:potential}(c)), domain walls interpolating between
these vacua are not BPS and break all the supersymmetry.
\subsection{1/4 BPS domain wall junctions}
We next consider 1/4 BPS domain wall junctions
\cite{GiTo}.
The scalar field depends on the two spatial directions $x^1$ and $x^2$.
First, we consider the $W \not= 0$ case.
We impose the Killing spinor conditions on the supersymmetry parameters,
\begin{align}
\frac{1}{2} (\sigma^1 + i \sigma^2)_{\alpha \dot{\alpha}} \bar{\xi}^{\dot{\alpha}} =
0, \qquad
\frac{1}{2} (\sigma^1 - i \sigma^2)_{\alpha \dot{\alpha}}
\bar{\xi}^{\dot{\alpha}} = i e^{- i \eta} \xi_{\alpha},
\label{eq:Killing_spinor}
\end{align}
where $\eta$ is a phase factor.
Then we obtain the BPS condition from the supersymmetry transformation
\eqref{eq:SUSY_transformation},
\footnote{
We define the complex coordinate $z = \frac{1}{2} (x^1 + i x^2)$ and
derivatives $\partial = \frac{\partial}{\partial z} = \partial_1 - i
\partial_2$. $\bar{\partial}$ is the complex conjugate of $\partial$.
}
\begin{align}
\bar{\partial} \varphi = e^{i\eta} F.
\label{eq:BPS_junction}
\end{align}
Here $F$ in the right hand side is one of the solutions in \eqref{eq:aux_3sol}.
This is the 1/4 BPS condition.
Substituting the condition \eqref{eq:BPS_junction} into the equation of
motion \eqref{eq:aux_eq} for the auxiliary field, we obtain the on-shell BPS equation on
the scalar field:
\begin{align}
K_{\varphi \bar{\varphi}} \bar{\partial} \varphi - \bar{\partial} \varphi
(|\partial \varphi|^2 - |\bar{\partial} \varphi|^2) \Lambda +
e^{i \eta} \frac{\partial \bar{W}}{\partial \bar{\varphi}} = 0.
\label{eq:on-shell_BPS_junction}
\end{align}
When $\Lambda = 0$, the on-shell BPS equation
\eqref{eq:on-shell_BPS_junction} becomes that of the
ordinary BPS domain wall junctions \cite{GiTo} of which the analytic
solutions are studied in Ref.~\cite{NaNiSa}.
Unlike the 1/2 BPS domain wall case, the higher derivative corrections
do not cancel in Eq.~\eqref{eq:on-shell_BPS_junction}.
The solutions are deformed from the ones in Ref.~\cite{NaNiSa} in general
and depend on the explicit form of the function $\Lambda$.
Now we confirm that the BPS solutions to \eqref{eq:BPS_junction} satisfy
the full equation of motion for the scalar field \eqref{eq:varphi_eom}.
Using the BPS condition \eqref{eq:BPS_junction}, we find the following
terms in \eqref{eq:varphi_eom} vanish:
\begin{align}
|\partial_m \varphi \partial^m \varphi |^2 - 2 |F|^2 |\partial_m
\varphi|^2 + |F|^4 = 0.
\end{align}
By using the BPS equation and the equation of motion for the auxiliary
field, we find that the other terms in \eqref{eq:varphi_eom} also vanish:
\begin{align}
& - \frac{\partial^3 K}{\partial \varphi \partial^2 \bar{\varphi}}
( |\partial_m \varphi|^2 - |F|^2) + \frac{\partial^2 \bar{W}}{\partial
\bar{\varphi}^2} \bar{F}
\notag \\
& - \partial_m
\left[
- \frac{\partial^2 K}{\partial \varphi \partial \bar{\varphi}}
\partial^m \varphi
+ 2 \Lambda ( (\partial_n \varphi)^2 \partial^m \bar{\varphi} - |F|^2
\partial^m \varphi)
\right] = 0.
\end{align}
Therefore we conclude that the solutions to the deformed BPS equation
\eqref{eq:on-shell_BPS_junction} actually satisfy the full equation of
motion in Eq.~\eqref{eq:varphi_eom}.
The energy density of the domain wall junction is evaluated as
\begin{align}
\mathcal{E} =& \ K_{\varphi \bar{\varphi}} \partial_i \varphi \partial_i \bar{\varphi} -
(\partial_i \varphi \partial_i \varphi) (\partial_j \bar{\varphi}
\partial_j \bar{\varphi}) \Lambda
- |F|^2 (-K_{\varphi \bar{\varphi}} + 2 \Lambda \partial_i \varphi \partial_i
\bar{\varphi}) + 3 |F|^4 \Lambda
\notag \\
=& \
\frac{1}{2} K_{\varphi \bar{\varphi}} (|\partial \varphi|^2 - |\bar{\partial} \varphi|^2) - 2
\mathrm{Re}
\left[
e^{- i \eta} \frac{\partial W}{\partial \bar{z}}
\right].
\end{align}
This is nothing but the expression of the ordinary (without higher
derivative terms) domain wall junctions.
After integration over the $(x^1,x^2)$ plane, the first term gives the
junction charge and the second term gives the tension of the domain walls.
They are evaluated on the boundary at the infinity of the $(x^1, x^2)$
plane.
Again, the junction charge and the domain wall tension are solely determined by
the asymptotic boundary conditions of the scalar field and the superpotential,
and do not depend on the function $\Lambda$.
Although the expression of the Bogomol'nyi bound of the energy is not
deformed by the higher derivative terms, we stress that the solutions of the 1/4 BPS
domain wall junction are potentially deformed in general.
Finally we examine 1/4 BPS domain wall junctions in the $W = 0$ non-canonical branch.
The Killing spinor and the off-shell BPS conditions are
given by Eqs.~\eqref{eq:Killing_spinor} and \eqref{eq:BPS_junction}.
The solution of the auxiliary field is given in Eq.~\eqref{eq:second_sol}.
Then, the on-shell BPS equation is found to be
\begin{align}
\frac{1}{2} (|\partial \varphi|^2 - |\bar{\partial} \varphi|^2) =
\frac{K_{\varphi \bar{\varphi}}}{2 \Lambda}.
\label{eq:non-canonical_dw_junction}
\end{align}
Eq.~\eqref{eq:non-canonical_dw_junction} is supplemented by the
consistency condition in Eq.~\eqref{eq:consistency}.
Again, the higher derivative corrections to the
on-shell 1/4 BPS condition are not canceled.
We will comment on this equation in the next section.
\section{BPS lumps and baby Skyrmions} \label{sec:lump}
Next we consider lumps in $W= 0$ higher derivative models.
We look for the BPS equation for lumps that depend on $x^1$ and $x^2$.
Recall that for the $W = 0$ case, the solutions of the auxiliary field
are given by
\begin{align}
F =& \ 0,
\label{eq:sol1}
\\
F =& \
e^{i \alpha}
\sqrt{
- \frac{K_{\varphi \bar{\varphi}}}{2 \Lambda} + \partial_m \varphi \partial^m \bar{\varphi}
},
\label{eq:sol2}
\end{align}
where $\alpha$ is a phase factor.
There are the canonical and non-canonical branches associated with the
solutions in Eqs.~\eqref{eq:sol1} and \eqref{eq:sol2}, respectively.
In the following subsections, we examine BPS lump equations in each branch.
\subsection{1/2 BPS lumps}
We first focus on the canonical branch associated with the solution \eqref{eq:sol1}.
Hopfions in the supersymmetric higher derivative
$\mathbb{C}P^1$ model of this type
were discussed before \cite{BeNeSc, Fr}.
BPS lumps in the supersymmetric higher derivative
$\mathbb{C}P^1$ model
were discussed in Ref.~\cite{Eto:2012qda}.
BPS lumps in the higher derivative $\mathbb{C}P^n$ non-linear sigma models were discussed in a different
context without supersymmetry
\cite{Ferreira:2008nn}.
In this branch, the BPS lump equation is obtained by imposing
the first condition in \eqref{eq:Killing_spinor} on the spinor
$\bar{\xi}^{\dot{\alpha}}$,
as can be seen in, e.g., Refs.~\cite{Shifman:2007ce,Eto:2006pg,Eto:2005sw}.
Then, the BPS equation for lumps is given by \cite{Polyakov:1975yp}
\begin{align}
\bar{\partial} \varphi =& 0.
\label{eq:lump1}
\end{align}
This is nothing but the ordinary 1/2 BPS lump condition.
This is confirmed by the Bogomol'nyi bound of the energy density.
For the canonical branch, we have
the energy density
\begin{align}
\mathcal{E} =& \ K_{\varphi \bar{\varphi}} |\partial_i \varphi|^2 - |\partial_i \varphi
\partial_i \varphi|^2 \Lambda
\notag \\
=& \
|\bar{\partial} \varphi|^2
\left(
K_{\varphi \bar{\varphi}} - |\partial \varphi|^2 \Lambda
\right) - i K_{\varphi \bar{\varphi}} \varepsilon_{ij} \partial_i
\varphi \partial_j \bar{\varphi}
\notag \\
\ge& \ - i K_{\varphi \bar{\varphi}} \varepsilon_{ij} \partial_i \varphi
\partial_j \bar{\varphi},
\label{eq:BPS-bound-lump}
\end{align}
where we have assumed the condition
$\Lambda \le K_{\varphi \bar{\varphi}}/|\partial \varphi|^2$
for the positive semi-definiteness of the energy $\mathcal{E}$.
The right hand side is nothing but the topological charge density for
the 1/2 BPS lump. The energy bound is saturated provided the condition
\eqref{eq:lump1} is satisfied.
Then we find that the higher derivative corrections to the solutions and
the energy bound are canceled out in this branch.
It is confirmed that solutions to the equation \eqref{eq:lump1} satisfy
the full equation of motion for the scalar field \eqref{eq:varphi_eom}.
When we consider the Fubini-Study metric for the ${\mathbb C}P^1$ model
and take the function $\Lambda$ as
\begin{align}
K_{\varphi \bar{\varphi}} = \frac{1}{(1 + |\varphi|^2)^2}, \qquad
\Lambda= \frac{1}{(1+|\varphi|^2)^4},
\label{eq:CP1}
\end{align}
the bound \eqref{eq:BPS-bound-lump}
becomes just the BPS bound obtained
in the context of the effective theory on a
non-Abelian vortex \cite{Eto:2012qda}.
In summary, although the Lagrangian contains higher derivative corrections,
the 1/2 BPS lump solution to the equation \eqref{eq:lump1} (which is a
holomorphic function with appropriate boundary conditions) does not
receive any corrections in the canonical branch \eqref{eq:W0_osl1}.
\subsection{1/4 BPS lumps as compact baby Skyrmions}
We next consider the non-canonical branch.
Since this is associated with the $F\not=0$ solution \eqref{eq:sol2}
even for $W=0$, we need to impose both of the two conditions in
\eqref{eq:Killing_spinor} in order to obtain the BPS equation from the variation of the fermion.
Then the BPS equation is
\begin{align}
\bar{\partial} \varphi =& e^{i \eta'}
\sqrt{- \frac{K_{\varphi \bar{\varphi}}}{2 \Lambda} + \frac{1}{2} (|\partial \varphi|^2 +
|\bar{\partial}\varphi|^2)
},
\label{eq:lump2}
\end{align}
where $\eta' = \eta + \alpha \in \mathbb{R}$ is a phase factor.
This is the 1/4 BPS equation.
Again, the higher derivative corrections do not cancel for the BPS lumps
in general.
The deformed BPS equation \eqref{eq:lump2} can be recast into the
following form:
\begin{align}
\frac{1}{2} (|\partial \varphi|^2 - |\bar{\partial} \varphi|^2) =
\frac{K_{\varphi \bar{\varphi}}}{2 \Lambda}.
\label{eq:lump2-2}
\end{align}
We confirm that the solutions to the BPS equation \eqref{eq:lump2}
satisfy the full on-shell equation of motion for the scalar field
\eqref{eq:varphi_eom}.
In the non-canonical branch, we have the Bogomol'nyi completion of the
energy
\begin{align}
\mathcal{E} =& \
- \left(
|\partial_i \varphi \partial_i \varphi|^2 - (\partial_i \varphi
\partial_i \bar{\varphi})^2
\right) \Lambda + \frac{(K_{\varphi \bar{\varphi}})^2}{4 \Lambda}
\notag \\
=& \
\Lambda
\left[
\frac{1}{2}
(|\partial \varphi|^2 - |\bar{\partial} \varphi|^2)
- \frac{K_{\varphi \bar{\varphi}}}{2 \Lambda}
\right]^2 + \frac{K_{\varphi \bar{\varphi}}}{2} (|\partial \varphi|^2 - |\bar{\partial}
\varphi|^2)
\notag \\
\ge & \ - i K_{\varphi \bar{\varphi}} \varepsilon_{ij} \partial_i
\varphi \partial_j \bar{\varphi}.
\label{eq:lump_energy_bound}
\end{align}
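The Bogomol'nyi completion above is a purely algebraic identity in the derivative invariants, which can be verified directly. A minimal sympy sketch, writing $\partial_i \varphi$ in real components and using the convention $\partial = \partial_1 - i \partial_2$ of the footnote in Sec.~\ref{sec:wall}:

```python
import sympy as sp

# real components of d_1 phi = a + i b and d_2 phi = c + i d
a, b, c, d, K, L = sp.symbols('a b c d K Lambda', real=True)
d1, d2 = a + sp.I*b, c + sp.I*d
dp, dbp = d1 - sp.I*d2, d1 + sp.I*d2          # del phi, del-bar phi

absq = lambda z: sp.expand(z * sp.conjugate(z))

# first line of the energy density
A = d1**2 + d2**2                             # d_i phi d_i phi
B = absq(d1) + absq(d2)                       # d_i phi d_i phi-bar (real)
E1 = -(absq(A) - B**2)*L + K**2/(4*L)

# Bogomol'nyi-completed form, with u = (|del phi|^2 - |del-bar phi|^2)/2
u = (absq(dp) - absq(dbp)) / 2
E2 = L*(u - K/(2*L))**2 + K*u

assert sp.simplify(sp.expand(E1 - E2)) == 0
```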
Since we have $\Lambda > 0$
for static configurations
from the consistency condition
\eqref{eq:consistency} of the solution,
the energy bound is saturated by the topological
charge density of lumps provided that the BPS condition
\eqref{eq:lump2-2} is satisfied.
It is obvious that the expression of the topological charge is not corrected by the
higher derivative terms.
In the non-canonical branch,
the Lagrangian does not
contain the ordinary canonical kinetic term.
An example of such a non-canonical model is the extremal (BPS) baby Skyrme model.
The model consists of the fourth derivative term and potential terms in
(2+1) dimensions. More concretely, if we take the K\"ahler potential and
$\Lambda$ as in \eqref{eq:CP1},
then the Lagrangian \eqref{eq:W0_osl2} is nothing but the fourth
derivative part of the baby Skyrme model with an irrelevant constant term.
In Ref.~\cite{AdQuSaGuWe} the
authors found specific K\"ahler potentials and constructed the potentials of the
baby Skyrme model.
Actually, the condition \eqref{eq:lump2-2} was
first found in the
supersymmetric baby Skyrme model \cite{AdQuSaGuWe}.
Eq.~\eqref{eq:lump2-2} is the same as the one found in the previous
section, Eq.~\eqref{eq:non-canonical_dw_junction}.
The solutions are distinguished by their boundary conditions.
However, the energy bound in \eqref{eq:lump_energy_bound} suggests that
there are no BPS domain wall junctions in the non-canonical branch.
Examples of solutions to Eq.~\eqref{eq:lump2-2} are
the compact baby Skyrmions
\cite{Adam:2009px,AdRoSaGuWe} that are solitons
with compact support.
The BPS states in the higher derivative chiral models are summarized in
Table \ref{tb:BPS_states}.
\begin{table}[t]
\begin{center}
\begin{tabular}{|l||c|c|}
\hline
& 1/2 BPS & 1/4 BPS \\
\hline \hline
$W=0$ & Lumps ($F=0$) & Compact lumps ($F\not=0$)\\
\hline
$W\not=0$ & Domain walls ($F\not=0$) & Deformed domain wall junctions ($F\not=0$)
\\
\hline
\end{tabular}
\caption{BPS states in the $W=0$
and $W\not=0$ higher derivative chiral models.
Corresponding solutions of the auxiliary field $F$ (whether they vanish or not)
are also presented.}
\end{center}
\label{tb:BPS_states}
\end{table}
\section{Conclusion and discussions} \label{sec:conc}
In this paper we have studied BPS states in the four-dimensional
$\mathcal{N} = 1$ supersymmetric higher
derivative chiral model of which the Lagrangian is given in Eq.~\eqref{eq:Lagrangian}.
The model is governed by
a (2,2) K\"ahler tensor $\Lambda_{ij\bar k \bar l}$
symmetric in holomorphic and anti-holomorphic indices,
in addition to the K\"ahler potential $K$ and the
superpotential $W$. They are functions of the chiral superfields
$\Phi^i$.
In particular, the tensor $\Lambda_{ij\bar k \bar l}$ determines the
higher derivative interactions of the models.
A specific feature of the model is that the auxiliary fields $F^i$
do not have space-time derivatives on them and can be eliminated by their equation of motion
algebraically.
One can explicitly write down the on-shell Lagrangian of the model at least for a single chiral superfield.
Since the equation of motion for the auxiliary fields is no longer
a linear equation, there are several on-shell branches in this model.
This fact gives rise to new non-trivial BPS equations that include higher
derivative corrections.
When there is no superpotential, there are two distinct on-shell branches.
One is the canonical branch associated with the
solution $F=0$. An example of this model is the supersymmetric DBI model
\cite{RoTs}.
We have shown that this branch, in fact, allows a supersymmetric extension
of any bosonic model of complex scalar fields.
We have exhibited the explicit function $\Lambda$ which corresponds to
the supersymmetric extension of the Faddeev-Skyrme model without four time derivatives,
which is in contrast to the previous studies
\cite{BeNeSc,Fr}
concluding that such a term is necessary for
supersymmetry.
The other branch is the non-canonical one corresponding to
the solution $F\not=0$. In this branch, the ordinary canonical kinetic
term disappears and the Lagrangian starts from the fourth order
derivative terms. An example of this model is the extremal (BPS) baby Skyrme model.
This branch was discussed in Refs.~\cite{AdQuSaGuWe, KhLeOv}.
Although the $W=0$ case has been essentially discussed in the literature
\cite{AdQuSaGuWe, KhLeOv},
things get more involved when one introduces a superpotential $W$.
In this case, a solution $F=0$ is not allowed.
There are three on-shell branches associated with the three different
solutions of the auxiliary field equation \cite{SaYaYo,KhLeOv,FaKe}.
The resulting on-shell Lagrangians have highly non-linear expressions.
Perturbative analysis reveals the possibility of ghost kinetic terms and
deformations of the scalar potential.
Even though the on-shell Lagrangian is complicated and becomes highly
non-linear
in the
$W \not= 0$ case, one can derive the off-shell BPS
conditions from the supersymmetry transformation of fermions.
These conditions are supplemented by the equation of motion for the
auxiliary field giving rise to the on-shell conditions.
We have analyzed the properties of BPS states.
For the 1/2 BPS domain wall case, the higher derivative corrections are
exactly canceled out in the $W\not=0$ case.
The solution to the BPS equation satisfies the full equation of motion
for the scalar field.
We have shown that the tension of the domain wall does not receive any
higher derivative corrections.
In the $W=0$ non-canonical branch, the 1/2 BPS condition does not
provide the domain wall equation.
For the 1/4 BPS domain wall junction in the $W\not=0$ case, the on-shell BPS equation
receives higher derivative corrections.
This is a new 1/4 BPS equation for domain wall junctions.
The solution is deformed by the higher derivative effects and it is
confirmed that the solution satisfies the full equation of motion.
The expression of the energy bound is shown to be the same as the
ordinary (without higher derivative terms) theory, namely, the sum of
the junction charge and the tension.
For lump configurations in the $W=0$ case, there are two on-shell BPS equations.
One is the 1/2 BPS lumps associated with the $F = 0$ solution, where all the
derivative corrections are canceled out.
The other is the 1/4 BPS lumps associated with the $F\not=0$ solution.
The on-shell BPS equation is deformed by the higher derivative corrections.
This is nothing but the equation studied in Ref.~\cite{AdQuSaGuWe}.
An example of solutions to this equation is compactons in the extremal
(BPS) baby Skyrme model.
While we were able to solve the auxiliary field equation
(\ref{eq:af-eom})
explicitly only for a single chiral superfield,
where it reduces to the third order algebraic equation
(\ref{eq:af-eom2}),
the multicomponent equation
(\ref{eq:af-eom}) has yet to be solved.
When the target space has a large isometry,
it should be possible to solve it.
Construction of more general target spaces,
for instance a higher derivative ${\mathbb C}P^n$ model
and its BPS solitons remains as a future problem.
While we have exhausted all BPS states
that are already
known in conventional ${\cal N}=1$ supersymmetric theories
without higher derivatives,
there may still remain
unknown BPS states particular to
higher derivative theories.
In fact, 1/4 BPS baby Skyrmions do not exist in
conventional theories.
A sine-Gordon kink inside
a domain wall (corresponding to
a baby Skyrmion in the bulk) \cite{Nitta:2012xq},
a baby Skyrmion inside a domain wall
(corresponding to a three-dimensional Skyrmion in the bulk)
\cite{Nitta:2012wi}, or
a baby Skyrmion string
ending on a domain wall \cite{Gauntlett:2000de}
or
stretched between
domain walls \cite{Isozumi:2004vg,Eto:2006pg}
are possible candidates for
composite BPS states.
It should be important to generalize
our formalism to theories with extended
supersymmetries such as
eight supercharges.
Although only four out of eight supercharges are manifestly realized in
the $\mathcal{N} = 1$ superfield formalism, this is still useful to study the
off-shell effective theory of BPS solitons in models with eight
supercharges \cite{Eto:2006uw}.
Supersymmetric theories with eight supercharges
are known to admit plenty of composite
BPS states \cite{Eto:2006pg,Shifman:2007ce}.
In particular,
a classification of all possible BPS states
in supersymmetric theories with eight supercharges
was given in Ref.~\cite{Eto:2005sw}.
It is an interesting future problem to explore
which BPS states (do not) receive
higher derivative corrections.
Concerning this problem,
1/2 BPS topological solitons
in theories with eight supercharges
preserve four supercharges
on their world-volume.
Off-shell effective actions
of the 1/2 BPS domain wall and vortex
were obtained in $d=3+1$, ${\cal N}=1$ superfield formalism
at the leading order \cite{Eto:2006uw}.
The formulation presented in this paper
should be useful to obtain
the off-shell action of higher derivative corrections
to these effective actions.
For instance,
as mentioned in Eqs.~(\ref{eq:lump1}) and (\ref{eq:BPS-bound-lump}),
the ${\mathbb C}P^1$ model with
a four-derivative term appearing as
the effective theory of a non-Abelian vortex
admits 1/2 BPS lumps \cite{Eto:2012qda},
corresponding to Yang-Mills instantons in the bulk
\cite{Eto:2004rz}.
In the same way,
an $SU(N)$ principal chiral model with
the Skyrme term appears
\cite{Eto:2005cc}
on a non-Abelian domain wall
\cite{Shifman:2003uh}.
The off-shell higher derivative corrections
to the effective theories on these solitons
are among the future directions.
\subsection*{Acknowledgments}
The authors would like to thank Masahide Yamaguchi for discussions.
The work of M.\ N.\ is supported in part by Grant-in-Aid for
Scientific Research (No. 25400268) and by the ``Topological
Quantum Phenomena'' Grant-in-Aid for Scientific Research on
Innovative Areas (No. 25103720) from the Ministry of Education,
Culture, Sports, Science and Technology (MEXT) of Japan.
The work of S.~S. is supported in part by Kitasato University Research Grant for Young
Researchers.
\begin{appendix}
\section{Notation and conventions}\label{sec:notation}
We use the notation of the textbook of
Wess and Bagger \cite{Wess:1992cp}.
The component expansion of the $\mathcal{N} = 1$ chiral superfield in
the $x$-basis is
\begin{equation}
\Phi (x, \theta, \bar{\theta}) = \varphi
+ i \theta \sigma^m \bar{\theta} \partial_m \varphi + \frac{1}{4}
\theta^2 \bar{\theta}^2 \Box \varphi + \theta^2 F,
\end{equation}
where
only the bosonic components are presented.
The supercovariant derivatives are defined as
\begin{eqnarray}
D_{\alpha} = \frac{\partial}{\partial \theta^{\alpha}} + i
(\sigma^m)_{\alpha \dot{\alpha}} \bar{\theta}^{\dot{\alpha}}
\partial_m, \quad
\bar{D}_{\dot{\alpha}} = - \frac{\partial}{\partial
\bar{\theta}^{\dot{\alpha}}} - i \theta^{\alpha} (\sigma^m)_{\alpha
\dot{\alpha}} \partial_m.
\end{eqnarray}
The sigma matrices are $\sigma^m = (\mathbf{1}, \vec{\tau})$.
Here $\vec{\tau} = (\tau^1, \tau^2, \tau^3)$ are the Pauli matrices.
The bosonic component of the supercovariant derivatives of $\Phi^i$ are
\begin{align}
D^{\alpha} \Phi^i D_{\alpha} \Phi^j =& \
- 4 \bar{\theta}^2 \partial_m \varphi^i \partial^m \varphi^j
+ 4 i (\theta \sigma^m \bar{\theta}) (\partial_m \varphi^i F^j + F^i
\partial_m \varphi^j)
- 4 \theta^2 F^i F^j
\notag \\
& \ + 2 \theta^2 \bar{\theta}^2
\left(
\Box \varphi^i F^j + F^i \Box \varphi^j - \partial_m \varphi^i
\partial^m F^j - \partial_m F^i \partial^m \varphi^j
\right), \\
\bar{D}_{\dot{\alpha}} \Phi^{\dagger\bar{i}} \bar{D}^{\dot{\alpha}}
\Phi^{\dagger\bar{j}} =& \
- 4 \theta^2 \partial_m \bar{\varphi}^{\bar{i}} \partial^m
\bar{\varphi}^{\bar{j}}
- 4 i (\theta \sigma^m \bar{\theta}) (\partial_m \bar{\varphi}^{\bar{i}}
\bar{F}^{\bar{j}} + \bar{F}^{\bar{i}} \partial_m
\bar{\varphi}^{\bar{j}})
+ 4 \bar{\theta}^2 \bar{F}^{\bar{i}} \bar{F}^{\bar{j}}
\notag \\
& \
+ 2 \theta^2 \bar{\theta}^2
\left(
\bar{F}^{\bar{i}} \Box \bar{\varphi}^{\bar{j}} + \Box
\bar{\varphi}^{\bar{i}} \bar{F}^{\bar{j}}
- \partial_m \bar{\varphi}^{\bar{i}} \partial^m \bar{F}^{\bar{j}}
- \partial_m \bar{F}^{\bar{i}} \partial^m \bar{\varphi}^{\bar{j}}
\right),
\\
D^{\alpha} \Phi^i D_{\alpha} \Phi^k \bar{D}_{\dot{\alpha}}
\Phi^{\dagger\bar{j}} \bar{D}^{\dot{\alpha}}
\Phi^{\dagger\bar{l}}
=& \ 16 \theta^2 \bar{\theta}^2
\left[
\frac{}{}
(\partial_m \varphi^i \partial^m \varphi^k) (\partial_m
\bar{\varphi}^{\bar{j}} \partial^m \bar{\varphi}^{\bar{l}})
\right.
\notag \\
&
\left.
- \frac{1}{2}
\left(
\partial_m \varphi^i F^k + F^i \partial_m \varphi^k
\right)
\left(
\partial^n \bar{\varphi}^{\bar{j}} \bar{F}^{\bar{l}}
+ \bar{F}^{\bar{j}} \partial^n \bar{\varphi}^{\bar{l}}
\right)
+ F^i \bar{F}^{\bar{j}} F^k \bar{F}^{\bar{l}}
\right].
\end{align}
\end{appendix}
\section{Introduction}
\label{intro}
For many decades thermoelectric (TE) energy conversion successfully enabled self-supporting energy
devices for outer-space missions or integrated electronics
\cite{Sales:2002p6580,Tritt:2006p15694}. However, bad conversion efficiency
prohibited thermoelectrics the break-through as an alternative energy source.
The conversion performance of a TE material is quantified by the
figure of merit (FOM)
\begin{equation}
ZT=\frac{\sigma S^{2}}{\kappa_{el} + \kappa_{ph}} T = \frac{ S^{2}}{L + \frac{\kappa_{ph}}{\sigma T}},
\label{ZT}
\end{equation}
where $\sigma$ is the electrical conductivity, $S$ the thermopower, $\kappa_{el}=L \sigma T$ and
$\kappa_{ph}$ are the electronic and lattice contribution to the thermal conductivity, respectively.
L denotes the Lorenz function, which becomes the
Lorenz number L$_0=\frac{(\pi k_{B})^2}{3e^2}$ in the highly degenerate, metallic limit.
In recent years nano-structuring concepts \cite{Bottner:2006p2812,Nolas:1999p15771}
have enabled higher values for $ZT$ by increasing the
numerator, called the power factor $PF=\sigma S^{2}$, or by decreasing
the denominator of Eq.~\ref{ZT}.
The latter is obtained by phonon-blocking at superlattice (SL) interfaces or grain boundaries \cite{BorcaTasciuc:2000p15132,Lee:1997p1545,Pernot:2010p14944,Venkatasubramanian:2000p7305}
and leads to a reduced lattice thermal conductivity $\kappa_{ph}$.
Here, the Lorenz function is particularly important for thermoelectrics,
providing a measure to separate the electronic and lattice contribution
to the thermal conductivity \cite{Uher:1974p15736}.
Deviations $L\neq L_0$ already occur in the degenerate limit for simple metals, semi-metals
and semi-conductors \cite{Kumar:1993p15734}. Hence, assuming incorrect values for the Lorenz number
leads to incorrect values for $\kappa_{el}$ and $\kappa_{ph}$ and can even lead to unphysical
negative values for $\kappa_{ph}$ \cite{Sharp:2001p15840}. To the best of our knowledge,
investigations on the Lorenz function of thermoelectric SLs on an \textit{ab initio} level are missing so far.
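For reference, the metallic limit $L_0$ used for normalisation throughout can be checked numerically with a two-line sketch (SI values of the constants; not part of the calculations in this work):

```python
import math

K_B = 1.380649e-23        # Boltzmann constant, J/K
E_CHARGE = 1.602176634e-19  # elementary charge, C

# Lorenz number in the highly degenerate, metallic limit
L0 = (math.pi * K_B) ** 2 / (3.0 * E_CHARGE ** 2)
print(L0)  # ~2.44e-8 W Ohm / K^2
```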
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{fig1.eps}
\caption{Lorenz function L (thick black lines, ref. to left scale) and
electronic contribution $\kappa_{el}$ to the total thermal conductivity (thin green lines, ref. to the right scale)
in dependence on position of the chemical potential $\mu$ within a spherical two band
model. Results are shown for (a) fixed effective masses $m_{vb}=m_{cb}$ and varying temperatures and
(b) fixed temperature $T=\unit[300]{K}$ and varying effective masses. The band gap is fixed to $E_{g}=\unit[0.1]{eV}$
(gray shaded areas) and the Lorenz function is related
to the metallic limit L$_0=\unit[2.44\times 10^{-8}]{W \Omega/K^{2}}$.}
\label{fig:1}
\end{figure}
In the present work we analyse the anisotropic Lorenz function for
Bi$_{2}$Te$_{3}$/Sb$_{2}$Te$_{3}$ SLs, as well as for the bulk constituents.
The two telluride single crystals and the composed p-type SL
show highest values for bulk and nano-structured TE so far \cite{Venkatasubramanian:2001p114}.
On the basis of \textit{ab initio}
density functional theory (DFT) and semi-classical Boltzmann transport equations (BTE) the
Lorenz function is in particular studied for different charge carrier concentrations and SL periods
at room-temperature.
\section{Methodology}
\label{method}
For both Bi$_{2}$Te$_{3}$~ and Sb$_{2}$Te$_{3}$, as well as for the composed SLs,
we used the experimental lattice parameters and relaxed atomic positions \cite{Landolt}
as provided for the hexagonal Bi$_{2}$Te$_{3}$~ crystal structure. The 15 atomic layers per unit cell are
composed out of three quintuples Te$_1$-Bi-Te$_2$-Bi-Te$_1$.
The hexagonal lattice parameters are equally chosen to be
${a^{hex}_{BiTe}}=4.384${\AA} and $c^{hex}_{BiTe}=30.487${\AA} for
Bi$_{2}$Te$_{3}$, Sb$_{2}$Te$_{3}$~ and the SLs respectively. Preceding studies revealed
that a larger in-plane lattice constant, e.g. ${a^{hex}_{BiTe}}>{a^{hex}_{SbTe}}$,
is favourable for an enhanced cross-plane TE
transport \cite{Hinsche:2011p15707,Yavorsky:2011p15466}.
To introduce SLs with different SL periods we subsequently substitute the Bi sites
by Sb, starting with six Bi sites in hexagonal bulk Bi$_{2}$Te$_{3}$. Substituting four atomic layers
of Bi with Sb leads to a (Bi$_2$Te$_3)_{x}$/(Sb$_2$Te$_3)_{1-x}$ SL~ with $x=\frac{2}{6}$, that is one quintuple Bi$_{2}$Te$_{3}$~ and two quintuple Sb$_{2}$Te$_{3}$.
The latter case coincides with a (10\AA/20\AA)-(Bi$_{2}$Te$_{3}$/Sb$_{2}$Te$_{3}$) SL
in the experimental notation of Ref.~\cite{Venkatasubramanian:2001p114}~.
Semi-classical BTE were extensively used in the past to calculate TE transport
properties \cite{MZiman:1960p6024,Mertig:1999p12776} and offer a high reliability for
narrow-gap semi-conductors in a broad doping and temperature range
\cite{Hinsche:2011p15707,Park:2010p11006,Chaput:2005p1405,Huang:2008p559}.
Within the relaxation time approximation (RTA) the transport distribution
function (TDF) $\mathcal{L}_{\perp, \|}^{(0)}(\mu, 0)$~\cite{Mahan:1996p508} and with this the
generalized conductance moments $\mathcal{L}_{\perp, \|}^{(n)}(\mu, T)$ are defined as
\begin{eqnarray}
& \mathcal{L}_{\perp, \|}^{(n)}(\mu, T)= \nonumber \\
&\frac{\tau}{(2\pi)^3} \sum \limits_{\nu} \int\ d^3k \left( v^{\nu}_{k,(\perp, \|)}\right)^2 (E^{\nu}_k-\mu)^{n}\left( -\frac{\partial f_{(\mu,T)}}{\partial E} \right)_{E=E^{\nu}_k} \nonumber .
\\
\label{Tcoeff}
\end{eqnarray}
$f_{(\mu,T)}$ is the \textsc{Fermi-Dirac}-distribution and $v^{\nu}_{k,(\|)}$, $v^{\nu}_{k,(\perp)}$
denote the group velocities in the directions in the
hexagonal basal plane and perpendicular to it, respectively. Here the group velocities
were obtained as derivatives along
the lines of the Bl\"ochl mesh in the whole Brillouin zone (BZ)~\cite{Yavorsky:2011p15466}.
The band structure $E_k^{\nu}$ of band $\nu$ was obtained by accurate first principles density functional theory calculations (DFT),
as implemented in the fully relativistic screened Korringa-Kohn-Rostoker Greens-function method (KKR) \cite{Gradhand:2009p7460}.
Within this approach the \textsc{Dirac}-equation is solved self-consistently and with that spin-orbit-coupling (SOC) is included.
Exchange and correlation effects were accounted for by the local density
approximation (LDA) parametrized by Vosko, Wilk, and Nusair \cite{Vosko1980}. Detailed studies on the electronic
structure, the thermoelectric transport and challenges in the numerical determination
of the group velocities of Bi$_{2}$Te$_{3}$, Sb$_{2}$Te$_{3}$ and their SLs have been published before
\cite{Hinsche:2011p15707,Yavorsky:2011p15466,Zahn:2011p15523,ArxivSL}.
For convenience, the relaxation time $\tau$ was chosen as $\unit[10]{fs}$ for the considered systems.
Straightforwardly, the temperature- and doping-dependent
electrical conductivity $\sigma$ and thermopower $S$
in the in- and cross-plane directions are defined as
\begin{eqnarray}
\sigma_{_{\perp, \|}}=e^2 \mathcal{L}_{\perp, \|}^{(0)}(\mu, T) \qquad
S_{_{\perp, \|}}=\frac{1} {eT} \frac{\mathcal{L}_{\perp, \|}^{(1)}(\mu,T)} {\mathcal{L}_{\perp, \|}^{(0)}(\mu,T)}
\label{Seeb},
\end{eqnarray}
and the electronic part to the total thermal conductivity accounts to
\begin{equation}
\kappa_{el}{_{\perp, \|}}=\frac{1}{T}(\mathcal{L}_{\perp, \|}^{(2)}(\mu,T)-\frac{\left(\mathcal{L}_{\perp, \|}^{(1)}(\mu,T)\right)^2}{\mathcal{L}_{\perp, \|}^{(0)}(\mu,T)}) \, .
\label{kel}
\end{equation}
The second term in eq.~\ref{kel} introduces corrections due to the Peltier heat flow that can occur when
bipolar conduction takes place \cite{Goldsmid:1965p15735,Tritt:2004p15755}. Using Eqs.~\ref{Seeb} and \ref{kel} and
the abbreviation $\kappa^{0}=\frac{1}{T}\mathcal{L}_{\perp, \|}^{(2)}(\mu,T)$~\cite{Mahan:1996p508},
we find the Lorenz function as
\begin{equation}
L_{\perp, \|}=\frac{\kappa^{0}}{\sigma_{_{\perp, \|}} T}-S_{_{\perp, \|}}^{2}.
\label{Lorenz}
\end{equation}
Eq.~\ref{Lorenz} clearly shows that in the low temperature regime L consists of a constant
term and a negative term of order $T^2$.
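Explicitly, inserting Eqs.~\ref{Seeb} and \ref{kel} shows that the Lorenz function is fixed by the first three conductance moments alone,
\[
L_{\perp, \|}=\frac{1}{e^{2}T^{2}}\left[\frac{\mathcal{L}_{\perp, \|}^{(2)}(\mu,T)}{\mathcal{L}_{\perp, \|}^{(0)}(\mu,T)}
-\left(\frac{\mathcal{L}_{\perp, \|}^{(1)}(\mu,T)}{\mathcal{L}_{\perp, \|}^{(0)}(\mu,T)}\right)^{2}\right],
\]
so the constant relaxation time $\tau$, which enters all moments in Eq.~\ref{Tcoeff} as a common prefactor, cancels in $L$.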
\section{Results}
\label{results}
To introduce our discussions, in \f{1} the Lorenz function L and
the corresponding electronic thermal conductivity
$\kappa_{el}$ in dependence on the chemical potential $\mu$ are shown
for a spherical two band model (SBM). Varying temperatures at fixed $m_{cb}=m_{vb}$ (cf. \f{1}(a)) and
different effective mass ratios $\nicefrac{m_{cb}}{m_{vb}}$ at fixed temperature $T=\unit[300]{K}$ (cf. \f{1}(b)) are assumed.
Here $m_{cb}$ and $m_{vb}$
are the isotropic effective masses of the conduction band (CB) and valence band (VB), respectively.
Setting the valence band maximum to zero and $E_g$ the band gap size,
the TDF scales as $\mathcal{L}_{VB}^{(0)}(\mu, 0) \sim \sqrt{m_{vb}}(-\mu)^{3/2}$ and
$\mathcal{L}_{CB}^{(0)}(\mu, 0) \sim \sqrt{m_{cb}}(\mu-E_{g})^{3/2}$ for the VB and CB, respectively.
From Eqs.~\ref{kel} and \ref{Lorenz} it is obvious that within a SBM deviations of L and $\kappa_{el}$
from the metallic limit will only occur near the band gap, where the thermopower S changes significantly.
Near the band edges S increases
approximately as $S \sim \frac{-1}{\mu T}$. Thus L, as well as $\kappa_{el}$, attains a minimum, which
decreases with decreasing temperature while shifting towards the middle of the gap (cf. \f{1}(a)).
At $T=\unit[100]{K}$, $\nicefrac{L}{L_0}\sim 0.8$ at the band edges.
In the intrinsic regime $\nicefrac{L}{L_0}$ and $\kappa_{el}$ increase, as the thermopower and electrical
conductivity are reduced due to bipolar contributions.
Figuratively speaking, the additional contribution arises from the fact that electron and holes can
move together in the same direction, transporting energy but not carrying
any net charge~\cite{Goldsmid:1965p15735}.
According to Goldsmid~\cite{Goldsmid:1956p15499} and Price~\cite{Price:1956p15839} the deviation of
the Lorenz number from the metallic limit in the intrinsic regime holds to some extent
$\nicefrac{L}{L_0}=1+\frac{1}{2}\frac{m_{cb}m_{vb}}{(m_{cb}+m_{vb})^2} \left( \frac{E_g}{k_B T}+4 \right)^{2}$.
Therefore, assuming a fixed charge carrier concentration, $\nicefrac{L}{L_0}$ achieves very large values
at small temperatures and/or large band gaps. Assuming the above approaches~\cite{Goldsmid:1956p15499,Price:1956p15839},
together with $m_{cb}=m_{vb}$ and $E_{g}=\unit[0.1]{eV}$ one achieves $\nicefrac{L}{L_0} \sim 9$
at room temperature for $\mu$ located deep in the gap.
If $m_{vb} > m_{cb}$, as shown in \f{1}(b), the intrinsic regime $N_n=N_p$, and with it the maximal value of
$\nicefrac{L}{L_0}$ and $\kappa_{el}$ under bipolar conduction, shifts towards the CBM.
With increasing $m_{vb}$, and hence enhanced electrical conductivity $\sigma$ in the VB, it
is obvious that $\kappa_{el}$ under hole doping will increase, too.
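The behaviour shown in \f{1} can be reproduced with a short numerical sketch of the SBM. The following stdlib Python snippet (illustrative only, not part of the calculations in this work; the constant relaxation time and all common prefactors cancel in the ratio) evaluates the moments of Eq.~\ref{Tcoeff} for two parabolic bands and returns $\nicefrac{L}{L_0}$:

```python
import math

K_B = 8.617333e-5  # Boltzmann constant in eV/K

def lorenz_ratio(mu, T=300.0, m_vb=1.0, m_cb=1.0, E_g=0.1, n_pts=40001):
    """L/L0 in the spherical two-band model: parabolic valence band
    below E = 0 and conduction band above E = E_g, with the transport
    distribution scaling as sqrt(m) |E - band edge|^(3/2)."""
    lo, hi = mu - 40.0 * K_B * T, mu + 40.0 * K_B * T
    dE = (hi - lo) / (n_pts - 1)
    mom = [0.0, 0.0, 0.0]  # generalized conductance moments n = 0, 1, 2
    for i in range(n_pts):
        E = lo + i * dE
        if E < 0.0:
            tdf = math.sqrt(m_vb) * (-E) ** 1.5
        elif E > E_g:
            tdf = math.sqrt(m_cb) * (E - E_g) ** 1.5
        else:
            continue  # no states in the gap
        x = max(-60.0, min(60.0, (E - mu) / (K_B * T)))
        w = tdf * dE / (4.0 * K_B * T * math.cosh(0.5 * x) ** 2)  # tdf * (-df/dE)
        for n in range(3):
            mom[n] += w * (E - mu) ** n
    lorenz = (mom[2] / mom[0] - (mom[1] / mom[0]) ** 2) / T ** 2  # e = 1, E in eV
    return lorenz / (math.pi ** 2 / 3.0 * K_B ** 2)  # normalise to metallic L0

# Wiedemann-Franz is recovered deep in the valence band, while in the
# gap the bipolar contribution strongly enhances the Lorenz function:
print(round(lorenz_ratio(-1.0), 2), round(lorenz_ratio(0.05), 1))
```

Deep in a band the ratio returns to unity, while for $\mu$ inside the gap it rises to values of the order quoted above.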
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{fig2.eps}
\caption{Lorenz function L (solid lines, ref. to left scale) and
electronic contribution $\kappa_{el}$ to the total thermal conductivity (dashed lines, ref. to right scale)
in dependence on position of the chemical potential $\mu$ for
(a) bulk Sb$_2$Te$_3$ (b) (Bi$_2$Te$_3)_{x}/($Sb$_2$Te$_3)_{1-x}$ SL at $x=\frac{2}{6}$
and (c) bulk Bi$_2$Te$_3$. The in-plane (thick lines)
and cross-plane (thin lines) transport directions are compared. The Lorenz function is related to the
metallic limit L$_0=\unit[2.44\times 10^{-8}]{W \Omega/K^{2}}$.
Plotted on to the graph of the Lorenz function
in the in-plane direction is a color code referring to the charge carrier concentration.
The red cross emphasizes the change from n to p doping.
The temperature was fixed to $\unit[300]{K}$. Thin vertical dash-dotted lines
emphasize the position of the chemical potential for a
charge carrier concentration of $N = \unit[3\times 10^{19}]{cm^{-3}}$ under
p and n doping (red and blue color). The grey shaded areas show the band gap.
Green open circles in (c) show experimental results from Ref.~\cite{Goldsmid:1965p15735}
for $\kappa_{el,\|}$ for an n-type Bi$_{2}$Te$_{3}$~ single crystal.}
\label{fig:2}
\end{figure}
\f{2} presents first-principles calculations for the Lorenz function L and the
related electronic part $\kappa_{el}$ of the thermal conductivity.
The dependence on the charge carrier concentration for (a) Sb$_{2}$Te$_{3}$, (b) a
(Bi$_2$Te$_3)_{x}/($Sb$_2$Te$_3)_{1-x}$ SL at $x=\nicefrac{2}{6}$ and (c) Bi$_{2}$Te$_{3}$,
is shown, respectively. Due to the high conductivity anisotropy $\nicefrac{\sigma_{\|}}{\sigma_{\perp}}>1$
for all of the considered systems\cite{Hinsche:2011p15707,ArxivSL}, $\kappa_{el,\perp}$
is strongly suppressed compared to $\kappa_{el,\|}$, too. Furthermore, it is obvious
that the maximal peak of the Lorenz function is shifted towards the CBM, the latter stemming from
a larger density of states at the VBM and a higher absolute hole electrical conductivity. Maximal values
of the Lorenz function $\nicefrac{L}{L_0}$ in the intrinsic regime were found to be between
6 and 10 for the considered systems, showing only a slight directional anisotropy.
For Sb$_{2}$Te$_{3}$~ (cf. \f{2}(a)) the Lorenz function exhibits only minor anisotropies $\nicefrac{L_{\|}}{L_{\perp}}$
in a wide doping range, while stating $L_{\perp} \sim 1.15 L_{\|}$ at increased electron doping.
Reduction of $\nicefrac{L_{\|}}{L_{\perp}}$ due to bipolar diffusion effects is more apparent at
hole doping compared to electron doping, here showing in-plane $\nicefrac{L}{L_0} \sim 0.75$ and
$\nicefrac{L}{L_0} \sim 0.92$ at an electron and hole doping of $N = \unit[3\times 10^{19}]{cm^{-3}}$, respectively.
For bulk Bi$_{2}$Te$_{3}$~ the picture is comparable. However, in the thermoelectrically most interesting range,
about $\unit[200]{meV}$ around the band edges, $\nicefrac{L_{\|}}{L_{\perp}}$ is always less than unity,
comparable to previous publications~\cite{Huang:2008p559}.
Furthermore $L_{\perp}$ is larger than the metallic limit $L_0$. Very often values of $L_{\perp} \sim 0.5-0.6$~\cite{Beyer:2002p15267,Venkatasubramanian:2001p114}
are assumed for the experimental determination of $\kappa_{el,\perp}$ in Bi$_{2}$Te$_{3}$/Sb$_{2}$Te$_{3}$ SLs. In turn,
this most probably leads to an underestimation of the electrical contribution to total thermal conductivity
in cross-plane direction. For bulk Bi$_{2}$Te$_{3}$~ experimental values~\cite{Goldsmid:1965p15735}
for the in-plane part $\kappa_{el,\|}$ are available as a reference in \f{2}(c) (green, open circles).
We find very good accordance to our calculations in the intrinsic range,
while our results slightly overestimate $\kappa_{el,\|}$ in the extrinsic regime.
Strong deviations for the Lorenz function from the bulk limit could be found for an electron conducting (Bi$_2$Te$_3)_{x}$/(Sb$_2$Te$_3)_{1-x}$ SL~
at $x=\nicefrac{2}{6}$, i.e. (10\AA/20\AA)-(Bi$_{2}$Te$_{3}$/Sb$_{2}$Te$_{3}$).
In a recent publication~\cite{ArxivSL} we showed that strong quantum well effects (QWE) in the CB of the SLs lead to
an enhanced electrical conductivity anisotropy. The latter was most pronounced for the SL at $x=\nicefrac{2}{6}$,
showing $\nicefrac{\sigma_{\|}}{\sigma_{\perp}} \sim 20$ at electron doping of $N = \unit[3\times 10^{19}]{cm^{-3}}$.
Caused by the QWE, the cross-plane electrical conductivity $\sigma_{\perp}$ is drastically suppressed,
and hence $L_{\perp}$ remarkably enhanced.
The cross-plane Lorenz function obtains rather large values between $\nicefrac{L}{L_0} \sim 1.5-1.8$
at extrinsic carrier concentrations of about $N = \unit[3-30\times 10^{19}]{cm^{-3}}$ for hole and electron
doping, respectively. Additionally,
oscillations of $\nicefrac{L}{L_0}$ with varying doping are found, which are much more pronounced than
in the bulk materials. Both effects have been proposed within a 1-dimensional model for
thermoelectric SLs before~\cite{Bian:2007p15769}. As expected, in the extrinsic region,
at increasing charge carrier concentration, $L$ saturates gradually towards the metallic limit $L_0$
and the thermal conductivity rises with electrical conductivity.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{fig3.eps}
\caption{Cross-plane component of the Lorenz function L$_{\perp}$
for (Bi$_2$Te$_3)_{x}/($Sb$_2$Te$_3)_{1-x}$ superlattices in dependence
on the superlattice period. The temperature is fixed to $\unit[300]{K}$ and results for three
different charge carrier concentrations (in units of $\unit[]{cm^{-3}}$)
are compared. (a) refers to electron doping, while (b) refers to hole doping.
The Lorenz function is related to the metallic limit L$_0=\unit[2.44\times 10^{-8}]{W \Omega/K^{2}}$.
Lines are guides to the eye.}
\label{fig:3}
\end{figure}
To support our findings, in \f{3} the cross-plane Lorenz function $L_{\perp}$ at different
SL periods is shown. The influence of varying (a) electron and (b) hole doping is given, respectively.
Under hole doping, due to the vanishing band-offset at the VBM~\cite{ArxivSL}, the Lorenz function behaves as a smooth interpolation
between the bulk limits (cf. \f{3}(b)). At lower charge carrier concentrations $\nicefrac{L_{\perp}}{L_0}$
is more suppressed due to a stronger impact of the bipolar diffusion.
Under varying electron doping we find $\nicefrac{L_{\perp}}{L_0}$ to be remarkably enhanced for
SL periods of $x=\nicefrac{2}{6}$ and $x=\nicefrac{4}{6}$, respectively. For those SL periods
large suppressions of the cross-plane electrical conductivity $\sigma_{\perp}$ were found, too.
At $N = \unit[3\times 10^{19}]{cm^{-3}}$ anisotropies as large as $\nicefrac{\sigma_{\|}}{\sigma_{\perp}} \sim 20$ and
$\nicefrac{\sigma_{\|}}{\sigma_{\perp}} \sim 14$ for SL periods of $x=\nicefrac{2}{6}$ and $x=\nicefrac{4}{6}$
are reported, respectively \cite{ArxivSL}. We conclude that, due to quantum confinement effects
in the electron-conducting SLs, counterintuitive deviations of the Lorenz function from $L_0$
can occur even at higher values of doping.
The latter could lead to incorrect estimates of the electronic part of the thermal
conductivity $\kappa_{el}$ and consequently
of the lattice thermal conductivity $\kappa_{ph}$.
\section{Conclusion}
We presented first principles calculation for the Lorenz function of electron- and
hole-conducting Bi$_{2}$Te$_{3}$/Sb$_{2}$Te$_{3}$ superlattices and the related bulk materials at varying charge carrier concentration.
As expected, due to bipolar conduction, the Lorenz function increases to large values within
the intrinsic doping regime.
More significantly, the Lorenz function L of the superlattices does
not change monotonically at extrinsic charge
carrier concentrations. While at increased doping an asymptotic convergence of L towards
the metallic limit $L_0$ is found, a distinct oscillatory behaviour of L is observed.
This is most pronounced under electron doping and caused by quantum well effects in the
conduction bands of the superlattices.
This counterintuitive effect has consequences for
the determination of the thermal conductivity, as
L is generally used to separate $\kappa_{el}$ and $\kappa_{ph}$.
At thermoelectrically profitable charge carrier concentrations the
application of the metallic value
$L_0$ to determine the electronic thermal conductivity could lead to a deviation of a factor of two
in both directions in the worst case.
Consequently, this leads to incorrect estimates of the lattice thermal contribution and the
figure of merit. A similar behaviour was found theoretically for p-type SiGe superlattices~\cite{ArxivSiGe}
and this behaviour could be a general effect in thermoelectric superlattices influenced by
quantum well effects.
\begin{acknowledgements}
This work was supported by the Deutsche For\-schungsgemeinschaft, SPP 1386 `Nanostrukturierte Thermoelektrika:
Theorie, Modellsysteme und kontrollierte Synthese'. N. F. Hinsche is
member of the International Max Planck Research School for Science and Technology of Nanostructures.
\end{acknowledgements}
\section{Introduction and results}
\noindent{\em
Let $\G=\gnm$ denote the random graph on the vertex set $\brk n=\cbc{1,\ldots,n}$ with precisely $m$ edges.
Unless specified otherwise, we assume that $m=m(n)=\lceil dn/2\rceil$ for a fixed number $d>0$.
As usual, $\gnm$ has a property $\cA$ ``with high probability'' (``w.h.p.'') if $\lim_{n\to\infty}\pr\brk{\gnm\in\cA}=1$.
}
\subsection{Background and motivation}
Going back to the seminal paper of Erd\H{o}s\ and R\'enyi~\cite{ER} that founded the theory of random graphs,
the problem of coloring $\gnm$ remains one
of the longest-standing challenges in probabilistic combinatorics.
Over the past half-century, efforts have been devoted to determining the likely value of the chromatic number
$\chi(\gnm)$~\cite{AchNaor,BBColor,LuczakColor,Matula} and its concentration~\cite{AlonKriv,Luczak,ShamirSpencer}
as well as to algorithmic problems such as constructing or sampling colorings of the random graph~\cite{AchMolloy,DyerFr10,Efthy14,Efthy12,GMcD,KSud}.
A tantalising feature of the random graph coloring problem is the interplay between local and global effects.
{\em Locally} around almost any vertex the random graph is bipartite w.h.p.\
In fact, for any fixed average degree $d>0$ and for any fixed $\omega$ the depth-$\omega$ neighborhood
of all but $o(n)$ vertices is just a tree w.h.p.\
Yet {\em globally} the chromatic number of the random graph may be large.
Indeed, for any number $k\geq3$ of colors there exists a {\em sharp threshold sequence} $d_{k-\mathrm{col}}=d_{k-\mathrm{col}}(n)$
such that for any fixed $\eps>0$, $\gnm$ is $k$-colorable w.h.p.\ if $2m/n<d_{k-\mathrm{col}}(n)-\eps$,
whereas the random graph fails to be $k$-colorable w.h.p.\ if $2m/n>d_{k-\mathrm{col}}(n)+\eps$~\cite{AchFried}.
Whilst the thresholds $d_{k-\mathrm{col}}$ are not known precisely, there are close upper and lower bounds.
The best current ones read
\begin{equation}\label{eqdk}
d_{k,\mathrm{cond}}=(2k-1)\ln k-2\ln 2+\delta_k\leq\liminf_{n\to\infty}d_{k-\mathrm{col}}(n)\leq \limsup_{n\to\infty}d_{k-\mathrm{col}}(n)\leq(2k-1)\ln k-1+\eps_k,
\end{equation}
where $\lim_{k\to\infty}\delta_k=\lim_{k\to\infty}\eps_k=0$~\cite{AchNaor,Covers,Danny}.
To be precise,
the lower bound in~(\ref{eqdk})
is formally defined as
\begin{equation}\label{eqdc}
d_{k,\mathrm{cond}}=\inf\cbc{d>0:\limsup_{n\to\infty}\Erw[Z_{k}(\gnm)^{1/n}]<k(1-1/k)^{d/2}}.
\end{equation}
This number, called the {\em condensation threshold} due to a connection with statistical physics~\cite{pnas},
can be computed precisely for $k$ exceeding a certain constant $k_0$~\cite{Cond}.
An asymptotic expansion yields the expression in~(\ref{eqdk}).
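Dropping the $o(1)$ terms $\delta_k$ and $\eps_k$, the two bounds in~(\ref{eqdk}) differ by the constant $2\ln 2-1\approx0.386$; a quick numerical sketch:

```python
# Numeric sketch of the asymptotic bounds in (eqdk), with the o(1) terms
# delta_k and eps_k dropped: the gap is the constant 2 ln 2 - 1.
import math

def lower_bound(k):
    return (2 * k - 1) * math.log(k) - 2 * math.log(2)

def upper_bound(k):
    return (2 * k - 1) * math.log(k) - 1

for k in (3, 10, 100):
    print(k, round(lower_bound(k), 3), round(upper_bound(k), 3))
```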
The contrast between local and global effects was famously pointed out by Erd\H{o}s,
who produced $\gnm$ as an example of a graph that simultaneously has a high chromatic number and a high girth~\cite{Erdos}.
The present paper aims at a more precise understanding of this collusion between short-range and long-range effects.
For instance, do global effects entail ``invisible'' constraints on the colorings of the local neighborhoods so that certain ``local'' colorings do not
extend to a coloring of the entire graph?
And what correlations do typically exist between the colors of vertices at a large distance?
A natural way of formalising these questions is as follows.
Let $k\geq3$ be a number of colors, fix some number $\omega>0$
and assume that $d<d_{k,\mathrm{cond}}$ so that $\G=\gnm$ is $k$-colorable w.h.p.\
Moreover, pick a vertex $v_0$ and fix a $k$-coloring $\sigma_0$ of its depth-$\omega$ neighborhood.
How many ways are there to extend $\sigma_0$ to a $k$-coloring of the entire graph, and how does this number depend on $\sigma_0$?
Additionally, if we pick a vertex $v_1$ that is ``far away'' from $v_0$ and if we pick another $k$-coloring $\sigma_1$ of the
depth-$\omega$ neighborhood of $v_1$, is there a $k$-coloring $\sigma$ of $\G$ that simultaneously extends both $\sigma_0$ and $\sigma_1$?
If so, how many such $\sigma$ exist, and how does this depend on $\sigma_0,\sigma_1$?
The main result of this paper (\Thm~\ref{Thm_xlwc} below) provides a very neat and accurate answer to these questions.
It shows that w.h.p.\ all ``local'' $k$-colorings $\sigma_0$ extend
to {\em asymptotically the same} number of $k$-colorings of the entire graph.
Let us write $\cS_k(G)$ for the set
of all $k$-colorings of a graph $G$ and
let $Z_{k}(G)=|\cS_k(G)|$ be the number of $k$-colorings.
Moreover, let $\partial^\omega(G,v_0)$ be the depth-$\omega$ neighborhood of a vertex $v_0$ in $G$
(i.e., the subgraph of $G$ obtained by deleting all vertices at distance greater than $\omega$ from $v_0$).
Then w.h.p.\ any $k$-coloring $\sigma_0$ of $\partial^\omega(\G,v_0)$ has $$\frac{(1+o(1))Z_{k}(\G)}{Z_{k}(\partial^\omega(\G,v_0))}$$ extensions
to a $k$-coloring of $\G$.
Moreover, if we pick another vertex $v_1$ at random and fix some $k$-coloring $\sigma_1$ of the depth-$\omega$ neighborhood of $v_1$,
then w.h.p.\ the number of joint extensions of $\sigma_0,\sigma_1$ is
$$\frac{(1+o(1))Z_{k}(\G)}{Z_{k}(\partial^\omega(\G,v_0))Z_{k}(\partial^\omega(\G,v_1))}.$$
In other words, if we choose a $k$-coloring $\SIGMA$ uniformly at random, then the distribution of the $k$-coloring
that $\SIGMA$ induces on the subgraph $\partial^\omega(\G,v_0)\cup\partial^\omega(\G,v_1)$, which is a forest w.h.p., is asymptotically uniform.
The same statement extends to any fixed number of vertices $v_0,\ldots,v_l$.
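As a sanity check of the extension-counting heuristic (on a toy graph, not the random graph): summing the number of extensions over all local colorings $\sigma_0$ gives $Z_k(G)$ exactly, so the {\em average} number of extensions is exactly $Z_k(G)/Z_k(\partial^\omega(G,v_0))$; the content of the theorem is that on $\gnm$ every individual $\sigma_0$ attains $(1+o(1))$ times this average. A brute-force check on a $6$-cycle:

```python
# Brute force on a tiny graph (6-cycle, k = 3): the average number of
# extensions of a coloring of the depth-1 ball of v0 = 0 equals
# Z_k(G) / Z_k(ball) exactly.
from itertools import product

k, n = 3, 6
edges = [(i, (i + 1) % n) for i in range(n)]        # the 6-cycle
ball = [5, 0, 1]                                    # depth-1 ball of v0 = 0
ball_edges = [(0, 1), (1, 2)]                       # path edges, ball indices

def proper(c, edge_list):
    return all(c[u] != c[v] for u, v in edge_list)

colorings = [c for c in product(range(k), repeat=n) if proper(c, edges)]
Z_G = len(colorings)                                # = 2^6 + 2 = 66 for C_6

local = [c for c in product(range(k), repeat=3) if proper(c, ball_edges)]
Z_ball = len(local)                                 # = k*(k-1)^2 = 12

ext = {}
for c in colorings:
    key = tuple(c[v] for v in ball)
    ext[key] = ext.get(key, 0) + 1

avg = sum(ext.values()) / Z_ball                    # average #extensions
print(Z_G, Z_ball, avg, Z_G / Z_ball)
```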
\subsection{Results}
The appropriate formalism for describing the limiting behavior of the local structure of the random graph is the
concept of \emph{local weak convergence}~\cite{Aldous,BenjaminiSchramm}.
The concrete instance of the formalism that we employ is reminiscent of that used in~\cite{BST,MMS}.
(\Cor~\ref{Cor_xlwc} below provides a statement that is equivalent to the main result but that avoids the formalism of local weak convergence.)
Let $\mathfrak G$ be the set of all locally finite connected graphs whose vertex set is a countable subset of $\RR$.
Further, let $\mathfrak G_k$ be the set of all triples $(G,v_0,\sigma)$ such that $G\in\mathfrak G$, $\sigma:V(G)\to[k]$ is a $k$-coloring of $G$ and $v_0\in V(G)$ is a
distinguished vertex that we call the {\em root}.
We refer to $(G,v_0,\sigma)$ as a {\em rooted $k$-colored graph}.
If $(G',v_0',\sigma')$ is another rooted $k$-colored graph,
we call $(G,v_0,\sigma)$ and $(G',v_0',\sigma')$ {\em isomorphic} ($(G,v_0,\sigma)\ism(G',v_0',\sigma')$)
if there is an isomorphism $\varphi:G\to G'$ such that $\varphi(v_0)=v_0'$, $\sigma=\sigma'\circ\varphi$
and $\varphi(v)<\varphi(w)$ for any $v,w\in V(G)$ with $v<w$.
Thus, $\varphi$ preserves the root, the coloring and the order of the vertices (which are reals).
Let $[G,v_0,\sigma]$ be the isomorphism class of $(G,v_0,\sigma)$ and
let $\cG_k$ be the set of all isomorphism classes of rooted $k$-colored graphs.
For an integer $\omega\geq0$ and $\Gamma\in\cG_k$
we let $\partial^\omega\Gamma$ denote the isomorphism class of the rooted $k$-colored graph obtained from $\Gamma$ by
deleting all vertices whose distance from the root exceeds $\omega$.
Then any $\Gamma$, $\omega\geq0$ give rise to a function
\begin{equation}\label{eqtopology}
\cG_k\to\cbc{0,1},\qquad\Gamma'\mapsto\vec{1}\cbc{\partial^\omega\Gamma'= \partial^\omega\Gamma}.
\end{equation}
We endow $\cG_k$ with the coarsest topology that makes all of these functions continuous.
Further, for $l\geq1$ we equip $\cG_k^l$ with the corresponding product topology.
Additionally, the set $\cP(\cG_k^l)$ of probability measures on $\cG_k^l$ carries the weak topology,
as does the set $\cP^2(\cG_k^l)$ of all probability measures on $\cP(\cG_k^l)$.
The spaces $\cG_k^l,\cP(\cG_k^l),\cP^2(\cG_k^l)$ are Polish~\cite{Aldous}.
For $\Gamma\in\cG_k$ we denote by $\atom_{\Gamma}\in\cP(\cG_k)$ the Dirac measure that puts mass one on $\Gamma$.
Let $G$ be a finite $k$-colorable graph whose vertex set $V(G)$ is contained in $\RR$ and let $v_1,\ldots,v_l\in V(G)$.
Then we can define a probability measure on $\cG_k^l$ as follows.
Letting $ G \| v$ denote the connected component of $v\in V(G)$ and $\sigma \| v$ the restriction of $\sigma:V(G)\to[k]$ to $G\|v$, we define
\begin{equation}\label{eqempirical}
\lambda\bc{G,v_1,\ldots,v_l}=\frac1{Z_{k}(G)}\sum_{\sigma\in \cS_k(G)}\bigotimes_{i=1}^l\atom_{[G\|v_i,v_i,\sigma\|v_i]}\in\cP(\cG_k^l).
\end{equation}
The idea is that $\lambda\bc{G,v_1,\ldots,v_l}$ captures the joint empirical distribution
of colorings induced by a random coloring of $G$ ``locally'' in the vicinity of the ``roots'' $v_1,\ldots,v_l$.
Further, let
$$\vec\lambda_{n,m,k}^l=\frac1{n^l}\sum_{v_1,\ldots,v_l\in[n]}\Erw[\atom_{\lambda\bc{\gnm,v_1,\ldots,v_l}}|\chi(\gnm)\leq k]\in\cP^2(\cG_k^l).$$
This measure captures the typical distribution of the local colorings in a random graph with $l$ randomly chosen roots.
We are going to determine the limit of $\vec\lambda_{n,m,k}^l$ as $n\to\infty$.
To characterise this limit, let $\T^*(d)$ be a (possibly infinite) random Galton-Watson tree rooted at a vertex $v_0^*$ with offspring distribution ${\rm Po}(d)$.
We embed $\T^*(d)$ into $\RR$ by independently mapping each vertex to a uniformly random point in $[0,1]$;
with probability one, all vertices get mapped to distinct points.
Let $\T(d)\in\mathfrak G$ signify the resulting random tree and let $v_0$ denote its root.
For a number $\omega>0$ we let $\partial^\omega\T(d)$
denote the (finite) rooted tree obtained from $\T(d)$ by removing all vertices at a distance greater than $\omega$ from $v_0$.
Moreover, for $l\geq1$ let $\T^{1}(d),\ldots,\T^{l}(d)$ be $l$ independent copies of $\T(d)$ and set
\begin{align}\label{eqTreeSeq}
\vec\thet_{d,k}^l\brk\omega&=\Erw\brk{\atom_{\bigotimes_{i\in[l]}\lambda\bc{\partial^\omega\T^{i}(d)}}}\in \cP^2(\cG_k^l),
&\mbox{where}\\
\lambda\bc{\partial^\omega\T^{i}(d)}&=\frac1{Z_{k}(\partial^\omega\T^{i}(d))}\sum_{\sigma\in\cS_k(\partial^\omega\T^{i}(d))}
\atom_{[\partial^\omega\T^{i}(d),v_0,\sigma]}\in\cP(\cG_k)&\mbox{ (cf.~(\ref{eqempirical})).}
\nonumber
\end{align}
The sequence $(\vec\thet_{d,k}^l\brk\omega)_{\omega\geq1}$ converges (see Appendix~\ref{Sec_TreeSeq}) and we let
$$\vec\thet_{d,k}^l=\lim_{\omega\to\infty}\vec\thet_{d,k}^l\brk\omega.$$
Combinatorially, $\vec\thet_{d,k}^l$
corresponds to sampling $l$ copies of the Galton-Watson tree $\T(d)$ independently.
These trees are colored by assigning a random color to each of the $l$ roots independently and proceeding down each tree
by independently choosing a color for each vertex from the $k-1$
colors left unoccupied by the parent.
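The broadcast description above can be sketched as follows; this is a minimal illustration, not part of the proofs (the Poisson sampler and the dictionary representation are implementation choices). On a finite tree the broadcast measure -- root uniform in $[k]$, each child uniform among the $k-1$ colors unequal to its parent -- coincides with the uniform distribution over proper $k$-colorings.

```python
import math
import random

def colored_gw_tree(d, k, omega, rng=random):
    """One depth-omega Po(d) Galton-Watson tree with a broadcast coloring."""
    children, color = {0: []}, {0: rng.randrange(k)}
    frontier, next_id = [(0, 0)], 1                  # (vertex, depth) pairs
    while frontier:
        v, depth = frontier.pop()
        if depth == omega:
            continue
        # Sample Po(d) offspring via the multiplicative waiting-time trick.
        m, t, thresh = 0, rng.random(), math.exp(-d)
        while t > thresh:
            t *= rng.random()
            m += 1
        for _ in range(m):
            c = rng.randrange(k - 1)                 # uniform over the k-1
            if c >= color[v]:                        # colors != parent's
                c += 1
            children[next_id], color[next_id] = [], c
            children[v].append(next_id)
            frontier.append((next_id, depth + 1))
            next_id += 1
    return children, color

children, color = colored_gw_tree(d=2.5, k=4, omega=3)
```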
\begin{theorem}\label{Thm_xlwc}
There is a number $k_0>0$ such that for all $k\geq k_0$, $d<d_{k,\mathrm{cond}}$, $l>0$ we have
$\lim_{n\to\infty}\vec\lambda_{n,m,k}^{l}=\vec\thet_{d,k}^{l}$.
\end{theorem}
Fix numbers $\omega\geq1$, $l\geq1$, choose a random graph $\G=\gnm$ for some large enough $n$ and choose vertices
$\vec v_1,\ldots,\vec v_l$ uniformly and independently at random.
Then the depth-$\omega$ neighborhoods $\partial^\omega(\G,\vec v_1),\ldots,\partial^\omega(\G,\vec v_l)$
are pairwise disjoint and the union $\cF=\partial^\omega(\G,\vec v_1)\cup\cdots\cup\partial^\omega(\G,\vec v_l)$ is a forest w.h.p.\
Moreover, the distance between any two trees in $\cF$ is $\Omega(\ln n)$ w.h.p.\
Given that $\G$ is $k$-colorable, let $\SIGMA$ be a random $k$-coloring of $\G$.
Then $\SIGMA$ induces a $k$-coloring of the forest $\cF$.
\Thm~\ref{Thm_xlwc} implies that w.h.p.\ the distribution of the induced coloring is at a total variation distance $o(1)$
from the uniform distribution on the set of all $k$-colorings of $\cF$.
Formally, let us write $\mu_{k,G}$ for the probability distribution on $[k]^{V(G)}$ defined by
\begin{align*}
\mu_{k,G}(\sigma)&=\vec{1}\cbc{\sigma\in\cS_k(G)}Z_{k}(G)^{-1}&(\sigma\in[k]^{V(G)}),
\end{align*}
i.e., the uniform distribution on the set of $k$-colorings of the graph $G$.
Moreover, for $U\subset V(G)$ let
$\mu_{k,G|U}$ denote the projection of $\mu_{k,G}$ onto $[k]^U$, i.e.,
\begin{align*}
\mu_{k,G|U}(\sigma_0)&=\mu_{k,G}\bc{\cbc{\sigma\in[k]^V:\forall u\in U:\sigma(u)=\sigma_0(u)}}&(\sigma_0\in[k]^U).
\end{align*}
If $H$ is a subgraph of $G$, then we just write $\mu_{k,G|H}$ instead of $\mu_{k,G|V(H)}$.
Let $\TV\nix$ denote the total variation norm.
\begin{corollary}\label{Cor_xlwc}
There is a constant $k_0>0$ such that
for any $k\geq k_0$, $d<d_{k,\mathrm{cond}}$, $l\geq1$, $\omega\geq0$ we have
\begin{align*}
\lim_{n\to\infty}\frac1{n^l}\sum_{v_1,\ldots,v_l\in[n]}
\Erw\TV{\mu_{k,\G|\partial^\omega(\G,v_1)\cup\cdots\cup\partial^\omega(\G,v_l)}
-\mu_{k,\partial^\omega(\G,v_1)\cup\cdots\cup\partial^\omega(\G,v_l)}}=0.
\end{align*}
\end{corollary}
Since w.h.p.\ the pairwise distance of $l$ randomly chosen vertices $v_1,\ldots,v_l$ in $\G$ is $\Omega(\ln n)$,
we observe that w.h.p.
$$\mu_{k,\partial^\omega(\G,v_1)\cup\cdots\cup\partial^\omega(\G,v_l)}=\bigotimes_{i\in[l]}\mu_{k,\partial^\omega(\G,v_i)}.$$
With very little work it can be verified that \Cor~\ref{Cor_xlwc} is actually equivalent to \Thm~\ref{Thm_xlwc}.
Setting $\omega=0$ in \Cor~\ref{Cor_xlwc} yields the following statement, which is of interest in its own right.
\begin{corollary}\label{Cor_decay}
There is a number $k_0>0$ such that for all $k\geq k_0$, $d<d_{k,\mathrm{cond}}$ and any integer $l>0$ we have
\begin{equation}\label{eqCor_decay}
\lim_{n\to\infty}\frac1{n^l}\sum_{v_1,\ldots,v_l\in[n]}
\Erw\TV{\mu_{k,\G|\cbc{v_1,\ldots,v_l}}-\bigotimes_{i\in\brk l}\mu_{k,\G|\cbc{v_i}}}=0.
\end{equation}
\end{corollary}
By the symmetry of the colors, $\mu_{k,\G|\cbc v}$ is just the uniform distribution on $\brk k$ for every vertex $v$.
Hence, \Cor~\ref{Cor_decay} states that for $d<d_{k,\mathrm{cond}}$ w.h.p.\ in the random graph $\G$ for randomly chosen
vertices $\vec v_1,\ldots,\vec v_l$ the following is true:
if we choose a $k$-coloring $\SIGMA$ of $\G$ at random, then $(\SIGMA(\vec v_1),\ldots,\SIGMA(\vec v_l))\in[k]^l$
is asymptotically uniformly distributed.
Prior results of Montanari and Gershenfeld~\cite{GM} and of Montanari, Restrepo and Tetali~\cite{Prasad}
imply that (\ref{eqCor_decay}) holds for $d<2(k-1)\ln(k-1)$, about an additive $\ln k$ below $d_{k,\mathrm{cond}}$.
The above results and their proofs are inspired by ideas from statistical physics.
More specifically, physicists have developed a
non-rigorous but analytic technique, the so-called ``cavity method''~\cite{MM}, which has led to various conjectures on the random graph coloring problem.
These include a prediction as to the precise value of $d_{k,\mathrm{cond}}$ for any $k\geq3$~\cite{LenkaFlorent} as well as
a conjecture as to the precise value of the $k$-colorability threshold $d_{k-\mathrm{col}}$~\cite{KPW}.
While the latter formula is complicated, asymptotically we expect that $d_{k-\mathrm{col}}=(2k-1)\ln k-1+\eps_k$, where $\lim_{k\to\infty}\eps_k=0$.
According to this conjecture, the upper bound in~(\ref{eqdk}) is asymptotically tight and $d_{k-\mathrm{col}}$ is strictly greater than $d_{k,\mathrm{cond}}$.
Furthermore, according to the physics considerations, (\ref{eqCor_decay}) holds for any $k\geq3$ and any $d<d_{k,\mathrm{cond}}$~\cite{pnas}.
\Cor~\ref{Cor_decay} verifies this conjecture for $k\geq k_0$.
By contrast, according to the physics predictions, (\ref{eqCor_decay}) does {\em not} hold for $d_{k,\mathrm{cond}}<d<d_{k-\mathrm{col}}$.
As (\ref{eqCor_decay}) is the special case of $\omega=0$ of \Thm~\ref{Thm_xlwc} (resp.\ \Cor~\ref{Cor_xlwc}), the conjecture implies
that neither of these extend to $d>d_{k,\mathrm{cond}}$.
In other words, the physics picture suggests that \Thm~\ref{Thm_xlwc}, \Cor~\ref{Cor_xlwc} and \Cor~\ref{Cor_decay}
are \emph{optimal}, except that the assumption $k\geq k_0$ can possibly be replaced by $k\geq3$.
\subsection{An application}
Suppose we draw a $k$-coloring $\SIGMA$ of $\G$ at random.
Of course, the colors that $\SIGMA$ assigns to the neighbors of a vertex $v$ and the color of $v$ are correlated
(they must be distinct).
More generally, it seems reasonable to expect that for any {\em fixed} ``radius'' $\omega$ the colors assigned to the
vertices at distance $\omega$ from $v$ and the color of $v$ itself will typically be correlated.
But will these correlations persist as $\omega\to\infty$?
This is the ``reconstruction problem'', which has received considerable attention in the context
of random constraint satisfaction problems in general and in random graph coloring in particular~\cite{pnas,Prasad,SlyReconstruction}.
To illustrate the use of \Thm~\ref{Thm_xlwc} we will show how it readily implies the result on the reconstruction
problem for random graph coloring from~\cite{Prasad}.
To formally state the problem, assume that $G$ is a finite $k$-colorable graph.
For $v\in V(G)$ and a subset $\emptyset\neq{\mathcal R}\subset\cS_k(G)$ let $\mu_{k,G|v}(\nix|\cU)$ be the probability distribution on $\brk k$ defined by
\begin{align*}
\mu_{k,G|v}(i|{\mathcal R})&=\frac1{\abs{\mathcal R}}\sum_{\sigma\in{\mathcal R}}\vec{1}\cbc{\sigma(v)=i},
\end{align*}
i.e., the distribution of the color of $v$ in a random coloring $\sigma\in{\mathcal R}$.
For $v\in V(G)$, $\omega\geq1$ and $\sigma_0\in\cS_k(G)$ let
$${\mathcal R}_{k,G}(v,\omega,\sigma_0)=\cbc{\sigma\in\cS_k(G):\forall u\in V(G)\setminus\partial^{\omega-1}(G,v):\sigma(u)=\sigma_0(u)}.$$
Thus, ${\mathcal R}_{k,G}(v,\omega,\sigma_0)$ contains all $k$-colorings that coincide with $\sigma_0$ on vertices whose distance from $v$ is
{\em at least} $\omega$.
Moreover, let
\begin{align*}
\corr_{k,G}(v,\omega,\sigma_0)&=\frac12\sum_{i\in [k]}\left| \mu_{k,G|v}(i|{\mathcal R}_{k,G}(v,\omega,\sigma_0))
-\frac1k\right |, &
\corr_{k,G}(v,\omega)&=\frac1{Z_{k}(G)}\sum_{\sigma_0\in\cS_k(G)}\corr_{k,G}(v,\omega,\sigma_0).
\end{align*}
Clearly, for symmetry reasons, if we draw a $k$-coloring $\SIGMA\in\cS_k(G)$ uniformly at random,
then $\SIGMA(v)$ is uniformly distributed over $\brk k$.
What $\corr_{k,G}(v,\omega,\sigma_0)$ measures is how much conditioning on the event $\SIGMA\in{\mathcal R}_{k,G}(v,\omega,\sigma_0)$ biases the color of $v$.
Accordingly, $\corr_{k,G}(v,\omega)$ measures the bias induced by a {\em random} ``boundary condition'' $\sigma_0$.
We say that \emph{non-reconstruction} occurs in $\gnm$ if
$$\lim_{\omega\to\infty}\lim_{n\to\infty}\frac1n\sum_{v\in[n]}\Erw[\corr_{k,\gnm}(v, \omega)]=0.$$
Otherwise, \emph{reconstruction} occurs.
Analogously,
recalling that $\T(d)$ is the Galton-Watson tree rooted at $v_0$, we say that
\emph{tree non-reconstruction} occurs at $d$ if
$\lim_{\omega\to\infty}\Erw[\corr_{k,\partial^\omega\T(d)}(v_0, \omega )]=0.$
Otherwise, \emph{tree reconstruction} occurs.
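The quantity $\corr$ can be evaluated by brute force on toy instances. The following sketch (a path with $k$ colors, standing in for a tree branch; not part of the paper's arguments) conditions on the color of the far endpoint at distance $L$ and measures the induced bias at the root; on a path the bias decays geometrically in $L$.

```python
# Brute-force sketch of corr on a path P_L with k colors: condition on the
# "boundary" color at the far endpoint and measure the bias at the root.
from itertools import product

def corr_path(k, L):
    """Average root-color bias given the endpoint color of a path P_L."""
    colorings = [c for c in product(range(k), repeat=L + 1)
                 if all(c[i] != c[i + 1] for i in range(L))]
    bias = 0.0
    for b in range(k):                    # boundary condition at vertex L
        fixed = [c for c in colorings if c[L] == b]
        for i in range(k):
            p = sum(c[0] == i for c in fixed) / len(fixed)
            bias += abs(p - 1 / k) / 2
    return bias / k                       # average over boundary colors

for L in (1, 2, 3, 4):
    print(L, corr_path(3, L))             # decays geometrically in L
```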
\begin{corollary}\label{Cor_reconstr}
There is a number $k_0>0$ such that for all $k\geq k_0$ and $d<d_{k,\mathrm{cond}}$ the following is true.
\begin{equation}\label{eqCor_reconstr}
\parbox{12cm}{Reconstruction occurs in $\gnm$ $\Leftrightarrow$ tree reconstruction occurs at $d$.}
\end{equation}
\end{corollary}
Montanari, Restrepo and Tetali~\cite{Prasad} proved~(\ref{eqCor_reconstr}) for $d<2(k-1)\ln(k-1)$, about an additive $\ln k$ below $d_{k,\mathrm{cond}}$.
This gap could be plugged by invoking recent results on the geometry of the set of $k$-colorings~\cite{Silent,Covers,Molloy}.
However, we shall see that \Cor~\ref{Cor_reconstr} is actually an immediate consequence of \Thm~\ref{Thm_xlwc}.
The point of \Cor~\ref{Cor_reconstr} is that it reduces the reconstruction problem on a combinatorially extremely
intricate object, namely the random graph $\gnm$, to the same problem on a much simpler structure,
namely the Galton-Watson tree $\T(d)$.
That said, the reconstruction problem on $\T(d)$ is far from trivial.
The best current bounds show that there exists a sequence $(\delta_k)_k\to 0$ such that non-reconstruction holds in $\T(d)$ if $d<(1-\delta_k)k\ln k$ while
reconstruction occurs if $d>(1+\delta_k)k\ln k$~\cite{GWReconstruction}.
\subsection{Techniques and outline}
None of the arguments in the present paper are particularly difficult.
It is rather that a combination of several relatively simple ingredients proves remarkably powerful.
The starting point of the proof is a recent result~\cite{Silent} on the concentration of the number $Z_{k}(\gnm)$ of $k$-colorings of $\gnm$.
This result entails a very precise connection between a fairly simple probability distribution, the so-called ``planted model'', and the experiment
of sampling a random coloring of a random graph, thereby extending the ``planting trick'' from~\cite{Barriers}.
However, this planting argument is not powerful enough to establish \Thm~\ref{Thm_xlwc} (cf.\ also the discussion in~\cite{BST}).
Therefore, in the present paper the key idea is to use the information about $Z_{k}(\gnm)$ to introduce an enhanced variant of the planting trick.
More specifically, in \Sec~\ref{Sec_planting} we will establish a connection between the experiment of sampling a random {\em pair} of colorings of $\gnm$
and another, much simpler probability distribution that we call the {\em planted replica model}.
We expect that this idea will find future uses.
Apart from the concentration of $Z_{k}(\gnm)$, this connection also hinges on a study of the ``overlap'' of two randomly chosen colorings of $\gnm$.
The overlap was studied in prior work on reconstruction~\cite{GM,Prasad} in the case that $d<2(k-1)\ln(k-1)$ based on considerations
from the second moment argument of Achlioptas and Naor~\cite{AchNaor} that gave the best lower bound on the $k$-colorability threshold at the time.
To extend the study of the overlap to the whole range $d\in(0,d_{k,\mathrm{cond}})$, we crucially harness insights from the improved second moment
argument from~\cite{Danny} and the rigorous derivation of the condensation threshold~\cite{Cond}.
As we will see in \Sec~\ref{Sec_Nor},
the study of the planted replica model allows us to draw conclusions as to the typical ``local'' structure of pairs of random colorings of $\gnm$.
To turn these insights into a proof of \Thm~\ref{Thm_xlwc}, in \Sec~\ref{Sec_lwc} we extend an elegant argument from~\cite{GM}, which was used there to
establish the asymptotic independence of the colors assigned to a bounded number of randomly chosen individual vertices (reminiscent of~(\ref{eqCor_decay}))
for $d<2(k-1)\ln(k-1)$.
The bottom line is that the strategy behind the proof of \Thm~\ref{Thm_xlwc} is rather generic.
It probably extends to other problems of a similar nature.
A natural class to think of are the binary problems studied in~\cite{Prasad}.
Another candidate might be the hardcore model, which was studied in~\cite{BST} by a somewhat different approach.
\section{Preliminaries}
\subsection{Notation}
For a finite or countable set $\cX$ we denote by $\cP(\cX)$ the set of all probability distributions on $\cX$,
which we identify with the set of all maps $p:\cX\to[0,1]$ such that $\sum_{x\in\cX}p(x)=1$.
Furthermore, if $N>0$ is an integer, then $\cP_N(\cX)$ is the set of all $p\in\cP(\cX)$ such that $Np(x)$ is an integer for every $x\in\cX$.
With the convention that $0\ln0=0$, we denote the entropy of $p\in\cP(\cX)$ by
$$H(p)=-\sum_{x\in\cX}p(x)\ln p(x).$$
Let $G$ be a $k$-colorable graph.
By $\SIGMA^{k,G},\SIGMA^{k,G}_1,\SIGMA^{k,G}_2,\ldots\in\cS_k(G)$ we denote independent uniform samples from $\cS_k(G)$.
Where $G,k$ are apparent from the context, we omit the superscript.
Moreover, if $X:\cS_k(G)\to\RR$, we write
$$\bck{X(\SIGMA)}_{G,k}=\frac1{Z_{k}(G)}\sum_{\sigma\in\cS_k(G)}X(\sigma).$$
More generally, if $X:\cS_k(G)^l\to\RR$, then
$$\bck{X(\SIGMA_1,\ldots,\SIGMA_l)}_{G,k}=\frac1{Z_{k}(G)^l}\sum_{\sigma_1,\ldots,\sigma_l\in\cS_k(G)}X(\sigma_1,\ldots,\sigma_l).$$
We omit the subscript $G$ and/or $k$ where it is apparent from the context.
Thus, the symbol $\bck\nix_{G,k}$ refers to the average over randomly chosen $k$-colorings of a {\em fixed} graph $G$.
By contrast, the standard notation $\Erw\brk\nix$, $\pr\brk\nix$ will be used to indicate that the expectation/probability is taken
over the choice of the random graph $\gnm$.
Unless specified otherwise, we use the standard $O$-notation to refer to the limit $n\to\infty$.
Throughout the paper, we tacitly assume that $n$ is sufficiently large for our various estimates to hold.
By a {\em rooted graph} we mean a graph $G$ together with a distinguished vertex $v$, the {\em root}.
The vertex set is always assumed to be a subset of $\RR$.
If $\omega\geq0$ is an integer, then $\nbg{G}{v}$ signifies the subgraph of $G$ obtained by removing all
vertices at distance greater than $\omega$ from $v$ (including those vertices of $G$ that are not reachable from $v$), rooted at $v$.
An {\em isomorphism} between two rooted graphs $(G,v)$, $(G',v')$ is an isomorphism $G\to G'$ of the underlying graphs
that maps $v$ to $v'$ and that preserves the order of the vertices (which is why we insist that they be reals).
\subsection{The first moment}\label{Sec_firstMoment}
The present work builds upon results on the first two moments of $Z_{k}(\gnm)$.
\begin{lemma}\label{Lemma_firstMoment}
For any $d>0$,
$\Erw[Z_{k}(\G)]=\Theta(k^n(1-1/k)^m).$
\end{lemma}
Although \Lem~\ref{Lemma_firstMoment} is folklore, we briefly comment on how the expression comes about.
For $\sigma:\brk n\to\brk k$ let
\begin{equation}\label{eqForb1}
\cF(\sigma)=\sum_{i=1}^k\bink{|\sigma^{-1}(i)|}{2}
\end{equation}
be the number of edges of the complete graph that are monochromatic under $\sigma$.
Then
\begin{align}\label{eqForb2}
\pr\brk{\sigma\in\cS_k(\G)}&=\bink{\bink n2-\cF(\sigma)}{m}\bigg/\bink{\bink n2}{m}.
\end{align}
By convexity, we have $\cF(\sigma)\geq\frac1k\bink n2$ for all $\sigma$.
In combination with~(\ref{eqForb2}) and the linearity of expectation, this implies that $\Erw[Z_{k}(\gnm)]= O(k^n(1-1/k)^m).$
Conversely, there are $\Omega(k^n)$ maps $\sigma:\brk n\to\brk k$ such that $\left|n/k-|\sigma^{-1}(i)| \right|\leq\sqrt n$ for all $i$, and
$\cF(\sigma)/\bink n2=1/k+O(1/n)$ for all such $\sigma$.
This implies $\Erw[Z_{k}(\G)]=\Omega(k^n(1-1/k)^m).$
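The computation can also be checked numerically by brute force for tiny parameters (the choice $n=7$, $m=7$, $k=3$ below is arbitrary): summing~(\ref{eqForb2}) over all $k^n$ maps $\sigma$ gives the exact expectation, which is of the same order as $k^n(1-1/k)^m$.

```python
# Brute-force check of the first-moment formula on tiny parameters:
# E[Z_k(G(n,m))] = sum over all sigma of P[sigma proper], cf. (eqForb2),
# compared with the leading-order expression k^n (1 - 1/k)^m.
from itertools import product
from math import comb

def expected_Z(n, m, k):
    """E[Z_k(G(n,m))] by summing (eqForb2) over all k^n maps sigma."""
    total = comb(n, 2)
    acc = 0.0
    for sigma in product(range(k), repeat=n):
        F = sum(comb(sigma.count(i), 2) for i in range(k))   # cf. (eqForb1)
        if total - F >= m:
            acc += comb(total - F, m) / comb(total, m)
    return acc

n, m, k = 7, 7, 3
exact = expected_Z(n, m, k)
approx = k ** n * (1 - 1 / k) ** m
print(exact, approx, exact / approx)    # same order of magnitude
```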
The following result shows that $Z_{k}(\G)$ is tightly concentrated about its expectation for $d<d_{k,\mathrm{cond}}$.
\begin{theorem}[\cite{Silent}]\label{Thm_Z}
There is $k_0>0$ such that for all $k\geq k_0$ and all $d<d_{k,\mathrm{cond}}$ we have
$$\lim_{\omega\to\infty}\lim_{n\to\infty}\pr\brk{|\ln Z_k(\G)-\ln\Erw[Z_k(\G)]|\leq\omega}=1.$$
\end{theorem}
For $\alpha=(\alpha_1,\ldots,\alpha_k)\in\cP_n([k])$ we let $Z_{\alpha}(\G)$ be the number of $k$-colorings $\sigma$ of $\G$
such that $|\sigma^{-1}(i)|=\alpha_i n$ for all $i\in[k]$.
Conversely, for a map $\sigma:\brk n\to\brk k$ let $\alpha(\sigma)=n^{-1}(\sigma^{-1}(i))_{i\in[k]}\in\cP_n(\brk k)$.
Additionally, let $\bar\alpha=k^{-1}\vec{1}=(1/k,\ldots,1/k)$.
\begin{lemma}[{\cite[\Lem~3.1]{Silent}}]\label{Lemma_phiFirstMoment}
Let
$\varphi(\alpha)=H(\alpha)+\frac{d}2\ln\bc{1-\norm\alpha_2^2}$.
Then
\begin{align*}
\Erw[Z_\alpha(\Gnm)]&=O(\exp(n\varphi(\alpha)))&\mbox{uniformly for all }\alpha\in\cP_n(\brk k),\\
\Erw[Z_\alpha(\Gnm)]&=\Theta(n^{(1-k)/2})\exp(n\varphi(\alpha))&
\mbox{uniformly for all $\alpha\in\cP_n(\brk k)$ such that $\norm{\alpha-\bar\alpha}_2\leq k^{-3}$}.
\end{align*}
\end{lemma}
\subsection{The second moment}\label{Sec_secondMoment}
Define the {\em overlap} of $\sigma,\tau:[n]\to[k]$ as the $k\times k$ matrix $\rho(\sigma,\tau)$ with entries
$$\rho_{ij}(\sigma,\tau)=\frac1n\abs{\sigma^{-1}(i)\cap\tau^{-1}(j)}.$$
Then the number of edges of the complete graph that are monochromatic under either $\sigma$ or $\tau$ equals
$$\cF(\sigma,\tau)=\cF(\sigma)+\cF(\tau)-\sum_{i,j\in\brk k}\bink{n\rho_{ij}(\sigma,\tau)}{2}.$$
For $i\in\brk k$ let $\rho_{i\nix}$ signify the $i$th row of the matrix $\rho$, and for $j\in\brk k$ let $\rho_{\nix j}$ denote the $j$th column.
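In code, the overlap is just the normalized contingency table of the two colorings; its row sums recover $\alpha(\sigma)$ and its column sums $\alpha(\tau)$. A small sketch with hand-picked colorings:

```python
# Small sketch of the overlap matrix rho(sigma, tau): rows sum to the color
# densities of sigma, columns to those of tau, so rho refines both marginals.
def overlap(sigma, tau, k):
    n = len(sigma)
    rho = [[0.0] * k for _ in range(k)]
    for s, t in zip(sigma, tau):
        rho[s][t] += 1 / n
    return rho

sigma = [0, 0, 1, 2, 1, 0]                 # arbitrary example colorings
tau   = [1, 0, 1, 2, 2, 0]
rho = overlap(sigma, tau, 3)
row_sums = [sum(r) for r in rho]                         # = alpha(sigma)
col_sums = [sum(r[j] for r in rho) for j in range(3)]    # = alpha(tau)
print(rho, row_sums, col_sums)
```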
An elementary application of inclusion/exclusion yields (cf.~\cite[Fact~5.4]{Silent})
\begin{align}\label{eqErwZrho}
\pr[\sigma,\tau\in\cS_k(\Gnm)]&
=\frac{\bink{\bink n2-\cF(\sigma,\tau)}m}{\bink{\bink n2}m}
=O\bc{\brk{1-\sum_{i\in[k]}(\norm{\rho_{i\nix}(\sigma,\tau)}_1^2+\norm{\rho_{\nix i}(\sigma,\tau)}_1^2)
+\norm{\rho(\sigma,\tau)}_2^2}^m}.
\end{align}
We can view $\rho(\sigma,\tau)$ as a distribution on $\brk k\times\brk k$, i.e., $\rho(\sigma,\tau)\in\cP_n(\brk k^2)$.
Let $\bar\rho$ be the uniform distribution on $\brk k^2$.
Moreover, for $\rho\in\cP_{n}(\brk k^2)$ let $Z_\rho^\otimes(\Gnm)$ be the number of pairs $\sigma_1,\sigma_2\in\cS_k(\Gnm)$ with overlap $\rho$.
Finally, let
\begin{align}
{\mathcal R}_{n,k}(\omega)&=\cbc{\rho\in\cP_n([k]^2):\forall i\in\brk k:
\norm{\rho_{i\nix}-\bar\alpha}_2,\norm{\rho_{\nix i}-\bar\alpha}_2\leq\sqrt{\omega/n}},&\mbox{and}\\
f(\rho)&=H(\rho)+\frac d2\ln(1-2/k+\norm{\rho}_2^2).
\end{align}
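A useful consistency check, used implicitly throughout second-moment arguments: at the uniform overlap $\bar\rho$ one has $H(\bar\rho)=2\ln k$ and $1-2/k+\norm{\bar\rho}_2^2=(1-1/k)^2$, whence $f(\bar\rho)=2\varphi(\bar\alpha)$. Numerically (with arbitrary $d$, $k$):

```python
# Numeric check of the identity f(rho_bar) = 2 * phi(alpha_bar):
# H(rho_bar) = 2 ln k and 1 - 2/k + ||rho_bar||_2^2 = (1 - 1/k)^2.
import math

def f_uniform(d, k):
    H = 2 * math.log(k)                    # entropy of the uniform overlap
    norm_sq = k * k * (1 / k ** 2) ** 2    # ||rho_bar||_2^2 = 1/k^2
    return H + d / 2 * math.log(1 - 2 / k + norm_sq)

def phi_uniform(d, k):
    return math.log(k) + d / 2 * math.log(1 - 1 / k)

d, k = 8.0, 5
print(f_uniform(d, k), 2 * phi_uniform(d, k))   # equal
```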
\begin{lemma}[\cite{AchNaor}]\label{Lemma_f}
Assume that $\omega=\omega(n)\to\infty$ but $\omega=o(n)$.
For all $k\geq3,d>0$ we have
\begin{align*}
\Erw[Z_\rho^\otimes(\Gnm)]&=O(n^{(1-k^2)/2})\exp(nf(\rho))&
\mbox{uniformly for all $\rho\in{\mathcal R}_{n,k}(\omega)$ s.t.\ }\norm{\rho-\bar\rho}_\infty\leq k^{-3},\\
\Erw[Z_\rho^\otimes(\Gnm)]&=O(\exp(nf(\rho)))&\mbox{uniformly for all $\rho\in{\mathcal R}_{n,k}(\omega)$.}
\end{align*}
Moreover, if $d<2(k-1)\ln(k-1)$, then for any $\eta>0$ there exists $\delta>0$ such that
\begin{equation}\label{eqAchNaor}
f(\rho)<f(\bar\rho)-\delta\qquad\mbox{ for all $\rho\in{\mathcal R}_{n,k}(\omega)$ such that $\norm{\rho-\bar\rho}_2>\eta$}.
\end{equation}
\end{lemma}
The bound~(\ref{eqAchNaor}) applies for $d<2(k-1)\ln(k-1)$, about $\ln k$ below $d_{k,\mathrm{cond}}$.
To bridge the gap, let $\kappa=1-\ln^{20}k/k$ and call $\rho\in\cP_n(\brk k^2)$ {\em separable} if $k\rho_{ij}\not\in(0.51,\kappa)$ for all $i,j\in[k]$.
Moreover, $\sigma\in\cS_k(\G)$ is {\em separable} if $\rho(\sigma,\tau)$ is separable for all $\tau\in\cS_k(\G)$.
Otherwise, we call $\sigma$ {\em inseparable}.
Further, $\rho$ is {\em $s$-stable} if there are precisely $s$ entries such that $k\rho_{ij}\geq\kappa$.
\begin{lemma}[\cite{Danny}]\label{Lemma_Danny}
There is $k_0$ such that for all $k>k_0$ and all
$2(k-1)\ln(k-1)\leq d\leq2k\ln k$ the following is true.
\begin{enumerate}
\item Let $\tilde Z_k(\G)=\abs{\cbc{\sigma\in\cS_k(\G):\sigma\mbox{ is inseparable}}}$.
Then
$\Erw[\tilde Z_k(\G)]\leq\exp(-\Omega(n))\Erw[Z_{k}(\G)]$.
\item Let $1\leq s\leq k-1$.
Then $f(\rho)<f(\bar\rho)-\Omega(1)$ uniformly for all $s$-stable $\rho$.
\item For any $\eta>0$ there is $\delta>0$ such that
$\sup\{f(\rho):\mbox{$\rho$ is $0$-stable and $\norm{\rho-\bar\rho}_2>\eta$}\}<f(\bar\rho)-\delta$.
\end{enumerate}
\end{lemma}
\Lem~\ref{Lemma_Danny} omits the $k$-stable case.
To deal with it, we introduce
\begin{equation}\label{eqcluster}
{\mathcal C}(G,\sigma)=\cbc{\tau\in\cS_k(G):\rho(\sigma,\tau)\mbox{ is $k$-stable}}.
\end{equation}
\begin{lemma}[\cite{Cond}]\label{Lemma_clusterSize}
There exist $k_0$ and $\omega=\omega(n)\to\infty$ such that for all $k\geq k_0$, $2(k-1)\ln(k-1)\leq d<d_{k,\mathrm{cond}}$ we have
\begin{align*}
\lim_{n\to\infty}\pr\brk{\bck{|{\mathcal C}(\G,\SIGMA)|}_{\G,k}\leq\omega^{-1}\Erw\brk{Z_{k}(\G)}}=1.
\end{align*}
\end{lemma}
\subsection{A tail bound}
Finally, we need the following inequality.
\begin{lemma}[\cite{Lutz}]\label{Lemma_Lutz}
Let $X_1,\ldots,X_N$ be independent random variables with values in a finite set $\Lambda$.
Assume that $f:\Lambda^N\ra\RR$ is a function, that $\Gamma\subset\Lambda^N$ is an event and that $c,c'>0$ are numbers such that
the following is true.
\begin{equation}\label{eqTL}
\parbox{12cm}{If $x,x'\in\Lambda^N$ are such that there is $j\in\brk N$ such that $x_i=x_i'$ for all $i\neq j$, then
$$|f(x)-f(x')|\leq\left\{\begin{array}{cl}
c&\mbox{ if }x\in\Gamma,\\
c'&\mbox{ if }x\not\in\Gamma.
\end{array}\right.$$}
\end{equation}
Then for any $\gamma\in(0,1]$
and any $t>0$ we have
$$\pr\brk{|f(X_1,\ldots,X_N)-\Erw[f(X_1,\ldots,X_N)]|>t}
\leq2\exp\bc{-\frac{t^2}{2N(c+\gamma (c'-c))^2}}+\frac{2 N}\gamma\pr\brk{(X_1,\ldots,X_N)\not\in\Gamma}.$$
\end{lemma}
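The inequality in \Lem~\ref{Lemma_Lutz} extends McDiarmid's bounded-differences inequality by allowing a larger Lipschitz constant outside the event $\Gamma$. The following simulation (purely illustrative; the function and parameter names are ours) checks the bound in the special case $\Gamma=\Lambda^N$, $c=c'$, where it reduces to the classical inequality.

```python
import math
import random

def empirical_tail(N=2000, trials=500, t=0.05, seed=1):
    """Compare the observed deviation probability of
    f(X) = |{i : X_i = 0}| / N, for independent uniform X_i in {0,1,2},
    with the bound of the lemma.  Changing one coordinate moves f by at
    most c = 1/N, so with Gamma the whole space (c' = c) the bound reads
    2*exp(-t^2 / (2*N*c^2)) = 2*exp(-N*t^2 / 2)."""
    rng = random.Random(seed)
    mean = 1.0 / 3.0  # E[f] for uniform values in {0,1,2}
    exceed = sum(
        1 for _ in range(trials)
        if abs(sum(rng.randrange(3) == 0 for _ in range(N)) / N - mean) > t
    )
    bound = 2.0 * math.exp(-N * t * t / 2.0)
    return exceed / trials, bound
```

Here the observed tail probability lies far below the bound, as one expects for a deviation of several standard deviations.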
\section{The planted replica model}\label{Sec_planting}
\noindent{\em Throughout this section we assume that $k\geq k_0$ for some large enough constant $k_0$ and that $d<d_{k,\mathrm{cond}}$.}
\medskip
\noindent
In this section we introduce the key tool for the proof of \Thm~\ref{Thm_xlwc}, the {\em planted replica model}.
This is the probability distribution $\plp$ on triples $(G,\sigma_1,\sigma_2)$ such that $G$ is a graph on $[n]$ with $m$ edges
and $\sigma_1,\sigma_2\in\cS_k(G)$ induced by the following experiment.
\begin{description}
\item[PR1] Sample two maps $\plSIGMA_1,\plSIGMA_2:[n]\to[k]$ independently and uniformly at random subject to the condition
that $\cF(\plSIGMA_1,\plSIGMA_2)\leq\bink n2-m$.
\item[PR2] Choose a graph $\plG$ on $[n]$ with precisely $m$ edges uniformly at random, subject to the condition that
both $\plSIGMA_1,\plSIGMA_2$ are proper $k$-colorings.
\end{description}
\noindent
We define
$$\plp(G,\sigma_1,\sigma_2)=\pr\brk{(\plG,\plSIGMA_1,\plSIGMA_2)=(G,\sigma_1,\sigma_2)}.$$
Clearly, the planted replica model is quite tame so that it should be easy to bring the known techniques from the theory of random graphs to bear.
Indeed, the conditioning in {\bf PR1} is harmless because $\Erw[\cF(\plSIGMA_1,\plSIGMA_2)]\sim(2/k-1/k^2)\bink n2$ while $m=O(n)$.
Hence, by the Chernoff bound we have $\cF(\plSIGMA_1,\plSIGMA_2)\leq\bink n2-m$ w.h.p.\
Moreover, {\bf PR2} just means that we draw $m$ random edges out of the $\bink n2-\cF(\plSIGMA_1,\plSIGMA_2)$ edges of the complete graph
that are bichromatic under both $\plSIGMA_1,\plSIGMA_2$.
In particular, we have the explicit formula
\begin{align*}
\plp(G,\sigma_1,\sigma_2)&=
\brk{\abs{\cbc{(\tau_1,\tau_2)\in[k]^n\times [k]^n:\cF(\tau_1,\tau_2)\leq\bink n2-m}}
\cdot\bink{\bink n2-\cF(\sigma_1,\sigma_2)}{m}}^{-1}.
\end{align*}
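The experiment {\bf PR1--PR2} is simple enough to state in code; the following sketch (with illustrative names of our choosing; rejection sampling implements the conditioning in {\bf PR1}) makes the two steps concrete.

```python
import itertools
import random

def sample_planted_replica(n, m, k, rng):
    """Sketch of PR1-PR2.

    PR1: draw sigma1, sigma2 : [n] -> [k] uniformly, retrying until at
         least m edges of the complete graph are bichromatic under both
         colorings (i.e., F(sigma1, sigma2) <= binom(n,2) - m).
    PR2: pick m of these doubly bichromatic edges uniformly at random.
    Returns (edges, sigma1, sigma2)."""
    while True:
        s1 = [rng.randrange(k) for _ in range(n)]
        s2 = [rng.randrange(k) for _ in range(n)]
        ok = [(u, v) for u, v in itertools.combinations(range(n), 2)
              if s1[u] != s1[v] and s2[u] != s2[v]]
        if len(ok) >= m:  # the conditioning in PR1
            return rng.sample(ok, m), s1, s2
```

By construction, both returned maps are proper $k$-colorings of the sampled graph.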
The purpose of the planted replica model is to get a handle on another experiment, which at first glance seems far less amenable.
The {\em random replica model} $\prcp$ is a probability distribution on triples $(G,\sigma_1,\sigma_2)$ such that $\sigma_1,\sigma_2\in\cS_k(G)$ as well.
It is induced by the following experiment.
\begin{description}
\item[RR1] Choose a random graph $\G=\gnm$ subject to the condition that $\G$ is $k$-colorable.
\item[RR2] Sample two colorings $\SIGMA_1,\SIGMA_2$ of $\G$ uniformly and independently.
\end{description}
\noindent
Thus, the random replica model is defined by the formula
\begin{align}\label{eqrr}
\prcp(G,\sigma_1,\sigma_2)&=\pr\brk{(\G,\SIGMA_1,\SIGMA_2)=(G,\sigma_1,\sigma_2)}=
\brk{\bink{\bink n2}m\pr\brk{\chi(\G)\leq k}Z_{k}(G)^2}^{-1}.
\end{align}
Since we assume that $d<d_{k,\mathrm{cond}}$, $\G$ is $k$-colorable w.h.p.\
Hence, the conditioning in {\bf RR1} is innocent.
But this is far from true of the experiment described in {\bf RR2}.
For instance, we have no idea as to how one might implement {\bf RR2} constructively for $d$ anywhere near $d_{k,\mathrm{cond}}$.
In fact, the best current algorithms for finding a single $k$-coloring of $\G$, let alone a random pair, stop working for degrees $d$ about a factor of two
below $d_{k,\mathrm{cond}}$ (cf.\ \cite{Barriers}).
Yet the main result of this section shows that for $d<d_{k,\mathrm{cond}}$, the ``difficult'' random replica model can be studied by means of the ``simple'' planted replica model.
More precisely, recall that a sequence $(\mu_n)_{n}$ of probability measures is {\em contiguous} with respect to another sequence $(\nu_n)_n$
if $\mu_n,\nu_n$ are defined on the same ground set for all $n$ and if for any sequence $(\cA_n)_n$ of events such that $\lim_{n\to\infty}\nu_n(\cA_n)=0$
we have $\lim_{n\to\infty}\mu_n(\cA_n)=0$.
\begin{proposition}\label{Thm_contig}
If $d<d_{k,\mathrm{cond}}$, then $\rrm$ is contiguous with respect to $\prm$.
\end{proposition}
The rest of this section is devoted to the proof of \Prop~\ref{Thm_contig}.
A key step is to study the distribution of the overlap of two random $k$-colorings $\SIGMA_1,\SIGMA_2$ of $\G$,
whose definition we recall from \Sec~\ref{Sec_secondMoment}.
\begin{lemma}
\label{Lemma_Z2}
Assume that $d<d_{k,\mathrm{cond}}$.
Then $\Erw[\bck{\norm{\rho(\SIGMA_1,\SIGMA_2)-\bar\rho}_2}_{\G}]=o(1).$
\end{lemma}
In words, \Lem~\ref{Lemma_Z2} asserts that the expectation over the choice of the random graph $\G$
(the outer $\Erw$) of the average $\ell_2$-distance of the overlap of two randomly chosen $k$-colorings of $\G$
from $\bar\rho$ goes to $0$ as $n\to\infty$.
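For concreteness, the overlap matrix $\rho(\sigma_1,\sigma_2)$ and its $\ell_2$-distance from the flat overlap $\bar\rho$ can be computed as follows (a minimal sketch in exact arithmetic; the function names are ours).

```python
from fractions import Fraction

def overlap(sigma1, sigma2, k):
    """Overlap matrix rho(sigma1, sigma2): rho[i][j] is the fraction of
    vertices v with sigma1(v) = i and sigma2(v) = j."""
    n = len(sigma1)
    rho = [[Fraction(0)] * k for _ in range(k)]
    for a, b in zip(sigma1, sigma2):
        rho[a][b] += Fraction(1, n)
    return rho

def dist_to_flat(rho, k):
    """Squared l2-distance of rho from bar-rho, whose entries all equal 1/k^2."""
    flat = Fraction(1, k * k)
    return sum((rho[i][j] - flat) ** 2 for i in range(k) for j in range(k))
```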
To prove this statement the following intermediate step is required;
we recall the $\alpha\bc\nix$ notation from \Sec~\ref{Sec_firstMoment}.
The $d<2(k-1)\ln(k-1)$ case of \Lem~\ref{Lemma_Z2} was previously proved in~\cite{Prasad} by way of the second moment analysis from~\cite{AchNaor}.
As it turns out, the regime $2(k-1)\ln(k-1)<d<d_{k,\mathrm{cond}}$ requires a somewhat more sophisticated argument.
In any case, for the sake of completeness we give a full proof of \Lem~\ref{Lemma_Z2}, including the case $d<2(k-1)\ln(k-1)$
(which adds merely three lines to the argument).
Similarly, in~\cite{Prasad} the following claim was established in the case $d<2(k-1)\ln(k-1)$.
\begin{claim}\label{Cor_phiFirstMoment}
Suppose that $d<d_{k,\mathrm{cond}}$ and that $\omega=\omega(n)$ is such that $\lim_{n\to\infty}\omega(n)=\infty$ but $\omega=o(n)$.
Then w.h.p.\ $\G$ is such that
$$\bck{\vec{1}\cbc{\norm{\alpha(\SIGMA)-\bar\alpha}_2>\sqrt{\omega/n}}}_{\G}\leq\exp(-\Omega(\omega)).$$
\end{claim}
\begin{proof}
We combine \Thm~\ref{Thm_Z} with a standard ``first moment'' estimate similar to the proof of~\cite[\Lem~5.4]{Prasad}.
The entropy function $\alpha\in\cP(\brk k)\mapsto H(\alpha)=-\sum_{i=1}^k\alpha_i\ln\alpha_i$ is concave and attains its global maximum at $\bar\alpha$.
In fact, the Hessian of $\alpha\mapsto H(\alpha)$ satisfies $D^2H(\alpha)\preceq-2\id$.
Moreover, since $\alpha\mapsto\norm\alpha_2^2$ is convex, $\alpha\mapsto\frac{d}2\ln(1-\norm\alpha_2^2)$ is concave and attains
its global maximum at $\bar\alpha$ as well.
Hence, letting $\varphi$ denote the function from \Lem~\ref{Lemma_phiFirstMoment}, we find $D^2\varphi(\alpha)\preceq-2\id$.
Therefore, we obtain from \Lem~\ref{Lemma_phiFirstMoment} that
\begin{equation}\label{eqCor_phiFirstMoment1}
\Erw[Z_\alpha(\G)]\leq \exp(n(\varphi(\bar\alpha)-\norm{\alpha-\bar\alpha}_2^2))\cdot\begin{cases}
O(1)&\mbox{ if }\norm{\alpha-\bar\alpha}_2> 1/\ln n,\\
O(n^{(1-k)/2})&\mbox{ otherwise.}
\end{cases}
\end{equation}
Further, letting $$Z'(\G)=\sum_{\alpha\in\cP_n(\brk k):\norm{\alpha-\bar\alpha}_2>\sqrt{\omega/n}}Z_\alpha(\G)$$
and treating the cases $\omega\leq \ln^2 n$ and $\omega\geq\ln^2 n$ separately, we obtain from~(\ref{eqCor_phiFirstMoment1}) that
\begin{equation}\label{eqCor_phiFirstMoment2}
\Erw[Z'(\G)]\leq \exp(-\Omega(\omega))\exp(n\varphi(\bar\alpha)).
\end{equation}
Since \Lem~\ref{Lemma_firstMoment} shows that $\Erw[Z_{k}(\G)]=\Theta(k^n(1-1/k)^m)=\exp(n\varphi(\bar\alpha))$,
(\ref{eqCor_phiFirstMoment2}) yields $\Erw[Z'(\G)]=\exp(-\Omega(\omega))\Erw[Z_{k}(\G)]$.
Hence, by Markov's inequality
\begin{equation}\label{eqCor_phiFirstMoment3}
\pr\brk{Z'(\G)\leq\exp(-\Omega(\omega))\Erw[Z_{k}(\G)]}\geq1-\exp(-\Omega(\omega)).
\end{equation}
Finally, since $\bck{\vec{1}\cbc{\norm{\alpha(\SIGMA)-\bar\alpha}_2>\sqrt{\omega/n}}}_{\G}=Z'(\G)/Z_{k}(\G)$
and because $Z_{k}(\G)\geq\Erw[Z_{k}]/\omega$ w.h.p.\ by \Thm~\ref{Thm_Z},
the assertion follows from~(\ref{eqCor_phiFirstMoment3}).
\end{proof}
\begin{proof}[Proof of \Lem~\ref{Lemma_Z2}]
We bound
$$\Lambda=\sum_{\sigma_1,\sigma_2\in\cS_k(\Gnm)}\norm{\rho(\sigma_1,\sigma_2)-\bar\rho}_2
=Z_{k}(\Gnm)^2\bck{\norm{\rho(\SIGMA_1,\SIGMA_2)-\bar\rho}_2}_{\Gnm}$$
by a sum of three different terms.
First, letting, say, $\omega(n)=\ln n$, we set
\begin{align*}
\Lambda_1&=
\sum_{\sigma_1,\sigma_2\in\cS_k(\Gnm)}\vec{1}\cbc{\norm{\alpha(\sigma_1)-\bar\alpha}_2>\sqrt{\omega/n}}
=Z_{k}(\Gnm)^2\bck{\vec{1}\cbc{\norm{\alpha(\SIGMA)-\bar\alpha}_2>\sqrt{\omega/n}}}_{\Gnm}.
\end{align*}
To define the other two, let $\cS_k'(\Gnm)$ be the set of all $\sigma\in\cS_k(\Gnm)$ such that $\norm{\alpha(\sigma)-\bar\alpha}_2\leq\sqrt{\omega/n}$.
Let $\eta>0$ be a small but $n$-independent number and let
\begin{align*}
\Lambda_2&=
\sum_{\sigma_1,\sigma_2\in\cS_k'(\Gnm)}\vec{1}\cbc{\norm{\rho(\sigma_1,\sigma_2)-\bar\rho}_2\leq\eta}
\norm{\rho(\sigma_1,\sigma_2)-\bar\rho}_2,&
\Lambda_3&= \sum_{\sigma_1,\sigma_2\in\cS_k'(\Gnm)}\vec{1}\cbc{\norm{\rho(\sigma_1,\sigma_2)-\bar\rho}_2>\eta}.
\end{align*}
Since $\norm{\rho(\sigma_1,\sigma_2)-\bar\rho}_2\leq2$ for all $\sigma_1,\sigma_2$, we have
\begin{equation}\label{eqLambda_dec}
\Lambda\leq4(\Lambda_1+\Lambda_2)+\Lambda_3.
\end{equation}
Hence, we need to bound $\Lambda_1,\Lambda_2,\Lambda_3$.
With respect to $\Lambda_1$, Claim~\ref{Cor_phiFirstMoment} implies that
\begin{align}\label{eqLemma_Z2_2}
\pr\brk{\Lambda_1\leq\exp(-\Omega(\sqrt n))Z_{k}(\Gnm)^2}&=1-o(1).
\end{align}
To estimate $\Lambda_2$, we let $f$ denote the function from \Lem~\ref{Lemma_f}.
Observe that $Df(\bar\rho)=0$, because $\bar\rho$ maximises the entropy and minimises the $\ell_2$-norm.
Further, a straightforward calculation reveals that for any $i,j,i',j'\in[k],\ (i,j)\neq(i',j')$,
\begin{align*}
\frac{\partial^2 f(\rho)}{\partial\rho_{ij}^2}&=-\frac1{\rho_{ij}}+\frac d{1-2/k+\norm{\rho}_2^2}-\frac{2d\rho_{ij}^2}{(1-2/k+\norm{\rho}_2^2)^2},&
\frac{\partial^2 f(\rho)}{\partial\rho_{ij}\partial\rho_{i'j'}}&=-\frac{2d\rho_{ij}\rho_{i'j'}}{(1-2/k+\norm{\rho}_2^2)^2}.
\end{align*}
Consequently, choosing, say, $\eta<k^{-4}$ ensures that the Hessian satisfies
\begin{equation}\label{eqLemma_Z2_11}
D^2f(\rho)\preceq-2\id\qquad\mbox{ for all $\rho$ such that $\norm{\rho-\bar\rho}_2^2\leq\eta$.}
\end{equation}
Therefore, \Lem~\ref{Lemma_f}
yields
\begin{align}
\Erw[\Lambda_2]&\leq
\sum_{\rho\in{\mathcal R}_{n,k}(\eta)}\norm{\rho-\bar\rho}_2\Erw[Z_\rho^\otimes(\Gnm)]\nonumber\\
&\leq O(n^{(1-k^2)/2})\exp(nf(\bar\rho))
\sum_{\rho\in{\mathcal R}_{n,k}(\eta)}\norm{\rho-\bar\rho}_2\exp(n(f(\rho)-f(\bar\rho)))\nonumber\\
&\leq O(n^{(1-k^2)/2})\exp(nf(\bar\rho))\sum_{\rho\in{\mathcal R}_{n,k}(\eta)}\norm{\rho-\bar\rho}_2\exp(-nk^{-2}\norm{\rho-\bar\rho}_2^2)
&\mbox{[by~(\ref{eqLemma_Z2_11})]}.
\label{eqLemma_Z2_11_a}
\end{align}
Further, since $\rho_{kk}=1-\sum_{(i,j)\neq(k,k)}\rho_{ij}$ for any $\rho\in{\mathcal R}_{n,k}(\eta)$, substituting $x=\sqrt n\rho$ in~(\ref{eqLemma_Z2_11_a}) yields
\begin{align} \label{eqLemma_Z2_11_b}
\Erw[\Lambda_2]&\leq O(n^{(1-k^2)/2})\exp(nf(\bar\rho)) \int_{\RR^{k^2-1}}\frac{\norm{x}_2}{\sqrt n}\exp(-k^{-2}\norm{x}_2^2)dx
=O(n^{-1/2})\exp(nf(\bar\rho)).
\end{align}
Since $f(\bar\rho)=2\ln k+d\ln(1-1/k)$, \Lem~\ref{Lemma_firstMoment} yields
\begin{align}\label{eqfbarrho}
\exp(n f(\bar\rho))&\leq O(\Erw[Z_{k}(\Gnm)]^2).
\end{align}
Therefore, (\ref{eqLemma_Z2_11_b}) entails that
\begin{align}\label{eqLambda2}
\Erw[\Lambda_2]&\leq O(n^{-1/2})\Erw[Z_{k}(\Gnm)]^2.
\end{align}
To bound $\Lambda_3$, we consider two separate cases.
The first case is that $d\leq2(k-1)\ln(k-1)$.
Then \Lem~\ref{Lemma_f} and~(\ref{eqfbarrho}) yield
\begin{equation}\label{eqLambda3_case1}
\Erw[\Lambda_3]\leq\exp(nf(\bar\rho)-\Omega(n))\leq\exp(-\Omega(n))\Erw[Z_{k}(\Gnm)]^2.
\end{equation}
The second case is that $2(k-1)\ln(k-1)\leq d<d_{k,\mathrm{cond}}$.
We introduce
\begin{align*}
\Lambda_{31}&=\sum_{\sigma_1,\sigma_2\in\cS_k'(\Gnm)}\vec{1}\cbc{\sigma_1\mbox{ fails to be separable}},\\
\Lambda_{32}&=\sum_{\sigma_1,\sigma_2\in\cS_k'(\Gnm)}\vec{1}\cbc{\rho(\sigma_1,\sigma_2)\mbox{ is $s$-stable for some $1\leq s\leq k-1$}},\\
\Lambda_{33}&=\sum_{\sigma_1,\sigma_2\in\cS_k'(\Gnm)}\vec{1}\cbc{\rho(\sigma_1,\sigma_2)\mbox{ is $0$-stable and }
\norm{\rho(\sigma_1,\sigma_2)-\bar\rho}_2>\eta},\\
\Lambda_{34}&=\sum_{\sigma_1,\sigma_2\in\cS_k'(\Gnm)}\vec{1}\cbc{\rho(\sigma_1,\sigma_2)\mbox{ is $k$-stable}},
\end{align*}
so that
\begin{equation}\label{eqLemma_Z2_1}
\Lambda_3\leq\Lambda_{31}+\Lambda_{32}+\Lambda_{33}+\Lambda_{34}.
\end{equation}
By the first part of \Lem~\ref{Lemma_Danny} and Markov's inequality,
\begin{align}\label{eqLemma_Z2_Lambda31}
\pr\brk{\Lambda_{31}\leq\exp(-\Omega(n))Z_{k}(\Gnm)\Erw[Z_{k}(\Gnm)]}&=1-o(1).
\end{align}
Further, combining \Lem~\ref{Lemma_f} with the second part of \Lem~\ref{Lemma_Danny}, we obtain
\begin{align}\label{eqLemma_Z2_Lambda32}
\pr\brk{\Lambda_{32}\leq\exp(n f(\bar\rho)-\Omega(n))}&=1-o(1).
\end{align}
Additionally, \Lem~\ref{Lemma_f} and the third part of \Lem~\ref{Lemma_Danny} yield
\begin{align}\label{eqLemma_Z2_Lambda33}
\pr\brk{\Lambda_{33}\leq\exp(n f(\bar\rho)-\Omega(n))}&=1-o(1).
\end{align}
Moreover, \Lem~\ref{Lemma_clusterSize} entails that
\begin{align}\label{eqLemma_Z2_Lambda34}
\pr\brk{\Lambda_{34}\leq\exp(-\Omega(n))Z_{k}(\Gnm)\Erw[Z_{k}(\Gnm)]}&=1-o(1).
\end{align}
Finally, combining (\ref{eqLemma_Z2_Lambda31})--(\ref{eqLemma_Z2_Lambda34}) with (\ref{eqfbarrho}) and~(\ref{eqLemma_Z2_1})
and using Markov's inequality once more, we obtain
\begin{align}\label{eqLemma_Z2_Lambda3_case2}
\pr\brk{\Lambda_{3}\leq\exp(-\Omega(n))\Erw[Z_{k}(\Gnm)]^2}&=1-o(1).
\end{align}
In summary, combining (\ref{eqLambda_dec}), (\ref{eqLemma_Z2_2}), (\ref{eqLambda2}), (\ref{eqLambda3_case1}) and~(\ref{eqLemma_Z2_Lambda3_case2})
and setting, say, $\omega=\omega(n)=\ln\ln n$,
we find that
\begin{align}\label{eqLemma_Z2_667}
\pr\brk{\Lambda\leq \sqrt{\omega/n}\,\Erw[Z_{k}(\Gnm)]^2}&=1-o(1).
\end{align}
Since $\Lambda=Z_{k}(\Gnm)^2\bck{\norm{\rho(\SIGMA_1,\SIGMA_2)-\bar\rho}_2}_{\Gnm}$
and as $Z_{k}(\Gnm)\geq \Erw[Z_{k}(\Gnm)]/\omega$ w.h.p.\ by \Thm~\ref{Thm_Z},
the assertion follows
from~(\ref{eqLemma_Z2_667}).
\end{proof}
\Lem~\ref{Lemma_Z2} puts us in a position to prove \Prop~\ref{Thm_contig} by
extending the argument that was used to ``plant'' single $k$-colorings in~\cite[\Sec~2]{Silent} to the current setting of ``planting'' pairs of $k$-colorings.
\begin{proof}[Proof of \Prop~\ref{Thm_contig}]
Assume for contradiction that $(\cA_n')_{n\geq1}$ is a sequence of events such that for some fixed number $\eps>0$ we have
\begin{equation}\label{eqThm_cont0'}
\lim_{n\ra\infty}\plp\brk{\cA_n'}=0\quad\mbox{while}\quad\limsup_{n\ra\infty}\prcp\brk{\cA_n'}>2\eps.
\end{equation}
Let
$\omega(n)=\ln\ln\bc{1/\plp\brk{\cA_n'}}.$
Then $\omega=\omega(n)\to\infty$.
Let $\cB_n$ be the set of all pairs $(\sigma_1,\sigma_2)$ of maps $[n]\to[k]$ such that
$\norm{\rho(\sigma_1,\sigma_2)-\bar\rho}_2\leq\sqrt{\omega/n}$
and define $$\cA_n=\cbc{(G,\sigma_1,\sigma_2)\in\cA_n':(\sigma_1,\sigma_2)\in\cB_n}.$$
Then \Lem~\ref{Lemma_Z2} and~(\ref{eqThm_cont0'}) imply that
\begin{equation}\label{eqThm_cont_overlap}
\lim_{n\ra\infty}\plp\brk{\cA_n}=0\quad\mbox{while}\quad\limsup_{n\ra\infty}\prcp\brk{\cA_n}>\eps.
\end{equation}
Furthermore,
\begin{equation}\label{eq_planting0}
\omega(n)\sim\ln\ln\bc{1/\plp\brk{\cA_n}}\to\infty.
\end{equation}
For $\sigma_1,\sigma_2:\brk n\to\brk k$
let $\G(n,m|\sigma_1,\sigma_2)$ be the random graph $\gnm$ conditional on the event that $\sigma_1,\sigma_2$ are $k$-colorings.
That is, $\G(n,m|\sigma_1,\sigma_2)$ consists of $m$ random edges that are bichromatic under $\sigma_1,\sigma_2$.
Then
\begin{eqnarray}
\Erw[Z_{k}(\gnm)^2\vec{1}\cbc{\cA_n}]&=&
\sum_{(\sigma_1,\sigma_2)\in\cB_n}\pr\brk{\sigma_1,\sigma_2\in\cS_k(\gnm), (\gnm,\sigma_1,\sigma_2)\in\cA_n}\nonumber\\
&=&\sum_{(\sigma_1,\sigma_2)\in\cB_n}\pr\brk{(\gnm,\sigma_1,\sigma_2)\in\cA_n|\sigma_1,\sigma_2\in\cS_k(\gnm)}
\pr\brk{\sigma_1,\sigma_2\in\cS_k(\gnm)}\nonumber\\
&=&\sum_{(\sigma_1,\sigma_2)\in\cB_n}
\pr\brk{\G(n,m|\sigma_1,\sigma_2)\in\cA_n}\cdot\pr\brk{\sigma_1,\sigma_2\in\cS_k(\gnm)}.\label{eqThm_cont1}
\end{eqnarray}
Letting
$q_n=\max\cbc{\pr\brk{\sigma_1,\sigma_2\in\cS_k(\gnm)}:(\sigma_1,\sigma_2)\in\cB_n}$,
we obtain from~(\ref{eqThm_cont1}) and the definition {\bf PR1--PR2} of the planted replica model that
\begin{eqnarray} \label{eqThm_cont2}
\Erw[Z_{k}(\gnm)^2\vec{1}\cbc{\cA_n}]
&\leq&q_n\sum_{(\sigma_1,\sigma_2)\in\cB_n}
\pr\brk{\G(n,m|\sigma_1,\sigma_2)\in\cA_n}
\leq k^{2n}q_n\plp\brk{\cA_n}.
\end{eqnarray}
Furthermore,
since $\norm{\rho_{i\nix}(\sigma_1,\sigma_2)}_2^2,\norm{\rho_{\nix i}(\sigma_1,\sigma_2)}_2^2\geq1/k$ for all $i\in[k]$,
(\ref{eqErwZrho}) implies
\begin{align*}
\frac1n\ln\pr\brk{\sigma_1,\sigma_2\in\cS_k(\gnm)}&\leq
\frac d2\ln\bc{1-\frac2k+\norm{\rho(\sigma_1,\sigma_2)}_2^2}+O(1/n)\\
&=d\ln(1-1/k)+O(\omega/n)&\mbox{ for all $(\sigma_1,\sigma_2)\in\cB_n$}.
\end{align*}
Hence, $q_n\leq(1-1/k)^{2m}\exp(O(\omega))$.
Plugging this bound into~(\ref{eqThm_cont2}) and setting $\bar z=\Erw[Z_{k}(\gnm)]$, we see that
\begin{eqnarray} \label{eq_planting1}
\Erw[Z_{k}(\gnm)^2\vec{1}\cbc{\cA_n}]&\leq&
k^{2n}(1-1/k)^{2m}\exp(O(\omega))\plp\brk{\cA_n}
=\bar z^2\exp(O(\omega))\plp\brk{\cA_n}.
\end{eqnarray}
On the other hand, if $\prcp\brk{\cA_n}>\eps$, then
\Thm~\ref{Thm_Z} implies that
$$\prcp\brk{\cA_n\cap\cbc{Z_{k}(\gnm)\geq\bar z/\omega}}>\eps/2.$$
Hence, (\ref{eqrr}) yields
\begin{equation}\label{eq_planting2}
\Erw[Z_{k}(\gnm)^2\vec{1}\cbc{\cA_n}]\geq\frac{\eps}2\bcfr{\bar z}{\omega}^2.
\end{equation}
But due to~(\ref{eq_planting0}), (\ref{eq_planting2}) contradicts~(\ref{eq_planting1}).
\end{proof}
\section{Analysis of the planted replica model}\label{Sec_Nor}
\noindent
\emph{In this section we assume that $k\geq3$ and that $d>0$.}
\medskip\noindent
\Prop~\ref{Thm_contig} reduces the task of studying the random replica model to that of analysing the planted replica model,
which we attend to in the present section.
If $\theta$ is a rooted tree, $\tau_1,\tau_2\in\cS_k(\theta)$, $\omega\geq0$ and if $G$ is a $k$-colorable graph and $\sigma_1,\sigma_2\in\cS_k(G)$, then we let
$$Q_{\theta,\tau_1,\tau_2,\omega}(G,\sigma_1,\sigma_2)=\frac1n\sum_{v\in[n]}
\vec{1}\cbc{\nbh{G}{v}{\sigma_1}\ism(\theta,\tau_1)}\cdot\vec{1}\cbc{\nbh{G}{v}{\sigma_2}\ism(\theta,\tau_2)}.$$
Additionally, set
$$q_{\theta,\omega}=Z_{k}(\theta)^{-2}\pr\brk{\partial^\omega\T(d)\ism \theta}.$$
The aim in this section is to prove the following statement.
\begin{proposition}\label{Prop_Nor}
Let $\theta$ be a rooted tree, $\tau_1,\tau_2\in\cS_k(\theta)$ and $\omega\geq0$.
Let $\plG,\plSIGMA_1,\plSIGMA_2$ be chosen from the distribution $\plp$.
Then $Q_{\theta,\tau_1,\tau_2,\omega}(\hat\G,\hat\SIGMA_1,\hat\SIGMA_2)$ converges to $q_{\theta,\omega}$ in probability.
\end{proposition}
Intuitively, \Prop~\ref{Prop_Nor} asserts that in the planted replica model, the distribution of the ``dicoloring'' that $\hat\SIGMA_1,\hat\SIGMA_2$ induce in the
depth-$\omega$ neighborhood of a random vertex $v$ converges to the uniform distribution on the tree that the depth-$\omega$ neighborhood of $v$ induces.
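For intuition, the limiting object is the depth-$\omega$ truncation of the Galton-Watson tree $\T(d)$; assuming ${\rm Po}(d)$ offspring numbers, it can be sampled as follows (an illustrative sketch with names of our choosing; drawing the pair of colorings uniformly from $\cS_k(\theta)^2$ would complete the dicolored picture).

```python
import math
import random

def gw_tree(d, omega, rng):
    """Sample the depth-omega truncation of a Galton-Watson tree with
    Po(d) offspring, returned as a nested list of children."""
    def poisson(lam):
        # inverse-transform sampling of a Poisson(lam) variable
        u, j, p = rng.random(), 0, math.exp(-lam)
        s = p
        while u > s:
            j += 1
            p *= lam / j
            s += p
        return j
    def grow(depth):
        if depth == omega:
            return []
        return [grow(depth + 1) for _ in range(poisson(d))]
    return grow(0)
```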
The proof of \Prop~\ref{Prop_Nor}
is by extension of an argument from~\cite{Cond} for the ``standard'' planted model (with a single coloring) to the planted replica model.
More specifically, it is going to be convenient to work with the following {\em binomial} version $\bplp$ of the planted replica model,
where $p\in(0,1)$.
\begin{description}
\item[PR1'] Sample two maps $\plSIGMA_1,\plSIGMA_2:[n]\to[k]$ independently and uniformly at random.
\item[PR2'] Generate a random graph $\tilde\G$ by including each of the $\binom{n}{2}-\cF(\plSIGMA_1,\plSIGMA_2)$
edges that are bichromatic under both $\plSIGMA_1,\plSIGMA_2$ with probability $p$ independently.
\end{description}
The distributions $\plp$, $\bplp$ are related as follows.
\begin{lemma}\label{Lemma_binmodel}
Let $p=m/\left(\binom{n}{2}(1-1/k)^2\right)$.
For any event ${\mathcal E}$ we have $\plp\brk{\mathcal E}\leq O(\sqrt n)\bplp\brk{\mathcal E}+o(1).$
\end{lemma}
\begin{proof}
Let $\cB$ be the event that $\norm{\rho(\plSIGMA_1,\plSIGMA_2)-\bar\rho}_2^2\leq n^{-1}\ln\ln n$.
Since $\plSIGMA_1,\plSIGMA_2$ are chosen uniformly and independently, the Chernoff bound yields
\begin{equation}\label{eqLemma_binmodel1}
\bplp\brk{\cB},\plp\brk{\cB}=1-o(1).
\end{equation}
Furthermore, given that $\cB$ occurs we obtain $\cF(\plSIGMA_1,\plSIGMA_2)=(2/k-1/k^2)\bink n2+o(n^{3/2})$.
Therefore, Stirling's formula implies that the event $\cA$ that the graph $\tilde\G$ has precisely $m$ edges satisfies
\begin{equation}\label{eqLemma_binmodel2}
\bplp\brk{\cA|\cB}=\Omega(n^{-1/2}).
\end{equation}
By construction, the binomial model $\bplp$ given $\cA\cap\cB$ is identical to $\plp$ given $\cB$.
Consequently, (\ref{eqLemma_binmodel1}) and~(\ref{eqLemma_binmodel2}) yield
\begin{align*}
\plp\brk{\mathcal E}&\leq\plp\brk{{\mathcal E}|\cB}+o(1)=\bplp\brk{{\mathcal E}|\cA,\cB}+o(1)\leq O(\sqrt n)\bplp\brk{{\mathcal E}}+o(1),
\end{align*}
as desired.
\end{proof}
The following proofs are based on a simple observation.
Given the colorings $\hat\SIGMA_1,\hat\SIGMA_2$, we can construct $\tilde\G$ as follows.
First, we simply insert each of the $\bink n2$ edges of the complete graph on $\brk n$ with probability $p$ independently.
The result of this is, clearly, the Erd\H{o}s-R\'enyi\ random graph $\G(n,p)$.
Then, we ``reject'' (i.e., remove) each edge of this graph that joins two vertices that have the same color under either
$\hat\SIGMA_1$ or $\hat\SIGMA_2$.
\begin{lemma}\label{Lemma_shortcycles}
Let $\omega=\lceil\ln\ln n\rceil$ and assume that $p=O(1/n)$.
\begin{enumerate}
\item Let $\cK(G)$ be the total number of vertices $v$ of the graph $G$
such that $\partial^\omega(G,v)$ contains a cycle.
Then $$\bplp\brk{\cK(\tilde\G)>n^{2/3}}=o(n^{-1/2}).$$
\item Let $\cL$ be the event that there is a vertex $v$ such that $\partial^\omega(\tilde\G,v)$ contains more than $n^{0.1}$ vertices.
Then $$\bplp\brk{\cL}\leq\exp(-\Omega(\ln^2n)).$$
\end{enumerate}
\end{lemma}
\begin{proof}
Obtain the random graph $\G'$ from $\tilde\G$ by adding every edge that is monochromatic under either $\plSIGMA_1$ or $\plSIGMA_2$
with probability $p=m/\left(\binom{n}{2}(1-1/k)^2\right)$ independently.
Then $\G'$ has the same distribution as the standard binomial random graph $\G(n,p)$.
Since $\cK(\tilde\G)\leq\cK(\G')$, the first assertion follows from the well-known fact that $\Erw[\cK(\G(n,p))]\leq n^{o(1)}$ and Markov's inequality.
A similar argument yields the second assertion.
\end{proof}
\begin{lemma}\label{Lemma_conc}
Let $\theta$ be a rooted tree, let $\tau_1,\tau_2\in\cS_k(\theta)$ and let $\omega\geq0$.
Then
$$\bplp\brk{\abs{Q_{\theta,\tau_1,\tau_2,\omega}(\tilde\G,\hat\SIGMA_1,\hat\SIGMA_2)-
\Erw[Q_{\theta,\tau_1,\tau_2,\omega}(\tilde\G,\hat\SIGMA_1,\hat\SIGMA_2)]}>n^{-1/3} }\leq\exp(-\Omega(\ln^2n)).$$
\end{lemma}
\begin{proof}
The proof is based on \Lem~\ref{Lemma_Lutz}.
To apply \Lem~\ref{Lemma_Lutz}, we view $(\tilde\G,\plSIGMA_1,\plSIGMA_2)$ as determined by independent random variables $X_1,\ldots,X_N$ with $N=2n$, where
$X_v\in\brk{k}^2$ is uniformly distributed for $v\in[n]$ and
where $X_{n+v}$ is a $0/1$ vector of length $v-1$ whose components are independent ${\rm Be}(p)$ variables
for $v\in[n]$.
Namely, $X_{v}$ with $v\in[n]$ represents the color pair $(\hat\SIGMA_1(v),\hat\SIGMA_2(v))$,
and $X_{n+v}$ for $v\in[n]$ indicates to which vertices $w<v$
with $\hat\SIGMA_1(w)\neq\hat\SIGMA_1(v)$, $\hat\SIGMA_2(w)\neq\hat\SIGMA_2(v)$ vertex $v$ is adjacent (``vertex exposure'').
Define random variables $S_v=S_v(\tilde\G,\plSIGMA_1,\plSIGMA_2)$ and $S$ by letting
\begin{align*}
S_v &=\vec{1}\cbc{\nbh{\tilde\G}{v}{\plSIGMA_1}\ism(\theta,\tau_1)}\cdot\vec{1}\cbc{\nbh{\tilde\G}{v}{\plSIGMA_2}\ism(\theta,\tau_2)},&
S&=\frac1n\sum_{v\in[n]}S_v.
\end{align*}
Then
\begin{equation}\label{eqQS}
Q_{\theta,\tau_1,\tau_2,\omega}=S.
\end{equation}
Further, set $\lambda=n^{0.01}$ and let $\Gamma$ be the event that $|\nbg{\tilde\G}{v}|\leq \lambda$ for all vertices $v$.
Then by \Lem~\ref{Lemma_shortcycles} we have
\begin{eqnarray}\label{eqLemma_conc1}
\pr\brk{\Gamma}&\geq& 1-\exp(-\Omega(\ln^2n)).
\end{eqnarray}
Furthermore, let $\G'$ be the graph obtained from $\tilde\G$ by removing all edges $e$ that
are incident with a vertex $v$ such that $|\nbg{\tilde\G}{v}|>\lambda$
and let
\begin{align*}
S_v'& =\vec{1}\cbc{\nbh{\G'}{v}{\plSIGMA_1}\ism(\theta,\tau_1)}\cdot\vec{1}\cbc{\nbh{\G'}{v}{\plSIGMA_2}\ism(\theta,\tau_2)},&
S'&=\frac1n\sum_{v\in[n]}S_v'.
\end{align*}
If $\Gamma$ occurs, then $S=S'$.
Hence, (\ref{eqLemma_conc1}) implies that
\begin{eqnarray}\label{eqLemma_conc2}
\Erw[S']&=&\Erw[S]+o(1).
\end{eqnarray}
The random variable $S'$ satisfies~(\ref{eqTL}) with $c=\lambda/n$ and $c'=1$.
Indeed, altering either the colors of one vertex $u$ or its set of neighbors can only affect those vertices $v$
that are at distance at most $\omega$ from $u$, and in $\G'$ there are no more than $\lambda$ such vertices.
Thus, \Lem~\ref{Lemma_Lutz} applied with, say, $t=n^{-1/3}/2$ and $\gamma=1/n$ and~(\ref{eqLemma_conc1}) yield
\begin{eqnarray}\label{eqLemma_conc3}
\pr\brk{|S'-\Erw[S']|>t}\leq\exp(-\Omega(\ln^2n)).
\end{eqnarray}
Finally, the assertion follows from (\ref{eqQS}), (\ref{eqLemma_conc2}) and~(\ref{eqLemma_conc3}).
\end{proof}
To proceed, we need the following concept.
A {\em $k$-dicolored graph} $(G,v_0,\sigma_1,\sigma_2)$ consists of a $k$-colorable graph $G$ with $V(G)\subset\RR$, a root $v_0\in V(G)$
and two $k$-colorings $\sigma_1,\sigma_2:V(G)\to[k]$.
We call two $k$-dicolored graphs $(G,v_0,\sigma_1,\sigma_2)$, $(G',v_0',\sigma_1',\sigma_2')$ {\em isomorphic}
if there is an isomorphism $\pi:G\to G'$ such that $\pi(v_0)=v_0'$ and $\sigma_1=\sigma_1'\circ\pi$, $\sigma_2=
\sigma_2'\circ\pi$ and such that for any $v,u\in V(G)$ such that $v<u$ we have $\pi(v)<\pi(u)$.
\begin{lemma}\label{Lemma_bin_0}
Let $\theta$ be a rooted tree, let $\tau_1,\tau_2\in\cS_k(\theta)$ and let $\omega\geq0$.
Then
\begin{equation}\label{eqProp_Nor_bin0}
\Erw\left[Q_{\theta,\tau_1,\tau_2,\omega}(\tilde\G,\hat\SIGMA_1,\hat\SIGMA_2)\right]=q_{\theta,\omega}+o(1).
\end{equation}
\end{lemma}
\begin{proof}
Recall that $\T(d)$ is the (possibly infinite) Galton-Watson tree rooted at $v_0$.
Let $\TAU_1,\TAU_2$ denote two $k$-colorings of $\partial^\omega\T(d)$ chosen uniformly at random.
In addition, let $\vec v^*\in[n]$ denote a uniformly random vertex of $\tilde\G$.
To establish~(\ref{eqProp_Nor_bin0}) it suffices to construct a coupling of the random dicolored tree $(\T(d),v_0,\TAU_1,\TAU_2)$
and the random graph $\partial^\omega(\tilde\G,\vec v^*,\hat\SIGMA_1,\hat\SIGMA_2)$ such that
\begin{align}\label{eqEX0}
\pr\brk{\partial^\omega(\tilde\G,\vec v^*,\hat\SIGMA_1,\hat\SIGMA_2)\ism(\T(d),v_0,\TAU_1,\TAU_2)}=1-o(1).
\end{align}
To this end, let $(u(i))_{i\in[n]}$ be a
family of independent random variables such that $u(i)$ is uniformly distributed over the interval $((i-1)/n,i/n)$ for each $i\in[n]$.
The construction of this coupling is based on the principle of deferred decisions.
More specifically, we are going to view the exploration of the depth-$\omega$ neighborhood of $\vec v^*$ in the random graph $\tilde\G$ as a random process,
reminiscent of the standard breadth-first search process for the exploration of the connected components of the random graph.
The colors of the individual vertices and their neighbors are revealed in the course of the exploration process.
The result of the exploration process will be a dicolored tree $(\hat\T,u(\vec v^*),\hat\TAU_1,\hat\TAU_2)$ whose vertex set is contained in $[0,1]$.
This tree is isomorphic to $\partial^\omega(\tilde\G,\vec v^*,\hat\SIGMA_1,\hat\SIGMA_2)$ w.h.p.\
Furthermore, the distribution of the tree is at total variation distance $o(1)$ from that of $(\T(d),v_0,\TAU_1,\TAU_2)$.
Throughout the exploration process, every vertex is marked either \emph{dead}, \emph{alive}, \emph{rejected} or \emph{unborn}.
The semantics of the marks is similar to the one in the usual ``branching process'' argument for the component exploration in the random graph:
vertices whose neighbors have been explored are ``dead'', vertices that have been reached but whose neighbors have not yet been inspected are ``alive'',
and vertices that the process has not yet discovered are ``unborn''.
The additional mark ``rejected'' is necessary because we reveal the colors of the vertices as we explore them.
More specifically, as we explore the neighbors of an alive vertex $v$,
we insert a ``candidate edge'' between the alive vertex and {\em every} unborn vertex with probability $p$
independently.
If upon revealing the colors of the ``candidate neighbor'' $w$ of $v$ we find a conflict (i.e., $\hat\SIGMA_1(v)=\hat\SIGMA_1(w)$ or
$\hat\SIGMA_2(v)=\hat\SIGMA_2(w)$), we ``reject'' $w$ and the ``candidate edge'' $\{v,w\}$ is discarded.
Additionally, we will maintain for each vertex $v$ a number $D(v)\in[0,\infty]$; the intention is that $D(v)$ is the distance from the root $\vec v^*$ in the part
of the graph that has been explored so far.
The formal description of the process is as follows.
\begin{description}
\item[EX1]
Initially, $\vec v^*$ is alive, $D(\vec v^*)=0$, and all other vertices $v\neq\vec v^*$ are unborn and $D(v)=\infty$.
Choose a pair of colors $(\hat\SIGMA_1(\vec v^*),\hat\SIGMA_2(\vec v^*))\in[k]^2$ uniformly at random.
Let $\hat\T$ be the tree consisting of the root vertex $u(\vec v^*)$ only and let
$\hat\TAU_h(u(\vec v^*))=\hat\SIGMA_h(\vec v^*)$ for $h=1,2$.
\item[EX2]
While there is an alive vertex $y$ such that $D(y)<\omega$, let $v$ be the least such vertex.
For each vertex $w$ that is either rejected or unborn let $a_{vw}={\rm Be}(p)$;
the random variables $a_{vw}$ are mutually independent.
For each unborn vertex $w$ such that $a_{vw}=1$ choose a pair $(\hat\SIGMA_1(w),\hat\SIGMA_2(w))\in[k]^2$ independently and uniformly at random
and set $D(w)=D(v)+1$.
Extend the tree $\hat\T$ by adding the vertex $u(w)$ and the edge $\{u(v),u(w)\}$
and by setting $\hat\TAU_1(u(w))=\hat\SIGMA_1(w)$, $\hat\TAU_2(u(w))=\hat\SIGMA_2(w)$
for every unborn $w$ such that $a_{vw}=1$,
$\hat\SIGMA_1(v)\neq\hat\SIGMA_1(w)$ and $\hat\SIGMA_2(v)\neq\hat\SIGMA_2(w)$.
Finally, declare the vertex $v$ dead, declare all $w$ with $a_{vw}=1$ and $\hat\SIGMA_1(v)\neq\hat\SIGMA_1(w)$ and
$\hat\SIGMA_2(v)\neq\hat\SIGMA_2(w)$ alive, and declare all other $w$ with
$a_{vw}=1$ rejected.
\end{description}
The process stops once there is no alive vertex $y$ such that $D(y)<\omega$ anymore, at which point we have got a tree $\hat\T$ that is embedded into $[0,1]$.
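A condensed rendering of {\bf EX1--EX2} in code may help. This is an illustrative simplification (the names are ours) in which the variables $a_{vw}$ are only drawn for unborn $w$, since candidate edges to rejected vertices are discarded anyway.

```python
import random

def explore(n, p, k, omega, rng):
    """Breadth-first exploration of the depth-omega neighborhood of
    vertex 0, revealing color pairs by deferred decisions and rejecting
    candidate neighbors whose colors conflict under either coloring.
    Returns the tree edges and the revealed color pairs."""
    colors = {0: (rng.randrange(k), rng.randrange(k))}
    depth = {0: 0}
    status = {w: 'unborn' for w in range(1, n)}
    alive, edges = [0], []
    while True:
        cand = [y for y in alive if depth[y] < omega]  # EX2 loop condition
        if not cand:
            break
        v = min(cand)       # process the least such alive vertex
        alive.remove(v)     # v becomes dead
        for w in range(n):
            if status.get(w) != 'unborn' or rng.random() >= p:
                continue    # a_{vw} = 0, or w is not available
            cw = (rng.randrange(k), rng.randrange(k))  # deferred decision
            if cw[0] == colors[v][0] or cw[1] == colors[v][1]:
                status[w] = 'rejected'                 # conflicting candidate
            else:
                status[w] = 'alive'
                colors[w], depth[w] = cw, depth[v] + 1
                edges.append((v, w))
                alive.append(w)
    return edges, colors
```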
Let $\cA$ be the event that $\partial^\omega(\tilde\G,\vec v^*)$ is an acyclic subgraph that contains no more than $n^{0.1}$ vertices.
Furthermore, let ${\mathcal R}$ be the event that in {\bf EX2} it never occurs that $a_{vw}=1$ for a rejected vertex $w$.
Then \Lem~\ref{Lemma_shortcycles} implies that $\pr\brk\cA=1-o(1)$.
Moreover, since $p=O(1/n)$ we have $\pr\brk{{\mathcal R}|\cA}=1-O(n^{-0.8})=1-o(1)$, whence $\pr\brk{\cA\cap{\mathcal R}}=1-o(1)$.
Further, given that $\cA\cap{\mathcal R}$ occurs, $\partial^\omega(\tilde\G,\vec v^*,\hat\SIGMA_1,\hat\SIGMA_2)$ is isomorphic to
$(\hat\T,u(\vec v^*),\hat\TAU_1,\hat\TAU_2)$.
Thus,
\begin{align}\label{eqex1}
\pr\brk{\partial^\omega(\tilde\G,\vec v^*,\hat\SIGMA_1,\hat\SIGMA_2)\ism (\hat\T,u(\vec v^*),\hat\TAU_1,\hat\TAU_2)}=1-o(1).
\end{align}
Further, if $\cA\cap{\mathcal R}$ occurs, then whenever {\bf EX2} processes an alive vertex $v$ with $D(v)<\omega$,
the number of unborn neighbors of $v$ of every color combination $(s_1,s_2)$ such that $s_1\neq\hat\SIGMA_1(v)$, $s_2\neq\hat\SIGMA_2(v)$
is a binomial random variable whose mean lies in the interval $[(n-n^{0.1})p/k^2,np/k^2]$.
The total variation distance of this binomial distribution and the Poisson distribution ${\rm Po}(d/(k-1)^2)$,
which is precisely the distribution of the number of children colored $(s_1,s_2)$ in the dicolored Galton-Watson tree,
is $O(n^{-0.9})$ by the choice of $p$.
In addition, let $\cB$ be the event that each interval $((i-1)/n,i/n)$ for $i=1,\ldots,n$ contains at most one vertex of the tree $\partial^\omega\T(d)$.
Then $\pr\brk{\cB}=1-o(1)$ and
given $\cA\cap{\mathcal R}$ and $\cB$, there is a coupling of $(\hat\T,u(\vec v^*),\hat\TAU_1,\hat\TAU_2)$
and $\partial^\omega(\T(d),v_0,\TAU_1,\TAU_2)$ such that
\begin{align}\label{eqex2}
\pr\brk{\partial^\omega(\T(d),v_0,\TAU_1,\TAU_2)=(\hat\T,u(\vec v^*),\hat\TAU_1,\hat\TAU_2)}=1-o(1).
\end{align}
Finally, (\ref{eqEX0}) follows from (\ref{eqex1}) and~(\ref{eqex2}).
\end{proof}
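As an aside, the dicolored Galton-Watson tree that the coupling targets is easy to sample directly: every vertex with color pair $(s_1,s_2)$ independently receives ${\rm Po}(d/(k-1)^2)$ children of each color pair $(t_1,t_2)$ with $t_1\neq s_1$ and $t_2\neq s_2$. The Python sketch below (the function name and data layout are our own) generates the depth-$\omega$ truncation.

```python
import numpy as np

def sample_dicolored_gw(d, k, depth, rng, root_colors=(0, 0)):
    """Sample the depth-`depth` truncation of the dicolored Galton-Watson
    tree: every vertex carries a color pair (s1, s2), and for each pair
    (t1, t2) with t1 != s1 and t2 != s2 the number of children colored
    (t1, t2) is an independent Po(d / (k-1)^2) variable.
    Returned as a nested dict {"colors": (s1, s2), "children": [...]}."""
    s1, s2 = root_colors
    node = {"colors": (s1, s2), "children": []}
    if depth == 0:
        return node
    mean = d / (k - 1) ** 2
    for t1 in range(k):
        for t2 in range(k):
            if t1 == s1 or t2 == s2:
                continue  # both colors must differ from the parent's
            for _ in range(rng.poisson(mean)):
                node["children"].append(
                    sample_dicolored_gw(d, k, depth - 1, rng, (t1, t2)))
    return node
```

In particular, the expected total offspring of every vertex is $(k-1)^2\cdot d/(k-1)^2=d$, matching the branching number of the uncolored tree.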
\begin{corollary}\label{Lemma_bin}
Let $\theta$ be a rooted tree, let $\tau_1,\tau_2\in\cS_k(\theta)$ and let $\omega\geq0$.
Moreover, let $p= m/(\bink n2(1-1/k)^2)$.
Then
\begin{equation}\label{eqProp_Nor1}
\lim_{\eps\searrow 0}\lim_{n\to\infty}\sqrt n\cdot\bplp\brk{|Q_{\theta,\tau_1,\tau_2,\omega}-q_{\theta,\tau_1,\tau_2,\omega}|>\eps}=0.
\end{equation}
\end{corollary}
\begin{proof}
This follows by combining \Lem s~\ref{Lemma_conc} and~\ref{Lemma_bin_0}.
\end{proof}
\noindent
Finally, \Prop~\ref{Prop_Nor} is immediate from \Lem~\ref{Lemma_binmodel} and \Cor~\ref{Lemma_bin}.
\section{Establishing local weak convergence}\label{Sec_lwc}
\noindent{\em Throughout this section we assume that $k\geq k_0$ for some large enough constant $k_0$ and that $d<d_{k,\mathrm{cond}}$.}
\medskip
\noindent
Building upon \Prop s~\ref{Thm_contig} and~\ref{Prop_Nor}, we are going to prove \Thm~\ref{Thm_xlwc} and its corollaries.
The key step is to establish the following statement.
\begin{proposition}\label{Prop_replicas}
Let $\omega\geq0$, let $\theta_1,\ldots,\theta_l$ be rooted trees and let $\tau_1\in\cS_k(\theta_1),\ldots,\tau_l\in\cS_k(\theta_l)$.
Let
$$X_n=\sum_{v_1,\ldots,v_l\in[n]}
\bck{\prod_{i=1}^l\vec{1}\cbc{\partial^\omega(\G,v_i,\SIGMA)\ism(\theta_i,\tau_i)}}_{\G}.$$
Then $n^{-l}X_n$ converges to $\prod_{i=1}^l\pr\brk{\nb{\T(d)}\ism(\theta_i,\tau_i)}$ in probability.
\end{proposition}
\noindent
The purpose of \Prop s~\ref{Thm_contig} and~\ref{Prop_Nor} was to facilitate the proof of the following fact.
\begin{lemma}\label{Cor_Nor}
Let $\theta$ be a rooted tree and let $\tau\in\cS_k(\theta)$.
Moreover, set
\begin{align*}
Q(v)&=\vec{1}\cbc{\nbg{\G}{v}\ism\theta}\cdot
\bck{\prod_{j=1}^2\bc{\vec{1}\cbc{\partial^\omega(\G,v,\SIGMA_j)\ism(\theta,\tau)}-Z_{k}(\theta)^{-1}}
}_{\G},&
Q&=\frac1n\sum_{v\in[n]}Q(v).
\end{align*}
Then $Q$ converges to $0$ in probability.
\end{lemma}
\begin{proof}
Let $t(G,v,\sigma)=\vec{1}\cbc{\nbh{G}{v}{\sigma}\ism(\theta,\tau)}$ and $z=Z_{k}(\theta)$ for brevity.
Then
\begin{align*}
Q(v)&=\vec{1}\cbc{\nbg{\G}{v}\ism\theta}\cdot
\bck{(t(\G,v,\SIGMA_1)-z^{-1})(t(\G,v,\SIGMA_2)-z^{-1})}\\
&=\vec{1}\cbc{\nbg{\G}{v}\ism\theta}\cdot
\bc{\brk{\bck{t(\G,v,\SIGMA_1)t(\G,v,\SIGMA_2)}-z^{-2}}+
2z^{-1}\brk{z^{-1}-\bck{t(\G,v,\SIGMA)}}}.
\end{align*}
Hence, setting
\begin{align*}
Q'(v)&=\vec{1}\cbc{\nbg{\G}{v}\ism\theta}\cdot\brk{\bck{t(\G,v,\SIGMA_1)t(\G,v,\SIGMA_2)}-z^{-2}},&
Q''(v)&=\vec{1}\cbc{\nbg{\G}{v}\ism\theta}\cdot\brk{z^{-1}-\bck{t(\G,v,\SIGMA)}},\\
Q'&=\frac1n\sum_{v\in[n]}Q'(v),&
Q''&=\frac1n\sum_{v\in[n]}Q''(v),
\end{align*}
we obtain
\begin{equation}\label{eqQQQ}
Q=Q'+\frac2zQ''.
\end{equation}
Now, let $(\hat \G,\hat\SIGMA_1,\hat\SIGMA_2)$ denote a random dicolored graph chosen from the planted replica model and set
\begin{align*}
\hat Q'(v)&=\vec{1}\cbc{\nbg{\hat \G}{v}\ism\theta}\cdot\brk{t(\hat\G,v,\hat\SIGMA_1)t(\hat\G,v,\hat\SIGMA_2)-z^{-2}},&
\hat Q''(v)&=\vec{1}\cbc{\nbg{\hat\G}{v}\ism\theta}\cdot\brk{z^{-1}-t(\hat\G,v,\hat\SIGMA_1)},\\
\hat Q'&=\frac1n\sum_{v\in[n]}\hat Q'(v),&
\hat Q''&=\frac1n\sum_{v\in[n]}\hat Q''(v).
\end{align*}
Then \Prop~\ref{Prop_Nor} shows that $\hat Q'$ converges to $0$ in probability.
In addition, applying \Prop~\ref{Prop_Nor} and marginalising $\hat\SIGMA_2$ implies that $\hat Q''$ converges to $0$ in probability as well.
Hence, \Prop~\ref{Thm_contig} entails that $Q'$, $Q''$ converge to $0$ in probability.
Thus, the assertion follows from~(\ref{eqQQQ}).
\end{proof}
\noindent
We complete the proof of \Prop~\ref{Prop_replicas} by generalising the elegant argument that was used in~\cite[\Prop~3.2]{GM}
to establish a statement similar to the $\omega=0$ case of \Prop~\ref{Prop_replicas}.
\begin{lemma}\label{Lemma_replicas}
There exists a sequence $\eps=\eps(n)=o(1)$ such that the following is true.
Let $\theta_1,\ldots,\theta_l$ be rooted trees, let $\tau_1\in\cS_k(\theta_1),\ldots,\tau_l\in\cS_k(\theta_l)$,
let $\emptyset\neq J\subset[l]$ and let $\omega\geq0$ be an integer.
For a graph $G$ let $\cX_{\theta_1,\ldots,\theta_l}(G,J,\omega)$
be the set of all vertex sequences $u_1,\ldots,u_l$ such that $\nbg{G}{u_i}\ism\theta_i$ while
$$\abs{\bck{\prod_{i\in J}\vec{1}\cbc{\nbh{G}{u_i}{\SIGMA}\ism(\theta_i,\tau_i)}-\frac1{Z_{k}(\theta_i)}}_G}>\eps.$$
Then $|\cX_{\theta_1,\ldots,\theta_l}(\G,J,\omega)|\leq\eps n^l$ w.h.p.
\end{lemma}
\begin{proof}
Let $t_{i}(v,\sigma)=\vec{1}\cbc{\nbh{\G}{v}{\sigma}\ism(\theta_i,\tau_i)}$ and $z_i=Z_{k}(\theta_i)$ for the sake of brevity.
Moreover, set
\begin{align*}
Q_i(v)&=\vec{1}\cbc{\nbg{\G}{v}\ism\theta_i}\cdot\bck{(t_{i}(v,\SIGMA_1)-z_i^{-1})(t_{i}(v,\SIGMA_2)-z_i^{-1})}_{\G},&
Q_i&=\frac1n\sum_{v\in[n]}Q_i(v).
\end{align*}
Then \Lem~\ref{Cor_Nor} implies that there exists $\eps=\eps(n)=o(1)$ such that $\sum_{i\in[l]}Q_i\leq\eps^{3}$ w.h.p.\
Therefore, fixing an arbitrary element $i_0\in J$, we see that w.h.p.
\begin{align*}
\frac{\eps^2}{n^{l}}|\cX_{\theta_1,\ldots,\theta_l}(\G,J,\omega )|
&\leq\frac1{n^{l}}\sum_{u_1,\ldots,u_l\in[n]}\bck{\prod_{i\in J}( t_i(u_i,\SIGMA)-z_i^{-1})}_{\hspace{-1mm}\G}^2\,
\prod_{i=1}^l\vec{1}\cbc{\nbg{\G}{u_i}\ism\theta_i}\\
&\hspace{-2cm}\leq
\frac1{n^{l}}\sum_{u_1,\ldots,u_l\in[n]}\bck{( t_{i_0}(u_{i_0},\SIGMA_1)-z_{i_0}^{-1})( t_{i_0}(u_{i_0},\SIGMA_2)-z_{i_0}^{-1})}_{\hspace{-1mm}\G}\,
\prod_{i=1}^l\vec{1}\cbc{\nbg{\G}{u_i}\ism\theta_i}
&\mbox{[as $\SIGMA_1,\SIGMA_2$ are independent]}\\
&\hspace{-2cm}\leq
\frac1{n^{l}}\sum_{u_1,\ldots,u_l\in[n]}Q_{i_0}(u_{i_0})=Q_{i_0}\leq\eps^{3},
\end{align*}
whence $|\cX_{\theta_1,\ldots,\theta_l}(\G,J,\omega)|\leq\eps n^l$ w.h.p.
\end{proof}
\begin{corollary}\label{Cor_replicas}
Let $\omega\geq0$ be an integer,
let $\theta_1,\ldots,\theta_l$ be rooted trees,
let $\tau_1\in\cS_k(\theta_1),\ldots,\tau_l\in\cS_k(\theta_l)$ and let $\delta>0$.
For a graph $G$ let $Y(G)$ be the number of vertex sequences $v_1,\ldots,v_l$ such that
$\nbg{G}{v_i}\ism\partial^\omega\theta_i$ while
\begin{equation}\label{eqProp_replicas}
\abs{\bck{\prod_{i\in[l]}\vec{1}\cbc{\nbh{G}{v_i}{\SIGMA}\ism(\theta_i,\tau_i)}}_G-
\prod_{i\in[l]}\frac1{Z_{k}(\theta_i)}}>\delta.
\end{equation}
Then
$n^{-l}Y(\G)$ converges to $0$ in probability.
\end{corollary}
\begin{proof}
Let $z_i=Z_{k}(\partial^\omega\theta_i)$
for the sake of brevity.
Let ${\mathcal E}_{\theta_1,\ldots,\theta_l}$ be the
set of all $l$-tuples $(v_1,\ldots,v_l)$ of distinct vertices such that $\nbg{\G}{v_i}\ism\theta_i$ for all $i\in[l]$.
Moreover, with the notation of \Lem~\ref{Lemma_replicas} let
$$\cX_{\theta_1,\ldots,\theta_l}=\bigcup_{\emptyset\neq J\subset[l]}\cX_{\theta_1,\ldots,\theta_l}(\G,J,\omega)$$
and set $\cY_{\theta_1,\ldots,\theta_l}={\mathcal E}_{\theta_1,\ldots,\theta_l}\setminus \cX_{\theta_1,\ldots,\theta_l}$.
With $\eps=\eps(n)=o(1)$ from \Lem~\ref{Lemma_replicas},
we are going to show that for each $J\subset[l]$ there exists an ($n$-independent) number $C_J$ such that
\begin{equation}\label{eqProp_replicas1}
\abs{\bck{\prod_{i\in J}\vec{1}\cbc{\nbh{\G}{v_i}{\SIGMA}\ism(\theta_i,\tau_i)}}_{\hspace{-1mm}\G}-
\prod_{i\in J}z_i^{-1}}\leq C_J\eps^{1/2}
\qquad\mbox{for all $(v_1,\ldots,v_l)\in\cY_{\theta_1,\ldots,\theta_l}$}.
\end{equation}
Since $|\cX_{\theta_1,\ldots,\theta_l}|=o(n^l)$ w.h.p.\ by \Lem~\ref{Lemma_replicas}, the assertion follows from~(\ref{eqProp_replicas1}) by setting $J=[l]$.
The proof of (\ref{eqProp_replicas1}) is by induction on $\abs J$.
In the case $J=\emptyset$ there is nothing to show as both products are empty.
As for the inductive step,
set $t_i=\vec{1}\{\nbh{\G}{v_i}{\SIGMA}\ism(\theta_i,\tau_i)\}$ for the sake of brevity.
Then
\begin{align}\nonumber
\bck{\prod_{i\in J}t_i-z_i^{-1}}_{\hspace{-1mm}\G}&=\sum_{I\subset J}(-1)^{|I|}\prod_{i\in I}z_i^{-1}\bck{\prod_{i\in J\setminus I}t_i}_{\hspace{-1mm}\G}\\
&=\bck{\prod_{i\in J}t_i-\prod_{i\in J}z_i^{-1}}_{\hspace{-1mm}\G}+\prod_{i\in J}z_i^{-1}+\sum_{\emptyset\neq I\subset J}
(-1)^{|I|}\prod_{i\in I}z_i^{-1}\bck{\prod_{i\in J\setminus I}t_i}_{\hspace{-1mm}\G}.
\label{eqTelescope}
\end{align}
By the induction hypothesis, for all $\emptyset\neq I\subset J$ we have
\begin{equation}\label{eqTelescope2}
\abs{\bck{\prod_{i\in J\setminus I}t_i}_{\hspace{-1mm}\G}-\prod_{i\in J\setminus I}z_i^{-1}}\leq C_I\eps^{1/2}.
\end{equation}
Combining (\ref{eqTelescope}) and~(\ref{eqTelescope2}) and using the triangle inequality, we see that there exists $C_J>0$ such that
\begin{equation}\label{eqTelescope3}
\abs{\bck{\prod_{i\in J}t_i-z_i^{-1}}_{\hspace{-1mm}\G}-\bck{\prod_{i\in J}t_i-\prod_{i\in J}z_i^{-1}}_{\hspace{-1mm}\G}}\leq C_J\eps^{1/2}/2.
\end{equation}
Since $(v_1,\ldots,v_l)\not\in\cX_{\theta_1,\ldots,\theta_l}$, we have $\abs{\bck{\prod_{i\in J}t_i-z_i^{-1}}_{\G}}\leq\eps$.
Plugging this bound into~(\ref{eqTelescope3}) yields~(\ref{eqProp_replicas1}).
\end{proof}
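The telescoping expansion in the inductive step is an exact finite identity that holds for any averaging operator. The script below is our own numerical check for $|J|=3$, with the Gibbs average $\bck\cdot_\G$ replaced by a hypothetical uniform average over a finite list of indicator vectors.

```python
import itertools, random

random.seed(1)
l = 3                                              # |J|
z = [random.uniform(1.5, 4.0) for _ in range(l)]   # stand-ins for Z_k(theta_i)
# Stand-in for <.>_G: a uniform average over 50 "colorings", each of
# which gives an indicator t_i in {0, 1} per coordinate i in J.
samples = [[random.randint(0, 1) for _ in range(l)] for _ in range(50)]

def avg(f):
    return sum(f(t) for t in samples) / len(samples)

def prod(vals):
    out = 1.0
    for v in vals:
        out *= v
    return out

J = list(range(l))
lhs = avg(lambda t: prod(t[i] - 1 / z[i] for i in J))

# <prod_J t_i - prod_J z_i^{-1}> + prod_J z_i^{-1}
#   + sum over nonempty I subset J of
#     (-1)^{|I|} prod_{i in I} z_i^{-1} <prod_{J \ I} t_i>
rhs = avg(lambda t: prod(t[i] for i in J) - prod(1 / z[i] for i in J))
rhs += prod(1 / z[i] for i in J)
for r in range(1, l + 1):
    for I in itertools.combinations(J, r):
        rest = [i for i in J if i not in I]
        rhs += ((-1) ** r * prod(1 / z[i] for i in I)
                * avg(lambda t, rest=rest: prod(t[i] for i in rest)))

assert abs(lhs - rhs) < 1e-12
```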
\begin{proof}[Proof of \Prop~\ref{Prop_replicas}]
Let $\cU=\cU(\G)$ be the set of all tuples $(v_1,\ldots,v_l)\in[n]^l$ such that
$\partial^\omega(\G,v_i)\ism\theta_i$ for all $i\in[l]$.
Since the random graph converges locally to the Galton-Watson tree~\cite{BordenaveCaputo}, w.h.p.\ we have
\begin{equation}\label{eqProp_replicas_A1}
n^{-l}|\cU|=o(1)+\prod_{i\in[l]}\pr\brk{\partial^\omega\T(d)\ism\theta_i}
\end{equation}
(Alternatively, (\ref{eqProp_replicas_A1}) follows from
\Prop s~\ref{Thm_contig} and~\ref{Prop_Nor} by marginalising $\SIGMA_1,\SIGMA_2$.)
The assertion follows by combining~(\ref{eqProp_replicas_A1}) with \Cor~\ref{Cor_replicas}.
\end{proof}
\begin{proof}[Proof of \Thm~\ref{Thm_xlwc}]
As $\cP^2(\cG_k^l)$ carries the weak topology, we need to show that for any continuous $f:\cP(\cG_k^l)\to\RR$ with a compact support,
\begin{equation}\label{eqConv0}
\lim_{n\to\infty}\int f\dd\vec\lambda_{n,m,k}^l=\int f \dd\vec\thet_{d,k}^l.
\end{equation}
Thus, let $\eps>0$.
Since $\vec\thet_{d,k}^l=\lim_{\omega\to\infty}\vec\thet_{d,k}^l\brk\omega$, we have
\begin{align*}
\int f\dd\vec\thet_{d,k}^l&=\lim_{\omega\to\infty}\int f\dd\vec\thet_{d,k}^l[\omega]
=\lim_{\omega\to\infty}\Erw\int f\dd\atom_{\bigotimes_{i\in[l]}\lambda_{\nb{\T^{i}(d)}}}
=\lim_{\omega\to\infty}\Erw f\bc{\textstyle\bigotimes_{i\in[l]}\lambda_{\nb{\T^{i}(d)}}}.
\end{align*}
Hence, there is $\omega_0=\omega_0(\eps)$ such that for $\omega>\omega_0$ we have
\begin{align}\label{eqConv2}
\abs{\int f\dd\vec\thet_{d,k}^l-\Erw f\bc{\textstyle\bigotimes_{i\in[l]}\lambda_{\nb{\T^{i}(d)}}}}<\eps.
\end{align}
Furthermore, the topology of $\cG_k$
is generated by the functions (\ref{eqtopology}).
Because $f$ has a compact support, this implies that
there is $\omega_1=\omega_1(\eps)$ such that for any $\omega>\omega_1(\eps)$ and all $\Gamma_1,\ldots,\Gamma_l\in\cG_k$ we have
\begin{equation}\label{eqOmegaSuffices}
\abs{f\bc{\bigotimes_{i\in[l]}\atom_{\Gamma_i}}-f\bc{\bigotimes_{i\in[l]}\atom_{\partial^\omega\Gamma_i}}}<\eps.
\end{equation}
Hence, pick some $\omega>\omega_0+\omega_1$ and assume that $n>n_0(\eps,\omega)$ is large enough.
Let $\vec v_1,\ldots,\vec v_l$ denote vertices of $\G$ that are chosen independently and uniformly at random.
By the linearity of expectation and the definitions of $\vec\lambda_{n,m,k}^l$ and $\lambda_{\G,v_1,\ldots,v_l}$,
\begin{align*}
\int f\dd\vec\lambda_{n,m,k}^l&=\Erw\int f\dd\atom_{\lambda_{\G,\vec v_1,\ldots,\vec v_l}}
=\Erw f(\lambda_{\G,\vec v_1,\ldots,\vec v_l})
=\Erw\bck{f({\textstyle \bigotimes_{i\in[l]}\atom_{[\G\|\vec v_i,\vec v_i,\SIGMA\|\vec v_i]}})}.
\end{align*}
Consequently, (\ref{eqOmegaSuffices}) yields
\begin{align}\label{eqConv1}
\abs{\int f\dd\vec\lambda_{n,m,k}^l-
\Erw\bck{f({\textstyle \bigotimes_{i\in[l]}\atom_{\partial^\omega[\G\|\vec v_i,\vec v_i,\SIGMA\|\vec v_i]}})}}
&<\eps.
\end{align}
Hence, we need to compare $\Erw\bck{f({\textstyle \bigotimes_{i\in\brk l}\atom_{\partial^\omega[\G\|\vec v_i,\vec v_i,\SIGMA\|\vec v_i]}})}$
and $\Erw f\bc{\textstyle\bigotimes_{i\in[l]}\lambda_{\nb{\T^{i}(d)}}}$.
Because the tree structure of $\T(d)$ stems from a Galton-Watson branching process,
there exist finitely many pairwise non-isomorphic rooted trees $\theta_1,\ldots,\theta_h$ together
with $k$-colorings $\tau_1\in\cS_k(\theta_1),\ldots,\tau_h\in\cS_k(\theta_h)$ such that,
with $p_i=\pr\brk{\nb{\T(d)}\ism(\theta_i,\tau_i)}$, we have
\begin{equation}\label{eqNasty1}
\sum_{i\in[h]}p_i>1-\eps.
\end{equation}
Further, \Prop~\ref{Prop_replicas} implies that for $n$ large enough and any $i_1,\ldots,i_l\in[h]$ we have
\begin{align}\label{eqNasty2}
\Erw\abs{\bck{\prod_{j=1}^l\vec{1}\cbc{\partial^\omega[\G\|\vec v_j,\vec v_j,\SIGMA\|\vec v_j]\ism(\theta_{i_j},\tau_{i_j})}}-\prod_{j\in[l]}p_{i_j}}<h^{-l}\eps.
\end{align}
Combining (\ref{eqOmegaSuffices}), (\ref{eqNasty1}) and (\ref{eqNasty2}), we conclude that
\begin{equation}\label{eqNasty3}
\abs{\Erw\bck{f({\textstyle \bigotimes_{i\in\brk l}\atom_{\partial^\omega[\G\|\vec v_i,\vec v_i,\SIGMA\|\vec v_i]}})}-
\Erw f\bc{\textstyle\bigotimes_{i\in[l]}\lambda_{\nb{\T^{i}(d)}}}}<3l\norm f_\infty\eps.
\end{equation}
Finally,
(\ref{eqConv0}) follows from (\ref{eqConv2}), (\ref{eqConv1}) and~(\ref{eqNasty3}).
\end{proof}
\begin{proof}[Proof of \Cor~\ref{Cor_xlwc}]
While it is not difficult to derive \Cor~\ref{Cor_xlwc} from \Thm~\ref{Thm_xlwc},
\Cor~\ref{Cor_xlwc} is actually immediate from \Prop~\ref{Prop_replicas}.
\end{proof}
\begin{proof}[Proof of \Cor~\ref{Cor_decay}]
\Cor~\ref{Cor_decay} is simply the special case of setting $\omega=0$ in \Cor~\ref{Cor_xlwc}.
\end{proof}
\begin{proof}[Proof of \Cor~\ref{Cor_reconstr}]
For integer $\omega\geq 0$, consider the quantities
$\frac1n\sum_{v\in[n]}\Erw[\corr_{k,\gnm}(v, \omega)]$ and $\Erw[\corr_{k,\partial^{\omega}\T(d)}(v_0, \omega)]$.
The corollary follows by showing that
\begin{eqnarray}\label{eq:target-Cor_reconstr}
\left|\frac1n\sum_{v\in[n]}\Erw[\corr_{k,\gnm}(v, \omega)]- \Erw[\corr_{k,\partial^{\omega}\T(d)}(v_0, \omega)] \right |=o(1).
\end{eqnarray}
Let $\cA$ denote the quantity on the l.h.s.\ of the above equality. It holds that
\begin{eqnarray}
\cA&\leq & \left|\frac1n\sum_{v\in[n]}\left(\Erw[\corr_{k,\gnm}(v, \omega)]- \Erw[\corr_{k,\partial^{\omega}\gnm}(v, \omega)] \right )\right |
+\left|\frac1n\sum_{v\in[n]}\Erw[\corr_{k,\partial^{\omega}\gnm}(v, \omega)]- \Erw[\corr_{k,\partial^{\omega}\T(d)}(v_0, \omega)] \right |. \nonumber
\end{eqnarray}
We observe that, for any $v$-rooted $G \in \mathfrak G$ and $\omega$ it holds that $\corr_{k,G}(v,\omega)\in [0,1]$.
Then, by using \Cor~\ref{Cor_xlwc} with $l=1$ (i.e.\ weak convergence) we get that
\begin{eqnarray}\label{eq:AbsBound1st}
\left|\frac1n\sum_{v\in[n]}\left(\Erw[\corr_{k,\gnm}(v, \omega)]- \Erw[\corr_{k,\partial^{\omega}\gnm}(v, \omega)] \right )\right | =o(1).
\end{eqnarray}
To bound the second quantity, we observe that
\begin{eqnarray}\label{eq:2ndQuantVsTVD}
\left|\frac1n\sum_{v\in[n]}\Erw[\corr_{k,\partial^{\omega}\gnm}(v, \omega)]- \Erw[\corr_{k,\partial^{\omega}\T(d)}(v_0, \omega)] \right |
\leq \pr\brk{\partial^{\omega}(G(n,m),v^*) \not\ism \partial^{\omega}\T(d)} \cdot \max_{\theta} \{ \corr_{k,\theta}(v, \omega)\},
\end{eqnarray}
where $v^*$ is a uniformly random vertex of $G(n,m)$. The probability term
$\pr\brk{\partial^{\omega}(G(n,m),v^*) \not\ism \partial^{\omega}\T(d)}$ is w.r.t.\ any coupling
of $\partial^{\omega}(G(n,m),v^*)$ and $\partial^{\omega}\T(d)$, and the maximum is taken over all trees $\theta$
with at most $n$ vertices and at most $\omega$ levels.
Working as in \Lem~\ref{Lemma_bin_0}, we get the following: there is a coupling
of $\partial^{\omega}(G(n,m),v^*)$ and $\partial^{\omega}\T(d)$, where $d=2m/n$, such that
\begin{eqnarray}
\pr\brk{\partial^{\omega}(G(n,m),v^*) \ism \partial^{\omega}\T(d)}=1-o(1).\label{eq:AsymEqDistr}
\end{eqnarray}
Plugging (\ref{eq:AsymEqDistr}) into (\ref{eq:2ndQuantVsTVD}) we get that
\begin{eqnarray}\label{eq:AbsBound2nd}
\left|\frac1n\sum_{v\in[n]}\Erw[\corr_{k,\partial^{\omega}\gnm}(v, \omega)]- \Erw[\corr_{k,\partial^{\omega}\T(d)}(v_0, \omega)] \right |=o(1),
\end{eqnarray}
since it always holds that $\corr_{k,\theta}(v, \omega)\in [0,1]$.
From (\ref{eq:AbsBound1st}) and (\ref{eq:AbsBound2nd}) we get that $\cA=o(1)$, i.e., (\ref{eq:target-Cor_reconstr}) holds. The corollary follows.
\end{proof}
\begin{remark}
Alternatively, we could have deduced \Cor~\ref{Cor_reconstr} from \Lem~\ref{Lemma_Z2} and \cite[\Thm~1.4]{GM}.
\end{remark}
\bigskip
\noindent{\bf Acknowledgement.}
We thank Ralph Neininger for helpful discussions.
\section{Introduction}
\label{sec:introduction}
\subsection{Background}
Machine learning (ML) has been extensively used in various aspects of society \cite{dlsurvey}. We have seen great improvements in areas such as image recognition and natural language processing.
\xdef\@thefnmark{}\@footnotetext{The authors are alphabetically ordered.}
However, in recent years, it has been reported that the privacy of the training data can be significantly undermined by analyzing ML models. Since, in most applications, privacy-sensitive data are used as the training data for the models, protecting the privacy of the training data is crucial for gaining the approval of data providers and, more broadly, society.
Following the growing concern for privacy in society worldwide, many countries and regions are introducing regulations for data protection, e.g., the General Data Protection Regulation (GDPR) \cite{GDPR}, California Consumer Privacy Act (CCPA) \cite{CCPA}, and Health Insurance Portability and Accountability Act (HIPAA) \cite{HIPAA}. Moreover, guidelines and regulations designed specifically for trustworthiness in artificial intelligence (AI) and ML are under discussion \cite{EU-Guideline}.
\vspace{1em}\noindent\textbf{Membership Inference Attacks:} One of the most fundamental attacks against the privacy of a ML model is the \textit{membership inference attack (MIA)} \cite{shokri2017membership,nasr2018machine,salem2019ml,song2020systematic,DBLP:conf/sp/NasrSH19,DBLP:conf/ccs/SongSM19,choo2020label,veale2018algorithms,yeom2018privacy,DBLP:journals/corr/abs-2103-07853,sablayrolles2019white}, where an attacker guesses whether the given target data is in the training data of a ML model.
MIAs are dangerous because they reveal information about individual pieces of data rather than trends in the whole population of training data. For instance, consider an ML model for inferring a reaction to some drug from a cancer patient's morphological data. An MIA attacker who knows the victim's data and has access rights to the ML model can learn whether the victim has cancer, even though the victim's data themselves do not directly contain this information.
Another reason that MIAs are dangerous is that they can be executed through legitimate access to ML models alone, meaning that they cannot be prevented by conventional security methods such as data encryption and access control \cite{shokri2017membership}.
\vspace{1em}\noindent\textbf{Defense against MIAs:} The current state-of-the-art defense against MIAs is \textit{Distillation for Membership Privacy (DMP)} \cite{shejwalkar2021membership}. It can protect even against various state-of-the-art MIA attacks \cite{song2020systematic,DBLP:conf/sp/NasrSH19,choo2020label}, against which the previous defenses \cite{nasr2018machine,jia2019memguard} offer little protection, and its success comes from the ``semi-supervised assumption'' that a defender can obtain public unlabeled data. Specifically, DMP exploits a knowledge transfer technique \cite{hinton2015distil}: a defender trains an ML model on their own private data, feeds public data to this model to obtain its outputs on them, and trains another ML model on the public data and the corresponding outputs. Such indirect usage of private data makes knowledge distillation-based methods highly effective in protecting the privacy of private data.
However, in many domains of ML applications, public data are scarce due to the sensitive nature of the data, e.g., financial and medical data. To overcome this, the utilization of synthetic data has also been proposed \cite{shejwalkar2021membership}. However, this decreases accuracy \cite{shejwalkar2021membership} due to the lower quality of the data.
\subsection{Our Contributions}
In this paper, we propose a novel knowledge distillation-based defense that uses only private data for model training.
Our contributions are as follows.
\begin{itemize}
\item We propose a novel MIA defense called \textit{knowledge cross-distillation (KCD)}\footnote{After we submitted our work to PETS 2022 Issue 2, Tang et al. \cite{tang2021mitigating} published a concurrent and independent work similar to ours on arXiv.}. Unlike the state-of-the-art defense, DMP, it does not require any public or synthetic reference data to protect ML models. Hence, KCD allows us to protect the privacy of ML models in areas where public reference data are scarce.
\item For the benchmark tabular datasets used in MIA research, Purchase100 and Texas100, we empirically show that the privacy protection and accuracy of KCD are comparable to those of DMP even though KCD does not require public or synthetic data, unlike DMP.
\item For the image dataset CIFAR10, we empirically show that the accuracy of KCD is comparable to that of DMP, and KCD provides a much better privacy-utility trade-off than those of other defenses that do not require public or synthetic reference data.
\end{itemize}
\subsection{Other Related Works}
We focus only on related works that are directly related to our contributions. See Hu et al. \cite{DBLP:journals/corr/abs-2103-07853} for a comprehensive survey of MIAs.
\vspace{1em}
\noindent\textbf{Membership Inference Attacks:} One of the earliest works considering MIAs is by Homer et al. \cite{homer2008resolving}, and MIAs were introduced in the ML setting in a seminal work by Shokri et al. \cite{shokri2017membership}. A series of MIA attacks, now called \textit{neural network-based attacks}, was proposed by Shokri et al. \cite{shokri2017membership} and was studied in detail by Salem et al. \cite{salem2019ml} and Truex et al. \cite{8634878}. Later, a new type of MIA attack, the \textit{metric-based attack}, was proposed by Yeom et al. \cite{yeom2018privacy} and studied by Song et al. \cite{DBLP:conf/ccs/SongSM19}, Salem et al. \cite{salem2019ml}, and Leino et al. \cite{DBLP:conf/uss/LeinoF20}. Then, Song et al. \cite{song2020systematic} summarized and improved upon them and proposed the state-of-the-art metric-based attack as well.
Choo et al. \cite{choo2020label} and Li et al. \cite{DBLP:journals/corr/abs-2007-15528} independently and concurrently succeeded in attacking neural networks in a \textit{label-only setting}, where an attacker can obtain only labels as outputs of the target neural network, whereas the attackers in other known papers require confidence scores as the outputs. Nasr et al. \cite{DBLP:conf/sp/NasrSH19} proposed an MIA attack in a \textit{white-box setting}, where an attacker can obtain the structure and parameters of the target neural network.
\vspace{1em}
\noindent\textbf{Known Defenses:} One known method for mitigating MIAs is \textit{differential privacy} \cite{10.1007/11787006_1,DBLP:conf/eurocrypt/DworkKMMN06}, a technique for guaranteeing worst-case privacy by adding noise to the learning objective or the model outputs.
However, defenses designed to protect against MIAs specifically have better privacy-utility trade-offs. Three MIA-specific defenses were proposed: AdvReg by Nasr et al. \cite{nasr2018machine}, MemGuard by Jia et al. \cite{jia2019memguard}, and DMP \cite{shejwalkar2021membership}.
An important technique for protecting ML models against MIAs is knowledge transfer \cite{hinton2015distil}. Using this technique, PATE by Papernot et al. \cite{DBLP:conf/iclr/PapernotAEGT17,DBLP:conf/iclr/PapernotSMRTE18} achieved DP, Cronus \cite{chang2019cronus} by Chang et al. protected ML models from MIAs in a federated learning setting, and DMP \cite{shejwalkar2021membership} achieved a higher privacy-utility trade-off by removing public data with high-entropy predictions.
Currently, DMP is the best defense in terms of the privacy-utility trade-off. However, it requires public data. The other known defenses, AdvReg and MemGuard, have the advantage that they do not require public reference data.
\section{Preliminaries}
\subsection{Machine Learning}
\textit{An ML model} for a classification task is a function $F$ parameterized by internal \textit{model parameters}. It takes a $d$-dimensional real-valued vector $x\in\mathbb{R}^{d}$ as input and outputs a $c$-dimensional real-valued vector $\hat{y}=(\hat{y}_1,\ldots,\hat{y}_c)$. The output $\hat{y}$ has to satisfy $\hat{y}_i\in [0,1]$ and $\sum_i\hat{y}_i=1$. Each $\hat{y}_i$ is called a \textit{confidence score}. Intuitively, it measures the likelihood of $x$ belonging to class $i$. The index $\mathop{\rm argmax}\limits_i\hat{y}_i$ is called the \textit{predicted label} (or \textit{predicted class}).
An ML model $F$ is trained using a \textit{training dataset} $D\subset\{(x,y) \mid x\in\mathbb{R}^{d},\ y\in\{0,1\}^{c}\}$, where $x$ is a data point, and $y$ is a one-hot vector reflecting the true class label of $x$. In the training procedure, the model parameters of $F$ are iteratively updated to reduce the predetermined \textit{loss} $\sum_{(x,y)\in D}L(F(x),y)$, which is the sum of errors between the prediction $F(x)$ and true label $y$. For inference, $F$ takes input $x$ and outputs $\hat{y}=F(x)$ as a \textit{prediction}.
The \textit{accuracy} of $F$ for a dataset $D$ is the fraction of elements $(x,y)\in D$ satisfying $\mathop{\rm argmax}\limits_iF(x)_i=\mathop{\rm argmax}\limits_iy_i$. Here, $F(x)_i$ and $y_i$ are the $i$-th components of $\hat{y}=F(x)$ and $y$, respectively. The \textit{training accuracy} and \textit{testing accuracy} of $F$ are the accuracies for the training and testing datasets, respectively. Here, a \textit{testing dataset} is a dataset that does not overlap with the training dataset. The \textit{generalization gap} of $F$ is the difference between the training and testing accuracies.
\subsection{Membership Inference Attack (MIA)}
\label{subsec:MIA-def}
MIA is an attack in which an attacker attempts to determine whether given data (called \textit{target data}) were used for training a given ML model (called a \textit{target model}). In the discussion of MIAs, the training data of the target model are called \textit{member data}, and non-training data are called \textit{non-member data}.
There are two types of MIAs, \textit{white-box} and \textit{black-box} \cite{shokri2017membership,DBLP:conf/sp/NasrSH19}. Attackers of the former can take as input the model structure and model parameters of the target model. Attackers of the latter do not take them as input but are allowed to make queries to the target model and obtain answers any number of times.
A black-box MIA can be divided into the two sub-types, \textit{MIA with confidence scores} and \textit{label-only MIA} \cite{choo2020label}.
Attackers of the former can obtain confidence scores as answers from the target model but attackers of the latter can obtain only predicted labels as answers.
In all types of MIAs, the attackers can take the target data and \textit{prior knowledge} as inputs. Intuitively, the prior knowledge is what attackers know in advance. What type of prior knowledge an adversary can obtain depends on the assumed threat model.
An example of prior knowledge is a dataset sampled from the same distribution as the training data of the target model, not overlapping with the training data. Another example is a portion of the training data. The prior knowledge we focused in this study is described in Section \ref{subsection:Setting-of-MIA}.
The \textit{attack accuracy} of an attacker for an MIA is the probability that they succeed in inferring whether the target data are member data. As in all previous papers, the target data are taken from the member data with a probability of $50\%$.
One of the main factors causing MIA risk is
\textit{overfitting} of an ML model on the training ($=$member) data.
Member data can then be distinguished from non-member data \cite{yeom2018privacy,salem2019ml} by how strongly they are overfitted to the target model, e.g., by checking whether the highest confidence score output by the target model exceeds a given threshold.
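The threshold test just mentioned is the simplest metric-based MIA. Below is a minimal sketch; the confidence scores are made up for illustration, and real attacks calibrate the threshold, e.g., on shadow models.

```python
import numpy as np

def confidence_threshold_mia(confidences, threshold):
    """Metric-based MIA in the spirit of Yeom et al.: guess 'member'
    iff the model's highest confidence score on the target data point
    exceeds the threshold. `confidences` is an (n, c) array of softmax
    outputs; returns one boolean membership guess per row."""
    return confidences.max(axis=1) >= threshold

# Toy scores: an overfitted model is near-certain on its member data
# and noticeably less confident on non-member data.
member = np.array([[0.97, 0.02, 0.01], [0.99, 0.005, 0.005]])
nonmember = np.array([[0.60, 0.25, 0.15], [0.45, 0.40, 0.15]])
guesses = confidence_threshold_mia(np.vstack([member, nonmember]), 0.9)
# guesses -> [True, True, False, False] on this toy data
```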
\subsection{Distillation for Membership Privacy (DMP)}
\label{subsec:DMP}
\textit{DMP} \cite{shejwalkar2021membership} is a state-of-the-art defense method against MIAs that leverages knowledge distillation \cite{hinton2015distil}. Distillation was originally introduced as a model compression technique that transfers the knowledge of a large \textit{teacher model} to a small \textit{student model} by using the outputs of the teacher model on an unlabeled \textit{reference dataset}. DMP needs a public reference dataset $R$ disjoint from the training dataset $D$ to train ML models with membership privacy.
The training algorithm of DMP is given in Algorithm \ref{alg:DMP}. Here, $L$ is the loss function. First, DMP trains a teacher model $F$ using a private training dataset $D$ (Step 1). $F$ is overfitted to $D$ and therefore vulnerable to MIAs. Next, DMP computes the \textit{soft label} $F(x)$ of each piece of data $x$ in the public reference dataset $R$ and lets $\bar{R}$ be the set of all $(x,F(x))$ (Step 2). Finally, to obtain a protected model, DMP trains a student model $H$ using the dataset $\bar{R}$ (Step 3). $H$ has MIA resistance because it is trained without direct access to the private $D$. Note that DMP uses an $H$ with the same architecture as $F$.
\begin{algorithm}
\caption{Training algorithm of DMP}
\label{alg:DMP}
\begin{algorithmic}[1]
\REQUIRE training dataset $D\subset\{(x,y)\mid x\in\mathbb{R}^{d}, y\in\{0,1\}^{c}\}$, reference dataset $R\subset\{x\mid x\in\mathbb{R}^{d}\}$, and initialized parameters of $F,H$.
\ENSURE Distilled student model $H$.
\STATE Train $F$ by using $D$ as a training dataset until the training converges to minimize the loss $$\sum_{(x,y)\in D}L(F(x),y).$$
\STATE Let $\bar{R}$ be a dataset $R$ with soft labels,
$$\bar{R}=\{(x,F(x))\mid x\in R\}.$$
\STATE Train $H$ by using a dataset $\bar{R}$ until the training converges to minimize the loss
$$\sum_{(x,y')\in\bar{R}} L(H(x),y').$$
\STATE Return $H$.
\end{algorithmic}
\end{algorithm}
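To make the data flow of Algorithm \ref{alg:DMP} concrete, here is a toy sketch in which both $F$ and $H$ are linear softmax classifiers trained by gradient descent. These models and the Gaussian toy data are our own stand-ins (the actual experiments use neural networks); linear models barely overfit, so the sketch illustrates the mechanics rather than the privacy gain.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def train_softmax(X, Y, epochs=300, lr=0.5):
    """Fit a linear softmax classifier to a (possibly soft) label
    matrix Y by gradient descent on the cross-entropy loss."""
    W = np.zeros((X.shape[1], Y.shape[1]))
    for _ in range(epochs):
        W -= lr * X.T @ (softmax(X @ W) - Y) / len(X)
    return W

def make_blobs(n, rng):
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, 2)) + 2.0 * y[:, None]   # two shifted blobs
    return np.hstack([X, np.ones((n, 1))]), y        # append a bias column

# Step 1: train the (unprotected) teacher F on the private data D.
X_priv, y_priv = make_blobs(200, rng)
W_teacher = train_softmax(X_priv, np.eye(2)[y_priv])

# Step 2: soft-label an unlabeled public reference set R with F.
X_ref, _ = make_blobs(200, rng)          # labels of R are never used
Y_soft = softmax(X_ref @ W_teacher)

# Step 3: train the student H on (R, soft labels); H never touches D.
W_student = train_softmax(X_ref, Y_soft)
```

The student only ever sees $R$ and the teacher's outputs on it, which is the source of DMP's membership privacy.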
The authors of DMP \cite{shejwalkar2021membership} proposed three different ways of achieving the desired privacy-utility trade-offs:
\begin{itemize}
\item increasing the temperature of the softmax layer of $F$,
\item removing reference data with high entropy predictions from $F$,
\item decreasing the size of the reference dataset.
\end{itemize}
All of the above changes reduce the MIA risk but also the accuracy of $H$, and vice versa.
When we use the second or third way to tune DMP, we select samples from the reference dataset and use them as $R$ in Step 2.
\section{Our Proposed Defense}
In this section, we propose a new defense that can protect ML models from MIAs without using a reference dataset.
\subsection{Idea}
\label{subsection: Idea}
The starting point of our approach is DMP \cite{shejwalkar2021membership}.
That is, we train a \textit{teacher model} $F$ using a training dataset $D$, compute the \textit{soft label} $F(x)$ for each $x$
in a public \textit{reference dataset} $R$, train a \textit{student model} $H$ using the pairs $(x,F(x))$, and, finally, use $H$ for inference. DMP can mitigate MIAs as described in Section \ref{subsec:DMP}.
The problem with DMP is that it requires a public reference dataset, which may be difficult to collect in privacy-sensitive domains \cite{shejwalkar2021membership}.
A na\"{i}ve idea to solve this problem is to use the original $D$ as a reference dataset. However, our experiment shows that this approach does not sufficiently mitigate the MIA risk (see Section \ref{subsec:Discussions}).
The main problem with the na\"{i}ve idea is that the data $x$ of the reference dataset $R=D$ are \textit{member data} of $F$. Therefore, $F$ overfits $x$, and the confidence score $\hat{y}=F(x)$ is close to the one-hot vector $y$ of the true label. Hence, $H$ trained on $(x,\hat{y})$ again overfits $x$, which can be exploited by an MIA.
Our proposed defense, denoted by \textit{knowledge cross-distillation} (KCD) is designed to overcome the above problem. We divide the training dataset into $n$ parts, leave one part as a reference dataset, and train a teacher $F_1$ using the remaining parts. To increase the accuracy of KCD, we prepare teachers $F_2,\ldots,F_n$ as well and repeat the above procedure for each teacher by changing the reference part. Finally, we use each reference part to distill the knowledge of each corresponding teacher into a single $H$.
Our defense solves the problem of the na\"{i}ve idea because none of the remaining parts of the training dataset are used to train the teacher model.
\subsection{Description}
\label{subsection:Description}
The training algorithm of our proposed defense, KCD, is given in Algorithm \ref{alg:MDMP} and is overviewed in Figure \ref{fig:outline}. Here, $F_1,\ldots,F_n$, and $H$ are models with the same structure as that of the model $F$ that we want to protect\footnote{Although we use the term ``distillation,'' we use teacher and student models with the same structure as in DMP \cite{shejwalkar2021membership}. This is because we are not concerned about the size of the resulting model.}. $L$ is the loss function.
In Algorithm \ref{alg:MDMP}, we divide training dataset $D$ into $n$ disjoint
subsets $D_1,\ldots,D_{n}$ with almost the same size, such that $D=\bigsqcup^{n}_{i=1}D_i$ holds\footnote{$\bigsqcup^{n}_{i=1}D_i$ denotes a disjoint union of sets} (Step 1).
Then, for $i=1,\ldots,n$, we train the teacher model $F_i$ using the dataset $D\setminus D_i$ (Steps 2--4). Let $\bar{D}_i$ be the dataset obtained by adding the soft label $F_i(x)$ to each $(x,y)\in D_i$ (Step 5). Finally, we train a student model $H$ using the dataset $\bigcup_i\bar{D}_i$ to minimize the combined loss function with hyperparameter $\alpha$ (Step 6).
Our loss function comprises two terms: the first is the loss for the soft labels $y'$, and the second is the loss for the true labels $y$. The hyperparameter $\alpha$ tunes the privacy-utility trade-off of KCD. In fact, if $\alpha=1$, our defense protects the privacy of the training data for the reason given in Section \ref{subsection: Idea}. If $\alpha=0$, KCD becomes the same as unprotected ML.
Note that our privacy-utility trade-off based on $\alpha$ cannot be directly applied to the known knowledge distillation-based defenses, DMP \cite{shejwalkar2021membership} and Cronus \cite{chang2019cronus}, because the public reference datasets for these defenses do not have true labels, so the loss between the predicted scores and the true labels cannot be computed.
\begin{algorithm}
\caption{Training algorithm of KCD}
\label{alg:MDMP}
\begin{algorithmic}[1]
\REQUIRE training dataset $D\subset\{(x,y)\mid x\in\mathbb{R}^{d}, y\in\{0,1\}^{c}\}$, hyperparameter $\alpha\in[0,1]$, and initialized parameters of $F_1,\ldots,F_{n},H$.
\ENSURE Distilled student model $H$.
\STATE Divide $D$ into $n$ randomly selected disjoint subsets $\{D_i\}^{n}_{i=1}$ with almost the same size, such that\footnotemark[3] $$D=\bigsqcup^{n}_{i=1}D_i, $$
\FOR{$i=1,\ldots,n$}
\STATE Train $F_{i}$ by using $D\setminus D_i$ as a training dataset until the training converges to minimize the loss $$\sum_{(x,y)\in D\setminus D_i}L(F_i(x),y).$$
\ENDFOR
\STATE Let $\bar{D}_i$ be a dataset $D_i$ with soft label
$$\bar{D}_i=\{(x,F_i(x))\mid \exists y~:~(x,y)\in D_i\},$$
and let $\bar{D}=\cup_i\bar{D}_i$.
\STATE Train $H$ by using a dataset $\bar{D}$ until the training converges to minimize the loss
\begin{equation}\label{mergedLoss}
\alpha\sum_{(x,y')\in\bar{D}} L(H(x),y')+(1-\alpha) \sum_{(x,y)\in D}L(H(x),y)
\end{equation}
\STATE Return $H$.
\end{algorithmic}
\end{algorithm}
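Algorithm \ref{alg:MDMP} compresses into a few lines of code. The sketch below again uses softmax regression as a hypothetical stand-in model; it also uses the fact that, when $L$ is cross-entropy, the combined loss (\ref{mergedLoss}) equals the cross-entropy against the mixed target $\alpha y'+(1-\alpha)y$, since cross-entropy is linear in its target.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train(X, Y, lr=0.5, epochs=300):
    W = np.zeros((X.shape[1], Y.shape[1]))
    for _ in range(epochs):
        W -= lr * X.T @ (softmax(X @ W) - Y) / len(X)
    return W

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = (X[:, 0] > 0).astype(int)
Y = np.eye(2)[y]
n, alpha = 3, 0.9

# Step 1: disjoint split of the index set of D into D_1, ..., D_n.
parts = np.array_split(rng.permutation(len(X)), n)

# Steps 2-5: each teacher F_i never sees its own reference part D_i.
target = np.zeros_like(Y)
for i in range(n):
    rest = np.concatenate([p for j, p in enumerate(parts) if j != i])
    F_i = train(X[rest], Y[rest])              # teacher on D \ D_i
    soft = softmax(X[parts[i]] @ F_i)          # soft labels on D_i
    # cross-entropy is linear in the target, so the combined loss
    # alpha*L(H(x), y') + (1-alpha)*L(H(x), y) is CE against this mixture:
    target[parts[i]] = alpha * soft + (1 - alpha) * Y[parts[i]]

# Step 6: a single student H distilled from all n teachers.
H = train(X, target)
```

Each $F_i$ never sees its own reference part $D_i$, so the soft labels fed to $H$ on $D_i$ carry no membership signal for $D_i$.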
\begin{figure}
\centering
\includegraphics[width=7.5 cm]{outline.jpg}
\caption{\rm Outline of KCD when dividing the training dataset into three subsets. $F_{1}$--$F_{3}$: teacher models, $H$: student model.}\label{fig:outline}
\end{figure}
\section{Experimental Setup}
We conducted our experiments using the following datasets and model architectures as in the previous studies \cite{shokri2017membership, nasr2018machine, jia2019memguard, choo2020label, DBLP:conf/sp/NasrSH19, shejwalkar2021membership, song2020systematic}.
\subsection{Datasets}
\textbf{CIFAR 10:} This is a typical benchmark dataset used for evaluating the performance of image-classification algorithms \cite{Krizhevsky09}. It contains $60,000$ RGB images. Each image is composed of $32 \times 32$ pixels and labeled in one of $10$ classes.
\vspace{1em}\noindent\textbf{Purchase100:}
This is a benchmark dataset used for MIAs.
It is based on a dataset provided by Kaggle's Acquire Valued Shoppers Challenge \cite{Purchase100}.
We used a processed and simplified one by Shokri et al. \cite{shokri2017membership}.
The dataset has $197,324$ records with $600$ binary features, each of which represents whether the corresponding customer purchased an item. The data are clustered into $100$ classes representing different purchase styles, and the classification task is to predict which one of the $100$ classes an input is in.
\vspace{1em}\noindent\textbf{Texas100:} This is also a benchmark dataset used for MIAs. It is based on the hospital discharge data \cite{Texas100} from several health facilities published by the Texas Department of State Health Services and was processed and simplified by Shokri et al. \cite{shokri2017membership}. It contains the 100 most frequent procedures that patients underwent. The dataset has $67,330$ records with $6,170$ binary features of patients, such as the corresponding patient's symptoms and genetic information. The classification task is to predict which one of the $100$ procedures the patient corresponding to an input record underwent.
\subsection{Model Architectures}
\noindent\textbf{Wide ResNet-28:} For CIFAR 10, we used the same model architecture as in a previous study \cite{choo2020label}, i.e., Wide ResNet-28.
\vspace{1em}\noindent\textbf{Purchase and Texas classifiers:} For Purchase 100 and Texas 100, we used fully connected NNs with Tanh activation functions. We used the same layer sizes as in a previous study \cite{nasr2018machine}, i.e., layer sizes $(1024, 512, 256, 128)$.
\subsection{Setting of MIA}
\label{subsection:Setting-of-MIA}
As in the previous studies of MIAs \cite{nasr2018machine,DBLP:conf/sp/NasrSH19}, we consider a strong setting where the attackers know the non-member dataset and a subset of the member dataset of the target model as prior knowledge.
(This subset of the member dataset does not contain the target data, of course).
This setting is called \textit{supervised inference} \cite{DBLP:conf/sp/NasrSH19}.
One may think that the supervised-inference setting is too strong to be realistic. However, the \textit{shadow model} technique \cite{shokri2017membership} allows an attacker to virtually achieve supervised inference \cite{DBLP:conf/sp/NasrSH19}. A shadow model is an ML model that an attacker trains to mimic the target ML model. The attacker then knows the training data of the shadow model, as in the supervised-inference setting, since the attacker trains it.
\subsection{MIAs for Evaluations}
We conducted comprehensive experiments for three types of MIA: black-box MIA with confidence score, black-box MIA with only labels, and white-box MIA.
\subsubsection{Black-box MIA with confidence score (BB w/score)}
These are attacks in which the attackers observe the confidence scores output by the target model. There are two sub-types of these attacks.
\vspace{1em}\noindent\textbf{NN-based attack:} This is a type of black-box MIA using an NN, called \textit{attack classifier} $A$. Specifically, the attacker knows a set of non-member data and a subset of member data as their prior knowledge, as mentioned in Section \ref{subsection:Setting-of-MIA}. They send these data to the target model and obtain their confidence scores as answers. Using these data, the answers, and the knowledge of whether these data are members, they train $A$. Finally, they infer the membership status of the target data by taking their label and confidence score as input to $A$.
There are two known NN-based attacks \cite{shokri2017membership,salem2019ml}. The difference between them is whether the attacker trains an attack classifier for each label class: the original attack by Shokri et al. \cite{shokri2017membership} uses one classifier per class, whereas a simplified attack by Salem et al. \cite{salem2019ml}, called \textit{ML Leaks Adversary 1}, uses a single attack classifier shared by all classes.
In our experiments, we executed the attack \textit{ML Leaks Adversary 1} \cite{salem2019ml} since it is simpler and ``has very similar membership inference'' \cite{salem2019ml} to that of Shokri et al. \cite{shokri2017membership}.
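As a toy illustration of the single-classifier variant, the sketch below trains a one-feature logistic attack classifier on the top confidence score. The member/non-member confidence distributions are synthetic assumptions chosen so that members look overconfident; real attacks use the full confidence vector and label.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic top-confidences: members of an overfitted model peak higher (assumption).
conf_member = rng.beta(8, 2, size=500)
conf_nonmem = rng.beta(4, 4, size=500)
x = np.concatenate([conf_member, conf_nonmem])
z = np.concatenate([np.ones(500), np.zeros(500)])   # 1 = member

# Attack classifier A: one-feature logistic regression fit by gradient descent.
w, b = 0.0, 0.0
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= 0.5 * np.mean((p - z) * x)
    b -= 0.5 * np.mean(p - z)

# p > 0.5 is equivalent to w*x + b > 0.
attack_acc = np.mean(((w * x + b) > 0) == z)
```

Any accuracy clearly above the 50\% random-guess baseline indicates membership leakage.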
\vspace{1em}\noindent\textbf{Metric-based attack:} This is a type of black-box MIA that directly uses the fact that the confidence score $F(x)$ of the target data $(x,y)$ differs depending on whether $(x,y)$ is a member. Specifically, an attacker computes a value $m=M(F(x),y)$, called a \textit{metric}, and infers $(x,y)$ as a member if $m$ satisfies a given condition (e.g., greater than a given threshold).
There are five known attacks of this type: \textit{Top 1}, \textit{correctness}, \textit{confidence}, \textit{entropy}, and \textit{m-entropy attacks} (Table \ref{tab:metric-based}),
where Top 1 was proposed in \cite{salem2019ml}, and the other four were proposed in \cite{song2020systematic} by generalizing or improving known metric-based attacks \cite{yeom2018privacy,DBLP:conf/ccs/SongSM19,salem2019ml,DBLP:conf/uss/LeinoF20,song2020systematic}.
In our experiments, we executed all five metric-based attacks \cite{salem2019ml,song2020systematic} mentioned above.
\begin{table}[t]
\centering
\begin{tabular}{l||l}
\hline
\rm Name & \rm Condition \\
\hline\hline
\rm Top 1 & $\max\limits_iF(x)_i\overset{?}{\ge} \tau$\\
\hline
\rm Correctness & $\mathop{\rm argmax}\limits_iF(x)_i\overset{?}{=}\mathop{\rm argmax}\limits_iy_i$\\
\hline
\rm Confidence & $F(x)_{\ell[y]} \overset{?}{\ge} \tau_{\ell[y]}$\\
\hline
\rm Entropy & $-\sum_i F(x)_i\log F(x)_i \overset{?}{\le} \tau_{\ell[y]}$\\
\hline
\rm Modified Entropy & $\begin{array}{l}
-(1-F(x)_{\ell[y]})\log F(x)_{\ell[y]}\\
~~~-\sum_{i\neq \ell[y]} F(x)_i\log (1-F(x)_i)
\overset{?}{\le} \tau_{\ell[y]}\\
\end{array}$\\
\hline
\end{tabular}
\caption{\rm Known metric-based attacks. Here, $F(x)_i$ and $y_i$ denote the $i$-th components of $F(x)$ and $y$, respectively, and $\ell[y]$ is the label corresponding to the one-hot vector $y$, that is, $\operatorname{argmax}_i y_i$. $\tau$ and $\tau_{\ell[y]}$ are thresholds determined by attackers.}
\label{tab:metric-based}
\end{table}
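The conditions in Table \ref{tab:metric-based} are cheap to evaluate from a single confidence vector. The sketch below follows the definitions of Song et al. \cite{song2020systematic} (in particular, the modified entropy uses $\log(1-F(x)_i)$ for the non-true classes); the function name and the example vector are illustrative.

```python
import numpy as np

def attack_metrics(p, label, eps=1e-12):
    """p: confidence vector F(x); label: true class l[y]."""
    others = np.delete(np.arange(len(p)), label)
    return {
        "top1":        p.max(),
        "correctness": int(p.argmax() == label),
        "confidence":  p[label],
        "entropy":     -np.sum(p * np.log(p + eps)),
        # modified entropy of Song et al.
        "m_entropy":   -(1 - p[label]) * np.log(p[label] + eps)
                       - np.sum(p[others] * np.log(1 - p[others] + eps)),
    }

m = attack_metrics(np.array([0.7, 0.2, 0.1]), label=0)
```

An attacker compares each value with a (per-class) threshold $\tau_{\ell[y]}$; e.g., a low modified entropy suggests membership.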
\subsubsection{Black-box MIA only with labels (BB label only)}
These are attacks in which attackers know only the predicted labels output by the target model, without knowing the confidence scores. We call such an MIA a \textit{label-only MIA}.
There are two known label-only attacks, \textit{boundary distance (BD)} and \textit{data augmentation} \cite{choo2020label}. We introduce only the former because it is stronger than the latter \cite{choo2020label}.
A BD attack computes the smallest adversarial perturbation $\Delta x$ satisfying $\mathop{\rm argmax}\limits_iF(x+\Delta x)_i\neq \mathop{\rm argmax}\limits_iy_i$ for the target data $(x,y)$. Here, $F(x+\Delta x)_i$ and $y_i$ are the $i$-th components of $F(x+\Delta x)$ and $y$, respectively. The attacker then infers that $x$ is a member if the $L_2$ norm of $\Delta x$ is larger than a predetermined threshold.
A BD attack is a black-box MIA if the adversarial perturbation is crafted with HopSkipJump \cite{chen2020hopskipjumpattack}. However, the attack becomes a white-box MIA if the Carlini-Wagner method \cite{DBLP:conf/sp/Carlini017} is used to craft the perturbation.
In our experiments, we executed the BD attack with HopSkipJump. This is because the attack accuracy of the BD attack based on HopSkipJump asymptotically equals that of the BD attack with Carlini-Wagner as the number of queries increases \cite{choo2020label}.
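For a linear binary classifier the smallest perturbation has a closed form, which makes the BD decision rule easy to state; HopSkipJump estimates this distance with label queries only, but the final thresholding step is the same. The model and threshold below are illustrative.

```python
import numpy as np

def boundary_distance(w, b, x):
    """Exact L2 norm of the smallest perturbation that flips sign(w.x + b);
    a stand-in for the query-based HopSkipJump estimate."""
    return abs(w @ x + b) / np.linalg.norm(w)

def bd_infer_member(w, b, x, tau):
    """Members tend to sit farther from the decision boundary."""
    return boundary_distance(w, b, x) > tau

w, b = np.array([3.0, 4.0]), 0.0
far  = np.array([1.0, 1.0])    # distance 7/5 = 1.4: inferred member
near = np.array([0.1, 0.1])    # distance 0.14: inferred non-member
```

The same rule applies unchanged when the distance comes from a black-box estimate instead of the closed form.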
\subsubsection{White-box MIA (WB)}
These are attacks in which the attacker can use not only the confidence score of the target data but also the model structure and model parameters of the target model.
Two white-box attacks have been proposed: the \textit{Nasr-Shokri-Houmansadr (NSH) attack} \cite{DBLP:conf/sp/NasrSH19} and \textit{Hui's attack} \cite{DBLP:conf/ndss/HuiYYBGC21}.
The NSH attack exploits the fact that the gradient with respect to the model parameters of the target model $F$ on $(x,y)$ becomes smaller if $(x,y)$ is a member of $F$'s training data. Specifically, an attacker computes the gradient of $F$ on the target data and infers the membership of $(x,y)$ by inputting the gradient, together with the confidence score and the class label, into an NN trained by the attacker.
Hui's attack focuses mainly on reducing the assumptions behind the NSH attack; it can be executed without assuming that the attacker has member data as prior knowledge \cite{shokri2017membership}.
In our experiments, we executed only the NSH attack, since our assumption was stronger than Hui's: the attacker has member data as prior knowledge (as mentioned in Section \ref{subsection:Setting-of-MIA}).
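The membership signal the NSH attack extracts can be reproduced in a toy setting: a model that memorizes random labels has much smaller per-example gradients on its members than on fresh data. The sketch below uses logistic regression, whose per-example gradient is $(\sigma(w^{\top}x)-y)\,x$; all sizes and the memorization setup are illustrative.

```python
import numpy as np

def grad_norm(w, x, y):
    """L2 norm of the per-example cross-entropy gradient for logistic regression."""
    p = 1.0 / (1.0 + np.exp(-(w @ x)))
    return np.linalg.norm((p - y) * x)

rng = np.random.default_rng(0)
d, n = 50, 20
X_mem = rng.normal(size=(n, d))
y_mem = rng.integers(0, 2, size=n).astype(float)   # random labels -> pure memorisation

w = np.zeros(d)
for _ in range(3000):                              # overfit the member set
    p = 1.0 / (1.0 + np.exp(-(X_mem @ w)))
    w -= 0.5 * X_mem.T @ (p - y_mem) / n

X_non = rng.normal(size=(n, d))
y_non = rng.integers(0, 2, size=n).astype(float)

g_mem = np.mean([grad_norm(w, x, y) for x, y in zip(X_mem, y_mem)])
g_non = np.mean([grad_norm(w, x, y) for x, y in zip(X_non, y_non)])
```

In the actual attack, this gradient (together with the confidence score and class label) is fed into an NN that learns to separate the two distributions.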
\subsection{Known Defenses}
\label{subsection:Known_Defenses}
Known defenses can be categorized into the following three types.
We chose the best defense from all three types for comparison with our method.
\vspace{1em}\noindent\textbf{Regularization-based methods:}
These methods use the fact that the regularization techniques of ML models mitigate overfitting, one of the main reasons behind the MIA risk \cite{yeom2018privacy}. Regularization techniques, such as $L_2$-regularization, dropout \cite{DBLP:journals/jmlr/SrivastavaHKSS14}, and early-stopping, also mitigate the MIA risk, as pointed out by Nasr et al. \cite{nasr2018machine}, Shokri et al. \cite{shokri2017membership}, and Song et al. \cite{song2020systematic}, respectively.
Meanwhile, \textit{Adversarial Regularization} (AdvReg) \cite{nasr2018machine} is a regularization designed specifically to mitigate MIAs.
For our experiments, we chose AdvReg from this type of defense since it mitigates the MIA risk best.
AdvReg is based on a game-theoretic framework similar to GANs \cite{HayesMDC19LOGAN}. Specifically, we alternately train the model $F$ we want to protect and a pseudo attacker $A$. The aim of $A$ is to distinguish member data from non-member data. It corresponds to the discriminator of a GAN, and the gain of $A$ is added to the loss of $F$ as a regularization term.
\vspace{1em}\noindent\textbf{AX (Adversarial eXample)-based method:}
This method exploits an AX \cite{DBLP:journals/corr/SzegedyZSBEGF13} to mitigate the MIA risk, where an AX is an input crafted to deceive an ML model by adding small noise.
We used \textit{MemGuard} \cite{jia2019memguard} in our experiments.
MemGuard adds AX noise to the output of $F$, which we want to protect. Then, an attacker who uses an NN to attack $F$ is deceived by the noise and cannot accurately determine the membership of the target data.
\vspace{1em}\noindent\textbf{KT (Knowledge Transfer)-based methods:}
These methods exploit KT to mitigate the MIA risk. Here, KT means knowledge distillation (explained in Section \ref{subsection: Idea}) or its variants. There are three known KT-based methods: \textit{DMP} \cite{shejwalkar2021membership}, \textit{PATE} \cite{DBLP:conf/iclr/PapernotAEGT17}, and an improved variant of PATE, \textit{PATE with confident-GNMax} \cite{DBLP:conf/iclr/PapernotSMRTE18}. We used DMP and PATE with confident-GNMax in our experiments for image data. However, we used only DMP for tabular data because PATE with confident-GNMax requires GANs.
Details on DMP have already been given in Section \ref{subsec:DMP}. Meanwhile, PATE trains multiple teacher models with \textit{disjoint} subsets of private training data, gives public data hard labels chosen by noisy voting among the teachers, and finally trains a student model using the labeled public data. The noisy voting mechanism provides differential privacy guarantees with respect to the training data. Confident-GNMax is a new noisy aggregation method for improving the original PATE.
To achieve a smaller privacy budget $\varepsilon$, instead of labeling all public data, it selects the samples among the public data to be labeled by checking whether the result of a noisy plurality vote crosses a threshold. Once the threshold and noise parameters are determined, $\varepsilon$ can be computed. We train a student model using semi-supervised learning with GANs \cite{NIPS2016_8a3363ab}.
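The selection step of confident-GNMax can be sketched in a few lines; the threshold and noise scales below are placeholders, and the moments accounting that converts them into $(\varepsilon,\delta)$ is omitted.

```python
import numpy as np

def confident_gnmax(votes, T, sigma1, sigma2, rng):
    """votes: per-class teacher vote counts for one public sample.
    Returns a noisy label only when the noisy plurality crosses threshold T."""
    if votes.max() + rng.normal(0, sigma1) >= T:              # confidence check
        return int(np.argmax(votes + rng.normal(0, sigma2, size=votes.size)))
    return None                                               # sample left unlabeled

rng = np.random.default_rng(0)
consensus = np.array([23, 1, 1])   # 25 teachers, strong agreement
split     = np.array([9, 8, 8])    # no clear plurality
lab1 = confident_gnmax(consensus, T=20, sigma1=2.0, sigma2=2.0, rng=rng)
lab2 = confident_gnmax(split,     T=20, sigma1=2.0, sigma2=2.0, rng=rng)
```

Only confidently labeled samples reach the student, which is why tighter privacy budgets leave a smaller fraction of the public data labeled.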
\subsection{ML Setups and Hyperparameter Choosing}
\subsubsection{ML setups}
\label{subsection:ML Setups}
\label{app:datasplit}
In all experiments, we used a batch size of 64, the SGD optimizer with a momentum of $0.9$ and weight decay of $10^{-5}$, and the ReduceLROnPlateau scheduler with default hyperparameters.
The model that recorded the best validation accuracy in five trials was evaluated to test the accuracy and risks against the four types of MIAs.
We conducted all experiments using the PyTorch 1.7 framework on a Tesla V100 GPU with 32-GB memory.
\begin{table*}[t]
\centering
\begin{tabular}{|l||r|r|r|r|r|r|r|r|}
\hline
\multicolumn{1}{|c||}{ } & \multicolumn{3}{c|}{\rm Train.} & \multicolumn{1}{c|}{\rm Ref.} & \multicolumn{1}{c|}{\rm Val.} & \multicolumn{3}{c|}{\rm Test.} \\ \cline{1-4}\cline{7-9}
\multicolumn{1}{|c||}{\rm Dataset} & \rm All & \rm Known & \rm Target & & & \rm All & \rm Known & \multicolumn{1}{c|}{\rm Target} \\ \hline
\multicolumn{1}{|c||}{\rm Purchase} & \rm 10000 & \rm 5000 & \rm 2500 & \rm 10000 & \rm 5000 & \rm 5000 & \rm 2500 & \multicolumn{1}{c|}{\rm 2500} \\ \hline
\multicolumn{1}{|c||}{\rm Texas} & \rm 10000 & \rm 5000 & \rm 2500 & \rm 10000 & \rm 5000 & \rm 5000 & \rm 2500 & \multicolumn{1}{c|}{\rm 2500} \\ \hline
\multicolumn{1}{|c||}{\rm CIFAR10} & \rm 25000 & \rm 12500 & \rm 2500 & \rm 25000 & \rm 5000 & \rm 5000 & \rm 2500 & \multicolumn{1}{c|}{\rm 2500} \\ \hline
\end{tabular}
\caption{Dataset splits. \rm ``All'': All data used to train or test, ``Known'': Known data that attacker can exploit to execute MIA, ``Target'': Target data for which attacker attempts to infer membership.}\label{table:dataset_splits}
\end{table*}
Table \ref{table:dataset_splits} shows how we split the above datasets in our experiments.
Here, the \textit{validation dataset} is a dataset, disjoint from the training dataset, used to select the best model parameters in our experiments.
Following the previous studies \cite{nasr2018machine,DBLP:conf/sp/NasrSH19}, we considered strong attackers who know the non-member dataset and a subset of the member dataset of the target model as their prior knowledge (see Section \ref{subsection:Setting-of-MIA}). We used the rest of the training/testing data as the target data to execute an MIA. The amounts of known data and target data are also depicted in Table~\ref{table:dataset_splits}.
\subsubsection{Hyperparameter tuning}\label{app:tuning}
\noindent\textbf{Unprotected, AdvReg, MemGuard, DMP, and KCD:}
Using Optuna \cite{Optuna},
we optimized hyperparameters for each scheme.
\begin{itemize}
\item For unprotected models, we chose learning rates that maximize validation accuracies.
\item For AdvReg, MemGuard, DMP, and KCD, we tuned the learning rates and their specific parameters, i.e., the penalty parameter $\lambda$ (AdvReg), the learning rate $\beta$ of a pseudo attacker and the weights $c_2$, $c_3$ of the loss function (MemGuard), the size of the public reference dataset\footnote{There are three privacy-utility trade-off hyperparameters depicted in the DMP paper \cite{shejwalkar2021membership}: temperature, entropy criterion, and the number of reference data, as explained in Section \ref{subsec:DMP}. We chose the number of reference data for our experiments since it shows the best trade-off in our environment.} (DMP), and the intensity $\alpha$ of the distillation in Algorithm \ref{alg:MDMP} (KCD), respectively.
We optimized their hyperparameters toward a high validation accuracy and low MIA risk.
\end{itemize}
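The tuning loop itself is simple; the sketch below mimics Optuna's suggest API with plain random search and uses hypothetical stand-in functions for validation accuracy and MIA attack advantage, so the objective (high accuracy, low risk) is visible without any dependency.

```python
import random

def evaluate(lr, alpha):
    """Hypothetical stand-ins for validation accuracy and MIA attack advantage
    (advantage over the 50% random-guess baseline); not real measurements."""
    val_acc = 0.9 - 0.5 * abs(lr - 0.01) - 0.1 * alpha
    mia_adv = 0.3 * (1.0 - alpha)
    return val_acc, mia_adv

def tune(n_trials=200, seed=0):
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        lr = 10 ** rng.uniform(-4, -1)       # log-uniform, like suggest_float(..., log=True)
        alpha = rng.uniform(0.0, 1.0)        # KCD's distillation intensity
        acc, adv = evaluate(lr, alpha)
        score = acc - adv                    # reward accuracy, penalise MIA risk
        if score > best_score:
            best_params, best_score = (lr, alpha), score
    return best_params, best_score

(best_lr, best_alpha), best = tune()
```

With Optuna, `lr` and `alpha` would instead come from `trial.suggest_float("lr", 1e-4, 1e-1, log=True)` and `trial.suggest_float("alpha", 0.0, 1.0)` inside an objective maximized by `study.optimize`.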
The hyperparameters of the defenses were basically chosen so that they achieve almost the same accuracy as the unprotected model together with a considerably low MIA risk, except for defenses whose accuracy inevitably drops for any hyperparameter choice that yields a low MIA risk.
\begin{itemize}
\item In Tables \ref{table:comparison-purchase}, \ref{table:comparison-texas}, and \ref{table:comparison-CIFAR10},
we chose hyperparameters for AdvReg that enable a better privacy-utility trade-off (i.e., a relatively small drop in validation accuracy and mid-level MIA resistance).
This is because matching the validation accuracy of the unprotected model leaves the MIA risk as high as that of an unprotected model, whereas pushing the MIA risk down to a random guess deteriorates the validation accuracy.
\item For MemGuard, we fixed $\varepsilon=1.0$ and tuned the other parameters toward a low MIA risk.
\item The hyperparameters of DMP were chosen to replicate the performance of the original paper \cite{shejwalkar2021membership}.
\item For KCD, we chose a model whose validation accuracy is close to that of DMP.
\end{itemize}
\noindent\textbf{PATE:}
For PATE, we trained four ensemble sizes, i.e., 3, 5, 10, and 25 teachers, and selected five different privacy levels $(\varepsilon,\delta)$ for each ensemble (where $\delta$ is fixed to $10^{-4}$, as the order of the size of the public reference dataset is $10^4$ \cite{DBLP:conf/iclr/PapernotSMRTE18}). Since our interest is empirical MIA resistance, not DP guarantees, we chose various values for $\varepsilon$. For example, for three teachers, we chose $\varepsilon=229, 1473, 6291, 36849, 83535, 141923$ (these cannot be round numbers because they are computed automatically after we choose the thresholds and noise parameters). $\varepsilon=141923$ was the minimum value that maximized the validation accuracy (i.e., corresponding to the non-private case), and $\varepsilon=229$ was the minimum value that provided enough labeled public data to train a student. Using Optuna, we optimized the learning rates toward a high validation accuracy for each $\varepsilon$.
In Table 5, we chose the hyperparameters ``$3$ teachers, $\varepsilon=141923$, $\delta=10^{-4}$,'' which maximized the validation accuracy because all the trained models had almost the same MIA risks.
\subsubsection{Choice of loss function}\label{app:loss}
The loss functions for most of the defenses were chosen from the original studies.
The exception is DMP with synthetic reference data \cite{shejwalkar2021membership}; we chose the mean squared error (MSE) as a loss function since ``synthetic'' DMP with this loss function performed better than the original loss function, KL divergence, in our experiments.
We chose a suitable loss function on the basis of known facts about distillation: the KL loss at a high temperature $T$ asymptotically approaches the MSE (the case $T=1$ is the standard KL loss), and which of the two performs well is an empirical question \cite{hinton2015distil}.
Therefore, we examined the KL divergence loss at various $T$ for ``synthetic'' DMP and found that a higher $T$ leads to better performance and that MSE loss is the best.
By doing similar experiments for our defense, KCD, we determined the suitable loss to be the MSE for the Purchase100 and CIFAR10 datasets and KL divergence with $T=1$ for the Texas100 dataset.
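The cited asymptotic fact can be checked numerically: with zero-mean logits, the $T^{2}$-scaled KL distillation loss tends to $\sum_i (t_i-s_i)^2/(2N)$ as $T\to\infty$. The logits below are arbitrary illustrative values.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def kd_kl(t_logits, s_logits, T):
    """Distillation loss: T^2 * KL(softmax(t/T) || softmax(s/T))."""
    p, q = softmax(t_logits / T), softmax(s_logits / T)
    return T ** 2 * np.sum(p * np.log(p / q))

t = np.array([1.0, -1.0, 0.0])     # zero-mean teacher logits
s = np.array([0.5, 0.0, -0.5])     # zero-mean student logits

mse_limit = np.sum((t - s) ** 2) / (2 * len(t))   # high-temperature limit
```

At $T=1$ the same function is the standard KL loss, so the temperature interpolates between the two regimes compared in our loss-selection experiments.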
\subsubsection{Notes on implementation of DMP}
The published code\footnote{\url{https://github.com/vrt1shjwlkr/AAAI21-MIA-Defense}} does not include reference data selection but nonetheless achieved good results.
Therefore, we did not implement entropy-based criteria for DMP in our experiments.
For DMP with synthetic reference data, we trained an (unconditional) DCGAN as in the original study \cite{shejwalkar2021membership}.
We trained it to generate images in accordance with the implementation of the PyTorch examples\footnote{\url{https://github.com/pytorch/examples/blob/master/dcgan/main.py}}.
Since the resulting images (Figure~\ref{fig:generated_cifar10}) were natural and showed large diversity, we considered them to be sufficient for the reference dataset of DMP.
\section{Experimental Results}
\label{sec:experiment}
\subsection{Tabular Dataset}
\label{subsection:Tabular dataset}
Tables \ref{table:comparison-purchase} and \ref{table:comparison-texas} show the accuracies and MIA attack accuracies of our KCD and known defenses for two tabular datasets, Purchase100 and Texas100.
Here, KCD is compared with the best defenses chosen in Section \ref{subsection:Known_Defenses} from each of the three categories described in the same section.
We stress that one can succeed in an MIA with a probability of 50\% by random guessing. Hence, the baseline of the attack accuracies of these tables is 50\%.
Note that the values for the attack accuracy of MemGuard on these tables are much higher than the values reported in the original paper \cite{jia2019memguard}. This is because the setting we consider, described in Section \ref{subsection:Setting-of-MIA}, is more advantageous for attackers than that of \cite{jia2019memguard}.
Figure \ref{fig:num_ref_DMP} shows the privacy-utility trade-off of KCD and DMP. The results of our experiments for the tabular datasets Purchase100 and Texas100 are summarized as follows.
\begin{enumerate}
\item Tables \ref{table:comparison-purchase} and \ref{table:comparison-texas} show that KCD was \textit{much better} than the known defenses that also do not use a public reference dataset, AdvReg and MemGuard, in all of the three categories of MIAs, the black-box MIA with confidence score \cite{shokri2017membership, salem2019ml, song2020systematic}, the label-only MIA \cite{choo2020label}, and the white-box MIA \cite{DBLP:conf/sp/NasrSH19}.
For Purchase100, for instance, the testing accuracy of KCD was 11.5\% higher than that of AdvReg and its attack accuracy was 13.3\% smaller than that of MemGuard for ``BB w/score'' attacks.
\item Surprisingly, Tables \ref{table:comparison-purchase} and \ref{table:comparison-texas} also show that, in both privacy and utility senses and for all of the three categories of MIAs, \textit{KCD is comparable to the state-of-the-art MIA defense, DMP, with public reference data, although KCD does not use public reference data.}
As mentioned in Section \ref{sec:introduction}, the availability of public data is not guaranteed \cite{shejwalkar2021membership}.
The above results show that KCD could avoid this problem
without sacrificing privacy or utility in these experiments.
\item Figure \ref{fig:num_ref_DMP} shows that, for Purchase100 and for the ``BB w/score'' attack, \textit{the privacy-utility trade-off of KCD was also comparable to that of the state-of-the-art MIA defense requiring public reference data, DMP}.
We also executed similar experiments for ``label only'' and ``WB.'' They showed that the privacy-utility trade-off of KCD was comparable to that of DMP for these two types of attacks as well.
\end{enumerate}
\subsection{Image Dataset}
Our experiments for the image dataset CIFAR10 were conducted in a similar manner as the above experiments. We additionally compared KCD with two more defenses. The first was PATE (with confident-GNMax) \cite{DBLP:conf/iclr/PapernotSMRTE18}. The second was DMP with synthetic reference data \cite{shejwalkar2021membership} generated using deep convolutional GANs (DCGANs) \cite{dcgan}. Note that these two defenses were not used in our experiments with the tabular datasets because they use GANs.
The results of these experiments are summarized as follows.
\begin{enumerate}
\item Table \ref{table:comparison-CIFAR10}
and Figure \ref{fig:trade-offs} show that the privacy-utility trade-off of KCD was \textit{much better} than that of the known defenses without public reference data, AdvReg and MemGuard, and DMP with synthetic reference data for the ``BB w/score'' attack.
We also executed similar experiments for ``label only'' and ``WB.'' They showed that the privacy-utility trade-off of KCD is much better than the known defenses without public reference data as well.
\item Table \ref{table:comparison-CIFAR10} also shows that KCD was comparable to DMP and performed much better than PATE in terms of testing accuracy. DMP and PATE were better in terms of privacy, but KCD is better in the sense that it does not require public reference data.
\end{enumerate}
\section{Discussions and Limitations}
\subsection{Discussions}
\label{subsec:Discussions}
\noindent\textbf{Best number $n$ of teacher models:} Figure~\ref{fig:num_split_proposed} shows
\begin{figure} \includegraphics[width=8cm]{split_revised.png}
\caption{\bf{Effect of number $n$ of teacher models on performance of our KCD for CIFAR10.}
\rm
We examined $n=2,3,5,7,9,11,13$, and $15$ (a larger point means a larger $n$).
Note that the points for $n=11$ and $13$ are too close to distinguish.
}\label{fig:num_split_proposed}
\end{figure}
\noindent the performance of our KCD for various numbers $n$ of teacher models on CIFAR10. Generally, a larger number of teacher models implies better privacy and utility, and KCD performed best at $n=15$ in our experiments.
The computational cost of KCD is greater than those of DMP and PATE but less than those of some of the other defense methods, such as AdvReg. A large computational cost may limit applications for training large models with limited computational resources. However, we believe that the advantage of KCD, ``public reference dataset not necessary,'' makes other applications possible.
We stress that, for inferring, KCD incurs only the same computational cost as an unprotected target model, unlike MemGuard.
\vspace{1em}\noindent\textbf{Comparison with na\"{i}ve ideas:}
To clarify the effect of our ``knowledge cross-distillation'' idea for KCD in terms of privacy and utility, we compared KCD with two na\"{i}ve improvements to DMP to make it ``without reference data.''
The first na\"{i}ve improvement, \textit{``splitting'' DMP}, is as follows. Split the training dataset into two disjoint parts, where the former and latter parts contain $(100-\theta)\%$ and $\theta\%$ of the training data, respectively. Then, train the teacher model using the former part as a training dataset and train the student model through distillation by using the latter part as a reference dataset.
The second na\"{i}ve improvement, \textit{``reusing'' DMP}, is as follows. Train a teacher model using all of the training data, take a subset containing $\theta\%$ of training data, and reuse this subset as the reference dataset to train a student model.
Figure \ref{fig:naive-DMPvsKCD} shows that, for the CIFAR10 dataset, the privacy-utility trade-off of our KCD was better than those of these two variants of DMP in our experiments.
Our KCD contains two ideas, ``splitting training dataset'' and ``reusing training data for reference data.'' The above result shows that the performance of our KCD is achieved only when both of these ideas are used, and it cannot be achieved with only one of these ideas.
\subsection{Limitations}
\textbf{Duplication in dataset:}
If certain data appear twice in the training dataset, KCD cannot ensure defense against MIAs for such a pair of data. In fact, the defense against MIAs as depicted in Algorithm \ref{alg:MDMP} is ensured because inputs $x\in D_i$ to $F_i$ are not contained in the dataset $D\setminus D_i$ used in the training of $F_i$. However, this is not the case when duplicate copies of the same data fall into both $D_i$ and $D\setminus D_i$.
Similarly, a training dataset that contains two pieces of data that are not the same but very similar would affect the privacy-utility trade-off of KCD.
Investigating and solving this is for future work.
\vspace{1em}\noindent\textbf{Outlier data, imbalanced dataset:} Long et al.\ \cite{DBLP:journals/corr/abs-1802-04889,DBLP:conf/eurosp/LongWBB0TGC20} showed that an ML model became weaker against MIAs when the target data were outliers or selected carefully by an attacker, even if the ML model was well-generalized.
We selected the target data uniformly at random in our experiments.
Hence, KCD, as well as other known defense methods, may have weak MIA resistance against carefully selected data.
Truex et al. \cite{DBLP:conf/tpsisa/TruexLGW019} showed that MIAs against minority classes of imbalanced data were more likely to be successful. Here, imbalanced data means a dataset with skewed class proportions. Minority classes mean the classes that make up a smaller proportion. Hence, KCD, as well as other defense methods, may also have weak protection against MIAs in this case.
\section{Conclusion}
We proposed a new defense against MIAs, \textit{knowledge cross-distillation (KCD)}, which does not require any public or synthetic reference data to protect ML models, unlike the state-of-the-art defense, DMP.
Our experiments showed that the privacy protection and accuracy of our defense were comparable to those of DMP for the tabular datasets Purchase100 and Texas100, and our defense had a much better privacy-utility trade-off than those of the existing defenses for the CIFAR10 image dataset.
Our defense is a feasible method for protecting the privacy of ML models in areas where public reference data are scarce.
Future work includes ensuring the privacy of duplicated or similar data in a dataset, investigating privacy for outlier and/or imbalanced data, and guaranteeing the privacy of KCD theoretically.
\section*{Acknowledgments}
We would like to thank Reza Shokri, Tomoyuki Yoshiyama, and Kazuya Kakizaki for useful comments. This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
\section{Appendix}
\label{section:Missing_Details_of_Experimental_Results}
\begin{table}[h]
\begin{tabular}{l|l|l||l|l|l|l|l|l}
\hline
\multicolumn{1}{|c|}{ }& \rm Category & \rm Defense&\rm \!\!Leaks~1~\cite{salem2019ml}\!\! & \rm \!\!Top~1~\cite{salem2019ml}\!\!& \rm \!\!Corr.~\cite{song2020systematic}\!\!& \rm \!\!Conf.~\cite{song2020systematic}\!\!& \rm \!\!Entr.~\cite{song2020systematic}\!\!& \multicolumn{1}{c|}{\rm \!\!m-Entr.~\cite{song2020systematic}\!\!}\\\hline\hline
\multicolumn{1}{|c|}{\multirow{3}{*}{\rm ${}^\nexists$Public Ref.}}& \rm Reg-based& \rm AdvReg \cite{nasr2018machine}& \rm 57.0\% & \rm 56.8\% & \rm 58.9\% & \rm 59.9\% & \rm 55.3\% & \multicolumn{1}{c|}{\rm 59.7\% }\\
\cline{2-9}
\multicolumn{1}{|c|}{ }& \rm AX-based& \rm MemGuard \cite{jia2019memguard}& \rm 66.6\% & \rm 71.9\% & \rm 61.3\% & \rm 72.1\% & \rm 70.1\% & \multicolumn{1}{c|}{\rm 72.1\% } \\
\cline{2-9}
\multicolumn{1}{|c|}{ }& \rm KT-based& \bf KCD& \rm 54.9\% & \rm 55.0\% & \rm 58.8\% & \rm 57.0\% & \rm 53.7\% & \multicolumn{1}{c|}{\rm 57.3\% }\\
\hline
\multicolumn{1}{|c|}{\rm ${}^\exists$Public Ref.}& \rm KT-based& \rm DMP~\cite{shejwalkar2021membership}\quad\quad\quad\quad\quad\quad& \rm 53.9\% & \rm 53.9\% & \rm 57.1\% & \rm 55.8\% & \rm 52.8\% & \multicolumn{1}{c|}{\rm 55.7\% }\\
\hline\hline
\multicolumn{3}{|c||}{\rm Unprotected} & \rm 72.8\% & \rm 72.0\% & \rm 61.3\% & \rm 73.6\% & \rm 71.2\% & \multicolumn{1}{c|}{\rm 73.7\% }\\
\hline
\end{tabular}
\caption{\centering BB attacks with confidence scores on Purchase100}
\label{table:BB-purchase100}
\begin{tabular}{l|l|l||l|l|l|l|l|l}
\hline
\multicolumn{1}{|c|}{ }& \rm Category & \rm Defense&\rm \!\!Leaks~1~\cite{salem2019ml}\!\! & \rm \!\!Top~1~\cite{salem2019ml}\!\!& \rm \!\!Corr.~\cite{song2020systematic}\!\!& \rm \!\!Conf.~\cite{song2020systematic}\!\!& \rm \!\!Entr.~\cite{song2020systematic}\!\!& \multicolumn{1}{c|}{\rm \!\!m-Entr.~\cite{song2020systematic}\!\!}\\\hline\hline
\multicolumn{1}{|c|}{\multirow{3}{*}{\rm ${}^\nexists$Public Ref.}}& \rm Reg-based& \rm AdvReg \cite{nasr2018machine}& \rm 52.5\% & \rm 52.1\% & \rm 56.7\% & \rm 58.8\% & \rm 53.2\% & \multicolumn{1}{c|}{\rm 59.5\% }\\
\cline{2-9}
\multicolumn{1}{|c|}{ }& \rm AX-based& \rm MemGuard \cite{jia2019memguard}& \rm 57.7\% & \rm 58.0\% & \rm 68.6\% & \rm 68.2\% & \rm 57.7\% & \multicolumn{1}{c|}{\rm 68.2\% } \\
\cline{2-9}
\multicolumn{1}{|c|}{ }& \rm KT-based& \bf KCD& \rm 54.8\% & \rm 54.9\% & \rm 53.1\% & \rm 56.2\% & \rm 54.8\% & \multicolumn{1}{c|}{\rm 55.4\% }\\
\hline
\multicolumn{1}{|c|}{\rm ${}^\exists$Public Ref.}& \rm KT-based& \rm DMP \cite{shejwalkar2021membership}\quad\quad\quad\quad\quad\quad& \rm 51.2\% & \rm 51.5\% & \rm 56.1\% & \rm 56.3\% & \rm 51.0\% & \multicolumn{1}{c|}{\rm 56.1\% }\\
\hline\hline
\multicolumn{3}{|c||}{\rm Unprotected} & \rm 58.8\% & \rm 58.6\% & \rm 68.6\% & \rm 69.7\% & \rm 59.4\% & \multicolumn{1}{c|}{\rm 69.9\% }\\
\hline
\end{tabular}
\caption{\centering BB attacks with confidence scores on Texas100}
\label{table:BB-texas100}
\begin{tabular}{l|l|l||l|l|l|l|l|l}
\hline
\multicolumn{1}{|c|}{ }& \rm Category & \rm Defense&\rm \!\!Leaks~1~\cite{salem2019ml}\!\! & \rm \!\!Top~1~\cite{salem2019ml}\!\!& \rm \!\!Corr.~\cite{song2020systematic}\!\!& \rm \!\!Conf.~\cite{song2020systematic}\!\!& \rm \!\!Entr.~\cite{song2020systematic}\!\!& \multicolumn{1}{c|}{\rm \!\!m-Entr.~\cite{song2020systematic}\!\!}\\
\hline\hline
\multicolumn{1}{|c|}{\multirow{4}{*}{\rm ${}^\nexists$Public Ref.}}& \rm Reg-based& \rm AdvReg \cite{nasr2018machine}& \rm 53.1\% & \rm 52.7\% & \rm 54.6\% & \rm 54.6\% & \rm 51.9\% & \multicolumn{1}{c|}{\rm 54.6\% }\\
\cline{2-9}
\multicolumn{1}{|c|}{ }& \rm AX-based& \rm MemGuard \cite{jia2019memguard}& \rm 63.0\% & \rm 63.4\% & \rm 58.6\% & \rm 64.3\% & \rm 63.1\% & \multicolumn{1}{c|}{\rm 64.3\% } \\
\cline{2-9}
\multicolumn{1}{|c|}{ }& \multicolumn{1}{c|}{\multirow{2}{*}{\rm KT-based}}& \rm DMP \cite{shejwalkar2021membership}(synth. ref.)& \rm 51.0\% & \rm 51.2\% & \rm 52.5\% & \rm 51.8\% & \rm 50.3\% & \multicolumn{1}{c|}{\rm 52.0\% }\\
\cline{3-9}
\multicolumn{1}{|c|}{ }& \rm & \bf KCD& \rm 52.1\% & \rm 52.2\% & \rm 55.6\% & \rm 55.3\% & \rm 51.3\% & \multicolumn{1}{c|}{\rm 55.8\% }\\
\hline
\multicolumn{1}{|c|}{\multirow{2}{*}{\rm ${}^\exists$Public Ref.}}& \multicolumn{1}{c|}{\multirow{2}{*}{\rm KT-based}}& \rm DMP \cite{shejwalkar2021membership}& \rm 50.8\% & \rm 50.7\% & \rm 50.7\% & \rm 50.4\% & \rm 51.1\% & \multicolumn{1}{c|}{\rm 50.2\%}\\
\cline{3-9}
\multicolumn{1}{|c|}{ }& & \rm PATE \cite{DBLP:conf/iclr/PapernotSMRTE18}& \rm 50.0\% &\rm 49.8\% &\rm 50.5\% &\rm 50.4\% &\rm 50.0\% & \multicolumn{1}{c|}{\rm 51.2\%}\\
\hline\hline
\multicolumn{3}{|c||}{\rm Unprotected}& \rm 64.2\% & \rm 63.8\% & \rm 58.6\% & \rm 65.6\% & \rm 63.9\% & \multicolumn{1}{c|}{\rm 65.9\% }\\
\hline
\end{tabular}
\caption{\centering BB attacks with confidence scores on CIFAR10}
\label{table:BB-cifar10}
The above tables show the attack accuracies of each black-box MIA with confidence scores. ``Leaks 1'' means ML Leaks Adversary 1 \cite{salem2019ml}. ``Top 1,'' ``Corr.,'' ``Conf.,'' ``Entr.,'' ``m-Entr.,'' mean five metric-based attacks \cite{salem2019ml,song2020systematic}, \textit{Top 1}, \textit{correctness}, \textit{confidence}, \textit{entropy}, and \textit{m-entropy attacks}, respectively.
\end{table}
\begin{figure}
\centering
\begin{minipage}[t]{0.50\linewidth}
\includegraphics[width=8cm]{generated_cifar10.png}
\caption{Images generated by (unconditional) DCGAN}\label{fig:generated_cifar10}
\end{minipage}
\end{figure}
Document language modelling is a crucial component in statistical approaches to information retrieval \cite{ponte98,hiemstra2001,zhai04}. These types of approach are generative in nature, in so far as they aim to estimate a model of the document generation process. Often these generative document models are simple unigram mixture models that incorporate information from the background collection as well as information from a specific document. Thus, the main challenge is one of parameter estimation and consequently various smoothing approaches have been studied \cite{zhai04} that aim to better estimate the parameters of the document models given a collection of documents. In information retrieval, once the document models have been estimated, documents can be ranked by the likelihood of their document model generating the query (i.e. the query-likelihood approach \cite{ponte98}).
Recently, document language models that exhibit a self-reinforcing property, via a multivariate P\'olya process, have been shown to significantly increase retrieval effectiveness \cite{cummins15a}. This approach captures word burstiness in a document-specific way. In this paper, we take this approach further and outline a more general process for statistical document modelling, one which encompasses a number of existing document language models as well as a number of novel variants.
In Section~2, we outline a general model of document generation in terms of an \emph{urn} scheme which crucially models the dynamics of document generation using a matrix ${\bf M}$. Section 3 discusses specific instantiations of ${\bf M}$, ones which lead to determining different generative distributions. Section 4 outlines how the new general model is used for document retrieval. Section 5 outlines our experimental set-up. The results of a number of experiments are reported in Section 6. Finally, in Section 7 we conclude with a discussion and outline some future work.
\section{Generalised P\'olya}
Let ${\bf u}_0$ be an urn initially containing $|{\bf u}_0|$ balls in total where each ball is one of $v$ distinct colours. Starting at time $i=0$, a ball is drawn with replacement from the urn and a number of additional balls (possibly of different colours) are added\footnote{such that the mass of the urn never decreases} to the urn according to a replacement matrix ${\bf M}$. Each row of this matrix determines the number of additional balls of each colour to add to the urn and the row is selected according to the colour of the ball drawn from the urn. Therefore, the dynamical nature of this random process is defined by this $v^2$ matrix ${\bf M}$.
Now, if $d = \{t_i\}$ is a sequence of observations indexed from $i=0$ to $i=|d|-1$ drawn from the urn, then the state of the urn can be described as a recurrent process as follows:
\begin{equation}
{\bf u}_{i+1} = {\bf u}_{i} + {\bf e}_{t_i} \cdot {\bf M}
\end{equation}
where ${\bf e}_{t_i}$ is a standard basis vector (a.k.a.\ a one-hot vector\footnote{a vector that is $1$ in dimension $t_i$ and $0$ elsewhere}). At a particular time $i$ and for an observation $t_i$, the process essentially selects row $t_i$ of the matrix ${\bf M}$ and then combines it with the $i^{th}$ state of the urn ${\bf u}_{i}$. This imbues the urn with a reinforcing property, and in specific cases results in the traditional P\'olya urn scheme. The generalised P\'olya process is thus defined by both the replacement matrix ${\bf M}$ and the initial parameters ${\bf u}_0$, and so the likelihood of seeing a specific sample $d$ can be written as $p(d|{\bf u}_0, {\bf M})$. One can interpret ${\bf u}_0$ as completely defining the probability of seeing a particular coloured ball on the first draw, and can interpret ${\bf M}$ as defining the dynamics of the urn. These types of model are well-suited to modelling natural language, where different coloured balls represent different word-types \cite{goldwater11} and where the number of balls of each colour in the urn is proportional to the probability of generating a specific word.
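The recurrence in Eq.~1 translates directly into code. A minimal sketch with a three-colour vocabulary (the particular ${\bf u}_0$ and ${\bf M}$ values are made up for illustration):

```python
def urn_step(u, M, t):
    """One draw with outcome t: u_{i+1} = u_i + e_t . M (Eq. 1); row t of M is added."""
    return [u_j + M[t][j] for j, u_j in enumerate(u)]

def sequence_likelihood(u0, M, seq):
    """p(d | u0, M): product of draw probabilities while the urn state is updated."""
    u, p = list(u0), 1.0
    for t in seq:
        p *= u[t] / sum(u)
        u = urn_step(u, M, t)
    return p

u0 = [2.0, 1.0, 1.0]                        # initial ball counts per colour
M  = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]      # identity: classic Polya reinforcement
print(sequence_likelihood(u0, M, [0, 0]))   # 2/4 * 3/5 = 0.3
```

With ${\bf M} = {\bf 0}$ the same function reduces to the multinomial likelihood of Section~3.1, since the urn state never changes between draws.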
\section{Choices of ${\bf M}$}
In this section, we outline four different variants of ${\bf M}$. The first two of these variants do not rely on estimating ${\bf M}$ from data and already appear in the literature \cite{zhai01,cummins15a} in one form or another. They have also been implemented and have resulted in increases in retrieval effectiveness and in advances of a theoretical nature. The latter two models have yet to be realised.
\subsection{Zero Matrix}
If we choose ${\bf M}$ to be the zero (denoted ${\bf 0}$) matrix, the initial state of the urn does not change during the process. As the drawing of a particular colour does not affect subsequent draws, choosing the zero matrix is equivalent to using a multinomial language model.
\begin{equation}
{\bf M} =
\begin{pmatrix}
0 & 0 & \cdots & 0 \\
0 & 0 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 0
\end{pmatrix}
\end{equation}
Estimating the maximum-likelihood parameters of ${\bf u}_{0}$ is trivial as they have closed-form solutions. Using this multinomial document language model (with different types of smoothing) is quite effective for retrieval \cite{zhai04}. In particular, it was shown that a document language model using Dirichlet prior smoothing is particularly effective.
\subsection{Identity Matrix}
If we choose ${\bf M}$ to be the identity matrix (denoted ${\bf 1}$), the state of the initial urn changes with a self-reinforcing property, as the drawing of a particular coloured ball only reinforces the urn with another copy of that particular colour. This process is equivalent to the multivariate P\'olya urn or Dirichlet-compound multinomial (DCM) and has been shown to generate text according to the power-law characteristics of natural language.
\begin{equation}
{\bf M} =
\begin{pmatrix}
1 & 0 & \cdots & 0 \\
0 & 1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 1
\end{pmatrix}
\end{equation}
This document language model has been implemented recently and has shown significant increases in retrieval effectiveness over the multinomial model \cite{cummins15a}. Furthermore, it has led to a number of theoretically interesting properties. It models word burstiness in a document-specific manner which in turn has led to a better understanding of both the scope and verbosity hypotheses \cite{cummins16}. This distribution contains only one extra parameter compared to the multinomial (a parameter which models the burstiness of terms). However, one of the weaknesses of the model is that is assumes that all terms are equally bursty.
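The burstiness induced by ${\bf M} = {\bf 1}$ can be seen in a tiny numeric check: with a uniform two-word urn, a repeated word is more likely than a mixed pair, whereas a multinomial would score both equally. (The two-word vocabulary is hypothetical.)

```python
def polya_seq_prob(u0, seq):
    """p(seq) under the multivariate Polya urn (M = identity): each draw adds one copy back."""
    u, p = list(u0), 1.0
    for t in seq:
        p *= u[t] / sum(u)
        u[t] += 1.0
    return p

uniform = [1.0, 1.0]
p_burst = polya_seq_prob(uniform, [0, 0])  # "aa": 1/2 * 2/3 = 1/3
p_mixed = polya_seq_prob(uniform, [0, 1])  # "ab": 1/2 * 1/3 = 1/6
print(p_burst > p_mixed)                   # repetition is cheaper than under a multinomial
```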
\subsection{Diagonal Matrix}
If we choose ${\bf M}$ to be any positive diagonal matrix, the state of the initial urn again changes with a self-reinforcing property, but this time the amount of reinforcement differs depending on the colour of the ball drawn. This process can capture term-specific burstiness, where certain words are more likely to repeat within a specific sample than others.
\begin{equation}
{\bf M} =
\begin{pmatrix}
m_{1,1} & 0 & \cdots & 0 \\
0 & m_{2,2} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & m_{v,v}
\end{pmatrix}
\end{equation}
In this case, we need to determine the $v$ parameters in this matrix. In this model, each term has an initial probability of occurring but also a specific parameter controlling its re-occurrence. We can set these extra parameters to some intuitively motivated values or can estimate them directly from data. The latter can be done using numerical optimisation or sampling methods. This is the model that is the focus of the experiments in this paper.
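A diagonal ${\bf M}$ only changes the urn update: drawing term $t$ adds $m_{t,t}$ copies of $t$ back. The sketch below (with made-up parameter values, not estimates from the paper) shows that a term with a larger diagonal entry is far more likely to repeat:

```python
def diag_polya_prob(u0, m_diag, seq):
    """p(seq) when M is diagonal (Section 3.3): drawing t adds m_diag[t] copies of t back."""
    u, p = list(u0), 1.0
    for t in seq:
        p *= u[t] / sum(u)
        u[t] += m_diag[t]
    return p

u0 = [1.0, 1.0]          # equal initial proportions for the two terms
m  = [5.0, 1.0]          # hypothetical burstiness: term 0 reinforces five times harder
p_bursty = diag_polya_prob(u0, m, [0, 0])  # 1/2 * 6/7
p_flat   = diag_polya_prob(u0, m, [1, 1])  # 1/2 * 2/3
print(p_bursty > p_flat)
```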
\subsection{Full Replacement Matrix}
If we choose ${\bf M}$ to be any matrix with the constraint that for any replacement row vector ${\bf m}_k$, $\sum^{v}_{j=1} {\bf m}_{kj} >0$, the state of the urn can change in all $v$ dimensions on any draw. The constraint is necessary to ensure that the process can continue ad infinitum (i.e. the urn is never left with less mass than it had previously).
\begin{equation}
{\bf M} =
\begin{pmatrix}
m_{1,1} & m_{1,2} & \cdots & m_{1,v} \\
m_{2,1} & m_{2,2} & \cdots & m_{2,v} \\
\vdots & \vdots & \ddots & \vdots \\
m_{v,1} & m_{v,2} & \cdots & m_{v,v}
\end{pmatrix}
\end{equation}
This model can capture dependencies between different word-types within documents. For example, it may be that the words \emph{dna} and \emph{blood} tend to occur in the same documents. While this model is theoretically interesting, the remainder of this paper is focussed on the model presented in Section~3.3.
\subsection{Discussion}
The general model outlined here (Eq.~1) is an intuitive statistical generative model of documents. The vector ${\bf u}_i$ can be seen as storing the state of the model at a particular time $i$. Both the multinomial and the multivariate P\'olya urn (SPUD \cite{cummins15a}) language model are specific instances of this model, instantiated by different settings of ${\bf M}$. Given that the SPUD language model significantly improves upon the multinomial model in information retrieval, the further extensions hold the promise of improved performance and of greater theoretical understanding. Furthermore, it is worth noting that the dependencies that the models\footnote{those from Section~3.2 onwards} capture span a greater distance than $n$-gram models (i.e. a word occurring at the start of a document affects the choice of word at the end of a document).
The main challenges to implementing the remaining model variants are in estimating ${\bf M}$ and ${\bf u}_0$ from a large background model (document collection ${\bf D}$) and subsequently in inferring the initial state of each document model. For large-scale collections this is a computationally expensive inverse problem. However, the upcoming section will outline some promising initial experiments with regard to the third variant of the general model (i.e. modelling term-specific burstiness for retrieval). The fourth variant is left for future work.
\section{Parameter Estimation}
This section is concerned with estimating from data the parameters of the general background language model ${\bf u}_{0}$ and ${\bf M}$, and the unsmoothed document model ${\bf u}_{0}^d$.
\subsection{Background Model Estimation}
In the language modelling approach to information retrieval, it is common to estimate a background collection model using all of the documents $d$ from the collection $D$. Therefore, we first estimate the parameters of the background model (${\bf u}_{0}$ and ${\bf M}$) that has generated all the documents in the collection $D$. We adopt a Bayesian approach to this problem and aim to use the posterior mean as the point estimate as follows:
\begin{equation}
\mathbb{E}({\bf u}_{0}, {\bf M}) = \int ({\bf u}_{0}, {\bf M}) \cdot p({\bf u}_{0}, {\bf M} | D )\cdot d_{{\bf u}_{0}, {\bf M}}
\end{equation}
In general, estimating these parameters is computationally expensive for large scale document collections. In this paper, we use smaller collections and make use of MCMC sampling to approximate the posterior distribution. The main bottleneck in such an approach is estimating the likelihood of the data given the parameters (for many parameter samples). However, there are alternative approaches to setting these parameters. For example, if ${\bf M}$ is set to a fixed value, then we only need to estimate ${\bf u}_{0}$ from data. We will outline the specifics of the estimation approach in Section~5.
\subsection{Document Model Estimation}
Once ${\bf M}$ is known\footnote{via estimation or intuition}, the parameters of the unsmoothed document model $({\bf u}_{0}^d)$ that generated a specific document $d$ need to be estimated. Again we take a Bayesian approach and estimate the expectation of the posterior as follows:
\begin{equation}
\mathbb{E}({\bf u}_{0}^d) = \int {\bf u}_0^d \cdot p({\bf u}_0^d|{\bf M},d) \cdot d_{{\bf u}_0^d}
\end{equation}
where it is worth noting that these parameters need to be estimated for each document $d$ in the collection $D$. ${\bf M}$ is fixed here as it represents the general dynamics of document generation (not of a specific document). Once both ${\bf u}_{0}$ and ${\bf u}_{0}^d$ are estimated, they can be linearly smoothed using a single hyperparameter as follows:
\begin{equation}
{\bf u}_{0}^{d'} = (1-\omega)\cdot {\bf u}_{0}^d + \omega \cdot {\bf u}_{0}
\end{equation}
where $0 \leq \omega \leq 1$ is a tuning parameter and ${\bf u}_{0}^{d'}$ is the final document model. The parameters of ${\bf u}_{0}^{d'}$ can be interpreted as the initial proportions of words of each type in the document model before any draws have been made.
\subsection{Query-Likelihood for Retrieval}
We adopt the well-known query-likelihood approach. Therefore, each document $d$ can be ranked by the probability of its document model generating the query as follows:
\begin{equation}
p(q| {\bf u}_{0}^{d'}, {\bf M}_q)
\end{equation}
where ${\bf M}_q$ is a dynamical model for query generation. It may be the same as ${\bf M}$ (for documents) or could be something as simple as the zero matrix. In this paper, we assume a zero matrix, i.e. there are no query dynamics. We justify this by noting that queries are typically much shorter than documents and are motivated by a very different need when compared to documents. Therefore, in this paper we rank documents according to the following formula once both ${\bf u}_{0}$ and ${\bf u}_{0}^d$ are estimated:
\begin{equation}
log~p(q|{\bf u}_{0}^{d'}, {\bf 0}) = \sum_{t \in q} log ( \frac{\mu_d}{\mu_d + \mu } \cdot \frac{u_{0_t}^d}{|{\bf u}_0^d|} + \frac{\mu}{\mu_d + \mu } \frac{u_{0_t}}{|{\bf u}_0|} )
\end{equation}
where ${|{\bf u}_0^d|}$ and ${|{\bf u}_0|}$ are the masses of the document model and background model used to calculate actual probabilities. However, when estimating the parameters ${{\bf u}_0^d}$ of a single document $d$, the mass (i.e. ${|{\bf u}_0^d|}$) is often not constrained. Therefore, we introduce $\mu_d$ as the initial mass of the document model and set it to the number of unique terms in $d$. Furthermore, by re-writing the retrieval formula, we have subsumed $\omega$ into $\mu$, thus leaving $\mu$ as the only free hyperparameter in the model.
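Once the two models are estimated, the ranking formula above is straightforward to implement. A small sketch (the model estimates below are made up for illustration, not taken from the paper):

```python
import math

def score(query, u0_d, u0_bg, mu):
    """log p(q | smoothed doc model), following Eq. 8; mu_d = # unique terms in d."""
    mu_d = sum(1 for c in u0_d.values() if c > 0)
    mass_d, mass_bg = sum(u0_d.values()), sum(u0_bg.values())
    lam = mu_d / (mu_d + mu)                       # document-model mixing weight
    s = 0.0
    for t in query:
        p_d  = u0_d.get(t, 0.0) / mass_d
        p_bg = u0_bg.get(t, 0.0) / mass_bg
        s += math.log(lam * p_d + (1.0 - lam) * p_bg)
    return s

# Hypothetical estimates: a document model favouring 'dna', background dominated by 'the'.
u0_d  = {'dna': 3.0, 'blood': 1.0}
u0_bg = {'the': 50.0, 'dna': 2.0, 'blood': 2.0, 'refer': 1.0}
print(score(['dna'], u0_d, u0_bg, mu=100.0) > score(['refer'], u0_d, u0_bg, mu=100.0))
```

Note that background smoothing keeps the score finite even for query terms absent from the document model.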
\section{Experiments}
The aim of our experiments is to test the third variant (denoted GSPUD from now on) of the general model and compare it to the models when ${\bf M=0}$ (MULT) and ${\bf M=1}$ (DCM). This third model contains $v$ extra parameters compared to the multivariate P\'olya scheme (DCM). These extra parameters model the burstiness of each term individually. We outline two approaches to finding them. One method involves setting these parameters according to some heuristic (i.e. intuition), and the second method involves estimating the parameters directly from the data using numerical methods. Before this we first outline our use of Metropolis-Hastings MCMC sampling for estimating the parameters (${\bf u}_0$ and ${\bf u}_0^d$) of the models outlined in this work.
\subsection{Estimation}
For all of the models implemented in this work, we make use of MCMC sampling to estimate the posterior expectation of the parameter distribution. We use the Metropolis-Hastings algorithm as it is easily implemented (i.e. one only needs to be able to calculate the likelihood of the data given a set of parameters), can deal with complex distributions (i.e. we do not know the conjugate distribution of the generalised P\'olya distribution), and can deal with high-dimensional data (i.e. the background model will contain thousands of parameters/words). Other techniques may indeed prove to be faster, but for our purposes we only need a method of arriving at suitable estimates.
In particular, we use the Metropolis-Hastings algorithm with a Gaussian proposal distribution (variance of 0.01 for the background model and variance of 0.25 for each document model). For background models, we run the chain for 500,000 samples discarding the first 50,000 samples (i.e. burn-in period). For individual document models, we run the chain for 200,000 samples discarding the first 20,000 samples. We estimate the expectation of the posterior parameter distribution using the likelihood function with a uniform prior. We start the algorithm with a uniform distribution (all parameters are set to $1.0$).\footnote{In practice we sample in the log parameter space and then exponentiate to avoid negative values which are invalid for these models.} We used these techniques on all variants of our models. We used this method for the multinomial model in order to compare the effectiveness of the sampling algorithm on models where we have closed-form maximum-likelihood estimates. This was especially useful during development.\footnote{Code available at \url{https://github.com/ronancummins/gen_polya}}
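The sampler's skeleton is simple: propose in log-parameter space with a Gaussian step, accept by the Metropolis rule, average the kept samples. The stripped-down sketch below targets a plain multinomial likelihood for brevity rather than the full P\'olya likelihood, and uses a tiny chain; it is an illustration, not the experiment code.

```python
import math, random

def log_lik(params, counts):
    """Multinomial log-likelihood of observed counts under unnormalised positive params."""
    total = sum(params)
    return sum(c * math.log(p / total) for p, c in zip(params, counts))

def metropolis_hastings(counts, n_samples, burn_in, step=0.25, seed=0):
    rng = random.Random(seed)
    log_p = [0.0] * len(counts)                       # start uniform: all params = 1.0
    cur_ll = log_lik([math.exp(x) for x in log_p], counts)
    kept = []
    for i in range(n_samples):
        prop = [x + rng.gauss(0.0, step) for x in log_p]   # Gaussian proposal in log space
        prop_ll = log_lik([math.exp(x) for x in prop], counts)
        if math.log(rng.random()) < prop_ll - cur_ll:      # Metropolis acceptance
            log_p, cur_ll = prop, prop_ll
        if i >= burn_in:
            kept.append([math.exp(x) for x in log_p])
    # posterior-mean point estimate (uniform prior)
    return [sum(s[j] for s in kept) / len(kept) for j in range(len(counts))]

est = metropolis_hastings(counts=[30, 10, 10], n_samples=5000, burn_in=500)
norm = [e / sum(est) for e in est]
print(norm[0] > norm[1])   # the frequent 'word' gets the largest normalised share
```

Sampling in log space and exponentiating is what keeps every proposed parameter positive, as the footnote above notes.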
\subsection{Setting ${\bf M}$ Heuristically}
One method of setting the $v$ parameters of ${\bf M}$ is to set them according to a heuristic. One measure of term-burstiness that has been outlined in the literature is as follows:
\begin{equation}
m_{t,t} = bs_t = \frac{cf_t}{df_t}
\end{equation}
where $cf_t$ is the frequency of the term in the collection, and $df_t$ is the document frequency of the term. This quantity measures the average frequency of a word in a document given that it has occurred once. The measure has appeared extensively in the information retrieval literature \cite{kwok96,franz00,cummins06}. It has a lower bound of $1.0$ (which in fact is the default value of term-burstiness in the multivariate P\'olya scheme outlined in Section~3.2). Once ${\bf M}$ is set in this way, we can estimate the initial parameters ${\bf u}_0$ and ${\bf u}_0^d$ using MCMC.
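Computing $bs_t$ from a collection is a one-pass count. A sketch on a made-up toy corpus (the documents below are hypothetical):

```python
from collections import Counter

def burstiness(docs):
    """bs_t = cf_t / df_t (Eq. 9): average in-document frequency given >= 1 occurrence."""
    cf, df = Counter(), Counter()
    for doc in docs:
        counts = Counter(doc)
        cf.update(counts)        # collection frequency
        df.update(counts.keys()) # document frequency
    return {t: cf[t] / df[t] for t in cf}

docs = [['dna', 'dna', 'dna', 'also'],   # 'dna' is concentrated in one document,
        ['also', 'refer'],               # 'also' is spread thinly across documents
        ['also', 'blood']]
bs = burstiness(docs)
print(bs['dna'], bs['also'])  # 3.0 1.0
```

The lower bound of $1.0$ is reached exactly by words such as \emph{also} here, which never repeat within a document.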
\subsection{Data}
Due to the computationally expensive nature of the task, we use three small test collections (Medline, Cranfield, and CISI)\footnote{Available from \url{http://ir.dcs.gla.ac.uk/resources/test_collections/}}. Although, these collections are rather old and are quite small, when compared to modern large scale Web collections such as ClueWeb, they can provide some insights and intuitions into what natural language characteristics are being captured by the new model. We removed standard stopwords and stemmed the documents and queries. Table~\ref{tab:collections} shows some of the characteristics of the collections after preprocessing.
\begin{table*}[!ht]
\centering
\small
{\renewcommand{\arraystretch}{1.0}
\begin{tabular}{| r || r | r | r | r |}
\hline
Collection & Medline & Cranfield & CISI \\
\hline
\# docs & 1,033 & 1,400 & 1,460 \\
\# vocab ($v$) & 8,764 & 5,769 & 7,062 \\
\# tokens & 97,175 & 153,276 & 115,527 \\
\# qrys & 30 & 225 & 76 \\
\hline
\end{tabular}}
\caption{Test Collections Details}
\label{tab:collections}
\end{table*}
\section{Results}
In this section we report our results along with some qualitative analysis that aim to interpret them further.
\subsection{Background Estimation}
First, in Table~\ref{tab:perplexity} we look at how well our different models fit the background collection. We use perplexity to compare the performance of our models on the data on which they were trained. Perplexity is a measure of how surprised a model is by the data (lower perplexity indicating a better model). The number of parameters for each model is shown in the table, where the vocabulary size ($v$) can be found in Table~\ref{tab:collections}.
As a benchmark for our sampling algorithm, we used MCMC to estimate the parameters of the multinomial model (${\bf M = 0}$). We see that our MCMC estimates (MULT$_{mc}$) result in a model which has a very similar perplexity to the maximum-likelihood estimates (MULT$_{mle}$). Furthermore, the DCM$_{mc}$ model (${\bf M = 1}$) improves substantially over the multinomial model with only one extra parameter. The generalised P\'olya model with $v$ burstiness parameters set to $bs_t$ (GSPUD$_{bs_{t}}$) improves over the DCM model. Finally, the generalised P\'olya model with $v$ burstiness parameters estimated via MCMC (GSPUD$_{mc}$) has the lowest perplexity of all models. While this tells us that the parameters are modelling aspects of the document collection such that they are better able to predict the sequence of words, it is unclear whether these results extend beyond the collections on which they were trained. While these experiments provide a useful sanity check regarding the ability of our models to better fit to data, we need to use these models in a retrieval setting to ultimately determine if they can improve effectiveness.
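Perplexity here is the exponential of the average negative per-token log-likelihood (in nats). A minimal sketch; the per-token probabilities below are made up, not outputs of the paper's models:

```python
import math

def perplexity(log_probs):
    """exp of the average negative log-likelihood per token (logs in nats)."""
    return math.exp(-sum(log_probs) / len(log_probs))

# Two hypothetical models scoring the same 4-token text:
better = [math.log(0.2)] * 4   # assigns 0.2 per token -> perplexity 5
worse  = [math.log(0.1)] * 4   # assigns 0.1 per token -> perplexity 10
print(perplexity(better) < perplexity(worse))
```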
\begin{table*}[!ht]
\centering
\small
{\renewcommand{\arraystretch}{1.0}
\begin{tabular}{| l || c | c || c | r | r | r |}
\hline
& \multicolumn{2}{|c||}{Estimation} & & \multicolumn{3}{|c|}{Datasets} \\
\hline
Model &${\bf u_0}$ & ${\bf M}$ & \# params & Medline & Cranfield & CISI \\
\hline
MULT$_{mle}$ & mle & ${\bf 0}$ & $v-1$ & 2047.4 & 971.6 & 1309.7 \\
MULT$_{mc}$ & mcmc & ${\bf 0}$ & $v-1$ & 2079.2 & 980.2 & 1325.0 \\
DCM$_{mc}$ & mcmc & ${\bf 1}$ & $v$ & 1728.8 & 883.1 & 1282.2 \\
GSPUD$_{bs_{t}}$ & mcmc & $bs_t$ & $2v$ & 1369.1 & 688.4 & 1248.4 \\
GSPUD$_{mc}$ & mcmc & mcmc & $2v$ & 1152.7 & 597.8 & 999.6 \\
\hline
\end{tabular}}
\caption{Perplexity (nats) of different language models trained on the background corpus.}
\label{tab:perplexity}
\end{table*}
\subsection{Document Models and Retrieval Performance}
Table~\ref{tab:retrieval} shows the optimal performance of each model when $\mu$ is tuned per collection over the values
$\mu \in \{10, 50, 100, 200, 300, 400, 500, 1000, 10000\}$. Fig.~\ref{fig:tuning} shows the trends as $\mu$ varies. In all cases we can see that the GSPUD models outperform both the DCM and MULT models, and the results are statistically significant. The best-performing model is the GSPUD model that estimates its burstiness parameters directly from data rather than heuristically. Interestingly, the
MULT$_{mc}$ approach outperforms MULT$_{mle}$ suggesting that Bayesian estimates might be better than
maximum-likelihood estimates for the retrieval task.
\begin{table*}[!ht]
\centering
\small
{\renewcommand{\arraystretch}{1.0}
\begin{tabular}{| l || c | c || l | l | l |}
\hline
& \multicolumn{2}{|c||}{Estimation} & \multicolumn{3}{|c|}{Datasets} \\
\hline
Model &${\bf u_0},{\bf u_0^d}$ & ${\bf M}$ & Medline & Cranfield & CISI \\
\hline
MULT$_{mle}$ & mle & ${\bf 0}$ & 0.504 & 0.402 & 0.221 \\
MULT$_{mc}$ & mcmc & ${\bf 0}$ & 0.506 & 0.409 & 0.225 \\
DCM$_{mc}$ & mcmc & ${\bf 1}$ & 0.517$^{m}$ & 0.414$^{m}$ & 0.230$^{m}$ \\
GSPUD$_{bs_{t}}$ & mcmc & $bs_t$ & 0.523$^{md}$ & 0.427$^{md}$ & 0.233$^{md}$ \\
GSPUD$_{mc}$ & mcmc & mcmc & 0.533$^{md}_{g}$ & 0.432$^{md}_{g}$ & 0.245$^{md}_{g}$ \\
\hline
\end{tabular}}
\caption{Performance (Mean Average Precision) of different language models with $\mu$ tuned per collection. The keys $m,d,g$ mean that the result is statistically significant compared to MULT, DCM, and GSPUD$_{bs_{t}}$ respectively at the $p<0.01$ level using a permutation test.}
\label{tab:retrieval}
\end{table*}
\begin{figure}[!ht]
\begin{center}
\begin{tabular}{c c c}
\includegraphics[height=3.1cm,width=3.7cm]{med.eps} &
\includegraphics[height=3.1cm,width=3.7cm]{cran.eps} &
\includegraphics[height=3.1cm,width=3.7cm]{cisi.eps} \\
\end{tabular}
\caption{MAP of different models for varying values of $\mu$ Medline, Cranfield, and CISI.}
\label{fig:tuning}
\end{center}
\end{figure}
\subsection{Qualitative Analysis}
In order to qualitatively evaluate what our models are learning, we now analyse a number of words that appear in the Medline collection. The three words \emph{also}, \emph{dna}, and \emph{refer} are shown in Table~\ref{tab:examples} with some of their collection statistics. In a multinomial model, \emph{also} and \emph{dna} would have a very similar discriminative value because they occur nearly the same number of times throughout the corpus. Models that only use a type of $idf_t$ would assign a similar discriminative value to \emph{refer} and \emph{dna}. However, it is intuitive that \emph{dna} is a much more useful word in general due to its burstiness. We can see that the burstiness of \emph{dna} is much higher than that of the other terms when estimated from data (i.e. $\hat{m}_{t,t}$). It is worth remembering that in the GSPUD model, each word is characterised by two parameters: one that controls the probability of being drawn from the initial urn at time $i=0$ and one that controls its burstiness. For the GSPUD model, the initial probability of a word being drawn from the urn is very closely correlated with the document frequency and is close to the probability $\frac{df_t}{\sum_{t'} df_{t'}}$. The ideas of word burstiness have been around for quite a while, with a number of interesting papers by Church and Gale \cite{church95,church99} outlining the properties of \emph{important} keywords in documents.
\begin{table*}[!ht]
\centering
\small
{\renewcommand{\arraystretch}{1.0}
\begin{tabular}{| l || r | r | r || r | r |}
\hline
term ($t$) & $cf_t$ & $df_t$ & $bs_t$ & $ \hat{u}_{0_t}$ & $\hat{m}_{t,t}$ \\
\hline
also & 216 & 180 & 1.20 & 37.86 & 10.22 \\
dna & 214 & 47 & 4.55 & 9.28 & 266.27 \\
refer & 51 & 47 & 1.08 & 9.08 & 3.22 \\
\hline
\end{tabular}}
\caption{Some statistics and example estimates of the parameters of the GSPUD$_{mc}$ for three words}
\label{tab:examples}
\end{table*}
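The burstiness column $bs_t$ in Table~\ref{tab:examples} is consistent with the ratio $cf_t/df_t$, i.e. the average number of occurrences per document containing the term; the following sketch is written under that assumption, since the formal definition of $bs_t$ appears elsewhere in the paper.

```python
# Sketch (assumption): the burstiness statistic bs_t in Table 3 equals
# cf_t / df_t, the average number of occurrences per document that
# contains the term. (cf_t, df_t) pairs are taken from the table.
stats = {"also": (216, 180), "dna": (214, 47), "refer": (51, 47)}
burstiness = {term: cf / df for term, (cf, df) in stats.items()}
# "dna" is far burstier than "also" despite a nearly equal collection frequency
```

Under this reading, $bs_t$ separates \emph{dna} from \emph{also} even though their $cf_t$ values are nearly identical, matching the qualitative discussion above.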
\section{Summary}
This paper has introduced a family of statistical language models inspired by a classic urn model. We have shown that it is the replacement matrix ${\bf M}$ that defines the dynamics of the model. We have implemented a variant of the model which models burstiness in a term-specific manner. We have shown that the parameters of the model can be estimated from data using sampling techniques.
Furthermore, we have incorporated the new language model into a retrieval framework and shown that retrieval effectiveness improves significantly over a highly competitive baseline language model. Although our experiments are conducted on small test collections (because parameter estimation is computationally expensive), the results are promising. We believe that this is the first paper that deals with term-specific burstiness in such a principled probabilistic manner.
\bibliographystyle{plain}
\input{summary.bbl}
\end{document}
\section{Introduction}
Osteoarthritis (OA) is a chronic degenerative articular disease that gradually causes disability. Knee cartilage is the soft tissue adhering to the end of the bone surface, and changes in its morphological structure are associated with the progression of OA~\cite{ref_proc1}. Compared to other imaging techniques, magnetic resonance imaging (MRI) offers higher specificity and sensitivity for obtaining biomedical markers of knee cartilage~\cite{ref_article1}. However, manual cartilage segmentation from MRI demands considerable specialist knowledge; it is tedious, time-consuming, and introduces inter-/intra-observer variations. Thus, there is a demand for an effective automatic cartilage segmentation method for longitudinal analysis.
\begin{figure}
\includegraphics[width=\textwidth]{fig1.pdf}
\centering
\caption{The knee cartilage segmentation results from the primary baseline~\cite{ref_proc4} and the same model~\cite{ref_proc4} modified by PCAM. In the white and red circles, the local contextual information of the foreground is similar to the nearby background, which causes segmentation errors and leads to discontinuous segmentation results. The proposed PCAM alleviates this problem and produces a more accurate and continuous segmentation result.} \label{fig1}
\end{figure}
With the development of deep learning, convolutional neural networks (e.g., U-Net~\cite{ref_proc3} and V-Net~\cite{ref_proc4}) have achieved state-of-the-art segmentation results. Although the 3D deep learning model V-Net has exhibited superior performance on medical image segmentation tasks, directly applying the primary V-Net to knee MR data may generate low-accuracy results. As shown in Fig.~\ref{fig1}, the articular structure is complex and the features of the tissues around the knee cartilage in MR images are similar to each other, which makes it difficult to extract continuous knee cartilage accurately from the whole volumetric data. To reduce the disturbance brought by the complex background, Ambellan et al.~\cite{ref_article3} proposed a coarse-to-fine scheme with a 2D CNN for coarse segmentation and a 3D CNN for fine segmentation, followed by a statistical shape adjustment. Similarly, Gatti et al.~\cite{ref_article2} adopted a 2D U-Net and a 3D U-Net in parallel for coarse segmentation and an additional 3D U-Net for fine segmentation. The coarse-to-fine architecture consists of several sub-networks, which brings a huge computational burden, and the input of the later sub-networks relies entirely on the output of the preceding ones. Tan et al.~\cite{ref_proc2} presented a collaborative multi-agent learning framework in an end-to-end scheme that is still limited by GPU memory.
Within a 3D MR dataset, the morphological features of knee cartilage vary greatly. To capture the contextual information of objects at different scales, Zhao et al.~\cite{ref_proc5} presented a pyramid network using multiple dilated convolution kernels. However, this non-adaptive approach ignores long-range dependencies and does not distinguish surrounding pixels of different categories, leading to discontinuous segmentation results. Sinha et al.~\cite{ref_article4} pointed out that the self-attention mechanism performs well at modeling long-range contextual dependencies with a high level of flexibility. Nevertheless, when the feature map is large, this attention module incurs a heavy computational burden.
To overcome the shortcomings mentioned above, we propose a novel position-prior and clustering-based self-attention module (PCAM) in CNNs for automatic knee cartilage segmentation in MR data. The main contributions of our research are as follows: (a) To the best of our knowledge, this is the first time a clustering-based self-attention module has been applied to knee cartilage segmentation, and the proposed PCAM can be plugged into a network flexibly with low GPU memory consumption and computational cost. (b) We propose a position-prior module that excludes false positives in the boundary area of the knee cartilage from the coarse mask to improve the accuracy of feature clustering in PCAM. (c) The presented PCAM captures long-range contextual information to achieve continuous and accurate knee cartilage segmentation. (d) Segmentation models combined with the proposed PCAM obtain performance improvements on the OAI-ZIB dataset.
\section{Method}
A general CNN focuses on the local receptive field while neglecting long-range dependencies, which introduces intra-class inconsistency and discontinuous segmentation~\cite{ref_article4}. An attention module can capture contextual dependencies at various scales and strengthen the relevant feature information to obtain more accurate results in segmentation tasks. Taking tibial cartilage segmentation as an example, an overview of the network architecture is shown in Fig.~\ref{fig2}. The blue box is the side-output function that produces a coarse mask of the knee cartilage, which is then modified by morphological operations. The feature map, together with the modified side-output, is then fed back to the clustering-based self-attention module to produce an enhanced feature map.
\begin{figure}[h]
\includegraphics[width=\textwidth]{fig2.pdf}
\caption{The segmentation network architecture with PCAM.} \label{fig2}
\end{figure}
The PCAM is divided into three parts: the position-prior module, the clustering-based module, and the self-attention module, which are illustrated in Fig.~\ref{fig3}. As it is impractical to use the true label mask for calculating each class center in the corresponding feature map, the output of the segmentation network is used for class center approximation~\cite{ref_proc6}. However, the knee cartilage occupies only a small area in a large MR image, so a few false positive points in the coarse segmentation result can make the estimates deviate noticeably from the true class centers. Therefore, we adopt the result modified by morphological operations to generate a precise position prior in this research.
\subsubsection{Position-prior module}
The position-prior module is designed to exclude the false positives of the side-output so as to improve the accuracy of feature clustering. Fig.~\ref{fig1} illustrates a common case in which V-Net obtains precise segmentation results within the knee cartilage as well as in the background, but fails in the area around the boundary of the segmented knee cartilage. In the boundary area, the feature distribution is ambiguous because of the low contrast with adjacent tissues and the limitations of the imaging technique. Thus, we divide the predicted probability map into three parts: $M^{boundary}$ (the area around the boundary of the predicted knee cartilage); $M^{foreground}$ (the area within the predicted knee cartilage); and $M^{background}$ (the area excluding $M^{boundary}$ and $M^{foreground}$). Areas $M^{foreground}$ and $M^{background}$ are defined as follows:
\begin{equation}\label{e1}
M^{foreground} = \{(x,y,z)|(B)_{(x,y,z)}\subseteq \sigma(F)\}
\end{equation}
\begin{equation}\label{e2}
M^{background} = \{(x,y,z)|(B)_{(x,y,z)}\subseteq 1-\sigma(F)\}
\end{equation}
where $F$ is the feature map and $\sigma(\cdot)$ is the side-output function to generate a predicted probability map; $(x,y,z)$ is the position in $\sigma(F)$; $B$ represents the structure element while function $(B)_{(x,y,z)}$ is the set centered on $(x,y,z)$ containing all elements of $B$. As shown in Fig.~\ref{fig3}, we regard the modified side-output $M^{foreground}$ and $M^{background}$ as position prior, which are then used to assess class centers of foreground and background in feature map, respectively.
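The containment tests in Eqs.~(\ref{e1})--(\ref{e2}) are binary erosions of the thresholded side-output and its complement. The following 2D sketch is illustrative only: the array size, the binarised map, and the plain-Python erosion (the paper's implementation uses max-pooling on 3D volumes) are assumptions.

```python
import numpy as np

def erode(mask, k=3):
    # Binary erosion with a k x k all-ones structure element B: keep a pixel
    # only if the whole window (B)_(x,y) lies inside the mask. Windows that
    # cross the image border are treated as not contained (an assumption).
    H, W = mask.shape
    r = k // 2
    out = np.zeros_like(mask)
    for i in range(r, H - r):
        for j in range(r, W - r):
            out[i, j] = mask[i - r:i + r + 1, j - r:j + r + 1].all()
    return out

sigma_F = np.zeros((9, 9), dtype=bool)   # thresholded side-output sigma(F)
sigma_F[3:6, 3:6] = True                 # a small "cartilage" blob
M_fg = erode(sigma_F)                    # Eq. (1): confident foreground
M_bg = erode(~sigma_F)                   # Eq. (2): confident background
M_boundary = ~(M_fg | M_bg)              # the uncertain band near the contour
```

Only pixels whose whole neighbourhood agrees with the prediction survive, so the ambiguous boundary band is excluded from the class-center estimation.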
\subsubsection{Clustering-based module}
The clustering-based module can weaken the influence of segmentation error by averaging all features belonging to the same class in predicted probability map. In addition, the class center contains abundant contextual information as an aggregation of features within each class. Thus, the clustering method is applied in PCAM to compute the similarity between every position in feature map and each class center so as to construct affinity map. The process of clustering is shown in Fig.~\ref{fig3}.
\begin{figure}[h]
\includegraphics[width=\textwidth]{fig3.pdf}
\caption{The details of position-prior and clustering-based attention module.} \label{fig3}
\end{figure}
Given the feature map $F\in{\mathbb{R}}^{C\times H\times W\times S}$, foreground mask $M^{foreground}\in{\mathbb{R}}^{1\times H\times W\times S}$ and background mask $M^{background}\in\mathbb{R}^{1\times H \times W\times S}$, $F$ is reshaped to $C\times HWS$ and mask $M^{class}$ is adjusted to shape $HWS\times 1$, where $C, H, W, S$ represent channel, height, width and slice, respectively. The class center of each class is calculated as follows:
\begin{equation}\label{e3}
F^{class} = \frac{\sum_{i=1}^{HWS}M^{class}(i)\cdot F(i)}{\sum_{i=1}^{HWS}M^{class}(i)}
\end{equation}
where the $class$ could be one of the elements from set $\{foreground, background\}$, $F(i)\in \mathbb{R}^{C\times 1}$ is the feature vector in position $i$ of feature map $F$.
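Eq.~(\ref{e3}) is simply a mask-weighted average over positions; a toy sketch with illustrative shapes:

```python
import numpy as np

# Toy sketch of Eq. (3): the class centre F^class is the mask-weighted mean
# of the C-dimensional feature vectors. All shapes here are illustrative.
C, HWS = 4, 6
rng = np.random.default_rng(0)
F = rng.standard_normal((C, HWS))        # feature map reshaped to C x HWS
M_fg = np.array([1.0, 1, 0, 0, 0, 1])    # flattened foreground mask, length HWS

F_class = (F * M_fg).sum(axis=1) / M_fg.sum()   # class centre, shape (C,)
```

Averaging over all positions of a class makes the centre robust to the few remaining misclassified points.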
\subsubsection{Self-attention module}
Self-attention module can capture the long-range contextual information to ensure the continuous segmentation result and improve the accuracy. As shown in Fig.~\ref{fig3}, the feature map $F$ is firstly reshaped and permuted to $\mathbb{R}^{HWS\times C}$ and class centers are concatenated along the last dimension. The matrix multiplication is then executed between feature map $F\in \mathbb{R}^{HWS\times C}$ and class center $F^{class}\in \mathbb{R}^{C\times 1}$. The results are normalized to generate affinity map $A$ as follows:
\begin{equation}\label{e4}
A_{j}^{class} = \frac{\exp(F^{class}\cdot F(j))}{\sum_{i}^{classes}\exp(F^{i}\cdot F(j))}
\end{equation}
where $A_{j}^{class}$ denotes the similarity between the feature vectors in position $j$ and clustering center of $class$. The class set $classes=\{foreground,background\}$, which also can be extended to multi-class. Affinity map is then multiplied by the transposed feature vectors of class centers to obtain the attention feature map that is further element-wisely added to feature map $F$. The generation of novel feature map is formulated as follows:
\begin{equation}\label{e5}
F_{j}^{atten} = F(j)+\sum_{i}^{classes}A_{j}^{i}\cdot F^{i}
\end{equation}
As presented in Eqs.~\ref{e4} and \ref{e5}, two points with similar features are more likely to be assigned the same label, which yields continuous segmentation of healthy cartilage, while adjacent points with dissimilar features are unlikely to be classified into the same class, which preserves the discontinuity of defective cartilage. PCAM is a flexible plug-in module whose output becomes the input of the next layer in the segmentation network, as shown in Fig.~\ref{fig2}. Compared to the self-attention module of~\cite{ref_article4}, the floating-point operations of PCAM are reduced to $(2C-1)\times N\times HWS$ for a $C\times H\times W\times S$ feature map, where $N$ is the number of classes. In PCAM, the side-output result indicates the class distribution and determines the accuracy of class center estimation. To ensure consistency between the side-output result and the true mask, auxiliary deep supervision is adopted.
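Eqs.~(\ref{e4})--(\ref{e5}) amount to a softmax over the position-to-centre similarities followed by a weighted sum of the class centres; a toy sketch with illustrative shapes:

```python
import numpy as np

# Toy sketch of Eqs. (4)-(5): softmax affinities between every position and
# the two class centres, then the attention-augmented feature map.
C, HWS = 4, 6
rng = np.random.default_rng(1)
F = rng.standard_normal((HWS, C))        # reshaped/permuted feature map
centres = rng.standard_normal((C, 2))    # columns: foreground, background

logits = F @ centres                                   # position-centre dot products
A = np.exp(logits - logits.max(axis=1, keepdims=True)) # numerically stable exp
A /= A.sum(axis=1, keepdims=True)                      # Eq. (4): affinity map
F_atten = F + A @ centres.T                            # Eq. (5): residual attention
```

Because there are only $N$ centres instead of $HWS$ keys, the affinity map is $HWS\times N$ rather than $HWS\times HWS$, which is where the cost reduction comes from.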
To relieve the class-imbalanced problem that knee cartilage occupies a much smaller area compared with background, the Dice loss and Cross-entropy loss are employed for supervision. The total loss is described as follows:
\begin{equation}\label{e6}
Loss_{Total} = Loss_{Dice} + Loss_{Cross-entropy}
\end{equation}
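A minimal sketch of Eq.~(\ref{e6}) on flattened binary predictions; the smoothing constant is an assumption, since the paper does not specify one.

```python
import numpy as np

# Minimal sketch of Eq. (6) on flattened binary predictions.
# The smoothing constant eps is an assumption (the paper gives none).
def dice_loss(p, y, eps=1e-6):
    return 1.0 - (2.0 * (p * y).sum() + eps) / (p.sum() + y.sum() + eps)

def cross_entropy(p, y, eps=1e-6):
    p = np.clip(p, eps, 1.0 - eps)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p)).mean()

p = np.array([0.9, 0.8, 0.2, 0.1])  # predicted foreground probabilities
y = np.array([1.0, 1.0, 0.0, 0.0])  # ground-truth labels
loss_total = dice_loss(p, y) + cross_entropy(p, y)
```

The Dice term is insensitive to the large background area, which is why it complements cross-entropy under class imbalance.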
\section{Experiment}
\subsubsection{Materials and Evaluation Metrics}
The proposed method is validated on the OAI-ZIB dataset\footnote[1]{\url{https://nda.nih.gov/oai/}}. This public dataset includes 507 3D DESS MR volumes with 81120 slices in total. The pixel spacing is 0.3645mm $\times$ 0.3645mm and the slice thickness is 0.7mm for all volumetric data. Each volume contains 160 slices of size 384 $\times$ 384. Two-fold cross validation is applied to evaluate the performance of the methods. To verify the effectiveness of PCAM, an ablation study is conducted as well. The Dice similarity coefficient (DSC), average symmetric surface distance (ASSD), and volumetric overlap error (VOE) are adopted for the comparison between the predicted results and the ground truth.
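Two of these metrics can be sketched directly on binary masks (ASSD requires surface-distance computation and is omitted; the example masks are illustrative):

```python
import numpy as np

# Sketch of DSC and VOE on binary masks (illustrative toy data).
def dsc(a, b):
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def voe(a, b):
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return 100.0 * (1.0 - inter / union)   # in percent

pred = np.array([1, 1, 1, 0, 0], dtype=bool)
truth = np.array([1, 1, 0, 0, 0], dtype=bool)
```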
\subsubsection{Implementation Details}
In the training phase, the batch size is set to 4 and the initial learning rate to 0.01, using the Adam optimizer with a decay rate of 0.95 per epoch. Training stops when the improvement in DSC on the validation set is no more than 0.0001 for 10 consecutive epochs. The structure element is a $3\times 3$ mask with all elements set to 1. To preserve the structure of the cartilage, the morphological operation is executed only once. For data augmentation, elastic deformation, random rotation, random/center cropping, and random clipping serve as spatial transformations. Moreover, gamma transformation, Gaussian noise, and contrast adjustment are applied to enrich the gray-level distribution of the training data. For mini-batch training, batch normalization and ReLU in the segmentation network are replaced by instance normalization and Leaky ReLU, respectively. The networks are trained and tested on an NVIDIA 2080Ti with 11 GB of video memory.
\subsubsection{Experimental results}
As a cascaded model (e.g.,~\cite{ref_article2},~\cite{ref_proc2},~\cite{ref_article5}) is composed of single segmentation models, its segmentation performance depends on its sub-networks, at the cost of a complex computational process and a huge computational burden. In this experiment, several classical segmentation networks with different schemes are evaluated. First, the baseline model is derived from V-Net~\cite{ref_proc4} without any modification. The second segmentation model is devised with joint learning via a generative adversarial network (GAN), whose architecture and training process are adjusted on the basis of~\cite{ref_proc2}. Because the memory usage of the attention module in~\cite{ref_proc6} exceeds the GPU memory limit, the third model, the combination of the baseline model and the attention module~\cite{ref_proc6}, is re-designed with the help of the side-output and auxiliary supervision. The fourth model is the baseline with the proposed PCAM plugged between the third and the fourth upsampling layers as in Fig.~\ref{fig2}. The fifth model is the primary nnU-Net~\cite{ref_article5}, which has obtained the best segmentation results in several medical image segmentation challenges. In the last model, the nnU-Net is combined with the proposed PCAM at the same location as in the fourth model.
\begin{table}[h]
\caption{Quantitative comparisons among segmentation methods with evaluation metrics (DSC, VOE and ASSD) by mean and std values.}\label{tab1}
\centering
\begin{tabular}{|c|ccc|ccc|}
\hline
\multirow{2}{*}{Model} & \multicolumn{3}{c|}{Femoral Cartilage} & \multicolumn{3}{c|}{Tibial Cartilage} \\ \cline{2-7}
& \multicolumn{1}{c|}{DSC(\%)} & \multicolumn{1}{c|}{VOE(\%)} & ASSD(mm) & \multicolumn{1}{c|}{DSC(\%)} & \multicolumn{1}{c|}{VOE(\%)} & ASSD(mm)\\ \hline
Baseline~\cite{ref_proc4}& \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}87.71\\ 2.77\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}21.79\\ 4.28\end{tabular}} & \begin{tabular}[c]{@{}c@{}}0.2259\\ 0.1020\end{tabular} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}84.11\\ 3.93\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}27.23\\ 5.79\end{tabular}} & \begin{tabular}[c]{@{}c@{}}0.2287\\ 0.1067\end{tabular} \\ \hline
Tan et al.~\cite{ref_proc2}& \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}87.86\\ 2.84\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}21.53\\ 4.40\end{tabular}} & \begin{tabular}[c]{@{}c@{}}0.2390\\ 0.1221\end{tabular} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}84.08\\ 4.04\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}27.25\\ 5.94\end{tabular}} & \begin{tabular}[c]{@{}c@{}}0.2643\\ 0.1402\end{tabular} \\ \hline
Zhang et al.~\cite{ref_proc6}& \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}87.91\\ 2.82\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}21.45\\ 4.38\end{tabular}} & \begin{tabular}[c]{@{}c@{}}0.2235\\ 0.1121\end{tabular} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}84.57\\ 4.16\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}26.52\\ 6.20\end{tabular}} & \begin{tabular}[c]{@{}c@{}}0.2374\\ 0.0907\end{tabular} \\ \hline
Baseline~\cite{ref_proc4}+PCAM & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}88.45\\ 2.57\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}20.62\\ 4.04\end{tabular}} & \begin{tabular}[c]{@{}c@{}}\textbf{0.2058}\\ 0.0900\end{tabular} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}85.15\\ 4.19\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}25.61\\ 6.31\end{tabular}} & \begin{tabular}[c]{@{}c@{}}0.2172\\ 0.1072\end{tabular} \\ \hline
nnU-Net~\cite{ref_article5}& \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}89.03\\ 2.73\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}19.21\\ 4.37\end{tabular}} & \begin{tabular}[c]{@{}c@{}}0.2551\\ 0.3206\end{tabular} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}86.0\\ 4.52\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}24.29\\ 6.73\end{tabular}} & \begin{tabular}[c]{@{}c@{}}\textbf{0.2117}\\ 0.1074\end{tabular} \\ \hline
nnU-Net~\cite{ref_article5}+PCAM & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}\textbf{89.35}\\ 2.69\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}\textbf{19.14}\\ 4.32\end{tabular}} & \begin{tabular}[c]{@{}c@{}}0.2389\\ 0.3196\end{tabular} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}\textbf{86.14}\\ 4.43\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}\textbf{24.08}\\ 6.63\end{tabular}} & \begin{tabular}[c]{@{}c@{}}0.2165\\ 0.1242\end{tabular} \\ \hline
\end{tabular}
\end{table}
The experimental results are shown in Table~\ref{tab1}. The adversarial learning scheme improves femoral cartilage segmentation but fails on tibial cartilage. The structural characteristic that tibial cartilage occupies far fewer pixels may render the adversarial learning scheme ineffective. The attention module in the third model captures long-range contextual information, which is well suited to elongated cartilage segmentation. Furthermore, the segmentation performance of both the baseline model and nnU-Net improves when PCAM is plugged into the network, which demonstrates the effectiveness of the proposed self-attention module. Compared to the primary baseline model, the combination of the baseline model and PCAM achieves continuous and accurate segmentation, as shown in Fig.~\ref{fig1}. To quantify the continuity of the segmentation results, the 0-dimensional Betti number error is calculated slice by slice~\cite{ref_article6}. Under this metric, the average continuity errors of the nnU-Net+PCAM model are $0.1323(\pm 0.43)$ on femoral cartilage and $0.1267(\pm 0.48)$ on tibial cartilage, compared to nnU-Net with $0.1792(\pm 0.52)$ on femoral cartilage and $0.1358(\pm 0.49)$ on tibial cartilage. A t-test was conducted between the methods with and without PCAM on segmentation continuity. We obtained $p<0.01$ on both femoral and tibial cartilage segmentation tasks, indicating that PCAM improves segmentation continuity significantly.
The decoder of the baseline model contains abundant semantic and spatial information, which makes it suitable for inserting PCAM. However, there are three locations among the four upsampling layers of the decoder at which PCAM can be plugged in. To find out which location is optimal, the three locations are denoted as $1, 2, 3$ from low to high resolution and plugged with PCAM, respectively. The experimental results are shown in Table~\ref{tab2}. It can be seen that PCAM plugged between the third and the fourth upsampling layers, at the highest resolution, obtains the best segmentation results.
\begin{table}[h]
\caption{Ablation experiment on $PCAM$. $PCAMNet^{i}$ represents that the model is plugged with PCAM behind the $i th$ upsampling layer (PCAMNet=Baseline+PCAM). The performance is evaluated on three metrics by mean and std values.}\label{tab2}
\centering
\begin{tabular}{|c|ccc|ccc|}
\hline
\multirow{2}{*}{} & \multicolumn{3}{c|}{Femoral Cartilage} & \multicolumn{3}{c|}{Tibial Cartilage} \\ \cline{2-7}
& \multicolumn{1}{c|}{DSC(\%)} & \multicolumn{1}{c|}{VOE(\%)} & ASSD(mm) & \multicolumn{1}{c|}{DSC(\%)} & \multicolumn{1}{c|}{VOE(\%)} & ASSD(mm) \\ \hline
$PCAMNet^{1}$ & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}87.78\\ 2.87\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}21.67\\ 4.47\end{tabular}} & \begin{tabular}[c]{@{}c@{}}0.2555\\ 0.1853\end{tabular} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}84.57\\ 4.16\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}26.51\\ 6.20\end{tabular}} & \begin{tabular}[c]{@{}c@{}}0.2374\\ 0.0907\end{tabular} \\ \hline
$PCAMNet^{2}$ & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}87.90\\ 2.71\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}21.49\\ 4.20\end{tabular}} & \begin{tabular}[c]{@{}c@{}}0.2438\\ 0.1451\end{tabular} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}84.65\\ 4.14\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}26.39\\ 6.15\end{tabular}} & \begin{tabular}[c]{@{}c@{}}0.2518\\ 0.1746\end{tabular} \\ \hline
$PCAMNet^{3}$ & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}\textbf{88.45}\\ 2.57\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}\textbf{20.62}\\ 4.04\end{tabular}} & \begin{tabular}[c]{@{}c@{}}\textbf{0.2058}\\ 0.0900\end{tabular} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}\textbf{85.15}\\ 4.19\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}\textbf{25.61}\\ 6.31\end{tabular}} & \begin{tabular}[c]{@{}c@{}}\textbf{0.2172}\\ 0.1072\end{tabular} \\ \hline
\end{tabular}
\end{table}
In PCAM, the side-output is used to calculate the center of each class. However, the original side-output $S_{pred}$ contains many false positives in the area around the boundary of the segmented knee cartilage, as shown in Fig.~\ref{fig1}. To filter out these segmentation mistakes, erosion operations (implemented via max-pooling) are applied in this research. The modified side-output $S_{pred}^{'}$ achieves higher precision, as shown in Table~\ref{tab3}.
\begin{table}[h]
\caption{Quantitative comparison of mean and std precision between the original side-output results $S_{pred}$ and the side-output results $S_{pred}^{'}$ modified by morphological operations. ($Precision = \frac{True Positives}{True Positives + False Positives}\times 100\%$)}\label{tab3}
\centering
\begin{tabular}{|c|cccc|}
\hline
\multirow{3}{*}{} & \multicolumn{4}{c|}{Precision(\%)} \\ \cline{2-5}
& \multicolumn{2}{c|}{Femoral Cartilage} & \multicolumn{2}{c|}{Tibial Cartilage} \\ \cline{2-5}
& \multicolumn{1}{c|}{Foreground} & \multicolumn{1}{c|}{Background} & \multicolumn{1}{c|}{Foreground} & Background \\ \hline
$S_{pred}$ & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}87.56\\ 4.18\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}99.83\\ 0.05\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}83.32\\ 7.51\end{tabular}} & \begin{tabular}[c]{@{}c@{}}99.95\\ 0.02\end{tabular} \\ \hline
$S_{pred}^{'}$ & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}\textbf{98.13}\\ 1.75\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}\textbf{99.98}\\ 0.02\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}\textbf{94.77}\\ 5.98\end{tabular}} & \begin{tabular}[c]{@{}c@{}}\textbf{99.99}\\ 0.01\end{tabular} \\ \hline
\end{tabular}
\end{table}
\section{Conclusion}
In this research, we proposed a novel self-attention module, PCAM, for accurate and continuous knee cartilage segmentation. The proposed PCAM captures long-range contextual information to compensate for the limited receptive field of the convolution filters in neural networks. Besides, PCAM incurs a low computational burden and can be flexibly plugged into any encoder-decoder segmentation network. The experimental results show that the proposed method achieves accurate segmentation of both femoral and tibial cartilage in 3D MR data and has potential for future applications.
\subsubsection{Acknowledgements} This work was supported by the National Natural Science Foundation of China under Grant 62001144 and Grant 62001141, and by Science and Technology Innovation Committee of Shenzhen Municipality under Grant JCYJ20210324131800002 and RCBS20210609103820029.
\section{Introduction}
Entanglement\cite{nielsen00,horodecki09} is one of the important concepts in both the fundamental aspects of quantum mechanics and the practical aspects of quantum information processing. As shown over the last two decades, it plays a crucial role in quantum teleportation\cite{teleportation}, superdense coding\cite{superdense}, quantum cloning\cite{clon}, and quantum cryptography\cite{cryptography,cryptography2}. It is also quantum entanglement that makes the quantum computer\footnote{The current status of quantum computer technology was reviewed in Ref.\cite{qcreview}.} outperform the classical one\cite{qcomputer}.
Quantum mechanics is a theory valid for ideally closed systems. However, real physical systems inevitably interact with their surroundings. Thus, it is important to study how the environment modifies the dynamics of a given physical system. There are two different tools for describing the evolution of an open quantum system: the quantum operation formalism\cite{nielsen00} and the master equation approach\cite{breuer02}. Both tools have their own merits.
Since a quantum system is known to lose its quantum properties through contact with the environment\cite{zurek03}, we expect a degradation of entanglement to occur\cite{yu02-1,simon02-1,dur04-1}. Sometimes entanglement exhibits an exponential decay in time. Sometimes, however, entanglement sudden death (ESD) occurs when an entangled multipartite quantum system is embedded in Markovian environments\cite{markovian,yu05-1,yu06-1,yu09-1}. This means that the entanglement disappears completely at a finite time. The ESD phenomenon has been confirmed experimentally\cite{almeida07,laurat07}. When ESD occurs, it is natural to ask where the lost entanglement goes. It was found that when the entanglement of a given quantum system suddenly disappears, the reservoir entanglement suddenly appears, which is called entanglement sudden birth (ESB)\cite{lopez08}. Since we do not consider the degrees of freedom of the environment, we do not examine the ESB phenomenon in this paper.
The dynamics of entanglement has also been examined when the physical system is embedded in a non-Markovian environment\cite{breuer02,bellomo07}. It has been shown that there is a revival of entanglement after a finite period of its complete disappearance. This is mainly due to the memory effect of the non-Markovian environment. This phenomenon was shown in Ref.\cite{bellomo07} by making use of a two-qubit system and the concurrence\cite{concurrence1} as a bipartite entanglement measure. Subsequently, many works have been carried out to quantify non-Markovianity\cite{breuer09,vacchini11,chruscinski11,rivas14,hall14,kwang15-1}.
In this paper we consider the entanglement dynamics of a qubit system interacting with a Markovian or non-Markovian environment. So far this issue has been investigated mainly for bipartite systems. Recently, the tripartite entanglement dynamics was also explored numerically in Ref.\cite{kwang15-1}. Since entanglement is an important physical resource in quantum information processing, it is important to control the entanglement dynamics in the presence of an environment. In order to control the entanglement it is crucial to derive it analytically over the entire range of time. For example, the analytic derivation of bipartite entanglement dynamics enables one to explore entanglement invariants\cite{yonac07,yu09-1}. It also makes it possible to discuss the robustness or fragility against the environment by exploiting the analytical results. Thus, we explore the tripartite entanglement dynamics in this paper on analytical grounds. For simplicity, we choose a physical setting in which there is no interaction between the qubits and each qubit interacts with its own reservoir. We compute the entanglement at arbitrary time for three types of initial Greenberger-Horne-Zeilinger (GHZ) states\cite{green89} and for two types of initial W-states\cite{dur00-1} in the presence of a Markovian or non-Markovian environment.
Typical tripartite entanglement measures are the residual entanglement\cite{ckw} and the $\pi$-tangle\cite{ou07-1}.
For a three-qubit pure state
$|\psi\rangle = \sum_{i,j,k=0}^1 a_{ijk} |ijk\rangle$ the residual entanglement $\tau_{ABC}$
becomes
\begin{equation}
\label{residual-1}
\tau_{ABC} = 4 |d_1 - 2 d_2 + 4 d_3|,
\end{equation}
where
\begin{eqnarray}
\label{residual-2}
& &d_1 = a^2_{000} a^2_{111} + a^2_{001} a^2_{110} + a^2_{010} a^2_{101} + a^2_{100} a^2_{011},
\\ \nonumber
& &d_2 = a_{000} a_{111} a_{011} a_{100} + a_{000} a_{111} a_{101} a_{010} +
a_{000} a_{111} a_{110} a_{001}
\\ \nonumber
& &\hspace{1.0cm} +
a_{011} a_{100} a_{101} a_{010} + a_{011} a_{100} a_{110} a_{001} +
a_{101} a_{010} a_{110} a_{001},
\\ \nonumber
& &d_3 = a_{000} a_{110} a_{101} a_{011} + a_{111} a_{001} a_{010} a_{100}.
\end{eqnarray}
Thus, the residual entanglement of any three-qubit pure state can be computed by making use of Eq. (\ref{residual-1}). Although the residual entanglement
can detect the GHZ-type entanglement, it cannot detect the W-type entanglement:
\begin{equation}
\label{residual-3}
\tau_{ABC} (GHZ) = 1 \hspace{1.0cm} \tau_{ABC} (W) = 0,
\end{equation}
where
\begin{equation}
\label{residual-4}
\ket{GHZ} = \frac{1}{\sqrt{2}} \left( \ket{000} + \ket{111} \right)
\hspace{1.0cm} \ket{W} = \frac{1}{\sqrt{3}} \left( \ket{001} + \ket{010} + \ket{100} \right).
\end{equation}
For mixed states the residual entanglement is defined by a convex-roof
method\cite{benn96,uhlmann99-1} as follows:
\begin{equation}
\label{residual-5}
\tau_{ABC} (\rho) = \min \sum_i p_i \tau_{ABC} (\rho_i),
\end{equation}
where the minimum is taken over all possible ensembles of pure states. The pure state ensemble
corresponding to the minimum $\tau_{ABC}$ is called the optimal decomposition. It is in general
difficult to derive the optimal decomposition for arbitrary mixed states. Hence, analytic
computation of the residual entanglement has been carried out only in rare cases\cite{residual}.
Recently, however, the
three-tangle\footnote{In this paper we will call $\tau_3 = \sqrt{\tau_{ABC}}$ the three-tangle and $\tau_3^2 = \tau_{ABC}$
the residual entanglement.} $\tau_3$ was computed explicitly\cite{siewert12-1} for the whole class of GHZ-symmetric states\cite{elts12-1}.
The $\pi$-tangle defined in Ref.\cite{ou07-1} is easier for analytic computation than the residual entanglement (or three tangle) because it
does not rely on the convex-roof method.
The $\pi$-tangle is
defined in terms of the global negativities~\cite{vidal01-1}. For a three-qubit state $\rho$
they are given by
\begin{equation}
\label{negativity-1}
{\cal N}^A = || \rho^{T_A} || - 1, \hspace{1.0cm}
{\cal N}^B = || \rho^{T_B} || - 1, \hspace{1.0cm}
{\cal N}^C = || \rho^{T_C} || - 1,
\end{equation}
where $||R|| = \mbox{Tr} \sqrt{R R^{\dagger}}$, and the superscripts $T_A$, $T_B$, and $T_C$
represent the partial transposes of $\rho$ with respect to the qubits $A$, $B$, and $C$ respectively.
Then, the $\pi$-tangle is defined as
\begin{equation}
\label{pi-1}
\pi_{ABC} = \frac{1}{3} (\pi_A + \pi_B + \pi_C ),
\end{equation}
where
\begin{equation}
\label{pi-2}
\pi_A = {\cal N}_{A(BC)}^2 - ({\cal N}_{AB}^2 + {\cal N}_{AC}^2) \hspace{.5cm}
\pi_B = {\cal N}_{B(AC)}^2 - ({\cal N}_{AB}^2 + {\cal N}_{BC}^2) \hspace{.5cm}
\pi_C = {\cal N}_{(AB)C}^2 - ({\cal N}_{AC}^2 + {\cal N}_{BC}^2).
\end{equation}
The remarkable property of the $\pi$-tangle is that it can detect not only the GHZ-type entanglement but also the W-type
entanglement:
\begin{equation}
\label{pi-ghz-w}
\pi_{ABC} (GHZ) = 1 \hspace{1.0cm}
\pi_{ABC} (W) = \frac{4}{9} (\sqrt{5} - 1) \sim 0.55.
\end{equation}
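The definitions (\ref{negativity-1})-(\ref{pi-2}) translate directly into a short numerical routine. The sketch below (helper names are ours) evaluates the global negativities by exact diagonalization of the partial transposes and reproduces Eq. (\ref{pi-ghz-w}):

```python
import numpy as np

def negativity(rho, dims, sys):
    """Global negativity ||rho^{T_sys}||_1 - 1 on subsystems of dimensions dims."""
    n = len(dims)
    r = rho.reshape(dims + dims)
    r = np.swapaxes(r, sys, n + sys)        # partial transpose on subsystem sys
    r = r.reshape(rho.shape)
    return np.abs(np.linalg.eigvalsh(r)).sum() - 1.0

def trace_out(rho, k):
    """Trace qubit k out of a three-qubit density matrix."""
    r = rho.reshape(2, 2, 2, 2, 2, 2)
    return np.trace(r, axis1=k, axis2=3 + k).reshape(4, 4)

def pi_tangle(rho):
    """pi-tangle of Eqs. (pi-1)-(pi-2) for a three-qubit density matrix."""
    N1 = [negativity(rho, [2, 2, 2], s) for s in range(3)]  # N_A(BC), N_B(AC), N_(AB)C
    N_AB = negativity(trace_out(rho, 2), [2, 2], 0)
    N_AC = negativity(trace_out(rho, 1), [2, 2], 0)
    N_BC = negativity(trace_out(rho, 0), [2, 2], 0)
    pi_A = N1[0]**2 - N_AB**2 - N_AC**2
    pi_B = N1[1]**2 - N_AB**2 - N_BC**2
    pi_C = N1[2]**2 - N_AC**2 - N_BC**2
    return (pi_A + pi_B + pi_C) / 3.0

ghz = np.zeros(8); ghz[0] = ghz[7] = 1/np.sqrt(2)
w   = np.zeros(8); w[1] = w[2] = w[4] = 1/np.sqrt(3)
```

For the W state one finds ${\cal N}_{A(BC)} = 2\sqrt{2}/3$ and ${\cal N}_{AB} = (\sqrt{5}-1)/3$, which combine to $\pi_{ABC}(W) = \frac{4}{9}(\sqrt{5}-1)$.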
As commented earlier, we examine the tripartite entanglement dynamics of three-qubit states in the presence of the Markovian or non-Markovian
environment. We adopt the $\pi$-tangle as the entanglement measure in order to carry out the computation analytically as much as possible. In section II we consider
how a three-qubit initial state evolves when each qubit interacts with its own Markovian or non-Markovian environment\cite{bellomo07}. In section III we
explore the entanglement dynamics of three GHZ-type initial states. The initial states are local-unitary (LU) equivalent to each other, so their entanglements are
initially the same. Furthermore, if the parameters are chosen appropriately, they all have the GHZ symmetry, i.e. they are invariant under
(i) qubit permutations, (ii) simultaneous three-qubit flips, and (iii) qubit rotations about the $z$-axis. However, this symmetry is broken by the
environment effect. As a result, their entanglement dynamics differ from each other. In section IV we examine the entanglement
dynamics of two W-type initial states. They are also LU equivalent to each other, but again their dynamics differ because of the environment effect.
In section V a brief conclusion is given.
\section{General Features}
We consider a three-qubit system, each qubit of which interacts only and independently with its local environment. We assume that the dynamics of a
single qubit is governed by the Hamiltonian
\begin{equation}
\label{hamitonian-1}
H = H_0 + H_I
\end{equation}
where
\begin{eqnarray}
\label{hamiltonian-2}
& &H_0 = \omega_0 \sigma_+ \sigma_- + \sum_{k} \omega_k b_k^{\dagger} b_k \\ \nonumber
& &H_I = \sigma_+ \otimes B + \sigma_- \otimes B^{\dagger} \hspace{1.0cm} \mbox {with}\hspace{.5cm} B = \sum_k g_k b_k.
\end{eqnarray}
In Eq. (\ref{hamiltonian-2}) $\omega_0$ is a transition frequency of the two-level system (qubit), and $\sigma_{\pm}$ are the raising and
lowering operators. The index $k$ labels the different field modes of the reservoir with frequencies $\omega_k$, creation and annihilation
operators $ b_k^{\dagger}$, $b_k$, and coupling constants $g_k$. In the interaction picture the dynamics is governed by the Schr\"{o}dinger
equation
\begin{equation}
\label{schrodinger-1}
\frac{d}{d t} \psi (t) = -i H_I (t) \psi(t)
\end{equation}
where
\begin{eqnarray}
\label{schrodinger-2}
&&H_I (t) \equiv e^{i H_0 t} H_I e^{-i H_0 t} = \sigma_+ (t)\otimes B(t) + \sigma_- (t) \otimes B^{\dagger} (t) \nonumber \\
&&\sigma_{\pm} (t) \equiv e^{i H_0 t} \sigma_{\pm} e^{-i H_0 t} = \sigma_{\pm} e^{\pm i \omega_0 t} \\ \nonumber
&&B(t) \equiv e^{i H_0 t} B e^{-i H_0 t} = \sum_k g_k b_k e^{-i \omega_k t}.
\end{eqnarray}
The Hamiltonian (\ref{hamitonian-1}) represents one of the few exactly solvable models\cite{garraway97}. This means that the Schr\"{o}dinger
equation (\ref{schrodinger-1}) can be solved formally if $\psi (0)$ is given. Then, the reduced state of the single qubit
$\hat{\rho}^S (t) \equiv Tr_{env} \ket{\psi(t)} \bra{\psi(t)}$ is given by\cite{breuer02,manis06}
\begin{eqnarray}
\label{density-1}
\hat{\rho}^S (t) = \left( \begin{array}{cc}
\rho_{00}^S (0) + \rho_{11}^S (0) \left( 1 - |P_t|^2 \right) & \rho_{01}^S (0) P_t \\
\rho_{10}^S (0) P_t^* & \rho_{11}^S (0) |P_t|^2
\end{array} \right)
\end{eqnarray}
where $\hat{\rho}^S (0) = Tr_{env} \ket{\psi(0)} \bra{\psi(0)}$ and $Tr_{env}$ denotes the partial trace over the environment.
The function $P_t$ satisfies the differential equation
\begin{equation}
\label{density-2}
\frac{d}{dt} P_t = - \int_0^t dt_1 f(t - t_1) P_{t_1}
\end{equation}
and the correlation function $f(t - t_1)$ is related to the spectral density $J(\omega)$ of the reservoir by
\begin{equation}
\label{density-3}
f(t - t_1) = \int d\omega \, J(\omega) \exp \left[ i (\omega_0 - \omega) (t - t_1) \right].
\end{equation}
We choose $J(\omega)$ as an effective spectral density of the damped Jaynes-Cummings model\cite{breuer02}
\begin{equation}
\label{jc-1}
J(\omega) = \frac{1}{2 \pi} \frac{\gamma_0 \lambda^2}{(\omega_0 - \omega)^2 + \lambda^2}
\end{equation}
where the parameter $\lambda$ defines the spectral width of the coupling and is connected to the reservoir
correlation time $\tau_B$ by $\tau_B = 1 / \lambda$, while the relaxation time scale $\tau_R$ on which the state of the system
changes is related to $\gamma_0$ by $\tau_R = 1 / \gamma_0$.
By making use of the residue theorem the correlation function can be computed easily in the form
\begin{equation}
\label{correlation-1}
f(t - t_1) = \frac{\gamma_0 \lambda}{2} e^{-\lambda |t - t_1|}.
\end{equation}
Inserting Eq. (\ref{correlation-1}) into Eq. (\ref{density-2}) and making use of the Laplace transform, one can compute $P_t$ explicitly. In the
weak-coupling (or Markovian) regime $\tau_R > 2 \tau_B$, $P_t$ becomes
\begin{equation}
\label{pt-m}
P_t = e^{-\frac{\lambda}{2} t} \left[ \cosh \left( \frac{\bar{d}}{2} t \right) + \frac{\lambda}{\bar{d}} \sinh \left( \frac{\bar{d}}{2} t \right)
\right]
\end{equation}
with $\bar{d} = \sqrt{\lambda^2 - 2 \gamma_0 \lambda}$, while in the strong-coupling (or non-Markovian) regime $\tau_R < 2 \tau_B$, $P_t$ reduces to
\begin{equation}
\label{pt-nm}
P_t = e^{-\frac{\lambda}{2} t} \left[ \cos \left( \frac{d}{2} t \right) + \frac{\lambda}{d} \sin \left( \frac{d}{2} t \right)
\right]
\end{equation}
with $d = \sqrt{2 \gamma_0 \lambda - \lambda^2}$. Since, in the Markovian regime $\lambda > 2 \gamma_0$, $P_t$ in Eq. (\ref{pt-m}) exhibits
an exponential decay in time, it is expected to cause an exponential decay of entanglement or the entanglement sudden death (ESD) phenomenon.
In the non-Markovian regime $\lambda < 2 \gamma_0$, however, $P_t$ in
Eq. (\ref{pt-nm}) exhibits an oscillatory behavior in time with decreasing amplitude. This seems to be responsible for the revival phenomenon of
entanglement\cite{bellomo07} after a finite period of its complete disappearance.
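Both branches of $P_t$ can be checked numerically; in the non-Markovian branch $P_t$ vanishes exactly when $\tan(d t/2) = -d/\lambda$, i.e. at $t_n = \frac{2}{d}\left[n\pi - \tan^{-1}(d/\lambda)\right]$. A minimal sketch (the sample values of $\gamma_0$ and $\lambda$ are ours, and $\lambda \neq 2\gamma_0$ is assumed):

```python
import numpy as np

def P(t, gamma0, lam):
    """Decoherence function P_t of Eqs. (pt-m) and (pt-nm); lam != 2*gamma0 assumed."""
    if lam > 2*gamma0:                              # Markovian (weak-coupling) regime
        d = np.sqrt(lam**2 - 2*gamma0*lam)
        return np.exp(-lam*t/2)*(np.cosh(d*t/2) + (lam/d)*np.sinh(d*t/2))
    d = np.sqrt(2*gamma0*lam - lam**2)              # non-Markovian (strong-coupling) regime
    return np.exp(-lam*t/2)*(np.cos(d*t/2) + (lam/d)*np.sin(d*t/2))

# First zero of P_t in the non-Markovian regime: t_1 = (2/d)[pi - arctan(d/lam)]
gamma0, lam = 1.0, 0.01
d = np.sqrt(2*gamma0*lam - lam**2)
t1 = (2.0/d)*(np.pi - np.arctan(d/lam))
```

In the Markovian branch $P_t$ decreases monotonically from $1$, while in the non-Markovian branch it oscillates and crosses zero at each $t_n$.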
The state $\hat{\rho}^T (t)$ at time $t$ of the whole three-qubit system, each qubit of which interacts only and independently with its own environment, can be derived by making use of the Kraus operators\cite{kraus83}. Introducing, for simplicity, $\{ \ket{0} \equiv \ket{000}, \ket{1} \equiv \ket{001},
\ket{2} \equiv \ket{010}, \ket{3} \equiv \ket{011}, \ket{4} \equiv \ket{100}, \ket{5} \equiv \ket{101}, \ket{6} \equiv \ket{110},
\ket{7} \equiv \ket{111} \}$, the diagonal parts of $\hat{\rho}^T (t)$ are
\begin{eqnarray}
\label{diagonal-1}
&&\hat{\rho}^T_{11} (t) = P_t^2 \left[\hat{\rho}^T_{11} (0) + \left\{ \hat{\rho}^T_{33} (0) + \hat{\rho}^T_{55} (0) \right\} (1 - P_t^2) +
\hat{\rho}^T_{77} (0) (1 - P_t^2)^2 \right] \nonumber \\
&&\hat{\rho}^T_{22} (t) = P_t^2 \left[\hat{\rho}^T_{22} (0) + \left\{ \hat{\rho}^T_{33} (0) + \hat{\rho}^T_{66} (0) \right\} (1 - P_t^2) +
\hat{\rho}^T_{77} (0) (1 - P_t^2)^2 \right] \nonumber \\
&&\hat{\rho}^T_{33} (t) = P_t^4 \left[\hat{\rho}^T_{33} (0) + \hat{\rho}^T_{77} (0) (1 - P_t^2) \right] \\ \nonumber
&&\hat{\rho}^T_{44} (t) = P_t^2 \left[\hat{\rho}^T_{44} (0) + \left\{ \hat{\rho}^T_{55} (0) + \hat{\rho}^T_{66} (0) \right\} (1 - P_t^2) +
\hat{\rho}^T_{77} (0) (1 - P_t^2)^2 \right] \\ \nonumber
&&\hat{\rho}^T_{55} (t) = P_t^4 \left[\hat{\rho}^T_{55} (0) + \hat{\rho}^T_{77} (0) (1 - P_t^2) \right] \\ \nonumber
&&\hat{\rho}^T_{66} (t) = P_t^4 \left[\hat{\rho}^T_{66} (0) + \hat{\rho}^T_{77} (0) (1 - P_t^2) \right] \\ \nonumber
&&\hat{\rho}^T_{00} (t) = 1 - \sum_{i=1}^7 \hat{\rho}^T_{ii} (t)
\end{eqnarray}
and the non-diagonal parts are
\begin{eqnarray}
\label{non-diagonal-1}
&&\hat{\rho}^T_{01} (t) = P_t \left[\hat{\rho}^T_{01} (0) + \left\{ \hat{\rho}^T_{23} (0) + \hat{\rho}^T_{45} (0) \right\} (1 - P_t^2) +
\hat{\rho}^T_{67} (0) (1 - P_t^2)^2 \right] \nonumber \\
&&\hat{\rho}^T_{02} (t) = P_t \left[\hat{\rho}^T_{02} (0) + \left\{ \hat{\rho}^T_{13} (0) + \hat{\rho}^T_{46} (0) \right\} (1 - P_t^2) +
\hat{\rho}^T_{57} (0) (1 - P_t^2)^2 \right] \nonumber \\
&&\hat{\rho}^T_{04} (t) = P_t \left[\hat{\rho}^T_{04} (0) + \left\{ \hat{\rho}^T_{15} (0) + \hat{\rho}^T_{26} (0) \right\} (1 - P_t^2) +
\hat{\rho}^T_{37} (0) (1 - P_t^2)^2 \right] \nonumber \\
&&\hat{\rho}^T_{03} (t) = P_t^2 \left[\hat{\rho}^T_{03} (0) + \hat{\rho}^T_{47} (0) (1 - P_t^2) \right] \hspace{.5cm}
\hat{\rho}^T_{05} (t) = P_t^2 \left[\hat{\rho}^T_{05} (0) + \hat{\rho}^T_{27} (0) (1 - P_t^2) \right] \nonumber \\
&&\hat{\rho}^T_{06} (t) = P_t^2 \left[\hat{\rho}^T_{06} (0) + \hat{\rho}^T_{17} (0) (1 - P_t^2) \right] \hspace{.5cm}
\hat{\rho}^T_{12} (t) = P_t^2 \left[\hat{\rho}^T_{12} (0) + \hat{\rho}^T_{56} (0) (1 - P_t^2) \right] \nonumber \\
&&\hat{\rho}^T_{13} (t) = P_t^3 \left[\hat{\rho}^T_{13} (0) + \hat{\rho}^T_{57} (0) (1 - P_t^2) \right] \hspace{.5cm}
\hat{\rho}^T_{14} (t) = P_t^2 \left[\hat{\rho}^T_{14} (0) + \hat{\rho}^T_{36} (0) (1 - P_t^2) \right] \nonumber \\
&&\hat{\rho}^T_{15} (t) = P_t^3 \left[\hat{\rho}^T_{15} (0) + \hat{\rho}^T_{37} (0) (1 - P_t^2) \right] \hspace{.5cm}
\hat{\rho}^T_{23} (t) = P_t^3 \left[\hat{\rho}^T_{23} (0) + \hat{\rho}^T_{67} (0) (1 - P_t^2) \right] \\ \nonumber
&&\hat{\rho}^T_{24} (t) = P_t^2 \left[\hat{\rho}^T_{24} (0) + \hat{\rho}^T_{35} (0) (1 - P_t^2) \right] \hspace{.5cm}
\hat{\rho}^T_{26} (t) = P_t^3 \left[\hat{\rho}^T_{26} (0) + \hat{\rho}^T_{37} (0) (1 - P_t^2) \right] \\ \nonumber
&&\hat{\rho}^T_{45} (t) = P_t^3 \left[\hat{\rho}^T_{45} (0) + \hat{\rho}^T_{67} (0) (1 - P_t^2) \right] \hspace{.5cm}
\hat{\rho}^T_{46} (t) = P_t^3 \left[\hat{\rho}^T_{46} (0) + \hat{\rho}^T_{57} (0) (1 - P_t^2) \right] \\ \nonumber
&&\hat{\rho}^T_{07} (t) = \hat{\rho}^T_{07} (0) P_t^3 \hspace{.5cm} \hat{\rho}^T_{16} (t) = \hat{\rho}^T_{16} (0) P_t^3 \hspace{.5cm}
\hat{\rho}^T_{17} (t) = \hat{\rho}^T_{17} (0) P_t^4 \hspace{.5cm} \hat{\rho}^T_{25} (t) = \hat{\rho}^T_{25} (0) P_t^3 \\ \nonumber
&&\hat{\rho}^T_{27} (t) = \hat{\rho}^T_{27} (0) P_t^4 \hspace{.5cm} \hat{\rho}^T_{34} (t) = \hat{\rho}^T_{34} (0) P_t^3 \hspace{.5cm}
\hat{\rho}^T_{35} (t) = \hat{\rho}^T_{35} (0) P_t^4 \hspace{.5cm} \hat{\rho}^T_{36} (t) = \hat{\rho}^T_{36} (0) P_t^4 \\ \nonumber
&&\hspace{1.3cm}\hat{\rho}^T_{37} (t) = \hat{\rho}^T_{37} (0) P_t^5 \hspace{.5cm} \hat{\rho}^T_{47} (t) = \hat{\rho}^T_{47} (0) P_t^4 \hspace{.5cm} \hat{\rho}^T_{56} (t) = \hat{\rho}^T_{56} (0) P_t^4 \\ \nonumber
&&\hspace{2.7cm}\hat{\rho}^T_{57} (t) = \hat{\rho}^T_{57} (0) P_t^5 \hspace{.5cm} \hat{\rho}^T_{67} (t) = \hat{\rho}^T_{67} (0) P_t^5
\end{eqnarray}
with $\hat{\rho}^T_{ij} (t) = \hat{\rho}^{T*}_{ji} (t)$. Now, we are ready to explore the tripartite entanglement dynamics in the presence of the
Markovian or non-Markovian environment.
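For real $P_t$, the matrix elements (\ref{diagonal-1}) and (\ref{non-diagonal-1}) follow from applying the single-qubit amplitude-damping Kraus pair $K_0 = \mbox{diag}(1, P_t)$, $K_1 = \sqrt{1-P_t^2}\,\ket{0}\bra{1}$, which reproduces Eq. (\ref{density-1}), independently to each qubit. The following sketch (our construction, spot-checked on a random test state) verifies a few representative entries:

```python
import numpy as np

def product_channel(rho, P, n=3):
    """Apply the one-qubit Kraus pair K0 = diag(1, P), K1 = sqrt(1-P^2)|0><1|
    independently to each of the n qubits of rho."""
    Ks = [np.diag([1.0, P]), np.array([[0.0, np.sqrt(1 - P**2)], [0.0, 0.0]])]
    for k in range(n):
        out = np.zeros_like(rho)
        for K in Ks:
            ops = [K if j == k else np.eye(2) for j in range(n)]
            M = ops[0]
            for A in ops[1:]:
                M = np.kron(M, A)
            out = out + M @ rho @ M.conj().T
        rho = out
    return rho

# Random (real, positive, unit-trace) test state
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 8))
rho0 = A @ A.T
rho0 /= np.trace(rho0)
P = 0.7
rho_t = product_channel(rho0, P)
```

For example, $\hat{\rho}^T_{11}(t)$, $\hat{\rho}^T_{07}(t)$, and $\hat{\rho}^T_{37}(t)$ computed this way coincide with the corresponding lines of Eqs. (\ref{diagonal-1}) and (\ref{non-diagonal-1}).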
\section{entanglement dynamics of GHZ-type initial states}
In this section we examine the tripartite entanglement dynamics when the initial states are GHZ-type states. All initial states have
GHZ-symmetry\cite{elts12-1} if the parameters are appropriately chosen. However, this symmetry is broken due to the effects of environment.
\subsection{Type I}
Let us choose the initial state in a form
\begin{equation}
\label{type1-ghz-1}
\hat{\rho}^T_I (0) = \ket{\psi_I} \bra{\psi_I}
\end{equation}
where
$\ket{\psi_I} = a \ket{0} + b e^{i \delta} \ket{7}$ with $a^2 + b^2 = 1$. As commented before $\ket{\psi_I}$ has a GHZ-symmetry when
$a^2 = b^2 = 1/2$ and $\delta = 0$. Then the spectral decomposition of $\hat{\rho}^T_I (t)$ can be read directly from Eqs. (\ref{diagonal-1})
and (\ref{non-diagonal-1}) as a form:
\begin{eqnarray}
\label{type1-ghz-2}
&&\hat{\rho}^T_I (t) = \Lambda_+ \ket{\psi_1} \bra{\psi_1} + \Lambda_- \ket{\psi_2} \bra{\psi_2}
+ b^2 P_t^2 (1 - P_t^2)^2 \left\{ \ket{1} \bra{1} + \ket{2} \bra{2} + \ket{4} \bra{4} \right\} \\ \nonumber
&&\hspace{3.0cm} + b^2 P_t^4 (1 - P_t^2) \left\{ \ket{3} \bra{3} + \ket{5} \bra{5} + \ket{6} \bra{6} \right\}
\end{eqnarray}
where
\begin{equation}
\label{type1-ghz-3}
\Lambda_{\pm} = \frac{1}{2} \left[ \left\{ 1 - 3 b^2 P_t^2 (1 - P_t^2) \right\}
\pm \sqrt{\left[ 1 - 3 b^2 P_t^2 (1 - P_t^2) \right]^2 - 4 b^4 P_t^6 (1 - P_t^2)^2 } \right]
\end{equation}
and
\begin{equation}
\label{type1-ghz-4}
\ket{\psi_1} = \frac{1}{N_I} \left( x \ket{0} + y e^{i \delta} \ket{7} \right) \hspace{1cm}
\ket{\psi_2} = \frac{1}{N_I} \left( y \ket{0} - x e^{i \delta} \ket{7} \right)
\end{equation}
with
\begin{eqnarray}
\label{type1-ghz-5}
&&x = 1 - b^2 P_t^2 (3 - 3 P_t^2 + 2 P_t^4) + \sqrt{\left[ 1 - 3 b^2 P_t^2 (1 - P_t^2) \right]^2 - 4 b^4 P_t^6 (1 - P_t^2)^2 } \nonumber \\
&&y = 2 a b P_t^2
\end{eqnarray}
and $N_I = \sqrt{x^2 + y^2}$ is a normalization constant.
Since $\hat{\rho}^T_I (t)$ has full rank, it seems to be highly difficult to compute the residual entanglement (or three-tangle) analytically. However,
from Eq. (\ref{type1-ghz-2}) one can obtain an upper bound of $\tau_{ABC}$ as
\begin{equation}
\label{upper-1}
\tau_{ABC} \leq \left[1 - 3 b^2 P_t^2 (1 - P_t^2)\right] \frac{4 x^2 y^2}{(x^2 + y^2)^2}.
\end{equation}
It is worthwhile noting that $\hat{\rho}^T_I (t)$ does not have the GHZ symmetry even at $a^2 = b^2 = 1/2$ and $\delta = 0$. Thus, the symmetry
of $\hat{\rho}^T_I (0)$ is broken by the effect of the environment.
In order to explore the tripartite entanglement dynamics on the analytical ground, we compute the $\pi$-tangle of $\hat{\rho}^T_I (t)$. Using
Eq. (\ref{negativity-1}) it is straightforward to compute the induced bipartite entanglement quantities
${\cal N}_{A(BC)}$, ${\cal N}_{B(AC)}$, and ${\cal N}_{(AB)C}$. One can show that they are all equal:
\begin{eqnarray}
\label{ghz-1-pi-1}
{\cal N}_{A(BC)} = {\cal N}_{B(AC)} = {\cal N}_{(AB)C} = \max \left[ Q(t), 0 \right],
\end{eqnarray}
where
\begin{equation}
\label{Qt-def}
Q(t) = \sqrt{b^4 P_t^4 (1 - P_t^2)^2 (1 - 2 P_t^2)^2 + 4 a^2 b^2 P_t^6} - b^2 P_t^2 (1 - P_t^2).
\end{equation}
One can also show easily that the two-tangles vanish completely, i.e. ${\cal N}_{AB} = {\cal N}_{AC} = {\cal N}_{BC} = 0$. Thus the $\pi$-tangle of $\hat{\rho}^T_I (t)$ is
\begin{equation}
\label{ghz-1-pi-2}
\pi^I_{GHZ} (t) = {\cal N}_{A(BC)}^2.
\end{equation}
\begin{figure}[ht!]
\begin{center}
\includegraphics[height=4.9cm]{fig1a.pdf}
\includegraphics[height=4.9cm]{fig1b.pdf}
\caption[fig1]{(Color online) The $\pi$-tangle of $\hat{\rho}^T_I (t)$ as a function of the parameters $\gamma_0 t$
and $a^2$ when the state interacts with the Markovian and non-Markovian environments. We choose $\lambda$ as (a) $\lambda = 3 \gamma_0$ and
(b) $\lambda = 0.01 \gamma_0$. }
\end{center}
\end{figure}
Eq. (\ref{ghz-1-pi-1}) guarantees that, regardless of the Markovian or non-Markovian environment, $\pi^I_{GHZ} (t)$ becomes zero if the inequality
\begin{equation}
\label{add1}
a^2 \leq \frac{(1 - P_t^2)^3}{1 + (1 - P_t^2)^3}
\end{equation}
is satisfied, because $Q(t)$ becomes negative under this condition.
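Eq. (\ref{ghz-1-pi-1}) and this vanishing condition can be confirmed numerically: evolve $a\ket{0} + b e^{i\delta}\ket{7}$ through the product amplitude-damping channel of Sec. II and diagonalize the partial transpose. A self-contained sketch (helper names and the test points for $a^2$ and $P_t$ are ours, with $\delta = 0$):

```python
import numpy as np

def evolve(psi, P):
    """Evolve |psi><psi| through the product amplitude-damping channel."""
    rho = np.outer(psi, psi.conj())
    Ks = [np.diag([1.0, P]), np.array([[0.0, np.sqrt(1 - P**2)], [0.0, 0.0]])]
    for k in range(3):
        out = np.zeros_like(rho)
        for K in Ks:
            ops = [K if j == k else np.eye(2) for j in range(3)]
            M = np.kron(np.kron(ops[0], ops[1]), ops[2])
            out = out + M @ rho @ M.conj().T
        rho = out
    return rho

def neg_A(rho):
    """One-vs-two negativity ||rho^{T_A}||_1 - 1 (qubit A = first factor)."""
    r = rho.reshape(2, 4, 2, 4).swapaxes(0, 2).reshape(8, 8)
    return np.abs(np.linalg.eigvalsh(r)).sum() - 1.0

def Q(a2, P):
    """Closed-form Q(t) with a^2 = a2 and b^2 = 1 - a2."""
    b2 = 1.0 - a2
    return np.sqrt(b2**2*P**4*(1 - P**2)**2*(1 - 2*P**2)**2 + 4*a2*b2*P**6) \
        - b2*P**2*(1 - P**2)

def num_neg(a2, P):
    psi = np.zeros(8)
    psi[0], psi[7] = np.sqrt(a2), np.sqrt(1 - a2)   # a|000> + b|111>, delta = 0
    return neg_A(evolve(psi, P))
```

The test point $a^2 = 0.1$, $P_t = 0.4$ lies inside the vanishing region ($Q < 0$ and zero negativity), while $a^2 = 0.6$, $P_t = 0.6$ lies outside it.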
Now, let us examine the dynamics of the tripartite entanglement of $\hat{\rho}^T_I (t)$ when the quantum system interacts with the Markovian environment.
Since $P_t$ in Eq. (\ref{pt-m}) decays exponentially in time, one can expect that the tripartite entanglement similarly exhibits an asymptotic decay,
i.e. a decay obeying the half-life rule. In fact, this is true as long as the inequality (\ref{add1}) is violated. If the inequality holds at $t \geq t_*$, the tripartite entanglement
abruptly becomes zero at $t = t_*$. This is the ESD phenomenon. If the inequality does not hold for all time, the tripartite entanglement decays with the
half-life rule as expected. This is shown clearly in Fig. 1(a), where $\pi^I_{GHZ} (t)$ is plotted as a function of $\gamma_0 t$ and $a^2$. In this
figure we choose $\lambda = 3 \gamma_0$. As expected, the tripartite entanglement decreases with increasing $\gamma_0 t$.
When $a^2 = 0.6$ (blue line) it decays exponentially in $\gamma_0 t$ with the half-life rule. For $a^2 = 0.2$ (red line), however, it becomes zero in the region $\gamma_0 t \geq 1.21$.
In the non-Markovian regime the decay behavior of the tripartite entanglement in time is completely different. This difference arises from the combination
of the inequality (\ref{add1}) and the different form of $P_t$. Since $P_t$ in Eq. (\ref{pt-nm}) exhibits an underdamped behavior in time with
zeros at $t_n = \frac{2}{d} \left[ n \pi - \tan^{-1} (d / \lambda) \right] \hspace{.2cm} (n = 1, 2, \cdots)$, one may expect that the tripartite entanglement also
decays with an oscillatory behavior. This is true when the inequality (\ref{add1}) is violated for all time. This behavior is shown as the blue line
($a^2 = 0.6$) of Fig. 1(b). In this figure we choose
$\lambda = 0.01 \gamma_0$. If the inequality holds in some time interval $t_{*1} \leq t \leq t_{*2}$, the tripartite entanglement becomes zero in this
interval. After this time interval, however, nonzero tripartite entanglement reappears, i.e. the entanglement revives after a finite period of
its complete disappearance. This is shown as the red line ($a^2 = 0.3$) of Fig. 1(b).
\subsection{Type II}
Let us choose the initial state in a form
\begin{equation}
\label{type2-ghz-1}
\hat{\rho}^T_{II} (0) = \ket{\psi_{II}} \bra{\psi_{II}}
\end{equation}
where
$\ket{\psi_{II}} = a \ket{1} + b e^{i \delta} \ket{6}$ with $a^2 + b^2 = 1$.
Since $\ket{\psi_{I}} = \openone \otimes \openone \otimes \sigma_x \ket{\psi_{II}}$, $(\openone \otimes \openone \otimes \sigma_x)
\hat{\rho}^T_{II} (0) (\openone \otimes \openone \otimes \sigma_x)^{\dagger}$ has a GHZ-symmetry provided that $a^2 = b^2 = 1/2$ and
$\delta = 0$.
Using Eqs. (\ref{diagonal-1}) and (\ref{non-diagonal-1}) one can show that the spectral decomposition of $\hat{\rho}^T_{II} (t)$ becomes
\begin{equation}
\label{type2-ghz-2}
\hat{\rho}^T_{II} (t) = \lambda_2 \ket{\phi_{II}} \bra{\phi_{II}} + (1 - P_t^2) \left[ a^2 + b^2 (1 - P_t^2) \right] \ket{0} \bra{0} +
b^2 P_t^2 (1 - P_t^2) \left( \ket{2} \bra{2} + \ket{4} \bra{4} \right)
\end{equation}
where
\begin{eqnarray}
\label{type2-ghz-3}
&&\lambda_2 = P_t^2 (a^2 + b^2 P_t^2) \\ \nonumber
&& \ket{\phi_{II}} = \frac{1}{\sqrt{a^2 + b^2 P_t^2}} \left( a \ket{1} + b P_t e^{i \delta} \ket{6} \right).
\end{eqnarray}
Unlike the type I case, $\hat{\rho}^T_{II} (t)$ is a rank-four tensor. From Eq. (\ref{type2-ghz-2}) one can derive the upper bound of $\tau_{ABC}$
for $\hat{\rho}^T_{II} (t)$, which is
\begin{equation}
\label{upper-2}
\tau_{ABC} \leq \frac{4 a^2 b^2 P_t^4}{a^2 + b^2 P_t^2}.
\end{equation}
The negativities ${\cal N}_{A(BC)}$, ${\cal N}_{B(AC)}$, and ${\cal N}_{(AB)C}$ of $\hat{\rho}^T_{II} (t)$ can be computed by making use of
Eq. (\ref{negativity-1}). The final expressions are
\begin{eqnarray}
\label{ghz-2-pi-1}
&&{\cal N}_{A(BC)} = {\cal N}_{B(AC)} = \sqrt{b^4 P_t^4 (1 - P_t^2)^2 + 4 a^2 b^2 P_t^6} - b^2 P_t^2 (1 - P_t^2) \\ \nonumber
&&{\cal N}_{(AB)C} = \sqrt{(1 - P_t^2)^2 \left[ a^2 + b^2 (1 - P_t^2) \right]^2 + 4 a^2 b^2 P_t^6} - (1 - P_t^2) \left[ a^2 + b^2 (1 - P_t^2) \right].
\end{eqnarray}
It is also easy to show ${\cal N}_{AB} = {\cal N}_{AC} = {\cal N}_{BC} = 0$. Thus the $\pi$-tangle of $\hat{\rho}^T_{II} (t)$ is
\begin{equation}
\label{ghz-2-pi-2}
\pi_{GHZ}^{II} (t) = \frac{1}{3} \left[2 {\cal N}_{A(BC)}^2 + {\cal N}_{(AB)C}^2 \right].
\end{equation}
When $t = 0$, $\pi_{GHZ}^{II} (0)$ becomes $4 a^2 b^2$, and it reduces to zero as $t \rightarrow \infty$. Of course, the state
$\hat{\rho}^T_{II} (t)$ is completely disentangled at $t = t_n \hspace{.2cm} (n = 1, 2, \cdots)$ in the non-Markovian regime.
\subsection{Type III}
Let us choose the initial state in a form
\begin{equation}
\label{type3-ghz-1}
\hat{\rho}^T_{III} (0) = \ket{\psi_{III}} \bra{\psi_{III}}
\end{equation}
where
$\ket{\psi_{III}} = a \ket{3} + b e^{i \delta} \ket{4}$ with $a^2 + b^2 = 1$.
Since $\ket{\psi_{I}} = \openone \otimes \sigma_x \otimes \sigma_x \ket{\psi_{III}}$, $(\openone \otimes \sigma_x \otimes \sigma_x)
\hat{\rho}^T_{III} (0) (\openone \otimes \sigma_x \otimes \sigma_x)^{\dagger}$ has a GHZ-symmetry provided that $a^2 = b^2 = 1/2$ and
$\delta = 0$.
Using Eqs. (\ref{diagonal-1}) and (\ref{non-diagonal-1}) one can show that the spectral decomposition of $\hat{\rho}^T_{III} (t)$ becomes
\begin{equation}
\label{type3-ghz-2}
\hat{\rho}^T_{III} (t) = \lambda_3 \ket{\phi_{III}} \bra{\phi_{III}} + (1 - P_t^2) \left[ a^2 (1 - P_t^2) + b^2 \right] \ket{0} \bra{0} +
a^2 P_t^2 (1 - P_t^2) \left( \ket{1} \bra{1} + \ket{2} \bra{2} \right)
\end{equation}
where
\begin{eqnarray}
\label{type3-ghz-3}
&&\lambda_3 = P_t^2 (a^2 P_t^2 + b^2) \\ \nonumber
&& \ket{\phi_{III}} = \frac{1}{\sqrt{a^2 P_t^2+ b^2}} \left( a P_t \ket{3} + b e^{i \delta} \ket{4} \right).
\end{eqnarray}
Unlike the type I case, $\hat{\rho}^T_{III} (t)$ is a rank-four tensor. From Eq. (\ref{type3-ghz-2}) one can derive the upper bound of $\tau_{ABC}$
for $\hat{\rho}^T_{III} (t)$, which is
\begin{equation}
\label{upper-3}
\tau_{ABC} \leq \frac{4 a^2 b^2 P_t^4}{a^2 P_t^2 + b^2 }.
\end{equation}
The negativities ${\cal N}_{A(BC)}$, ${\cal N}_{B(AC)}$, and ${\cal N}_{(AB)C}$ of $\hat{\rho}^T_{III} (t)$ can be computed by making use of
Eq. (\ref{negativity-1}), whose explicit expressions are
\begin{eqnarray}
\label{ghz-3-pi-1}
&&{\cal N}_{A(BC)} = \sqrt{(1 - P_t^2)^2 \left[a^2 (1 - P_t^2) + b^2 \right]^2 + 4 a^2 b^2 P_t^6} - (1 - P_t^2) \left[a^2 (1 - P_t^2) + b^2 \right]
\nonumber \\
&&{\cal N}_{B(AC)} = {\cal N}_{(AB)C} = \sqrt{a^4 P_t^4 (1 - P_t^2)^2 + 4 a^2 b^2 P_t^6} - a^2 P_t^2 (1 - P_t^2).
\end{eqnarray}
It is of interest to note that ${\cal N}_{A(BC)}$ and ${\cal N}_{B(AC)}$ of type III coincide with ${\cal N}_{(AB)C}$ and ${\cal N}_{A(BC)}$
of type II respectively under the exchange $a \leftrightarrow b$.
It is easy to show ${\cal N}_{AB} = {\cal N}_{AC} = {\cal N}_{BC} = 0$. Thus the $\pi$-tangle of $\hat{\rho}^T_{III} (t)$ is
\begin{equation}
\label{ghz-3-pi-2}
\pi_{GHZ}^{III} (t) = \frac{1}{3} \left[ {\cal N}_{A(BC)}^2 + 2 {\cal N}_{B(AC)}^2 \right].
\end{equation}
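The expressions (\ref{ghz-3-pi-1}) admit the same kind of numerical confirmation as type I: evolve $a \ket{3} + b e^{i \delta} \ket{4}$ through the product amplitude-damping channel of Sec. II and diagonalize the three partial transposes. A sketch (helper names and the test point are ours, with $\delta = 0$):

```python
import numpy as np

def evolve(psi, P):
    """Product amplitude-damping channel, K0 = diag(1, P), K1 = sqrt(1-P^2)|0><1|."""
    rho = np.outer(psi, psi.conj())
    Ks = [np.diag([1.0, P]), np.array([[0.0, np.sqrt(1 - P**2)], [0.0, 0.0]])]
    for k in range(3):
        out = np.zeros_like(rho)
        for K in Ks:
            ops = [K if j == k else np.eye(2) for j in range(3)]
            M = np.kron(np.kron(ops[0], ops[1]), ops[2])
            out = out + M @ rho @ M.conj().T
        rho = out
    return rho

def neg(rho, s):
    """Global negativity ||rho^{T_s}||_1 - 1 for qubit s of a 3-qubit state."""
    r = rho.reshape(2, 2, 2, 2, 2, 2).swapaxes(s, 3 + s).reshape(8, 8)
    return np.abs(np.linalg.eigvalsh(r)).sum() - 1.0

a2, P = 0.3, 0.7
a, b = np.sqrt(a2), np.sqrt(1 - a2)
psi = np.zeros(8); psi[3], psi[4] = a, b        # a|011> + b|100>, delta = 0
rho = evolve(psi, P)

u = 1 - P**2                                    # closed forms of Eq. (ghz-3-pi-1)
NA = np.sqrt(u**2*(a**2*u + b**2)**2 + 4*a**2*b**2*P**6) - u*(a**2*u + b**2)
NB = np.sqrt(a**4*P**4*u**2 + 4*a**2*b**2*P**6) - a**2*P**2*u
```

The numerically evaluated negativities match the closed forms, including ${\cal N}_{B(AC)} = {\cal N}_{(AB)C}$.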
One can also consider different types of initial GHZ-type states. For example, one can consider
$\hat{\rho}^T_{IV} (0) = \ket{\psi_{IV}} \bra{\psi_{IV}}$, where $\ket{\psi_{IV}} = a \ket{2} + b e^{i \delta} \ket{5}$. Although, in this case,
$\hat{\rho}^T_{IV} (t)$ differs from $\hat{\rho}^T_{II} (t)$, one can show that its $\pi$-tangle is exactly the same as that of type II.
Thus, this case is not discussed in detail.
\begin{figure}[ht!]
\begin{center}
\includegraphics[height=5cm]{fig2a.pdf}
\includegraphics[height=5cm]{fig2b.pdf}
\caption[fig2]{(Color online) The $\pi$-tangle for the initial states
(a) $ a \ket{001} + b e^{i \delta} \ket{110}$ and (b) $ a \ket{011} + b e^{i \delta} \ket{100}$ as a function of the parameters $\gamma_0 t$
and $a^2$. We choose $\lambda = 0.01 \gamma_0$. }
\end{center}
\end{figure}
As shown in Eqs. (\ref{ghz-2-pi-2}) and (\ref{ghz-3-pi-2}), the dynamics of the tripartite entanglement for types II and III is not governed by
an inequality like Eq. (\ref{add1}) of type I. Thus, if $\ket{\psi_{II}}$ and $\ket{\psi_{III}}$ interact with the Markovian surroundings,
their entanglements decay exponentially with the half-life rule. This means that there is no ESD phenomenon in these cases. If $\ket{\psi_{II}}$
and $\ket{\psi_{III}}$ interact with the non-Markovian environment, $\pi_{GHZ}^{II} (t)$ and $\pi_{GHZ}^{III} (t)$ should exhibit an oscillatory
behavior with rapidly decreasing amplitude due to $P_t$ in Eq. (\ref{pt-nm}). This can be seen in Fig. 2, where $\pi_{GHZ}^{II} (t)$ and
$\pi_{GHZ}^{III} (t)$ are plotted as a function of the dimensionless parameter $\gamma_0 t$ and of $a^2$. We choose
$\lambda = 0.01 \gamma_0$. As expected, the tripartite entanglement reduces to zero with increasing time in an oscillatory manner.
\begin{figure}[ht!]
\begin{center}
\includegraphics[height=5cm]{fig3a.pdf}
\includegraphics[height=5cm]{fig3b.pdf}
\caption[fig3]{(Color online) The $\gamma_0 t$ dependence of $\pi_{GHZ}^{I} (t)$ (red solid), $\pi_{GHZ}^{II} (t)$ (black dashed), and
$\pi_{GHZ}^{III} (t)$ (blue dotted) when (a) $a^2 = 0.1$
and (b) $a^2 = 0.9$. We choose $\lambda = 0.001 \gamma_0$. }
\end{center}
\end{figure}
The $\pi$-tangles $\pi_{GHZ}^{I} (t)$, $\pi_{GHZ}^{II} (t)$, and $\pi_{GHZ}^{III} (t)$ are compared in Fig. 3 for
$\lambda / \gamma_0 = 0.001$. They are represented by the red solid, black dashed, and blue dotted lines respectively. Figs. 3(a) and 3(b)
correspond to $a^2 = 0.1$ and $a^2 = 0.9$ respectively. Both figures clearly show the revival of the tripartite entanglement after a finite period of its complete
disappearance. The revival phenomenon seems to be mainly due to the memory effect of the non-Markovian environment. It is of interest to note that
while $\pi_{GHZ}^{III} (t)\geq \pi_{GHZ}^{II} (t) \geq \pi_{GHZ}^{I} (t)$ when $a^2 = 0.1$, the order is reversed to
$\pi_{GHZ}^{I} (t)\geq \pi_{GHZ}^{II} (t) \geq \pi_{GHZ}^{III} (t)$ when $a^2 = 0.9$.
\section{entanglement dynamics of W-type initial states}
In this section we examine the tripartite entanglement dynamics when the initial states are two W-type states. The two initial states are LU equivalent to
each other. However, their entanglement dynamics are different, as follows from Eqs. (\ref{diagonal-1}) and (\ref{non-diagonal-1}).
\subsection{Type I}
In this subsection we choose the initial state as
\begin{equation}
\label{type1-W-1}
\hat{\rho}^W_I (0) = \ket{W_1} \bra{W_1}
\end{equation}
where $\ket{W_1} = a \ket{1} + b e^{i \delta_1} \ket{2} + c e^{i \delta_2} \ket{4}$ with $a^2 + b^2 + c^2 = 1$. Then, it is
straightforward to show that the spectral decomposition of $\hat{\rho}^W_I (t)$ is
\begin{equation}
\label{type1-W-2}
\hat{\rho}^W_I (t) = (1 - P_t^2) \ket{0} \bra{0} + P_t^2 \ket{W_1} \bra{W_1}.
\end{equation}
Eq. (\ref{type1-W-2}) guarantees that the residual entanglement and three-tangle of $\hat{\rho}^W_I (t)$ are zero because the spectral
decomposition exactly coincides with the optimal decomposition.
By making use of Eq. (\ref{negativity-1}) one can compute the induced bipartite entanglement quantities
${\cal N}_{A(BC)}$, ${\cal N}_{B(AC)}$, and ${\cal N}_{(AB)C}$
of $\hat{\rho}^W_{I} (t)$ directly, whose expressions are
\begin{eqnarray}
\label{W-1-pi-1}
&& {\cal N}_{A(BC)} = \sqrt{(1 - P_t^2)^2 + 4 c^2 (a^2 + b^2) P_t^4} - (1 - P_t^2) \nonumber \\
&& {\cal N}_{B(AC)} = \sqrt{(1 - P_t^2)^2 + 4 b^2 (a^2 + c^2) P_t^4} - (1 - P_t^2) \\ \nonumber
&& {\cal N}_{(AB)C} = \sqrt{(1 - P_t^2)^2 + 4 a^2 (b^2 + c^2) P_t^4} - (1 - P_t^2).
\end{eqnarray}
Also, the two tangles ${\cal N}_{AB}$, ${\cal N}_{AC}$, and ${\cal N}_{BC}$ become
\begin{eqnarray}
\label{W-1-pi-2}
&&{\cal N}_{AB} = \sqrt{\left[ (1 - P_t^2) + a^2 P_t^2 \right]^2 + 4 b^2 c^2 P_t^4} - \left[ (1 - P_t^2) + a^2 P_t^2 \right] \nonumber \\
&&{\cal N}_{AC} = \sqrt{\left[ (1 - P_t^2) + b^2 P_t^2 \right]^2 + 4 a^2 c^2 P_t^4} - \left[ (1 - P_t^2) + b^2 P_t^2 \right] \\ \nonumber
&&{\cal N}_{BC} = \sqrt{\left[ (1 - P_t^2) + c^2 P_t^2 \right]^2 + 4 a^2 b^2 P_t^4} - \left[ (1 - P_t^2) + c^2 P_t^2 \right].
\end{eqnarray}
Thus, using Eqs. (\ref{pi-1}) and (\ref{pi-2}) one can compute the $\pi$-tangle of $\hat{\rho}^W_I (t)$, whose explicit expression is
\begin{eqnarray}
\label{W-1-pi-3}
&&\pi^I_W (t) = \frac{2}{3} \Bigg[ 2 \left[(1 - P_t^2) + a^2 P_t^2 \right] \sqrt{\left[(1 - P_t^2) + a^2 P_t^2 \right]^2 + 4 b^2 c^2 P_t^4} \nonumber \\
&&\hspace{2.0cm} + 2 \left[(1 - P_t^2) + b^2 P_t^2 \right] \sqrt{\left[(1 - P_t^2) + b^2 P_t^2 \right]^2 + 4 a^2 c^2 P_t^4} \nonumber \\
&&\hspace{2.0cm} + 2 \left[(1 - P_t^2) + c^2 P_t^2 \right] \sqrt{\left[(1 - P_t^2) + c^2 P_t^2 \right]^2 + 4 a^2 b^2 P_t^4} \\ \nonumber
&&\hspace{2.0cm} - (1 - P_t^2) \bigg\{ \sqrt{(1 - P_t^2)^2 + 4 a^2 (b^2 + c^2) P_t^4} \\ \nonumber
&&\hspace{2.0cm} + \sqrt{(1 - P_t^2)^2 + 4 b^2 (a^2 + c^2) P_t^4}
+ \sqrt{(1 - P_t^2)^2 + 4 c^2 (a^2 + b^2) P_t^4} \bigg\} \\ \nonumber
&&\hspace{4.0cm} - 2 (a^4 + b^4 + c^4) P_t^4 - (1 - P_t^2) (3 + P_t^2) \Bigg].
\end{eqnarray}
When $t = 0$, Eq. (\ref{W-1-pi-3}) reduces to
\begin{equation}
\label{W-1-pi-4}
\pi^I_W (0) = \frac{4}{3} \left[a^2 \sqrt{a^4 + 4 b^2 c^2} + b^2 \sqrt{b^4 + 4 a^2 c^2} + c^2 \sqrt{c^4 + 4 a^2 b^2}
- (a^4 + b^4 + c^4) \right],
\end{equation}
which exactly coincides with the result of Ref.\cite{ou07-1}. Of course, at $t = t_n \hspace{.2cm} (n= 1, 2, \cdots)$ and as $t \rightarrow \infty$, the state
$\hat{\rho}^W_I (t)$ is completely disentangled in the non-Markovian regime.
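Since $\hat{\rho}^W_I (t)$ in Eq. (\ref{type1-W-2}) can be constructed directly, the six negativities (\ref{W-1-pi-1}) and (\ref{W-1-pi-2}) are easy to verify numerically. A sketch (helper names and the test point $a^2 = 0.25$, $b^2 = 0.3$, $c^2 = 0.45$, $P_t = 0.8$, with $\delta_1 = \delta_2 = 0$, are ours):

```python
import numpy as np

def neg(rho, dims, s):
    """Global negativity ||rho^{T_s}||_1 - 1 on subsystems of dimensions dims."""
    n = len(dims)
    r = rho.reshape(dims + dims).swapaxes(s, n + s).reshape(rho.shape)
    return np.abs(np.linalg.eigvalsh(r)).sum() - 1.0

def trace_out(rho, k):
    """Trace qubit k out of a three-qubit density matrix."""
    return np.trace(rho.reshape(2, 2, 2, 2, 2, 2), axis1=k, axis2=3 + k).reshape(4, 4)

a, b, c, P = 0.5, np.sqrt(0.3), np.sqrt(0.45), 0.8
W1 = np.zeros(8); W1[1], W1[2], W1[4] = a, b, c   # a|001> + b|010> + c|100>
e0 = np.zeros(8); e0[0] = 1.0
rho = (1 - P**2)*np.outer(e0, e0) + P**2*np.outer(W1, W1)   # Eq. (type1-W-2)

u = 1 - P**2                                      # closed forms of Eqs. (W-1-pi-1,2)
N_A  = np.sqrt(u**2 + 4*c**2*(a**2 + b**2)*P**4) - u
N_B  = np.sqrt(u**2 + 4*b**2*(a**2 + c**2)*P**4) - u
N_C  = np.sqrt(u**2 + 4*a**2*(b**2 + c**2)*P**4) - u
N_AB = np.sqrt((u + a**2*P**2)**2 + 4*b**2*c**2*P**4) - (u + a**2*P**2)
N_AC = np.sqrt((u + b**2*P**2)**2 + 4*a**2*c**2*P**4) - (u + b**2*P**2)
N_BC = np.sqrt((u + c**2*P**2)**2 + 4*a**2*b**2*P**4) - (u + c**2*P**2)
```

All six closed forms agree with the diagonalization of the corresponding partial transposes.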
\subsection{Type II}
In this subsection we choose the initial state as
\begin{equation}
\label{type2-W-1}
\hat{\rho}^W_{II} (0) = \ket{W_2} \bra{W_2}
\end{equation}
where $\ket{W_2} = a \ket{6} + b e^{i \delta_1} \ket{5} + c e^{i \delta_2} \ket{3}$ with $a^2 + b^2 + c^2 = 1$.
This initial state is LU to $\ket{W_1}$ because of $\ket{W_2} = (\sigma_x \otimes \sigma_x \otimes \sigma_x) \ket{W_1}$.
Then, by making use of Eqs. (\ref{diagonal-1}) and (\ref{non-diagonal-1}) it is
straightforward to show that $\hat{\rho}^W_{II} (t)$ is
\begin{equation}
\label{type2-W-2}
\hat{\rho}^W_{II} (t) = (1 - P_t^2)^2 \ket{0} \bra{0} + P_t^4 \ket{W_2} \bra{W_2} + 2 P_t^2 (1 - P_t^2) \sigma_{II} (t)
\end{equation}
where
\begin{eqnarray}
\label{type2-W-3}
&&\sigma_{II} (t) = \frac{1}{2} \Bigg[ (b^2 + c^2) \ket{1} \bra{1} + (a^2 + c^2) \ket{2} \bra{2} + (a^2 + b^2) \ket{4} \bra{4} \nonumber \\
&&\hspace{2.0cm}+ a b \left( e^{i \delta_1} \ket{1} \bra{2} + e^{-i \delta_1} \ket{2} \bra{1} \right)
+ a c \left( e^{i \delta_2} \ket{1} \bra{4} + e^{-i \delta_2} \ket{4} \bra{1} \right) \\ \nonumber
&&\hspace{4.0cm}+ b c \left(e^{-i (\delta_1 - \delta_2)} \ket{2} \bra{4} + e^{i (\delta_1 - \delta_2)} \ket{4} \bra{2} \right) \Bigg].
\end{eqnarray}
The spectral decomposition of $\sigma_{II} (t)$ cannot be derived analytically. Hence, an analytic computation of the $\pi$-tangle for
$\hat{\rho}^W_{II} (t)$ is impossible in general, and we have to rely on a numerical approach for the computation of the $\pi$-tangle.
\begin{figure}[ht!]
\begin{center}
\includegraphics[height=5cm]{fig4.pdf}
\caption[fig4]{(Color online) The $\gamma_0 t$ dependence of $\pi^{I}_W$ (red line) and $\pi^{II}_W$ (blue line)
when $\ket{W_1}$ and $\ket{W_2}$ interact with the Markovian environment. We choose $\lambda = 3 \gamma_0$ and $a^2 = b^2 = c^2 = 1/3$.}
\end{center}
\end{figure}
However, some special cases allow an analytic computation. In this paper we consider the special case $a^2 = b^2 = c^2 = 1/3$. In this case
the spectral decomposition of $\sigma_{II} (t)$ can be derived as
\begin{equation}
\label{type2-W-4}
\sigma_{II} (t) = \frac{2}{3} \ket{\alpha_1} \bra{\alpha_1} + \frac{1}{6} \ket{\alpha_2} \bra{\alpha_2} +
\frac{1}{6} \ket{\alpha_3} \bra{\alpha_3}
\end{equation}
where
\begin{eqnarray}
\label{type2-W-5}
&&\ket{\alpha_1} = \frac{1}{\sqrt{3}} \left(\ket{1} + e^{-i \delta_1} \ket{2} + e^{-i \delta_2} \ket{4} \right) \nonumber \\
&&\ket{\alpha_2} = \frac{1}{\sqrt{2}} \left(\ket{1} - e^{-i \delta_2} \ket{4} \right) \\ \nonumber
&&\ket{\alpha_3} = \frac{1}{\sqrt{6}} \left(\ket{1} - 2 e^{-i \delta_1} \ket{2} + e^{-i \delta_2} \ket{4} \right).
\end{eqnarray}
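As a sanity check, the decomposition in Eqs. (\ref{type2-W-4}) and (\ref{type2-W-5}) can be verified numerically. The sketch below (a NumPy check with arbitrarily chosen phases $\delta_1, \delta_2$) builds $\sigma_{II}$ from Eq. (\ref{type2-W-3}) and compares it with the spectral form:

```python
import numpy as np

d1, d2 = 0.7, -1.3                    # arbitrary phases delta_1, delta_2

def ket(n):
    # computational-basis ket |n> of three qubits, n = 0..7
    v = np.zeros(8, dtype=complex)
    v[n] = 1.0
    return v

a = b = c = 1/np.sqrt(3)
op = lambda u, v: np.outer(u, v.conj())   # |u><v|

# sigma_II(t) of Eq. (type2-W-3)
sigma = 0.5*((b**2+c**2)*op(ket(1), ket(1))
           + (a**2+c**2)*op(ket(2), ket(2))
           + (a**2+b**2)*op(ket(4), ket(4))
           + a*b*(np.exp(1j*d1)*op(ket(1), ket(2)) + np.exp(-1j*d1)*op(ket(2), ket(1)))
           + a*c*(np.exp(1j*d2)*op(ket(1), ket(4)) + np.exp(-1j*d2)*op(ket(4), ket(1)))
           + b*c*(np.exp(-1j*(d1-d2))*op(ket(2), ket(4))
                + np.exp(1j*(d1-d2))*op(ket(4), ket(2))))

# eigenvectors of Eq. (type2-W-5)
al1 = (ket(1) + np.exp(-1j*d1)*ket(2) + np.exp(-1j*d2)*ket(4))/np.sqrt(3)
al2 = (ket(1) - np.exp(-1j*d2)*ket(4))/np.sqrt(2)
al3 = (ket(1) - 2*np.exp(-1j*d1)*ket(2) + np.exp(-1j*d2)*ket(4))/np.sqrt(6)

recon = (2/3)*op(al1, al1) + (1/6)*op(al2, al2) + (1/6)*op(al3, al3)
```

The reconstructed matrix agrees with $\sigma_{II}$ for any choice of the phases.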
Thus, Eqs. (\ref{type2-W-2}) and (\ref{type2-W-4}) imply that $\hat{\rho}^W_{II} (t)$ with $a^2 = b^2 = c^2 = 1/3$ is a
rank-$5$ state; three of its eigenvectors are W-states and the remaining ones are fully-separable and bi-separable states. Hence, its residual entanglement and
three-tangles are zero.
Using Eq. (\ref{negativity-1}) one can show that ${\cal N}_{A(BC)}$, ${\cal N}_{B(AC)}$, and ${\cal N}_{(AB)C}$ are all identical:
\begin{equation}
\label{W-2-pi-1}
{\cal N}_{A(BC)} = {\cal N}_{B(AC)} = {\cal N}_{(AB)C} = \frac{1}{3} P_t^2
\left[ \sqrt{9 - 18 P_t^2 + 17 P_t^4} - 3 (1 - P_t^2) \right].
\end{equation}
Also ${\cal N}_{AB}$, ${\cal N}_{AC}$, and ${\cal N}_{BC}$ are all identical:
\begin{eqnarray}
\label{W-2-pi-2}
{\cal N}_{AB} = {\cal N}_{AC} = {\cal N}_{BC} = \left\{ \begin{array}{cc}
\frac{\sqrt{9 - 24 P_t^2 + 20 P_t^4} + 2 P_t^2 (2 - P_t^2)}{3} - 1 & \hspace{1.0cm} P_t^2 \geq 2 - \sqrt{2} \\
0 & \hspace{1.0cm} P_t^2 \leq 2 - \sqrt{2}.
\end{array} \right.
\end{eqnarray}
Thus, the $\pi$-tangle for $\hat{\rho}^W_{II} (t)$ with $a^2 = b^2 = c^2 = 1/3$ is given by
$\pi^{II}_W = {\cal N}_{A(BC)}^2 - 2 {\cal N}_{AB}^2$.
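The closed forms in Eqs. (\ref{W-2-pi-1}) and (\ref{W-2-pi-2}) can be cross-checked against a direct partial-transpose computation of the negativities. A minimal NumPy sketch (taking $\delta_1 = \delta_2 = 0$, which does not affect the negativities):

```python
import numpy as np

def ket(n):
    v = np.zeros(8, dtype=complex)
    v[n] = 1.0
    return v

def rho_II(P):
    # Eq. (type2-W-2) for a^2 = b^2 = c^2 = 1/3 and delta_1 = delta_2 = 0
    W2 = (ket(6) + ket(5) + ket(3))/np.sqrt(3)
    sig = np.zeros((8, 8), dtype=complex)
    for m in (1, 2, 4):
        for n in (1, 2, 4):
            sig[m, n] = 1/3 if m == n else 1/6
    q = P**2
    return ((1-q)**2*np.outer(ket(0), ket(0)) + q**2*np.outer(W2, W2.conj())
            + 2*q*(1-q)*sig)

def neg_one(rho, qubit):
    # negativity N = ||rho^{T_qubit}||_1 - 1 (partial transpose of one qubit)
    r = rho.reshape([2]*6).swapaxes(qubit, qubit + 3).reshape(8, 8)
    return np.abs(np.linalg.eigvalsh(r)).sum() - 1

def neg_AB(rho):
    # negativity of the two-qubit reduced state rho_AB = Tr_C rho
    r = rho.reshape([2]*6).trace(axis1=2, axis2=5).reshape(4, 4)
    r = r.reshape(2, 2, 2, 2).swapaxes(0, 2).reshape(4, 4)
    return np.abs(np.linalg.eigvalsh(r)).sum() - 1

def N_one(P):                    # Eq. (W-2-pi-1)
    q = P**2
    return (q/3)*(np.sqrt(9 - 18*q + 17*q**2) - 3*(1-q))

def N_two(P):                    # Eq. (W-2-pi-2); max(0,.) reproduces the piecewise form
    q = P**2
    return max(0.0, (np.sqrt(9 - 24*q + 20*q**2) + 2*q*(2-q))/3 - 1)

def pi_tangle(P):
    return N_one(P)**2 - 2*N_two(P)**2
```

The numerical negativities agree with the closed forms for all $0 \leq P_t \leq 1$; at $P_t = 1$ both give the pure-state value $2\sqrt{2}/3$.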
In Fig. 4 we plot $\pi^I_W (t)$ (red line) and $\pi^{II}_W (t)$ (blue line) as a function of $\gamma_0 t$
when $\ket{W_1}$ and $\ket{W_2}$ interact with the Markovian
environment. We choose $\lambda = 3 \gamma_0$ and $a^2 = b^2 = c^2 = 1/3$. As expected, both reduce to zero following the half-life rule. It is of interest
to note that $\pi^I_W (t) \geq \pi^{II}_W (t)$ over the full range of time. This means that $\ket{W_1}$ is more robust than
$\ket{W_2}$ against the Markovian environment.
\begin{figure}[ht!]
\begin{center}
\includegraphics[height=5cm]{fig5a.pdf}
\includegraphics[height=5cm]{fig5b.pdf}
\caption[fig5]{(Color online) (a) The
$a^2$ and $\gamma_0 t$ dependence of $\pi_{W}^{I} (t)$ when $c^2 = 1/3$. We choose
$\lambda = 0.01 \gamma_0$. (b) The $\gamma_0 t$ dependence of $\pi_{W}^{I} (t)$ (solid line) and $\pi_{W}^{II} (t)$ (dashed line) when
$a^2 = b^2 = c^2 = 1/3$. We choose $\lambda = 0.001 \gamma_0$. This figure implies that $\hat{\rho}^W_{I} (t)$ is more robust against the
environment than $\hat{\rho}^W_{II} (t)$. }
\end{center}
\end{figure}
In Fig. 5(a) we plot $\pi_{W}^{I} (t)$ as a function of $a^2$ and $\gamma_0 t$ when $\ket{W_1}$ is embedded in the non-Markovian environment.
We choose $c^2 = 1/3$ and $\lambda / \gamma_0 = 0.01$.
As expected the $\pi$-tangle reduces to zero as $t \rightarrow \infty$ with an oscillatory behavior.
To compare $\pi_{W}^{I} (t)$ with $\pi_{W}^{II} (t)$ we plot
both $\pi$-tangles as a function of $\gamma_0 t$ in Fig. 5(b). In this figure we choose $a^2 = b^2 = c^2 = 1/3$ and $\lambda / \gamma_0 = 0.001$.
The $\pi$-tangles $\pi_{W}^{I} (t)$ and $\pi_{W}^{II} (t)$ are plotted as solid and dashed lines respectively. In this case, as in the other
cases, the revival of entanglement occurs after its complete disappearance. It is interesting to note that, as in the Markovian case,
$\hat{\rho}^W_{I} (t)$ is more robust than $\hat{\rho}^W_{II} (t)$ against the non-Markovian environment.
\section{Conclusions}
In this paper we have examined the tripartite entanglement dynamics when each party is entangled with other parties initially,
but they locally interact with their
own Markovian or non-Markovian environment. First, we have considered
three GHZ-type initial states $\ket{\psi_I} = a \ket{000} + b e^{i \delta} \ket{111}$,
$\ket{\psi_{II}} = a \ket{001} + b e^{i \delta} \ket{110}$, and $\ket{\psi_{III}} = a \ket{011} + b e^{i \delta} \ket{100}$.
All states are LU to each other.
It turns out that the GHZ symmetry of the initial states is broken due to the effect of the environment.
We have computed the corresponding $\pi$-tangles analytically at arbitrary time $t$ in
Eqs. (\ref{ghz-1-pi-2}), (\ref{ghz-2-pi-2}), and (\ref{ghz-3-pi-2}).
It was shown that while the ESD phenomenon occurs for type I, the entanglement dynamics for the remaining types exhibits an exponential
decay in the Markovian regime.
In the non-Markovian regime the $\pi$-tangles completely vanish when
$t_n = \frac{2}{d} \left[ n \pi - \tan^{-1} (d/\lambda) \right] \hspace{.3cm} (n = 1, 2, \cdots)$ and
$t \rightarrow \infty$. As shown in Fig. 3 the revival phenomenon of entanglement occurs after complete disappearance of entanglement.
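These complete-disentanglement times can be checked numerically. The sketch below assumes the standard damped Jaynes-Cummings amplitude $P_t = e^{-\lambda t/2} \left[\cos(d t/2) + (\lambda/d) \sin(d t/2)\right]$ with $d = \sqrt{2\gamma_0\lambda - \lambda^2}$; this explicit form is an assumption here, since it is derived earlier in the paper:

```python
import numpy as np

g0 = 1.0                         # gamma_0 sets the unit of time
lam = 0.01*g0                    # non-Markovian regime: lambda << gamma_0
d = np.sqrt(2*g0*lam - lam**2)

def P(t):
    # assumed damped Jaynes-Cummings form of P_t
    return np.exp(-lam*t/2)*(np.cos(d*t/2) + (lam/d)*np.sin(d*t/2))

# zeros of P_t, i.e. the complete-disentanglement times t_n
t_n = lambda n: (2.0/d)*(n*np.pi - np.arctan(d/lam))
```

Under this assumption $P_{t_n} = 0$ exactly, so the $\pi$-tangles vanish at each $t_n$ before reviving.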
Based on the analytical results it was shown that while the robustness order against the effect of the reservoir is
$\ket{\psi_I}$, $\ket{\psi_{II}}$, $\ket{\psi_{III}}$ in the large-$a^2$ region, this order is reversed in the small-$a^2$ region.
\begin{figure}[ht!]
\begin{center}
\includegraphics[height=5cm]{fig6a.pdf}
\includegraphics[height=5cm]{fig6b.pdf}
\caption[fig6]{(Color online) The $\gamma_0 t$ dependence of the concurrences in Eqs. (\ref{bipartite1}) and (\ref{bipartite2})
when $a^2 = b^2 = c^2 = 1/3$. (a) In this figure we choose $\lambda = 3 \gamma_0$. This shows that while bipartite entanglement dynamics
for type I (red line) decays exponentially with the half-life rule, that for type II (blue line) exhibits an ESD. (b) In this figure we choose
$\lambda = 0.01 \gamma_0$. Although both entanglements decay in time, the decay rate for type II (blue line) is much faster than that for type I (red line).}
\end{center}
\end{figure}
We also have examined the tripartite entanglement dynamics for two W-type initial states
$\ket{W_1} = a \ket{001} + b e^{i \delta_1} \ket{010} + c e^{i \delta_2} \ket{100}$ and
$\ket{W_2} = a \ket{110} + b e^{i \delta_1} \ket{101} + c e^{i \delta_2} \ket{011}$ with $a^2 + b^2 + c^2 = 1$.
Like GHZ-type initial states they are LU to each other. For initial $\ket{W_1}$ state the $\pi$-tangle is analytically computed in Eq. (\ref{W-1-pi-3}).
Since, however, $\ket{W_2}$ evolves into a higher-rank state as time elapses, the analytic computation is impossible except in a few
special cases. Thus, we have computed the $\pi$-tangle analytically for the special case $a^2 = b^2 = c^2 = 1/3$. In Fig. 4 and Fig. 5 it was shown that $\ket{W_1}$
is more robust than $\ket{W_2}$ against the Markovian and non-Markovian environments. The bipartite entanglements measured by the
concurrence\cite{concurrence1} for $\hat{\rho}^W_I (t)$ and $\hat{\rho}^W_{II} (t)$ are
\begin{equation}
\label{bipartite1}
{\cal C}^I_{AB} (t) = 2 |b c| P_t^2 \hspace{1.0cm} {\cal C}^I_{AC} (t) = 2 |a c| P_t^2 \hspace{1.0cm} {\cal C}^I_{BC} (t) = 2 |a b| P_t^2
\end{equation}
and
\begin{eqnarray}
\label{bipartite2}
&& {\cal C}^{II}_{AB} (t) = 2 P_t^2 \max \left[0, |b c| - |a| \sqrt{(1 - P_t^2) (1 - a^2 P_t^2)} \right] \nonumber \\
&& {\cal C}^{II}_{AC} (t) = 2 P_t^2 \max \left[0, |a c| -|b| \sqrt{(1 - P_t^2) (1 - b^2 P_t^2)} \right] \\ \nonumber
&& {\cal C}^{II}_{BC} (t) = 2 P_t^2 \max \left[0, |a b| - |c| \sqrt{(1 - P_t^2) (1 - c^2 P_t^2)} \right].
\end{eqnarray}
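Both sets of closed forms can be checked against a direct Wootters-concurrence computation. The sketch below assumes $\hat{\rho}^W_I(t) = P_t^2 \ket{W_1}\bra{W_1} + (1-P_t^2)\ket{000}\bra{000}$ (the amplitude-damped single-excitation form; an assumption here insofar as the explicit expression appears earlier in the paper) and takes $\delta_1 = \delta_2 = 0$:

```python
import numpy as np

def ket(n):
    v = np.zeros(8, dtype=complex)
    v[n] = 1.0
    return v

def concurrence(rho4):
    # Wootters concurrence C = max(0, l1 - l2 - l3 - l4)
    sy = np.array([[0, -1j], [1j, 0]])
    YY = np.kron(sy, sy)
    lam = np.sqrt(np.abs(np.linalg.eigvals(rho4 @ YY @ rho4.conj() @ YY).real))
    lam = np.sort(lam)[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def trace_out(rho8, qubit):
    # reduced state of the other two qubits (qubit: 0=A, 1=B, 2=C)
    return rho8.reshape([2]*6).trace(axis1=qubit, axis2=qubit+3).reshape(4, 4)

a, b, c, P = 0.5, np.sqrt(0.3), np.sqrt(0.45), 0.8   # a^2+b^2+c^2 = 1
q = P**2

# type I: assumed amplitude-damped single-excitation form
W1 = a*ket(1) + b*ket(2) + c*ket(4)
rho_I = q*np.outer(W1, W1.conj()) + (1-q)*np.outer(ket(0), ket(0))

# type II: Eq. (type2-W-2) with sigma_II of Eq. (type2-W-3)
W2 = a*ket(6) + b*ket(5) + c*ket(3)
sig = 0.5*((b**2+c**2)*np.outer(ket(1), ket(1))
         + (a**2+c**2)*np.outer(ket(2), ket(2))
         + (a**2+b**2)*np.outer(ket(4), ket(4)))
for m, n, w in [(1, 2, a*b), (1, 4, a*c), (2, 4, b*c)]:
    sig += 0.5*w*(np.outer(ket(m), ket(n)) + np.outer(ket(n), ket(m)))
rho_II = (1-q)**2*np.outer(ket(0), ket(0)) + q**2*np.outer(W2, W2.conj()) \
       + 2*q*(1-q)*sig

# closed forms, Eqs. (bipartite1) and (bipartite2)
C_I = {'AB': 2*abs(b*c)*q, 'AC': 2*abs(a*c)*q, 'BC': 2*abs(a*b)*q}
C_II = {'AB': 2*q*max(0, abs(b*c) - abs(a)*np.sqrt((1-q)*(1-a**2*q))),
        'AC': 2*q*max(0, abs(a*c) - abs(b)*np.sqrt((1-q)*(1-b**2*q))),
        'BC': 2*q*max(0, abs(a*b) - abs(c)*np.sqrt((1-q)*(1-c**2*q)))}
```

For the chosen parameters the numerical concurrences of all six reduced states agree with the closed forms, including the vanishing (ESD) cases of type II.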
One can show that ${\cal C}^I \geq {\cal C}^{II}$ over the entire range of time, as for the tripartite entanglement, regardless of whether the
environment is Markovian or non-Markovian. The $\gamma_0 t$-dependence of the concurrences is plotted in Fig. 6 as a red line for type I and a blue line for type II when
(a) Markovian ($\lambda = 3 \gamma_0$) and (b) non-Markovian ($\lambda = 0.01 \gamma_0$) environments are introduced. Fig. 6(a)
shows that while the entanglement for type I exhibits an exponential decay with the half-life rule, that for type II exhibits an ESD. For the non-Markovian case
the decay rate for type II is much faster than that for type I, although both exhibit a revival of entanglement.
It is of interest to study the effect of the non-Markovian environment when the initial state is the rank-$2$ mixture
\begin{equation}
\label{conclusion-1}
\rho(p) = p \ket{\mbox{GHZ}} \bra{\mbox{GHZ}} + (1 - p) \ket{\mbox{W}} \bra{\mbox{W}}
\end{equation}
where $ \ket{\mbox{GHZ}} = (\ket{000} + \ket{111}) / \sqrt{2}$ and $\ket{\mbox{W}} = (\ket{001} + \ket{010} + \ket{100}) / \sqrt{3}$.
The residual entanglement of $\rho(p)$ is known to be
\begin{eqnarray}
\label{conclusion-2}
\tau(p) = \left\{ \begin{array}{cc}
0 & \hspace{1.0cm} 0 \leq p \leq p_0 \\
g_I(p) & \hspace{1.0cm} p_0 \leq p \leq p_1 \\
g_{II} (p) & \hspace{1.0cm} p_1 \leq p \leq 1
\end{array} \right.
\end{eqnarray}
where
\begin{eqnarray}
\label{conclusion-3}
&&p_0 = \frac{4 \sqrt[3]{2}}{3 + 4 \sqrt[3]{2}} = 0.626851\cdots
\hspace{1.0cm} p_1 = \frac{1}{2} + \frac{3 \sqrt{465}}{310} = 0.70868\cdots \\ \nonumber
&&g_I (p) = p^2 - \frac{8 \sqrt{6}}{9} \sqrt{p (1 - p)^3} \hspace{1.0cm}
g_{II}(p) = 1 - (1 - p) \left(\frac{3}{2} + \frac{1}{18} \sqrt{465} \right).
\end{eqnarray}
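A quick numerical check confirms the quoted decimal values of $p_0$ and $p_1$ and the continuity of $\tau(p)$ across the two matching points:

```python
import numpy as np

# transition points of Eq. (conclusion-3)
p0 = 4*2**(1/3)/(3 + 4*2**(1/3))
p1 = 0.5 + 3*np.sqrt(465)/310

# the two branches of the residual entanglement tau(p)
g_I  = lambda p: p**2 - (8*np.sqrt(6)/9)*np.sqrt(p*(1-p)**3)
g_II = lambda p: 1 - (1-p)*(1.5 + np.sqrt(465)/18)
```

One finds $g_I(p_0) = 0$ and $g_I(p_1) = g_{II}(p_1)$, so $\tau(p)$ is continuous at both transition points.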
It would be interesting, at least for us, to see how the non-Markovian environment modifies the Coffman-Kundu-Wootters inequality
$4 \min [\mbox{det} (\rho_A)] \geq {\cal C}(\rho_{AB})^2 + {\cal C}(\rho_{AC})^2$ in this model. A similar issue was discussed in
Ref. \cite{costa14}.
Since we have derived the $\pi$-tangles analytically, we tried to find the entanglement invariants \cite{yu09-1,yonac07}, which were originally found in
a four-qubit system. In our three-qubit systems we could not find any invariants. It would be of interest to examine the entanglement invariants in
higher-qubit and qudit systems.
Building upon earlier work~\cite{ArkaniHamed:2002sp,Creminelli:2005qk,deRham:2010ik},
de Rham, Gabadadze and Tolley (dRGT) \cite{deRham:2010kj} first constructed a consistent interacting theory of a massive spin-2 field using an auxiliary flat fiducial metric. Though the study of this theory and other nonlinear massive gravity variants has now reached a somewhat mature stage \cite{deRham:2014zqa}, there remain many outstanding questions to be addressed and understood.
Foremost amongst these is the status of cosmological solutions within the theory.
The dRGT theory does not admit
flat or closed cosmological solutions where the spacetime and fiducial metrics are simultaneously homogeneous and isotropic~\cite{D'Amico:2011jj,Gumrukcuoglu:2011ew}. Though this complicates matters computationally, it does not
automatically destroy the phenomenological viability of cosmological solutions. The spacetime
upon which matter fields propagate can be homogeneous and isotropic for any choice of spatial curvature
\cite{Gratia:2012wt,Volkov:2012cf,Volkov:2012zb}. Indeed, many explicit examples of cosmological solutions to the theory have been found where accelerated expansion occurs without a true cosmological
constant~\cite{deRham:2010tw,Koyama:2011xz,Gumrukcuoglu:2011ew,Koyama:2011yg,Nieuwenhuizen:2011sq,
Berezhiani:2011mt,D'Amico:2011jj,Gratia:2012wt,Kobayashi:2012fz,
Volkov:2012cf,Volkov:2012zb}.
Though these solutions are satisfactory as far as background evolution is concerned, the theory of fluctuations about these solutions is somewhat paradoxical. Analyses of perturbations about these self-accelerating backgrounds~\cite{Wyman:2012iw,Khosravi:2013axa} reveal pathologies such as unboundedness of the Hamiltonian for isotropic perturbations, or strong-coupling of degrees of freedom in specific backgrounds, but often these appear in quite a subtle way \cite{Motloch:2014nwa}. For example it has been claimed
that the number of degrees of freedom identified by a Hamiltonian analysis depends
on a choice of coordinates \cite{Khosravi:2013axa}. Therefore, a worthwhile aim is to understand more fully the relationship between these potentially problematic features.
More generally, this broad class of self-accelerating solutions provides an interesting playground to explore fundamental field-theoretical questions which remain unresolved in massive gravity and beyond. In particular, there has been much recent interest in the interplay between superluminality in field theories and strong coupling phenomena, as well as what if anything the presence of these features implies about the well-posedness
of the Cauchy problem (for various perspectives, see~\cite{Adams:2006sv,Dubovsky:2007ac,Shore:2007um,deRham:2014lqa,Hollowood:2009tw,Hinterbichler:2009kq,Porrati:2009bs,Goon:2010xh,Padilla:2010tj,deFromont:2013iwa,Cooper:2013ffa,Joyce:2014kja,deRham:2013hsa,Creminelli:2014zxa,Keltner:2015xda,Deser:2012qx,Deser:2013eua,Deser:2014hga,Deser:2014fta,Deser:2015wta}). The self-accelerating sector of dRGT allows us an opportunity to explore these issues with explicit solutions: the theory of isotropic perturbations about these solutions turns out to be rather simple \cite{Wyman:2012iw}. In particular, it is possible to solve exactly for the characteristic hypersurfaces upon which graviton stress energy fluctuations propagate. This makes it rather easy to study perturbative aspects of the causal structure of self-accelerating backgrounds.
Therefore, another worthwhile avenue to explore is to what extent we can abstract general lessons about superluminality, strong coupling and Cauchy surfaces from these particular examples.
In this paper, we pursue both of these goals in tandem. After reviewing the structure of the dRGT theory---and in particular the construction of self-accelerating solutions of Ref.~\cite{Gratia:2012wt}---we focus on three particular families of vacuum solutions, one of which is new
to this work. We explicitly solve for the characteristics for isotropic perturbations and examine their causal structure
via conformal diagrams. In this sense, our study is related to the analysis of Refs.~\cite{Deser:2012qx,Izumi:2013poa,Deser:2013eua,Deser:2014hga,Deser:2014fta,Deser:2015wta}, but our exact solutions are general and do not require discontinuous or ``shock'' conditions, which automatically entail strong coupling. We find many interesting phenomena: in particular, except in one unique case the solutions we consider necessarily exhibit superluminal propagation of fluctuations; only some of them provide a well-posed Cauchy problem, and some originate from singular conditions where strong coupling might reside.
Perturbative superluminality on specific backgrounds does not necessarily indicate superluminality in the full theory
(see \cite{Dubovsky:2007ac,Shore:2007um,deRham:2014lqa}), which would present problems for
a local and Lorentz-invariant UV completion of the theory itself~\cite{Adams:2006sv},
nor does it imply
acausal structures such as closed timelike curves \cite{Babichev:2007dw,Burrage:2011cr}.
However these examples do highlight the fact that the highly related notions of superluminality, strong coupling and well-posed Cauchy problem are conceptually distinct.
Our examples also highlight an issue that is unique to a theory with two metrics.
In all three families of vacuum solutions, the Minkowski fiducial space fails to cover the
entire spacetime. The point at which the Minkowski chart, or unitary gauge, ends is
called a determinant singularity; it is diffeomorphism-invariant
and hence physical \cite{Gratia:2013gka,Gratia:2013uza}. In these examples, worldlines of some observers
can intersect the singularity in finite proper time and for others the singularity lies
in their past light cone requiring {\it ad hoc} rules for continuing worldlines or boundary conditions.
The paper is organized as follows.
In \S \ref{sec:theory}, we review the construction of bi-isotropic self-accelerating background solutions
\cite{Gratia:2012wt} and isotropic perturbations around them \cite{Wyman:2012iw}. We then
specialize to the vacuum case in \S \ref{sec:vacuum} and discuss the global structure of the spacetime background and fiducial metric for three families
of solutions, one of which is new to this paper.
In \S \ref{sec:perturb}, we employ a characteristic analysis of perturbations around these backgrounds to
expose the relationship between superluminality, strong coupling and the well-posedness of the
Cauchy problem.
We discuss the implications of these results in \S \ref{sec:discuss}.
\section{Self-Acceleration in Massive Gravity}
\label{sec:theory}
In this section we briefly review the properties of the dRGT theory of massive gravity, the construction of self-accelerating background solutions, and spherically symmetric perturbations around them.
\subsection{Fiducial Metric}
The dRGT \cite{deRham:2010kj} nonlinear theory of a massive spin-2 field
is given by the following action, which propagates only the expected 5 polarizations of a massive graviton:
\begin{equation}
\label{drgt}
S = \frac{M_{\rm Pl}^2}{2}\int{\rm d}^4x\sqrt{-g}\left(R-m^2\sum_{k=0}^4 \frac{\beta_k}{k!} F_k\left(\boldsymbol{\gamma}\right)
\right),
\end{equation}
where $M_{\rm Pl}^2 = (8\pi G)^{-1}$ is the reduced Planck mass
and the $F_k$ terms are
characteristic polynomials of the matrix $\boldsymbol{\gamma}$. These can be written explicitly as
\begin{align}
F_0(\boldsymbol{\gamma}) & = 1, \nonumber\\
F_1(\boldsymbol{\gamma}) & = \tr{\boldsymbol{\gamma}}, \nonumber\\
F_2(\boldsymbol{\gamma}) & = \tr{\boldsymbol{\gamma}}^2 - \tr{\boldsymbol{\gamma}^2} , \\
F_3(\boldsymbol{\gamma}) & =\tr{\boldsymbol{\gamma}}^3 - 3 \tr{\boldsymbol{\gamma}} \tr{\boldsymbol{\gamma}^2} + 2 \tr{\boldsymbol{\gamma}^3} , \nonumber\\
F_4(\boldsymbol{\gamma}) &= \tr{\boldsymbol{\gamma}}^4 - 6 \tr{\boldsymbol{\gamma}}^2 \tr{\boldsymbol{\gamma}^2} + 3 \tr{\boldsymbol{\gamma}^2}^2 + 8 \tr{\boldsymbol{\gamma}} \tr{\boldsymbol{\gamma}^3}
- 6 \tr{\boldsymbol{\gamma}^4} ,
\nonumber
\end{align}
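The $F_k$ are, up to normalization, the elementary symmetric polynomials of the eigenvalues of $\boldsymbol{\gamma}$: $F_k = k!\, e_k(\boldsymbol{\gamma})$. A quick numerical check on a generic $4\times4$ matrix standing in for $\boldsymbol{\gamma}$:

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.standard_normal((4, 4))          # generic 4x4 stand-in for gamma
t = lambda k: np.trace(np.linalg.matrix_power(g, k))

# the trace polynomials F_k as written in the text
F = [1.0,
     t(1),
     t(1)**2 - t(2),
     t(1)**3 - 3*t(1)*t(2) + 2*t(3),
     t(1)**4 - 6*t(1)**2*t(2) + 3*t(2)**2 + 8*t(1)*t(3) - 6*t(4)]

# det(x I - g) = x^4 - e1 x^3 + e2 x^2 - e3 x + e4
coeffs = np.poly(g)
e = [((-1)**k*coeffs[k]).real for k in range(5)]
```

One finds $F_k = k!\, e_k$ to machine precision, which is why the sum over $k$ in Eq.~\eqref{drgt} truncates at $k=4$.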
where $[\,]$ denotes the trace of the enclosed matrix. The matrix $\boldsymbol{\gamma}$ is the square root of the product of the inverse spacetime metric ${\bf g}^{-1}$ and a flat fiducial metric $\boldsymbol{\Sigma}$
\begin{equation}
\ul{\gamma}{\mu}{\alpha} \ul{\gamma}{\alpha}{\nu} = g^{\mu\alpha}\Sigma_{\alpha\nu} \,.
\label{eqn:gamma}
\end{equation}
The parameters of the theory defined by Eq.~\eqref{drgt} are
$m$---the graviton mass---and the $\beta_k$. These parameters are not all independent, but rather they depend on two fundamental independent parameters~$\{\alpha_3,\alpha_4\}$ through
\begin{align}
\beta_0 &= -12 (1+ 2\alpha_3+2\alpha_4), \nonumber\\
\beta_1 &= 6(1 + 3 \alpha_3 + 4\alpha_4),\nonumber\\
\beta_2 &= -2(1+ 6 \alpha_3+12\alpha_4 ), \\
\beta_3 &= 6(\alpha_3+ 4\alpha_4), \nonumber\\
\beta_4 &= -24 \alpha_4.\nonumber
\end{align}
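One consequence of these definitions is that the potential vanishes on flat space, where $\boldsymbol{\gamma}$ is the identity: $\sum_k \beta_k F_k(\mathbb{1})/k! = 0$ for any $\alpha_3, \alpha_4$, with $F_k(\mathbb{1}) = \{1, 4, 12, 24, 24\}$ for a $4\times4$ identity. A quick check:

```python
import math

def betas(a3, a4):
    # beta_k in terms of the two fundamental parameters alpha_3, alpha_4
    return [-12*(1 + 2*a3 + 2*a4),
            6*(1 + 3*a3 + 4*a4),
            -2*(1 + 6*a3 + 12*a4),
            6*(a3 + 4*a4),
            -24*a4]

F_id = [1, 4, 12, 24, 24]      # F_k evaluated on the 4x4 identity matrix

def potential_at_identity(a3, a4):
    return sum(bk*Fk/math.factorial(k)
               for k, (bk, Fk) in enumerate(zip(betas(a3, a4), F_id)))
```

The sum vanishes identically, so the flat-space configuration carries no residual cosmological constant from the potential.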
The chart of the spacetime metric for which the fiducial metric is represented by the Minkowski metric
$\Sigma_{\mu\nu}=\eta_{\mu\nu}$ is called
unitary gauge and there the spacetime metric possesses additional degrees of freedom compared to Einstein gravity.
Diffeomorphism invariance can be restored by employing the St\"uckelberg trick to introduce four scalar functions, $\phi^a$, to represent the flat fiducial metric in arbitrary coordinates
\begin{equation}
\Sigma_{\mu\nu} = \eta_{a b} \partial_\mu\phi^a\partial_\nu\phi^b .
\end{equation}
The $\phi^a$ are equal to the unitary gauge coordinates $x_u^\mu$ and absorb the extra polarization states in arbitrary gauges where diffeomorphism invariance is used to eliminate or constrain
the metric terms.
The matrix $\partial_\mu\phi^a$ then represents the Jacobian of the coordinate transform between a general set of coordinates
$x^\mu$ and $x_u^\mu$.
Minkowski space may not cover, or equivalently unitary gauge may not chart, the entire spacetime; this situation is signaled by a non-invertible Jacobian
transform, which we call a determinant singularity \cite{Gratia:2013gka}.
Unlike a pure coordinate singularity, a determinant singularity does not depend on the
chart of the spacetime. This is because the ratio of determinants of the two metrics
\begin{equation}
{\rm det}({\bf g}^{-1} {\boldsymbol{\Sigma}})={\rm det}({\bf g}_u^{-1} {\boldsymbol{\eta}})
=-{\rm det}({\bf g}_u^{-1})
\end{equation}
is a spacetime scalar. A coordinate singularity in the unitary chart of the spacetime metric ${\bf g}_u$ therefore becomes a
coordinate-invariant determinant singularity. In contrast to a curvature singularity, the two metrics need not individually have any diffeomorphism invariant singularities, despite the fact that combined they display a determinant singularity.
Physically, the presence of two metrics means that worldlines
of observers may end after a finite proper time in spacetime has elapsed, since
an infinite interval can elapse as measured by the fiducial metric. This geodesic incompleteness is another indicator that we should take determinant singularities seriously.
It is possible to continue worldlines past these singularities in the spacetime, but this requires multiple copies of Minkowski space and a rule for
continuing the chart---or equivalently the St\"{u}ckelberg\ fields---that is not directly imposed by the action.
\subsection{Background Equations of Motion}
\label{sec:backgroundeoms}
For any isotropic spacetime metric,
including cosmological solutions with arbitrary matter content, there are solutions to the dRGT equations of motion
where the stress-energy associated with the graviton potential in Eq.~\eqref{drgt} behaves as a cosmological constant.
The construction of Ref.~\cite{Gratia:2012wt}, which we now review, is in fact extensible beyond dRGT to cases with non-flat bi-isotropic and dynamical metrics \cite{Motohashi:2012jd,Gratia:2013uza}.
Any metric with {spatial slices invariant under SO(3) rotations} can be written in isotropic coordinates, in which the line element takes the form
\begin{equation}
{{\rm d}}s^2 = -b^2(t,r) {\rm d} t^2 + a^2(t,r) \big({\rm d} r^2 + r^2 {\rm d} \Omega_2^2 \big),
\label{isotropicmetric}
\end{equation}
where ${\rm d} \Omega_2^2$ is the line element on a 2-sphere. If the fiducial metric is rotationally invariant
in the same coordinate system ({\it i.e.}, the metrics are bi-isotropic),
the St\"{u}ckelberg\ fields take the following form,
\begin{eqnarray}
\phi^0 &=& f(t,r),\nonumber\\
\phi^i &=& g(t,r) \frac{x^i}{r},
\label{stuckyback}
\end{eqnarray}
and are completely specified by the two functions $f$ and $g$.
Note that $f = t_u$ is the unitary gauge time and $g=r_u$ is the unitary gauge radius that describe the
fiducial flat line element
\begin{eqnarray}
{\rm d} s_\Sigma^2 &=& \Sigma_{\mu\nu} {\rm d} x^\mu {\rm d} x^\nu
= -{\rm d} f^2 + {\rm d} g^2 + g^2 {\rm d} \Omega_2^2 .
\label{eqn:fidline}
\end{eqnarray}
Unitary gauge uniquely
specifies the coordinates, whereas the isotropic condition does not, leading to multiple paths to
finding the same solution and superficially different descriptions of their dynamics.
Once solutions for $f$ and $g$ are found using any isotropic construction they may be
re-expressed in an alternate choice of coordinates since both are spacetime scalars.
Upon inserting the ans\"atze~\eqref{isotropicmetric} and~\eqref{stuckyback} into the action~\eqref{drgt}, we find that the equation of motion for the spatial St\"uckelberg fields is satisfied by
\begin{eqnarray}
g(t,r) =x_0 a(t,r)r,
\label{eqn:gsoln}
\end{eqnarray}
where the constant $x_0$ solves the polynomial equation $P_1(x_0)=0$ with
\begin{equation}
P_1(x) \equiv 2(3-2x)+6(x-1)(x-3)\alpha_3+24(x-1)^2\alpha_4.
\end{equation}
On this solution, the effective stress tensor due to the presence of the non-derivative graviton interactions takes the form of an effective cosmological constant
\begin{equation}
T_{\mu\nu} = - \Lambda_{\rm eff} M_{\rm Pl}^2 g_{\mu\nu},
\end{equation}
where
\begin{equation}
\Lambda_{\rm eff}= \frac{1}{2} m^2 P_0(x_0),
\end{equation}
and the polynomial $P_0(x)$ is given by
\begin{align}
P_0(x) &= - 12 - 2 x(x-6) - 12(x-1)(x-2)\alpha_3
\nonumber\\&\qquad -24(x-1)^2\alpha_4 .
\end{align}
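Since $P_1(x)$ is quadratic in $x$, solving for $x_0$ and evaluating $\Lambda_{\rm eff} = \frac{1}{2} m^2 P_0(x_0)$ is straightforward; a small numerical sketch (for $\alpha_3 = \alpha_4 = 0$ it returns $x_0 = 3/2$ and $P_0(x_0) = 3/2$, i.e. $\Lambda_{\rm eff} = 3m^2/4 > 0$, a self-accelerating branch):

```python
import numpy as np

def P1(x, a3, a4):
    return 2*(3 - 2*x) + 6*(x-1)*(x-3)*a3 + 24*(x-1)**2*a4

def x0_roots(a3, a4):
    # P1 is quadratic in x: c2 x^2 + c1 x + c0
    c2 = 6*a3 + 24*a4
    c1 = -4 - 24*a3 - 48*a4
    c0 = 6 + 18*a3 + 24*a4
    if c2 == 0:
        return [-c0/c1]          # degenerate (linear) case
    return list(np.roots([c2, c1, c0]))

def P0(x, a3, a4):
    return -12 - 2*x*(x-6) - 12*(x-1)*(x-2)*a3 - 24*(x-1)**2*a4

# Lambda_eff = 0.5 * m**2 * P0(x0, a3, a4) for each admissible root x0
```

Each root $x_0$ fixes the effective cosmological constant of the corresponding self-accelerating branch.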
Unitary time, $f(t,r)$, satisfies the equation
\begin{equation}
\sqrt{X} = \frac{W}{x_0}+x_0,
\label{eqn:feqnsoln}
\end{equation}
where \begin{align}
X & \equiv\Bigl(\frac{\dot{f}}{b}+\mu\frac{g'}{a}\Bigr)^2-\Bigl(\frac{\dot{g}}{b}+\mu\frac{f'}{a}\Bigr)^2, \nonumber\\
W & \equiv \frac{\mu}{ab} \( \dot f g' - \dot g f' \),
\label{eqn:XW}
\end{align}
with branches due to the matrix square root $\boldsymbol{\gamma}$ defined in Eq.~\eqref{eqn:gamma} allowing
$\mu\equiv \pm 1$. Here and throughout, overdots denote derivatives with respect to $t$ and primes denote
derivatives with respect to $r$.
Note that $W$ is proportional to the determinant
of the Jacobian transform from unitary gauge to isotropic coordinates. When $W=\pm\infty$ or $W = 0$, or when $W$ is undefined
because either $f$ or $g$
is not continuously differentiable, the Jacobian
transform is not invertible. We call all of these cases a determinant singularity. Analytic continuation is sometimes possible, especially in the latter two cases, but requires a second fiducial metric and solution with its own choice
of branch \cite{Gratia:2013gka}. We pick $\mu=1$ for the examples in the following sections.
Using Eq.~\eqref{eqn:gsoln} in Eq.~\eqref{eqn:feqnsoln}, we see that the latter equation can be cast as
\begin{eqnarray}
\label{fEOMback}
b^2 f'^2 + 2 a r(a' \dot f^2 - \dot a \dot f f')+r^2 (a' \dot f - \dot a f')^2 \nonumber
\\
= x_0^2 \(a'^2 b^2 r^2 + 2 a' a b^2 r - \dot a^2 a^2 r^2\) .
\end{eqnarray}
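Equation~\eqref{fEOMback} can be verified symbolically: inserting $g = x_0 a r$ into Eq.~\eqref{eqn:feqnsoln} with $\mu = 1$ and squaring reproduces it identically. A SymPy sketch, treating the field values and their first derivatives as independent symbols ($a_t = \dot a$, $a_r = a'$, and similarly for $f$):

```python
import sympy as sp

a, b, at, ar, ft, fr, r, x0 = sp.symbols('a b a_t a_r f_t f_r r x_0')

# background Stueckelberg sector with g = x0*a*r and mu = +1
g_t = x0*at*r                  # \dot g
g_r = x0*(ar*r + a)            # g'

X = (ft/b + g_r/a)**2 - (g_t/b + fr/a)**2       # Eq. (XW)
W = (ft*g_r - g_t*fr)/(a*b)

# the two sides of Eq. (fEOMback)
lhs = b**2*fr**2 + 2*a*r*(ar*ft**2 - at*ft*fr) + r**2*(ar*ft - at*fr)**2
rhs = x0**2*(ar**2*b**2*r**2 + 2*ar*a*b**2*r - at**2*a**2*r**2)

# squaring sqrt(X) = W/x0 + x0 must be equivalent to lhs = rhs:
identity = a**2*b**2*(X - (W/x0 + x0)**2) + (lhs - rhs)
```

The combination `identity` simplifies to zero, confirming that Eq.~\eqref{fEOMback} is exactly the squared form of Eq.~\eqref{eqn:feqnsoln} on this branch.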
This nonlinear partial differential equation has an infinite number of distinct self-accelerating solutions, each of
which possesses the same
background spacetime metric and $\Lambda_{\rm eff}$. In order to specify a solution,
one assumes a functional form for $a$ and $b$ consistent with the Einstein equations sourced by
$\Lambda_{\rm eff}$ and the matter content
and then solves Eq.~\eqref{fEOMback} to determine $f$. In \S \ref{sec:vacuum}, we will perform this procedure, choosing $a$ and $b$ so that there is no matter content and the background spacetime is de Sitter.
For solutions to the equation of motion~\eqref{fEOMback} which satisfy the condition
\begin{equation}
\dot f f' = \dot g g',
\label{eqn:bidiagonal}
\end{equation}
the fiducial metric Eq.~\eqref{eqn:fidline} is also diagonal in the same $(t,r)$ coordinates that the spacetime
metric is diagonal. We shall see that this is
exactly the condition for which the kinetic term of isotropic perturbations vanishes \cite{Khosravi:2013axa}.
However, we can analyze the dynamics of the same solution in alternate isotropic coordinate systems which mix the temporal and radial coordinates where bi-diagonality and vanishing kinetic terms do not apply.
Moreover,
the diffeomorphism invariance of the St\"{u}ckelberg\ representation means that we are not even limited to
isotropic coordinate systems when analyzing the dynamics.
\subsection{Isotropic Perturbations}
\label{sec:isopert}
Given the rotational invariance of the background spacetime metric and St\"uckelberg fields, isotropic perturbations about these background solutions are straightforward to analyze ({\it cf.}\ \cite{Motloch:2014nwa}). This is a reasonable starting point, as uncovering problems
in this restricted class of perturbations would already indicate a pathology of the background solution.
Note that the converse is not true: a
background solution with healthy isotropic perturbations can still show pathologies in the
anisotropic sector.
In order to study fluctuations about the background solutions we are considering, we perturb both the metric variables and St\"uckelberg fields about their background values;
isotropic perturbations are specified by four functions of $(t,r)$:
$\delta a, \delta b, \delta f$ and $\delta g$.
Varying the quadratic action for these variables given in Ref.~\cite{Wyman:2012iw} yields
an independent equation of motion for a specific combination
\begin{equation}
\delta \Gamma(t,r) = \delta g(t,r) - x_0 r \delta a(t,r) .
\end{equation}
This combination is special because it is precisely the variable which quantifies perturbations away from the effective cosmological constant solution; perturbations for which $\delta \Gamma = 0$ still satisfy~\eqref{eqn:gsoln} and therefore
leave $\Lambda_{\rm eff}$ unchanged. The equation of motion for the variable $\delta\Gamma$ is~\cite{Wyman:2012iw}
\begin{eqnarray}
&& \partial_t \left[
\frac{ a^2 r }{\sqrt{X}} \left( \frac{\dot f}{b} + \mu \frac{g'}{a} \right)\delta \Gamma \right] =
\partial_r \left[
\frac{ab r }{\sqrt{X}} \left( \mu \frac{\dot g}{b} + \frac{f'}{a}\right)\delta \Gamma\right]
\nonumber\\
&&\qquad + \mu a^2 r^2 \left[ \frac{(a r)'}{ar } \dot \delta \Gamma - \frac{\dot a}{a}\delta \Gamma' \right] .
\label{eqn:gs}
\end{eqnarray}
Equation~\eqref{eqn:gs} is first order in both time and space derivatives and does not depend on
other perturbations. The $\delta f$ equation of motion is also first order, but
unlike the $\delta \Gamma$ equation it requires specification of $\delta \Gamma$ and other perturbations.
To linear order in the St\"{u}ckelberg\ fluctuations, the stress energy tensor of the perturbations depends only
on $\delta \Gamma$ and we therefore focus on its dynamics. The equation of motion \eqref{eqn:gs} is equivalent to local conservation of stress energy \cite{Wyman:2012iw}.
The coefficient of the temporal derivative term,
\begin{eqnarray}
\label{At}
A_t = \frac{ a^2 r }{\sqrt{X}} \left( \frac{\dot f}{b} + \mu \frac{g'}{a} \right) - \mu a r {(a r)'},
\end{eqnarray}
is of special interest to the time evolution of the perturbations.
In a choice of isotropic coordinates
where $A_t=0$, the kinetic term for $\delta \Gamma$ vanishes and hence the equation of
motion \eqref{eqn:gs} does not specify the evolution of perturbations off of a constant time surface.
Combining Eq.~\eqref{fEOMback} and Eq.~\eqref{eqn:bidiagonal}, we see that $A_t=0$ whenever the spacetime and
fiducial metrics are bi-diagonal and vice versa.
When $A_t=0$, the energy density, momentum and pressure associated with any $\delta \Gamma$ fluctuation vanish to
linear order
\cite{Wyman:2012iw}. Thus energy-momentum conservation, though enforced by the equation of motion,
also does not yield equations that allow evolution of $\delta \Gamma$ off of constant time surfaces.
Instead, the nonvanishing anisotropic stresses must obey a constraint equation.
On the other hand,
the fact that the equation of motion
can be written in terms of different choices of isotropic time---or more generally
different foliations of the spacetime---indicates that the
vanishing of the kinetic term is a coordinate dependent statement~\cite{Khosravi:2013axa}.\footnote{Note that there are more prosaic examples of this phenomenon: a scalar field with a superluminal phase velocity considered in an appropriately Lorentz-boosted frame will have a vanishing kinetic term \cite{Adams:2006sv}.}
In a choice where the kinetic term is present, the equation of motion does supply
an evolution equation, or equivalently the constraint
on stresses becomes a non-trivial equation for energy-momentum conservation.
We shall see that this is a generic property for perturbations that propagate superluminally, {\it i.e.}, on spacelike characteristics.
Relatedly, Ref.~\cite{Khosravi:2013axa} shows through a Hamiltonian analysis
that the number of propagating (isotropic) degrees of freedom in a given
isotropic frame is one if $A_t \neq 0$ and zero if $A_t = 0$. The number of
degrees of freedom counted in this way therefore depends on the coordinate
system. For $A_t = 0$, the Hamiltonian is not associated with the time evolution of the system, in the sense that it defines evolution along a spatial slice, rather than transverse to it. As such, it is somewhat unclear how to interpret the Hamiltonian analysis in this case.
Further, in the case $A_t \neq 0$, the canonical momentum for $\delta \Gamma$ appears linearly in the Hamiltonian, and thus the Hamiltonian is unbounded from below.
\begin{figure*}
\center
\includegraphics[width = 0.99\textwidth]{fig_coverings_all.pdf}
\caption{Conformal diagrams showing portions of de Sitter space charted by closed (left),
flat (middle) and open (right) foliations. Thick lines indicate coordinate
singularities. Superimposed are lines of constant isotropic time and radius for each foliation.
}
\label{fig:coverings}
\end{figure*}
\section{Vacuum Solutions}
\label{sec:vacuum}
In this section we specialize to background solutions without any matter content. In these
solutions the only source of stress-energy determining the background is the effective
cosmological constant, $\Lambda_{\rm eff}$, and the spacetime metric describes a de Sitter
space. We first discuss the conformal and three isotropic charts of the de Sitter space that
play a prominent role in both the construction of solutions and the investigation of their global properties.
We then turn to three families of self-accelerating solutions and discuss their global structure, namely the appearance of their determinant singularities.
\subsection{De Sitter Charts}
One special feature of de Sitter space is that there is no preferred time coordinate with respect to which to define a foliation. Sections of the full spacetime can therefore be charted by isotropic
coordinates where the constant time slices have positive, negative or zero spatial curvature.
The conformal diagram for de Sitter space can be constructed from the positive curvature (closed) foliation of the spacetime, where the line element takes the form
\begin{equation}
\label{conformal_metric}
{\rm d} s^2 = \left( \frac{1}{H \sin \eta}\right)^2 \left(-{\rm d}\eta^2 + {\rm d}\chi^2 + \sin^2\chi {\rm d}\Omega_2^2\right),
\end{equation}
with the dimensionless conformal time $\eta \in (-\pi, 0)$ and the comoving radial coordinate
$\chi \in [0, \pi]$. Here $H^2 = \Lambda_{\rm eff}/3$. We use the $(\eta,\chi)$ conformal diagram
throughout to represent the spacetime.
Closed, flat, and open isotropic coordinates can alternatively be used to foliate portions of de Sitter
space and are useful both for finding solutions to the background massive gravity equations and for investigating perturbations. With the transformations
\begin{eqnarray}
\sinh (H t_c) &=& - \cot \eta ,\nonumber\\
H r_c &=& 2 \tan(\chi/2),
\label{eqn:closedtr}
\end{eqnarray}
the line element~\eqref{conformal_metric} takes its closed isotropic form
\begin{equation}
\label{eqn:closeddS}
{\rm d} s^2 = -{\rm d} t_c^2 +\left[ \frac{ \cosh{(H t_c)} }{1 +(H r_c)^2/4} \right]^2 \left({\rm d} r_c^2 + r_c^2 {\rm d}\Omega_2^2\right),
\end{equation}
where $t_c \in (-\infty,\infty)$, $r_c \in [0,\infty)$. These coordinates chart the entire de Sitter spacetime.
Similarly, defining the coordinates
\begin{eqnarray}
e^{H t_f} &=& -\frac{\cos\chi+\cos\eta}{\sin\eta} ,\nonumber\\
H r_f &=& \frac{\sin\chi}{\cos\chi+\cos\eta},
\label{eqn:flattr}
\end{eqnarray}
yields the flat isotropic form
\begin{equation}
{\rm d} s^2 = -{\rm d} t_f^2 +e^{2 H t_f}\left({\rm d} r_f^2 + r_f^2 {\rm d}\Omega_2^2\right),
\end{equation}
where $H t_f \in(-\infty,\infty)$, $H r_f \in [0,\infty)$. These coordinates chart the upper left half of the
conformal diagram $\eta > \chi-\pi$.
Finally, the coordinate definition
\begin{eqnarray}
\ln\left[ \tanh(H t_o/2 ) \right] &=& \tanh^{-1}\left( \frac{\sin\eta}{\cos\chi} \right),\nonumber\\
2 \tanh^{-1}(H r_o/2)&=&\tanh^{-1}\left( \frac{\sin\chi}{\cos\eta} \right),
\label{eqn:opentr}
\end{eqnarray}
gives the open isotropic form
\begin{equation}
{\rm d} s^2 = -{\rm d} t_o^2 +\left[ \frac{ \sinh{(H t_o)} }{1 - (H r_o)^2/4} \right]^2 \left( {\rm d} r_o^2 + r_o^2 {\rm d}\Omega_2^2\right),
\end{equation}
where $H t_o \in (0,\infty)$, $H r_o \in [0,2)$. These coordinates chart the upper left wedge of the conformal diagram
$\eta > \chi-\pi/2$, corresponding to $1/8$ of the space. We display these charts with the curves of constant $t_i$ and $r_i$ in Fig.~\ref{fig:coverings}.
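As a cross-check of these transformations (ours, not part of the original derivation), the following Python sketch verifies numerically that the angular coefficient of the line element agrees between the conformal chart and each of the three isotropic charts. The sample point and the choice $H=1$ are arbitrary, subject only to lying in the open wedge where all four charts overlap:

```python
import numpy as np

H = 1.0
eta, chi = -0.3, 0.2  # sample point inside the open wedge (eta > chi - pi/2)

# conformal chart: angular coefficient sin^2(chi) / (H sin(eta))^2
conf = np.sin(chi)**2 / (H * np.sin(eta))**2

# closed chart, Eq. (eqn:closedtr): sinh(H t_c) = -cot(eta), H r_c = 2 tan(chi/2)
tc = np.arcsinh(-1.0 / np.tan(eta)) / H
rc = 2.0 * np.tan(chi / 2) / H
closed = (np.cosh(H * tc) / (1 + (H * rc)**2 / 4))**2 * rc**2

# flat chart, Eq. (eqn:flattr)
tf = np.log(-(np.cos(chi) + np.cos(eta)) / np.sin(eta)) / H
rf = np.sin(chi) / (np.cos(chi) + np.cos(eta)) / H
flat = np.exp(2 * H * tf) * rf**2

# open chart, Eq. (eqn:opentr)
to = 2.0 * np.arctanh(np.exp(np.arctanh(np.sin(eta) / np.cos(chi)))) / H
ro = 2.0 * np.tanh(np.arctanh(np.sin(chi) / np.cos(eta)) / 2) / H
opn = (np.sinh(H * to) / (1 - (H * ro)**2 / 4))**2 * ro**2

assert np.allclose([closed, flat, opn], conf)
```

The same identities can be confirmed analytically; the angular coefficient reduces to $\sin^2\chi/(H\sin\eta)^2$ in every chart.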
\begin{figure}
\center
\includegraphics[width = 0.40\textwidth]{fig_constant_g.pdf}
\caption{Constant unitary gauge radius $g$ curves, {given by Eq.~\eqref{eq:gfordS}}. These are common to all
bi-isotropic solutions.}
\label{fig:const_g}
\end{figure}
Self-accelerating background solutions appear superficially different in different isotropic coordinates.
More importantly, differences in the constant time surfaces {cause the kinetic terms
for the perturbations to appear differently in} each case, as emphasized by
Ref.~\cite{Khosravi:2013axa}.
Of course, the causal structure of the
conformal diagram remains unchanged and serves to highlight the spacelike, timelike or lightlike
nature of curves rather than coordinate dependent definitions of simultaneity. For this reason, it will be convenient to plot characteristics on the conformal diagram.
\begin{figure*}
\center
\includegraphics[width = 0.99\textwidth]{fig_constant_f_all.pdf}
\caption{Constant unitary time curves for various well-known self-accelerating solutions: open $f_o$ (left), and stationary $f_C$ for $C = 1/2$
(middle) and $f_C$ for $C = 1$ (right). Solid lines indicate {constant unitary time slices extending from} $\chi=0$ which end at a determinant singularity (red lines). Dashed lines indicate an extension of solutions with a second copy of the fiducial metric. }
\label{fig:constant_f}
\end{figure*}
\subsection{Explicit Solutions}
\label{sec:backgroundsolns}
We now combine the isotropic de Sitter charts with the formalism of \S \ref{sec:backgroundeoms} to construct vacuum solutions to the massive gravity equations of motion.
\vspace{.07cm}
\noindent
{\bf Radial solution: $g$ and equations of motion}
Since closed isotropic coordinates chart the whole spacetime, it is convenient to use these coordinates
to find self-accelerating solutions. Using the closed line element Eq.~\eqref{eqn:closeddS} in the
radial unitary gauge solution of Eq.~\eqref{eqn:gsoln}, we obtain
\begin{equation}
\label{eq:gfordS}
g = x_0 \frac{ \cosh{(H t_c)} }{1 +(H r_c)^2/4}r_c = -\frac{ x_0}{H} \frac{\sin\chi}{ \sin \eta} .
\end{equation}
Fig.~\ref{fig:const_g} shows the contours of constant $g$ in the conformal diagram. As the conformal diagram
makes obvious,
$g$ is 4-fold symmetric in the de Sitter spacetime, since $\eta \rightarrow -\pi -\eta$ and/or
$\chi \rightarrow \pi -\chi$ leave it unchanged.
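The second equality in Eq.~\eqref{eq:gfordS} can be verified numerically with a short sketch (ours; the parameter values and sample point are arbitrary illustrative choices):

```python
import numpy as np

x0, H = 1.3, 1.0              # arbitrary illustrative values
eta, chi = -0.3, 0.2          # sample point with eta in (-pi, 0)

# closed-chart coordinates from Eq. (eqn:closedtr)
tc = np.arcsinh(-1.0 / np.tan(eta)) / H
rc = 2.0 * np.tan(chi / 2) / H

# g written in closed coordinates versus in conformal coordinates
g_closed = x0 * np.cosh(H * tc) / (1 + (H * rc)**2 / 4) * rc
g_conf = -(x0 / H) * np.sin(chi) / np.sin(eta)
assert np.isclose(g_closed, g_conf)
```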
We then look for a choice of unitary time $f$ that solves Eq.~\eqref{fEOMback}, which in the coordinates~\eqref{conformal_metric} takes the explicit form
\begin{eqnarray}
\label{fEOMbackConformal}
&&
\left( f_{,\chi}^2\cot^2\chi -
f_{,\eta}^2 \right)\sin^2\eta
+ f_{,\chi}^2 + f_{,\chi} f_{,\eta} \cot\chi\sin(2\eta) \nonumber\\
&&\quad
= -\frac{x_0^2}{H^2} \csc^2\eta.
\end{eqnarray}
Below we consider three explicit families of solutions to Eq.~\eqref{fEOMbackConformal}, corresponding to different self-accelerating backgrounds whose perturbations we will examine in the subsequent sections.
As an aside, we note that~\eqref{fEOMbackConformal} can be cast in a particularly simple form,
\begin{equation}
\label{eq:volkovfeq}
f_{,\tau}^2 - f_{,\rho}^2 =\frac{x_0^2}{H^2},
\end{equation}
by a further coordinate transform~\cite{Mazuet:2015pea}
\begin{equation}
\rho= \frac{\cos\chi}{\sin\eta}, \quad \tau=-\cot\eta,
\end{equation}
with $\tau \in (-\infty,\infty), \rho \in (-\infty,\infty)$.
\vspace{.07cm}
\noindent
{\bf Open solution: $f_o$}
The simple ans\"atz that $f$ is a function of $\eta$ alone leads to the solutions first
identified in Ref.~\cite{Gumrukcuoglu:2011ew} through an open slicing construction, where $f$ is given by
\begin{equation}
\label{solMukohyama}
f_o(\eta,\chi)
= - \frac{x_0}{H} \cot \eta .
\end{equation}
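As a consistency check (ours), one can verify symbolically that $f_o$ satisfies Eq.~\eqref{fEOMbackConformal}; a minimal sympy sketch, with the shorthand $k \equiv x_0/H$:

```python
import sympy as sp

eta, chi = sp.symbols('eta chi')
k = sp.symbols('k', positive=True)   # shorthand for x0/H

f = -k / sp.tan(eta)                 # the open solution f_o
f_e, f_c = sp.diff(f, eta), sp.diff(f, chi)

# left- and right-hand sides of Eq. (fEOMbackConformal)
lhs = ((f_c**2 / sp.tan(chi)**2 - f_e**2) * sp.sin(eta)**2
       + f_c**2 + f_c * f_e * sp.sin(2*eta) / sp.tan(chi))
rhs = -k**2 / sp.sin(eta)**2

assert sp.simplify(lhs - rhs) == 0
```

Equivalently, since $f_o = k\tau$ with $\tau = -\cot\eta$, the reduced form \eqref{eq:volkovfeq} is satisfied trivially.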
In open slicing, the spacetime and fiducial metrics are bi-diagonal (because Eq.~\eqref{eqn:bidiagonal} is satisfied) and manifestly
homogeneous and isotropic. This property of open slicing applies beyond the de Sitter solutions considered here to a general FRW spacetime, giving this solution a special status.
As pointed out by Ref.~\cite{Khosravi:2013axa}, and rediscovered by
Ref.~\cite{Mazuet:2015pea}, bi-diagonality does not hold in the closed or flat slicings of de Sitter.
However, the ability to define multiple homogeneous and isotropic slicings and threadings of the spacetime
is special to de Sitter. For a general FRW spacetime, this freedom no longer exists since the homogeneity
of an evolving background density picks out a unique time slicing.
Curves of constant unitary gauge time for this solution are plotted in Fig.~\ref{fig:constant_f}.
Note that the coordinate pair $(f_o,g)$ maps to two different $(\eta,\chi)$ points in the spacetime
related by $\chi \rightarrow \pi -\chi$ and correspondingly the determinant of the Jacobian transformation to unitary gauge is zero along $\chi=\pi/2$.
Therefore, more than one copy of the Minkowski fiducial space is required to cover the entire spacetime.
For this solution, the singularity lies in the past light cone of an observer at $\chi=0$ and
hence boundary conditions that represent the continuation with a second fiducial metric
are required. Note that---as mentioned in \S \ref{sec:theory}---such a rule for continuation is {\it ad hoc} as it must be imposed by hand.
\vspace{.07cm}
\noindent
{\bf Stationary solution: $f_C$}
Self-accelerating solutions where the spacetime metric in unitary gauge is stationary were first identified
in Ref.~\cite{Koyama:2011yg}. In conformal coordinates, unitary time takes the form
\begin{eqnarray}
\label{solKoyama}
f_C &=& \frac{x_0}{C H} \left( \ln \left\lvert \frac{C^2 e^{H t_f}}{1-y}\right\rvert - y \right),\nonumber\\
y &=& \sqrt{1 + C^2( \sin^2\chi/\sin^2\eta -1)},
\end{eqnarray}
where $C \in (0, 1]$ is a free parameter, $y \in [0,\infty)$, and $t_f(\chi,\eta)$ is the extension of flat isotropic
time to the full de Sitter space using Eq.~\eqref{eqn:flattr}. The coordinate pair $(f_C,g)$ maps to two spacetime points $(\eta,\chi)$ and
$(-\pi-\eta,\pi-\chi)$. The inversion singularity is thus at $\chi = -\eta$ and is characterized
by a divergence in unitary time, $f_C \rightarrow \infty$, which renders $f_C$ non-differentiable there. Again, two copies of
the Minkowski fiducial space are required to cover the spacetime. In this case,
the singularity is along the past light cone of the observer at $\chi=0$ at the terminal
conformal time $\eta=0$. Conformal diagrams showing curves of constant $f_C$ for $C = 1/2$
and $C = 1$ are plotted in Fig.~\ref{fig:constant_f}.
A different construction of this solution that starts in static de Sitter coordinates \cite{Mazuet:2015pea} is in fact equivalent to Eq.~\eqref{solKoyama} after
relating their parameter $q$ to $C$ as $q^2 = 1/C^2 - 1$. The equivalence between the two classes of solutions can also be seen directly in the
construction itself since Ref.~\cite{Koyama:2011yg} showed that stationary unitary coordinates and
static de Sitter coordinates are simply related by a radially dependent offset to the static time coordinate.
\begin{figure*}
\center
\includegraphics[width = 0.99\textwidth]{fig_constant_f_Khosravi.pdf}
\caption{Constant unitary time curves for the new $f_{ab}$ class of self-accelerating solutions.
In addition to the zero determinant singularity (thick red lines) there also appear singularities where the determinant is infinite (blue lines), in the $a=b=1$ case the two coincide. Across the infinite determinant singularity, a na\"ive analytic continuation would require a Riemannian rather than Lorentzian second fiducial metric.}
\label{fig:constant_f_Khosravi}
\end{figure*}
\vspace{.07cm}
\noindent
{\bf New solution: $f_{ab}$}
Finally, we can generalize a particular solution introduced in Ref.~\cite{Khosravi:2013axa} to a new two-parameter
class, where the temporal St\"uckelberg field takes the following form
\begin{equation}
f_{ab} =\pm \frac{x_0}{H} \sqrt{(a - \cot \eta)^2 - (b - \cos \chi \csc \eta)^2 \vphantom{\big|}}.
\end{equation}
Here, the parameters $a,b$ can take on any real value except for $a=b=0$. For that case, we have $f_{00}^2 = g^2 - (x_0/H)^2$, such that unitary time and radius cannot identify a unique spacetime point. Ref.~\cite{Khosravi:2013axa} previously considered the case of $a=0$, $b=1$.
In the full spacetime $f_{ab}$ is not guaranteed to be real and so unitary gauge
ends in a determinant singularity where $W = \pm \infty$. Approaching this singularity,
the change in unitary time (or proper time measured by the fiducial metric) per unit conformal time (or proper time measured by the spacetime metric) diverges.
Unlike the
case of $W=0$, an analytic continuation of unitary coordinates
would make a copy of the fiducial metric with a Riemannian rather than Lorentzian signature.
Thus, unlike the other solutions, it does not appear that $f_{ab}$ can be continued beyond this type of determinant
singularity with copies of the fiducial metric in the same class of solutions.
Changing the sign of $a$ reflects
solutions vertically across $\eta = -\pi/2$ whereas changing the sign of $b$ reflects horizontally across $\chi=\pi/2$.
The determinant singularities $W =\pm \infty$ occur where $f_{ab}=0$
and bound regions past which the solutions cannot be continued within the class.
These singularities intersect at
\begin{equation}
\cot\eta = a, \quad \cos \chi =- \frac{b}{\sqrt{1+a^2}},
\label{eqn:Wcross}
\end{equation}
if $|b| \le \sqrt{1+a^2}$. The $W=0$ singularity
occurs along the curve $a\cos\chi =b\cos\eta$ and intersects both $W =\pm \infty$
singularities at their crossing point.
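These intersection properties can be checked numerically; the sketch below (ours, with arbitrarily chosen parameters satisfying $|b|\le\sqrt{1+a^2}$) confirms that the point of Eq.~\eqref{eqn:Wcross} lies both on the $f_{ab}=0$ locus and on the $W=0$ curve:

```python
import numpy as np

a, b = 1.0, 0.5                      # sample parameters with |b| <= sqrt(1+a^2)

# intersection point from Eq. (eqn:Wcross)
eta = np.arctan(1.0 / a) - np.pi     # eta in (-pi, 0) with cot(eta) = a
chi = np.arccos(-b / np.sqrt(1 + a**2))

# f_ab vanishes there: (a - cot eta)^2 = (b - cos chi / sin eta)^2
lhs = (a - 1.0 / np.tan(eta))**2
rhs = (b - np.cos(chi) / np.sin(eta))**2
assert np.isclose(lhs, rhs)

# and the point also lies on the W = 0 curve a cos(chi) = b cos(eta)
assert np.isclose(a * np.cos(chi), b * np.cos(eta))
```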
We show several examples from this class in Fig.~\ref{fig:constant_f_Khosravi}.
Displacing $b$ from zero at $a=0$ breaks the $\chi$ or left-right symmetry of the conformal diagram, allowing constant unitary time surfaces to foliate the spacetime near one pole
with the other pole hidden behind a $W=\pm \infty$ determinant singularity.
The unbroken $\eta$ or top-bottom symmetry has a corresponding $W=0$ singularity at $\eta=-\pi/2$ across which
solutions can be extended with a second fiducial metric.
Displacing $a$ from zero at $b=0$ does the converse, allowing constant unitary
time surfaces to foliate the spacetime near both poles at either the top or bottom sections of the diagram with
a $W=0$ singularity at $\chi=\pi/2$, again with a second branch of the solution across it.
Displacing both equally makes one of the $W =\pm \infty$ curves coincide with
$W=0$, allowing foliation
around only one pole
and only on the top or bottom section.
\subsection{Determinant Singularities}
All three families of solutions exhibit determinant singularities. Worldlines of observers can intersect these
singularities
in the bulk of the spacetime, {\it i.e.}, after only a finite amount of proper time has elapsed. For other observers, the singularity lies in the past light cone.
Both properties imply that the incompleteness of the solutions there cannot simply be ignored.
In the open $f_o$ solution, a na\"ive
continuation past the $W=0$ singularity keeps the St\"{u}ckelberg\ fields or unitary gauge coordinates continuous and
differentiable but multivalued. In the stationary $f_C$ solution, na\"ive continuation keeps them continuous
but not differentiable at the singularity. In the new $f_{ab}$ solutions, na\"ive continuation beyond the
$W=\pm \infty$ singularities is not possible, and when they intersect at a specific
point in the spacetime bulk, they do so at the $W=0$ singularity.
Finally, by pushing the $W=\pm\infty$ singularities of the $f_{ab}$ case to the top or bottom of the conformal diagram
it can be made to resemble the $f_o$ and $f_C$ solutions.
The $f_o$ solution picks out the same unitary time surfaces as the limiting case of $a \rightarrow \infty$, $b=0$ since $f_o$ and $f_{ab}$ are then linearly related.
Likewise, the $a = b\rightarrow \infty$ limit of $f_{ab}$ leads to the same
surfaces of constant unitary time as the
$f_C$ solution for $C \rightarrow 0$ in the upper right portion of the conformal diagram, although in this case $f_{ab}$ is a more general function of $f_C$. The second
copy of $f_C$ for $C \rightarrow 0$ in the lower left corresponds similarly to $a=b\rightarrow -\infty$.
As we shall see in the next section, the appearance of these singularities
influences the characteristic curves on which information about perturbations propagate.
\section{Perturbation Characteristics}
\label{sec:perturb}
In this section we use the method of characteristics to explicitly solve the equation of motion \eqref{eqn:gs} for the field fluctuation $\delta\Gamma$ in the three families of background solutions studied in \S \ref{sec:backgroundsolns}. The characteristic curves of this linear differential equation define hypersurfaces along which spherically
symmetric stress-energy perturbations propagate. The conformal diagram of the curves exemplifies the difference between superluminality, strong coupling, and the well-posedness of the Cauchy problem.
\subsection{Method of Characteristics}
We can solve the $\delta\Gamma$ equation of motion for perturbations propagating on the background solutions using the method of characteristics.
The characteristic curves of this equation also clarify the causal structure of solutions.
Transforming Eq.~\eqref{eqn:gs} from isotropic coordinates to
the conformal coordinates $(\eta, \chi)$ leads to a differential equation of the form
\begin{equation}
\label{eqn:gs2}
V_\eta \delta \Gamma_{,\eta} + V_\chi \delta \Gamma_{,\chi} + A \delta \Gamma =0.
\end{equation}
Here,
\begin{eqnarray}
V_\eta &=& f_{,\chi} - \tan \chi \tan \eta\, f_{,\eta},\nonumber\\
V_\chi &=& f_{,\eta} + \left(\cot \chi + \csc^2 \eta \tan \chi\right)\tan \eta\,
f_{,\chi} .
\label{eqn:Vcharacteristic}
\end{eqnarray}
This equation depends on the particular background $f(\eta, \chi)$ around
which we perturb and describes propagation of the $\delta \Gamma$ perturbations in
the regions of de Sitter space where $f$ is defined.
While $A(\eta,\chi)$ is not important for the construction of characteristics themselves, it does enter
{into determining the field profile for $\delta\Gamma$ along the characteristics.} For completeness, it is given by
\begin{eqnarray}
\frac{A}{N} &=& \sin\eta \left[\csc\eta \left(\frac{V_\eta}{N}-R_\eta\right)\right]_{,\eta} \nonumber\\
&& +\cos^2\frac{\chi}{2}\left[ \sec^2\frac{\chi}{2}
\left(\frac{V_\chi}{N}-R_\chi\right) \right]_{,\chi},
\end{eqnarray}
where
\begin{eqnarray}
R_\eta &=&\frac{\mu}{2} \cos\chi \sin^2\chi \cot\frac{\chi}{2} \csc\eta, \nonumber\\
R_\chi &=&\mu \cos^2\frac{\chi}{2} \sin^2\chi \cot\eta \csc\eta ,\nonumber\\
N&=& \csc^2\chi \sec\chi \tan\eta \Big[ 2 \mu\cos\eta\tan\frac{\chi}{2} f_{,\chi}\nonumber\\
&&+ \sec^2\frac{\chi}{2} \left( \mu \cos\chi\sin\eta f_{,\eta} -\frac{x_0}{H} \right)\Big].
\end{eqnarray}
\begin{figure*}
\center
\includegraphics[width = 0.99\textwidth]{fig_characteristics_all.pdf}
\caption{Characteristic curves of $\delta \Gamma$ for the
background solutions shown in Fig.~\ref{fig:constant_f}. For $\delta \Gamma_o$,
characteristics coincide with constant open time curves in the open wedge of
Fig.~\ref{fig:coverings} and no spacelike surface intersects all characteristics.
For $\delta \Gamma_{1/2}$, characteristics are also spacelike but those within the past light cone at
$\chi=0$ {do all intersect the spacelike surface $\eta = -\pi$.} For the special case of $\delta\Gamma_1$,
the characteristics are all lightlike.
Red lines divide the characteristics on either side of the determinant singularity.
}
\label{fig:characteristics}
\end{figure*}
The most general solution to Eq.~\eqref{eqn:gs2} can be found from its characteristics, which are integral curves for the vector field
\begin{equation}
\boldsymbol{V} = (V_\eta, V_\chi) ,
\end{equation}
with the first component in the $\eta$ direction.
Information from boundary conditions on $\delta \Gamma$, specified on a surface that intersects the characteristic curves, propagates along those curves to provide the general solution in the bulk.
Thus conformal diagrams of
characteristic curves for particular background solutions present a succinct way of describing
their causal structure.
In particular, the characteristic curves are
\begin{align}
{\rm timelike:} &\quad |V_\eta| > |V_\chi| , \nonumber\\
{\rm lightlike:} &\quad |V_\eta| = |V_\chi| ,\nonumber\\
{\rm spacelike:}&\quad |V_\eta| < |V_\chi| .
\end{align}
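This classification is easily encoded in a small helper (a sketch; the function name and sample point are ours). As an example we apply it to the open solution $f_o$, for which $f_{,\chi}=0$ and hence $|V_\eta|/|V_\chi| = |\tan\chi\tan\eta|$:

```python
import numpy as np

def classify(V_eta, V_chi, tol=1e-12):
    """Classify a characteristic direction on the conformal diagram."""
    if abs(abs(V_eta) - abs(V_chi)) < tol:
        return "lightlike"
    return "timelike" if abs(V_eta) > abs(V_chi) else "spacelike"

# For f_o = -k cot(eta): f_chi = 0, f_eta = k csc^2(eta), so
# V_eta = -tan(chi) tan(eta) k csc^2(eta) and V_chi = k csc^2(eta).
k = 1.0
eta, chi = -np.pi / 4, np.pi / 6     # sample point in the open wedge
csc2 = 1.0 / np.sin(eta)**2
V_eta = -np.tan(chi) * np.tan(eta) * k * csc2
V_chi = k * csc2
assert classify(V_eta, V_chi) == "spacelike"
```

Here $|\tan\chi\tan\eta| < 1$ at the sample point, consistent with the spacelike characteristics of the open solution discussed below.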
Since Eq.~(\ref{eqn:gs2}) is a first order differential equation, the characteristic
analysis yields complete solutions for $\delta \Gamma$, including both smooth and discontinuous
cases. This should be contrasted with a characteristic analysis of a second order
differential equation where in most cases the characteristics only describe the propagation
of discontinuous fronts.
Once a solution for $\delta \Gamma$ is specified from boundary data in the conformal coordinate system it can be expressed
in any coordinate system since it transforms as a scalar
function in spacetime.
\subsection{Explicit Solutions}
We now apply this formalism to the explicit background solutions derived in \S\ref{sec:backgroundsolns} and identify the characteristic curves upon which isotropic stress-energy perturbations propagate.
\vspace{.07cm}
\noindent
{\bf Open solution: $f_o$}
For the $f_o$ background solution
\eqref{solMukohyama}, the general solution given by the characteristics is
\begin{eqnarray}
\delta \Gamma_o(\eta, \chi) &=& \frac{\sin^2 \eta}{ \sin^2 \chi} F(\phi_o) , \nonumber\\
\phi_o(\eta,\chi) &=& \cos\chi/\sin\eta ,
\label{eqn:charopen}
\end{eqnarray}
where $F$ is a completely arbitrary function of its argument $\phi_o$.
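By construction, $\phi_o$ is constant along the integral curves of $\boldsymbol{V}$; a sympy sketch (ours, with $k \equiv x_0/H$) confirms that $\boldsymbol{V}\cdot\nabla\phi_o = 0$ on the $f_o$ background:

```python
import sympy as sp

eta, chi, k = sp.symbols('eta chi k')

f = -k / sp.tan(eta)                                   # background f_o
f_e, f_c = sp.diff(f, eta), sp.diff(f, chi)
# V components from Eq. (eqn:Vcharacteristic)
V_eta = f_c - sp.tan(chi) * sp.tan(eta) * f_e
V_chi = f_e + (1/sp.tan(chi) + sp.tan(chi)/sp.sin(eta)**2) * sp.tan(eta) * f_c

phi = sp.cos(chi) / sp.sin(eta)                        # characteristic variable
transport = V_eta * sp.diff(phi, eta) + V_chi * sp.diff(phi, chi)
assert sp.simplify(transport) == 0
```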
Curves of $\phi_o=$\,const., shown in
Fig.~\ref{fig:characteristics} (left), are the
characteristics of the equation and comparison with Eq.~\eqref{eqn:opentr} shows that they
coincide with constant open time $t_o=$\,const.\ surfaces in the wedge charted by the open coordinates. Correspondingly,
in open isotropic coordinates, Eq.~\eqref{eqn:gs} loses its kinetic term $(A_{t_o}=0)$, indicating
that initial conditions at $t_o=$\,const.\ do not propagate off that surface. Open time
slices are therefore dynamically disconnected, leaving the evolution of $\delta \Gamma$ unspecified. In particular, information propagates instantaneously along the characteristics as measured by open time.
This behavior does not violate local conservation of energy and momentum since they are both zero for
any $\delta \Gamma_o$ in these coordinates \cite{Wyman:2012iw}.
In flat or closed coordinates, these features take a superficially different form. Characteristics are still spacelike, but the finite stress
components in open coordinates transform to energy and momentum. This energy-momentum is conserved by
the solution \eqref{eqn:charopen} as a consequence of Eq.~\eqref{eqn:gs}.
As discussed in \S \ref{sec:isopert}, this conservation law is the transformation of the
constraint on the spatial stresses in open coordinates.
On the other hand,
the stress-energy emanates from $\chi=0$ where the spatial profiles of the $\delta\Gamma_o$
solutions diverge. Although it is not manifest in the linearized treatment from the quadratic Lagrangian, generically one might expect perturbation theory to break down here with higher order interactions making the fluctuations strongly coupled.
This class of solution was excluded by the analysis of
Ref.~\cite{Gumrukcuoglu:2011zh} by implicitly demanding regularity at $\chi=0$, leading to their conclusion that the St\"{u}ckelberg\ fields
contained no degrees of freedom. Our characteristic analysis makes this assumption explicit:
it corresponds to assigning boundary conditions on the timelike $\chi=0$ curve.
More generally, there is no spacelike Cauchy surface at all in the open wedge, since a surface must be timelike to intersect all characteristics. The lack of a spacelike Cauchy surface is diffeomorphism invariant.
Thus, even in flat or closed slicing, counting degrees of freedom according to initial data would lead to the
conclusion that the $f_o$ case does not possess a degree of freedom that admits a well-posed Cauchy problem.
The spacetime defined by the solution $f_o$ is an example of a spacetime that is not globally hyperbolic.\footnote{The existence of such a pathological spacetime as a solution to dRGT does not imply pathologies inherent to the theory itself; indeed, general relativity admits solutions which are not globally hyperbolic.}
While imposing a regular boundary at $\chi=0$ for these characteristics might seem reasonable in the open wedge itself,
it is interesting to track characteristics intersecting this boundary in different parts of the de Sitter space. Fig.~\ref{fig:characteristics} (left) shows that in the lower left wedge, where the characteristics are mirror
images of the open wedge, there is a spacelike Cauchy surface at $\eta\rightarrow -\pi$ despite
superluminality, but the characteristics end rather than begin at $\chi=0$. Stress-energy on
a characteristic then flows into the origin, requiring a boundary condition or nonlinear completion of the theory to specify its further evolution. Thus non-singular behavior on the open wedge at $\chi=0$ may require special initial conditions in the lower left wedge. On the other hand, this behavior is analogous to the collapse of a
perfectly spherically symmetric density shell in ordinary linearized, Newtonian gravity. The finite angular momentum of generic perturbations may prevent singular behavior---we leave investigation of this possibility
to future work. In any case, while the inwardly directed characteristics might signal strong coupling at $\chi=0$ where the fluctuations diverge, as with a black hole nothing emanates classically from this point within the lower left wedge.
Interestingly, in the central diamond the $\delta\Gamma_o$ characteristics
are timelike, or subluminal, and admit a well-posed Cauchy problem. However, the characteristics
are split by the determinant singularity into left and right halves. Although information in
$\delta\Gamma_o$ never crosses this singularity in the linearized theory, the worldlines of other particles can. From the perspective of a massive gravity theory with only one copy of the
fiducial metric, the determinant singularity appears as another spatial boundary condition. Even allowing the theory to be extended, the boundary condition must be imposed by hand at each order in perturbation theory, {\it e.g.}, by demanding that
the St\"{u}ckelberg\ fields are smooth across the boundary to the second copy of the fiducial metric
\cite{Gratia:2013gka}.
\begin{figure*}
\center
\includegraphics[width = 0.99\textwidth]{fig_characteristics_Khosravi.pdf}
\caption{Characteristic curves of $\delta \Gamma_{ab}$ for the new class of
$f_{ab}$ solutions illustrated in Fig.~\ref{fig:constant_f_Khosravi}. The determinant singularities
strongly influence the causal structure of the characteristics. These backgrounds share many of the features of the $f_o$ and $f_C$ spacetimes discussed in Fig.~\ref{fig:characteristics}
and provide new ones due to the $W=\pm \infty$ determinant singularities (blue curves).
}
\label{fig:characteristics_Khosravi}
\end{figure*}
\vspace{.07cm}
\noindent
{\bf Stationary solution: $f_C$}
The characteristics of perturbations around the $f_C$ backgrounds share some, but not all, of the properties of the $f_o$ background.
Here Eq.~\eqref{eqn:gs2} is solved by
\begin{eqnarray}
\delta \Gamma_C(\eta, \chi)& = &
\frac{ \sin^2 \eta}
{ \sin^2\chi } \frac{F(\phi_C)}{y(\eta,\chi)} ,\nonumber\\
\phi_C(\eta,\chi)&=&
\frac{\sin \eta [1-y(\eta,\chi)]}
{\cos \chi + \cos \eta},
\end{eqnarray}
where $F$ is again an arbitrary function of its argument and $y$ was defined in
Eq.~\eqref{solKoyama}.
In Fig.~\ref{fig:characteristics} (middle), we show the $\phi_C=$ const.\ curves for $C=1/2$.
These characteristics are spacelike across the whole spacetime.
Even though $\phi_C=$\,const.\ does not coincide with the open, flat or closed choice of
isotropic time, these surfaces foliate the spacetime. Thus in principle we could define
a new choice of time $t_C = \phi_C$ for which the kinetic term for $\delta \Gamma_C$
vanishes, just like it does for $\delta \Gamma_o$ in the open slicing. With this choice of time coordinate, information
along the characteristics again propagates instantaneously and energy-momentum conservation becomes a spatial constraint.
On the lower left half of the $\delta\Gamma_C$ diagram, the causal structure resembles the lower left wedge of the $\delta \Gamma_o$ case.
Here $\eta \rightarrow -\pi$ provides a spacelike Cauchy surface and the characteristics
end at $\chi=0$ with a divergent $\delta \Gamma_{1/2}$ radial profile. On the upper
right half, the structure resembles the open wedge of $\delta \Gamma_o$ with characteristics
emanating from the other pole of the closed de Sitter space. However, the two halves
are divided by the null line that designates the determinant singularity where $f$ is not
continuously differentiable. Thus the outgoing characteristics of the upper right half are not within the past light cone at $\chi=0$, unlike
the $\delta \Gamma_o$ case. This example illustrates that superluminality, a well-defined
Cauchy problem, and singular field profiles in the past that might indicate problems originating from a point of strong coupling are not one-to-one related.
In the special case of $C=1$, $y=-\sin\chi/\sin\eta$ and the characteristics of $\delta\Gamma_1$
become null (see Fig.~\ref{fig:characteristics} and Ref.~\cite{Wyman:2012iw}). This special case was interpreted by
Ref.~\cite{Koyama:2011yg} as having no vector in the background, based on a
decoupling limit analysis. Although this choice
eliminates the superluminality found in other solutions, the lightlike determinant singularity still
exists for this case. While it
only occurs on
the past lightcone of the observer at $\chi=0$, $\eta=0$, it would be within the past light
cone at other points in the spacetime and still require null boundary conditions to define
global solutions. Likewise other observers in the lower left region would reach the determinant
singularity after finite proper time. Even with solutions continued beyond the singularity, these observers would begin
to cross characteristics for which no spacelike Cauchy surface exists.
\vspace{.07cm}
\noindent
{\bf New solution: $f_{ab}$}
For the new class of $f_{ab}$ backgrounds, the general solution is
\begin{eqnarray}
\delta \Gamma_{ab}(\eta, \chi)& = &
\frac{\csc^2 \chi \sin^3 \eta}{\cos \eta - a \sin
\eta}F(\phi_{ab}) ,\nonumber\\
\phi_{ab}(\eta,\chi)&=&
\frac{\cos \chi - b \sin \eta}
{\cos \eta - a \sin \eta} .
\end{eqnarray}
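As for the open case, $\phi_{ab}$ should be constant along the characteristics of the $f_{ab}$ background. A sympy sketch (ours) verifies $\boldsymbol{V}\cdot\nabla\phi_{ab}=0$ by numerical evaluation at a sample point where $f_{ab}$ is real, taking the $a=0$, $b=1$ case of Ref.~\cite{Khosravi:2013axa}:

```python
import sympy as sp

eta, chi = sp.symbols('eta chi')
k, a, b = sp.symbols('k a b', positive=True)   # k is shorthand for x0/H

f = k * sp.sqrt((a - 1/sp.tan(eta))**2 - (b - sp.cos(chi)/sp.sin(eta))**2)
f_e, f_c = sp.diff(f, eta), sp.diff(f, chi)
# V components from Eq. (eqn:Vcharacteristic)
V_eta = f_c - sp.tan(chi) * sp.tan(eta) * f_e
V_chi = f_e + (1/sp.tan(chi) + sp.tan(chi)/sp.sin(eta)**2) * sp.tan(eta) * f_c

phi = (sp.cos(chi) - b * sp.sin(eta)) / (sp.cos(eta) - a * sp.sin(eta))
transport = V_eta * sp.diff(phi, eta) + V_chi * sp.diff(phi, chi)

# sample point (ours) where the square root argument is positive
val = transport.subs({a: 0, b: 1, k: 1,
                      eta: -sp.Rational(1, 5), chi: sp.Rational(13, 10)}).evalf()
assert abs(val) < 1e-10
```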
This solution serves to illustrate that the nature of the characteristics is strongly influenced
by the global structure of the determinant singularities discussed in \S\ref{sec:backgroundsolns}.
Characteristics run tangent to these singularities as information in the perturbations cannot
cross these curves (see Fig.~\ref{fig:characteristics_Khosravi}). If $|b| <
\sqrt{1+a^2}$, the $W=\pm\infty$ and $W=0$
singularities intersect within the spacetime bulk at the point given by
Eq.~\eqref{eqn:Wcross}.
Though the characteristics tangent to these singularities also intersect,
the perturbation amplitude $\delta \Gamma_{ab}$ diverges at this point, which hence marks a likely
locus of strong coupling.
Null lines that emanate from this
point serve as separatrices which divide regions of super- and subluminal propagation much like the
$f_o$ case. This family of examples
shows that these null lines are not necessarily related to special curves in the
open, closed or flat de Sitter slicing. Where these null lines intersect the poles, the characteristics
change from ingoing into the pole, which can all be intersected by a single
spacelike Cauchy surface in the past, to
outgoing, which cannot. For the special case that $|a|=|b|$, one of these null lines
coincides with the determinant singularity.
\subsection{Superluminality and Bi-Isotropy}
In all but one example ($\delta\Gamma_C$ with $C=1$), the characteristics
intersect the poles $\chi
= 0, \pi$ at right angles at all but a finite number of points.
In conformal coordinates, perturbations propagate
instantaneously there.
As we now show, this is a generic property of bi-isotropic solutions. In our explanation we focus only on the $\chi = 0$
pole as treatment of the other pole is analogous. Expanding unitary time around
$\chi = 0$ in a series
\begin{equation}
\label{fExpansion}
f(\eta,\chi) = h_0(\eta) + h_1(\eta) \chi + h_2(\eta) \chi^2 + {\cal{O}}(\chi^3)
\end{equation}
and using the $\chi$ expansion of the equation of motion \eqref{fEOMbackConformal},
one can show that $h_1$ is always identically zero. Physically this means that
the bi-isotropy condition and the
ability to make arbitrary Lorentz boosts in the fiducial space ensures that unitary time
and conformal time can be locally aligned. The difference between surfaces of
unitary time and conformal time thus grows at most quadratically with the distance to the
pole, as $\chi^2$, or more generally differs only by the curvature corrections to a locally
flat spacetime metric. Since the same is true between conformal time and closed, flat, or
open isotropic time, instantaneous propagation in conformal coordinates implies
instantaneous propagation in isotropic coordinates at the poles. In fact, the
arguments below for the generality of spacelike characteristics
apply beyond the vacuum cases considered here to spacetime metrics
that are locally flat near the bi-isotropy point.
Expanding the vectors defined by Eq.~\eqref{eqn:Vcharacteristic} in the same fashion,
\begin{equation}
\boldsymbol{V} = \left(
\begin{array}{c}
{\cal O}(\chi) \\
2 h_2 \tan \eta + h_{0,\eta} + {\cal O}(\chi) \\
\end{array}
\right) .
\end{equation}
In a neighborhood of a general point $(\eta_0, 0)$ the leading order terms are
finite and the vector field $\boldsymbol{V}$ is aligned with the curves of
constant conformal time $\eta$, signaling instantaneous propagation. Using the expansion of the background equation of motion
\eqref{fEOMbackConformal}, it is possible to relate
$h_2$ and $h_{0,\eta}$ to prove that infinite superluminality at the given point
is avoided if and only if
\begin{equation}
\label{LuminalBC}
h_{0,\eta}(\eta_0) = \pm \frac{x_0}{H \sin \eta_0} .
\end{equation}
As seen with our examples, this condition can be satisfied for all $\eta$ ($\delta \Gamma_C$ with $C =
1$) or at a discrete set of points ($\delta \Gamma_o$ and typical $\delta \Gamma_{ab}$).
In fact,
for the special points $\eta_0$ at which this condition is satisfied, the
characteristics are always luminal, never subluminal. Expanding $f$ to third order in $\chi$ allows $\boldsymbol{V}$ to be expanded to linear order in
both $\chi$ and $\eta - \eta_0$. With repeated use of the equation of motion
\eqref{fEOMbackConformal} we then find
that each characteristic curve hitting $\chi = 0$ at $\eta_0$ is luminal.
For $h_3(\eta_0)=0$ there are two such characteristic curves---one incoming and one outgoing---forming the typical luminal separatrices
we see in Fig.~\ref{fig:characteristics} (left) and \ref{fig:characteristics_Khosravi}.
Even for these cases, the characteristics are superluminal at all but a finite
number of points.
For
$h_3(\eta_0)\ne 0$, there is only one characteristic which is either incoming or outgoing,
depending on the sign of $h_{0,\eta}/h_3$. The $C=1$ example of
Fig.~\ref{fig:characteristics} (right) exhibits a unitary time solution $f_1$ with these
very special properties. Together with solutions trivially related by $f \rightarrow -f$ and/or $\eta \rightarrow -\pi
- \eta$, it is in fact the unique bi-isotropic vacuum solution that evades superluminality
entirely as can be shown by integrating the equation of motion \eqref{fEOMbackConformal}
from the boundary conditions \eqref{LuminalBC} along null coordinates.
\section{Discussion}
\label{sec:discuss}
We have investigated the causal structure of three families of vacuum solutions in dRGT massive gravity.
In particular, we have constructed the conformal diagram of characteristic hypersurfaces
for isotropic stress-energy perturbations around these backgrounds by exploiting the first-order structure of their equation of motion.
These examples provide fertile ground for studying the interplay between superluminality, an ill-posed Cauchy problem, and strong coupling as well as issues that arise for the global structure of spacetime in a theory with two metrics.
The $f_o$ solution of~Ref.~\cite{Gumrukcuoglu:2011ew} manifests aspects of all of these issues.
This solution is distinguished because in open slicing both the background spacetime and fiducial metric are simultaneously homogeneous and isotropic.
In {this} slicing, the kinetic term of perturbations vanishes,
which was taken as an indication of possible problems with strong coupling \cite{Gumrukcuoglu:2011zh}; this is supported by instabilities identified around anisotropic backgrounds~\cite{DeFelice:2012mx}, and the lack of an isotropic degree of freedom identified
by a Hamiltonian analysis \cite{Khosravi:2013axa}. However, its kinetic term and Hamiltonian degrees of freedom only vanish in open slicing \cite{Khosravi:2013axa}.
Our characteristic analysis clarifies these issues.
Absence of a kinetic term in the quadratic action in a particular
choice of slicing indicates instantaneous propagation of perturbations in the given frame and more generally perturbative superluminality or spacelike characteristics in any frame. The Hamiltonian
analysis fails in the pathological choice of slicing along characteristics since the Hamiltonian
is no longer associated with time evolution. Energy-momentum conservation likewise
becomes a spatial constraint equation, confusing the counting of degrees of freedom.
While it is the easiest of the interrelated issues to diagnose, it does not necessarily indicate
strong coupling nor an ill-posed Cauchy problem. Through a particularly poor choice of coordinates, it is possible to make the kinetic term for superluminal fluctuations disappear
even in a free theory. Likewise, information cannot propagate forward from a pathological choice of initial value surface.
On the other hand,
for the $f_o$ solution, the characteristic analysis does {additionally} reveal an ill-posed Cauchy problem.
Since spacelike characteristics originate from the spatial origin in the open chart of de Sitter,
there is no spacelike
surface that intersects all characteristics. Hence this particular spacetime does {\it not} admit a well-defined {initial value} problem, as it lacks a spacelike Cauchy surface.
In contrast to superluminality, nonexistence of a spacelike Cauchy surface always implies the
quadratic action is pathological. Further, in contrast to the Hamiltonian identification of degrees
of freedom, this concept and its relation to identifying degrees of freedom by the amount
of initial data required is diffeomorphism invariant.
Finally, for the $f_o$ solution, finite perturbations specified along characteristics diverge in amplitude
at the spatial origin from which they emanate in the open chart. Given that the theory
contains nonlinear interactions, this indicates that perturbation theory almost certainly
breaks down leaving the theory strongly coupled there. Hence in this case, the ill-posed
Cauchy problem likely originates from a point of strong coupling.
More precisely, strong coupling occurs when the effective field theory breaks down due to ever higher order
interaction terms becoming important. While strong coupling cannot be diagnosed
from the quadratic action alone, the divergence or discontinuity of perturbative solutions at a given spacetime point is a good indicator of strong coupling there.
Vanishing of kinetic terms is
often used as a related proxy for strong coupling: once variables are canonically normalized, interaction terms pick
up negative powers of the coefficient in front of the kinetic term, which drives the effective
strong coupling energy scale to zero when the kinetic term vanishes. However,
our example shows that without examining the higher order terms themselves, one
cannot immediately determine whether the vanishing of kinetic terms instead simply arises from a poor
choice of coordinates.
The $f_C$, or stationary class of solutions, serves to further distinguish these concepts.
Here generic solutions also admit spacelike characteristic curves and hence
superluminality. By choosing a time coordinate that is orthogonal to these spacelike surfaces, we again recover a representation where kinetic terms and dynamics vanish in favor of spatial
constraints.
Unlike the open $f_o$ solution,
all characteristics of this class that are within the past light cone of an observer at the spatial origin intersect a spacelike
surface. Hence the Cauchy problem is well-posed for this family of solutions and observers.
Furthermore, the characteristics end rather than begin at the spatial origin
where the field perturbations diverge. While strong coupling at the origin is
again likely, this distinction is crucial. {As in spherical collapse of perturbations in
general relativity (with Newtonian gravity as the corresponding effective theory), the formation of a singularity at the origin does not necessarily invalidate the
effective theory far from the singularity if nothing escapes it classically.}
Finally, the new class of $f_{ab}$ solutions serves to highlight issues with the
global structure of the spacetime that can occur in theories with two metrics. In all three
families of solutions, the fiducial Minkowski space
does not cover the whole de Sitter
spacetime. Specifically, there are points where the diffeomorphism-invariant ratio of the determinants of the physical and fiducial metrics vanishes, diverges or is undefined.
Here the spacetime becomes geodesically incomplete. While the $f_o$ and $f_C$
solutions allow straightforward but {\it ad hoc} analytic continuation past these
determinant singularities {to define a solution in the whole spacetime},
the $f_{ab}$ ones do not. Furthermore, since characteristics do not cross these singularities, continuity
must be imposed not just for the background but also for the perturbations.
The superluminality exhibited in all three classes of solutions is a necessary condition of the
bi-isotropic construction except in one unique case. Generically, starting at the bi-isotropy point characteristics
are superluminal across separations comparable to the spacetime curvature as a consequence of the alignment
between
the locally flat spacetime metric with the fiducial flat metric. Exceptional
cases produce luminal but never subluminal characteristics that intersect this worldline.
For the single case of the stationary $f_1$ solution, all characteristics are strictly luminal.
Unlike the usual second-order system where characteristics define
only a front velocity of discontinuous solutions, the characteristic analysis of our first-order
system is fully general. In particular, it allows the construction of smooth wavepackets that also propagate superluminally and do not on their own imply strong coupling.
While these examples do serve to distinguish the concepts of superluminality, spacelike
Cauchy surfaces and strong coupling, it is important not to overinterpret their consequences
for the dRGT theory itself. They represent examples of tree-level or classical propagation of
perturbations on specific, perhaps pathological, self-accelerating backgrounds.
{Superluminality in the full theory} would be in tension with the dRGT theory admitting a local and Lorentz-invariant UV completion~\cite{Adams:2006sv}. We emphasize, however, that the characteristic analysis alone is insufficient to establish whether or not superluminal propagation is truly present---in order to discern this, it has been claimed that one must know about the Green's function at infinite frequency~\cite{Dubovsky:2007ac,Shore:2007um,deRham:2014lqa}, which goes beyond the classical approximation that the characteristic analysis entails.
It is also worth mentioning that the presence of superluminality does not in and of itself imply acausality~\cite{Babichev:2007dw}. Indeed, in order to establish acausality, one would need to use the superluminal propagation to construct a closed timelike curve ({\it e.g.}, by using a point at which characteristic curves cross). Interestingly, we find (at least in the highly symmetric case we consider) that characteristic curves only
intersect at singularities, indicating that such perturbations remain causal within the effective theory, even if we equate perturbative superluminality with true superluminality. It has been conjectured in Ref.~\cite{Burrage:2011cr} that this might be true in general: the chronology protection conjecture asserts that traversing a closed timelike curve cannot be achieved within the regime of validity of the effective theory.
Finally in all of our examples, the Hamiltonian associated with isotropic perturbations
is unbounded {\cite{Khosravi:2013axa}}. While it is not clear how to interpret the Hamiltonian in these cases where
the equations of motion are first order and
there exist pathological choices of slicing where it is no longer associated with time evolution,
this may indicate that the features uncovered here are associated with an unstable background rather than generic consequences of the dRGT theory.
\smallskip{\em Acknowledgments.---}
We thank Rachel Rosen and Robert M.\ Wald for useful discussions.
This work was supported by
the Kavli Institute for Cosmological Physics at the University of
Chicago through grants NSF PHY-0114422 and NSF PHY-0551142.
PM was additionally supported by grants NSF PHY-1125897 and NSF PHY-1412261 and an endowment from the Kavli
Foundation and its founder Fred Kavli, WH by U.S.~Dept.\ of Energy
contract DE-FG02-13ER41958
and HM by a Japan Society for the Promotion of Science
Postdoctoral Fellowships for Research Abroad.
AJ was supported in part by the Robert R.\ McCormick Postdoctoral Fellowship.
\section{Conclusion}
In this work, we propose a novel P-Net for retinal image anomaly detection. The motivation of our method is the correlation between structure and texture in healthy retinal images. Our model first extracts the structure from the original image and then reconstructs the original image using both structure and texture information. Finally, we extract the structure from the reconstructed image and minimize the difference between the structures extracted from the original and reconstructed images. We then combine the image reconstruction error and the structure difference as a measurement for anomaly detection.
Extensive experiments validate the effectiveness of our approach.
\noindent
\textbf{Acknowledgments:} The work was supported by National Key R\&D Program of China (2018AAA0100704), NSFC \#61932020, Guangdong Provincial Key Laboratory (2020B121201001), ShanghaiTech-Megavii Joint Lab, and ShanghaiTech-UnitedImaging Joint Lab.
\section{Experiments}
\subsection{Implementation}
To train the network, the input image size is 224 $\times$ 224, and the batch size is 8. The optimizers for the generator and the discriminator are both Adam, and the learning rate is 0.001. We train our model for 800 epochs.
We implement our method with PyTorch on an NVIDIA TITAN V GPU. The code is released at
\url{https://github.com/ClancyZhou/P_Net_Anomaly_Detection}.
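Both optimizers are Adam at learning rate 0.001; for reference, a single Adam update at that rate can be sketched in pure Python (a textbook implementation of the standard algorithm, not an excerpt from the released code):

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update at the paper's learning rate of 0.001."""
    m = [b1 * mi + (1 - b1) * g for mi, g in zip(m, grad)]
    v = [b2 * vi + (1 - b2) * g * g for vi, g in zip(v, grad)]
    mhat = [mi / (1 - b1 ** t) for mi in m]   # bias-corrected first moment
    vhat = [vi / (1 - b2 ** t) for vi in v]   # bias-corrected second moment
    theta = [p - lr * mh / (math.sqrt(vh) + eps)
             for p, mh, vh in zip(theta, mhat, vhat)]
    return theta, m, v

# Toy use: minimize f(x) = x^2 starting from x = 1.0 (gradient 2x).
theta, m, v = [1.0], [0.0], [0.0]
for t in range(1, 2001):
    theta, m, v = adam_step(theta, [2.0 * theta[0]], m, v, t)
```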
\subsection{Evaluation Metric}
Following previous work \cite{zhou2020sparse}\cite{luo2017revisit}\cite{luo2019video}, we calculate the Area Under the Receiver Operating Characteristic curve (AUC) by gradually changing the threshold of $\mathcal{A}(\mathbf{I})$ for normal/abnormal classification. A higher AUC indicates better performance.
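Sweeping the threshold on $\mathcal{A}(\mathbf{I})$ is equivalent to the rank statistic below; a minimal sketch with synthetic scores (the score values and labels are illustrative only):

```python
def auc_from_scores(scores, labels):
    """Area under the ROC curve obtained by sweeping the threshold on
    the anomaly score: the probability that a random abnormal sample
    (label 1) scores higher than a random normal one (ties count half)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]  # abnormal
    neg = [s for s, l in zip(scores, labels) if l == 0]  # normal
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Synthetic example: abnormal images tend to receive higher scores.
scores = [0.10, 0.40, 0.35, 0.80, 0.70, 0.90]
labels = [0, 0, 0, 1, 1, 1]
print(auc_from_scores(scores, labels))  # -> 1.0
```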
\subsection{Anomaly Detection in Retinal Images}
\noindent
\textbf{A. Datasets.}
Since the datasets used in previous retinal image anomaly detection work \cite{schlegl2019f}\cite{schlegl2017unsupervised} are not released, we evaluate our proposed method with a publicly available dataset \cite{hu2019automated} and a local hospital dataset \cite{yan2019oversampling}.
\noindent\textbf{Retinal Edema Segmentation Challenge Dataset (RESC)}\cite{hu2019automated}.
Retinal edema is a retinal disease, which causes blurry vision and affects the patient's life quality. Optical coherence tomography (OCT) images can be used to assist clinicians in diagnosing retinal edema. Thus the RESC dataset is proposed for OCT based retinal edema segmentation. As discussed previously, retinal edema damages the normal layer structure in OCT, thus we leverage this dataset for performance evaluation. This dataset contains the standard training/validating split. We use the normal images in the original training set as our training images to train the model, and use all testing images for performance evaluation.
\noindent\textbf{Fundus Multi-disease Diagnosis Dataset (iSee)}\cite{yan2019oversampling}.
Previous retinal fundus datasets usually only contain one or two types of disease \cite{porwal2019idrid}\cite{orlando2020refuge}, but in clinical diagnosis, many eye diseases can be observed in the fundus image.
Thus we collect a dataset from a local hospital, which comprises 10000 fundus images. Eye diseases in this dataset include age-related macular degeneration (AMD), pathological myopia (PM), glaucoma, diabetic retinopathy (DR), and some other types of eye diseases.
To validate the effectiveness of P-Net for different retinal diseases, we use 4000 normal images as the training set, and we use the remaining 3000 normal images, 700 images with AMD, 800 images with PM, 420 images with glaucoma, 480 images with DR, and 600 images with other types of eye diseases, as our testing set.
\noindent
\textbf{B. Performance Evaluation.}
\begin{table}[htb]
\scriptsize
\centering
\caption{Performance comparison on different datasets.}
\begin{tabular}{p{1.2in}<{\centering}|p{0.8in}<{\centering}|p{0.8in}<{\centering}}
\hline
Method & RESC (OCT) & iSee (fundus) \\ \hline
Deep SVDD \cite{ruff2018deep} & 0.7440 & 0.6059 \\
Auto-Encoder \cite{zhou2017anomaly} & 0.8207 & 0.6127 \\
AnoGAN \cite{schlegl2017unsupervised} & 0.8481 & 0.6325 \\
VAE-GAN \cite{baur2018deep} & 0.9064 & 0.6969 \\
Pix2Pix \cite{isola2017image} & 0.7934 & 0.6722 \\
GANomaly \cite{akcay2018ganomaly} & 0.9196 & 0.7015 \\
Cycle-GAN \cite{zhu2017unpaired} & 0.8739 & 0.6699 \\ \hline
Our Method & \textbf{0.9288} & \textbf{0.7245} \\ \hline
\end{tabular}
\label{table:retinal_com}
\vspace{-0.15in}
\end{table}
\textbf{Baselines.} We compare our method with AnoGAN \cite{schlegl2017unsupervised}, proposed for retinal OCT images, VAE-GAN \cite{baur2018deep}, proposed for brain MRI images, GANomaly \cite{akcay2018ganomaly} for X-ray security images, and Auto-Encoder based anomaly detection \cite{zhou2017anomaly}. As our work involves translation between the image and the structure, we also compare P-Net with image-to-image translation networks, including Pix2Pix \cite{isola2017image} and Cycle-GAN \cite{zhu2017unpaired}. For Pix2Pix \cite{isola2017image} and Cycle-GAN \cite{zhu2017unpaired}, we use the original images and the structures extracted with the domain adaptation method to train the networks, and use the same measurement as ours for anomaly detection.
As shown in Table \ref{table:retinal_com}, our method outperforms all baseline methods on both datasets, which verifies the effectiveness of our method for retinal images with different modalities.
\begin{table}[htb]
\vspace{-0.15in}
\centering
\scriptsize
\caption{The results of sub-class on iSee dataset. }
\begin{tabular}{p{1.0in}<{\centering}|p{0.45in}<{\centering}p{0.45in}<{\centering}p{0.45in}<{\centering}p{0.45in}<{\centering}p{0.45in}<{\centering}}
\hline
Method & AMD & PM & Glaucoma & DR & Other \\ \hline
Auto-Encoder \cite{zhou2017anomaly} & 0.5463 & 0.7479 & 0.5604 & 0.6002 & 0.5479 \\
AnoGAN \cite{schlegl2017unsupervised} & 0.5630 & 0.7499 & 0.5731 & 0.5704 & 0.6412 \\
VAE-GAN \cite{baur2018deep} & 0.5593 & 0.8412 & \textbf{0.6149} & 0.6590 & 0.7961 \\
GANomaly \cite{akcay2018ganomaly} & \textbf{0.5713} & 0.8336 & 0.6056 & 0.6627 & 0.8013 \\ \hline
Our Method & 0.5688 & \textbf{0.8726} & 0.6103 & \textbf{0.6830} & \textbf{0.8069} \\ \hline
\end{tabular}
\label{tab:sub_cls}
\vspace{-0.15in}
\end{table}
We further report the AUC of our method for the five sub-classes in the iSee dataset, i.e., AMD, PM, glaucoma, DR, and other disease classes, in Table \ref{tab:sub_cls}. As the lesions of PM and DR are related to blood vessel structure,
and our method encodes the relation between vessels and texture, our method performs well for these diseases. In contrast, the lesions of AMD and glaucoma are associated with the macular area and the optic disc, respectively, which the structures we use cannot cover in our current implementation; therefore our solution does not perform as well for these diseases.
\begin{figure*}[htb]
\centering
\includegraphics[width=0.90\textwidth]{figures/exp_ablation_fundus}
\vspace{-0.05in}
\caption{The qualitative results of different inputs to $\mathbf{G}_r$ show that \textbf{both image and structure are necessary for anomaly detection}.
On the one hand, when the input is only the structure, the optic disc (yellow box) cannot be reconstructed precisely, owing to the lack of texture information in the optic disc region.
On the other hand, when the input is only the image, the lack of structure (blood vessel) information results in loss of vessel information (red boxes of normal and PM), and the lesion region is incorrectly reconstructed as a vessel (red box of AMD).
}
\label{fig:exp_ablation}
\vspace{-0.05in}
\end{figure*}
\noindent
\textbf{C. Ablation Study.}
\begin{table}[htb]
\centering
\scriptsize
\caption{Ablation study on different datasets. DA and $\mathcal{L}_{\text{str}}$ denote domain adaptation and structure consistency loss, respectively.}
\begin{tabular}{p{0.4in}<{\centering}|p{0.4in}<{\centering}|p{0.4in}<{\centering}p{0.4in}<{\centering}|p{0.4in}<{\centering}|p{0.45in}<{\centering}p{0.45in}<{\centering}}
\hline
& \multicolumn{1}{c|}{\multirow{2}{*}{DA}} & \multicolumn{2}{c|}{Input of $\mathbf{G}_{r}$} & \multicolumn{1}{c|}{\multirow{2}{*}{$\mathcal{L}_{\text{str}}$ }} & \multicolumn{2}{c}{AUC} \\ \cline{3-4} \cline{6-7}
Index & \multicolumn{1}{c|}{} & Image & \multicolumn{1}{c|}{Structure} & \multicolumn{1}{c|}{} & RESC & iSee \\ \hline
1 & & \checkmark & \checkmark & & 0.8152 & 0.5914 \\ \hline
2 & \checkmark & \checkmark & & & 0.8219 & 0.6487 \\ \hline
3 & \checkmark & & \checkmark & & 0.8277 & 0.6914 \\ \hline
4 & \checkmark & \checkmark & \checkmark & & 0.8518 & 0.7196 \\ \hline
5 & \checkmark & \checkmark & & \checkmark & 0.8835 & 0.6574 \\ \hline
6 & \checkmark & & \checkmark & \checkmark & 0.8821 & 0.6993 \\ \hline
Ours& \checkmark & \checkmark & \checkmark & \checkmark & \textbf{0.9288} & \textbf{0.7245} \\ \hline
\end{tabular}
\label{tab:ablation}
\end{table}
\textbf{Domain Adaptation (DA).}
As shown in Fig.~\ref{fig:method_sem}(b),
since there is a domain discrepancy between source images and target images, if we train a segmentation model without domain adaptation, the quality of the structure is not good enough for image reconstruction. The quantitative results are listed in Table \ref{tab:ablation} (row 1 vs. row 4). We can see that our method benefits from DA on both datasets.
\textbf{The Input of $\mathbf{G}_{r}$}. Our P-Net takes both the structure map and the original image as input for reconstruction.
To investigate the effectiveness of this design, we conduct qualitative and quantitative experiments. The results are shown in Fig. \ref{fig:exp_ablation} and Table \ref{tab:ablation} (row 2, 3, and 4), respectively. As shown in Table \ref{tab:ablation}, our P-Net solution is better than single input based image reconstruction strategy.
Further, from Fig. \ref{fig:exp_ablation}, we can observe that: i) if $\mathbf{G}_{r}$ only takes the structure as input, the image texture such as the optic disc area will be poorly reconstructed; ii) if $\mathbf{G}_{r}$ only takes the image as input, the blood vessel and macular area will be poorly reconstructed, for example, the macular area is reconstructed as the blood vessel, which is obviously incorrect.
\textbf{Structure Consistency Loss.}
The $\mathcal{L}_{\text{str}}$ constrains the consistency between $\mathbf{\hat{S}}$ and $\mathbf{S}$, which behaves like a regularizer to enforce the consistency between $\mathbf{\hat{I}}$ and $\mathbf{I}$.
The results in Table \ref{tab:ablation} (row 2 vs. row 5, row 3 vs. row 6, and row 4 vs. the last row) validate the effectiveness of the $\mathcal{L}_{\text{str}}$.
\noindent
\textbf{D. Evaluation of $\lambda_f$.}
In the testing phase, we use Equation (\ref{equa:anomaly_fusion}) to measure the anomaly score. $\lambda_f = 0$ denotes that only the image difference $\| \mathbf{I} - \mathbf{\hat{I}} \|_1$ is used for anomaly detection, and $\lambda_f = 1$ means only the structure difference $\| \mathbf{S} - \mathbf{\hat{S}} \|_1$ is used. We vary $\lambda_f$ and show the results in Table \ref{table:lambda_f}. We can see that anomaly detection based only on the image difference performs worse than that based only on the structure difference. The possible reason is that the structure is more evident for anomaly detection, which agrees with the practice of clinicians. Combining both leads to better performance. Further, the model is robust to different values of $\lambda_f$. Since our proposed method achieves the best performance when $\lambda_f = 0.8$, we set $\lambda_f = 0.8$ in all the experiments.
\begin{table}[htb]
\centering
\scriptsize
\caption{Results of different $\lambda_f$ on RESC dataset.}
\begin{tabular}{cccccccc}
\hline
$\lambda_f$ & 0.0 & 0.2 & 0.4 & 0.5 & 0.6 & \textbf{0.8} & 1.0 \\ \hline
AUC & 0.8481 & 0.9010 & 0.9226 & 0.9232 & 0.9234 & \textbf{0.9288} & 0.9084 \\\hline
\end{tabular}
\label{table:lambda_f}
\end{table}
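A minimal numpy sketch of the fused score, assuming Equation (\ref{equa:anomaly_fusion}) is the convex combination implied by the $\lambda_f = 0$ (image difference only) and $\lambda_f = 1$ (structure difference only) endpoints described above:

```python
import numpy as np

def anomaly_score(I, I_hat, S, S_hat, lam_f=0.8):
    """A = (1 - lam_f) * ||I - I_hat||_1 + lam_f * ||S - S_hat||_1
    (mean absolute error form; the exact normalization is an assumption)."""
    img_diff = np.abs(I - I_hat).mean()
    str_diff = np.abs(S - S_hat).mean()
    return (1.0 - lam_f) * img_diff + lam_f * str_diff

rng = np.random.default_rng(0)
I = rng.random((224, 224))
S = rng.random((224, 224))
# Perfect reconstruction of both image and structure scores zero.
print(anomaly_score(I, I, S, S))  # -> 0.0
```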
\noindent
\textbf{E. The Number of Cycles in P-Net.}
\begin{figure}[htb]
\centering
\includegraphics[width=0.90\textwidth]{figures/exp_cycle}
\vspace{-0.05in}
\caption{The qualitative results with different numbers of cycles. The texture and the layer-wise structure are consistent between the original and reconstructed images for the healthy sample, while this consistency is broken for abnormal samples.
}
\label{fig:exp_cycle}
\vspace{-0.10in}
\end{figure}
The lesion areas cannot be well reconstructed, and we find that these reconstructed lesion areas are very similar to normal areas.
Thus, the results of applying our framework several times differ from those with only one cycle. We examine the reconstruction errors for both normal and abnormal images under multiple reconstruction cycles.
Table \ref{tab:cycle_oct} shows that more cycles in the testing phase improve the performance, while more cycles in the training phase reduce it.
The poor performance of more cycles in the training phase is probably because the additional loss terms make the optimization more difficult.
In Fig. \ref{fig:exp_cycle}, we further show the qualitative effect of more cycles in the testing phase, where the images correspond to pigment epithelium detachment (PED), subretinal fluid (SRF), and a healthy image, respectively.
Over multiple testing cycles, the abnormal lesion becomes more and more similar to normal patterns, somewhat like ``anomaly repairing''. This phenomenon is more obvious in the structure map.
For the healthy image, both the image and the structure map remain the same even after multiple cycles. Thus more cycles enlarge the reconstruction error for abnormal images while retaining the same reconstruction error for normal ones, which explains why more cycles in the testing phase improve anomaly detection.
\begin{table}[htb]
\vspace{0.05in}
\centering
\scriptsize
\caption{The results for different numbers of cycles in training and testing on the RESC dataset.}
\begin{tabular}{c|ccccc}
\hline
Cycle number in test & 1 & 2 & 3 & 4 & 5\\ \hline
1 cycle in train & 0.9288 & 0.9304 & 0.9361 & 0.9380 & 0.9374\\
2 cycles in train & 0.8935 & 0.8962 & 0.9022 & 0.8973 & 0.9015 \\
\hline
\end{tabular}
\label{tab:cycle_oct}
\vspace{-0.15in}
\end{table}
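The test-time iteration described above can be illustrated with a toy sketch, where simple stand-in functions play the role of the trained structure-extraction and reconstruction networks (everything below is illustrative, not the actual model):

```python
import numpy as np

def multi_cycle_score(I, extract_structure, reconstruct, n_cycles=3):
    """Iterate the extract/reconstruct cycle at test time and score the
    drift of the final reconstruction away from the input image."""
    I_k = I
    for _ in range(n_cycles):
        S_k = extract_structure(I_k)
        I_k = reconstruct(I_k, S_k)
    return np.abs(I - I_k).mean()

# Toy stand-ins: "reconstruction" pulls the image halfway toward a
# normal template each cycle, mimicking the "anomaly repairing" above.
normal_template = np.zeros((8, 8))
extract = lambda img: img                     # identity structure map
recon = lambda img, s: 0.5 * img + 0.5 * normal_template

normal_img = normal_template.copy()
abnormal_img = normal_template.copy()
abnormal_img[2:4, 2:4] = 1.0                  # a bright "lesion"
```

With these stand-ins, the abnormal image's score grows with the number of cycles while the normal image scores zero, mirroring why more testing cycles help.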
\subsection{Anomaly Detection in Real World Images}
\renewcommand{\arraystretch}{1}
We also apply our method on the MVTec AD dataset \cite{bergmann2019mvtec}, which is a very challenging and comprehensive anomaly detection dataset for general object and texture images. This dataset contains 5 texture categories and 10 object categories. Since the structure is not annotated on these real-world images, we simply take the edges detected by Canny edge detection as the structure. We compare our method with Auto-Encoder with L2 loss or SSIM loss \cite{bergmann2019mvtec}, CNN Feature Dictionary (CFD) \cite{napoletano2018anomaly}, Texture Inspection (TI) \cite{bottger2016real}, AnoGAN \cite{schlegl2017unsupervised}, Deep SVDD \cite{ruff2018deep}, Cycle-GAN \cite{zhu2017unpaired}, VAE-GAN \cite{baur2018deep} and GANomaly \cite{akcay2018ganomaly}.
The quantitative results are shown in Table \ref{tab:mvtec_results}, and qualitative results are provided in the supplementary (Fig. S1).
We utilize AUC and region overlap as evaluation metrics on the MVTec AD dataset. Following \cite{bergmann2019mvtec}, we define a minimum defect area for the normal class. We then segment the difference maps of normal-class samples with an increasing threshold until the area of the anomaly region falls just below the defined defect area; this threshold is then used to segment anomaly regions in the testing phase.
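The threshold calibration can be sketched as follows (a toy example with synthetic normal difference maps; the map values, minimum defect area, and step size are all illustrative):

```python
import numpy as np

def calibrate_threshold(normal_diff_maps, min_defect_area, step=0.01):
    """Raise the segmentation threshold until the anomalous area on every
    normal difference map falls just below the defined minimum defect area."""
    t = 0.0
    while max((m > t).sum() for m in normal_diff_maps) >= min_defect_area:
        t += step
    return t

rng = np.random.default_rng(1)
# Synthetic difference maps for normal samples, values in [0, 0.5).
maps = [rng.random((32, 32)) * 0.5 for _ in range(4)]
t = calibrate_threshold(maps, min_defect_area=10)
# By construction, segmenting the normal maps at t yields < 10 pixels each.
```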
We can see our method achieves the best performance in terms of the average AUC and average anomaly region overlap over all categories. Further, our method is effective for object images and less effective for some types of texture images. The possible reason is that we use edges as the structure. For object images, such edges usually correspond to shapes, which are closely related to the image contents. Thus the mapping between image and structure is relatively easy, which consequently helps the anomaly detection. For abnormal object images, usually some parts are broken or missing, which leads to a large reconstruction error.
However, since there are too many edges in texture images and the edges are very noisy, texture images are hard to reconstruct, which consequently reduces the performance of anomaly detection.
The experimental results of novel class discovery are provided in the supplementary material (Section S3).
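On MVTec AD the structure map is simply a Canny edge map of the input. Its role can be sketched with a simplified gradient-magnitude edge detector (a numpy stand-in for the actual Canny detector, with an illustrative threshold):

```python
import numpy as np

def edge_structure(img, thresh=0.2):
    """Binary edge map from the image gradient magnitude: a simplified
    stand-in for the Canny edges used as 'structure' on MVTec AD."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return (mag > thresh).astype(np.uint8)

# Toy "object" image: a bright square on a dark background.
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
S = edge_structure(img)
# Edges fire only along the square's boundary, not in flat regions,
# which is why edge structure is informative for object categories.
```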
%
\begin{table}[h]
\scriptsize
\caption{For each category, the top row is the anomaly region \textbf{overlap}, which is the same as the evaluation metric in \cite{bergmann2019mvtec}, and the bottom row is \textbf{AUC}. The 5 categories at the top of the table are texture
images and the other 10 categories at the bottom are object
images. The results of AE (SSIM), AE (L2), AnoGAN \cite{schlegl2017unsupervised}, CFD \cite{napoletano2018anomaly}, and TI \cite{bottger2016real} are adopted directly from the MVTec AD dataset \cite{bergmann2019mvtec}.}
\centering
\begin{tabular}{cp{0.36in}<{\centering}p{0.36in}<{\centering}p{0.36in}<{\centering}p{0.36in}<{\centering}p{0.36in}<{\centering}p{0.36in}<{\centering}p{0.36in}<{\centering}p{0.36in}<{\centering}p{0.36in}<{\centering}p{0.36in}<{\centering}}
\hline
Categories & \begin{tabular}[c]{@{}c@{}}AE\\ (SSIM)\end{tabular} & \begin{tabular}[c]{@{}c@{}}AE\\ (L2)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Ano\\ GAN\end{tabular} & CFD & \begin{tabular}[c]{@{}c@{}}Deep\\ SVDD\end{tabular}
& \begin{tabular}[c]{@{}c@{}}Cycle\\ GAN\end{tabular}
& \begin{tabular}[c]{@{}c@{}}VAE-\\ GAN\end{tabular}
& \begin{tabular}[c]{@{}c@{}}GAN\\omaly\end{tabular}
& TI & \begin{tabular}[c]{@{}c@{}}Our\\ Method\end{tabular} \\ \hline
& \textbf{0.69} & 0.38 & 0.34 & 0.20 & - & 0.04 & 0.01 & 0.23 & 0.29 & 0.14 \\
\multirow{-2}{*}{Carpet} & 0.87 & 0.59 & 0.54 & 0.72 & 0.54 & 0.46 & 0.35 & 0.55 & \textbf{0.88} & 0.57 \\ \hline
& \textbf{0.88} & 0.83 & 0.04 & 0.02 & - & 0.36 & 0.04 & 0.41 & 0.01 & 0.59 \\
\multirow{-2}{*}{Grid} & 0.94 & 0.90 & 0.58 & 0.59 & 0.59 & 0.86 & 0.76 & 0.80 & 0.72 & \textbf{0.98} \\ \hline
& 0.71 & 0.67 & 0.34 & 0.74 & - & 0.09 & 0.12 & 0.31 & \textbf{0.98} & 0.52 \\
\multirow{-2}{*}{Leather} & 0.78 & 0.75 & 0.64 & 0.87 & 0.73 & 0.65 & 0.64 & 0.77 & \textbf{0.97} & 0.89 \\ \hline
& 0.04 & 0.23 & 0.08 & 0.14 & - & 0.14 & 0.09 & 0.19 & 0.11 & \textbf{0.23} \\
\multirow{-2}{*}{Tile} & 0.59 & 0.51 & 0.50 & 0.93 & 0.81 & 0.64 & 0.70 & 0.69 & 0.41 & \textbf{0.97} \\ \hline
& 0.36 & 0.29 & 0.14 & \textbf{0.47} & - & 0.19 & 0.11 & 0.32 & 0.51 & 0.37 \\
\multirow{-2}{*}{Wood} & 0.73 & 0.73 & 0.62 & 0.91 & 0.87 & 0.95 & 0.77 & 0.91 & 0.78 & \textbf{0.98} \\ \hline
& 0.15 & 0.22 & 0.05 & 0.07 & - & 0.09 & 0.11 & 0.13 & - & \textbf{0.43} \\
\multirow{-2}{*}{Bottle} & 0.93 & 0.86 & 0.86 & 0.78 & 0.86 & 0.76 & 0.73 & 0.82 & - & \textbf{0.99} \\ \hline
& 0.01 & 0.05 & 0.01 & 0.13 & - & 0.02 & 0.05 & 0.14 & - & \textbf{0.16} \\
\multirow{-2}{*}{Cable} & 0.82 & \textbf{0.86} & 0.78 & 0.79 & 0.71 & 0.61 & 0.60 & 0.83 & - & 0.70 \\ \hline
& 0.09 & 0.11 & 0.04 & 0.00 & - & 0.04 & 0.19 & 0.51 & - & \textbf{0.64} \\
\multirow{-2}{*}{Capsule} & \textbf{0.94} & 0.88 & 0.84 & 0.84 & 0.69 & 0.61 & 0.59 & 0.72 & - & 0.84 \\ \hline
& 0.00 & 0.41 & 0.02 & 0.00 & - & 0.33 & 0.34 & 0.37 & - & \textbf{0.66} \\
\multirow{-2}{*}{Hazelnut} & 0.97 & 0.95 & 0.87 & 0.72 & 0.71 & 0.87 & 0.75 & 0.86 & - & \textbf{0.97} \\ \hline
& 0.01 & \textbf{0.26} & 0.00 & 0.13 & - & 0.04 & 0.01 & 0.18 & - & 0.24 \\
\multirow{-2}{*}{Metal Nut} & \textbf{0.89} & 0.86 & 0.76 & 0.82 & 0.75 & 0.43 & 0.46 & 0.69 & - & 0.79 \\ \hline
& 0.07 & 0.25 & 0.17 & 0.00 & - & 0.29 & 0.01 & 0.17 & - & \textbf{0.58} \\
\multirow{-2}{*}{Pill} & 0.91 & 0.85 & 0.87 & 0.68 & 0.77 & 0.80 & 0.62 & 0.76 & - & \textbf{0.91} \\ \hline
& 0.03 & \textbf{0.34} & 0.01 & 0.00 & - & 0.17 & 0.02 & 0.24 & - & 0.32 \\
\multirow{-2}{*}{Screw} & 0.96 & 0.96 & 0.80 & 0.87 & 0.64 & 0.95 & 0.97 & 0.72 & - & \textbf{1.00} \\ \hline
& 0.08 & 0.51 & 0.07 & 0.00 & - & 0.13 & 0.10 & 0.48 & - & \textbf{0.63} \\
\multirow{-2}{*}{Toothbrush} & 0.92 & 0.93 & 0.90 & 0.77 & 0.70 & 0.70 & 0.67 & 0.82 & - & \textbf{0.99} \\ \hline
& 0.01 & 0.22 & 0.08 & 0.03 & - & 0.20 & 0.05 & 0.15 & - & \textbf{0.24} \\
\multirow{-2}{*}{Transistor} & \textbf{0.90} & 0.86 & 0.80 & 0.66 & 0.65 & 0.72 & 0.78 & 0.79 & - & 0.82 \\ \hline
& 0.10 & 0.13 & 0.01 & 0.00 & - & 0.05 & 0.04 & 0.21 & - & \textbf{0.34} \\
\multirow{-2}{*}{Zipper} & 0.88 & 0.77 & 0.78 & 0.76 & 0.74 & 0.63 & 0.60 & 0.84 & - & \textbf{0.90} \\ \hline \hline
& 0.22 & 0.33 & 0.09 & 0.13 & - & 0.15 & 0.09 & 0.27 & - & \textbf{0.41} \\
\multirow{-2}{*}{\textbf{Mean}} & 0.87 & 0.82 & 0.74 & 0.78 & 0.72 & 0.71 & 0.66 & 0.77 & - & \textbf{0.89} \\ \hline
\end{tabular}
\label{tab:mvtec_results}
\end{table}
\section{Introduction}
\vspace{-0.20in}
\begin{figure}[htb]
\centering
\includegraphics[width=\textwidth]{figures/intro_structure}
\vspace{-0.15in}
\caption{The motivation of leveraging structure information for anomaly detection. Normal medical images are highly structured, while the regular structure is broken in abnormal images. For example, the lesions of diabetic retinopathy (denoted by the black bounding box and \alert{red arrow} in (a)) destroy the blood vessels and histological layers in the retina. Thus, in the abnormal retinal fundus image and optical coherence tomography (OCT) image, the lesions (denoted by \alert{red color} in (b) and (c)) break the structure. Moreover, this phenomenon agrees with the cognition of doctors. Motivated by this clinical observation, we suggest utilizing the structure information in anomaly detection.
Figure (a) is adopted from the website of the American Academy of Ophthalmology \cite{kierstan2019what}.}
\label{fig:intro_structure}
\vspace{-0.15in}
\end{figure}
Deep convolutional neural networks (CNNs) have achieved many breakthroughs in medical image analysis \cite{litjens2017survey}\cite{xing2017deep}\cite{zhou2017deep}\cite{zhou2018multi}\cite{fu2018joint}\cite{zhang2019attention}.
However, these methods usually depend on large-scale balanced data, and in the medical image domain the acquisition of diseased images is extremely expensive because of patient privacy issues. Furthermore, the incidence of some diseases is extremely rare.
In contrast, it is relatively easier to collect the normal (healthy) data.
Humans can distinguish images with diseases from normal healthy data, and it is important for an intelligent system to mimic this behavior and detect images with diseases by leveraging only the normal training data; such a task is defined as anomaly detection \cite{schlegl2017unsupervised} in the medical image analysis domain.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{figures/method_overall}
\vspace{-0.05in}
\caption{The pipeline of our P-Net, which consists of three modules. Firstly, the structure extraction network $\mathbf{G}_s$ is trained for extracting structure $\mathbf{S}$ from the original image $\mathbf{I}$, and then the extracted $\mathbf{S}$ and feature encoded from $\mathbf{I}$ are fused for reconstruction. Finally, we further utilize the reconstructed image $\mathbf{\hat{I}}$ to extract the $\mathbf{\hat{S}}$ and measure the difference between $\mathbf{S}$ and $\mathbf{\hat{S}}$. Our P-Net encodes the relation between image texture and structure by enforcing the consistency of the image and structure between original and reconstructed ones.
}
\label{fig:method_overall}
\vspace{-0.10in}
\end{figure*}
Typical anomaly detection methods are usually based on image reconstruction \cite{baur2018deep}\cite{chen2018unsupervised}\cite{schlegl2019f}\cite{zhou2017anomaly}\cite{zimmerer2018context}\cite{zhou2020sparse}
where, given an image, an encoder maps the image to a feature space and a decoder reconstructs the image from the feature. By minimizing the reconstruction error between the input image and the reconstructed image on the normal training data, the encoder and decoder are trained for image reconstruction.
In the testing phase, an image can be classified as normal or abnormal by measuring the reconstruction error \cite{zhou2017anomaly}\cite{zimmerer2018context}. To guarantee the fidelity of the reconstructed image on the normal training data, generative adversarial networks (GAN) \cite{goodfellow2014generative} based solutions have been introduced \cite{baur2018deep}\cite{chen2018unsupervised},
which guide the generator to synthesize more realistic images with a discriminator.
Further, GANomaly \cite{akcay2018ganomaly} and f-AnoGAN \cite{schlegl2019f} are proposed, which append an additional encoder to the generator to further encode the reconstructed image. Then the reconstruction errors of both images and features are used to measure the anomaly.
However, all these existing methods directly feed the image into the CNNs for anomaly detection without leveraging any prior information. When doctors make diagnosis, besides the textures of the organ in the image, the structures (here \textbf{we treat the semantically meaningful edges in an image as the structure}, e.g., vessel topological structure in fundus images, the anatomic layer structure in OCT images, \emph{etc.}) also help them to make the decision \cite{puliafito1995imaging}\cite{zinreich1988fungal}\cite{hartnett1996deep}.
As shown in Fig. \ref{fig:intro_structure}, for eye images with diseases, the normal structures are destroyed.
For normal (healthy) images, the structure can be extracted, and the extracted structure also provides a cue about the texture distribution. Since both texture and structure help anomaly detection, a question naturally arises: \textit{How to encode the structure-texture relation with CNNs for anomaly detection?} Towards this end, we propose to leverage the dependencies between structure and image texture for image and structure reconstruction circularly, and use the reconstruction errors of both structure and image as the normality measurement.
Specifically, we first extract the structure from the original image, and then map the structure to the reconstructed image. However, the mapping from the structure to the reconstructed image is ill-posed.
Thus we propose to fuse the last layer image feature with structure feature to reconstruct the image.
We further use the reconstructed image to extract the structure, which also serves as a regularizer and helps improve the image reconstruction in the previous stage. Meanwhile, the difference between the structure extracted from the original image and that from the reconstructed image also helps us measure the anomaly score. As shown in Fig. \ref{fig:method_overall}, since the whole network architecture is shaped like a ``P'', we term it P-Net.
In the training phase, since the structures of retinal images are usually not given for anomaly detection, we propose to use vessel segmentation datasets and OCT layer segmentation datasets to train the structure extraction module of our network with a domain adaptation method \cite{chai2020perceptual}. Our P-Net is trained by minimizing the error between the input image and its reconstructed version (referred to as the contents error), and the error between the structure extracted from the original image and that extracted from the reconstructed image (referred to as the structure error). In the inference stage, by measuring the contents error and the structure error, each image can be classified as normal or abnormal accordingly. It is worth noting that our retinal image anomaly detection approach is a general framework; it can be readily applied to anomaly detection for general object images and to novel class discovery for retinal images, where the testing data contains data falling out of the distribution of the training data\footnote{These tasks are also termed general anomaly detection in computer vision}. For example, the training data may contain some given types of diseases while the testing data contains a new type of disease. The reason for the success of our P-Net in these cases is that our network captures the consistency between the structure and the image contents, and for new diseases this structure-contents relation differs from that of the training data.
The main \textbf{contributions} of this work are summarized as follows:
\textbf{i)} we propose to utilize the structure information for anomaly detection in the retinal image. Our solution agrees with the cognition of clinicians that the normal retinal images usually have regular structures, and the irregular structure hints the incidence of some diseases. To the best of our knowledge, this is the first work that infuses structure information into CNNs for anomaly detection;
\textbf{ii)} we propose a novel P-Net that encodes the relation between structure and textures for anomaly detection by using the cycle reconstruction between the image contents and structure. In the inference stage, both image reconstruction error and structure difference are utilized for anomaly score measurement;
\textbf{iii)} since the structures are not given on almost all anomaly detection datasets for retinal images, we employ a domain adaptation method to extract structure by leveraging other datasets annotated with structure;
\textbf{iv)} extensive experiments validate the effectiveness of our method for anomaly detection in both fundus modality and OCT modality for retinal images. Further, our method can be well generalized to novel class discovery for retinal images and anomaly detection for general object images.
\section{Method}
\label{method}
For healthy populations, the distribution of vasculature and histology of the retinal layers is regular. On the contrary, for subjects with diseases, the lesion of diseases will destroy the regularity of vasculature and histology.
For example,
the blood vessel and histology layer in retina will be destroyed by diabetic retinopathy (DR).
The layer-wise structure in OCT will also be destroyed by various lesions such as pigment epithelium detachment (PED), subretinal fluid (SRF) \cite{hu2019automated}, \textit{etc}.
Based on these clinical observations, we define the retinal blood vessels in fundus images and the retinal layers in OCT as structure. Besides the anomalies in texture, the anomalies of structure would also help ophthalmologists and clinicians to make the diagnosis decision \cite{puliafito1995imaging}\cite{hartnett1996deep}.
Motivated by the functionality of structure in retinal disease diagnosis, we propose to leverage the structure as an additional cue for anomaly detection. Further, for healthy images, the structure extracted from the image provides a cue about the texture distribution. By leveraging the relation between structure and texture in retinal images, we propose a P-Net for anomaly detection, which encodes the dependencies between structure and texture.
Specifically, our network architecture consists of three modules: 1) structure extraction from original image module, denoted as $\mathbf{G}_{s}$, which extracts structure $\mathbf{S}$ from original image $\mathbf{I}$; 2) image reconstruction module, denoted as $\mathbf{G}_{r}$, which leverages the last layer image encoder feature and structure to reconstruct the input image. We denote the reconstructed image as $\mathbf{\hat{I}}$. By minimizing the difference between $\mathbf{I}$ and $\mathbf{\hat{I}}$, the relation between texture and structure is encoded into the network. Thus we use image reconstruction error ($\|\mathbf{I}-\mathbf{\hat{I}}\|_1$) as a normality measurement;
3) structure extraction from reconstructed image module, which further extracts structure from the reconstructed image $\mathbf{\hat{I}}$. We denote the structure extracted from $\mathbf{\hat{I}}$ as $\mathbf{\hat{S}}$. By minimizing the difference between $\mathbf{S}$ and $\mathbf{\hat{S}}$, this module enforces the original image to be correctly reconstructed by $\mathbf{G}_{r}$. Further, the structure difference ($\|\mathbf{S}-\mathbf{\hat{S}}\|_1$) can also be used to measure the normality of the image. The network architecture of P-Net is shown in Fig. \ref{fig:method_overall}. The detailed architecture of each module can be found in the supplementary (Section S1).
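As a minimal sketch (not the actual implementation), the circular ``image $\to$ structure $\to$ image $\to$ structure'' pass can be written with $\mathbf{G}_s$ and $\mathbf{G}_r$ as placeholder callables:

```python
def p_net_forward(I, G_s, G_r):
    """One P-Net pass. G_s and G_r stand in for the trained
    structure-extraction and reconstruction networks; any callables
    with these signatures work for illustration."""
    S = G_s(I)           # structure extracted from the original image
    I_hat = G_r(I, S)    # reconstruction from image features + structure
    S_hat = G_s(I_hat)   # structure re-extracted from the reconstruction
    return S, I_hat, S_hat
```

The anomaly score is then computed from the pairs $(\mathbf{I}, \mathbf{\hat{I}})$ and $(\mathbf{S}, \mathbf{\hat{S}})$ that this pass produces.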
\subsection{Structure Extraction From Original Image Module}
\begin{figure}[htb]
\centering
\includegraphics[width=\textwidth]{figures/fig_structure_da}
\caption{(a) Structure extraction network with domain adaptation (DA). (b) The qualitative results of DA for OCT images. The structure of target image cannot be extracted well without DA.}
\label{fig:method_sem}
\end{figure}
The datasets used in previous retinal image anomaly detection work \cite{schlegl2017unsupervised}\cite{schlegl2019f} are not publicly available, therefore we propose to use the Retinal Edema Segmentation Challenge Dataset (RESC, an OCT image dataset) \cite{hu2019automated}, and a fundus multi-disease diagnosis dataset (iSee dataset \cite{yan2019oversampling}) collected in a local hospital for performance evaluation. However, the structures in both datasets are not provided. Manually annotating the structure, including vessels in fundus images and layer segmentation in OCT, is extremely time-consuming.
Fortunately, there are many publicly available datasets for vessel segmentation in fundus images and layer segmentation in OCT images \cite{hoover2000locating}\cite{staal2004ridge}.
To get structure without tedious manual annotation, we utilize existing datasets to train a network for structure extraction.
However, retinal images in different datasets are captured by various devices; consequently, different datasets have different noise characteristics and data distributions.
To tackle this problem, we leverage AdaSeg \cite{tsai2018learning}, a domain-adaptation-based image segmentation method, to learn the structure extractor $\mathbf{G}_{s}$. Specifically, we map images in different datasets but with the same modality to their corresponding structures with a U-Net \cite{ronneberger2015u}, and add a discriminator to make the segmentation results from the source and target datasets indistinguishable. The network architecture is shown in Fig. \ref{fig:method_sem}(a). For RESC, we use the Topcon dataset \cite{cheng2016speckle} as the source; while for iSee, we use the DRIVE dataset \cite{staal2004ridge} as the source. The training loss in this module is as follows:
\begin{align}
\mathcal{L}_{\text{seg}}(I_{\text{src}}) &= -\sum S_{\text{src}} \log(\mathbf{G}_{s}(I_{\text{src}})) \\
\mathcal{L}_{\text{adv}}(I_{\text{tar}}) &= \mathbb{E}[\log(1-D(\mathbf{G}_{s}(I_{\text{tar}})))] + \mathbb{E}[\log D(\mathbf{G}_{s}(I_{\text{src}}))]
\end{align}
where $I_{\text{src}}$ and $S_{\text{src}}$ denote the source image and its ground truth, respectively. $I_{\text{tar}}$ denotes the target image, and $D$ denotes the discriminator.
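As a hedged numerical sketch (not the training code), these two losses can be written for a binary vessel-vs-background structure map; the per-pixel probability inputs, the binary-label form of the segmentation loss, and the scalar discriminator outputs are simplifying assumptions:

```python
import math

def seg_loss(pred_probs, labels):
    """Cross-entropy on labeled source pixels (binary-label variant).
    pred_probs: predicted structure probabilities in (0, 1);
    labels: 1 for structure pixels, 0 for background."""
    eps = 1e-12
    return -sum(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
                for p, y in zip(pred_probs, labels)) / len(labels)

def adv_loss(d_target, d_source):
    """Adversarial term pushing source/target segmentations to be
    indistinguishable; d_* are discriminator outputs in (0, 1)."""
    eps = 1e-12
    return math.log(1 - d_target + eps) + math.log(d_source + eps)
```

In practice both terms are averaged over mini-batches, matching the expectations in the equations above.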
Once the structure extraction module is trained, we fix the module to simplify the optimization of the other modules in our P-Net.
\subsection{Image Reconstruction Module}
Since the structure is represented by vessels in fundus images or layer sections in OCT, and ambiguity exists in the direct mapping from structure to the original image \cite{Ren_2019_ICCV}\cite{isola2017image}, we propose to combine structure information and image texture information to reconstruct the original image. We define the texture as information complementary to the structure, and the texture provides the details over local regions.
Specifically, we encode the original image and its structure with $\text{En}_1$ and $\text{En}_2$, respectively. Then we concatenate the two features and feed them into a decoder ($\text{De}$) to reconstruct the original image. Skip connections are introduced between the structure encoder and the decoder for features at the same level, which avoids the information loss caused by downsampling of the structures, while there is no skip connection between the image encoder and the decoder. The reason is that such skip connections could let the network learn an identity mapping between the image and the reconstructed image, so that no information would flow from the structure to the reconstruction. This is undesirable because an identity mapping also reconstructs abnormal images well in the testing phase \cite{chen2018deep}, making anomaly detection impossible. We expect that only texture-related information is encoded by the image encoder and passed to the decoder to help the image reconstruction, so the last-layer feature of the image encoder is sufficient for this purpose.
Following \cite{akcay2018ganomaly}\cite{isola2017image}, we use $L_1$ norm to measure the difference between the reconstructed image and the original image.
\begin{equation}
\mathcal{L}_{\text{rec}}(\mathbf{I}) = \| \mathbf{I} - \mathbf{\hat{I}}\|_1
\end{equation}
To improve the quality of the reconstructed image, we apply PatchGAN \cite{isola2017image} to penalize the reconstruction error for the reconstructed image $\mathbf{\hat{I}}$. Formally, let $\mathbf{D}$ be the discriminator, the adversarial loss $\mathcal{L}_{\text{adv}}$ for training reconstruction network is shown as follows:
\begin{equation}
\mathcal{L}_{\text{adv}}(\mathbf{I}) = \mathbb{E}[\log (1-\mathbf{D}(\mathbf{G}_{r}(\mathbf{I},\mathbf{S})))] +
\mathbb{E}[\log \mathbf{D}(\mathbf{I})]
\end{equation}
\subsection{Structure Extraction From Reconstructed Image Module}
We further append the structure extractor $\mathbf{G}_{s}$ to the reconstructed image. There are two purposes: 1) by enforcing the structure extracted from the original image and that from the reconstructed image to be the same, the original image can be better reconstructed. In this sense, the structure extraction from reconstructed image module behaves like a regularizer; 2) some lesions are more discriminative in the structure, so we extract structures from the original image and the reconstructed image, respectively, and use their difference for normality measurement. The loss function in this module is defined as follows:
\begin{equation}
\mathcal{L}_{\text{str}}(\mathbf{I}) = \| \mathbf{S} - \mathbf{\hat{S}} \|_1
\end{equation}
\subsection{Objective Function}
We fix the structure extractor $\mathbf{G}_{s}$ in the training of image reconstruction module $\mathbf{G}_{r}$.
Therefore, we arrive at the objective function of our P-Net:
\begin{equation}
\mathcal{L} = \lambda_1 \mathcal{L}_{\text{adv}} + \lambda_2 \mathcal{L}_{\text{rec}} + \lambda_s \mathcal{L}_{\text{str}}
\end{equation}
where $\lambda_1, \lambda_2, \lambda_s$ are the hyper-parameters. Empirically, we set $\lambda_1=0.1, \lambda_2=1, \lambda_s=0.5$ on all datasets in our experiments.
\subsection{Anomaly Detection for Testing Data}
We combine image reconstruction error with structure difference for anomaly score ($\mathcal{A}(\mathbf{I})$) measurement:
\begin{equation}
\mathcal{A}(\mathbf{I}) = (1 - \lambda_f) \| \mathbf{I} - \mathbf{\hat{I}} \|_1 + \lambda_f \|\mathbf{S} - \mathbf{\hat{S}} \|_1
\label{equa:anomaly_fusion}
\end{equation}
where $\lambda_{f}$ is a weight used to balance the image difference and structure difference.
A higher anomaly score indicates that the image is more likely to be abnormal.
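Eq.~\ref{equa:anomaly_fusion} amounts to the following sketch, here over flattened images and structure maps represented as plain lists (a simplification of the actual pixel-wise computation):

```python
def l1(a, b):
    """Mean absolute difference between two flattened maps."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def anomaly_score(I, I_hat, S, S_hat, lam_f=0.5):
    """(1 - lam_f) * ||I - I_hat||_1 + lam_f * ||S - S_hat||_1."""
    return (1 - lam_f) * l1(I, I_hat) + lam_f * l1(S, S_hat)
```

Setting $\lambda_f=0$ falls back to a purely reconstruction-based score, while $\lambda_f=1$ scores images by structure difference alone.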
\section{Related Work}
\vspace{-0.05in}
\subsection{Anomaly Detection}
Anomaly detection is a vital field in machine learning. An intuitive assumption is that anomalies lie outside the distribution of normal samples.
Based on this hypothesis, it is natural to learn a discriminative hyperplane to separate the abnormal samples from the normal ones.
One-class support vector machine (OCSVM) \cite{scholkopf2000support} is one of the classical methods, and the derived deep one-class SVDD \cite{ruff2018deep} constrains the normal samples within a hypersphere, so that potential anomalies are the outliers far away from the center of the hypersphere.
Besides, Gaussian Mixture Models (GMMs) model the distribution of normal samples, and outliers falling outside this distribution have a high probability of being abnormal.
Schlegl \etal \cite{schlegl2017unsupervised} first introduced Generative Adversarial Networks (GANs) \cite{goodfellow2014generative} for anomaly detection with a method termed AnoGAN.
The AnoGAN generates images from a Gaussian latent space, and samples are recognized as anomalies when the corresponding latent code is out of the distribution.
Similar to AnoGAN, GANomaly \cite{akcay2018ganomaly} also involved representation learning in latent space. Compared with AnoGAN, GANomaly does not seek the latent code in the manifold by gradient descent in the test phase.
Zimmerer \etal \cite{zimmerer2018context} proposed a context-encoding Variational Auto-Encoder for brain MRI images, which combines reconstruction with density-based anomaly scoring.
Schlegl \etal \cite{schlegl2019f} proposed to utilize a generator \cite{goodfellow2014generative} to map latent space to normal retinal OCT image, and use an encoder to learn the mapping from retinal OCT image to latent space.
Perera \etal \cite{perera2019ocgan} proposed OCGAN to constrain all samples to a closed latent space, so that all samples are reconstructed to normal ones.
Also, memory-augmented networks such as \cite{gong2019memorizing} provide a fascinating idea: the latent code of each sample is mapped to the nearest item in a learned dictionary containing only normal patterns.
As discussed before, for normal healthy images, the structure and image texture are closely related. However, these existing methods fail to encode the structure-texture relation.
\subsection{Structure-Texture Relation Encoding Networks}
The texture and structure in an image are complementary to each other \cite{aujol2006structure}, and image structure has been successfully used for image inpainting \cite{Ren_2019_ICCV}\cite{Nazeri_2019_ICCV_Workshops}.
Nazeri \etal \cite{Nazeri_2019_ICCV_Workshops} proposed a two-stage network, which takes edge information as the structure. The model \cite{Nazeri_2019_ICCV_Workshops} first predicts the full edge map of the incomplete image with an edge generator. Then the predicted edge map and the incomplete image are passed to an image completion network to produce the full image. Since the distribution of edge maps is significantly different from that of color images, Ren \etal \cite{Ren_2019_ICCV} proposed to employ edge-preserved smooth images to represent the structure of color images. The network proposed in \cite{Ren_2019_ICCV} consists of a structure reconstructor that predicts the image structure and a texture generator that completes the image texture. The structure-texture relation can be encoded in the `image-structure-image' pipeline, and this motivates us to infuse the normal structure into deep neural networks for anomaly detection.
In our work, we further encode the relation between the normal image and its structure by enforcing the consistency between the original and reconstructed images, and the consistency between the structure extracted from the normal image and that extracted from the reconstructed image.
\section{Introduction}
\label{sec:intro}
Multilingual neural machine translation (MNMT) models~\cite{ha2016multilingual,johnson2017multilingual}
reduce operational costs and scale to a large number of language pairs~\cite{aharoni-etal-2019-massively}
by using a shared representation space.
This approach benefits low-resource languages through positive transfer from related languages, but introduces a \textit{transfer-interference trade-off}~\cite{arivazhagan2019massively}---as the number of languages grows,
the performance in more resource-rich languages starts to drop.
Prior work identifies the capacity bottleneck as a cause,
which prevents models from representing all languages equally well ~\cite{arivazhagan2019massively,zhang-etal-2020-improving}.
While naively increasing model capacity is a sure-shot to improving performance~\cite{arivazhagan2019massively}, it comes with large computational costs.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{images/overview.pdf}
\caption{We inject language(-pair)-specific adapters in MNMT, by generating them from a hyper-network.}
\label{fig:overview}
\end{figure}
A common remedy for the capacity bottleneck is to relax the information sharing with language\allowbreak-specific parameters~\cite{blackwood-etal-2018-multilingual, sachan-neubig-2018-parameter,wang2018multilingual,tan-etal-2019-multilingual,zhang-etal-2020-improving, fan2021beyond}.
Adapter modules~\cite{rebuffi2017adapters}
have been successfully employed in various natural language processing tasks
to address similar capacity-related issues~\cite{houlsby2019parameter, pfeiffer-etal-2020-mad}.
In MNMT, adapters have been used to adapt (via finetuning) pretrained generic models to specific language-pairs or domains~\cite{bapna-firat-2019-simple},
to improve zero-shot performance~\cite{philip-etal-2020-monolingual},
or to reduce interference~\cite{zhu-etal-2021-counter-interference}.
However, using regular language(-pair) adapters has certain limitations.
First, they can be very parameter-inefficient.
While each adapter layer might be small,
the total number of layers is proportional to the number of languages.
This quickly becomes very costly,
in particular in massively multilingual settings.
In addition, there is no information sharing between the adapters of related languages.
For instance, an adapter for Nepali cannot benefit from the more abundant Hindi data,
which prevents positive transfer between the two languages.
In this work, we use a hyper-network~\cite{ha2017hypernetworks} to generate language(-pair) adapters,
dubbed \textit{hyper-adapters}, for Transformer-based MNMT.
Hyper-adapters~(Figure~\ref{fig:overview}), are a function of jointly trained language and layer embeddings.
This approach naturally encodes language relatedness and enables knowledge transfer between
related languages.
It also substantially improves parameter efficiency,
as the number of hyper-adapter parameters is invariant to the number of languages.
We also address optimization obstacles~\cite{sung2021vl}
overlooked by prior work~\cite{karimi-mahabadi-etal-2021-parameter, ansell-etal-2021-mad-g},
and propose a rescaling fix that improves convergence and
enables us to successfully scale to large hyper-networks.
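As an illustrative sketch (the input mixing and projection shapes here are assumptions, not the paper's precise parameterization), a hyper-network can emit per-language, per-layer adapter matrices from two embeddings:

```python
def matvec(W, v):
    """Matrix-vector product over nested lists."""
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def generate_hyper_adapter(lang_emb, layer_emb, W_down, W_up, d_z, d_b):
    """Produce a (D, U) adapter pair from language and layer embeddings.
    W_down has d_b*d_z rows and W_up has d_z*d_b rows, each of length
    len(lang_emb) + len(layer_emb). Both W_* are shared across all
    languages and layers."""
    h = lang_emb + layer_emb                       # concatenated inputs
    flat_D = matvec(W_down, h)
    flat_U = matvec(W_up, h)
    D = [flat_D[i * d_z:(i + 1) * d_z] for i in range(d_b)]
    U = [flat_U[i * d_b:(i + 1) * d_b] for i in range(d_z)]
    return D, U
```

Because the generator weights are shared, the parameter count is invariant to the number of languages, and related languages with nearby embeddings receive similar adapters.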
We present experiments on a large multilingual translation benchmark.
Unlike prior work~\cite{bapna-firat-2019-simple,philip-etal-2020-monolingual}
that finetunes adapters for language-specific adaptation,
we train regular- and hyper-adapters jointly with the main-network.
We show that with the same parameter budget and FLOPS,
hyper-adapters are consistently better than other regular adapter variants.
We also match the performance of regular adapters with hyper-adapters up to 12 times smaller.
Hyper-adapters also converge faster than other approaches
and improve scalability, as small dense networks with hyper-adapters yield similar results to larger regular dense networks.
Our analysis reveals that hyper-adapters do indeed exploit language similarity,
unlike regular adapters.
By comparing models on benchmarks with artificially constructed properties,
we find that the gains of hyper-adapters grow as the redundancy (e.g., language similarities) in the training data increases.
Our main contributions are:
\setlist[enumerate]{leftmargin=17pt}
\begin{enumerate}
[topsep=3pt,itemsep=3pt,partopsep=0pt, parsep=0pt]
\item We present a novel approach that injects language-specific parameters in MNMT,
by generating them from a hyper-network.
We also successfully train large hyper-networks by addressing unresolved optimization obstacles.
\item We present multilingual translation experiments.
Hyper-adapters consistently outperform regular adapters with the same parameter count, or match the results of much larger (up to 12x) regular adapters.
They also converge faster and scale better than other methods.
%
\item We present an analysis using a series of probes.
We verify that hyper-adapters encode language relatedness, unlike regular adapters.
%
We also find that the gains of hyper-adapters are proportional to the redundancy in the training data.
\end{enumerate}
\section{Background: Multilingual NMT}
\label{sec:mnmt}
In this work, we train universal MNMT models following~\citet{johnson2017multilingual}.
We prepend a special token $\langle2\textsc{XX}\rangle$ to the source and target sequences,
that denotes the target language.
Given a source sentence
$\bm{x} = \langle x_1, x_2, ..., x_{|\bm{x}|} \rangle$,
a target sequence
$\bm{y} = \langle y_1, y_2, ..., y_{|\bm{y}|} \rangle$
and a target language token $\bm{t}$, we train our models as follows:
\begin{align*}
\bm{H} &= \text{encoder}([t, \bm{x}]) \\
\bm{S} &= \text{decoder}([t, \bm{y}, \bm{H}])
\end{align*}
\noindent We use the Transformer architecture~\cite{vaswani2017transformer}
as the backbone of all our models.
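For illustration, a sketch of this preprocessing on token lists (the tag string follows the $\langle2\textsc{XX}\rangle$ convention above; the exact surface form is an assumption):

```python
def add_lang_token(src_tokens, tgt_tokens, tgt_lang):
    """Prepend the <2xx> target-language tag to both sequences,
    following the universal MNMT setup of Johnson et al."""
    tag = f"<2{tgt_lang}>"
    return [tag] + src_tokens, [tag] + tgt_tokens
```

The same model can then serve any translation direction, with the tag alone selecting the target language.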
\subsection{Language-Specific Parameters}
With universal MNMT, the issue of \textit{negative interference} between unrelated languages emerges, and high-resource language directions are bottlenecked by constrained model capacity~\cite{arivazhagan2019massively}.
A common solution is to extend model capacity with language-specific modules~\cite{blackwood-etal-2018-multilingual,sachan-neubig-2018-parameter,vazquez-etal-2019-multilingual,wang2018multilingual,lin-etal-2021-learning,zhang-etal-2020-improving,fan2021beyond}.
\paragraph{Adapters}
In this work, we incorporate language-specific parameters using adapter modules,
as they are generic and widely adopted by the
community for multilingual or multi-task problems.
We follow the formulation of~\citet{bapna-firat-2019-simple, philip-etal-2020-monolingual},
and inject one adapter block after each Transformer layer,
followed by a residual connection.
Let $\bm{z}_i \in \mathbb{R}^{d_z}$ be the output of the $i$-th encoder or decoder layer,
where $d_z$ is the embedding dimension of the Transformer model.
First, we feed $\bm{z}_i$ to a LayerNorm sublayer $\bar{\bm{z}}_i=\text{\textsc{LN}}_i(\bm{z}_i \mid \bm{\beta},\bm{\gamma})$.
Next, we transform $\bar{\bm{z}}_i$ by applying a down-projection
$\bm{D_i} \in \mathbb{R}^{d_z \times d_b}$,
followed by a non-linearity $\phi$,
an up-projection $\bm{U_i} \in \mathbb{R}^{d_b \times d_z}$,
and a residual connection, where $d_b$ is the bottleneck dimensionality of the adapter.
Formally, each adapter is defined as:
\begin{align*}
\text{adapter}_i(\bm{z}_i) = \bm{U_i}(\phi(\bm{D_i} \, \text{\textsc{LN}}_i(\bm{z}_i))) + \bm{z}_i
\end{align*}
In this work, we use ReLU as the non-linearity $\phi$.
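The adapter computation above can be sketched in a few lines of numpy (toy sizes and random weights, purely illustrative):

```python
import numpy as np

def layer_norm(z, gamma, beta, eps=1e-5):
    mu, var = z.mean(-1, keepdims=True), z.var(-1, keepdims=True)
    return gamma * (z - mu) / np.sqrt(var + eps) + beta

def adapter(z, D, U, gamma, beta):
    """adapter_i(z) = U(ReLU(D LN(z))) + z, with bottleneck d_b < d_z."""
    z_bar = layer_norm(z, gamma, beta)
    return np.maximum(z_bar @ D, 0.0) @ U + z  # ReLU + residual connection

d_z, d_b = 512, 128
rng = np.random.default_rng(0)
z = rng.normal(size=(4, d_z))                # 4 token representations
D = rng.normal(scale=0.02, size=(d_z, d_b))  # down-projection
U = rng.normal(scale=0.02, size=(d_b, d_z))  # up-projection
out = adapter(z, D, U, np.ones(d_z), np.zeros(d_z))
```

Note that with a zero up-projection the block reduces to the identity, which is the usual near-identity initialization argument for residual adapters.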
\paragraph{Adapter Variants}
In MNMT, prior work has used adapters for language(-pair) adaptation, via finetuning.
In our work, we consider two variants,
but train the adapters jointly with the main-network.
Preliminary experiments also showed that jointly training adapters with the main-network yields better results than finetuning adapters.
The first variant is \textit{language-pair adapters}~\cite{bapna-firat-2019-simple},
which uses a different adapter module per language pair in each encoder and decoder layer.
This approach is effective,
but it quickly becomes prohibitively expensive in a multi-parallel setting\footnotemark,
as the number of adapter layers scales quadratically with the number of languages.
Next, we consider (monolingual) \textit{language adapters}~\cite{philip-etal-2020-monolingual},
which use one adapter per language.
Specifically, during xx$\rightarrow$yy translation,
we activate the adapters for the xx (source) language in the encoder
and the yy (target) language in the decoder.
Thus, they require fewer adapter layers,
while also generalizing to unseen translation directions.
\footnotetext{Multi-parallel refers to a fully many-to-many setting,
unlike the English-centric setting that is \{en$\rightarrow$X $\cup$ X$\rightarrow$en\}.}
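The two variants can be summarized by which adapters are activated per direction and how many modules each variant needs (a sketch; $n\_layers=12$ corresponds to 6 encoder plus 6 decoder layers):

```python
def active_adapters(variant, src, tgt):
    """Adapter ids activated in the encoder/decoder for an src->tgt direction."""
    if variant == "pair":   # one adapter per language pair
        return (f"{src}-{tgt}", f"{src}-{tgt}")
    if variant == "lang":   # source adapter in encoder, target adapter in decoder
        return (src, tgt)
    raise ValueError(variant)

def n_modules(variant, n_langs, n_layers=12, multi_parallel=True):
    """Total adapter modules over all encoder and decoder layers."""
    if variant == "lang":
        return n_langs * n_layers
    # pair adapters: N^2 pairs multi-parallel, 2N pairs English-centric
    pairs = n_langs ** 2 if multi_parallel else 2 * n_langs
    return pairs * n_layers
```

With the 51 languages of ML50 (introduced later), this reproduces the module counts quoted in the experimental setup: 612 for language adapters and 1224 for English-centric language-pair adapters.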
\section{Hyper-Adapters}
\label{sec:approach}
We propose to use a hyper-network~\cite{ha2017hypernetworks},
a network that generates the weights of another network,
to produce the weights of all adapter modules,
dubbed \textit{hyper-adapters}.
As shown in Figure~\ref{fig:hyper-network},
we use a single hyper-network
to generate adapters for all languages and layers
by conditioning on $(\bm{s},\bm{t},\bm{l})$ tuples,
where $\bm{s}$ and $\bm{t}$ denote the source and target language
and $\bm{l}$ denotes the encoder or decoder layer-id (e.g., enc3).
Unlike regular adapters, our approach enables information sharing across languages and layers, and the hyper-network can learn to optimally allocate its capacity across them. Our hyper-network has three components:
\paragraph{Input}
We first embed $(\bm{s},\bm{t},\bm{l})$.
We use a shared matrix for the source and target language embeddings,
and a separate matrix for the layer-id embeddings for all encoder and decoder layers.
\paragraph{Encoder}
The language and layer embeddings are given as input to the hyper-network encoder.
First, we concatenate the embeddings and project them with
$\bm{W_{\text{in}}}$, followed by a non-linearity,
to obtain the hyper-network hidden representation $\bm{h} \in \mathbb{R}^{d_h}$:
\begin{align*}
\bm{h} = \text{ReLU}(\bm{W_{\text{in}}} \, [\bm{s} \| \bm{t} \| \bm{l}])
\end{align*}
\noindent where $\|$ denotes the concatenation operation.
We then pass $\bm{h}$ through $N$ residual blocks,
to encode high-level interactions between the input features:
\begin{align*}
\text{encoder}(\bm{h_{i+1}}) = \bm{W_2}(\text{ReLU}(\bm{W_1} \, \text{\textsc{LN}}(\bm{h_i}))) + \bm{h_i}
\end{align*}
where $\bm{W_1}\in\mathbb{R}^{d_h\times d_h}$ and $ \bm{W_2}\in\mathbb{R}^{d_h\times d_h}$
are the trainable weights of each residual block.
\paragraph{Projections}
We feed the final representation $\bm{h}$ to separate projection heads
to obtain (by reshaping their outputs) each weight matrix of a hyper-adapter.
Specifically,
we use $\bm{H_{\text{up}}} \in \mathbb{R}^{d_h \times (d_b d_z)}$
to generate the weights for each up-projection
$\bm{U} \in \mathbb{R}^{d_b \times d_z}$,
$\bm{H_{\text{down}}} \in \mathbb{R}^{d_h \times (d_z d_b)}$
to generate the weights for each down-projection
$\bm{D} \in \mathbb{R}^{d_z \times d_b}$.
We also generate the LayerNorm parameters
$\bm{\gamma} \in \mathbb{R}^{d_z}$ and $\bm{\beta} \in \mathbb{R}^{d_z}$,
with the projection heads
$\bm{H_{\gamma}} \in \mathbb{R}^{d_h \times d_z}$
and
$\bm{H_{\beta}} \in \mathbb{R}^{d_h \times d_z}$, respectively.
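Putting the three components together, the generator can be sketched as follows (illustrative sizes and random untrained weights; the rescaling by $\sqrt{d_h}$ is motivated in the next subsection):

```python
import numpy as np

rng = np.random.default_rng(0)
d_e, d_h, d_z, d_b = 50, 102, 512, 128   # illustrative sizes

relu = lambda x: np.maximum(x, 0.0)
layer_norm = lambda h, eps=1e-5: (h - h.mean()) / np.sqrt(h.var() + eps)

# Input: shared language-embedding table, separate layer-id table
lang_emb  = {lang: rng.normal(size=d_e) for lang in ("de", "en", "fr")}
layer_emb = {lid: rng.normal(size=d_e) for lid in ("enc3", "dec1")}

# Encoder: input projection followed by residual blocks
W_in = rng.normal(scale=0.02, size=(d_h, 3 * d_e))
blocks = [(rng.normal(scale=0.02, size=(d_h, d_h)),
           rng.normal(scale=0.02, size=(d_h, d_h))) for _ in range(2)]

# Projection heads: each row is equivalent to a flattened adapter matrix
H_up   = rng.normal(scale=0.02, size=(d_h, d_b * d_z))
H_down = rng.normal(scale=0.02, size=(d_h, d_z * d_b))
H_gam  = rng.normal(scale=0.02, size=(d_h, d_z))
H_bet  = rng.normal(scale=0.02, size=(d_h, d_z))

def generate_adapter(src, tgt, layer_id):
    h = relu(W_in @ np.concatenate([lang_emb[src], lang_emb[tgt],
                                    layer_emb[layer_id]]))
    for W1, W2 in blocks:                 # residual encoder blocks
        h = W2 @ relu(W1 @ layer_norm(h)) + h
    s = np.sqrt(d_h)                      # rescaling fix (see next subsection)
    U = (H_up.T @ h / s).reshape(d_b, d_z)
    D = (H_down.T @ h / s).reshape(d_z, d_b)
    return U, D, H_gam.T @ h / s, H_bet.T @ h / s

U, D, gamma, beta = generate_adapter("de", "en", "enc3")
```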
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{images/hypernetwork-arch.pdf}
\caption{We feed source language, target language
and layer-id embeddings into a shared hyper-network,
to generate adapter weights for all languages and layers.}
\label{fig:hyper-network}
\end{figure}
\subsection{Unlocking Large Hyper-networks}
\label{sec:rescaling}
Prior work~\cite{karimi-mahabadi-etal-2021-parameter, ansell-etal-2021-mad-g} in natural language understanding (NLU) has used the equivalent of small values of $\bm{d_h}$.
\citet[Figure~4]{sung2021vl} recently found that scaling up hyper-networks~\cite{karimi-mahabadi-etal-2021-parameter}
leads to very poor results, which they attributed to unknown optimization issues.
In preliminary experiments, we found similar issues when using larger values of $d_h$
(i.e., increasing the size of the hyper-network).
Next, we identify the cause of this problem, and propose a simple fix that allows us to effectively scale hyper-adapters.
Figure~\ref{fig:rescaling-loss} shows the training loss curve as we vary $\bm{d_h}$.
We find that increasing the hyper-network size via $\bm{d_h}$ leads to worse instead of better performance, and also makes training very unstable.
In Figure~\ref{fig:rescaling-std}, we plot the average standard deviation (SD) of the Transformer layer activations during training,
and find that for small $\bm{d_h}$, the activations stay within a healthy range,
but as we increase $\bm{d_h}$, the activations start to grow fast.
After a certain point, the network fails to recover
and the activations grow to extreme values.
To solve this issue,
we scale down the generated adapter weights by $\frac{1}{\sqrt{\bm{d_h}}}$,
and generate the adapter weights as
$\tilde{W} = \text{reshape}(\frac{\bm{H}\,\bm{h}}{\sqrt{\bm{d_h}}})$.
Note that, each component of the generated adapter matrix $\tilde{W}$
is the dot-product of $\bm{h}$ and the corresponding column of
a given projection head $\bm{H}$.
Thus, the generated weights' variance is proportional to $\bm{d_h}$, i.e., their SD grows with $\sqrt{\bm{d_h}}$.
The motivation is similar to the scaled dot-product in Transformer's self-attention.
Once we apply the rescaling fix,
the activations stay within a healthy range (Figure~\ref{fig:rescaling-std}),
and increasing $\bm{d_h}$ improves convergence as expected (Figure~\ref{fig:rescaling-loss}).
Note that, in this work we consider variants with $\bm{d_h} > 512$,
and the rescaling fix is crucial to unlocking these variants.
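A toy numerical check of this argument (synthetic unit-variance weights, not trained parameters): each generated weight is a $d_h$-term dot product, so its spread grows as $d_h$ grows and is normalized by the rescaling.

```python
import numpy as np

def generated_std(d_h, n_weights=10_000, seed=0):
    """SD of hyper-generated weights, without and with the 1/sqrt(d_h) fix."""
    rng = np.random.default_rng(seed)
    h = rng.normal(size=d_h)                # hyper-network representation
    H = rng.normal(size=(d_h, n_weights))   # projection head, unit-variance entries
    w = H.T @ h                             # each weight: a d_h-term dot product
    return w.std(), (w / np.sqrt(d_h)).std()

for d_h in (64, 512, 4096):
    raw, fixed = generated_std(d_h)
    print(f"d_h={d_h:4d}  raw SD={raw:6.1f}  rescaled SD={fixed:4.2f}")
```

The raw SD keeps growing with $d_h$, while the rescaled SD stays around 1 regardless of the hyper-network width.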
\subsection{Parameter Efficiency and FLOPS}
\label{sec:param-efficiency}
Given $\bm{N}$ languages,
language adapters introduce $\bm{N}$ new modules,
whereas language-pair adapters introduce $\bm{N^2}$ new modules in a multi-parallel setting
or $\bm{2N}$ modules in an English-centric many-to-many setting.
By contrast, the number of extra parameters in hyper-adapters is invariant to both
the number of languages and layers.
Most of the parameters are in the projection heads.
Intuitively,
each row of a head's weight matrix is equivalent to a (flattened) adapter weight matrix.
The number of rows in each head is equal to the hidden size $\bm{d_h}$,
thus $\bm{d_h}$ controls its capacity.
Therefore, to reduce the memory needs compared to language adapters
we must use $\bm{d_h} < \bm{N}$,
and $\bm{d_h} < \bm{2N}$ for \textit{English-centric} language-pair adapters (details in Appendix~\ref{app:param-efficiency}).
In terms of computational cost, all adapter and hyper-adapter variants yield models with the same FLOPS.
This is because, at test time, we activate only the main network and the corresponding adapters,
with both regular and hyper-adapters having identical architecture and size.
During training, hyper-adapters incur an additional cost for generating the adapter parameters. However, this cost is negligible in practice, as it is run only once per batch for each language-pair. At test time, the generated weights can be precomputed and cached.
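The parameter accounting above can be made concrete with a small sketch (embedding tables are omitted as they are negligible; $d_e=50$ and two encoder blocks follow the settings used later in the paper):

```python
def adapter_params(d_z=512, d_b=128):
    """One adapter block: down- and up-projection plus LayerNorm gamma/beta."""
    return d_z * d_b + d_b * d_z + 2 * d_z

def lang_adapter_total(n_langs=51, n_layers=12, **kw):
    """Language adapters: one module per language per layer."""
    return n_langs * n_layers * adapter_params(**kw)

def hyper_adapter_total(d_h, d_e=50, n_blocks=2, d_z=512, d_b=128):
    """Extra parameters of the hyper-network (embedding tables omitted)."""
    encoder = d_h * 3 * d_e + n_blocks * 2 * d_h * d_h
    heads = d_h * (2 * d_z * d_b + 2 * d_z)  # H_up, H_down, H_gamma, H_beta
    return encoder + heads
```

For $d_h=612$ this comes out within a few percent of the language-adapter total, consistent with the "roughly 100\%" figure quoted for the base variant; the heads dominate, so the count is controlled almost entirely by $d_h$.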
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{images/loss.pdf}
\caption{Effect of increasing $\bm{d_h}$ on training. Without rescaling the weights,
as we use bigger hyper-networks, training becomes unstable and the loss increases. }
\label{fig:rescaling-loss}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{images/act_std_output.pdf}
\caption{Transformer layer activations as we vary $\bm{d_h}$.}
\label{fig:rescaling-std}
\end{figure}
\input{tables/random_base}
\section{Experimental Setup}
\label{sec:exp-setup}
\paragraph{Data}
We present results on ML50~\cite{Tang2020MultilingualTW},
an English-centric multilingual translation dataset with 230M sentences
between English and 50 other languages. %
We concatenate the En$\rightarrow$X and X$\rightarrow$En directions,
and group languages based on the amount of their training data into
\textsc{high}~($\geq$1M, 14 languages), \textsc{med}~($\geq$100K, 17 languages) and \textsc{low}~($<$100K, 19 languages).
We use SentencePiece\footnotemark~\cite{kudo-richardson-2018-sentencepiece}
to obtain a joint vocabulary of 90k symbols.
Then, we filter out sentence pairs with more than 250 tokens or with length ratio over $2.5$.
\footnotetext{We use the \texttt{unigram} model with coverage 0.99995.}
\paragraph{Sampling}
To obtain a more balanced data distribution
we use temperature-based sampling~\citep{arivazhagan2019massively}.
Assuming that $p_\textsc{l}$ is the probability that a sentence belongs to language $L$,
we sample sentences for $L$ with a probability proportional to $p_\textsc{l}^{1/T}$,
where $T$ is a temperature parameter.
Larger values of $T$ lead to more even sampling across languages.
During preprocessing, we train SentencePiece with $T$=5.
During training we set $T$=2, as we observed that with larger values, models overfit on low-resource languages.
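The sampling scheme can be sketched as follows (toy corpus sizes, illustrative only):

```python
def sampling_probs(sizes, T):
    """Temperature-based sampling (Arivazhagan et al., 2019):
    sample language L with probability proportional to p_L**(1/T)."""
    total = sum(sizes.values())
    unnorm = {L: (n / total) ** (1.0 / T) for L, n in sizes.items()}
    Z = sum(unnorm.values())
    return {L: p / Z for L, p in unnorm.items()}

sizes = {"fr": 1_000_000, "hi": 100_000, "iu": 10_000}  # toy corpus sizes
p_prop = sampling_probs(sizes, T=1)  # proportional to data size
p_T2 = sampling_probs(sizes, T=2)    # flatter distribution, as used in training
```

With $T=2$ the low-resource languages are upsampled and the high-resource ones downsampled relative to proportional sampling.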
\paragraph{Model Configuration}
We use the Transformer-Base architecture~\cite{vaswani2017transformer} in most of our experiments,
which has 6 encoder and decoder layers, embedding size of 512,
feed-forward filter size of 2048, 8 attention heads and 0.1 dropout.
To verify the effectiveness of our approach with larger models,
we also consider an experiment with the Transformer-Big configuration, which uses embedding size of 1024,
feed-forward filter size of 4096, 16 attention heads and 0.3 dropout.
In all models, we tie the encoder-decoder embeddings and the decoder output projections~\cite{press-wolf-2017-using,inan2017tying}.
We implemented all our models in Fairseq~\cite{ott2019fairseq}.
\paragraph{Optimization}
We use Adam~\cite{kingma2014Adam} with $\beta_1=0.9$, $\beta_2=0.98$, and $\epsilon=10^{-6}$.
We train Transformer-Base models with a learning rate of $0.004$ for 360k steps,
and Transformer-Big models with a learning rate of $0.001$ for 220k steps,
using a linear warm-up of 8k steps, followed by inverse square-root decay.
All models are trained with large batches of 256k tokens (8k $\times$ 32 V100 GPUs)
and label smoothing~\cite{szegedy2016rethinking} of 0.1.
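A sketch of this learning-rate schedule, assuming the decay is the standard inverse square-root form (the helper below is illustrative, not the training code):

```python
def lr_schedule(step, peak_lr=0.004, warmup=8_000):
    """Linear warm-up to peak_lr, then decay proportional to 1/sqrt(step)."""
    if step < warmup:
        return peak_lr * step / warmup
    return peak_lr * (warmup / step) ** 0.5
```

For example, the rate reaches its peak at step 8k and is back to half the peak at step 32k.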
\paragraph{Evaluation}
During training, we evaluate models every 20k steps and select the checkpoint with the best validation loss,
aggregated across languages.
At test time, we use beam search of size 5.
We evaluate all models using \textsc{bleu}\xspace~\cite{papineni2002bleu}
computed with Sacre\textsc{bleu}~\cite{post-2018-call}.
\paragraph{Baselines}
We compare with strong baselines that incorporate language-specific parameters into MNMT.
We consider two adapter variants that yield significant improvements over dense MNMT models,
namely (monolingual) \textit{language adapters} and \textit{language-pair adapters} and set their bottleneck size to 128.
Given that ML50 contains 51 languages in total,
language adapters require 612 adapter modules ($51\times12$),
whereas language-pair adapters require 1224 (i.e., twice as many).
\paragraph{Hyper-adapter Settings}
We use our proposed hyper-network to generate hyper-adapters
with \textit{identical} architecture as their regular adapter counterparts.
We consider three hyper-network variants in our experiments:
\textit{base} ($\bm{d_h}=612$),
\textit{small} ($\bm{d_h}=204$)
and \textit{tiny} ($\bm{d_h}=102$).
They contain roughly 100\%, 33\% and 17\% of the parameters of language adapters\footnotemark, respectively.
We set the size of the language and layer embeddings to 50 and use 2 layers in the hyper-network encoder.
\footnotetext{Or 50\%, 17\% and 8\% w.r.t. language-pair adapters}
\section{Results}
\label{sec:results-base}
\input{tables/random_big}
\paragraph{Main Results}
Table~\ref{table:random-ml50-base} shows our results.
Hyper-adapters-base
consistently outperforms both regular adapter variants in all directions,
while having the same parameter count as lang-adapters and half the parameter count of pair-adapters.
We also find that our smaller variants yield very competitive results to regular adapters,
while being more parameter efficient.
Hyper-adapters-small
outperforms both regular adapter variants with fewer parameters,
and hyper-adapters-tiny yields comparable results with only 1/6th and 1/12th
of the capacity of lang-adapters and pair-adapters, respectively.
In the En$\rightarrow$X directions,
hyper-adapters-base outperforms lang-adapters by 0.9 BLEU and pair-adapters by 0.7 BLEU.
Interestingly, we see gains even in high-resource settings up to +1.2 BLEU,
although regular adapters have dedicated capacity for these language(-pairs). %
In X$\rightarrow$En,
hyper-adapters-base has smaller improvements on medium- and high-resource languages,
but we observe improvements of +1.2 BLEU on low-resource languages.
We hypothesize that the lower improvements on X$\rightarrow$En compared to En$\rightarrow$X are partly due to language specific capacity being more valuable when decoding into many different languages.
\paragraph{Regular Adapters}
We discover interesting trade-offs between the regular adapter variants.
Pair-adapters are better in En$\rightarrow$X,
which suggests that it is beneficial to have dedicated capacity
for encoding the source-side of each En$\rightarrow$X pair.
By contrast, language-adapters are stronger in X$\rightarrow$En.
We believe this is because the (single) decoder-side English adapter benefits from observing all the target-side English data,
unlike the separate X-En adapters
that see only the target-side English data of each pair.
However, hyper-adapters enjoy the best of both approaches, while being more efficient.
\paragraph{Convergence}
In Figure~\ref{fig:val-loss},
we compare the validation loss curves of each adapter variant with our hyper-adapters-base variant, which has the same size as lang-adapters.
We mark the point at which each variant reaches the best loss of lang-adapters.
First, we observe that hyper-adapters converge to the best lang-adapters loss at half the number of steps
(87K-vs-174K).
This shows that assuming a fixed parameter budget, hyper-adapters can significantly reduce training time.
We also find that regular adapters suffer from overfitting,
in particular pair-adapters.
We suspect this is because using the same capacity for all languages is suboptimal.
\citet{bapna-firat-2019-simple} proposed to use bottleneck sizes proportional to the available training data of a given language pair, which requires tuning.
By contrast, hyper-adapters automatically learn to allocate their available capacity as needed.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{images/convergence+pair_360K.pdf}
\caption{Validation losses of adapter variants.
We mark when each variant reaches the best loss of lang-adapters.}
\label{fig:val-loss}
\end{figure}
\paragraph{Large-Scale Models}
We also evaluate models using the Transformer-Big architecture.
In these experiments, we set the bottleneck size in all adapter and hyper-adapter variants to 256.
We report results in Table~\ref{table:random-ml50-big}.
Overall, we observe similar trends across models as with the Transformer-Base architecture,
although the gains of hyper-adapters are smaller.
We believe this is because we only scale up the main network,
while keeping constant the amount of training data.
This mitigates the negative interference by reducing the capacity bottleneck, and leaves less room for improvement for language-specific modules, such as hyper-adapters.
To our surprise, we find that hyper-adapters-base with the Transformer-Base architecture
(Table~\ref{table:random-ml50-big}) achieves comparable results to the Transformer-Big baseline,
while having significantly fewer parameters (173M-vs-269M)
and a smaller computation cost.
This suggests that hyper-adapters are more effective for addressing negative interference
than naively scaling up dense networks.
\section{Analysis}
\label{sec:analysis}
\input{tables/analysis_lang_swap.tex}
\subsection{(Hyper-)Adapter Language Relatedness}
\label{sec:adapter-relatedness}
We design a probe (Table~\ref{table:analysis-lang-swap}),
that explicitly compares the ability of regular-vs-hyper adapters to encode language relatedness.
At test time,
instead of using the adapters of the original source language,
we activate the adapters of another similar, or distant, language.\footnotemark
\footnotetext{For hyper-adapters, we change the source language-id $\bm{s}$.}
We focus on X$\rightarrow$En,
as we found that changing the target language produced very low BLEU scores, making comparisons unreliable.
We select 4 low-resource languages which have a similar high-resource neighbour in our dataset,
namely \{af$\rightarrow$nl, pt$\rightarrow$es, gl$\rightarrow$pt, uk$\rightarrow$ru\}.
Also, we consider replacement with ``zh'',
which is high-resource but distant to all 4 source languages.
When using related languages,
hyper-adapters suffer less than regular adapters,
as they recover a larger fraction (62\%) of their original BLEU.
Pair-adapters yield worse results than lang-adapters,
presumably due to having weaker target-side (X-En) adapters.
When using an unrelated language, hyper-adapters suffer the most.
These findings further support that our hyper-networks encode
language relatedness.
\subsection{The Role of Data Redundancy}
\label{sec:data-redundancy}
We have hypothesised that our hyper-network exploits similarities (i.e., redundancies) in the data,
to produce similar adapters for similar languages and avoid encoding redundant features.
This implies that hyper-adapters would ``degenerate'' into regular adapters
if the training data contained only distant languages.
To test this hypothesis, we create two different splits out of ML50,
with and without similar languages.
First, we select 14 (+English) relatively unrelated languages and create ML15\footnotemark.
Then, we create another version of ML15,
that emulates a dataset with similar languages.
We split the data of each language into smaller parts (e.g., $\text{fr}_1, \text{fr}_2, \ldots, \text{fr}_N$)
which we treat as different languages,
resulting in 47 artificial languages.
Table~\ref{table:analysis-artificial} shows the results.
We observe that in the original ML15 version,
regular- and hyper-adapters achieve similar results.
In contrast, in the fragmented ML15 version,
regular adapters suffer significantly as they cannot share information,
unlike hyper-adapters that are unaffected.
These findings show that the gains of hyper-adapters are
proportional to the redundancies in the training data.
Thus, we expect that the gap between regular- and hyper-adapters
will grow as the number of related languages, or their data, grows.
Note that, as the artificial ML15 has more languages,
regular adapters require more layers and thus more parameters.
\footnotetext{The languages of ML15 are \{en, fr, zh, hi, lt, iu, et, ro, nl, it, ar, tr, km, vi, uk\}.
We include more details in Appendix~\ref{sec:ml15-stats}.}
\input{tables/analysis_artificial.tex}
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{images/langs_umap.pdf}
\caption{Plot of hyper-network language embeddings.}
\label{fig:lang-embeddings-umap}
\end{figure}
\subsection{Hyper-Network Embeddings}
\label{sec:emb-umap}
In Figure~\ref{fig:lang-embeddings-umap}, we visualize the language embeddings
of our hyper-adapters-tiny variant using UMAP~\cite{mcinnes2018umap}.
We observe that the hyper-network embeds languages that belong to the same family close to each other.
This is another piece of evidence that hyper-adapters encode language relatedness.
\subsection{Supervised vs. Zero-Shot Translation}
In this analysis (Figure~\ref{fig:lang-embeddings-umap}),
we compare the zero-shot capabilities of different hyper-adapter variants.
Specifically,
we mask either the source or target language in the hyper-network's input $(\bm{s},\bm{t},\bm{l})$
when generating the encoder or decoder hyper-adapters.
We train models for 160K steps to reduce training time.
This means that hyper-adapters have not fully converged (Figure~\ref{fig:val-loss}), unlike regular adapters.
However, we are interested in comparing different hyper-adapter variants to each other,
and include lang-adapters for context. Note that pair-adapters cannot do direct zero-shot translation by definition.
Hyper-adapters fail at direct zero-shot translation
when using both $\bm{s}$ and $\bm{t}$ in the hyper-network for both the encoder and decoder hyper-adapters.
Masking $\bm{s}$ in decoder hyper-adapters yields a significant boost,
which is further increased by masking $\bm{t}$ in encoder hyper-adapters.
This reveals a trade-off between supervised and zero-shot translation.
Removing the target language information from encoder hyper-adapters harms En$\rightarrow$X translation,
which is reflected on the (English) pivot-based zero-shot translation.
However, removing the source language information from decoder hyper-adapters
has no effect on supervised translation, although it improves zero-shot.
These results suggest that the ``enc=$(\bm{s},\bm{t})$ dec=$(\bm{s},\bm{t})$'' variant behaves similar to language-pair adapters, which cannot do zero-shot,
whereas the ``enc=$(\bm{s})$ dec=$(\bm{t})$'' variant behaves similar to language-adapters.
In our experiments, we use the ``enc=$(\bm{s},\bm{t})$ dec=$(\bm{t})$'' variant,
which strikes a good balance.
We also explore adding dropout inside the hyper-network layers,
to produce more robust representations $\bm{h}$,
but not in the generated hyper-adapters.
We observe small negative effects in the supervised setting,
but mixed results in the zero-shot setting.
In particular, in the ``enc=$(\bm{s},\bm{t})$ dec=$(\bm{t})$'' variant,
dropout significantly improves zero-shot.
These results suggest that there is room for improvement in this direction,
but we leave this for future work.
\input{tables/analysis_zero.tex}
\section{Related Work}
\label{sec:related}
\citet{platanios-etal-2018-contextual}
explored an idea similar to hyper-networks in MNMT
with the so-called ``contextual parameter generation'' to promote information sharing across languages,
by generating the weights of an RNN-based~\cite{Bahdanau2014} MNMT model from language embeddings.
By contrast, we consider a hybrid approach that generates only a few (language-specific) modules,
instead of generating all the layers of a Transformer model,
which would introduce a large computational overhead.
Another approach is combining hyper-networks with pretrained models.
In NLU,
\citet{karimi-mahabadi-etal-2021-parameter} generate task-specific adapters from task embeddings.
\citet{tay2021hypergrid} use a hyper-network
to learn grid-wise projections for different tasks.
\citet{ye-ren-2021-learning}
extend text-to-text Transformers~\cite{raffel2020t5} to unseen tasks
by generating adapters from task descriptions.
In multilingual dependency parsing,
\citet{ustun-etal-2020-udapter} generate adapters for the biaffine attention
from language representations in linguistic databases.
\citet{ansell-etal-2021-mad-g} also use linguistic databases for cross-lingual NLU,
and extend~\citet{pfeiffer-etal-2020-mad} by generating language adapters for unseen languages.
Unlike prior work,
(1) we identify and solve optimization issues overlooked by other hyper-network-based methods,
(2) we train (hyper-)adapters jointly with the main-network instead of using them for finetuning,
and (3) we focus on the more complex generation problem of MNMT instead of (simpler) NLU tasks.
\section{Conclusion}
In this work,
we extend the capacity of MNMT models with hyper-adapters,
which are language-specific adapter modules generated from a hyper-network.
By resolving optimization issues not addressed by prior work,
we successfully train large hyper-networks for the challenging generation task of multilingual machine translation (\S\ref{sec:rescaling}).
We show that hyper-adapters consistently outperform other regular adapter variants
across translation directions and model sizes (\S\ref{sec:results-base}),
while improving parameter efficiency.
We also observe computational efficiency gains,
as a smaller Transformer-Base model with hyper-adapters gives similar results to a dense Transformer-Big model, which is computationally more expensive and requires more parameters.
Besides improvements in translation quality,
hyper-adapters achieve faster training convergence as shown in \S\ref{sec:results-base}.
Finally, our analysis shows that, unlike regular adapters,
hyper-networks enable positive transfer across the hyper-adapters of similar languages,
by encoding language relatedness (\S\ref{sec:adapter-relatedness},\ref{sec:emb-umap})
and exploiting redundancies (i.e., language similarities) in the training data (\S\ref{sec:data-redundancy}).
\section*{Acknowledgments}
We thank Angela Fan, Myle Ott, Vedanuj Goswami and Naman Goyal for all their help and advice during this project.
\bibliographystyle{acl_natbib}
\section{Introduction}
Many-body interactions are in general difficult to realize in physical systems used in quantum information processing.
However, there is substantial theoretical interest in Hamiltonians with such interactions.
One motivation to study them comes from a question in complexity theory: which Hamiltonians are capable of universal adiabatic quantum computation (AQC)?
Kitaev~\cite{Kitaev2002Book} showed that the ground state energy problem of the 5-local Hamiltonian is quantum-Merlin-Arthur (QMA)-complete.
This result was later strengthened by Kempe and Regev~\cite{Kempe2003QIC} showing the same for the 3-local Hamiltonian problem.
Finally, by introducing ``perturbative gadgets" to construct a 2-local Hamiltonian whose low energy effective Hamiltonian approximates a given 3-local Hamiltonian, Kempe, Kitaev and Regev~\cite{Kempe2006SIAM} showed that the 2-local Hamiltonian problem is QMA-complete as well.
In order to strengthen these results it is desirable to replace the perturbative gadgets with nonperturbative techniques.
On the more practical side, $k$-local Hamiltonians are necessary to tackle difficult optimization problems like $K$-SAT.
These problems are also valuable in order to test the power of adiabatic quantum algorithms~\cite{Farhi2002arXiv}, because they are classically more challenging than those that can be directly represented with 2-local Hamiltonians only, i.e., without any indirect embedding.
Another area where the need for many-body interactions arises is the application of adiabatic quantum algorithm to quantum chemistry.
Somma et al.~\cite{Somma2002PRA} have shown that fermions can be efficiently simulated using a quantum computer made of qubits only.
In their scheme, the Jordan-Wigner transformation is used to represent the fermionic creation and annihilation operators in terms of the qubit Pauli operators.
However, this transformation maps 2-body interactions between fermions into many-body interactions between qubits of any order.
Bravyi and Kitaev~\cite{Bravyi2002AnnPhys} improved this scheme by finding a different transformation that produces interactions between qubits, the number of which scales only logarithmically in the system size.
Still, for sufficiently large systems, it is challenging to reduce these nonlocal Hamiltonians to 2-local Hamiltonians using perturbative gadgets.
Finally, we note that many-body interactions are necessary to implement some error correction schemes designed for adiabatic quantum computation~\cite{Jordan2006PRA} and quantum annealing~\cite{Young2013PRX, Pudenz2014Nat}.
The basic idea is to encode each logical qubit using $k$ physical qubits in such a way that certain types of errors can be suppressed and/or corrected.
An unavoidable consequence of the encoding is that some logical qubit operators are mapped to $k$-local operators, the implementation of which requires many-body interactions between the physical qubits.
In this paper we develop a nonperturbative technique to generate effective many-body interactions using Hamiltonians with fewer-body interactions.
Our nonperturbative technique differs from the perturbative gadgets in several other aspects.
First, unlike perturbative gadgets, each $k$-body interaction term requires the addition of a single ancillary qubit as opposed to $k$\, qubits. Second, the target Hamiltonian is not necessarily embedded in the low energy subspace of the physical Hamiltonian.
Finally, the technique described in this paper is not guaranteed to reduce the locality of an arbitrary Hamiltonian.
It works best for Hamiltonians with few many-body interactions involving arbitrary number of qubits.
The paper is organized as follows.
In Sec.~\ref{sec:def} we establish the notation and state in detail the problem we address.
In Sec.~\ref{sec:derivation} we present the derivation of the general theory.
Special applications to AQC are presented in Sec.~\ref{sec:AQCapplication}.
Sec.~\ref{sec:multiple} describes how multiple many-body interactions can be handled.
The important question of how this technique can be implemented is discussed in Sec.~\ref{sec:implementation}.
We conclude with brief remarks in Sec.~\ref{sec:conclusion}.
\section{Definitions and Conventions}
\label{sec:def}
Let $\mathcal{H}^\text{comp}$ represent the Hilbert space of the quantum system we are interested in, which consists of $N$ qubits.
We will refer to this as the computational Hilbert space.
A convenient basis for states in this Hilbert space can be constructed using tensor product of single qubit basis states
\begin{align}
\label{opbasis}
\ket{n} \equiv \ket{n_1} \otimes \ket{n_2} \otimes \dots \otimes \ket{n_N},
\end{align}
where $n = (n_1,n_2,\dots,n_N)$ and $n_i \in \{ 0,1 \}$.
Any state in $\mathcal{H}^\text{comp}$ can be represented as:
\begin{align}
\label{wavefunction}
\ket{\psi(t)} &= \sum_{n \in \mathcal{B}_N} c_n(t) \ket{n},
\end{align}
where $\mathcal{B}_N$ is the space of $N$ binary numbers.
The tensor product of single qubit Pauli operators is a convenient basis for the space of Hermitian operators in $\mathcal{H}^\text{comp}$:
\begin{align}
\label{def:op}
\wh{\mathcal{O}}^A \equiv \wh{\sigma}_1^{A_1}\otimes \wh{\sigma}_2^{A_2} \otimes \dots \otimes \wh{\sigma}_N^{A_N},
\end{align}
where $A = (A_1,A_2,\dots,A_N)$ and $A_i \in \{ 0,x,y,z\}$.
By convention $\wh{\sigma}^0 \equiv \wh{\mathds{1}}$.
Note that the $\wh{\mathcal{O}}^A$ are both Hermitian and unitary.
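These basis operators are simple to construct and check numerically. The following Python/NumPy sketch is illustrative only (the string \texttt{'xyz00'} encodes the tuple $A$); it builds $\mathcal{O}^A$ as a Kronecker product and verifies that it is Hermitian and unitary:

```python
import numpy as np
from functools import reduce

# Single-qubit Pauli matrices, indexed as in the definition of O^A.
PAULI = {
    '0': np.eye(2, dtype=complex),
    'x': np.array([[0, 1], [1, 0]], dtype=complex),
    'y': np.array([[0, -1j], [1j, 0]], dtype=complex),
    'z': np.array([[1, 0], [0, -1]], dtype=complex),
}

def basis_op(A):
    """O^A = sigma^{A_1} (x) ... (x) sigma^{A_N} for A a string over {0,x,y,z}."""
    return reduce(np.kron, (PAULI[a] for a in A))

def locality(A):
    """Number of qubits O^A acts on nontrivially (its degree of locality)."""
    return sum(a != '0' for a in A)

O = basis_op('xyz00')                     # the 3-local example from the text, N = 5
assert np.allclose(O, O.conj().T)         # Hermitian
assert np.allclose(O @ O, np.eye(2**5))   # unitary (indeed an involution)
assert locality('xyz00') == 3
```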
The Hamiltonian of the system of interest, i.e. the computational Hamiltonian, can be expressed as:
\begin{align}
\label{hamdecomp}
\wh{H}(t) = \sum_{A \in \mathcal{Q}_N} \wh{H}^{A}(t) \equiv \sum_{A \in \mathcal{Q}_N} h_A(t) \wh{\mathcal{O}}^A,
\end{align}
where $\mathcal{Q}_N$ is the set of strings of $N$ symbols, each taking one of the values $0,x,y,z$.
In the rest of the paper we will drop the hats on the operators and suppress the time dependence of variables for brevity of notation.
An operator is $k$-local if it acts non-trivially on at most $k$ qubits. In terms of the basis operators $\mathcal{O}^A$ this amounts to having at most $k$ nonzero entries in the tuple $A$.
As an example, let $A = (x,y,z,0,0,\dots,0)$. Then $\mathcal{O}^A = \sigma_1^x \otimes \sigma_2^y \otimes \sigma_3^z \otimes \mathds{1}_4\otimes \dots \otimes \mathds{1}_N$ is $3$-local.
A Hamiltonian that is a sum of many terms is said to be $k$-local if each term in the sum acts on at most $k$ qubits.
A single term of the Hamiltonian that acts on $k$ qubits will be referred to as a $k$-body interaction.
In practice, $1$-local terms are the simplest to realize in the laboratory. These are sometimes referred to as local ``fields".
$2$-local Hamiltonians can also be realized experimentally, albeit with relatively more effort. They are sometimes referred to as ``interactions".
However, it is quite a challenge to engineer $k$-local interactions for $k>2$.
Most efforts in this direction involve embedding the computational Hamiltonian in the low energy sector of another Hamiltonian living in a larger Hilbert space.
``Perturbative Gadgets" \cite{Kempe2006SIAM} are very useful in this regard.
For a $k$-body interaction they require $k$ ancilla qubits.
However, their use is limited to small $k$ because the nonlocal interaction emerges at the $k$'th order in perturbation theory~\cite{Jordan2008PRA}.
A nonperturbative embedding of $k$-body interactions into $2$-local Hamiltonians has been developed for the special case when all terms in the Hamiltonian share the same basis~\cite{Biamonte2008PRA}.
In this manuscript, we describe a different nonperturbative technique which can be applied to any arbitrary Hamiltonian, albeit with varying success.
In the rest of the paper we consider a computational Hamiltonian with a single many-body interaction term singled out:
\begin{align}
\label{originalH}
H = \sum_{A\ne \chi} H^A + H^\chi \equiv H^* + H^\chi\, ,
\end{align}
where $H^\chi = h_\chi \mathcal{O}^\chi$ is $k$-local.
The technique developed here will be most useful whenever $H^\chi$ is a $k$-body interaction that is difficult to realize experimentally and $H^*$ is a $2$-local Hamiltonian.
However, the technique is applicable to any Hamiltonian and the split in Eq.~\eqref{originalH} can be entirely arbitrary.
For example, $H^\chi$ does not need to be the most nonlocal term in the Hamiltonian and there can be multiple many-body interactions.
We ask the following question: is there another Hamiltonian $\wt{\ham}$, possibly living in a larger Hilbert space, with a proper subspace in which it is identical to $H$?
We refer to $\wt{\ham}$ as the physical Hamiltonian.
Note that we do not require $H^*$ to be $2$-local as $H$ might have multiple many-body interactions.
Our goal is to find a different system the dynamics of which is simply related to that of the computational system and in which the $k$-local term $H^\chi$ is replaced by a less nonlocal interaction.
\section{Derivation}
\label{sec:derivation}
\subsection{Physical vs computational Hilbert Space}
We enlarge the Hilbert space of $N$ qubits by adding an ancilla qubit to obtain $\mathcal{H}^\text{phys} = \mathcal{H}^\text{anc} \otimes \mathcal{H}^N$.
The goal is to design the physical Hamiltonian $\wt{\ham}$ such that the physical Hilbert space splits into two subspaces, i.e. $\mathcal{H}^\text{phys} = \mathcal{H}^\text{comp} \oplus \mathcal{H}^\text{irr}$, such that within $\mathcal{H}^\text{comp}$ the dynamics evolves according to the computational Hamiltonian $H$ we desire to implement.~\footnote{For consistency of notation we should have used $H^\text{comp}$ for the computational Hamiltonian and $H^\text{phys}$ for the physical Hamiltonian, however, in order to avoid cumbersome notation we opted for $H$ and $\wt{\ham}$ instead.}
The other subspace $\mathcal{H}^\text{irr}$ will be referred to as the irrelevant Hilbert space, because the dynamics there is not generated by the computational Hamiltonian.
The dynamics in the physical Hilbert space is governed by the Hamiltonian $\wt{\ham}$, which we wish to determine and in which the nonlocal term $H^\chi$ proportional to $\mathcal{O}^\chi$ will be replaced with a less nonlocal term.
A convenient basis for the states in the physical Hilbert space is
\begin{align}
\label{stdbasis}
\ket{ \tilde{n} } \equiv\ket{n_0}\otimes \ket{n} = \ket{n_0} \otimes \ket{n_1} \otimes \ket{n_2} \otimes \dots \otimes \ket{n_N},
\end{align}
where the first entry is dedicated to the ancilla qubit.
Any state in the physical Hilbert space can be written as:
\begin{align}
\ket{\wt{\psi}(t)} &= \sum_{\tilde{n} \in \mathcal{B}_{N+1}} c_{\tilde{n}}(t) \ket{\tilde{n}}.
\end{align}
The physical Hilbert space has twice the dimension of the computational Hilbert space that we wish to simulate.
We embed the dynamics in a subspace of the enlarged Hilbert space whose dimension matches that of the original $N$-qubit Hilbert space.
In order to describe this subspace we need the following definitions:
\begin{align}
\label{basis}
\ket{n_{\pm}} &\equiv \mathcal{U} \left(\frac{\ket{0} \pm \ket{1}}{\sqrt{2}}\otimes \ket{n}\right) \equiv \mathcal{U} \left( \ket{\pm} \otimes \ket{n} \right),
\end{align}
where $\mathcal{U}$ is a unitary operator (to be determined later) effecting a change of basis and $\ket{\pm}\equiv(\ket{0}\pm \ket{1})/\sqrt{2}$ are the eigenvectors of the $\sigma_x$ operator.
The states $\ket{n_\pm}$ form a complete basis for the $(N+1)$-qubit system.
More precisely, the $\ket{n_+}$ form a complete basis for $\mathcal{H}^\text{comp}$ and the $\ket{n_-}$ a complete basis for $\mathcal{H}^\text{irr}$.
Using the definition \eqref{basis} we define projectors onto the two subspaces of the physical Hilbert space
\begin{align}
P_\pm &= \sum_{n \in \mathcal{B}_N} \ket{n_\pm}\bra{n_\pm}
\end{align}
These projectors satisfy
$P_+ + P_- = \mathds{1}_{N+1}$ and
$P_+ P_- = P_- P_+ = 0$.
We want to embed the original dynamics of N qubits governed by the Hamiltonian \eqref{originalH} within the subspace $\mathcal{H}^\text{comp}$.~\footnote{Note that this strategy is quite different from the one used in perturbative gadgets where the embedding is done to the low energy sector of the theory.
In the approach described here energy does not play any role in the determination of the subspace into which the dynamics is embedded.} In other words, we want
\begin{align}
\wt{\ham} &= P_+ H P_+ + P_- H^\text{irr} P_-.
\end{align}
Using the basis \eqref{basis} any state in the relevant subspace $\mathcal{H}^\text{comp}$ can be written as:
\begin{align}
\label{ewavefunction}
\ket{\wt{\psi}(t)} &= \sum_{n \in \mathcal{B}_{N}} c_n(t) \ket{n_+} \, .
\end{align}
Our goal is to find a unitary transformation $\mathcal{U}$ (see Eq.\eqref{basis}) and a Hamiltonian $\wt{\ham}$ without the many-body interaction $\mathcal{O}^\chi$, such that the coefficients in \eqref{ewavefunction} exactly follow those in \eqref{wavefunction}.
In other words the (N+1)-qubit basis state $\ket{n_+}$ will ``stand for" or ``encode" the N-qubit basis state $\ket{n}$.
We write the Hamiltonian in the extended Hilbert space as:
\begin{align}
\wt{\ham} &= \sum_{{\wt{A}} \in \mathcal{Q}_{N+1}} \wt{\ham}^{\wt{A}} \, .
\end{align}
Each $\wt{\ham}^{\wt{A}}$ will be chosen to simulate the dynamics that $H^A$ of \eqref{originalH} generates in the original system.
The Schr\"odinger equation for the computational system in the chosen basis can be written as:
\begin{align}
\label{Schrodinger}
i \hbar \dot{c}_k &= \sum_n H_{kn} c_n \, ,
\end{align}
where $H_{kn} \equiv \bra{k} H \ket{n}$.
Similarly the Schr\"odinger equation for the physical system can be written as:
\begin{align}
\label{eSchrodinger}
i \hbar \dot{c}_{k} &= \sum_{n} \wt{\ham}_{k_+ n_+} c_{n} \, ,
\end{align}
where $\wt{\ham}_{k_+ n_+}\equiv\bra{k_+} \wt{\ham} \ket{n_+}$.
As stated earlier, we want the physical Hilbert space to split into two decoupled Hilbert spaces under the dynamics imposed by the physical Hamiltonian $\wt{\ham}$.
This condition can be expressed as
\begin{align}
\nonumber
\bra{k_+} \wt{\ham} \ket{n_-} &= 0\, , \\
\bra{k_-} \wt{\ham} \ket{n_+} &= 0 \, .
\end{align}
Using Eq.~\eqref{basis} these conditions can be rewritten as
\begin{align}
\nonumber
\bra{0 k} \wt{K} \ket{0 n} - \bra{0 k} \wt{K} \ket{1 n} + \bra{1 k} \wt{K} \ket{0 n} - \bra{1 k} \wt{K} \ket{1 n} &= 0 , \\
\bra{0 k} \wt{K} \ket{0 n} + \bra{0 k} \wt{K} \ket{1 n} - \bra{1 k} \wt{K} \ket{0 n} - \bra{1 k} \wt{K} \ket{1 n} &= 0 ,
\end{align}
where $\wt{K}$ is defined as
\begin{align}
\label{K}
\wt{K} &= \mathcal{U}^\dagger \wt{\ham} \mathcal{U}\, .
\end{align}
Adding and subtracting these two lines we obtain a simple expression for the conditions required for the two subspaces to be decoupled:
\begin{align}
\nonumber
\bra{0 k} \wt{K} \ket{0n} &= \bra{1k} \wt{K} \ket{1 n} \, , \\
\label{sscond}
\bra{0 k} \wt{K} \ket{1n} &= \bra{1k} \wt{K} \ket{0 n}\, .
\end{align}
The first line implies that $\wt{K}$ cannot act on the ancilla qubit with a $\sigma^z$ operator, whereas the second line rules out the $\sigma^y$ operator. Thus
\begin{align}
\label{form1}
\wt{K} &= \mathds{1} \otimes K^* + \sigma^x \otimes K^\chi \, .
\end{align}
The reason for this choice of superscripts will become clear shortly.
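The decoupling conditions \eqref{sscond} can be checked numerically. In the NumPy sketch below (with randomly generated stand-ins for $K^*$, $K^\chi$, and $\mathcal{U}$, all illustrative), the cross matrix elements between the $\ket{n_+}$ and $\ket{n_-}$ subspaces vanish for any Hermitian $K^*$ and $K^\chi$ once $\wt{K}$ has the form \eqref{form1}:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3
d = 2 ** N

def random_hermitian(dim):
    M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    return (M + M.conj().T) / 2

def random_unitary(dim):
    Q, _ = np.linalg.qr(rng.normal(size=(dim, dim))
                        + 1j * rng.normal(size=(dim, dim)))
    return Q

Ks, Kchi = random_hermitian(d), random_hermitian(d)   # stand-ins for K*, K^chi
U = random_unitary(2 * d)                             # arbitrary change of basis
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Ht = U @ (np.kron(I2, Ks) + np.kron(X, Kchi)) @ U.conj().T

plus = np.array([[1], [1]]) / np.sqrt(2)
minus = np.array([[1], [-1]]) / np.sqrt(2)
basis_plus = U @ np.kron(plus, np.eye(d))     # columns are the |n_+>
basis_minus = U @ np.kron(minus, np.eye(d))   # columns are the |n_->

# <k_+| Ht |n_-> = <k_-| Ht |n_+> = 0 for all k, n:
assert np.allclose(basis_plus.conj().T @ Ht @ basis_minus, 0)
assert np.allclose(basis_minus.conj().T @ Ht @ basis_plus, 0)
```

The check works for an arbitrary unitary $\mathcal{U}$ because the conditions \eqref{sscond} constrain only $\wt{K} = \mathcal{U}^\dagger \wt{\ham}\, \mathcal{U}$.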
Next, we determine the conditions for the dynamics in $\mathcal{H}^\text{comp}$ to simulate the dynamics of interest due to the computational Hamiltonian $H$.
This is simply read off from Eqs.~\eqref{Schrodinger} and \eqref{eSchrodinger}:
\begin{align}
&\bra{k_+} \wt{\ham} \ket{n_+} = \bra{k} H \ket{n}\, .
\end{align}
Using the conditions \eqref{sscond} this expression simplifies to
\begin{align}
\label{dyncond}
\bra{0k}\wt{K} \ket{0n} + \bra{0k} \wt{K} \ket{1n} &= \bra{k} H \ket{n} \, ,
\end{align}
which can be further simplified by using \eqref{form1} and \eqref{originalH}:
\begin{align}
\bra{k} K^* \ket{n} + \bra{k} K^\chi \ket{n} = \bra{k} H^* \ket{n} + \bra{k} H^\chi \ket{n}\, .
\end{align}
Since the computational Hamiltonian \eqref{originalH} is split into two parts it is natural to make the assignment $K^* = H^*$ and $K^\chi = H^\chi$, which leads to
\begin{align}
\wt{K} &= \mathcal{U}^\dagger \wt{\ham} \mathcal{U} = \mathds{1} \otimes H^* + \sigma^x \otimes H^\chi \, ,\\
\wt{\ham} &= \mathcal{U} \wt{K} \mathcal{U}^\dagger = \mathcal{U} \left( \mathds{1} \otimes H^* + \sigma^x \otimes H^\chi \right) \mathcal{U}^\dagger \equiv \wt{\ham}^* + \wt{\ham}^{\tilde{\chi}}\, .
\label{Ht}
\end{align}
Using Eq.~\eqref{Ht} we can also calculate the matrix elements of the Hamiltonian in the irrelevant subspace $\mathcal{H}^\text{irr}$
\begin{align}
\label{Hwrong}
\bra{k_-} \wt{\ham} \ket{n_-} &= \bra{k} (H^* - H^\chi ) \ket{n} \equiv \bra{k} H^\text{irr} \ket{n} \, ,
\end{align}
where we defined the ``irrelevant" Hamiltonian $H^\text{irr}\equiv H^*-H^\chi$ as the original problem Hamiltonian with the sign of the nonlocal term reversed.
Thus, the dynamics in the irrelevant subspace $\mathcal{H}^\text{irr}$ is closely related to the desired dynamics in computational subspace $\mathcal{H}^\text{comp}$.
This also shows that unlike perturbative gadgets, here $\mathcal{H}^\text{comp}$ is in general not the low energy subspace.
Generically, the energy levels of both subspaces are intermingled.
Let us assume that the eigenstates and eigenvalues of the computational and irrelevant Hamiltonian are given by $(\ket{\psi^n_+},E^n_+)$ and $(\ket{\psi^n_-},E^n_-)$ respectively.
Then it is straightforward to show that the eigenfunctions and eigenvalues of the physical Hamiltonian are given by
\begin{align}
\ket{\wt{\psi}_\pm^n} &= \mathcal{U} \left( \ket{\pm} \otimes \ket{\psi_\pm^n}\right)\, , \quad \wt{E}^n_\pm = E^n_\pm\, .
\end{align}
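This spectral structure is easy to verify numerically. The NumPy sketch below (arbitrary example operators; $\mathcal{U}$ is set to the identity, which leaves the spectrum unchanged) checks that the eigenvalues of the physical Hamiltonian are the union of those of $H^* + H^\chi$ and $H^* - H^\chi$:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8                                   # N = 3 computational qubits
M = rng.normal(size=(d, d))
Hs = (M + M.T) / 2                      # stand-in for H*
Z = np.diag([1.0, -1.0])
Ochi = np.kron(np.kron(Z, Z), Z)        # an illustrative O^chi
Hchi = 0.7 * Ochi                       # H^chi = h_chi * O^chi

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
# Physical Hamiltonian with U = identity; a unitary U does not change eigenvalues
Ht = np.kron(I2, Hs) + np.kron(X, Hchi)

spec_phys = np.sort(np.linalg.eigvalsh(Ht))
spec_comp = np.linalg.eigvalsh(Hs + Hchi)   # the E^n_+
spec_irr = np.linalg.eigvalsh(Hs - Hchi)    # the E^n_-
assert np.allclose(spec_phys, np.sort(np.concatenate([spec_comp, spec_irr])))
```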
Our strategy is to ``transfer" the nonlocality associated with the computational Hamiltonian to the unitary transformation $\mathcal{U}$ such that the physical Hamiltonian is free of that many-body interaction.
By demanding that the nonlocal term $\mathcal{O}^\chi$ be absent from $\wt{\ham}$, a nonlocality is introduced to $\mathcal{U}$ as a compensation.
Since the unitary transformation corresponds to a simple change of basis, the nonlocality therein does not present as serious a challenge as a many-body interaction in the Hamiltonian.
We will comment on this further in Sec.~\ref{sec:unitary}.
In particular, we wish $\wt{\ham}^{\tilde{\chi}}$ to be an $r$-body interaction with small $r$.
There are different choices with different trade-offs, and below we treat them separately.
\subsection{Case 1: $\wt{\ham}^{\tilde{\chi}}$ is $1$-local}
\label{sec:1local}
It is possible to choose $\mathcal{U}$ such that the term in $\wt{\ham}$ corresponding to the nonlocal term $H^\chi$ is only $1$-local:
\begin{align}
\label{U1}
\mathcal{U} &= \ket{0}\bra{0} \otimes \mathds{1} + \ket{1}\bra{1}\otimes \mathcal{O}^\chi\, .
\end{align}
It is straightforward to verify that this operator is both unitary and Hermitian.
From \eqref{Ht} we get:
\begin{align}
\nonumber
\wt{\ham} &= \ket{0}\bra{0}\otimes H^* + \ket{1}\bra{1} \otimes \mathcal{O}^\chi H^* \mathcal{O}^\chi \\
\label{Httemp}
&\quad+ h_\chi \left(\ket{0}\bra{1} + \ket{1}\bra{0} \right)\otimes \mathds{1} \, .
\end{align}
We next split the Hamiltonian $H^*$ into two parts, according to whether each term commutes or anti-commutes with the nonlocal operator $\mathcal{O}^\chi$:
\begin{align}
H^* &= \sum_{A\ne \chi} H^A = H^*_\text{comm} + H^*_\text{anti} \, , \\
0 &= [H^*_\text{comm} , \mathcal{O}^\chi] \, , \\
0 &= \{ H^*_\text{anti}, \mathcal{O}^\chi \} \, .
\end{align}
With this definition \eqref{Httemp} becomes
\begin{align}
\label{H1local}
\wt{\ham} &= \mathds{1} \otimes H^*_\text{comm} + \sigma^z \otimes H^*_\text{anti} + h_\chi \left( \sigma^x \otimes \mathds{1} \right) \, .
\end{align}
Note the price that had to be paid in order to eliminate the nonlocal term $H^\chi$: the locality of some other terms $H^*_\text{anti}$ had to be increased by one.
The locality of the terms $H^*_\text{comm}$ that commute with the nonlocal term is unchanged, while the nonlocal term $H^\chi$ itself has been reduced to the $1$-local term $h_\chi \left( \sigma^x \otimes \mathds{1} \right)$.
Intuitively, one can think of the ancilla qubit as a sophisticated bookkeeping tool.
The nonlocal term $\mathcal{O}^\chi$ acting on many system qubits is replaced with a simple spin flipping term $\sigma^x$ acting on the ancilla qubit only.
Thus the state of the ancilla qubit ``keeps track" of the intended applications of the nonlocal term to the system during the evolution.
The quantum nature of this bookkeeping is manifest in the modification of the rest of the Hamiltonian according to commutation rules.
A similar technique has been developed independently by M. R. Geller~\cite{Geller2015arXiv} in the context of the single-excitation subspace method whereby each ancilla controls the application of an arbitrary $n\times n$ unitary to the data.
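The Case-1 construction can be verified end to end on a small example. The NumPy sketch below (illustrative parameters $B$, $J$, $h_\chi$; $N=3$ with $\mathcal{O}^\chi = \sigma_1^z\sigma_2^z\sigma_3^z$) builds $\wt{\ham}$ from Eq.~\eqref{H1local} and checks that its matrix elements in the encoded basis $\ket{n_+}$, with $\mathcal{U}$ from Eq.~\eqref{U1}, reproduce those of $H$:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

def kron(*ops):
    return reduce(np.kron, ops)

# N = 3 computational qubits; H = H* + H^chi with a 3-body term Z(x)Z(x)Z:
B, J, h_chi = 0.9, 0.4, 1.3
H_anti = B * (kron(X, I2, I2) + kron(I2, X, I2) + kron(I2, I2, X))
H_comm = J * kron(Z, Z, I2)
Ochi = kron(Z, Z, Z)          # anticommutes with each X_i, commutes with Z_1 Z_2
H = H_comm + H_anti + h_chi * Ochi

# Physical Hamiltonian, Eq. (H1local): the 3-body term becomes a 1-local field
Ht = kron(I2, H_comm) + kron(Z, H_anti) + h_chi * kron(X, np.eye(8))

# Encoded basis |n_+> = U (|+> (x) |n>), with U from Eq. (U1):
d = 8
U = np.kron(np.diag([1.0, 0.0]), np.eye(d)) + np.kron(np.diag([0.0, 1.0]), Ochi)
plus = np.array([[1.0], [1.0]]) / np.sqrt(2)
P = U @ np.kron(plus, np.eye(d))   # columns are the encoded basis states |n_+>

assert np.allclose(P.conj().T @ Ht @ P, H)   # <k_+| Ht |n_+> = <k| H |n>
```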
\subsection{Case 2: $\wt{\ham}^{\tilde{\chi}}$ is $r$-local}
\label{sec:manylocal}
In this section we eliminate the nonlocal term in favor of an $r$-local term. At first sight this seems counterproductive, but we will point out some cases in which this strategy proves to be advantageous.
Let us consider a decomposition of the nonlocal operator of the form:
\begin{align}
\mathcal{O}^\chi &= \mathcal{O}^{\chi'} \mathcal{O}^{\chi''} = \mathcal{O}^{\chi''} \mathcal{O}^{\chi'} \, .
\end{align}
As an example:
\begin{align}
\nonumber
\mathcal{O}^\chi &= \sigma^x_1\otimes \sigma_2^y \otimes \sigma_3^z \otimes \mathds{1} \otimes \sigma_5^y \, , \\
\nonumber
\mathcal{O}^{\chi'} &= \sigma^x_1\otimes \sigma_2^y \otimes \mathds{1} \otimes \mathds{1} \otimes \mathds{1} \, , \\
\mathcal{O}^{\chi''} &= \mathds{1} \otimes \mathds{1} \otimes \sigma_3^z \otimes \mathds{1} \otimes \sigma_5^y \, .
\end{align}
Next we define the basis transformation as:
\begin{align}
\mathcal{U} &= \ket{0}\bra{0} \otimes \mathds{1} + \ket{1}\bra{1}\otimes \mathcal{O}^{\chi'} \, .
\end{align}
Substituting this into Eq.~\eqref{Ht} we get the Hamiltonian in the extended Hilbert space:
\begin{align}
\label{Htildemany}
\wt{\ham} &= \mathds{1} \otimes H^*_\text{comm} + \sigma^z \otimes H^*_\text{anti} + h_\chi \left( \sigma^x \otimes \mathcal{O}^{\chi''} \right) \, , \\
0 &= [H^*_\text{comm} , \mathcal{O}^{\chi'}] \, , \\
0 &= \{ H^*_\text{anti}, \mathcal{O}^{\chi'} \} \, .
\end{align}
Note that the splitting $H^* = H^*_\text{comm} + H^*_\text{anti}$ depends on the decomposition $\mathcal{O}^\chi = \mathcal{O}^{\chi'} \mathcal{O}^{\chi''}$.
It is preferable for $H^*_\text{anti}$ to have as few terms as possible and those terms to be as local as possible.
Moreover, $\mathcal{O}^{\chi''}$ should be as local as possible.
In general, these demands can be contradictory and there is no unique way to optimize the choice of $\mathcal{O}^{\chi'}$ independent of context.
However, it should be clear that there can be an advantage to using the method of this section as opposed to the previous one, and we will provide an example related to adiabatic quantum algorithms in Sec.~\ref{sec:manyspecial}.
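The Case-2 construction can be checked on the five-qubit decomposition used as the example in this section. The NumPy sketch below (with an illustrative two-term $H^*$, one term commuting and one anticommuting with $\mathcal{O}^{\chi'}$) verifies that rotating $\mathds{1}\otimes H^* + \sigma^x\otimes H^\chi$ by $\mathcal{U}$ built from $\mathcal{O}^{\chi'}$ yields Eq.~\eqref{Htildemany}:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

def pstring(*ops):
    return reduce(np.kron, ops)

# The example decomposition (N = 5):
Ochi  = pstring(X, Y, Z, I2, Y)
Ochi1 = pstring(X, Y, I2, I2, I2)     # O^{chi'}
Ochi2 = pstring(I2, I2, Z, I2, Y)     # O^{chi''}
assert np.allclose(Ochi, Ochi1 @ Ochi2)

# H* with one term commuting and one anticommuting with O^{chi'}:
H_comm = 0.5 * pstring(I2, I2, Z, I2, I2)    # disjoint support -> commutes
H_anti = 0.8 * pstring(Z, I2, I2, I2, I2)    # Z_1 vs X_1 -> anticommutes
Hs = H_comm + H_anti
h_chi = 1.1

# U = |0><0| (x) 1 + |1><1| (x) O^{chi'}
d = 2 ** 5
U = np.kron(np.diag([1.0, 0.0]), np.eye(d)) + np.kron(np.diag([0.0, 1.0]), Ochi1)
K = np.kron(I2, Hs) + h_chi * np.kron(X, Ochi)     # before the rotation by U
lhs = U @ K @ U.conj().T

# Eq. (Htildemany): the k-body O^chi is traded for the smaller O^{chi''}
rhs = (np.kron(I2, H_comm) + np.kron(Z, H_anti)
       + h_chi * np.kron(X, Ochi2))
assert np.allclose(lhs, rhs)
```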
\section{Application to Adiabatic Quantum Algorithms}
\label{sec:AQCapplication}
The discussion in this section will be restricted to combinatorial optimization problems.
The problem is embedded in a ``problem Hamiltonian" which is diagonal in the computational basis
\begin{align}
H^P = \sum_{n \in \mathcal{B}_N} E_n \ket{n} \bra{n} \, .
\end{align}
Being diagonal in the computational basis, the problem Hamiltonian can be written in terms of the action of $\sigma^z$ and $\mathds{1}$ operators only. Thus we can expand:
\begin{align}
\label{Hpexpand}
H^P = h\, \mathds{1} + \sum_i h_i \sigma_i^z + \sum_{i>j} h_{ij} \sigma_i^z \sigma_j^z + \sum_{i>j>k} h_{ijk} \sigma_i^z \sigma_j^z \sigma_k^z + \dots
\end{align}
One then considers a time-dependent Hamiltonian which extrapolates between an ``initial Hamiltonian" $H^0$ (also called ``driver Hamiltonian") and the problem Hamiltonian according to a predetermined schedule
\begin{align}
\label{protocol}
H(t) = f(t) H^0 + g(t) H^P \, ,
\end{align}
where $f(0)=g(\tau)=1$ and $f(\tau)=g(0)=0$ and $\tau$ is the duration of computation.
The adiabatic algorithm works as follows~\cite{Farhi2002arXiv}:
the initial state is prepared to be the ground state of $H^0$.
If the duration of computation is long enough, the adiabatic theorem guarantees that the system will stay arbitrarily close to the instantaneous ground state at all times.
The ground state of the final Hamiltonian encodes the solution of the optimization problem and can be read via a measurement in the computational basis.
The initial Hamiltonian is usually chosen to be:
\begin{align}
\label{H0std}
H_{\text{std}}^0 = \sum_{i=1}^N B_i \, \sigma_i^x \, .
\end{align}
We will refer to this as the ``standard initial Hamiltonian".
In some of the examples below we will also consider modified initial Hamiltonians with many-body interactions which are made out of tensor product of multiple $\sigma^x$ operators.
\subsection{$\sigma^x$ -- only interactions}
\label{sec:xonly}
Consider the nonstandard initial Hamiltonian
\begin{align}
H^0 = H_{\text{std}}^0 + B_0 \left( \sigma_1^{\beta_1} \otimes \sigma_2^{\beta_2} \otimes \dots \otimes \sigma_N^{\beta_N} \right) \equiv H_{\text{std}}^0 + H^\chi \, ,
\end{align}
where $\beta_i\in \{0,x\}$.
We follow the recipe of Sec.~\ref{sec:1local} to simulate the last term with a $1$-local term.
We first need to determine which terms of the Hamiltonian do not commute with $H^\chi$.
It is clear that $H_{\text{std}}^0$ commutes with $H^\chi$.
For the problem Hamiltonian we refer to Eq.\eqref{Hpexpand}.
Whenever an even number of qubits are acted on nontrivially by both $H^\chi$ and a given term of $H^P$, the two operators commute.
When the number of such common qubits is odd, the two operators anti-commute.
For example if
\begin{align}
H^\chi &\propto \sigma_1^x \otimes \sigma_2^x \otimes \mathds{1}_3\otimes \sigma_4^x \otimes \mathds{1}_5
\end{align}
then $h_3 \sigma_3^z$, $h_{12} \sigma_1^z \sigma_2^z$ and $h_{145} \sigma_1^z \sigma_4^z \sigma_5^z$ commute with $H^\chi$, but $h_1 \sigma_1^z$, $h_{23} \sigma_2^z \sigma_3^z$ and $h_{124} \sigma_1^z \sigma_2^z\sigma_4^z$ anti-commute with it.
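This parity rule is straightforward to check numerically. In the NumPy sketch below (Pauli strings written over the alphabet $\{0,x,z\}$, illustrative only), each $\sigma^z$-string is classified by counting the qubits on which it overlaps the $\sigma^x$-string nontrivially:

```python
import numpy as np
from functools import reduce

PAULI = {'0': np.eye(2, dtype=complex),
         'x': np.array([[0, 1], [1, 0]], dtype=complex),
         'z': np.diag([1.0, -1.0]).astype(complex)}

def op(A):
    return reduce(np.kron, (PAULI[a] for a in A))

def overlap(A, B):
    """Qubits on which both strings act nontrivially with differing Paulis."""
    return sum(a != '0' and b != '0' and a != b for a, b in zip(A, B))

Hchi = op('xx0x0')                       # the sigma^x string x_1 x_2 x_4, N = 5
for zA in ['00z00', 'zz000', 'z00zz', 'z0000', '0zz00', 'zz0z0']:
    T = op(zA)
    if overlap('xx0x0', zA) % 2 == 0:
        assert np.allclose(Hchi @ T - T @ Hchi, 0)   # even overlap: commute
    else:
        assert np.allclose(Hchi @ T + T @ Hchi, 0)   # odd overlap: anticommute
```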
We group these terms together and rewrite the problem Hamiltonian as:
\begin{align}
H^P &= H^P_\text{comm} + H^P_\text{anti} \, ,
\end{align}
where $H^P_\text{comm}$ consists of terms that commute with $H^\chi$ and $H^P_\text{anti}$ those that anti-commute with it.
As a result the physical Hamiltonian is given by
\begin{align}
\nonumber
\wt{\ham} &= f(t) \left[ B_0 \left( \sigma^x \otimes \mathds{1} \right) + \mathds{1} \otimes H_{\text{std}}^0 \right] \\
&\hspace{15mm} + g(t) \left[ \mathds{1}\otimes H^P_\text{comm} + \sigma^z \otimes H^P_\text{anti} \right] \\
\label{AQCxxx}
&\equiv f(t) \wt{H}^0_\text{std} + g(t) \wt{H}^P \, .
\end{align}
Notice that by treating the ancilla qubit as the $0$'th qubit $\wt{\ham}$ takes exactly the same form as $H$, the only difference being the number of qubits.
This example shows us that a nonlocal term consisting of $\sigma^x$ operators only can be incorporated into the initial Hamiltonian of an adiabatic quantum algorithm at the price of increasing the nonlocality of some of the terms in the problem Hamiltonian by one but without changing the form of the Hamiltonian.
\subsection{$\sigma^z$ -- only interactions}
\label{sec:zonly}
In this section we consider the standard adiabatic algorithm with $H_{\text{std}}^0$ given by Eq.\eqref{H0std}.
The nonlocal interaction we would like to eliminate is a product of $\sigma^z$ operators only; it is therefore part of the problem Hamiltonian $H^P$:
\begin{align}
H^\chi \propto \sigma_1^{\beta_1} \otimes \sigma_2^{\beta_2} \otimes \dots \otimes \sigma_N^{\beta_N}\, ,
\end{align}
where $\beta_i \in \{0,z\}$. As a result $H^\chi$ commutes with $H^P$ but not with all the terms in $H_{\text{std}}^0$. In fact, for all $i$ such that $\beta_i = z$, the corresponding term $B_i \sigma_i^x$ does anti-commute with $H^\chi$. If we group these terms in a manner similar to the previous section we can rewrite the Hamiltonian as:
\begin{align}
H(t) &= f(t) \left( H^0_\text{comm} + H^0_\text{anti} \right) + g(t) H^P \, .
\end{align}
A derivation analogous to the previous section results in the following Hamiltonian for the physical Hilbert space
\begin{align}
\wt{\ham} &= f(t) \left[ \mathds{1} \otimes H^0_\text{comm} + \sigma^z \otimes H^0_\text{anti} \right] + g(t) \left( \mathds{1} \otimes H^P \right) \, .
\end{align}
In contrast to the previous section this Hamiltonian is not of the same form as the original Hamiltonian.
The difference is the $\sigma^z \otimes \sigma^x$ type interactions in the second term above, i.e., $\sigma^z \otimes H^0_\text{anti}$.
On the other hand, the locality of the physical Hamiltonian is the same as the locality of $H^*$, i.e., the computational Hamiltonian without the many-body interaction term $H^\chi$.
\subsection{The spin glass problem}
In this section we apply the technique described in this note to various adiabatic quantum algorithms for finding the ground state of the spin glass problem.
The spin glass problem Hamiltonian is given by
\begin{align}
\label{spinglass}
H^P = \sum_{i=1}^N h_i \sigma_i^z + \sum_{i>j=1}^N h_{ij} \sigma_i^z \sigma_j^z \, .
\end{align}
\subsubsection{Flip-All Term}
\label{sec:flipall}
A nonlocal interaction of the form given in Sec.~\ref{sec:xonly} that leads to a particularly simple physical Hamiltonian is
\begin{align}
\label{flipall}
H^\chi = B_0 \left( \sigma_1^x\otimes \sigma_2^x\otimes \dots \otimes \sigma_N^x \right) \, .
\end{align}
This is an $N$-body interaction, involving all computational qubits.
When acting on a basis state it flips the orientation of all the spins, hence the title of this section.
Our goal is to simulate the dynamics due to
\begin{align}
H(t) &= f(t) \left( H_{\text{std}}^0 + H^\chi \right) + g(t) H^P
\end{align}
without actually implementing $H^\chi$ in the physical Hamiltonian.
It is clear that all the $2$-local terms in the problem Hamiltonian \eqref{spinglass} commute with $H^\chi$ and so does the initial Hamiltonian of the standard algorithm.
On the other hand, all the $1$-local terms in the problem Hamiltonian anti-commute with $H^\chi$.
This is a special case of Sec.~\ref{sec:xonly} and we can directly read off the physical Hamiltonian:
\begin{align}
\label{flipallsolution}
\nonumber
\wt{\ham}(t) &= f(t) \sum_{i=0}^N B_i \, \sigma_i^x + g(t) \sum_{i>j=0}^{N} \tilde{h}_{ij} \left( \sigma_i^z \otimes \sigma_j^z \right) \\
&= f(t) \wt{H}_\text{std}^0 +g(t) \wt{\ham}^P \, , \\
\tilde{h}_{ij} &= h_{ij} \qquad \text{for } i,j \ne 0 \, , \\
\tilde{h}_{0j} &= h_j \qquad \text{for } j = 1,2,\dots,N \, .
\end{align}
This Hamiltonian has the form of a standard adiabatic algorithm to solve a spin glass problem with vanishing local fields.
The ancilla qubit is coupled to all qubits that are acted on by local fields in the original problem Hamiltonian.
This means that for every spin glass problem of $N$ qubits solved using the algorithm with the modified initial Hamiltonian $H^0_\text{std}+H^\chi$, there is an equivalent $(N+1)$-qubit problem which can be solved with the standard initial Hamiltonian, such that the success rates of the two algorithms are identical.
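The equivalence can be confirmed numerically. The NumPy sketch below (random illustrative couplings, $N=3$, fixed schedule values $f$ and $g$) builds the original Hamiltonian with the flip-all term and the $(N+1)$-qubit Hamiltonian of Eq.~\eqref{flipallsolution}, and checks that the latter reproduces the former in the encoded basis:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2); X = np.array([[0.0, 1.0], [1.0, 0.0]]); Z = np.diag([1.0, -1.0])

def embed(P, i, n):
    """Single-qubit operator P on qubit i (0-indexed) of an n-qubit register."""
    return reduce(np.kron, [P if j == i else I2 for j in range(n)])

rng = np.random.default_rng(2)
N = 3
h = rng.normal(size=N)                        # local fields h_i
Jmat = np.triu(rng.normal(size=(N, N)), 1)    # couplings h_{ij}, i < j
B = rng.normal(size=N + 1)                    # B[0] is the flip-all strength
f, g = 0.6, 0.4                               # schedule values at some fixed t

# Original N-qubit Hamiltonian with the flip-all term, Eq. (flipall):
HP = sum(h[i] * embed(Z, i, N) for i in range(N)) \
   + sum(Jmat[i, j] * embed(Z, i, N) @ embed(Z, j, N)
         for i in range(N) for j in range(i + 1, N))
H0 = sum(B[i + 1] * embed(X, i, N) for i in range(N))
Hchi = B[0] * reduce(np.kron, [X] * N)
H = f * (H0 + Hchi) + g * HP

# (N+1)-qubit standard-form Hamiltonian of Eq. (flipallsolution); ancilla = 0:
HPt = sum(h[i] * embed(Z, 0, N + 1) @ embed(Z, i + 1, N + 1) for i in range(N)) \
    + sum(Jmat[i, j] * embed(Z, i + 1, N + 1) @ embed(Z, j + 1, N + 1)
          for i in range(N) for j in range(i + 1, N))
H0t = sum(B[i] * embed(X, i, N + 1) for i in range(N + 1))
Ht = f * H0t + g * HPt

# Encoded basis |n_+> = U (|+> (x) |n>) with U from Eq. (U1), O^chi = X...X:
d = 2 ** N
Ochi = reduce(np.kron, [X] * N)
U = np.kron(np.diag([1.0, 0.0]), np.eye(d)) + np.kron(np.diag([0.0, 1.0]), Ochi)
P = U @ np.kron(np.array([[1.0], [1.0]]) / np.sqrt(2), np.eye(d))
assert np.allclose(P.T @ Ht @ P, H)           # identical dynamics in H^comp
```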
\subsubsection{A case for Sec.~\ref{sec:manylocal}}
\label{sec:manyspecial}
As mentioned before, there are problems for which the approach of Sec.~\ref{sec:manylocal} works better than that of Sec.~\ref{sec:1local}.
Consider the spin glass problem with the standard initial Hamiltonian modified by adding the following term
\begin{align}
\label{flipallbut}
H^\chi = B_0 \left( \sigma_2^x\otimes \sigma_3^x\otimes \dots \otimes \sigma_N^x \right) \, .
\end{align}
This differs from Eq.\eqref{flipall} by the absence of $\sigma_1^x$ and thus represents a $(N-1)$-body interaction.
If we follow the approach of Sec.~\ref{sec:1local} we
get (again treating the ancilla as the 0'th qubit):
\begin{align}
\nonumber
\wt{\ham}(t) &= f(t) \left(\sum_{i=0}^N B_i \, \sigma_i^x \right) + g(t) \Bigg( h_1 \sigma_1^z + \sum_{i=2}^N h_i \sigma_0^z \sigma_i^z \\& + \sum_{i=2}^N h_{1i}\, \sigma_0^z \sigma_1^z \sigma_i^z + \sum_{i>j=2}^N h_{ij} \sigma_i^z \sigma_j^z \Bigg) \, .
\end{align}
This is a $3$-local Hamiltonian if at least one of the $h_{1i}\ne 0$ for $i=2,\dots,N$.
Alternatively, we can follow the approach of Sec.~\ref{sec:manylocal} with the choice:
\begin{align}
\mathcal{O}^{\chi'} &= \sigma_1^x \otimes \sigma_2^x \otimes \dots \otimes \sigma_N^x \, ,\\
\mathcal{O}^{\chi''} &= \sigma_1^x \otimes \mathds{1}_2 \otimes \dots \otimes \mathds{1}_N \, .
\end{align}
The resulting Hamiltonian is:
\begin{align}
\nonumber
\wt{\ham}(t) &= f(t) \left( B_0 \left(\sigma_0^x \otimes \sigma_1^x \right) + \sum_{i=1}^N B_i \, \sigma_i^x \right) \\
\label{flipallbutsolution}
&\qquad+ g(t) \left( \sum_{i>j=0}^{N} \tilde{h}_{ij} \left(\sigma_i^z \otimes \sigma_j^z\right) \right) \\
\nonumber
&= f(t) \left( B_0 \left( \sigma_0^x \otimes \sigma_1^x \right) + \mathds{1} \otimes H_{\text{std}}^0 \right) +g(t) \wt{\ham}^P \, , \\
\tilde{h}_{ij} &= h_{ij} \qquad \text{for } i,j \ne 0 \, , \\
\tilde{h}_{0i} &= h_i \qquad \text{for } i = 1,2,\dots,N \, .
\end{align}
This differs from Eq.\eqref{flipallsolution} only by the replacement $B_0\, \sigma_0^x \rightarrow B_0 ( \sigma_0^x \otimes \sigma_1^x)$.
Note that \eqref{flipallbutsolution} is only $2$-local.
This example demonstrates how it can sometimes pay off to simulate the many-body interaction with an $r$-body interaction, $r>1$, instead of a simple field, i.e. a $1$-local term.
\section{Multiple Nonlocal Terms}
\label{sec:multiple}
In the previous section we discussed how to eliminate a nonlocal term in a Hamiltonian.
If we wish to eliminate multiple nonlocal terms, the technique can be applied repeatedly.
However, there is a problem with this strategy, because at each step the degree of locality of some terms in the Hamiltonian increases by one.
Consider a Hamiltonian with $N_{tot}$ terms, $N_l$ of which are $2$-local and $N_{nl}$ of which are more than $2$-local, so that $N_{tot} = N_l + N_{nl}$.
In eliminating the $N_{nl}$ nonlocal terms by applying the approach described in this manuscript repeatedly, we may raise the degree of nonlocality of other terms at each step.
In the worst case scenario, we may end up with terms whose degree of nonlocality is $2+N_{nl}$.
A better strategy might be to use perturbative Hamiltonian gadgets at the end of each round to reduce the degree of locality of those terms that have been raised from $2$-local to $3$-local.
Using this hybrid method the use of the perturbative gadgets is restricted to 3-body interactions only, where they work best.
However there is a trade-off.
For each nonlocal term eliminated, many $3$-local terms are created.
There are two possible problems with this.
First, the number of ancilla qubits necessary does not only depend on the degree of nonlocality but possibly also on the number of total qubits in the system.
Thus it might scale very badly.
The second problem is related to the perturbative nature of gadgets. If errors due to different gadgets accumulate coherently we might run into trouble.
\section{Implementation}
\label{sec:implementation}
\subsection{State Preparation}
\label{sec:statepreparation}
In previous sections we showed how to construct an $(N+1)$-qubit system such that the dynamics in a subspace is mathematically identical to that of an $N$-qubit system.
However, we did not address the question of how to use this mapping in an operational sense.
To this end let us consider the state preparation protocol.
A schematic description is given in Fig.~\ref{fig:fullprep}.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.85\linewidth]{fullcircuit}
\caption{A schematic description of the state preparation protocol described in detail in Sec.~\ref{sec:statepreparation}. The top qubit is the ancilla and the rest are system qubits. $H$ acting on the ancilla stands for the Hadamard gate, not to be confused with the Hamiltonian. }
\label{fig:fullprep}
\end{figure*}
In state preparation, the goal is to prepare a target quantum state $\ket{\psi_\tau}$.
In some cases the target state is known ahead of time but in others it is defined as the output of a certain process.
Here we are interested in the latter.
More specifically, we are interested in the evolution of a given initial state $\ket{\psi_0}$ of N qubits for a time $\tau$ according to Hamiltonian dynamics with time-dependent Hamiltonian $H(t)$.
At the final time $\tau$ the desired N-qubit state $\ket{\psi_\tau}$ is obtained.
State preparation can serve as a subroutine of a larger computation or as part of a larger simulation.
Many-body interactions can be used to increase the success rate of adiabatic state preparation by allowing one to explore a larger space of Hamiltonian paths from the initial to the final Hamiltonian.
There is preliminary numerical evidence for this claim for the spin glass problem and the initial Hamiltonian studied in Sec.~\ref{sec:flipall}~\cite{inprep}.
We assume that we have the ability to prepare the N-qubit initial state $\ket{\psi_0}$.
In order to implement the technique described in previous sections, we need to encode this state in the $(N+1)$-qubit Hilbert space using \eqref{ewavefunction}.
First the ancilla qubit is initialized to the superposition state $\ket{+}$.
Then one implements the unitary operator $\mathcal{U}$ on the combined system (the details are discussed in the next section)
\begin{align}
\nonumber
\ket{\wt{\psi}(0)} &= \mathcal{U} \left(\ket{+} \otimes \ket{\psi_0} \right) \\
\label{initialstate}
&= \frac{\ket{0}\otimes \ket{\psi_0} + \ket{1} \otimes \mathcal{O}^\chi\ket{\psi_0}}{\sqrt{2}}\, .
\end{align}
Next, the initial state $\ket{\wt{\psi}(0)}$ is evolved with the Hamiltonian $\wt{\ham}$ of \eqref{H1local}. After a time $\tau$ the following state is obtained:
\begin{align}
\label{finalstate}
\ket{\wt{\psi}(\tau)} &= \frac{\ket{0}\otimes \ket{\psi_\tau} + \ket{1} \otimes \mathcal{O}^\chi\ket{\psi_\tau}}{\sqrt{2}}\, ,\\
&=\mathcal{U} \left(\ket{+} \otimes \ket{\psi_\tau} \right) \, .
\end{align}
One then applies the inverse unitary transformation $\mathcal{U}^\dagger = \mathcal{U}$ to obtain the state:
\begin{align}
\mathcal{U}^\dagger \ket{\wt{\psi}(\tau)} = \ket{+} \otimes \ket{\psi_\tau} \, .
\end{align}
This completes the state preparation protocol.
At this point one can ignore the ancilla qubit altogether, and the rest of the $N$ qubits are in the target state.
The ancilla qubit does not need to be measured but a measurement in the $\ket{\pm}$ basis can help detect some errors that may have occurred.
If the ancilla is found in the $\ket{-}$ state, it is an indication that the N+1 qubit system has been knocked out of the relevant subspace $\mathcal{H}^\text{comp}$ into $\mathcal{H}^\text{irr}$ and the state preparation is not to be trusted.
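The algebra behind \eqref{initialstate}-\eqref{finalstate} can be checked numerically. The following is a minimal sketch (not the code of any actual implementation), assuming a hypothetical two-qubit system with $\mathcal{O}^\chi = \sigma^x \otimes \sigma^x$; it verifies that $\mathcal{U}\left(\ket{+}\otimes\ket{\psi}\right)$ has the encoded form and that $\mathcal{U}$ is its own inverse.

```python
import numpy as np

# Minimal sketch, not the code of any actual implementation: a hypothetical
# two-qubit system with O^chi = sigma_x (x) sigma_x, checking the identities
# U (|+> (x) |psi>) = (|0>|psi> + |1> O^chi |psi>)/sqrt(2)  and  U^2 = 1.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
P0 = np.diag([1, 0]).astype(complex)     # |0><0| on the ancilla
P1 = np.diag([0, 1]).astype(complex)     # |1><1| on the ancilla

O = np.kron(sx, sx)                      # assumed many-body term O^chi
U = np.kron(P0, np.eye(4)) + np.kron(P1, O)

rng = np.random.default_rng(0)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)               # arbitrary system state |psi>
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

encoded = U @ np.kron(plus, psi)
e0, e1 = np.eye(2, dtype=complex)
expected = (np.kron(e0, psi) + np.kron(e1, O @ psi)) / np.sqrt(2)

assert np.allclose(encoded, expected)    # Eq. (initialstate)
assert np.allclose(U @ U, np.eye(8))     # U^dagger = U, so U undoes the encoding
```

Because $\mathcal{U}^2 = \mathds{1}$, the same routine serves for both the encoding step at the beginning and the decoding step at the end of the protocol.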
\subsection{Implementing the Unitary $\mathcal{U}$}
\label{sec:unitary}
Note that this unitary is a simple product of 2-qubit controlled-U gates of the form
\begin{align}
\nonumber
\mathcal{U} &= \ket{0}\bra{0} \otimes \mathds{1} + \ket{1}\bra{1} \otimes \mathcal{O}^\chi \\
\label{Uhow}
&= \prod_{i=1}^N \bigg( \ket{0}\bra{0} \otimes \mathds{1} + \ket{1}\bra{1} \otimes \sigma_i^{\chi_i} \bigg) \, ,
\end{align}
where in the second line we abused the notation by letting $\ket{1}\bra{1} \otimes \sigma_i^{\chi_i} \equiv \ket{1}\bra{1} \otimes \mathds{1}_1 \otimes \cdots \otimes \sigma_i^{\chi_i} \otimes \cdots \otimes\mathds{1}_N$.
The product is over those $i$ such that $\chi_i \ne 0$ only, because all other terms are the identity.
Thus for a $k$-body interaction term $\mathcal{O}^\chi$, $\mathcal{U}$ can be implemented by repeatedly applying $k$ 2-qubit controlled-U gates (more specifically controlled-X,Y,Z gates)~\cite{Nielsen2010Book}.
This can be easily achieved in a gate based (digital) quantum computing model.
The problem of realizing many-body interactions that we are addressing in this paper is more pertinent to the adiabatic quantum computing (AQC) model.
Unlike the gate model, AQC is not built upon the concept of gates.
Yet both paradigms of quantum computing have been shown to be polynomially equivalent in terms of complexity theory~\cite{Aharonov2008SIAM}.
Recently, Hen~\cite{Hen2015PRA} made a connection between these two paradigms by showing that a universal set of quantum gates can be realized within the AQC framework via controlled adiabatic evolutions.
Hence the technique developed in this paper can be applied to AQC by using Hen's quantum adiabatic algorithms for gates as subroutines for implementing the unitary $\mathcal{U}$ at the beginning and at the end of the protocol.
The original method due to Hen requires a single auxiliary qubit. The runtime scales linearly with $k$ and is independent of $N$, the total number of qubits.
Shortcuts to adiabaticity can be used to realize these gates with unit probability at finite time~\cite{Santos2015arXiv}.
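As an illustration of \eqref{Uhow}, the following sketch (with a hypothetical three-qubit $\mathcal{O}^\chi = \sigma^x\otimes\sigma^y\otimes\sigma^z$, i.e.\ $k=3$) checks that the single controlled-$\mathcal{O}^\chi$ gate equals the product of $k$ two-qubit controlled-Pauli gates.

```python
import numpy as np

# Sketch of the decomposition in Eq. (Uhow), not production code: for a
# hypothetical k = 3 interaction O^chi = sigma_x (x) sigma_y (x) sigma_z,
# the controlled-O^chi gate equals a product of k controlled-Pauli gates.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
P0 = np.diag([1, 0]).astype(complex)
P1 = np.diag([0, 1]).astype(complex)

paulis = [sx, sy, sz]                    # the sites with chi_i != 0
N = len(paulis)

def embed(op, i):
    """op on system qubit i, identity on the other system qubits."""
    out = np.array([[1.0 + 0j]])
    for j in range(N):
        out = np.kron(out, op if j == i else np.eye(2, dtype=complex))
    return out

O = np.eye(2**N, dtype=complex)
for i in range(N):
    O = O @ embed(paulis[i], i)          # O^chi as a tensor product

U_direct = np.kron(P0, np.eye(2**N)) + np.kron(P1, O)
U_product = np.eye(2**(N + 1), dtype=complex)
for i in range(N):                       # k successive 2-qubit controlled gates
    U_product = U_product @ (np.kron(P0, np.eye(2**N))
                             + np.kron(P1, embed(paulis[i], i)))

assert np.allclose(U_direct, U_product)
```

The factorization works because the individual controlled-Pauli gates share the same control and act on distinct target qubits, so their product telescopes to $\ket{0}\bra{0}\otimes\mathds{1} + \ket{1}\bra{1}\otimes\mathcal{O}^\chi$.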
\subsection{A special case}
\label{sec:special}
Let us now consider the case when the initial state is invariant under the action of the many-body interaction term, i.e.,
\begin{align}
\mathcal{O}^\chi \ket{\psi_0} = \ket{\psi_0}.
\end{align}
This implies that $ \mathcal{U} \left(\ket{+} \otimes \ket{\psi_0}\right) = \ket{+} \otimes \ket{\psi_0}$.
Thus for this case the first application of the unitary transform prior to the Hamiltonian evolution is not necessary.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.70\linewidth]{shortcircuit}
\caption{A schematic description of the state preparation protocol applicable only to the special case described in detail in Sec.~\ref{sec:special}. The top qubit is the ancilla and the rest are system qubits. $H$ acting on the ancilla stands for the Hadamard gate, not to be confused with the Hamiltonian. After the Hamiltonian dynamics, the ancilla qubit is measured and conditional on the outcome the system qubits are acted on by a different unitary.}
\label{fig:shortcircuit}
\end{figure*}
A schematic description of the state preparation protocol applicable to this case is given in Fig.~\ref{fig:shortcircuit}.
The state after the Hamiltonian evolution is given by \eqref{finalstate}.
In this state the system and ancilla are entangled.
In the previous section we suggested applying the inverse unitary to disentangle them.
Another option is to measure the ancilla qubit first.
If the ancilla is found to be in the $\ket{0}$ state the system qubits are already in the desired target state $\ket{\psi_\tau}$.
If the ancilla is measured to be in the $\ket{1}$ state the system is in the state $\mathcal{O}^\chi \ket{\psi_\tau}$.
This state can be transformed into the target state by acting on it with the unitary operator $\mathcal{O}^\chi$ again.
The advantage of this approach is that $\mathcal{O}^\chi$ can be realized as $k$ successive single qubit unitaries.
\begin{align}
\label{Ophow}
\mathcal{O}^\chi = \prod_{i} \sigma_i^{\chi_i}\, ,
\end{align}
where the product is over the indices with nonvanishing $\chi_i$.
This is to be contrasted with $k$ successive two qubit unitaries necessary to implement $\mathcal{U}$ shown in \eqref{Uhow}.
Since two qubit unitaries are significantly more difficult to implement experimentally than single qubit unitaries the method of this section can provide great simplification whenever applicable.
Adiabatic optimization algorithms, for which both the initial Hamiltonian and the many-body interaction term $\mathcal{O}^\chi$ are made up of $\sigma^x$ operators only, fall under this category.
The initial state can be prepared simply by applying strong local fields to all N+1 qubits, as in \eqref{AQCxxx}.
Moreover, for the purpose of finding the answer to the optimization problem one does not even need to prepare the state $\ket{\psi_\tau}$ itself.
One can perform the final step described in \eqref{Ophow} {\it on paper}.
If the ancilla is found to be in state $\ket{0}$ and the rest of the qubits in a state $\ket{n}$, the outcome of the computation is simply given by $n$.
If, on the other hand, the ancilla is found to be in state $\ket{1}$ and the measurement of the system yields $\ket{n}$, the outcome of the computation is interpreted as $\bar{n}$, such that $\mathcal{O}^\chi \ket{n} = \ket{\bar{n}}$.
Since $\mathcal{O}^\chi$ is assumed to be a tensor product of $\sigma^x$ operators only, such an $\bar{n}$ always exists.
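The classical reinterpretation step can be sketched as follows; the bitmask describing the support of $\mathcal{O}^\chi$ is a hypothetical stand-in for whatever support a particular problem dictates.

```python
# Toy sketch of the "on paper" correction, assuming O^chi is a tensor product
# of sigma^x operators on a given support: sigma^x flips a basis bit, so
# O^chi |n> = |nbar> with nbar = n XOR mask.
def corrected_outcome(n, ancilla, flip_mask):
    """Measured system bitstring n (as an integer); flip the supported
    bits if and only if the ancilla was found in |1>."""
    return n ^ flip_mask if ancilla == 1 else n

mask = 0b0110   # hypothetical support of O^chi (qubits 1 and 2)
assert corrected_outcome(0b1010, 0, mask) == 0b1010   # ancilla |0>: keep n
assert corrected_outcome(0b1010, 1, mask) == 0b1100   # ancilla |1>: read nbar
```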
\section{Conclusion}
\label{sec:conclusion}
In this paper we presented an embedding of an N-qubit Hamiltonian with many-body interactions, into a subspace of an (N+1)-qubit Hamiltonian.
In the simplest case, a many-body interaction term of the computational Hamiltonian is replaced with a spin flip operator $\sigma_x$ acting only on the ancilla qubit.
As a concrete application of our method, in Sec.~\ref{sec:implementation} we discussed a state preparation protocol in detail.
Our technique is nonperturbative and for a class of problems can be used to reduce the degree of nonlocality of the Hamiltonian significantly.
We have discussed how it can be implemented in a hybrid as well as a solely adiabatic quantum computer.
The success of adiabatic quantum algorithms relies heavily on one parameter, $\Delta$, which stands for the minimal spectral gap of the time-dependent Hamiltonian $H(t)$ along the path from $H(0)$ to $H(\tau)$.
Eq.\eqref{protocol} specifies one such path for each choice of the functions $f$ and $g$.
By choosing these wisely, the success rate can be increased significantly~\cite{Roland2002PRA,Lidar2009JoMP}.
However \eqref{protocol} does not represent the most general Hamiltonian path from $H(0)$ to $H(\tau)$.
For instance an arbitrary Hamiltonian can be turned on and off during the time interval $[0,\tau]$, without affecting the endpoints.
Evidence that adding a random local Hamiltonian to the middle of the adiabatic path increases the success probability has been presented in~\cite{Crosson2014arXiv}.
It is conceivable that general paths involving nonlocal Hamiltonians at intermediate times could result in further likelihood of success.
Similarly, using nonlocal initial Hamiltonians might also improve success rate in some cases.
The technique developed in this paper might provide the means to realize such Hamiltonian paths using only few-body interactions.
Nonlocality is one aspect that can make a many-body interaction challenging to realize experimentally at a fundamental level.
A more practical difficulty is to couple a qubit to many other qubits, even via 2-body interactions.
While remedying the former, the technique developed in this paper might exacerbate the latter in a given application.
In the worst case, the ancilla qubit might be required to interact with all of the system qubits (see Sec.~\ref{sec:flipall}).
In most experimental setups it is not feasible to have a complete interaction graph between all qubits.
However, it might be possible to design experiments where a small percentage of nodes (qubits) have very high connectivity.
In such setups Hamiltonians with a few highly nonlocal interactions can be realized using the technique developed in this paper.
Thus our analysis suggests a new design criterion for experiments to the extent that such Hamiltonians are relevant and useful in particular applications.
\vspace{-1mm}
\acknowledgments
We gratefully acknowledge financial support from the Lockheed Martin Corporation under contract U12001C.
We would like to thank Oren Raz, Stephen Jordan, Paul Hess and Kanupriya Sinha for useful discussion.
\section{Introduction} \label{sec:intro}
Galaxy cluster collision shocks occur when two galaxy clusters collide. These shocks are characterized by a relatively low sonic Mach number ($\, M_{\rm s}\,\sim\,1-4$) and an Alfv{\'e}nic Mach number ($\, M_{\rm A}$) of approximately an order of magnitude higher, leading to a plasma-$\beta$ (the ratio of thermal to magnetic energy) of approximately 100.
The nature of these shocks has been studied extensively through cosmological hydrodynamical simulations \citep{ Miniatietal:2000,Ryuetal:2003,Pfrommeretal:2006,Kangetal:2007,Skillmanetal:2008,Hoeftetal:2008,Vazzaetal:2009, Hongetal:2014,SchaalVolker:2015,Hongetal:2015}, but whether such shocks are capable of accelerating particles to relativistic speeds, thereby contributing to the cosmic ray (CR) spectrum, remains an open question.
Although there are indications for \emph{electron} acceleration in merging clusters \citep{vanWeerenetal:2010}, so far, studies of these shocks \citep[e.g.][]{PinzkePfrommer:2010,ZandanelAndo:2014,KangRyu:2018} have failed to find evidence of \emph{proton} acceleration.
If CR acceleration occurred, one would expect to find evidence in the form of diffuse gamma-radiation originating from the collision between CR protons and thermal protons, followed by neutral pion decay.
However, a study using the \emph{Fermi}-LAT instrument \citep{Ackermannetal:2016} failed to find evidence of such gamma-ray emission.
Computational studies indicate that if galaxy cluster collision shocks produce CRs at all, the energy fraction lost to the shock in this fashion is expected to be less than one-tenth of a percent \citep{Vazzaetal:2016}.
Further investigation of the possibility of CR acceleration in galaxy cluster shocks requires that we produce numerical models of the shock that include the necessary physical processes to follow particle acceleration.
Although the strong shock regime has been explored extensively, using PIC, PIC-hybrid and combined PIC-MHD codes \citep[e.g.][]{RiquelmeSpitkovsky:2010,Caprioli13,CaprioliSpitkovski:2014a,Caprioli14b,Guoetal:2014a,Guoetal:2014b,Caprioli15,Baietal:2015,paper1}, as well as the Vlasov-Fokker Planck method \citep{Reville12,Reville13}, the weak shock regime has so far been largely ignored.
\citet{Haetal:2018} used particle-in-cell (PIC) simulations that showed that, depending on the input parameters, low-Mach, high-$\beta$ shocks such as these can accelerate particles, depending on the exact sonic Mach number. For those shocks with $\, M_{\rm s}\,\simeq\,2.25$ or more, \cite{Haetal:2018} found both injection of supra-thermal particles near the shock, as well as the onset of diffusive shock acceleration (DSA), the process that allows particles to gain momentum through repeated shock crossings, also known as Fermi~I acceleration \citep[e.g.][]{Bell:1978,BlandfordOstriker:1978,Drury:1983}.
This process requires the presence of instabilities in the local magnetic field, which can reflect the particles toward the shock.
For shocks with a sonic Mach number of $\, M_{\rm s}\,<\,2.25$, \citet{Haetal:2018} found a small particle injection rate (approximately an order of magnitude lower than found for the higher Mach shocks), but no DSA.
Compared to the analytic prediction based on the Rankine-Hugoniot conditions \citep{EdmistonKennel:1984}, which placed the critical Mach number at $M_{\rm s}\approx1-1.1$, this critical Mach number (2.25) is significantly higher, which may help account for the lack of observed diffuse gamma-rays from ions accelerated in low-Mach, high-$\beta$ ICM shocks \citep{Ackermannetal:2016}.
However, the computational cost of the PIC method puts limits on the ability to follow the process over a long time and constrains the size of the spatial domain.
Therefore, these results do not show whether the DSA process in these shocks is adequate to propel particles to relativistic speeds.
In this paper we continue the investigation of particle acceleration in weak shocks, using a different method: the particle-in-MHD-cell (PI[MHD]C) technique \citep{Baietal:2015,paper1,Mignoneetal:2018,Baietal:2019}, using the same code that was previously used in \citet{vanmarleetal:2017,casseetal:2018,vanmarleetal:2018,paper1,vanMarleetal:2019,vanMarleetal:2019b,vanmarle:2020}.
This method allows us to take advantage of the computational efficiency of grid-based magnetohydrodynamics (MHD) while retaining the ability to follow individual, supra-thermal particles.
We take advantage of this by running all simulations in 2-D, which will allow us to study the behaviour of the shock front and evaluate whether the effectively 1-D nature of the original simulations done by \citet{Haetal:2018} created any artefacts.
Our simulations also follow the interaction over a more extended period, which enables us to judge how the growing instabilities influence the morphology of the gas.
\subsection{Requirements for diffusive shock acceleration}
The DSA process allows a shock to accelerate particles to high relativistic speeds. However, before this process can begin, there has to be an injection of supra-thermal particles near the shock itself.
This occurs at the shock front, with a fraction of the particles being reflected, rather than passing through the shock. The PI[MHD]C method cannot model this part of the process.
In a PI[MHD]C simulation, shocks are modelled with MHD, and therefore regarded as discontinuities.
The internal structure of the shock, which is far more complex \citep[e.g.][and references therein]{Treumann:2009}, is not part of an MHD model.
Instead, we assume that the injection process is effective, and use a parametrized injection mechanism that determines what percentage of the gas passing through the shock becomes supra-thermal. We base our injection rates on the results by \cite{Haetal:2018}, which involved the use of PIC simulations, to determine the injection rate as a function of the sonic Mach number.
Once the supra-thermal particles have been injected, further acceleration depends on their ability to generate instabilities as they travel into the upstream medium.
If these instabilities are sufficiently strong, they will create deviations in the direction of the local magnetic field, which allows the magnetic field to reflect particles back toward the shock, starting the DSA process \citep{Sundbergetal:2016}. This process is self-sustaining and self-amplifying. The higher the momentum of the particles, the more effective they are at exciting instabilities, which, in turn, means that the efficiency of the acceleration process increases. However, the strength of the instabilities (and whether they occur at all) depends on the shock's sonic and Alfv{\'e}nic Mach numbers.
\subsection{Layout}
This paper is structured as follows: In Sect.~\ref{sec:method} we discuss the numerical method as well as the general setup of the simulations. Next, we present a series of simulations, based on the models by \citet{Haetal:2018}, the results of which are shown in Sect.~\ref{sec-result}. Then, in Sect.~\ref{sec-longrun} we explore the long term effects of the particle acceleration process on a single, large-scale simulation.
Finally, in Sect.~\ref{sec-conclusions}, we discuss our results and future plans.
We also include a brief description of the equations used in our code in Appendix~\ref{sec-app1}, as well as an analytical approximation of the various time-scales involved in the perturbations in Appendix~\ref{sec-app2}.
\begin{figure*}
\centering
\mbox{
\includegraphics[width=\columnwidth]{Mach2p0_200.jpeg}
\includegraphics[width=\columnwidth]{Mach2p15_200.jpeg}}
\mbox{
\includegraphics[width=\columnwidth]{Mach2p25_200.jpeg}
\includegraphics[width=\columnwidth]{Mach2p5_200.jpeg}}
\caption{Morphology of the gas at the end of the simulation after $t\,=\,20\,000\,R_{\rm l}/c$, for $\, M_{\rm s}\,=\,2.0$ (top left), $2.15$ (top right), $2.25$ (bottom left), and $2.5$ (bottom right), showing, from top to bottom, the magnetic field strength relative to the undisturbed upstream magnetic field, the density of the supra-thermal particles relative to the thermal gas density, and the thermal gas density relative to the undisturbed upstream thermal gas density. The magnetic field lines are plotted on top of the thermal gas density plot. This figure zooms in on the shock; the actual simulation box extends from $x=-90$ to $x=90$. Although all simulations show a disturbance of the magnetic field strength (top panel), none show a significant disturbance of the \emph{direction} of the magnetic field lines, which is required for particle acceleration.}
\label{fig:results1}
\end{figure*}
\begin{figure*}
\centering
\mbox{
\includegraphics[width=\columnwidth]{Mach2p85_200.jpeg}
\includegraphics[width=\columnwidth]{Mach3p2_200.jpeg}}
\mbox{
\includegraphics[width=\columnwidth]{Mach3p5_200.jpeg}
\includegraphics[width=\columnwidth]{Mach4p0_200.jpeg}}
\caption{Similar to Fig.~\ref{fig:results1} but for simulations with $\, M_{\rm s}\,=\,2.85$ (top left), $3.2$ (top right), $3.5$ (bottom left), and $4.0$ (bottom right). Unlike the simulations with lower $\, M_{\rm s}$, these models show disturbance of the magnetic field lines, indicating that particle acceleration can take place.}
\label{fig:results2}
\end{figure*}
\section{Numerical method}
\label{sec:method}
\subsection{Code}
We use the combined PIC-MHD or Particle in MHD cells (PI[MHD]C) method described in \citet{Baietal:2015,paper1}. This method is based on the assumption that the gas can be treated as a primarily thermal gas, with a relatively small supra-thermal component. The thermal gas is treated as a fluid, using the MHD method, whereas the supra-thermal component is treated kinetically.
The interaction between the two components is treated in a self-consistent manner.
The force generated by the electromagnetic field of the thermal gas is part of the equation of motion of the particles, as is the opposite force on the thermal gas generated by the particles. Furthermore, the effect of a charge and current density resulting from the presence of the non-thermal particles is incorporated in the MHD-equations through a rewritten form of Ohm's law \citep{Baietal:2015},
\begin{equation}\label{Eq:Ohmslaw}
c\, \mathbf E = -\left((1-R)\, \mathbf v +R\, \mathbf u_{\rm part}\right)\times\, \mathbf B
\end{equation}
with $c$, the speed of light, $\, \mathbf E$ the electric field, $\, \mathbf v$ the thermal plasma velocity, $\, \mathbf u_{\rm part}$ the average velocity of the supra-thermal particles, obtained through interpolation, $\, \mathbf B$ the magnetic field and $R$ the supra-thermal particle charge density relative to the total charge density.
$\, \mathbf u_{\rm part}$ and the supra-thermal particle charge density are obtained by mapping the particle distribution in space and momentum space onto the grid at the start of each time step.
Time step control is maintained through the Courant{-}Friedrichs{-}Lewy (CFL) condition, which is applied to both the thermal gas and the particles to ensure numerical stability.
We consider the gas to be non-collisional. Therefore, there is no kinetic interaction between the particles, nor a friction force generated by the motion of the particles through the thermal gas. The only interaction between the thermal and supra-thermal components is through the electromagnetic field (the code does not include a CR-pressure term).
The main advantage of the PI[MHD]C approach is that it allows for faster calculations than either the PIC or the PIC-hybrid method, both of which require large particle populations to simulate the thermal gas, whereas the PI[MHD]C method achieves this by solving the MHD equations for the thermal gas, which is a less computationally expensive approach.
Our code is based on the {\tt MPI-AMRVAC} code \citep[e.g.][]{vanderHolstetal:2008,Keppensetal:2012}; a fully conservative, finite-volume MHD-code that solves the conservation equations of magnetohydrodynamics on an {\tt OCTREE}-based adaptive mesh \citep{ShephardGeorges:1991} and is MPI-parallel.
To this code we have added a new set of conservation equations \citep{paper1} that incorporates the effect of supra-thermal particles on the thermal gas and a Boris-pusher \citep{BirdsallLangdon:1991} to calculate the motion of the particles as a function of the electromagnetic field.
We have also added a constrained-transport module based on \citet{BalsaraSpicer:1999} in order to guarantee a divergence-free magnetic field.
As in \citet{paper1} we use a {\tt TVDLF} solver, combined with a van~Leer flux limiter. This combination provides us with a reliable and precise numerical scheme capable of capturing the small scale features of the plasma and the magnetic field.
The conservation equations, as implemented in this code, can be found in Appendix~\ref{sec-app1}. For a full description of the numerical method as implemented in this code, please refer to \citet{paper1}, as well as the description in \citet{Baietal:2015}.
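A minimal sketch of \eqref{Eq:Ohmslaw} (illustrative only, not the production implementation; the numerical values are placeholders), in units with $c=1$:

```python
import numpy as np

# Illustrative sketch of Eq. (Ohmslaw), not the production implementation,
# in units with c = 1; the numerical values below are placeholders.
def electric_field(v, u_part, B, R):
    """E in one cell from the thermal velocity v, the interpolated mean
    supra-thermal velocity u_part, the magnetic field B, and the fraction R
    of the total charge density carried by the supra-thermal particles."""
    return -np.cross((1.0 - R) * v + R * u_part, B)

v = np.array([0.03, 0.0, 0.0])      # thermal flow along x
u = np.array([0.09, 0.01, 0.0])     # supra-thermal drift (placeholder)
B = np.array([0.97, 0.0, 0.22])     # oblique upstream field (placeholder)
E = electric_field(v, u, B, R=0.003)

# In the limit R -> 0 the ideal-MHD Ohm's law E = -v x B is recovered.
assert np.allclose(electric_field(v, u, B, R=0.0), -np.cross(v, B))
```

The $R\to0$ limit makes explicit that the scheme reduces to standard ideal MHD wherever no supra-thermal particles are present.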
\begin{table}
\caption{Input parameters \label{tab:input}}
\label{tab:anysymbols}
\begin{tabular}{lcccc}
\hline
Model name & $\, M_{\rm s}$ & $\, M_{\rm A}$ & $v_0/c$ & f$_{\rm inj}$ \\
\hline
M2 & 2.0 & 18.2 & 0.027 & 0.003 \\
M2.15 & 2.15 & 19.6 & 0.0297 & 0.003 \\
M2.25 & 2.25 & 20.5 & 0.0315 & 0.0035 \\
M2.5 & 2.5 & 22.9 & 0.035 & 0.0035 \\
M2.85 & 2.85 & 26 & 0.0395 & 0.00375 \\
M3.2 & 3.2 & 29.2 & 0.052 & 0.004 \\
M3.5 & 3.5 & 31.9 & 0.057 & 0.007 \\
M4 & 4.0 & 36 & 0.066 & 0.009 \\
\hline
\end{tabular}
\end{table}
\subsection{Simulation setup}
As input conditions, we use the same models described in \citet{Haetal:2018}, focussing on the models M2-M4 listed in Table~1 of that paper. In those simulations a beam of plasma collided with a fixed wall, forming a shock that moved upstream into the beam. These simulations, which \citet{Haetal:2018} describe as \emph{almost 1-D}, used an extremely narrow box that covers less than a single Larmor radius in the direction perpendicular to the flow because of the high computational cost of the PIC method, which made the use of a wider box impractical. As a result, they could not resolve any variations in the thermal plasma along the perpendicular axis. \citet{Haetal:2018} defined the physical properties of the simulations by using a fixed starting temperature for the upstream gas of $10^8\,$K and plasma-$\beta\,=\,100$ for all simulations. With these two quantities fixed, \citet{Haetal:2018} chose the sonic Mach number of the shock as the free parameter that made each simulation unique. They placed the upstream magnetic field in the plane of the simulation at a $13^{\rm o}$ angle with the flow.
Because of the difference in numerical method, rather than aiming a beam of plasma at a reflecting wall and tracing the shock as it moves back into the upstream medium, we start our simulations from the analytical solution of the Rankine{-}Hugoniot conditions. This approach allows us to use the co-moving frame of the shock as rest-frame for our simulations, reducing the required size of the simulation box along the flow direction. Like models M2-M4 in \citet{Haetal:2018}, our simulations cover the parameter space ranging from a sonic Mach number of $\, M_{\rm s}\,=\,2-4$, with a constant plasma-$\beta\,=\,100$ and a fixed angle of $13^{\rm o}$ between the flow and the upstream magnetic field $B_0$.
We place the field in the plane of the simulation, with the third component, which is orthogonal to that plane, initialized at zero.
For the full set of input parameters, see Table~\ref{tab:input}, which uses the same units as in \citet{Haetal:2018}. This table lists the sonic and Alfv{\'e}nic Mach numbers of the shock, the upstream velocity $v_0$ and the injection rate of supra-thermal particles. It should be noted that in this table $v_0$ is the flow-velocity in the rest-frame of the downstream medium, as used by \citet{Haetal:2018}, rather than the shock velocity, which follows from the Rankine-Hugoniot conditions.
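For reference, the derived quantities in Table~\ref{tab:input} can be approximately reproduced from $\, M_{\rm s}$ and $\beta$ alone. The sketch below (assuming an adiabatic index $\Gamma = 5/3$) uses $c_{\rm s}/v_{\rm A} = \sqrt{\Gamma\beta/2}$ together with the hydrodynamic Rankine-Hugoniot density jump, which is a good approximation at $\beta = 100$:

```python
import numpy as np

GAMMA = 5.0 / 3.0   # adiabatic index (assumed)

def alfven_mach(M_s, beta=100.0):
    """M_A from M_s via c_s / v_A = sqrt(GAMMA * beta / 2)."""
    return M_s * np.sqrt(GAMMA * beta / 2.0)

def compression_ratio(M_s):
    """Hydrodynamic Rankine-Hugoniot density jump; at beta = 100 the
    magnetic contribution to the jump conditions is negligible."""
    return (GAMMA + 1.0) * M_s**2 / ((GAMMA - 1.0) * M_s**2 + 2.0)

# Reproduces the M_A column of Table 1 to within rounding.
for M_s in (2.0, 2.5, 3.2, 4.0):
    print(M_s, round(alfven_mach(M_s), 1), round(compression_ratio(M_s), 2))
```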
At the start of the simulation, we consider the gas to be purely thermal with no supra-thermal component. During the simulation, we continuously inject supra-thermal particles directly downstream of the shock, at a rate derived from the PIC simulations in \citet[][Figure~5]{Haetal:2018}. (If the shock moves during the simulation, we track this motion to ensure that new particles continue to be injected at the shock.)
Using these values allows us to avoid the main uncertainty in the PI[MHD]C method: the injection rate of the supra-thermal particles.
In two cases, we deviate from this pattern: the $\, M_{\rm s}\,=\,2$ and $2.15$ simulations. \citet{Haetal:2018} found only a negligible injection rate for these particular models.
Injecting supra-thermal particles at such a low rate would be pointless as it would merely confirm that without a
significant fraction of supra-thermal particles the thermal gas and the shock remain in the equilibrium derived from the Rankine-Hugoniot conditions. Instead, we use the power-law shown in
\citet[][Figure~5]{Haetal:2018} to extrapolate an injection fraction of 0.003 ($0.3$\,percent of the total mass). This will allow us to determine whether any kind of DSA is possible at such a low Mach number, given the injection of a sufficient number of supra-thermal particles.
The charge-to-mass ratio of the particles reflects that of protons (we assume that all electrons are thermal) and is calculated so that a total of $1\times10^7$ particles is injected during the simulation. The total duration of each simulation equals 20\,000\,$\, R_{\rm l}/c$, where $\, R_{\rm l}=v_{\rm inj}/{B_0}$ is the Larmor radius determined by the injection velocity $v_{\rm inj}$ and the upstream magnetic field $B_0$, and the injection starts at $t\,=\,1\,000\,\, R_{\rm l}/c$, to allow the thermal gas a brief period to relax from the analytical solution to a stable numerical solution. We inject the particles with a velocity $v_{\rm inj}$ of three times the pre-shock speed of the gas, isotropically distributed within the rest-frame of the post-shock thermal gas, conforming to the prescription used in \citet{paper1} and \cite{vanMarleetal:2019}.
During the injection process, mass, momentum and energy are conserved by removing an equivalent amount of mass, momentum and energy density from the thermal gas in the grid cell where each particle is injected.
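The injection kinematics described above can be sketched as follows (illustrative only; the downstream drift velocity is a placeholder, and the frame shift is treated non-relativistically, which is adequate at these speeds):

```python
import numpy as np

# Illustrative sketch of the injection kinematics, not the production code:
# particles launched at v_inj = 3 v_0, isotropic in the downstream rest
# frame, then shifted (non-relativistically) by a placeholder drift velocity.
rng = np.random.default_rng(1)

def inject(n, v0, v_down):
    mu = rng.uniform(-1.0, 1.0, n)            # uniform cos(theta) => isotropy
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    s = np.sqrt(1.0 - mu**2)
    dirs = np.stack([mu, s * np.cos(phi), s * np.sin(phi)], axis=1)
    return 3.0 * v0 * dirs + v_down           # v_inj = 3 v_0 plus drift

v_down = np.array([0.017, 0.0, 0.0])          # placeholder drift value
v = inject(100000, v0=0.066, v_down=v_down)   # v_0 of model M4, Table 1
speeds = np.linalg.norm(v - v_down, axis=1)
assert np.allclose(speeds, 3 * 0.066)         # isotropic shell at v_inj
```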
For our simulations, we use a 2-D grid, spanning 180$\, R_{\rm l}$ along the x-axis (parallel to the flow) by 30$\, R_{\rm l}$ along the z-axis (perpendicular to the flow). At the coarsest level, our grid has one grid cell per Larmor-radius, and we allow three additional levels of refinement. Each level doubles the effective resolution. Therefore, at maximum resolution, we have eight grid cells per Larmor radius.
For comparison with PIC or PIC-hybrid simulations, we need to keep in mind that the particle injection velocity is three times the upstream bulk velocity in the rest-frame of the shock. Therefore $\, R_{\rm l}$, as defined in this paper, is three times the Larmor radius of the upstream thermal gas.
Within five Larmor radii of the shock, we enforce the highest level of refinement at all times to ensure that both the shock itself and the particle motion near the shock remain fully resolved.
Vector quantities (velocity, electromagnetic field) have three components, making our models effectively 2.5-D. When calculating the charge and current density resulting from the particle distribution, we map the charge and current of each particle onto the four surrounding cell-centres.
\begin{figure}
\centering
\mbox{
\includegraphics[width=\columnwidth]{para_SED_2_2p25_new.png}}
\caption{Normalized spectral energy distribution, as a function of the Lorentz factor ($\gamma$), for the total particle population of the simulations with sonic Mach numbers 2-2.25 at $t\,=\,20\,000\,R_{\rm l}/c$. None of these SEDs show any evidence for DSA.}
\label{fig:parased1}
\end{figure}
\begin{figure}
\centering
\mbox{
\includegraphics[width=\columnwidth]{para_SED_2p5_3p2_new.png}}
\caption{Similar to Fig.~\ref{fig:parased1} but for the simulations with sonic Mach numbers 2.5-3.2. These simulations show evidence of DSA, but only for part of the particle population.}
\label{fig:parased2}
\end{figure}
\begin{figure}
\centering
\mbox{
\includegraphics[width=\columnwidth]{para_SED_3p5_4_new.png}}
\caption{Similar to Figs.~\ref{fig:parased1}-\ref{fig:parased2} but for the simulations with sonic Mach numbers 3.5 and 4. These simulations show clear evidence of DSA but only for part of the particle population. The slopes are approaching the expected power law index. For comparison we have added the analytically predicted slope for the $M_S\,=\,4$ shock ($s\,=\,1.64$).}
\label{fig:parased3}
\end{figure}
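The analytically predicted slope quoted in the caption of Fig.~\ref{fig:parased3} can be recovered, to within rounding, from the standard test-particle DSA argument: $f(p)\propto p^{-q}$ with $q = 3r/(r-1)$, where $r$ is the compression ratio, which for non-relativistic particles yields an energy spectrum $N(E)\propto E^{-s}$ with $s = (q-1)/2$. A sketch (assuming this is indeed the definition of the plotted slope):

```python
# Sketch of the test-particle DSA slope, assuming the standard derivation:
# f(p) ~ p^-q with q = 3r/(r-1); for non-relativistic particles the
# kinetic-energy spectrum is N(E) ~ E^-s with s = (q - 1)/2.
GAMMA = 5.0 / 3.0

def dsa_energy_slope(M_s):
    r = (GAMMA + 1.0) * M_s**2 / ((GAMMA - 1.0) * M_s**2 + 2.0)  # compression
    q = 3.0 * r / (r - 1.0)      # momentum power-law index
    return (q - 1.0) / 2.0

print(round(dsa_energy_slope(4.0), 2))   # -> 1.63, close to the quoted 1.64
```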
\section{Parameter space}
\subsection{2-D Morphology of the gas}
\label{sec-result}
The response of the thermal gas and the magnetic field to the injection of the supra-thermal particles varies with the sonic Mach number, and Figs.~\ref{fig:results1}-\ref{fig:results2} show a clear trend. Each panel shows the magnetic field strength relative to the undisturbed upstream magnetic field strength ($B_{\rm 0}$) (top), the supra-thermal gas density ($\rho_{\rm p}$) relative to the thermal gas density (middle) and the thermal gas density relative to the undisturbed upstream thermal gas density ($\rho_{\rm 0}$) (bottom), as well as the magnetic field lines. Although the simulations are run in the shock rest-frame (determined from the analytical solution), the shocks tend to move over time.
The injection process removes a small but noticeable fraction of the thermal gas pressure in the downstream medium, which would lead the shock to start moving in the downstream direction.
Depending on the strength of the instabilities, this trend can be reversed if the downstream magnetic field amplification becomes sufficiently strong to counteract this effect by increasing local magnetic pressure.
The low-Mach simulations ($\, M_{\rm s}\,\leq\,2.5$) show a disturbance of the magnetic field strength, but not of the magnetic field lines, even though we have artificially enhanced the injection rate for the $\, M_{\rm s}\,=\,2-2.15$ shocks.
As a result, even though the magnetic field is influenced by the presence of supra-thermal particles, no DSA can be expected to occur. In contrast, the (relatively) high-Mach simulations ($\, M_{\rm s}\,\geq\,3.2$) show disturbance of both the field lines and the field strength, indicating that DSA can take place. In between these two extremes lies a regime that shows some distortion of the field lines, but only locally, with some field lines showing twists, while others do not.
Even in those cases where the perturbations are relatively strong, they tend to be weaker than perturbations found for comparable models of high-Mach, low-$\beta$ shocks \citep{paper1}.
The magnetic field amplification tends to be of the order of $2-4$, not exceeding one order of magnitude even for the $\, M_{\rm s}\,=\,4.0$ model. For comparison, the $\, M_{\rm s}\,=\,30.0$, $\beta\,=\,1$ shocks in \citet{paper1} showed amplification of 15-25, depending on the angle between the flow and the magnetic field.
This result can be compared qualitatively to the results found by \citet{Haetal:2018}, who determined that the instabilities in the lower Mach simulations are resonant-streaming instabilities \citep{Bell:1978}, whereas the higher Mach models display both resonant and non-resonant streaming \citep{Bell04} characteristics.
This behaviour is consistent with the results of \citet{Caprioli14b}, who determined that the transition between resonant and non-resonant streaming instabilities lies around an Alfv{\'e}nic Mach number of $\, M_{\rm A}\,\simeq\,30$.
The critical Mach number, below which no significant instabilities occur, seems to lie somewhat higher than what was found by \citet{Haetal:2018}, who put it at $\, M_{\rm s}\,\simeq\,2.25$. This can be explained by the nature of the injection mechanism, which, in our case, is isotropic in the post-shock medium, whereas, in reality, the particle injection velocity and direction is a more complicated issue that depends on the exact conditions of the shock.
Furthermore, our models show a transition region ($2.5\,\leq\, M_{\rm s}\,\leq\,3.2$), rather than a single critical Mach number.
This is the result of having a fully 2-D model, rather than one that is effectively 1-D. In a 1-D model, the magnetic field is either perturbed or not, whereas a 2-D model allows for variations along the plane of the shock. As will be seen in Sect.~\ref{sec-parased}, these variations also influence the particle spectral energy distribution (SED).
The instabilities lack the filamentary structure that is characteristic of the high-Mach, low-$\beta$ simulations \citep[e.g.][]{paper1,vanMarleetal:2019}.
Such filaments require a relatively stiff magnetic field that can resist the forces that are acting upon it.
In a high-$\beta$ plasma, the magnetic field is powerless to resist either the distorting force exerted by the particles (assuming a sufficiently high injection rate) or the thermal pressure.
As a result, a local instability will tend to grow in all directions, distorting the magnetic field, rather than conforming to the direction of the field lines.
How the perturbations in the upstream medium depend on the sonic Mach number can be explained by a comparison of three different time-scales: the perturbing time-scale, which depends on the force acting on the magnetic field, determines how quickly the instabilities grow, while the sonic and magnetic time-scales determine how quickly the thermal gas can respond.
As long as the sonic and magnetic time-scales are shorter than the perturbing time-scale, the instabilities cannot grow. On the other hand, if the perturbing time-scale is shorter than both the sonic and magnetic time-scales, the instabilities will continue to grow because the thermal gas cannot compensate in time.
In between these two extremes lies a regime where the perturbing time-scale is shorter than the magnetic time-scale, which allows the variations in magnetic field strength to grow, but longer than the sonic time-scale, which means that any perturbation in the direction of the field lines is counteracted by the thermal pressure.
(N.B. This only applies to the high-$\beta$ regime. In gas with a plasma-$\beta\,<\,1$, the situation would be reversed.) An analytical approximation of the various time-scales is presented in Appendix~\ref{sec-app2} but should be treated with caution because it depends on many factors, which tend to vary over time.
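The ordering of the sonic and magnetic response time-scales follows directly from the plasma-$\beta$. A minimal Python sketch illustrates this; the value $\beta\,=\,30$ below is purely illustrative and is not a parameter taken from the simulations:

```python
import math

GAMMA_AD = 5.0 / 3.0  # adiabatic index of the thermal gas

def cs_over_vA(beta, gamma_ad=GAMMA_AD):
    """Ratio of sonic to Alfven speed.

    Follows from beta = P_gas / P_mag = 2 c_s^2 / (gamma_ad * v_A^2),
    so c_s / v_A = sqrt(gamma_ad * beta / 2).
    """
    return math.sqrt(gamma_ad * beta / 2.0)

# For a perturbation of wavelength lam, the response time-scales are
# t_sonic = lam / c_s and t_magnetic = lam / v_A, so
# t_magnetic / t_sonic = c_s / v_A.
beta = 30.0               # illustrative high-beta value (assumed)
print(cs_over_vA(beta))   # ~5: the sonic response is ~5x faster
```

For any $\beta\,>\,1$ this ratio exceeds unity, which is why a window can open in which the gas pressure (fast sonic response) suppresses displacements of the field lines while the field strength (slow magnetic response) still varies; for $\beta\,<\,1$ the ordering reverses, as noted above.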
This time-scale dependence explains the behaviour of the high-$\beta$ simulations.
In the case of the $\beta\,=\,1$ simulations, which have been more common in the past, the thermal gas either shows a clearly perturbed magnetic field, or it shows no instabilities at all because the 'in-between' regime does not exist. The larger the gap between the sonic and Alfv{\'e}nic speeds, the more opportunities there are for a 'partial instability' to occur, where only the field strength varies, whereas the field direction remains constant.
For most simulations, the plane of the shock remains almost undisturbed, in contrast to the high-Mach shocks shown in \citet[e.g.][]{Baietal:2015,paper1,vanMarleetal:2019}, where ram-pressure variations in the upstream medium, resulting from the filamentary structure of the instabilities, caused the corrugation and eventually the complete distortion of the shock front. For low-Mach shocks, the variation in the total pressure that the shock experiences is much smaller and both time- and space-dependent. As a result, the shocks show little to no corrugation.
\subsection{Spectral distribution of supra-thermal particles}
\label{sec-parased}
The momentum distribution for particles accelerated through diffusive shock acceleration is given by
\begin{equation}
\label{eq:mom}
f(p)~\propto~p^{-q}
\end{equation}
with $q\,=\,3r_{\rm c}/(r_{\rm c}-1)$ and $r_{\rm c}$ the compression ratio of the shock \citep{BlandfordOstriker:1978}.
For non-relativistic particles, the particle energy $E_{\rm i}$ scales with the momentum as $E_{\rm i}\,\propto\,p_{\rm i}^2/m_{\rm i}$.
This results in a spectral energy distribution that can be represented by
\begin{equation}
\label{eq:gammamin1}
\frac{\partial N}{\partial\gamma}~\propto~(\gamma-1)^{-s},
\end{equation}
with $s\,=\,(q-1)/2$ and $\gamma$ the Lorentz factor. However, should the particles attain relativistic speeds, the energy/momentum relationship changes until, at highly relativistic speeds, it becomes $E_{\rm i}\,\propto\,p_{\rm i}$.
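The chain from sonic Mach number to compression ratio to the spectral indices quoted in this section can be checked numerically. A short Python sketch, assuming an ideal-gas Rankine--Hugoniot jump with adiabatic index $5/3$:

```python
def compression_ratio(M, gamma_ad=5.0/3.0):
    """Rankine-Hugoniot density jump r_c for a hydrodynamic shock."""
    return (gamma_ad + 1.0) * M**2 / ((gamma_ad - 1.0) * M**2 + 2.0)

def momentum_index(M):
    """q in f(p) ~ p^-q, with q = 3 r_c / (r_c - 1)."""
    r = compression_ratio(M)
    return 3.0 * r / (r - 1.0)

def sed_index(M):
    """s = (q - 1)/2 in dN/d(gamma-1) ~ (gamma-1)^-s (non-relativistic)."""
    return (momentum_index(M) - 1.0) / 2.0

print(compression_ratio(3.2))  # ~3.1, as quoted for the M_s = 3.2 shock
print(momentum_index(3.2))     # ~4.4
print(sed_index(4.0))          # ~1.64, the slope shown for the M_s = 4 shock
```

These values reproduce the $r_{\rm c}\,\simeq\,3.1$, $q\,\simeq\,4.4$, and $s\,\simeq\,1.64$ figures used throughout the text.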
The SEDs of the particles in our simulations, shown in Figs.~\ref{fig:parased1}-\ref{fig:parased3}, can be classified into three distinct groups. Like \citet{Haetal:2018}, we find no evidence for DSA for the lowest sonic Mach numbers ($\, M_{\rm s}\,=\,2.0-2.25$, Fig.~\ref{fig:parased1}).
The particles show no sign of significant acceleration, as was already predicted from the morphology of the gas and the magnetic field. Neither is there any sign that the particles adhere to a power-law spectrum, which would appear as a straight line in a log-log plot. Instead, the particle energies have spread out around the injection speed, indicating that although some particles have been accelerated, others have actually been decelerated. Simulations of high-Mach shocks \citep{paper1,vanmarle:2020} also show that a fraction of the injected particles decelerates, triggering instabilities in the thermal plasma at the expense of the particles' own energy. However, in high-Mach shocks, once the instabilities have been triggered, subsequent interactions with the magnetic field accelerate particles through the DSA process.
For the $\, M_{\rm s}\,=\,2.0-2.25$ simulations shown here, no significant acceleration occurs because the instabilities dissipate almost immediately.
The end result is a particle energy distribution that starts to resemble that of a thermal plasma.
However, because the particles do not collide directly, they cannot actually become thermalized.
The behaviour at low-Mach numbers is particularly meaningful in light of the artificially enhanced injection rate that we have adopted for the models with $\, M_{\rm s}\,=\,2.0-2.15$.
Despite the high injection rate of supra-thermal particles, there is no evidence of DSA.
Clearly, under these conditions, the shock is incapable of accelerating CRs.
At the other end of the scale ($\, M_{\rm s}\,=\,3.5-4$, Fig.~\ref{fig:parased3}), the particle SEDs show clear evidence for DSA. Both SEDs show that the energy distribution is starting to follow a straight line in the log-log plot, indicating adherence to a power-law with a fixed index, over the interval $(\gamma-1)\,=\,0.03-0.06$ for the $\, M_{\rm s}\,=\,3.5$ shock and $0.04-0.08$ for the $\, M_{\rm s}\,=\,4$ shock. For the latter, the expected slope, following from the compression ratio, would be $s\,\sim\,1.64$ according to Eq.~\ref{eq:gammamin1}. We have included an indicator of that expected slope in the plot for comparison.
For the intermediate Mach numbers (Fig.~\ref{fig:parased2}), the SEDs show a combination of these two extremes.
Although there is evidence for DSA, in the form of an extended tail at high energies, there is also a peak around the injection energy, clearly indicating that not all particles are participating in the acceleration process.
That some particles can avoid participating in the DSA process is partially the result of the behaviour of the magnetic streamlines described in Sect.~\ref{sec-result}.
Depending on the injection location, particles may or may not encounter instabilities on their journey upstream, giving some particles a chance to escape from the system without being accelerated. Adherence to a power-law index is barely visible, except in the $\, M_{\rm s}\,=\,3.2$ case, and even there only for a very short interval, although this is partially a result of the limited size of the simulation box. As the particles gain momentum, they will travel further away from the shock before being reflected. If they reach the upper or lower x-boundary before being reflected, they escape from the system. Although these simulations give us a good indication of whether DSA is possible at all, in order to estimate its efficiency as well as the maximum possible energy, we will need to increase the size of the simulation box (see Sect.~\ref{sec-longrun}).
\begin{figure*}
\centering
\mbox{
\includegraphics[width=\columnwidth]{Mach3p2_050_longbox.jpeg}
\includegraphics[width=\columnwidth]{Mach3p2_100_longbox.jpeg}}
\mbox{
\includegraphics[width=\columnwidth]{Mach3p2_150_longbox.jpeg}
\includegraphics[width=\columnwidth]{Mach3p2_200_longbox.jpeg}}
\caption{Similar to Figs.~\ref{fig:results1}-\ref{fig:results2}, but for the $\, M_{\rm s}\,=\,3.2$ model with an extended simulation box, showing the area near the shock at $t\,=\,5\,000$ (top left), 10\,000 (top right),
15\,000 (bottom left), and 20\,000 $R_{\rm l}/c$ (bottom right). The large distortion of the magnetic field is clearly visible. }
\label{fig:results3}
\end{figure*}
\begin{figure}
\centering
\mbox{
\includegraphics[width=\columnwidth]{longbox_sed.png}}
\caption{Total particle SED at the same moments in time as Fig.~\ref{fig:results3} for the $\, M_{\rm s}\,=\,3.2$ model with an extended simulation box. Over time, the SED matches the expected power-law index and shows that DSA can accelerate particles to relativistic speeds.
The $q\,=\,4.4$ line demonstrates the power-law slope expected for a Mach 3.2 shock.}
\label{fig:longboxSED}
\end{figure}
\begin{figure}
\centering
\mbox{
\includegraphics[width=\columnwidth]{Bangle.png}}
\caption{Angle between the flow and the magnetic field at the shock as a function of the location along the z-axis at the same moments in time as in Fig.~\ref{fig:results3}. The shock, which started out as quasi-parallel, occasionally becomes semi-perpendicular as a result of the instabilities in the upstream medium.}
\label{fig:Bangle}
\end{figure}
\begin{figure}
\centering
\mbox{
\includegraphics[width=\columnwidth]{MS.png}}
\caption{Sonic Mach number at the shock as a function of the location along the z-axis at the same moments in time as in Figs.~\ref{fig:results3} and \ref{fig:Bangle}. Whereas the shock started at $\, M_{\rm s}\,=\,3.2$, it varies over time between approximately 2.7 and 3.4.}
\label{fig:machnumber}
\end{figure}
\section{The large box model}
\label{sec-longrun}
In order to explore the long term effects of the instabilities as well as to determine the maximum particle energy that can be achieved in this fashion, we repeat the simulation with a sonic Mach number of 3.2 with a far more extended box, which has a size of $2400\,\times\,15\, R_{\rm l}$, compared to the $180\,\times\,30\, R_{\rm l}$ box of the previous simulations. (We have reduced the size of the box along the z-axis by half in order to reduce computation time.)
For this simulation, we use a minimum resolution of $4800\,\times\,30$ grid points and allow two additional levels of refinement in order to achieve the same effective resolution as in the models described in Sect.~\ref{sec-result}. The result, in a series of snapshots, is presented in Fig.~\ref{fig:results3}, which shows the morphology of the gas and the magnetic field near the shock at $t\,=\,5\,000$, 10\,000, 15\,000, and 20\,000 $R_{\rm l}/c$.
As for the smaller box model, this simulation shows that at Mach 3.2, the upstream magnetic field becomes distorted, facilitating DSA. As can be seen in Fig.~\ref{fig:results3}, the box size along the z-axis is still large enough that it does not interfere with the maximum wavelength of the instabilities and allows for variation along the plane of the shock.
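As a quick consistency check on the grid setup, the effective resolution implied by the base grid and the two extra refinement levels can be computed as follows; the refinement ratio of 2 per level is our assumption, as it is not stated explicitly in the text:

```python
base_nx, base_nz = 4800, 30     # minimum resolution of the large-box run
extra_levels = 2                # additional AMR levels
ratio = 2                       # assumed refinement ratio per level

eff_nx = base_nx * ratio**extra_levels   # effective cells along x
eff_nz = base_nz * ratio**extra_levels   # effective cells along z

# Cell size at the finest level, in units of the Larmor radius R_l,
# for the 2400 x 15 R_l domain:
dx = 2400.0 / eff_nx
dz = 15.0 / eff_nz
print(eff_nx, eff_nz, dx, dz)
```

Under this assumption the finest cells are square, with eight cells per Larmor radius in each direction.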
\subsection{The particle SED}
\label{sec-parasedlong}
Figure~\ref{fig:longboxSED} shows the SED of the total particle population at the same moments in time as Fig.~\ref{fig:results3}.
Because the particles are entering the relativistic regime, where the relationship between energy and momentum changes, we show the SED as a function of the normalized momentum $p\,=\,\gamma v$; using the Lorentz factor would no longer produce a straight line in the log-log plot. For a $\, M_{\rm s}\,=\,3.2$ shock, the power-law index can be expected to be approximately $q\,=\,4.4$, according to Eq.~\ref{eq:mom}. We have added a slope indicator in the plot for comparison. The slope matches reasonably well, though not perfectly.
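The mapping between Lorentz factor and the normalized momentum used on the horizontal axis is $u\,=\,p/(mc)\,=\,\gamma\beta\,=\,\sqrt{\gamma^{2}-1}$. A small Python helper (the function names are ours, added for illustration) shows the two limiting regimes:

```python
import math

def gamma_to_u(g):
    """Normalized momentum u = p/(mc) = gamma * beta = sqrt(gamma^2 - 1)."""
    return math.sqrt(g * g - 1.0)

def u_to_gamma(u):
    """Inverse mapping: gamma = sqrt(1 + u^2)."""
    return math.sqrt(1.0 + u * u)

# Non-relativistic limit: gamma - 1 ~ u^2 / 2, so kinetic energy ~ p^2
print(u_to_gamma(0.01) - 1.0)   # ~5e-5, i.e. u^2 / 2

# Ultra-relativistic limit: gamma ~ u, so energy ~ p
print(u_to_gamma(100.0))        # ~100
```

This is why a spectrum that is a power law in momentum appears as a straight line in $\gamma-1$ only while the particles remain non-relativistic.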
However, we should keep in mind that the power-law index depends on the compression ratio of the shock.
For the $\, M_{\rm s}\,=\,3.2$ shock, the compression ratio is expected to be a factor 3.1, based on the Rankine-Hugoniot conditions. However, as will be seen in Sect.~\ref{sec-distort}, the sonic Mach number is not a constant in time owing to the instabilities in the upstream medium.
The extreme length of the simulation box ensures that the particles can continue to be part of the acceleration process, rather than escaping from the simulation, allowing them to reach much higher velocities.
The SED shows adherence to a power-law up to approximately eight times the injection momentum, reaching a Lorentz factor of approximately 1.8, which, for protons, is the equivalent of a total energy of $E_{\rm tot}\,=\,\gamma mc^2\,\approx\,1.7$\,GeV.
Such protons, if they were to interact with thermal protons, would be able to produce gamma-radiation. Beyond this point, the SED drops off, and, from the time evolution, it is clear that running the simulation longer will not change the energy at which the DSA ceases to be effective.
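As a back-of-envelope check on the gamma-ray statement (the proton rest energy and the $\approx$0.28\,GeV pion-production threshold are standard values, not outputs of the simulations):

```python
m_p_c2 = 0.938272   # proton rest energy [GeV]
g = 1.8             # Lorentz factor reached in the simulation

E_tot = g * m_p_c2           # total energy, ~1.69 GeV
E_kin = (g - 1.0) * m_p_c2   # kinetic energy, ~0.75 GeV

# The kinetic-energy threshold for pion production in p-p collisions
# (p + p -> p + p + pi0) is ~0.28 GeV, so these protons could indeed
# produce gamma-rays via pi0 decay when colliding with thermal protons.
print(E_tot, E_kin, E_kin > 0.28)
```

The margin above the threshold is roughly a factor of 2.7, so gamma-ray production is energetically allowed well before the highest energies in the SED are reached.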
\subsection{Distortion of the upstream medium}
\label{sec-distort}
As the instabilities grow in size, they start to change the nature of the shock.
The angle at which the magnetic field enters the shock varies considerably, both in time and space, as shown in Fig.~\ref{fig:Bangle}, which shows the angle between the flow and the magnetic field directly ahead of the shock at different moments in time. \citet{Haetal:2018} found that the angle of the magnetic field with the shock does not greatly influence the particle injection rate, as long as the shock qualifies as quasi-parallel.
Similarly, \citet{Fangetal:2019} found comparable ion SEDs for shocks varying between $15^{\circ}$ and $45^{\circ}$.
However, this relies on the magnetic field remaining quasi-parallel. As Fig.~\ref{fig:Bangle} shows, this can no longer be assumed, because the magnetic field becomes quasi-perpendicular at certain times, depending on the position along the shock front. How this will influence the injection rate is a complex question.
\citet{Haetal:2018} found that a quasi-perpendicular shock could give particles a limited acceleration through the shock-drift acceleration (SDA) process, but found no evidence for DSA. This result coincides with the results found by \citet{CaprioliSpitkovski:2014a} for higher Mach shocks, whereas \citet{paper1} found that DSA could occur in quasi-perpendicular shocks, assuming a sufficiently high injection rate.
However, none of the above results genuinely apply to a situation in which only part of the shock is quasi-perpendicular and then only at certain times.
A further effect of the increased instabilities in the upstream gas is a change in the sonic Mach number of the shock, shown in Fig.~\ref{fig:machnumber} for the same moments in time as in Fig.~\ref{fig:Bangle}. As the magnetic field becomes distorted, the flow in the thermal gas starts to compress the loops in the magnetic field as they approach the shock. The compression of the magnetic field, in turn, increases the local magnetic field strength, which, combined with the magnetic tension as the curvature of the magnetic field lines increases, allows the magnetic field to resist the pressure effectively. This causes variations in the local gas temperature and flow velocity, changing the Mach number.
The influence of the sonic Mach number on the injection rate is shown quite clearly by \cite{Haetal:2018}, though we should keep in mind that this was for simulations with a fixed plasma-$\beta$, something that does not apply to our simulations as a result of the changes in the upstream morphology.
\section{Conclusions}
\label{sec-conclusions}
We have investigated the behaviour of low-Mach, high-$\beta$ shocks and their ability to accelerate particles through the DSA process, using the PI[MHD]C method.
For small scale simulations, our results bear a close similarity to those obtained by \citet{Haetal:2018} with PIC simulations, allowing for the inherent uncertainty caused by the fact that we have to assume a particle injection rate. Rather than finding a 'critical Mach number', at which the shocks go from not accelerating particles at all to clear evidence of DSA, we find that there is a transitional zone between approximately $\, M_{\rm s}\,\approx\,2.5-3.2$. For shocks with Mach-numbers below this zone, no acceleration takes place, even when stimulated with an artificially high injection rate, while above this zone the shocks show clear evidence of DSA.
Within the transitional zone, some particle acceleration will occur, but the SEDs show deviation from the expected power-law index.
It is likely that within this region the efficiency of DSA will depend strongly on input conditions, which, in turn, means that some galaxy cluster shocks in this parameter space will be able to accelerate CRs, whereas others will not, depending on local circumstances.
Over longer time-scales, the growing instabilities in the upstream medium observed in simulations with sonic Mach numbers $>\,3$ deviate from the results of \citet{Haetal:2018}. This deviation takes the form of varying Mach numbers at the shock, as well as a severely distorted magnetic field. Once the simulation reaches this point, the fixed injection rate is no longer a valid assumption.
In the long run, this effect is likely to cause a considerable change in the efficiency of CR production. Even those shocks that have a sufficiently high Mach number will end up exciting such severe instabilities that the Mach number starts to vary. Only if this causes the injection rate to be sufficiently reduced, which is certainly possible if the Mach number drops, will the instabilities fade, which will eventually allow the shock to return to its original strength, at which point the process will repeat itself.
Furthermore, the power-law index of the particle SED depends directly on the compression ratio of the shock.
Owing to the instabilities, this will become increasingly variable over time, limiting our ability to quantify the energy loss through cosmic rays, which is unlikely to be a constant factor. Further investigation, in particular regarding the injection efficiency as a function of Mach-number, plasma-$\beta$, and magnetic field angle, is required before such an analysis becomes possible.
These simulations demonstrate that large-scale, multi-D simulations are required in order to thoroughly investigate the behaviour of shocks in the presence of supra-thermal particles. While 1-D simulations can tell us much about the structure of the shock itself as well as the injection process, the long term evolution of the instabilities that enable the acceleration process is a fundamentally multi-D problem on scales that exceed the Larmor radius of the injected particles. This requires us to use simulation boxes that can capture the large-scale variations in the thermal plasma perpendicular to the flow and extend sufficiently to allow for repeated particle acceleration events.
Future developments will have to include a scheme that adjusts the injection rate as a function of shock conditions. However, this is not an easy task because the shock conditions do not only vary over time but depend on the location along the shock front as well.
Furthermore, the velocity of the thermal gas in this kind of shock is approaching the point where a relativistic treatment of the thermal gas becomes necessary. A new version of the PI[MHD]C code, which allows for the combination of a relativistic thermal gas with non-thermal particles, is currently undergoing testing.
Whether there is merit to repeating these simulations in 3-D is an open question. \citet{vanMarleetal:2019} showed for a high-Mach model that the morphology of the thermal gas and the magnetic field did not change significantly between 2-D and 3-D models, but that the SED of the 3-D model showed a marked decrease in acceleration efficiency. However, that was for a shock that became severely distorted over time, something that is not the case for the low-Mach models. As it is, the computational cost of a 3-D model, particularly a 3-D model large enough to resolve the instabilities and follow the acceleration of particles to high velocities, remains prohibitive.
\section*{Acknowledgements}
This work was supported by the National Research Foundation (NRF) of Korea through grant 2016R1A5A1013277 and by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2018R1D1A1B07044060). This work is supported by the ANR-19-CE31-0014 GAMALO project.
The author wishes to thank Prof. D. Ryu, Prof. H. Kang and J.-H. Ha for their valuable comments and discussions.
The author thanks the anonymous reviewer for their many helpful comments.
\section*{Data availability}
Data available on request.
\bibliographystyle{yahapj}
\section{Introduction}
While we observe many extragalactic supernovae \citep[SNe; e.g.,][]{sako08,drake09,law09,leaman11}, they are very indirect probes of the SN mechanism. This near total lack of direct constraints on the mechanism contributes to the many unsolved problems about SNe, in particular, why they explode at all \citep[e.g.,][]{mezzacappa05,janka12,pejcha12,burrows13}. Supernovae in our Galaxy and its dwarf companions, while rare, enable a broad range of new probes, in particular, neutrinos \citep[e.g.,][]{thompson03,ikeda07,marek09,abbasi11,scholberg12}, gravitational waves \citep{ott09,leonor10,yakunin10,andersson13}, nuclear $\gamma$-rays \citep{gehrels87,timmes97,hungerford03}, and shock breakout (SBO) timing \citep{matzner99,kistler12}. Of these new probes, only neutrinos and nuclear $\gamma$-rays were demonstrated with SN 1987A \citep{hirata87,bionta87,matz88,sandie88,fryxell91,mccray93}.
Neutrinos are especially important, because they reveal the physical conditions in the core at the instant of collapse. The detection of a burst of MeV neutrinos produced by a Galactic supernova can provide the answers to three important observational questions:
\begin{itemize}
\item
\emph{IF astronomers should look for a Milky Way supernova}. A high-statistics neutrino burst would decisively indicate that a core collapse had occurred in the Milky Way or one of its dwarf companions. The nature of the electromagnetic transient will depend on the success of the explosion, ranging from a full supernova to something weaker to perhaps something near-impossible to detect; at present, there is no ongoing optical or IR survey that guarantees rapid detection. Contrariwise, if no neutrinos are detected, then any electromagnetic transient is not a nearby core-collapse; it might instead be a supernova impostor or a Type Ia supernova.
\item
\emph{WHEN astronomers should look}. Neutrino detections will reveal the time of core collapse to within seconds. In principle, an alert could be distributed that rapidly. This would provide an early warning that could enable the detection of the SBO signal, the early supernova light curve, and any surprises about the first electromagnetic signals following core collapse. The precise timing will also help the detection of possible gravitational-wave signals, which may be detectable for collapses with adequate deviations from spherical symmetry. The time-integrated neutrino signal is relatively well known and should have relatively modest variations from one event to another, including the case of failed supernovae with black hole formation.
\item
\emph{WHERE astronomers should look}. For a Milky Way core-collapse, the Super--Kamiokande detector will be able to exploit the directionality in neutrino-electron scattering to restrict the source direction to within a few degrees. This will greatly improve the chances of successful electromagnetic searches on short timescales. In principle, this information could be distributed in less than a minute. This directionality will only be valuable if there is a means to quickly exploit it with wide-field instruments to first narrow the search region and then quickly follow up with more powerful instruments.
\end{itemize}
Optical/near-IR observations will remain a crucial component of studies of Galactic SNe. This includes traditional uses such as characterizing the external explosion \citep[energy, mass, composition, velocity; e.g.,][]{hamuy03} and the properties of the progenitor \citep[e.g.,][]{smartt09r}, but also new probes (see Fig. \ref{fig:schematic}), such as progenitor variability \citep[e.g.,][]{szczygiel12}, precursor eruptions \citep[e.g.,][]{pastorello07,ofek13,mauerhan13}, and constraining the existence of failed SNe \citep{kochanek08}. Now that large neutrino detection experiments are running, the next Galactic ccSN will also provide an unprecedented opportunity to measure the delay time between neutrino detection and shock breakout, which would probe the density structure of the progenitor \citep{kistler12}. All these applications depend critically on the optical/near-IR observability of the SNe and their progenitors given our position near the midplane of a dust-filled disk galaxy.
\begin{figure}
\includegraphics[width=9.2cm, angle=0]{fig1.pdf}
\caption{Schematic time sequence for the stages of a ccSN. The scaling of the time axis varies to display vastly different timescales. The top panel shows the combined bolometric electromagnetic and neutrino luminosities, while the bottom panel displays the typical V-band magnitudes. The progenitor phase refers to the pre-core collapse star. With the ignition of carbon and later stages of nuclear burning the progenitor may experience episodes of high variability in the millennia, years, or days before the core-collapse, where the maximum and minimum luminosities and magnitudes for these precursor events are from SN 2010mc \citep{ofek13} and SN2011dh \citep{szczygiel12}. The core-collapse releases $\sim$ $10^{4}$ times more energy in neutrinos in $\sim$ 10 seconds than is released in the electromagnetic signal of the supernova over its entire duration. The progenitor luminosity and post-shock breakout light curve are from SN1987A and its likely progenitor, Sk~$-69^{\circ}$~202 \citep{arnett89,suntzeff90}, and the error bar on the peak of the $M_{V}$ light curve represents the full range of peak magnitudes observed by \cite{li11a}. \label{fig:schematic}}
\end{figure}
Aspects of this problem have been discussed previously. \cite{bergh75} presents predictions of the V-band observability of the next Galactic SNe assuming the Galaxy was a uniform disk with uniform absorption and a uniform incidence of SNe and further discusses the prospects of distance determination. \cite{tammann94} use a similar exercise to infer the Galactic SN rate; their model consists of thin disk, thick disk, and halo components as well as a dust distribution, but no further details are given.
Given the improvement in models of the Galactic dust distribution, it is worth revisiting the estimates of \cite{bergh75} and \cite{tammann94}. We model the SNe distribution with a double-exponential disk model using modern estimates of the scale lengths and heights for each population, and model the extinction with a similar double-exponential distribution normalized to the line of sight extinction of modern dust maps. We present results for both V-band and near-IR observability. We also fold in the observed luminosity functions of SNe and estimate the probability of identifying the SNe progenitor in archival data.
We separately consider SNe Ia and ccSNe, since they should have differing spatial distributions, and use our modeled SN observability to infer a Galactic supernova rate. We also predict the observability of the shock breakout and failed supernovae. Finally we review the neutrino detection process and discuss how near real-time neutrino alerts could be provided. In \S2 we define our models. We discuss the electromagnetic observability results in \S3, neutrino detection in \S4, and present our conclusions in \S5. Two appendices outline observational systems to detect Galactic SBO emission even in daytime and for observing extragalactic SBO events within the Local Volume. Throughout the paper we use the Vega magnitude system.
\section{Models}
\label{sec:models}
The basis of our model is a Monte Carlo simulation of the positions of Galactic SNe and their corresponding dust extinctions. We model the progenitor and dust distributions using the double-exponential spatial distribution
\begin{equation}
\rho = A e^{-R/R_{d}}e^{-|z|/H}
\label{eq:exp}
\end{equation}
where $R$ is the Galactocentric radius and $z$ is the height above the Galactic mid-plane. We must define $A$, $R_{d}$, and $H$ for the dust distribution, the core-collapse supernova (ccSN) progenitors, and the Type Ia (SN Ia) distribution. We outline our approach for each of these cases in the following subsections. For these models we use several of the same input parameters as TRILEGAL (TRIdimensional modeL of thE GALaxy), a population synthesis code for simulating stellar populations along any direction through the Galaxy \citep{girardi05}. The Sun is placed $H_{\odot} = 24$ pc above the mid-plane of the disk at a Galactocentric radius of $R_{\odot} = 8.7$ kpc. The Galactic thin and thick disk components are truncated at $R_{out} = 15$ kpc.
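A sketch of such a Monte Carlo sampler is given below. This is our own illustration, not the authors' code; it uses the facts that the radial law of a 2-D exponential disk, $p(R)\,\propto\,R\,e^{-R/R_{d}}$, is a Gamma distribution with shape parameter 2, and that the vertical law in Eqn.~\ref{eq:exp} is a Laplace distribution. The default parameter values follow the text (thin-disk dust: $R_{d}\,=\,2.9$ kpc, $H\,=\,110$ pc, truncation at 15 kpc):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_positions(n, R_d=2.9, H=0.110, R_out=15.0):
    """Draw n (R, phi, z) positions [kpc] from the double-exponential disk.

    Radial PDF p(R) ~ R exp(-R/R_d) is Gamma(shape=2, scale=R_d);
    draws beyond the truncation radius R_out are rejected.
    """
    R = np.empty(0)
    while R.size < n:
        draw = rng.gamma(2.0, R_d, size=2 * n)
        R = np.concatenate([R, draw[draw <= R_out]])
    R = R[:n]
    z = rng.laplace(0.0, H, size=n)          # p(z) ~ exp(-|z|/H)
    phi = rng.uniform(0.0, 2.0 * np.pi, size=n)
    return R, phi, z

R, phi, z = sample_positions(100_000)
# Sanity checks: mean |z| -> H, and no radii beyond the truncation.
print(np.mean(np.abs(z)), R.max())
```

Positions drawn this way can then be assigned line-of-sight extinctions by integrating the dust density of Sect. 2.1 from the Sun's position at $(R_{\odot}, H_{\odot})$.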
\subsection{Dust}
\label{sec:dust}
We assume that dust largely traces star formation, and thus use the scale length of the thin disk for the scale length of the dust distribution. We adopt a scale length of $R_{d} = 2.9$ kpc from the TRILEGAL model. TRILEGAL uses a scale height of $H = 110$ pc for the dust and $H = 95$ pc for the thin disk. While we calculated results for both values, the differences were so small that we only discuss the result for $H = 110$ pc. These choices for the spatial distribution are less critical than the estimated total extinction along any line of sight. We separately consider four possible normalizations for the total line of sight extinction. The simplest method we use (hereafter referred to as SIMPLE) distributes the dust following Eqn. \ref{eq:exp} and normalizes the distribution to have $A_{V} = 30$ to the Galactic center. In the remaining models, we distribute the dust along any line of sight following Eqn. \ref{eq:exp}, but normalize each line of sight using an empirical model for the total extinction in that direction. In our second model (SFD98), we normalize the extinction along each line of sight by the total extinction from \cite{schlegel98}. However, \cite{schlegel98} is believed to overestimate $E(B-V)$ in regions of high extinction \citep{stanek98,arce99,chen99}. To account for this, we consider a modified SFD98 model (modSFD98), where we correct the high extinction values following \cite{bonifacio00}, such that
\ifapj
$E(B-V)' = E(B-V)$ for $E(B-V) \leq 0.1$ and $E(B-V)' = 0.1 + 0.65(E(B-V)-0.1)$ for $E(B-V) > 0.1$,
\else
\begin{equation}
E(B-V)' = \left\lbrace \begin{array}{lcl}
E(B-V) & \mbox{for} & E(B-V) \leq 0.1 \\
0.1 + 0.65(E(B-V)-0.1) & \mbox{for} & E(B-V) > 0.1
\end{array}\right.,
\label{eq:extinction_fiddle}
\end{equation}
\fi
which significantly reduces the total extinction in the Galactic plane. Since the SFD98 dust maps may be unreliable in the areas of high extinction found near the Galactic midplane \citep[e.g.,][]{majewski11}, we also consider a model employing the Rayleigh-Jeans Color Excess (RJCE) extinction map of the Galactic midplane presented by \cite{nidever12} where possible, falling back to the modified SFD98
\ifapj
\else
(Eqn. \ref{eq:extinction_fiddle})
\fi
only in the 17\% (42\%) of cases where our simulated ccSNe (SNe Ia) lie outside of the RJCE extinction map footprint. We note that the RJCE extinction map is derived from red giant branch stars which lie within the Galaxy, and so only estimates the total extinction out to 18-20 kpc from the observer. This should still represent the total extinction for most of the simulated SN positions. For comparison we also present results that assume no extinction (No Dust).
We adopt $A_{V} = R_{V} E(B-V)$ and $A_{K} = 0.114 R_{V} E(B-V)$ following \cite{cardelli89}, with $R_{V}=3.1$.
The extinction law, the value of $R_{V}$ at its simplest, is not uniform in the Galaxy. The value of $R_{V}=3.1$ we adopt is an average value \citep[e.g.,][]{cardelli89}. Dense molecular clouds can have far larger values of $R_{V}$ \citep[e.g.,][]{jenniskens93,olofsson10}, but molecular clouds cover a small fraction of the sky. Dust in lower extinction directions towards the bulge shows evidence for an $R_{V}$ significantly below $R_{V} =3.1$ \citep{nataf13}, but if extinction is low, the particular value of $R_{V}$ is unimportant. Readers should be aware of these issues, but they will not dominate our results in any typical direction where extinction is dominated by integrating the normal ISM over long sight lines.
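A minimal sketch of the modSFD98 correction (Eqn. \ref{eq:extinction_fiddle}) and the adopted extinction-law conversion; the function names are ours, for illustration only.

```python
def modsfd98_ebv(ebv):
    """Bonifacio et al. (2000) correction applied to an SFD98 E(B-V):
    values above 0.1 are compressed, as in Eqn. (2) of the text."""
    return ebv if ebv <= 0.1 else 0.1 + 0.65 * (ebv - 0.1)

def extinctions(ebv, R_V=3.1):
    """V- and K-band extinctions for a given E(B-V), using the
    Cardelli et al. (1989) coefficients adopted in the text:
    A_V = R_V E(B-V) and A_K = 0.114 R_V E(B-V)."""
    A_V = R_V * ebv
    A_K = 0.114 * R_V * ebv
    return A_V, A_K
```

For example, an uncorrected SFD98 value of $E(B-V)=1.1$ compresses to $0.75$, i.e. $A_{V} \simeq 2.3$ and $A_{K} \simeq 0.27$ mag.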
\subsection{ccSNe}
\label{sec:ccSNe}
We assume that ccSNe trace the thin disk and use the thin disk parameters from TRILEGAL ($R_{d} = 2.9$ kpc and $H = 95$ pc) described in \S\ref{sec:dust}. The distance probability distribution of ccSNe for these parameters is shown in Fig. \ref{fig:dcomp} and the extinction probability distributions of ccSNe for the different dust models are displayed in the left panels of Fig. \ref{fig:Acomp}. We only present results for this single set of thin disk parameters because, to the extent that dust traces massive star formation, the exact choice of disk parameters is relatively unimportant. First, if the dust distribution traces the distribution of massive star formation, then the differential distribution of ccSNe along a line of sight, $dN/dl$, is proportional to the differential of the optical depth along the line of sight, $d\tau/dl$. Thus, if dust traces star formation, the differential distribution of the progenitors in optical depth, $dN/d\tau$, is independent of the line of sight spatial distribution chosen for the ccSNe and the dust. Second, any effects from changing the spatial distribution are negligible compared to the differences between the extinction models.
For example, consider the effects of adding a spiral arm at 1 kpc with a 1 kpc inter-arm distance. Our model will spread the star formation of that arm uniformly over $\sim1/2$ the inter-arm separation rather than putting it in the arm. This means we spread the distance modulus by $\sim 0.6$ mag at most, which is negligible compared to the effects of the dust distribution. For more distant arms, the problem rapidly becomes even smaller, so the detailed 3-D structure of the disk is unimportant to our results.
\begin{figure}
\includegraphics[width=9.2cm, angle=0]{fig2.pdf}
\caption{Differential (top) and cumulative (bottom) distance distributions of Galactic SNe from the Sun. Reasonable changes in the distance distributions have little effect on the visibility, so we only present the fiducial case. In particular, 3-D structure, such as spiral arms, would produce features in this figure but would have little consequence for the magnitude distribution of SNe, as discussed in \S\ref{sec:ccSNe}.\label{fig:dcomp}}
\end{figure}
Given the distances and extinctions to each supernova position, we calculate the apparent magnitude probability distribution of ccSNe. We first consider a case using a fixed magnitude of $M_{V,max} = -16$ and $V-K = 1.0$, where the color is a ``typical" value from \cite{krisciunas09}. This simple case allows the reader to easily rescale the observability for arbitrary luminosity and color. We also present the magnitude distribution obtained by folding in the ccSNe luminosity function found by \cite{li11a} and use this case for quantitative estimates of the observability of ccSNe. While folding in the luminosity function broadens the resulting magnitude distribution only slightly, this effect is easy to include.
We find the apparent magnitude distribution for the ccSNe progenitor population by assuming that the number distribution of the population is given by a Salpeter IMF ($dN/dM \varpropto M^{-2.35}$) with a minimum mass of $8 \mathrm{M}_{\odot}$ and a maximum mass of $100 \mathrm{M}_{\odot}$. To find the progenitor luminosity for a given mass, we rely on an interpolation of the Padova isochrones \citep{marigo08}, taking the progenitor luminosity to be the luminosity of the most massive star left on the isochrone. Other models would yield moderately different results due to differing treatments of mass loss and the transition between being red or blue supergiants and Wolf-Rayet stars at the time of explosion \citep[e.g.,][]{groh13}.
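The Salpeter mass sampling can be done by inverse-transform sampling of the truncated power law; the following sketch (ours, not the paper's code) uses the mass limits quoted above.

```python
import numpy as np

def sample_salpeter(n, alpha=2.35, m_min=8.0, m_max=100.0, seed=0):
    """Inverse-transform sampling of progenitor masses (M_sun) from a
    Salpeter IMF, dN/dM ~ M^-alpha, truncated to [m_min, m_max].

    For u ~ Uniform(0,1):
        M = [m_min^(1-a) + u (m_max^(1-a) - m_min^(1-a))]^(1/(1-a))."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)
    a, b = m_min**(1.0 - alpha), m_max**(1.0 - alpha)
    return (a + u * (b - a))**(1.0 / (1.0 - alpha))
```

With these limits, roughly a quarter of the progenitors drawn have $M > 20\,\mathrm{M}_{\odot}$, as expected analytically for a $-2.35$ slope.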
\begin{figure}
\includegraphics[width=9.2cm, angle=0]{fig3.pdf}
\caption{Differential (top) and integral (bottom) extinction distributions for ccSNe (left) and SNe Ia (right). The bottom axis for each panel gives the V-band extinction (in magnitudes), while the top axis gives the K-band extinction. The models for the different dust normalizations are described in \S\ref{sec:dust}. The model dependence of the extinction distribution, rather than the distance distribution, is the primary source of uncertainty in the visibility of SNe and their progenitors.\label{fig:Acomp}}
\end{figure}
\subsection{SNe Ia}
\cite{mannucci06} and \cite{brandt10} find that SNe Ia progenitors can be described by a bimodal progenitor delay time distribution, with approximately half the SNe Ia occurring at stellar ages of order 100 Myr and the remaining half occurring on Gyr timescales. Therefore, we draw our SNe Ia progenitors equally from the thin disk population used for the dust and ccSNe in \S\ref{sec:dust} and \S\ref{sec:ccSNe} and from a thick disk population with $R_{d} = 2.4$ kpc and $H = 800$ pc, again following the TRILEGAL parameters.
We note that recent work has advocated a continuous delay time distribution \citep{horiuchi10,maoz12}, but this extra complication seemed unnecessary for our present models.
As with the ccSNe, the distance and extinction probability distributions of SNe Ia are shown in Figs. \ref{fig:dcomp} and \ref{fig:Acomp}.
We present the cumulative magnitude probability distribution for SNe Ia both for a characteristic magnitude of $M_{V} = -18.5$ with $V-K = -0.7$ \citep{folatelli10} and after folding in the SNe Ia luminosity function from \cite{li11a}, using the latter for quantitative estimates of the observability of SNe Ia. As for the ccSNe, including the observed luminosity function is a simple elaboration that only slightly broadens the resulting magnitude distribution.
\subsection{Confusion}
The observability of SNe or their progenitors located towards the Galactic center may be reduced by confusion. We will discuss confusion only in relation to the progenitors of ccSNe since the nature of SN Ia progenitors is debated. We note, however, that current searches for binary companions to SN Ia in the Galaxy and LMC are primarily limited by the difficulties in inferring the position of the SN from the geometry of the remnant \citep[see, e.g.,][]{edwards12,kerzendorf12,schaefer12}. The position of any new SN Ia would be directly measured, greatly simplifying the search for either the progenitor or a surviving companion (donor) star. To estimate the effect of confusion in the near-IR, we measured the density of sources brighter than $m_{0K}=12$ in the 2MASS catalog \citep{skrutskie06} towards each simulated progenitor position. We then extrapolate the integrated surface density, $\Sigma_{K}$, based on a power law, $\Sigma_{K} = \Sigma_{0K}10^{\alpha_{K}(m_{prog}-m_{0K})}$, where $m_{prog}$ is the apparent magnitude of the progenitor, $\alpha_{K}$ is the power law index, and $\Sigma_{0K}$ is the source density down to the limiting magnitude $m_{0K}$. We estimated $\alpha_{K} \sim 0.4$ by fitting the slope of the log of the number of sources in the 2MASS catalog for different limiting magnitudes at different coordinates.
We followed a similar procedure for estimating the effect of confusion in the V-band. We measured the density of sources brighter than $m_{0V}=17$ in the USNO-B1.0 catalog \citep{monet03} towards each simulated progenitor position, with rough $V$ magnitudes estimated from the photographic $R$ and $B$ magnitudes by the relation\footnote{\url{http://www.aerith.net/astro/color_conversion.html}} $V = 0.625R + 0.375B$. We extrapolate the integrated surface density of sources in V-band by $\Sigma_{V} = \Sigma_{0V}10^{\alpha_{V}(m_{prog}-m_{0V})}$, where we have estimated that $\alpha_{V} \sim 0.45$.
While the available data are not ideal for these estimates, they should be adequate.
For each Monte Carlo realization, the probability, $P$, of finding a source with $m<m_{target}$ within an angular radius $\theta$ of the target is $P = 1 - e^{-\Sigma\theta^{2}}$.
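A hypothetical helper combining the power-law extrapolation with this Poisson probability; note that, following the expression above, $\Sigma$ is taken per unit $\theta^{2}$ (so any geometric factor is absorbed into its calibration).

```python
import math

def confusion_probability(sigma_0, m_prog, m_0, alpha, theta):
    """Probability of at least one source brighter than m_prog within
    angular radius theta, per the expressions quoted in the text.

    sigma_0 : source density brighter than m_0 (per unit theta**2,
              matching P = 1 - exp(-Sigma * theta**2) as written)
    alpha   : power-law slope (~0.4 in K-band, ~0.45 in V-band)"""
    sigma = sigma_0 * 10.0**(alpha * (m_prog - m_0))
    return 1.0 - math.exp(-sigma * theta**2)
```

As expected, the confusion probability rises both for fainter targets (larger $m_{prog}$) and for larger matching radii $\theta$.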
\subsection{Shock Breakout}
\label{sec:sbo}
The first electromagnetic signature from a SN is not the familiar rise to peak and then decline over weeks or months, but a short flash of radiation as the shock wave ``breaks out" from the surface of the star. While this SBO phenomenon also occurs in SNe Ia, we focus on ccSNe because SNe Ia do not emit a (currently) detectable neutrino signal that could be used to trigger a search \citep{odrzywolek11} and the shock breakout from a white dwarf would be much fainter than that from a massive star \citep{piro10}.
The breakout pulse from ccSNe has only been observed a few times \citep[GRB 060218/SN 2006aj, XRT 080109/SN 2008D, and SNLS-04D2dc;][]{campana06,soderberg08,schawinski08} because its characteristic duration roughly corresponds to the light crossing time of the star, $R_{*}/c$. It occurs with a delay after the neutrino or gravity wave pulse set by the time for the shock to reach the surface of the star, making it a probe of the structure of the star \citep{kistler12}. The search for a pulse lasting seconds to hours occurring minutes to days after a neutrino or gravitational wave trigger is challenging for observers based on a rotating Earth orbiting a bright star and embedded in a dusty Galactic disk.
As a simple approximation for the SBO properties we adopt the $n=3$ (radiative) polytrope model of \cite{matzner99}, similar to the recent work by \cite{kistler12}. Fixing the explosion energy at $10^{51}$ erg, simply using Thomson opacities and defining the luminosity as the characteristic energy divided by the characteristic time scale, we obtain order of magnitude estimates that
\ifapj
\begin{eqnarray}
T_{\mathrm{eff}} \sim 1.24 \times 10^{6} \mathrm{K} \left( \frac{M_{\star}}{M_{\odot}} \right)^{0.046} \left( \frac{M_{\mathrm{ej}}}{10 M_{\odot}} \right)^{-0.114} \nonumber \\ \times \left( \frac{R}{50 R_{\odot}} \right)^{-0.48}
\end{eqnarray}
and
\begin{eqnarray}
L \sim 1.66 \times 10^{45} \mathrm{erg/s} \left( \frac{M_{\star}}{10 M_{\odot}} \right)^{0.126} \left( \frac{M_{\mathrm{ej}}}{10 M_{\odot}} \right)^{-0.816} \nonumber \\ \times \left( \frac{R}{50 R_{\odot}} \right)^{-0.22},
\end{eqnarray}
\else
\begin{eqnarray}
T_{\mathrm{eff}} \sim 1.24 \times 10^{6} \mathrm{K} \left( \frac{M_{\star}}{M_{\odot}} \right)^{0.046} \left( \frac{M_{\mathrm{ej}}}{10 M_{\odot}} \right)^{-0.114} \left( \frac{R}{50 R_{\odot}} \right)^{-0.48} \nonumber
\end{eqnarray}
and
\begin{eqnarray}
L \sim 1.66 \times 10^{45} \mathrm{erg/s} \left( \frac{M_{\star}}{10 M_{\odot}} \right)^{0.126} \left( \frac{M_{\mathrm{ej}}}{10 M_{\odot}} \right)^{-0.816} \left( \frac{R}{50 R_{\odot}} \right)^{-0.22},
\end{eqnarray}
\fi
where $M_{\star}$ is the mass of the progenitor, $M_{\mathrm{ej}}=M_{\star}-1.4 M_{\odot}$ is the mass of the ejecta assuming a neutron star is formed, and $R$ is the progenitor radius. Combined with our model for the progenitor properties (see \S \ref{sec:progenitor}), this leads to the predicted distribution of the peak absolute magnitudes of the SBO shown in Fig. \ref{fig:sbo_mag} if we assume the radiation is thermalized to a blackbody spectrum. This is an important assumption \citep[see, e.g., the discussion in][]{nakar10}, and requires careful consideration as part of any full design study of our proposal for a Galactic SBO detection system in Appendix \ref{app:One} or extragalactic SBO detection system in Appendix \ref{app:Two}. \cite{sapir13} find that the SBO radiation has a shallower spectral slope at low energies than blackbody radiation, meaning that the SBO optical and IR luminosities we present in \S\ref{sec:observing_sbo} assuming a blackbody spectrum should be taken as lower limits. Nonetheless, these should be regarded as only order of magnitude estimates of the SBO flux.
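For reference, a small helper evaluating these order-of-magnitude scalings exactly as written (masses in $M_{\odot}$, radius in $R_{\odot}$; an illustrative sketch, not the paper's code):

```python
def sbo_properties(m_star, radius, m_ns=1.4):
    """Order-of-magnitude SBO effective temperature (K) and luminosity
    (erg/s) from the n=3 polytrope scaling relations in the text,
    for a fixed 10^51 erg explosion energy.

    m_star : progenitor mass in M_sun
    radius : progenitor radius in R_sun
    m_ns   : assumed neutron star mass, so M_ej = m_star - m_ns"""
    m_ej = m_star - m_ns
    T_eff = (1.24e6 * m_star**0.046
             * (m_ej / 10.0)**-0.114
             * (radius / 50.0)**-0.48)
    L = (1.66e45 * (m_star / 10.0)**0.126
         * (m_ej / 10.0)**-0.816
         * (radius / 50.0)**-0.22)
    return T_eff, L
```

At the pivot values ($M_{\mathrm{ej}}=10\,M_{\odot}$, $R=50\,R_{\odot}$) this recovers $T_{\mathrm{eff}} \sim 10^{6}$ K and $L \sim 10^{45}$ erg/s, illustrating the weak dependence on the progenitor parameters.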
\begin{figure}
\includegraphics[width=9.2cm, angle=0]{fig4.pdf}
\caption{Cumulative absolute (left) and apparent (right) magnitude probability distributions in the optical (top) and near-IR (bottom) for the shock breakout radiation from ccSNe. These estimates assume that the SBO radiation is thermalized. For comparison we show the approximate surface brightness (in $\mathrm{mag}/\mathrm{arcsec}^{2}$) of the daytime sky in the visible and in the near-IR. Note that the majority of shock breakouts from Galactic SNe appear brighter than the daytime sky in the near-IR.\label{fig:sbo_mag}}
\end{figure}
\subsection{Failed SNe}
\label{sec:failed_sne}
There appears to be a paucity of higher mass progenitors to ccSNe
\citep{kochanek08,smartt09a,eldridge13}.
In particular, \cite{smartt09a} note that the maximum zero-age main-sequence mass
that seems to be associated with Type IIP SNe originating from
red supergiants appears to be $\sim 17M_\odot$,
while stars are expected to explode as red supergiants up to
$\sim 25 M_\odot$. Simulations show that stars in this mass
range have density structures that make it more difficult for these
stars to explode as SNe \citep{oconnor11,ugliano12}.
While there has always been some parameter range where black
hole formation without an SN was expected \citep[e.g.,][]{heger03},
these recent results suggest that the phenomenon is more common
than the earlier view that it would be restricted to very high
mass stars in low metallicity galaxies. The upper limit on the fraction of ccSNe that fail to explode is $\sim50\%$, with a nominal estimate of $\sim10\%$ \citep{lien10,horiuchi11}.
It is perfectly feasible to search for such failed SNe based on
electromagnetic signatures because in the final analysis a
massive, luminous star effectively vanishes, potentially with
some interim transient \citep{kochanek08}. In the Galaxy
one has the added advantage that neutrinos will clearly indicate
that a core collapse has occurred. \cite{nadezhin80} pointed
out that the envelopes of red supergiants in this mass range are so tenuously
bound that the drop in gravitational potential energy due to
the binding energy carried by the escaping neutrinos
during core collapse is sufficient to unbind
the envelope. Recently, \cite{lovegrove13} have
simulated the process using realistic models of $15$ and $25M_\odot$
red supergiants and confirmed the effect. The external signature
consists of an SBO, followed by a roughly
year long transient with a luminosity of $\sim 10^6 L_\odot$ and
an apparent temperature of order $3000$~K as the envelope
expands, cools, and releases the energy associated with
recombination. Because the shock velocities of $\sim 10^2$~km/s
are much lower than for a true SN, the shock breakout pulse is
both much weaker and much cooler. The \cite{lovegrove13}
simulations show a peak of order $10^6 L_\odot$ but may not
adequately resolve the thin surface layer. \cite{piro13} applied
analytic models of shock breakouts, finding a peak luminosity
of order $3 \times 10^7 L_\odot$ with a temperature of order
$10^4$~K and lasting $\sim 10$~days, and determined that the shock breakout spectrum is thermal. As a rough guide to
the detectability of such transients, we model the shock breakout as a $10^7 L_\odot$, $10^4$~K blackbody and the transient
as a $10^6 L_\odot$, $3000$~K blackbody.
\section{Results}
\label{sec:results}
Using our Galactic model we evaluate, in the following subsections, the prospects of observing the next Galactic ccSN, its shock breakout, its progenitor, any precursor variability, and failed SNe. Where relevant, we discuss both ccSNe and SNe Ia. We also infer a Galactic SN rate from historical SNe using our simulated observability. We adopt the RJCE extinction model as our standard, and in most cases simply show the results for the other models in the figures.
\subsection{Prospects of Observing the Next Galactic SN}
\label{sec:sne}
There is little likelihood of the next (successfully exploding) Galactic SN being unobservable. We present the cumulative apparent magnitude distributions for ccSNe in Fig. \ref{fig-SNeII_mag} and SNe Ia in Fig. \ref{fig-SNeIa_mag}. Most ccSNe should be observable in the optical and virtually all should be observable in the near-IR. For example, there is a 99\% chance that the next Galactic ccSN will peak at $m_{V,max}<25$ and a $\simeq 100\%$ chance of $m_{K,max}<14.3$. In fact, it is likely that a Galactic ccSN would be observable by semi-professional amateurs knowing where to look, with 82\% of Galactic ccSNe having $m_{V}<15$. There is approximately a one-in-three chance that a Galactic ccSN would be visible with the naked eye ($m_{V}<5$).
\begin{figure}
\centering\includegraphics[width=9.2cm, angle=0]{fig5.pdf}
\caption{Cumulative magnitude probability distributions for ccSNe and their progenitors. The top panels assume a fixed $M_{V,max} = -16$. The middle panels use the luminosity function found by \cite{li11a}. In both cases we use a fixed $V-K = 1.0$. The bottom panels are derived from the Padova isochrones \citep{marigo08}. The models for the different dust normalizations are described in \S\ref{sec:dust}. To illustrate the importance of extinction, we show the brightness of a typical ccSN occurring in the LMC ($m_V \approx 2.7$, $m_K \approx 1.5$, assuming $M_{V,max}=-16$, a distance modulus of 18.5 \citep{pietrzynski13}, and an extinction of $A_{V}=0.2$).\label{fig-SNeII_mag}}
\end{figure}
SNe Ia will be even easier to observe because the delayed component lies off of the Galactic plane and will be less extinguished. While there will be no neutrino trigger or pointing information to search for a Galactic SN Ia, 92\% will have $m_{V,max}<13.5$, which is within the limits of current all sky surveys, and, if the Large Synoptic Survey Telescope (LSST) monitors the Galactic plane, over 99\% of SN Ia would be detected.
Confusion will have little effect on the observability of Galactic SNe because the vast majority of SNe appear relatively bright in both V and K-bands. We also note that the position of a Galactic SN Ia could easily be determined on a time scale of weeks or months using $^{56}$Ni/$^{56}$Co gamma rays \citep{timmes97,horiuchi10,diehl12,ng12}. A Galactic ccSN could likely be observed in gamma rays given some directional information \citep{timmes97,diehl12}.
\begin{figure}
\centering\includegraphics[width=9.2cm, angle=0]{fig6.pdf}
\caption{Cumulative magnitude probability distributions for SNe Ia. The upper panels assume a fixed $M_{V,max} = -18.5$. The lower panels use the luminosity function found by \cite{li11a}. Both panels use a fixed $V-K = -0.7$ mag. Given the model uncertainties, we make no attempt to predict the progenitor properties of SNe Ia.\label{fig-SNeIa_mag}}
\end{figure}
As emphasized in \S\ref{sec:ccSNe}, the magnitude distribution is primarily controlled by extinction rather than by the small spread in distance modulus across the Galaxy.
Fig. \ref{fig:dcomp} shows that the 10th-90th percentiles of the cumulative distance probability distribution range from approximately 5-15 kpc, which corresponds to a less than 2.5 magnitude spread in distance modulus. Meanwhile, Fig. \ref{fig:Acomp} shows that the 10th-90th percentiles of the cumulative extinction probability distribution using the RJCE model differ by approximately 15 magnitudes.
If we use the modSFD98 dust model, instead of RJCE, the predicted observability decreases substantially in V-band but remains near 100\% for both ccSNe and SNe Ia in K-band. Confusion has a noticeable effect on the V-band observability when using the modSFD98 model because this model predicts a substantial number of SNe will be faint ($20 \lesssim m_{V,max} < 25$). With the modSFD98 model there is a $\sim 76\%$ chance the next Galactic ccSN will peak at $m_{V,max}<25$, which decreases to $\sim 72\%$ when also requiring no brighter source within 1". Similarly, there is an 88\% chance that a Galactic SN Ia would peak at $m_{V}<25$, and an 86\% chance that there will also be no brighter source within 1".
\subsection{Progenitor Characteristics}
\label{sec:progenitor}
We show the cumulative magnitude probability distribution of likely ccSNe progenitors in Fig. \ref{fig-SNeII_mag}. We do not consider SNe Ia in this section because it is not clear what mechanism (single or double degenerate) is responsible for the majority of SN Ia events and it is clear that they would be much less luminous than ccSNe progenitors. We again emphasize, however, that the precise astrometric position available for a new Galactic SN Ia will greatly simplify attempts to characterize the progenitor.
\begin{figure}
\includegraphics[width=9.2cm, angle=0]{fig7.pdf}
\caption{Probability, as a function of angular separation from a ccSN or its progenitor, of a source with $m<m_{target}$ being present. The panels are the same cases as in Fig. \ref{fig-SNeII_mag}. Note that the probability of confusion affecting K-band observations of a Galactic SN is negligible. The results for some of the dust models cannot be seen in the plot because they are essentially zero.}\label{fig-SNeIIconf}
\end{figure}
Fig. \ref{fig-SNeIIconf} shows the probability of finding a star with $m<m_{prog}$ within a given radius of a ccSN progenitor. In the near-IR, it is unlikely that confusion is a problem. We find that $92\%$ of likely progenitor stars have already been observed by 2MASS, assuming $m_{K,lim}=14.3$ and no brighter sources within $2"$.
In the optical, the odds are less favorable. We find that $57\%$ of likely progenitor stars are in the USNO-B1.0 catalog, given $m_{V,lim}=21.0$ and no brighter sources within $1"$. A similar fraction of likely progenitors should be included in the recently completed INT Photometric H$\alpha$ Survey of the Northern Galactic Plane \citep[IPHAS;][]{drew05} and the ongoing VST/OmegaCam Photometric H$\alpha$ Survey of the Southern Galactic Plane and Bulge (VPHAS+). A lack of optical data on the progenitor would limit our ability to physically characterize the progenitor because measurements near the peak of its spectral energy distribution are needed to constrain its temperature and thus its luminosity. However, the ability to increase the fraction of progenitors with optical data is limited by the enormous extinction toward the Galactic center. Even if LSST eventually images the entire sky with $m_{V,lim}=26.5$ in its coadded images, the likelihood of the progenitor being observed with no brighter sources within 1" only increases to 66\%.
The K-band results are relatively insensitive to the extinction model (with the 2MASS observability dropping slightly to 89\% with modSFD98 extinction), but the V-band results decrease significantly when using the modSFD98 model, with only 42\% of likely progenitor stars in the USNO-B1.0 catalog given the same magnitude and resolution limits, and only increasing to 48\% when considering coadded imaging from LSST.
The impact of confusion on the observability of SNe Ia will be negligible (see Fig. \ref{fig-SNeIaconf}).
We also note that difference imaging methods, scaling and subtracting post-explosion images from the pre-explosion image, can essentially eliminate confusion, as has been done for several extragalactic SNe \citep{galyam09,maund13}. This depends on also matching the effective band passes of the data and likely cannot be applied to older photographic survey data.
\begin{figure}
\includegraphics[width=9.2cm, angle=0]{fig8.pdf}
\caption{Probability, as a function of angular separation from a Galactic SN Ia, of a source with $m<m_{target}$ being present. The panels are the same cases as in Fig. \ref{fig-SNeIa_mag}. Note that the probability of confusion affecting K-band observations of a Galactic SN is negligible. The results for some of the dust models cannot be seen in the plot because they are essentially zero.}\label{fig-SNeIaconf}
\end{figure}
Several SNe have now shown high luminosity ($10^{7}-10^{8}L_{\odot}$) eruptions in the years or months preceding their explosions as SNe \citep{pastorello07,ofek13,mauerhan13}. While attention has focused on the dramatic but rare examples of pre-SN outbursts, there is no reason to think that the lower level of variability observed by \cite{szczygiel12} for SN2011dh is not the norm. Few surveys exist that image the Galactic plane with the cadence necessary to detect variability in the precursor of the next Galactic SN. For example, the ``New Milky Way" system is being developed to survey the entire Milky Way area visible from its observing site each night to a limiting magnitude of $m_{V,lim} \sim 13.5$ \citep{sokolovsky13}. The ASAS survey monitors the full sky to a limiting magnitude of $m_{V,lim} \sim 14$ \citep{pojmanski02} and will be extended to $\sim 16$ by the ASAS-SN survey \citep{shappee13}. However, an all-sky catalog with $m_{V,lim} \leq 14$ (16) with no other sources brighter within 1" only includes $\sim 21\%$ (32\%) of ccSNe progenitors. LSST has the potential to improve this situation, with 66\% of ccSNe progenitors observable with $m_{V}<24.5$ and no brighter sources within 1"; however, LSST currently plans to ignore the Galactic plane \citep{abate12}. The best method to detect precursor variability is to survey the Galactic plane in the near-IR with at least monthly cadence. The VISTA Variables in the Via Lactea survey \citep{minniti10} will image a substantial fraction of the Galactic plane in the near-IR with sufficient cadence, but it is limited to a 5-year duration and a large fraction of likely SN progenitors will be brighter than its saturation limit. A limiting magnitude of $m_{K}\sim 8$ (10) would be sufficient to monitor $\sim 78\%$ (87\%) of likely ccSN progenitors for precursor variability, with the caveat that near-IR variability tends to be weaker than optical variability.
Progenitors of failed ccSNe can be as easily identified as those of successful ccSNe if there is a transient associated with the event, either the optical signature we discuss in \S\ref{sec:failed_sne} and \S\ref{sec:observing_failed_sne} or X-ray emission from accretion of residual material onto the newly formed black hole. Even without such additional information, the progenitor may still be identifiable by its absence in post-event imaging, essentially following the approach of \cite{kochanek08}. The challenge is separating the vanishing of the progenitor star from all other Galactic variable sources in the search region defined by the neutrino signal. This is likely feasible because most variable sources are ``continuously" variable, while the progenitor of a failed ccSN can only vanish once.
\subsection{Observing the Shock Breakout}
\label{sec:observing_sbo}
Using our Galactic model, we can predict the distribution of SBO apparent magnitudes, given the radiation thermalization caveat from \S\ref{sec:sbo}, as shown in Fig. \ref{fig:sbo_mag}. An SBO occurring in the night-time sky would likely be easily observable in both the visible ($P(m_{V}<20) \sim 0.85$) and near-IR ($P(m_{K}<12) \sim 0.92$). Because of the high radiation temperature, the natural wavelength to search for an SBO is in the UV, which can only be done from space, as proposed by \cite{sagiv13}. For Galactic SNe observed from the Earth's surface, however, the best wavelengths are actually in the near-IR. This is a combination of the effects of Galactic absorption and the fact that the daytime near-IR sky is darker than in the optical. The probability of a thermalized shock breakout exceeding the brightness of the daytime near-IR sky is 60, 63, and 64\% using approximate near-IR sky brightnesses from \cite{jim11} of 6.6, 5.6, and 4.9 mag per square arcsec in J, H, and K respectively. We present a design sketch of an instrument to detect Galactic SBO pulses even in daytime in Appendix \ref{app:One}. While an SBO would still be unobservable if the SN appears too close to the Sun, we expect only 2\% (9\%) of SNe to occur within $20^{\circ}$ ($40^{\circ}$) of the Sun. Even in these cases where it would be difficult or impossible to detect the SBO, the duration of the SN is long enough that there will be no trouble identifying the SN as it fades and becomes observable at night. Given the brightness of SBO events, we present in Appendix \ref{app:Two} a system capable of detecting extragalactic shock breakouts that occur within the Local Volume.
\subsection{Identifying Failed SNe}
\label{sec:observing_failed_sne}
The prospects of identifying a star dying without a dramatic SN explosion are challenging but feasible in external galaxies \citep{kochanek08}. Within the Galaxy it is more difficult because one must search a huge area and the unknown distance means that a flux cannot be associated with a luminosity. However, if failed SNe associated with red supergiants follow the models of \cite{lovegrove13} and \cite{piro13}, it would likely be possible to observe the weak shock breakout and transient associated with such an event occurring within the Galaxy provided that rough positional information is obtained from neutrino or gravitational wave experiments.
While failed SNe arising from red supergiants and their associated shock breakouts are less luminous than traditional ccSNe, their cooler temperatures make their observability comparable. Assuming the radiation is thermalized, the absolute magnitudes, in various filters, of a typical weak shock breakout from a failed SN are $M_{B}\sim -13.3$, $M_{V}\sim -13.5$, $M_{R}\sim -13.6$, $M_{I}\sim -13.7$, $M_{J}\sim -13.6$, $M_{H} \sim -13.7$, and $M_{K} \sim -13.6$. The fraction of such events brighter than an arbitrary magnitude in V or K can be scaled from the top panels of Fig. \ref{fig-SNeII_mag}. Using the RJCE extinction model this corresponds to $\sim 100\%$ of events with $m_{K}<15$ and 91\% with $m_{V}<20$, which is slightly better than the observability of the normal ccSN SBOs (see Fig. \ref{fig:sbo_mag}). However, it is noteworthy that this improved observability translates to $\sim 97\%$ of such shock breakout events appearing brighter than the near-IR sky ($m_{K}<4.9$).
Similarly, for the transient, $M_{B}\sim -6.9$, $M_{V}\sim -8.7$, $M_{R}\sim -10.0$, $M_{I}\sim -11.2$, $M_{J}\sim -12.2$, $M_{H} \sim -12.9$, and $M_{K} \sim -13.3$, which translates to $\sim 100\%$ of events with $m_{K}<15$ and 72\% with $m_{V}<20$.
\subsection{Estimates of the Galactic SNe Rate}
We can also use the magnitude distribution of Galactic SNe from \S\ref{sec:sne} together with historical SNe to estimate the frequency of Galactic supernovae, the Galactic star formation rate (SFR), and the ratio of Galactic core-collapse to Type Ia SNe. These estimates, however, are limited by the small number of recorded SNe and the completeness of the record. \cite{stephenson02} conclude that 5 Galactic SNe have been observed since 1000 AD, when the historical records become relatively complete. These supernovae are SN~1006 (SNIa), SN~1054 (ccSN), SN~1181 (ccSN), SN~1572 (SNIa), and SN~1604 (SNIa). \cite{clark77} estimate that the apparent brightness of SN~1181 was $\sim0$ mag, while the other 4 SNe were $\lesssim-4$ mag. Given the magnitude probability distributions of our models, there is only a $\simeq 3\%$ chance that only one SN occurred with $-3<m_{V,max}<0$ if 4 occurred with $m_{V,max}<-3$. This suggests that the historical record may be incomplete for SNe fainter than $m_{V,max}\lesssim-2$. Therefore we separately present results using the $m_{V,max}<-2$ (3 SNe Ia and 1 ccSN) and the full SN samples (3 SNe Ia and 2 ccSNe), but take the results found using the $m_{V,max}<-2$ sample to be more meaningful. While the supernova that produced Cas A might have been observed in 1680 as a 6th magnitude event \citep[see, e.g.,][]{thorstensen01}, we do not consider it in our analysis since the historical record is clearly incomplete for such faint SNe.
Our results from \S\ref{sec:sne} show that $3.6\%$ ($9.0\%$) of Galactic ccSNe and $24\%$ ($40\%$) of SNe Ia have $m_{V,max} < -2$ (0). SNe Ia are more observable than ccSNe both because they are intrinsically brighter and because the delayed SNe Ia component suffers from far less extinction due to its larger scale height.
Historical SNe were recorded almost exclusively by cultures in the Northern hemisphere, with China providing the most complete record.
Since 1000 AD, the Chinese capitals, where the observations that are the basis of the historical records were made, were located primarily between $30^{\circ}$ and $40^{\circ}$ N. We find that for a fiducial latitude of $35^{\circ}$ N, $90\%$ of Galactic SNe with $m_{V,max}<-2$ would have been above the horizon at night during their peak.
The number of historical SNe, folded together with our simulated observability at $35^{\circ}$ N, suggests a Galactic core-collapse SN rate of $3.2^{+7.3}_{-2.6}$ ($2.5^{+3.4}_{-1.6}$) per century and a Galactic Type Ia SN rate of $1.4^{+1.4}_{-0.8}$ ($0.8^{+0.8}_{-0.5}$) per century for a total Galactic SN rate of $4.6^{+7.4}_{-2.7}$ ($3.4^{+3.4}_{-1.7}$) per century using $m_{V,max}<-2$ (0) limits.
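The central values of these rates can be recovered with a simple back-of-the-envelope calculation (a sketch, not the authors' full statistical treatment, which also yields the asymmetric Poisson confidence intervals): divide each historical count by the elapsed time and the modeled probability that such a SN is both bright enough and above the night-time horizon.

```python
# Hedged sketch reproducing the approximate central values of the Galactic SN
# rates from the m_V,max < -2 historical sample, using the RJCE-model bright
# fractions (3.6% of ccSNe, 24% of SNe Ia) and 90% night-time visibility.
T_CENTURIES = 10.0  # ~1000 yr of relatively complete records since 1000 AD
F_NIGHT = 0.90      # fraction above the night-time horizon at 35 deg N

def rate_per_century(n_observed, f_bright, t_centuries=T_CENTURIES, f_night=F_NIGHT):
    """Central estimate: observed count / (elapsed time * detection efficiency)."""
    return n_observed / (t_centuries * f_bright * f_night)

r_cc = rate_per_century(1, 0.036)  # 1 historical ccSN with m_V,max < -2
r_ia = rate_per_century(3, 0.24)   # 3 historical SNe Ia with m_V,max < -2
print(round(r_cc, 1), round(r_ia, 1))  # ~3.1 and ~1.4 per century
```

This gives $\sim3.1$ ccSNe and $\sim1.4$ SNe Ia per century, close to the quoted central values; the quoted confidence intervals require the full Poisson analysis.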
Repeating this exercise using the modSFD98 extinction (instead of RJCE), we find that $3.4\%$ ($8.2\%$) of Galactic ccSNe and $24\%$ ($39\%$) of SNe Ia have $m_{V,max} < -2$ (0). This corresponds to $7.2\%$ ($15\%$) of all Galactic SNe having $m_{V,max} < -2$ (0), a Galactic core-collapse SN rate of $3.4^{+7.8}_{-2.8}$ ($2.8^{+3.7}_{-1.8}$) per century, and a Galactic Type Ia SN rate of $1.4^{+1.4}_{-0.8}$ ($0.8^{+0.8}_{-0.5}$) per century for a total Galactic SN rate of $4.8^{+7.9}_{-2.9}$ ($3.7^{+3.8}_{-1.9}$) per century using $m_{V,max}<-2$ (0) limits. The different extinction models we test give fairly consistent results for the inferred Galactic SN rate because these bright SNe must be relatively nearby where the details of the dust model are relatively unimportant.
The SN rates we infer are in reasonable agreement with other estimates that are found by a variety of methods, including historical Galactic SNe \citep[][with $2.5^{+0.8}_{-0.5}$ SN/century and $5.7\pm1.7$ SN/century respectively]{tammann94,strom94}, the massive star birthrate \citep[][1-2 ccSN/century]{reed05}, radioactive aluminum from massive stars \citep[][$1.9\pm1.1$ ccSN/century]{diehl06}, the pulsar birthrate \citep[][$2.8\pm0.1$ ccSN/century and $10.8^{+7}_{-5}$ ccSN/century respectively]{keane08,faucher06}, and the extragalactic SN rate by Hubble type and stellar mass \citep[][$2.8\pm0.6$ SN/century]{li11b}.
We use the SFR to core-collapse SN rate conversion coefficient of $0.0088/\mathrm{M}_{\odot}$ from \cite{horiuchi11}, which assumes a modified Salpeter initial mass function, to infer a Galactic SFR from our calculated rate of Galactic ccSNe. Using the rate based on the RJCE extinction model and the $m_{V,max}<-2$ ($m_{V,max}<0$) sample of ccSNe we estimate the Milky Way's SFR to be $3.6^{+8.3}_{-3.0}$ ($2.9^{+3.8}_{-1.9}$) M$_{\odot}$ yr$^{-1}$, where the quoted uncertainties are purely statistical. This SFR is consistent with direct estimates of the SFR, which range from 1 to 4 M$_{\odot}$ yr$^{-1}$ \citep[e.g.,][]{mckee97,murray10,robitaille10,chomiuk11,davies11}.
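The SFR conversion is a one-line calculation; here is a minimal sketch using the quoted coefficient and the central ccSN rate from the $m_{V,max}<-2$ sample (statistical uncertainties are not propagated here):

```python
# Convert the inferred Galactic ccSN rate to a star formation rate using the
# Horiuchi et al. coefficient of 0.0088 ccSN per solar mass formed.
CCSN_PER_MSUN = 0.0088
r_cc_per_yr = 3.2 / 100.0            # inferred ccSN rate, per year
sfr = r_cc_per_yr / CCSN_PER_MSUN    # Msun per year
print(round(sfr, 1))                 # ~3.6
```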
We can also use the observed fraction of each type of SN to place weak limits on the total fraction of each type of SN in the Milky Way. One of the four (two of the five) observed Galactic SNe with $m_{V,max}<-2$ (0) since 1000 AD were ccSNe, which, folded together with the relative observability of core-collapse and Type Ia SNe, suggests that the fraction of Galactic SNe that are ccSNe, $f_{ccSN}$, is $0.69^{+0.22}_{-0.46}$ ($0.75^{+0.16}_{-0.31}$). This result is consistent with the $f_{ccSN}=0.81$ found for Milky Way-like galaxies by \cite{li11b}.
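The detectability correction behind the central value of $f_{ccSN}$ can be sketched as follows (an illustrative inversion using the RJCE-model bright fractions from above; the quoted uncertainties come from the full statistical treatment):

```python
# Given that a fraction p_obs of the *observed* bright SNe were ccSNe, and that
# ccSNe and SNe Ia have different chances of exceeding m_V,max < -2 (3.6% vs
# 24%), invert the selection to recover the intrinsic core-collapse fraction f:
#   p_obs = f*eff_cc / (f*eff_cc + (1-f)*eff_ia)   =>   solve for f.
def intrinsic_cc_fraction(p_obs, eff_cc=0.036, eff_ia=0.24):
    return p_obs * eff_ia / (p_obs * eff_ia + (1.0 - p_obs) * eff_cc)

print(round(intrinsic_cc_fraction(1.0 / 4.0), 2))  # ~0.69
```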
\begin{figure}
\includegraphics[width=9.2cm, angle=0]{fig9.pdf}
\caption{Limits on the fraction of core-collapse events that produce normal, luminous ccSNe as a function of the total Galactic core-collapse rate. These limits are found by combining the rate of Galactic ccSNe we infer from the historical record with the upper limits placed on the rate of Galactic core-collapse events by the non-detection of SN neutrinos by 30 years of active neutrino detection experiments.
\label{fig-failed_vs_Rcc}}
\end{figure}
The SN rate, SFR, and fraction of SNe by type inferred from our model all have large uncertainties due to the limited number of historical SNe, but they demonstrate that the predicted observability of Galactic SNe given by our models is reasonable. While the rates are consistent with other estimates of the Galactic SN and star formation rates given the large uncertainties, they are somewhat high, which could suggest that the SN rate within a few kpc of the Earth is higher than the Galactic mean.
We note that the existence of ``failed" SNe would increase our inferred Galactic core collapse and star formation rates.
Combining the upper limit placed by the non-detection of SN neutrinos over $\sim30$ years of measurements by neutrino detection experiments \citep{alexeyev02,ikeda07} of $\lesssim 8$ core-collapses/century with our inferred luminous ccSN rate allows us to place weak limits on both the rate of core-collapse events and the fraction of such events that fail to produce normal, luminous ccSNe (see Fig. \ref{fig-failed_vs_Rcc}).
This limit on the fraction of core-collapse events that fail to explode is consistent with the upper limit of $\sim 50\%$ found by \cite{horiuchi11} and the nominal estimate of $\sim 10\%$ given by \cite{lien10}.
\section{Importance of Neutrino Detection}
As noted in the introduction, the detection of a burst of MeV neutrinos can provide crucial answers to three questions: \emph{IF astronomers should look for a Milky Way supernova}, \emph{WHEN they should look}, and \emph{WHERE they should look}.
In the following, we review the role of neutrinos in understanding collapses, the basics of their production and detection, and the present state of inter-experimental co-operation. We then provide new information on the alert procedure of the Super--Kamiokande Collaboration. Most importantly, we also provide the first announcement of a new fast-response capability using the EGADS experiment.
The case of SN 1987A is instructive for defining the main issues \citep{arnett89}. The neutrinos were detected a few hours before the deduced start of the electromagnetic signals, though this was not realized until later. At the time, the world first became aware of the event through optical detections of the early light curve \citep{kunkel87}, which were fortuitous and might have been missed until the supernova became brighter. The neutrinos were easily detected even though the detectors of that time were relatively small and the Large Magellanic Cloud is about five times farther than a typical Milky Way supernova (see Fig. \ref{fig:dcomp}). The progenitor star was detected in archival images \citep{walborn87}, and little information was available on possible variations in its pre-explosion luminosity \citep[see][]{plotkin04}. The SBO and the earliest supernova light curve were also undetected.
For the next Milky Way supernova, neutrinos should serve as the starting gun indicating that the race is on to characterize this once-in-a-generation event in detail across many timescales and electromagnetic bands. The key advantage of neutrinos, besides answering the three questions above and thus possibly allowing one last look at the undisturbed star just before it is destroyed, is that they can reveal the conditions and dynamics deep within the star. A primary goal that we emphasize is to catch not just the early supernova light curve, but also the SBO that precedes it. This will require getting alerts, times, and directions from the neutrino experiments far more rapidly than envisaged by the current system. Some of the present detectors are considerably larger and all have much swifter data processing than those in 1987, but existing data-sharing plans may still lead to crucial lost opportunities.
\subsection{Neutrino Production in Core Collapse}
The proto-neutron star formed after core collapse is nearly at nuclear density, with a central temperature of tens of MeV. It sheds almost all of its energy by radiating neutrinos, mostly through neutrino pair-production processes (the neutronization process $p + e^- \rightarrow n + \nu_e$ corresponds to only $\sim 10\%$ of the total neutrino emission). Because the density is so high, even neutrinos have difficulty escaping from the proto-neutron star, and they diffuse out over several seconds with a spectrum characteristic of the temperature, $T$, at the surface of last scattering, typically a few MeV. There are thought to be differences between the six neutrino flavors ($\nu_e$, $\nu_\mu$, $\nu_\tau$, and their antiparticles) in terms of their total and average energies, but it is a reasonable simplification to say that each flavor should carry about 1/6 of the binding energy release of $\sim (3/5) G M^2 / R \sim 3 \times 10^{53}$ erg and have an average energy of $\simeq 3T \simeq 15$ MeV.
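A quick numerical check of the quoted binding energy (cgs units; the $1.4\,\mathrm{M}_\odot$ mass and 10 km radius are illustrative assumptions, not values from the text):

```python
# Order-of-magnitude check of ~(3/5) G M^2 / R ~ 3e53 erg for a proto-neutron
# star, and the ~1/6 share carried by each neutrino flavor.
G = 6.674e-8        # gravitational constant, cm^3 g^-1 s^-2
MSUN = 1.989e33     # solar mass, g
M = 1.4 * MSUN      # assumed proto-neutron star mass
R = 1.0e6           # assumed radius, 10 km in cm
E_bind = 0.6 * G * M**2 / R    # uniform-density binding energy, erg
E_per_flavor = E_bind / 6.0    # energy per neutrino flavor, erg
print(f"{E_bind:.1e} erg total, {E_per_flavor:.1e} erg per flavor")
```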
Core collapses are extremely efficient neutrino generators, and the neutrinos carry about $10^4$ times more energy than the eventual optical supernova. Smaller, less efficient explosions cannot produce enough neutrinos to be reliably detected. The neutrino emission from a Type Ia supernova arises just from accelerated nuclear burning, is much more modest, and does not include the most detectable flavor, $\bar{\nu}_e$. Supernova impostors, if they really are just non-destructive outbursts of massive stars, should produce essentially no neutrinos. For collapses that lead to black hole production and little or no electromagnetic emission, the time-integrated neutrino signals are similar, essentially because the thermal energy of the hot proto-neutron star must be lost before the final collapse \citep{nakazato07,oconnor11}. Black-hole forming events will show a distinctive truncation of the neutrino signal in time \citep{beacom01,nakazato12}, which would be very relevant for the subsequent electromagnetic searches.
\subsection{Detecting Supernova Neutrinos}
\label{sec:neutrino_detection}
The most important supernova neutrino detection reaction is inverse beta decay, $\bar{\nu}_e + p \rightarrow e^+ + n$, where the proton is a hydrogen nucleus \citep[see][]{scholberg12}. The total energy of the outgoing positron is $E_e \simeq E_\nu - 1.3 {\rm\ MeV}$, and its direction is nearly isotropic because of the small recoil energy of the nucleon. There are also interactions with electrons and with nucleons bound in nuclei, but they have smaller cross sections and less favorable kinematics for the detectable particles in the final states.
The Super--Kamiokande detector \citep{fukuda03} in Japan has a nominal fiducial volume of 22.5 kton of ultra-pure water, and for a core collapse at the Milky Way center, Super--Kamiokande expects to detect $N_{\nu p} \sim 10^4$ inverse beta events over several seconds, with a negligible number of background events. The reconstructed directions of these neutrino events will be nearly isotropic \citep{vogel99}. Super--Kamiokande will have excellent measurements of the $\bar{\nu}_e$ energy spectrum and luminosity profile, as well as information on other neutrino flavors using other detection reactions. From the number of events, the distance to the supernova could in principle be estimated with percent-level precision; however, uncertainties in the emission models will likely restrict this to a few tens of percent.
The IceCube detector is vastly larger than Super--Kamiokande but has a very high detector background rate, which means that individual neutrino interactions cannot be separated from non-neutrino events. Nevertheless, for a core collapse in the Milky Way, the number of neutrino interactions would be so large in such a short time that IceCube would see a highly significant increase in the apparent ``background" rate, yielding an unambiguous detection \citep{abbasi11}. IceCube will have excellent data on the luminosity profile but no information on the energies or flavors of individual events. There are various other smaller detectors that will also provide important confirmations of a neutrino burst and some additional information about the other neutrino flavors \citep{scholberg12}. The range of existing detectors is just the Milky Way and its immediate companions, from which a burst could be detected easily, and not any nearby galaxies. For a core collapse in Andromeda, Super--Kamiokande would detect $\sim 1$ event and other detectors would detect nothing; much larger detectors will be needed to probe the much more frequent extragalactic events \citep[][but see also Appendix \ref{app:Two}]{ando05,kistler11}.
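The distance reach quoted here follows from simple inverse-square scaling of the detected event counts (the 8.5 kpc and 780 kpc distances below are round assumed values for the Galactic center and Andromeda, not figures taken from the text):

```python
# Inverse-square scaling of neutrino event counts with source distance:
# ~1e4 inverse-beta events in Super-Kamiokande for a collapse at the Galactic
# center falls to of order one event for a collapse in Andromeda.
def scaled_events(n_ref, d_ref_kpc, d_kpc):
    """Scale a reference event count n_ref at d_ref_kpc to distance d_kpc."""
    return n_ref * (d_ref_kpc / d_kpc) ** 2

n_andromeda = scaled_events(1.0e4, 8.5, 780.0)
print(round(n_andromeda, 1))  # ~1 event
```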
Another important detection reaction in Super--Kamiokande is neutrino-electron scattering, for which the cross section is smaller. Summed over all flavors, hundreds of events are expected. Because the electron mass is small compared to the neutrino energies, the electrons are scattered forward, within $\sim 10^\circ$ of the neutrino direction. Taking into account various systematics, Super--Kamiokande should be able to constrain the direction to a Galactic ccSN to within a few degrees \citep{beacom99}. The prospects for directionality from timing triangulation using multiple detectors are poor, due to the long timescales and low statistics relative to the Earth-crossing time \citep{beacom99}.
\subsection{Neutrino Alert of Core Collapse}
As the neutrinos are generated and arrive at Earth before any electromagnetic signals, there will be a brief period, hours at most, during which the detected neutrinos are the only indication, other than gravitational waves, of an ongoing core collapse event. This places a high burden of responsibility on the neutrino experiments to announce as much information as possible, as soon as possible.
On the other hand, Milky Way core collapses are so rare that false signals, perhaps related to detector electronics, may occur during the decades-long waits. This gives a strong motivation to the experimentalists to be very careful to avoid announcing possibly false signals. The human intervention required to have adequate confidence may take hours.
There lies peril on both sides: the consequences of failing to react swiftly to a true signal or of raising a false alarm could be quite detrimental to the reputation of the neutrino experiments. Of these two unpleasant scenarios, however, avoiding even the possibility
of issuing a false alarm has traditionally been of greater concern to experimentalists. None of the present detectors, with the possible exception of Baksan \citep{alekseev87}, has ever seen a core-collapse neutrino burst. Couple this with the difficulty inherent in devising a calibration method with which to properly mimic uniformly volume-distributed, SN-like light in the detectors, and the problem becomes evident: how can an experimental collaboration rapidly muster sufficient confidence that what they are seeing is, in fact, a supernova, and get the word out to astronomers in time to catch the SBO?
\subsubsection{SNEWS}
In an early attempt to get around the problem of false alarms and to get word out quickly, a few of the world's supernova neutrino detectors have banded together via the SuperNova Early Warning System, SNEWS \citep{antonioli04}. The essential idea is simple: if a participating detector believes it is seeing a supernova neutrino burst, it sends the time of the start of the burst (and any other data it wishes to release) to a central server. If several geographically separated detectors report a burst within a certain period then an automated alert is sent to a pre-determined, open subscription email list. SNEWS has been operating in one configuration or another since 1998 with between two and four detectors in the network. It currently contains Super--Kamiokande, IceCube, Borexino, and LVD\footnote{\url{http://snews.bnl.gov/}}.
To avoid false alarms and the potential embarrassment of the experimental collaborations, the participants are bound by agreement to release no information whatsoever unless the SNEWS coincidence threshold, determined by the number of participating detectors, is reached. In addition, the false alarm rate of individual participating detectors is required to be low enough --- roughly once per month --- that random false coincidences should occur less than once per century. Detectors which for any reason temporarily exceed this agreed-upon rate are excluded during that period from participating in coincidences.
Unfortunately, the statistics are such that based upon arrival times and size of the burst alone --- which is all the participating experiments have agreed to release prior to human review --- the direction of the burst cannot be located \citep{beacom99}, relegating SNEWS to serve as a wakeup call for those who have signed up. Furthermore, given the limited number of participating collaborations, their overriding desire to avoid false positives, and the resulting strict coincidence conditions, there is inevitably some risk of a false SNEWS negative. Ironically, the magnitude of this important risk cannot be known due to the nondisclosure agreements regarding individual detector performance (i.e., uptime fraction and false alarm rate) that made SNEWS possible in the first place.
\subsubsection{SN Procedure in Super--Kamiokande}
While it is critical to get the directional information out as soon as possible, only Super--Kamiokande will have quality pointing data --- a few-degree error circle --- for a burst anywhere in the Milky Way. Therefore we must consider that crucial experiment's (previously unpublished) approach to supernova data review and release. Within the Super--Kamiokande collaboration exists a standing committee called SURGE, the Supernova Urgent Response Group of Experts. If Super--Kamiokande's near-realtime data analysis processes identify a sudden burst of supernova neutrino-like events in the detector, within approximately two minutes an alert containing the time of the burst is sent to SNEWS.
These processes also initiate a specific, pre-arranged, and rehearsed procedure. Automated phone calls are placed to the SURGE members as well as the experiment's Spokesman and other executive committee members, about twenty people in all. Burst data in various forms are also sent to their mobile phones for review. A video conference is convened within 15 minutes of the burst's detection, during which the operating condition of the detector is verified, key plots are discussed, and characteristic event displays are shown.
If it is agreed that a real supernova has been detected, the rest of the Collaboration is notified via email and pre-worded announcements are sent to the IAU and ATel. These telegrams contain the universal time of the start of the burst, its duration, how many neutrinos above 7 MeV were observed during that time, the right ascension and declination of the radiant, and the errors on the fitted direction. This is all designed to take place in an hour or less. Drills are held to work out sticking points in the procedure and speed up the entire process as much as possible. Nevertheless, it is clear that the experiment's very careful, very responsible, hands-on approach to building the locally required level of confidence and consensus means that Super--Kamiokande cannot issue its supernova alert as quickly as would be ideal.
\subsubsection{EGADS and Instant Alerts}
How can we build the confidence needed to get the announcement response time of even a single neutrino detector below the one-minute mark, while simultaneously minimizing the risk of both false positives and false negatives?
Presently, the Super--Kamiokande detector can only detect the positrons in the inverse beta decay reaction, $\bar{\nu}_{e} + p \rightarrow e^{+} + n$. If we could detect the neutrons in coincidence, then we could greatly increase the certainty that a supernova was occurring, as there are very few physics processes that could mimic this coincident signal, and none that could fake a burst of many such events.
As was first pointed out some years ago \citep{Beacom04}, adding a 0.2\% solution of a water-soluble gadolinium compound like gadolinium chloride or gadolinium sulfate to light water Cherenkov detectors would allow such coincident detection, within displacements of centimeters in space and tens of microseconds in time. Gadolinium has a thermal neutron capture cross section of 49,000 barns (about 5 orders of magnitude larger than that of protons) and emits a gamma cascade of 8 MeV that can be easily detected by detectors like Super--Kamiokande. This assertion has since been verified in the Super--Kamiokande detector itself via the use of a gadolinium-loaded calibration source \citep{watanabe09}.
In an effort to prove that a gadolinium-enriched Super--Kamiokande will be effective,
starting in 2009 a large-scale test facility called EGADS, Evaluating Gadolinium's Action on Detector Systems, was built in the Kamioka mine \citep{vagins11}\footnote{\url{http://www.ipmu.jp/webfm_send/555}}$^{,}$\footnote{\url{http://hanse2011.desy.de/sites/site_hanse2011/content/e119287/e119757/Vagins_HANSE11.pdf}}.
EGADS's centerpiece is a 200-ton water tank, essentially a $\sim 1\%$ scale model of Super--Kamiokande, complete with a novel, selective water filtration system \citep{vagins12} and 240 50-cm photomultiplier tubes.
The gadolinium studies have gone well, and this facility will soon be repurposed for an exciting new (and previously unpublished) role.
As part of a new multimessenger supernova astronomy initiative in Japan\footnote{\url{http://www.gw.hep.osaka-cu.ac.jp/gwastro/A03/overview_e.html}}, in 2014 EGADS will be converted --- primarily via upgraded DAQ electronics, addition of computing sufficient for 100\% real-time event reconstruction, and improved calibration --- from an R\&D testbed to a dedicated supernova neutrino detector. As part of this process, the EGADS acronym will be redefined to mean ``Employing Gadolinium to Autonomously Detect Supernovas". For a core-collapse event near the Milky Way center, $\sim 100$ events will be detected.
This modestly sized detector will become an especially important supernova neutrino detector, due to a unique and vital capability: bursts will be detected and announced to the world within {\em one second} of the first neutrino's arrival in the tank. The confidence required to do this is provided by the gadolinium loading. If the ``heartbeat'' signature of several coincident inverse beta decay events is seen, the double flash of positron Cherenkov light quickly followed by neutron capture gammas from the same spot in the detector, then a Milky Way burst is most assuredly under way and can be announced immediately without human intervention.
These data will have no directionality unless there is an especially close event (EGADS would see $\sim$100,000 events from Betelgeuse). While officially a standalone, independently-funded project distinct from Super--Kamiokande, all the members of the considerably smaller EGADS Collaboration are in fact members of both collaborations. What is more, the PI of the R\&D-phase EGADS (Masayuki Nakahata) and the PI of the supernova detector-phase EGADS (MRV, one of this paper's authors) are both members of SURGE and are the co-conveners of Super--Kamiokande's solar and supernova neutrino analysis group. It is therefore hoped that an immediate, positive supernova detection in EGADS will supply the neighboring Super--Kamiokande, even if it has not yet been enriched with gadolinium itself, with sufficient confidence to react much more quickly in releasing its critical directional information.
With EGADS, or a similar solution to the problem of timeliness, and a modest investment as outlined in Appendix \ref{app:One}, there is no fundamental barrier to observing even the short ephemeral SBO signature of a Galactic ccSN.
\section{Summary and Conclusions}
The scientific community is eagerly awaiting the next Galactic SN. A Galactic ccSN will allow the application of an array of probes that are not possible for the many extragalactic SNe that are observed. Using modern dust models we provide a detailed assessment of the observability of the next Galactic SN, including, for the first time, near-IR estimates, the effect of confusion, and the observability of ccSN progenitors, precursors, shock breakouts, and failed SNe.
We find that a Galactic ccSN (assuming a successful explosion) will be observable in the near-IR ($P(m_{K}<5)\simeq 100\%$) and very likely ($P(m_{V}<20)\sim96\%$) will be observable in V-band. A Galactic ccSN will produce an unmistakable neutrino signal to easily trigger electromagnetic searches. For a Milky Way SN, the Super--Kamiokande detector will localize the SN position to within a few degrees. Given the ccSN magnitude probability distribution we find, along with the expected neutrino pointing uncertainty, it will be possible for wide-field near-IR and optical instruments to identify the SN.
While $\sim4/5$ of Galactic SNe are of the core-collapse variety \citep{li11b}, we note that a Galactic SN Ia is likely to appear brighter than a ccSN because SNe Ia are intrinsically brighter than ccSNe and their spatial distribution has a larger scale height, which results in less average line-of-sight extinction. Although a Galactic SN Ia would not produce a (currently) detectable neutrino signal to trigger a search, we find that 92\% will have $m_{V,max}<13.5$, which is within the limits of current all-sky surveys, and if the Large Synoptic Survey Telescope (LSST) monitors the Galactic plane, over 99\% of SNe Ia would be detected.
We use our modeled observability of Galactic SNe, together with the record of historical SNe, to estimate the rate of Galactic SNe, the ratio of Galactic core-collapse to Type Ia SNe, and the Galactic SFR. We infer a Galactic ccSN rate of $3.2^{+7.3}_{-2.6}$ per century and a Galactic SN Ia rate of $1.4^{+1.4}_{-0.8}$ per century for a total Galactic SN rate of $4.6^{+7.4}_{-2.7}$ per century and constrain the fraction of Galactic SNe that are ccSNe to be $0.69^{+0.22}_{-0.46}$. We in turn use this Galactic ccSN rate to infer a Galactic SFR of $3.6^{+8.3}_{-3.0}$ M$_{\odot}$yr$^{-1}$.
As discussed above, combining the upper limit of $\lesssim 8$ core-collapses/century from the non-detection of SN neutrinos over $\sim30$ years of measurements \citep{alexeyev02,ikeda07} with our inferred luminous ccSN rate places weak limits on both the rate of core-collapse events and the fraction of such events that fail to produce normal, luminous ccSNe (see Fig. \ref{fig-failed_vs_Rcc}), a limit consistent with the upper limit of $\sim 50\%$ found by \cite{horiuchi11} and the nominal estimate of $\sim 10\%$ given by \cite{lien10}.
We show that a Galactic ccSN could provide a unique opportunity to obtain detailed observations of the SN shock breakout. We present expected absolute and apparent magnitude probability distributions for the SBO. We also consider the possibility of detecting the weak SBO of a failed SN occurring when a red supergiant's envelope becomes unbound after its core undergoes a direct collapse to a black hole. While failed SNe from other progenitors (such as blue supergiants or Wolf-Rayet stars) would not produce this sort of signal, we show that those following the red supergiant models of \cite{lovegrove13} and \cite{piro13} would be even easier to detect in the near-IR than the SBOs of normal ccSNe. We note that given the scale of investment in neutrino detection, a dedicated day and night IR SBO detection system could be built and operated at moderate cost (see Appendix \ref{app:One}) and that the existence of such a system could further reduce the potential embarrassment of false detections---it would simply represent another layer of the overall trigger system.
Since the SBOs of normal SNe occur on timescales ranging from minutes to a couple of days after the neutrino burst, it is of paramount importance that neutrino detection experiments provide directional information in near real-time in order for an electromagnetic detection of the SBO to be possible. We describe the procedure Super--Kamiokande will follow in the event of a SN detection before releasing this information. We further describe EGADS, a system that will provide instant Galactic SN alerts. We also present an outline for a system capable of detecting extragalactic SN SBOs (see Appendix \ref{app:Two}) that could better cue searches for neutrino bursts that could not be detected directly.
We also model the observability of likely ccSN progenitors. We find that $\sim 92\%$ already have near-IR imaging (with 2MASS), but only $\sim 57\%$ have V-band imaging (in the USNO-B1.0 catalog). A lack of optical imaging of the progenitor would limit our ability to characterize its temperature and luminosity. However, the enormous extinction towards the Galactic center will make it difficult to substantially increase the fraction of likely progenitors imaged in V-band. We also consider the potential for observing precursor outbursts, but find that current all-sky surveys are not likely to observe such events. We note that LSST (if it images the Galactic plane) or a shallow near-IR all-sky survey would be capable of monitoring a majority of likely progenitors for precursor variability. Unfortunately, LSST presently plans to largely ignore the Galactic plane, which, as recently discussed by \cite{gould13}, may be a suboptimal strategy.
This paper shows that the astronomical community could make important observations of the stages leading up to and including the traditional SN light curve of the next Galactic SN.
However, there are steps that must be taken by astronomers and neutrino experimentalists working together to insure that the next rare opportunity to observe a Galactic SN is not squandered.
\begin{acknowledgements}
We thank Todd Thompson, Rick Pogge, Bruce Atwood, Jos\'{e} Prieto, and Ben Shappee for valuable discussions and information. We thank Elizabeth Lovegrove, Stan Woosley, and Tony Piro for advanced looks at their relevant works on modeling the explosions and shock breakouts of failed SNe. We thank Georg Raffelt and the anonymous referee for helpful comments. J.F.B. is supported by NSF grant PHY-1101216. M.R.V. is supported by the World Premier International Research Center
Initiative of Japan's Ministry of Education, Culture, Sports, Science, and
Technology (MEXT), and also by a MEXT Grant-in-Aid for Scientific Research
on Innovative Areas (24103004).
\end{acknowledgements}
\begin{appendix}
\section{Instrumentation for Shock Breakout Detection in Daytime}
\label{app:One}
Here we outline a rough design for an instrument capable of detecting Galactic SN shock breakout bursts even in daytime. This particular design minimizes cost at the price of sensitivity. Sensitivity can be rapidly gained using a larger aperture at the cost of a smaller field of view or more detectors. While the shock breakout pulse peaks in the far-UV/X-ray, a search for a Galactic breakout pulse should be done in the near-IR because of Galactic extinction and the brightness of the daytime sky. As noted earlier, we are making assumptions about the thermalization of the SBO radiation, but based on \cite{sapir13} they are conservative.
We consider an 86~mm aperture, 300~mm focal length, short
wavelength IR (SWIR) lens with a 20.5~mm focal plane
corresponding to a 3.9~degree field of view (an Optec
OB-SWIR300/3.5 SWIR lens). We then use a FLIR Systems
InGaAs $640\times 512$ detector (a Tau SWIR BP detector)
with 25$\mu$m pixels. This provides a $3.0 \times 2.5$~degree
field of view with $17$~arcsec pixels that is well-matched
to the positional uncertainties expected from Super--Kamiokande (see \S\ref{sec:neutrino_detection}). The detector QE (80\%)
and optical through-puts (50\%) are not grossly dissimilar
from those of an astronomical infrared instrument, but the
read noise and well depths are much higher ($400$ and
$2.5 \times 10^6$~e$^-$, respectively), and the detector
can be read at a frame rate of 30~Hz. If we scale from
the Lucifer exposure time calculator\footnote{\url{http://www.lsw.uni-heidelberg.de/lucifer-cgi/calculator/calculator.py}} for the Large Binocular
Telescope \citep{ageorges10}, we expect 7500 and 18000 counts/second for J/H=6~mag.
However, with the large pixels, the sky count rates are
$2.2$ and $5.5\times 10^6$/second, respectively, so to avoid
saturation one would operate near the maximum frame rate
\footnote{Daytime IR sky brightnesses are approximately 6.6 and 5.6 mag per square arcsec in J and H respectively \citep{jim11}, while the daytime V-band sky brightness is $\sim 4$ mag per square arcsec \citep{rork82}.}.
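As a sanity check on the quoted optics, the pixel scale and field of view follow directly from the pixel pitch and focal length (a quick small-angle sketch; the function names are ours):

```python
import math

RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0  # ~206265 arcsec per radian

def pixel_scale_arcsec(pixel_m, focal_m):
    """Pixel scale from pixel pitch and focal length (small-angle approximation)."""
    return RAD_TO_ARCSEC * pixel_m / focal_m

def fov_deg(n_pix, pixel_m, focal_m):
    """Field of view along an axis with n_pix pixels, in degrees."""
    return n_pix * pixel_scale_arcsec(pixel_m, focal_m) / 3600.0
```

With the quoted 25~$\mu$m pixels and 300~mm focal length this gives $\approx$17.2~arcsec pixels and a $\approx 3.1 \times 2.4$~degree field, matching the quoted values to rounding.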
For these nominal parameters, the signal-to-noise ratio
is roughly $3 \times t^{1/2}10^{-0.4(m-6)}$ in J or $6 \times t^{1/2}10^{-0.4(m-6)}$ in H, for a
source of magnitude $m$ and an integration time of $t$ seconds.
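The quoted scaling can be wrapped in a small helper (a back-of-the-envelope sketch, not a detailed noise model; the interface is ours and the coefficients are just the nominal per-second values from the text):

```python
import math

def snr(mag, t_sec, band="J"):
    """Scale the quoted per-second S/N (~3 in J, ~6 in H at m = 6)
    as S/N ~ t^(1/2) * 10^(-0.4 * (m - 6))."""
    coeff = {"J": 3.0, "H": 6.0}[band]
    return coeff * math.sqrt(t_sec) * 10.0 ** (-0.4 * (mag - 6.0))
```

For example, `snr(6, 1, "J")` recovers the quoted per-second value of 3.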
The lens and detectors
are of moderate estimated price ($\sim\$$40,000), so it would be
entirely feasible to set up a system that would have a high probability of detecting the shock breakout pulse of the next Galactic supernova in either day or night time (unless the SN occurred at a very small Sun angle) by distributing several
such systems around the globe. Four such facilities to cover north/south, east/west, and weather would provide a reasonably high probability of success. The system would also supply a continuous near-IR variability survey of the Galaxy.
Note that such a system requires the directional information supplied by Super--Kamiokande to detect the SBO, although on longer time scales it could search the Galactic plane for a SN with no neutrino pointing information.
This particular design sketch is meant to minimize cost while maintaining a field of view comparable to the localization accuracies of neutrino and gravitational wave detectors (see \S\ref{sec:observing_sbo}). Sensitivity scales with aperture, $D$, as $D^{2}$, so increasing the aperture rapidly increases the sensitivity. The price is either a reduced field of view, requiring a scanning strategy until the SBO is identified, or a significantly more expensive detection module. In theory, existing wide-field IR instruments can make such surveys in daytime if narrow band filters can sufficiently reduce the count rates and there is a means of maintaining thermal stability, but the control, scheduling, and safety concerns for these facilities are likely problematic.
Moreover, a SBO detection system run by a neutrino detector group can simply be regarded as part of the overall system, allowing it to internally operate at a triggering rate viewed as unacceptably high for public announcements.
\section{Observing Extragalactic Shock Breakouts}
\label{app:Two}
Megaton-class neutrino detectors are capable of detecting
small numbers of neutrinos from ccSN out to roughly 10~Mpc \citep{ando05,kistler11}. The ccSN rate at these distances is dominated by roughly 40
galaxies, representing 90\% of the local rate of 1-2/year \citep{ando05}. The
key issue for flagging a small number of neutrino events is to narrow the
temporal search window so that there is a negligible probability
of background events in the window. Narrowing the temporal search window would similarly improve the sensitivity of gravitational wave detectors. \cite{cowen10} show that given a well-sampled early-time
light curve, particularly one which has some information on the
shock breakout signal, the time of the core collapse can be estimated to within hours. The problem, however, is that the short SBO
durations mean that normal surveys are unlikely to catch the breakout pulse except by chance. In this sense, the \cite{kistler12}
emphasis on the importance of finding shock breakouts has the problem
backwards---you want to use a high-sensitivity survey for breakout pulses to trigger searches in the low-sensitivity neutrino or
gravitational wave detectors rather than the reverse.
It is feasible to simply monitor all or some of these
galaxies for shock breakout events, but it requires a
fairly industrial approach to the problem. At 10~Mpc, and again
assuming the radiation thermalizes, the $\sim16$~mag optical SBO events
(Fig. \ref{fig:sbo_mag}) are easily accessible using off-the-shelf
equipment. For example, a 12.5in Planewave telescope
with an SBIG STXL-1102 camera has a roughly $0.5\times1.0$ degree
field of view that basically covers all the relevant
galaxies except M~31 and M~33 in a single exposure and
would have $S/N\sim 5$ at $R\sim 20$ in 60 seconds for
a price on the order \$35,000 per unit after including
a Paramount ME robotic mount and a control computer.
Based on the Winer Observatory, an additional \$10,000/year
would provide space in an existing dome and basic servicing
needs. Thus, for an overall system with $32$ units, the
hardware costs are roughly \$1,000,000 (assuming a modest
discount from list prices given the scale of the order)
with direct operating costs of order \$300,000/year.
The telescopes need to be sited relatively uniformly in
longitude, with a greater emphasis on the North. While
we did not attempt a detailed optimization, we experimented
with the achievable completeness defined by the fraction of
the local SN rate which could be monitored at a given cadence.
The rates for each galaxy were set following \cite{cappellaro99}
based on the galaxy type and absolute magnitude. We
located equal numbers of telescopes at 8 existing observatory sites
(Canaries, Cerro Tololo, Hawaii, Kitt Peak, Maidanak, Siding Spring,
Sutherland and Xinglong station) and assumed a random and
uncorrelated 30\% of days were cloudy. Telescopes were assigned
to the un-observed, visible (after $18^\circ$ twilight, airmass $<2$) galaxy
with the highest estimated SN rate. For a one minute
cadence, needed to try to catch breakout events from blue
supergiants (SN~87A, Type~IIb, Type~Ib), systems with 8,
16, 24, 32 and 40 telescopes achieved 11, 20, 26, 31 and 35\%
completeness. For a 5 minute cadence, which would miss most
breakouts from blue supergiants but monitor those from red
supergiants well, the fractions are 19, 33, 40, 45 and 48\%.
In practice, systems with larger numbers of telescopes have
significant ``idle time'' on a strict 5 minute cadence, so
there would be significant numbers of observations on shorter
time baselines. With 8 sites, our 30\% uncorrelated weather
losses have negligible effects on the coverage fraction.
It is clear, however, that an optimized design would use still
more sites since allowing observations all the way down to
the horizon raises the (5 minute) fractions to 31, 51, 61, 69
and 74\% respectively. Since there is no need for superb
image quality (darkness, cloud cover and bandwidth are
the more relevant criteria), we will assume that an optimized
system of 32 telescopes can achieve 60\% coverage for Type~II
(red supergiant) breakout shocks. While some breakout shocks
from blue or Wolf-Rayet stars will be detected, they would not
be well sampled. Since Type~II SNe then represent $\sim 60\%$
of the overall SN rate \citep{li11a}, the system would
detect shock breakouts from roughly $1/3$ of local SN. The
average rate from these galaxies is somewhat uncertain, but is
in the range of $1$-$2$~SN/year, so the system would detect
one breakout event every $\sim 2$-$3$ years. While the yield is far lower than the UV spacecraft proposed by \cite{sagiv13}, the costs are also much lower and the system will find the closest SN, for which there is the greatest likelihood of a neutrino detection.
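The greedy scheduling experiment described above can be caricatured with a toy Monte Carlo (our own simplification: no sky geometry, airmass limits, or visibility windows, just uncorrelated per-site weather and greedy assignment of clear telescopes to the highest-rate unobserved galaxies):

```python
import random

def completeness(n_telescopes, galaxy_rates, n_sites=8, p_clear=0.7, trials=2000):
    """Mean fraction of the total SN rate monitored per cadence step.

    Telescopes are split evenly over n_sites; each site is independently
    clear with probability p_clear; clear telescopes are assigned greedily
    to the distinct galaxies with the highest SN rates.  This is a toy
    version of the scheduling described in the text, not the actual
    optimization (visibility is ignored).
    """
    rates = sorted(galaxy_rates, reverse=True)
    total = sum(rates)
    per_site = n_telescopes // n_sites
    frac = 0.0
    for _ in range(trials):
        n_clear = sum(per_site for _ in range(n_sites) if random.random() < p_clear)
        frac += sum(rates[:n_clear]) / total
    return frac / trials
```

With perfect weather and equal galaxy rates this reduces to the obvious ratio of telescopes to galaxies; weather losses then scale the coverage down roughly linearly, as seen in the numbers above.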
\end{appendix}
\section{Introduction}
Neural text-to-speech has become very popular in recent years \cite{Ren2020FastSpeech2F,Shen2018NaturalTS,Mehri2017SampleRNNAU,Sotelo2017Char2WavES}, and it can already produce high-quality speech that is almost as natural as that of a real person. However, data collection is still a big challenge. To obtain high voice quality and consistent recordings, a large amount of data usually has to be collected in a high-fidelity recording studio under professional guidance. This is very costly, time-consuming, or even impossible, e.g., for custom speech and Lombard speech \cite{Bollepalli2019LombardSS}. Noisy and diverse data, on the other hand, is usually much easier to collect. Multi-speaker speech synthesis therefore collects diverse data from many speakers to train a robust multi-speaker generative model, which can be further adapted to different tasks such as speaker adaptation \cite{Hu2019NeuralTA}, cross-lingual text-to-speech synthesis \cite{Chen2019CrossLingualMT}, and style conversion \cite{Paul2020EnhancingSI}.
State-of-the-art systems have an encoder-decoder network structure conditioned on a speaker embedding \cite{Paul2020EnhancingSI, Choi2020AttentronFT, Chien2021InvestigatingOI, Chen2020MultiSpeechMT, Cooper2020ZeroShotMT, Nachmani2018FittingNS, Jia2018TransferLF}. Some works investigated effective speaker representations, e.g., \cite{Chien2021InvestigatingOI, Cooper2020ZeroShotMT} studied the effects of different speaker embeddings such as the d-vector \cite{Wan2018GeneralizedEL}, the x-vector \cite{Snyder2018XVectorsRD}, and LDE-based speaker encoding \cite{Cooper2020ZeroShotMT}. \cite{Choi2020AttentronFT} proposed an attention-based variable-length embedding, and \cite{Cai2020FromSV} measured the speaker similarity between the predicted mel-spectrogram and the reference. Other works focused on the problem of noisy data \cite{Hu2019NeuralTA, Paul2020EnhancingSI, Cong2020DataEV, Kons2019HighQL}, e.g., \cite{Paul2020EnhancingSI, Cong2020DataEV} researched transfer-learning methods for noisy samples, and \cite{Hsu2019DisentanglingCS} aimed to disentangle the speaker embedding and noise via data augmentation and a conditional generative model.
Still other works were interested in zero-shot controllability. \cite{Choi2020AttentronFT, Cooper2020ZeroShotMT} tried to obtain a target voice by feeding in the target speaker embedding without speaker adaptation, while \cite{Hsu2019HierarchicalGM, Zhang2019LearningLR} introduced latent variables to control the speaking style.
Previous studies rarely gave insight into the role played by information other than the text content (called control information in this paper, e.g., speaker embedding, pitch, and energy). The control information is usually represented by a fixed- or variable-length embedding that may not be as effective as expected, e.g., the pitch embedding is relevant to the harmonics of speech but is not an effective representation of the harmonic structure. Besides, the embedding of the control information is typically concatenated or added to the text-content representation, or simply used to perform an affine transformation on it \cite{Kumar2020FewSA}. In this way, the control information plays a role similar to the text-content information in the network. However, the text content is the most important characteristic of speech and determines intelligibility, while the control information affects other characteristics such as the voice color of speech.
In this paper, we investigate better use of the control information under an encoder-decoder architecture. The major contributions are: 1) an excitation spectrogram is designed to explicitly characterize the harmonic structure of speech and is fed to the decoder instead of pitch/energy embeddings; 2) a conditional gated LSTM (CGLSTM) is proposed whose input/output/forget gates are re-weighted by the speaker embedding while its cell/hidden states depend on the text content. That is to say, the speaker embedding controls the flow of text-content information through the gates without directly affecting the cell and hidden states.
The rest of this article is organized as follows: Section 2 describes the proposed multi-speaker generative model, covering the overall framework (Section 2.1), the excitation spectrogram generator (Section 2.2), and the CGLSTM decoder (Section 2.3). Section 3 gives the detailed settings and results of the experiments. Finally, conclusions are drawn in Section 4.
\section{Multi-speaker Generative Model}
\subsection{Framework}
The framework of the proposed system is illustrated in Figure~\ref{fig:framework}. It is a state-of-the-art Tacotron-like structure with a jointly trained speaker encoder. Due to the insufficient performance of attention mechanisms on diverse data, phoneme durations are predicted for the alignment between the phoneme sequence and the mel-spectrogram sequence through a length-regular module \cite{Ren2020FastSpeech2F}. In addition, energy and pitch are predicted to generate the excitation spectrogram, which is finally fed to the CGLSTM decoder.
\begin{figure*}[t]
\centering
\includegraphics[scale=0.85]{framework.png}
\caption{Overall framework of the proposed multi-speaker generative model.}
\label{fig:framework}
\end{figure*}
The text encoder is the standard Tacotron2 encoder \cite{Shen2018NaturalTS}, a stack of Conv1d layers followed by a BLSTM. It takes phoneme sequences with tones and prosody notations as inputs, and outputs the text-content embedding which, along with the speaker embedding, is used to predict phoneme durations first and then lf0/energy/mel-spectrogram after length regulation.
The speaker encoder has a GST-encoder-like structure \cite{Wang2018StyleTU}, with a stacked Conv2d+BatchNorm reference encoder and a multi-head attention \cite{Vaswani2017AttentionIA}. It takes the mel-spectrogram of a reference as input, and outputs the speaker embedding, which is used to classify speakers on the one hand and as the control information of the system on the other. Instead of introducing a Gradient Reversal Layer (GRL) \cite{Ganin2015UnsupervisedDA} to remove text-content information from the speaker embedding, the reference is randomly chosen from the same speaker as the target mel-spectrogram \cite{Jia2018TransferLF}.
The duration predictor is simply a layer of BLSTM with pre- and post-dense layers. It replaces attention-based alignment between the phoneme sequence and the mel-spectrogram sequence. In the length-regular module, the phoneme sequence is repeated to the length of the mel-spectrogram sequence according to the durations, and the frame position within each phoneme is concatenated.
Pitch and energy are predicted separately with the same network structure, a stacked Conv1d with a post-dense layer. The excitation spectrogram is then generated from the pitch and energy (see Section 2.2), which aims to address the one-to-many mapping problem by providing information about the harmonic structure.
Finally, the decoder is an auto-regressive structure, shown in Figure~\ref{fig:decoder}, containing the proposed CGLSTM (see Section 2.3).
\begin{figure}[ht]
\centering
\includegraphics[scale=0.85]{decoder.png}
\caption{Network structure of CGLSTM decoder.}
\label{fig:decoder}
\end{figure}
In the flow of information, an affine transformation is carried out on the text content, defined as in Equation~\ref{eq:affine}:
\begin{equation}
affine(x_{i}, x_{c})=x_{i}\odot proj^{1}(x_{c})+proj^{2}(x_{c})
\label{eq:affine}
\end{equation}
\begin{equation}
proj(x)=w\circledast x+b
\label{eq:proj}
\end{equation}
where $\odot$ means element-wise multiplication, and $\circledast$ means matrix multiplication.
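Equations~\ref{eq:affine}--\ref{eq:proj} amount to a learned feature-wise scale and shift of the content features by the control features; a minimal sketch (the weights here are placeholders for the trained dense layers):

```python
import numpy as np

def affine_condition(x_i, x_c, W1, b1, W2, b2):
    """affine(x_i, x_c) = x_i * proj1(x_c) + proj2(x_c),
    where proj(x) = W @ x + b.  W1/b1 give the scale and
    W2/b2 the shift; in the model these are trained layers."""
    return x_i * (W1 @ x_c + b1) + (W2 @ x_c + b2)
```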
\subsection{Excitation Spectrogram Generator}
In source-filter analysis \cite{Morise2016WORLDAV}, speech is produced when an excitation signal passes through a system composed of the chest, glottis, oral cavity, etc., in which resonance occurs. The resonance phenomenon is reflected in the voice as the harmonic structure, a very important characteristic of speech. Unfortunately, existing studies pay more attention to the use of pitch than to the harmonic structure. Pitch reflects the periodic characteristics of the excitation signal, but it does not reflect the resonance phenomenon. We therefore propose an excitation spectrogram generator that acts as a simple resonator: it takes pitch/energy as inputs and generates an excitation spectrogram with harmonics at vowels and a uniform spectrum at consonants. It provides a starting point with an explicit harmonic structure for the prediction of the target mel-spectrogram.
The harmonics are defined as the multiples of the fundamental frequency as Equation~\ref{eq:har}:
\begin{equation}
har_{i} = i \ast f_{0} \:\:\:\:\: i\in [1, N_{h}]
\label{eq:har}
\end{equation}
where $har_{i}$ means the $i^{th}$ harmonic position of speech, $N_{h}$ is the number of harmonics, and $f_{0}$ is the fundamental frequency.
The excitation spectrogram is then supposed to have energy only at harmonic positions during vowels and at all positions during consonants. The energy is distributed equally over these positions as in Equation~\ref{eq:els}:
\begin{equation}
els_{i, j} = \left\{\begin{matrix}
e_{i}/N_{fft} & if \; i\in consonant\; \; \; \; \; \;\; \; \; \; \;\; \; \; \; \; \; \; \; \\
e_{i}/N_{h}\; \; \; & if \; i\in vowel,\:j\in harmonics\\
0\; \; \; \; \; \;\; \; \; & if \; i\in vowel,\:j\notin harmonics
\end{matrix}\right.
\label{eq:els}
\end{equation}
where $els_{i, j}$ is the $j^{th}$ linear-spectrogram bin at the $i^{th}$ frame, $e_{i}$ is the total energy of the $i^{th}$ frame, and $N_{fft}$ is the FFT size used in the calculation of the linear spectrum.
Finally, the linear excitation spectrogram $els$ is converted to mel excitation spectrogram $esp$ by Equation~\ref{eq:esp}
\begin{equation}
esp = els \circledast W^{l2m}_{N_{fft} \times N_{mel}}
\label{eq:esp}
\end{equation}
where $W^{l2m}_{N_{fft} \times N_{mel}}$ is the transformation matrix from linear to mel spectrogram and $N_{mel}$ is the dimension of mel-spectrogram.
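A possible implementation of the generator in Equations~\ref{eq:har}--\ref{eq:els} is sketched below (the parameter names and the frequency-to-bin mapping are our assumptions; the linear-to-mel projection of Equation~\ref{eq:esp} would be applied afterwards):

```python
import numpy as np

def excitation_spectrogram(f0, energy, voiced, sr=16000, n_fft=1024, n_h=20):
    """Per-frame linear excitation spectrogram.

    For voiced (vowel) frames the frame energy is split equally over the
    first n_h harmonic bins of f0; for unvoiced (consonant) frames it is
    spread uniformly as energy/N_fft, following the els equation above.
    """
    n_bins = n_fft // 2 + 1
    els = np.zeros((len(f0), n_bins))
    for i in range(len(f0)):
        if voiced[i] and f0[i] > 0:
            # harmonic positions i*f0 mapped to FFT bins
            bins = np.round(np.arange(1, n_h + 1) * f0[i] * n_fft / sr).astype(int)
            bins = bins[bins < n_bins]
            els[i, bins] = energy[i] / n_h
        else:
            els[i, :] = energy[i] / n_fft
    return els
```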
\subsection{Conditional Gated LSTM}
Text content is the most important feature of speech due to its decisive role in intelligibility. In addition, speech can be characterized in terms of timbre, style, speaker, emotion, etc. Much research aims to change or control some of these characteristics without negatively influencing intelligibility. For this purpose, the control information is usually added or concatenated to the text content before being fed into the network. However, in this way the control information plays a role similar to the text content: both directly affect intelligibility and the other characteristics in the same way at the same time, which is not what we want. Consequently, we propose the conditional gated LSTM (CGLSTM), in which the control information re-weights the gates while the text content flows through the hidden/cell states. The control information thereby directly takes part in the gate-based flow of the text content without operating on the text content itself.
Compared with the Long Short-Term Memory (LSTM), which is frequently used in speech synthesis tasks due to its good capacity for learning long-term dependencies, the proposed CGLSTM calculates the hidden/cell states in the same manner from the current inputs and the previous hidden/cell states. For the input/output/forget gates, however, the control information is used to re-weight the gates as in Equations~\ref{eq:fgate}--\ref{eq:ogate}.
\begin{equation}
f_{t} = \sigma(( W_{xf} \circledast [h_{t-1}, x_{t}] + b_{xf} ) \odot ( W_{cf} \circledast c_{t} + b_{cf} ))
\label{eq:fgate}
\end{equation}
\begin{equation}
i_{t} = \sigma(( W_{xi} \circledast [h_{t-1}, x_{t}] + b_{xi} ) \odot ( W_{ci} \circledast c_{t} + b_{ci} ))
\label{eq:igate}
\end{equation}
\begin{equation}
o_{t} = \sigma(( W_{xo} \circledast [h_{t-1}, x_{t}] + b_{xo} ) \odot ( W_{co} \circledast c_{t} + b_{co} ))
\label{eq:ogate}
\end{equation}
where $f_{t}$, $i_{t}$ and $o_{t}$ are the forget, input and output gates; $x_{t}$, $c_{t}$, and $h_{t-1}$ are the current text content inputs, current control information inputs, and previous hidden state; $W$ and $b$ are the corresponding weights and biases.
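A minimal cell implementing the gate equations above might look as follows (the candidate-update form and the weight initialization are our assumptions where the paper does not specify them):

```python
import numpy as np

def _sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class CGLSTMCell:
    """Sketch of a CGLSTM cell: each f/i/o gate is the sigmoid of the
    product of a content term (from [h_{t-1}, x_t]) and a condition term
    (from the control input); the cell/hidden updates are standard LSTM."""

    def __init__(self, n_in, n_cond, n_hid, seed=0):
        rng = np.random.default_rng(seed)
        s = 0.1
        # "g" is the candidate cell update, which sees only the content.
        self.Wx = {g: rng.normal(0.0, s, (n_hid, n_in + n_hid)) for g in "fiog"}
        self.bx = {g: np.zeros(n_hid) for g in "fiog"}
        self.Wc = {g: rng.normal(0.0, s, (n_hid, n_cond)) for g in "fio"}
        self.bc = {g: np.zeros(n_hid) for g in "fio"}

    def step(self, x, cond, h, c):
        z = np.concatenate([h, x])
        gates = {g: _sigmoid((self.Wx[g] @ z + self.bx[g])
                             * (self.Wc[g] @ cond + self.bc[g]))
                 for g in "fio"}
        g_cand = np.tanh(self.Wx["g"] @ z + self.bx["g"])
        c_new = gates["f"] * c + gates["i"] * g_cand
        h_new = gates["o"] * np.tanh(c_new)
        return h_new, c_new
```

Note that the control input enters only through the gate products, so it modulates how much content flows, without being mixed into the cell state itself.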
\section{Experiments}
\subsection{Corpus}
The data set for our experiments is the public multi-speaker Mandarin corpus AISHELL-3 \cite{Shi2020AISHELL3AM}, which contains roughly 85 hours of recordings spoken by 218 native Chinese Mandarin speakers. Among them, recordings from 173 speakers (63263 utterances in total) have Chinese character-level and pinyin-level transcripts. This transcribed part of the data is used in our experiments and is divided into non-overlapping training and test sets.
\begin{itemize}
\item \textbf{Training set}: contains 57304 utterances from 165 speakers, with 46915 utterances from 133 female speakers and 10389 utterances from 32 male speakers. The training set is used to pre-train the multi-speaker generative model, which is further adapted using the test set.
\item \textbf{Test set}: contains 4 female and 4 male speakers, and only 20 utterances of each speaker are randomly chosen for speaker adaptation.
\end{itemize}
The recordings are mono, 16-bit, and down-sampled from 44100~Hz to 16000~Hz. Preprocessing is conducted on both the training and test sets to reduce their diversity: 1) energy normalization by scaling the maximal amplitude of each utterance, and 2) silence trimming, keeping 60~ms of silence at the head and tail of each utterance.
\subsection{Setup}
The pipeline of our experiments includes 1) pre-training: train the multi-speaker generative model on the training set; 2) speaker adaptation: train the target model by transfer learning using single-speaker data from the test set; and 3) inference: predict the mel-spectrogram and synthesize the waveform with a vocoder. Here the modified neural vocoder LPCNet \cite{Valin2019LPCNETIN} is used, which takes the mel-spectrogram as input.
In our experiments, the frame hop size is set to 12.5~ms, the window size to 50~ms, and the number of mel bands to 80 for the mel-spectrogram. The Mean Absolute Error (MAE) is used to measure the reconstruction error of lf0 and energy, while the Mean Squared Error (MSE) is applied to the mel-spectrogram. Besides, the speaker classification task uses cross-entropy as its loss function. The setup of our experiments is described as follows:
\begin{itemize}
\item \textbf{Baseline}: compared with the framework in Figure~\ref{fig:framework}, the following modifications are made: 1) the excitation spectrogram generator is removed, and 2) the CGLSTM in the decoder is replaced with a standard LSTM, while the speaker embedding is used to transform the text content through an affine layer before being fed to the decoder.
\item \textbf{System-1}: Baseline + excitation spectrogram generator
\item \textbf{System-2}: Baseline + CGLSTM decoder
\item \textbf{System-3}: Baseline + excitation spectrogram generator + CGLSTM decoder.
\end{itemize}
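The multi-task objective described above (MSE on the mel-spectrogram, MAE on lf0 and energy, cross-entropy for speaker classification) can be sketched as follows; equal loss weighting is our assumption, since the paper does not give the weights:

```python
import numpy as np

def total_loss(mel_p, mel_t, lf0_p, lf0_t, en_p, en_t, spk_logits, spk_id):
    """Sketch of the training objective: MSE(mel) + MAE(lf0) + MAE(energy)
    + cross-entropy(speaker).  Equal weighting is assumed."""
    mse = np.mean((mel_p - mel_t) ** 2)
    mae = np.mean(np.abs(lf0_p - lf0_t)) + np.mean(np.abs(en_p - en_t))
    logp = spk_logits - np.log(np.sum(np.exp(spk_logits)))  # log-softmax
    ce = -logp[spk_id]
    return mse + mae + ce
```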
\subsection{Multi-speaker Generative Model}
Figure~\ref{fig:mel_loss} shows the reconstruction error of the mel-spectrogram for the different systems in the pre-training stage. Compared with the baseline, the excitation spectrogram generator (System-1) and the CGLSTM decoder (System-2) each brought an obvious reduction in reconstruction error, and the error is reduced further in System-3. This shows that the excitation spectrogram and the CGLSTM, used together or separately, can greatly improve the modeling capability for multi-speaker data.
\begin{figure}[th]
\centering
\includegraphics[width=\linewidth]{mel_loss.png}
\caption{Reconstruction Error of mel-spectrogram of different systems}
\label{fig:mel_loss}
\end{figure}
We also compared the number of parameters of each system, as shown in Table~\ref{tab:para}. In general, there is no big difference among them; compared with the baseline, the parameter count of System-3 even drops by about 10\%. In other words, we can achieve better performance with less computation.
\begin{table}[th]
\caption{The amount of parameters of different systems (millions)}
\label{tab:para}
\centering
\begin{tabular}{@{}llll@{}}
\toprule
Baseline & System-1 & System-2 & System-3 \\ \midrule
11.24 & 9.5 & 11.37 & 10.08 \\ \bottomrule
\end{tabular}
\end{table}
\subsection{Speaker Adapted Model}
For the unseen speakers in the test set, we adapted the multi-speaker model using data from each target speaker. A Mean Opinion Score (MOS) test was carried out to evaluate the intelligibility of speech, voice quality, and speaker similarity; 20 native Chinese testers participated in it. The MOS results are shown in Table~\ref{tab:ft_qs}.
\begin{table}[th]
\caption{MOS of intelligibility (Intellig.), voice quality (Quality), and speaker similarity (Similarity) for unseen speakers after speaker adaptation. In the evaluations, scores range from 1 to 5. 1) For intelligibility, score=1 indicates that the voice is hard to understand, with ambiguous or bad pronunciation, while score=5 means the voice is pronounced clearly and correctly and is easy to understand. 2) For voice quality, score=1 means the voice has strong and annoying noise, while score=5 means the voice is clean and pleasant. 3) For speaker similarity, score=1 means the two compared voices do not sound as if they are from the same person at all, while score=5 means it is easy to judge that they are from the same person. (N.B. GT means ground truth.)}
\label{tab:ft_qs}
\centering
\begin{tabular}{@{}llllll@{}}
\toprule
& \multicolumn{5}{c}{\textbf{MOS}} \\ \cmidrule(l){2-6}
& \multicolumn{1}{c|}{\multirow{2}{*}{Intellig.}} & \multicolumn{2}{c|}{Quality} & \multicolumn{2}{c}{Similarity} \\ \cmidrule(l){3-6}
& \multicolumn{1}{c|}{} & \multicolumn{1}{c}{female} & \multicolumn{1}{c|}{male} & \multicolumn{1}{c}{female} & \multicolumn{1}{c}{male} \\ \midrule
GT & \multicolumn{1}{l|}{4.75} & 4.46 & \multicolumn{1}{l|}{4.56} & - & - \\
Baseline & \multicolumn{1}{l|}{3.30} & 2.54 & \multicolumn{1}{l|}{2.38} & 2.76 & 3.02 \\
System-1 & \multicolumn{1}{l|}{4.09} & 3.93 & \multicolumn{1}{l|}{2.76} & 3.92 & 3.13 \\
System-2 & \multicolumn{1}{l|}{3.56} & 2.57 & \multicolumn{1}{l|}{2.67} & 3.17 & \textbf{3.26} \\
System-3 & \multicolumn{1}{l|}{\textbf{4.11}} & \textbf{3.93} & \multicolumn{1}{l|}{\textbf{3.10}} & \textbf{4.05} & 3.05 \\ \bottomrule
\end{tabular}
\end{table}
According to the MOS results, System-1 outperforms the baseline in all aspects: intelligibility, voice quality, and speaker similarity. This indicates that the excitation spectrogram, which captures the explicit harmonic structure of speech, is much more effective than the simple use of pitch and energy. It can improve the clarity of pronunciation for some speakers and, at the same time, reduce the noise or signal distortion caused by insufficient modeling capability for complex data.
For the proposed CGLSTM decoder, comparing System-2 with the baseline shows that it also brings a large improvement. The MOS of intelligibility increased by 0.26 points, which shows that the CGLSTM can reduce the negative impact of the control information on intelligibility. Besides, the improvement in speaker similarity indicates that the CGLSTM can control the specific characteristics of the voice better than the LSTM.
After using the excitation spectrogram and the CGLSTM decoder together in System-3, we achieve the best MOS performance. In addition, an AB preference test was conducted between System-1 and System-3, shown in Figure~\ref{fig:ab_test}. Here System-3 performs slightly better than System-1 in voice quality for male speakers and worse for female speakers, with on average 37.5\% of testers having no preference. Considering both the MOS and AB preference results, System-1 and System-3 are comparable.
\begin{figure}[th]
\centering
\includegraphics[scale=0.8]{AB-test.png}
\caption{AB preference of voice quality between System-1 and System-3.}
\label{fig:ab_test}
\end{figure}
Finally, the performance for the males in the test set is obviously worse than that for the females. One possible reason is that the training data for females and males is imbalanced, with a rough female:male ratio of 9:2. The performance gap between females and males becomes smaller after using the CGLSTM decoder, e.g., the voice quality gap drops from 1.17 (System-1) to 0.83 (System-3). A possible explanation is that, in the case of imbalanced data, the CGLSTM can share information better than the LSTM and can control the specific features of the voice through the control information. More investigation is needed to confirm this in future work.
\section{Conclusions}
In this paper, we have proposed 1) an excitation spectrogram generator that captures the harmonic structure of speech, aiming to handle the diversity of multi-speaker data by providing a starting point for the mel-spectrogram, and 2) the CGLSTM, which controls specific characteristics of speech with less impact on intelligibility than the LSTM. The experiments showed a large reduction in the reconstruction error of the mel-spectrogram when using the excitation spectrogram generator and the CGLSTM decoder. In System-3, the multi-speaker generative model obtained better modeling capability with a 10\% reduction in model size. The effectiveness of the proposed methods is further verified in the subjective evaluations of the speaker-adapted models, where we achieved comprehensive improvements in intelligibility, voice quality, and speaker similarity, e.g., in System-3 the MOS of intelligibility improved from 3.30 to 4.11 and the voice quality for females improved from 2.54 to 3.93. However, we also found that the performance for males is worse than that for females, which perhaps derives from the imbalanced female/male data and needs further research in the future.
\bibliographystyle{IEEEtran}
\input{template.bbl}
\end{document}
\section*{Introduction}
Liquid spray atomization plays an important role in analyzing the combustion process. A standard modeling approach is to split the process into two steps: primary followed by secondary atomization, as shown in Fig. \ref{fig:spray}. Traditionally, the spray dynamics is modeled using an EL point-particle/parcel approach where liquid droplets are assumed subgrid as point droplets and their motion is captured by laws for drag, lift, added mass, and pressure forces. Their effect on the carrier phase is then modeled through two-way coupling of mass, momentum, and energy exchange \cite{Dukowicz1980}. Liquid ``blobs'' with the size of the injector diameter introduced into the combustion chamber undergo atomization based on either deterministic (e.g., \cite{Orourke_1987}) or stochastic breakup models \cite{Apte2003}. In the standard EL point-particle approach, the volume fraction and size of the dispersed phase are assumed small compared to the computational cell. However, this assumption is not strictly applicable to dense spray regions with high void fraction, such as the primary and the dense secondary atomization regimes (see Fig. \ref{fig:spray}). This could result in less accurate predictions for such regimes.
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{./figures/spray_schematic.eps}
\caption{Different regimes in a liquid atomization process with illustration of dispersed phase volume fraction.}
\label{fig:spray}
\end{figure}
Several works have demonstrated the importance of accounting for the volume/mass displaced in EL approaches, e.g., \cite{patankar2001a,Snider2001,Ferrante2004,Deen2007,Vanderhoef2008,Apte2008} among others. \cite{Finn2011} and \cite{Cihonski2013} showed that under some conditions, the entrainment of eight small bubbles results in significant vortex distortion when the volumetric displacement effects are accounted for. \cite{Fox2014} showed that new turbulence production terms arise due to correlations between the particle-phase volume fraction and fluid-phase velocity fluctuations. \cite{Capecelatro2014_JFM} observed a strong correlation between the local volume fraction and the granular temperature in fully developed cluster-induced turbulence. \cite{Finn2016} applied this formulation to simulations of natural sand dynamics in the wave bottom boundary layer, where overall excellent agreement with experiments was achieved.
Besides, when a droplet is exposed to a high-velocity gaseous phase, it undergoes deformation and distortion due to the balance between the aerodynamic pressure force, surface tension and viscous dissipation forces. This effect, which is typically neglected in standard EL approaches, could ultimately change the breakup process, i.e., the breakup time as well as the size and velocity of the product drops. However, modeling such an effect and devising a single predictive tool for all types of breakup regimes is challenging. \cite{taylor1963} suggested an analogy between an oscillating, distorting droplet and a spring-mass system. In this analogy, the surface tension is analogous to the restoring force of the spring and the aerodynamic pressure force is analogous to the external force on the mass. \cite{Orourke_1987} added the liquid viscosity as a damping force and recast this model as the Taylor Analogy Breakup (TAB) model, which predicts the breakup process as well. In this spring-dashpot-mass system, the forces act on the center of the droplet and model its oscillation and deformation. Since the droplet is distorted at both poles, the idea of having the forces act on the center of the droplet was corrected by \cite{Clark1988} in an energy-conservation-based formulation. In this modified model, each droplet consists of two half drops and the forces act on the center of mass of each half, resulting in two spring-dashpot-mass systems for the given condition. Since Clark's model was linearized, the effect of non-linear deformation, particularly at large magnitudes, was lost; \cite{Ibrahim_1993} improved the model by more accurately capturing the non-linear effects of large deformations. The three-dimensional nature of the distorting drop is accounted for by conserving the drop volume instead of the area, leading to a new Droplet Deformation and Breakup (DDB) model.
\cite{Park_2002} improved the original TAB model by modifying the aerodynamic pressure force, taking into account the variation of the projected area of the drop during deformation, which was neglected in the original TAB model. \cite{wang2014} developed a model for Bag-Type Breakup (BTB) based on a modified version of the model put forth by \cite{detkovskii1994}, wherein the kinetic energy of the drop is assumed negligible for low-Weber-number deformations. In their formulation, the deformation is expressed at the center of the half drop due to the Hill-vortex formation around this point. Similar to \cite{Clark1988} and \cite{Ibrahim_1993}, all forces are applied to the center of mass of the half drop. The surface tension is decomposed into a positive and a negative part, where the former tends to flatten the drop and the latter restores it. The extension of their model to higher Weber numbers, i.e., the Multimode Bag (MMB) breakup regime, includes the kinetic energy of the droplet \cite{wang2015}.
\cite{Sor_2015}, in the context of ice accretion, modified the DDB model of \cite{Ibrahim_1993} by computing the surface tension force accurately. In addition, the instantaneous velocity of the droplet is employed in the deformation model rather than a constant upstream velocity, and the center of mass of a half ellipsoid is used rather than that of a half sphere. Better predictions of the deformation of a droplet impinging on an airfoil were observed compared to the traditional TAB, Clark's and DDB models.
In this work, the volumetric displacement effects of deforming droplets in a dense spray are investigated. In order to isolate the volumetric displacement effects, Large Eddy Simulation (LES) coupled with a Point-Particle (PP) approach modified with spatio-temporal variations in the volume fraction of the carrier phase is employed. For this part, droplets are assumed to be non-deforming solid particles, and coalescence, breakup and evaporation are all masked to focus only on the displacement effects. Accordingly, a turbulent jet flow laden with a dense loading of solid particles is investigated. Results of this modified LES-PP formulation (volumetric coupling) are compared with those of the standard EL point-particle approach (standard coupling), where displacement effects are neglected. Next, in order to study the volumetric displacement effects with deforming droplets, different deformation models are investigated by assessing their predictive capabilities for the wide range of Weber numbers and breakup regimes typically observed in sprays. Drop deformation in the bag, multimode, transition and shear breakup regimes is examined to identify a proper model for each regime. These models, along with the volumetric coupling formulation, are intended to be applied to a real atomizing jet in cross flow. However, as a first step in studying the deformation effects, a single liquid droplet injected into a cross flow is examined, where the flow parameters are similar to a bag-type breakup condition. It is conjectured that both volumetric displacement and drop deformation have to be accounted for in modeling the dense spray regimes.
\section*{Mathematical description}
The carrier phase is captured by solving the governing equations in an Eulerian framework using an LES formulation. The motion of liquid droplets is modeled in a Lagrangian framework based on the point-particle approach \cite{Maxey1983}. The two phases are coupled primarily by two mechanisms: the displacement of the carrier phase by the volume occupied by the particles, and the momentum exchange between the phases. The LES volume-averaged governing equations for the carrier phase are as follows
\begin{equation}
\frac{\partial}{\partial t}\left(\overline{\rho_f\theta_f}\right) + \frac{\partial}{\partial x_j}\left(\overline{\rho_f\theta_f}\widetilde{u}_j\right) = 0
\label{eqn:mass}
\end{equation}
\begin{equation}
\begin{split}
&\frac{\partial}{\partial t}\left(\overline{\rho_f\theta_f}\widetilde{u}_i\right) + \frac{\partial}{\partial x_j}\left(\overline{\rho_f\theta_f}\widetilde{u}_i\widetilde{u}_j\right) = -\frac{\partial \widetilde{P}}{\partial x_i} + \\
&\frac{\partial}{\partial x_j} \left(2\overline{\mu_f\theta_f}\widetilde{S}_{ij}\right) -\frac{\partial q^{r,vol}_{ij}}{\partial x_j} +\overline{\rho_f\theta_f}g_i + F_{i,p \rightarrow f}
\label{eqn:momentum}
\end{split}
\end{equation}
\noindent Here, $\overline{\rho_f\theta_f}$ is the filtered density modified by the local volume fraction, $\widetilde{u}$ and $\widetilde{P}$ are the Favre-averaged velocity field and pressure, respectively, and $\widetilde{S}_{ij}$ is the Favre-averaged rate of strain. The additional term $q^{r,vol}_{ij}$ in the momentum equation represents the subgrid-scale stress and is modeled using the dynamic Smagorinsky model \cite{moin1991}. As expressed below, rewriting these equations in the form of standard two-way coupling produces extra source terms, $S_{v,cont}$ and $S_{v,mom}$, in the continuity and momentum equations, respectively. The former identifies the divergence of velocity due to variations in the local volume fraction, whereas the latter gives rise to the volumetric displacement forces. Both source terms are zero in the typical two-way coupling approaches.
\begin{equation}
\frac{\partial \widetilde{u}_j}{\partial x_j} = S_{v,cont}
\label{eqn:cont_source}
\end{equation}
\begin{equation}
\begin{split}
&\overline{\rho_f} \left(\frac{\partial \widetilde{u}_i}{\partial t} + \frac{\partial \widetilde{u}_i\widetilde{u}_j}{\partial x_j}\right)
= -\frac{\partial \widetilde{P}}{\partial x_i} + \frac{\partial}{\partial x_j} \left(2\overline{\mu_{f}}\widetilde{S}_{ij}\right) -\\
&\frac{\partial q^{r,2w}_{ij}}{\partial x_j}+\overline{\rho_f}g_i + F_{i,p\rightarrow f} + S_{v,mom}
\label{eqn:momentum_source}
\end{split}
\end{equation}
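For a constant carrier density, Eq. \ref{eqn:mass} reduces Eq. \ref{eqn:cont_source} to $S_{v,cont}=-(1/\theta_f)\,D\theta_f/Dt$. The Python sketch below (an illustration only, not the solver used in this work) evaluates this source term on a one-dimensional grid with central differences:

```python
import numpy as np

def continuity_source(theta_new, theta_old, u, dx, dt):
    """S_v,cont = -(1/theta) * (d theta/dt + u * d theta/dx) on a 1-D grid,
    assuming a constant fluid density (central differences in space)."""
    theta = 0.5 * (theta_new + theta_old)        # time-centred volume fraction
    dtheta_dt = (theta_new - theta_old) / dt
    dtheta_dx = np.gradient(theta, dx)
    return -(dtheta_dt + u * dtheta_dx) / theta

# a uniform, steady volume fraction gives a divergence-free velocity field
theta = np.full(8, 0.6)
src = continuity_source(theta, theta, np.ones(8), dx=0.1, dt=1e-3)

# a fluid volume fraction dropping in time (particles moving in) forces
# a positive velocity divergence that pushes fluid out of the region
src2 = continuity_source(np.full(8, 0.6), np.full(8, 0.7), np.zeros(8), 0.1, 1e-3)
```

The second case illustrates the physical content of the source term: as particles enter a region, the carrier fluid must be expelled, which the standard two-way coupling cannot represent.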
Throughout this work, if the spatio-temporal variations in the local volume fraction are accounted for (i.e., $\theta_f\neq 1$), the volumetric coupling terminology is used, whereas the standard two-way coupling (i.e., $\theta_f=1$) is invoked when these effects are neglected. In the point-particle approach, droplets are tracked using Newton's second law of motion based on the forces acting on them as
\begin{equation}
\frac{d\mathbf{x}_p}{dt}= \mathbf{u}_p;~\frac{d\mathbf{u}_p}{dt} = \frac{1}{m_p}\left(\mathbf{F}_g + \mathbf{F}_{p} + \mathbf{F}_{drag} +
\mathbf{F}_{\rm lift}\right)
\label{eqn:newton_2}
\end{equation}
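As a concrete illustration of Eq. \ref{eqn:newton_2}, the sketch below advances one particle with an explicit Euler step, retaining only gravity and a quasi-steady drag closed with the Schiller-Naumann correction. The drag closure and all parameter values here are assumptions for illustration, not necessarily those used in the present simulations:

```python
import numpy as np

def step_particle(x_p, u_p, u_f, dt, d_p, rho_p, rho_f, mu_f, g):
    """One explicit Euler step of the particle equation of motion,
    keeping gravity and quasi-steady drag only (lift, pressure and
    added-mass forces omitted for brevity)."""
    rel = u_f - u_p
    re_p = rho_f * np.linalg.norm(rel) * d_p / mu_f   # particle Reynolds number
    f = 1.0 + 0.15 * re_p**0.687                      # Schiller-Naumann factor
    tau_p = rho_p * d_p**2 / (18.0 * mu_f)            # Stokes response time
    accel = f * rel / tau_p + g
    return x_p + dt * u_p, u_p + dt * accel

# a 105-micron particle released at rest in a uniform 1 m/s stream
x, u = np.zeros(3), np.zeros(3)
u_f, g = np.array([1.0, 0.0, 0.0]), np.zeros(3)
for _ in range(5000):
    x, u = step_particle(x, u, u_f, dt=1e-3, d_p=105e-6,
                         rho_p=2000.0, rho_f=1.2, mu_f=1.8e-5, g=g)
# after many response times the particle relaxes to the fluid velocity
print(u[0])
```

Over many response times $\tau_p$ the relative velocity decays to zero, which is the behavior the drag term must reproduce before any deformation correction is applied.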
Different modeling approaches for droplet deformation are reviewed here. For each model, the equations normalized with $y=y/r_o$ and $t=tu_{\infty}/r_o$ are provided. The deformation equation in the TAB model is expressed as follows
\begin{equation}
\frac{d^2y}{dt^2} + \frac{5N}{ReK}\frac{dy}{dt} + \frac{8}{WeK}y = \frac{2}{3K}
\end{equation}
\noindent where $N=\mu_l/\mu_g$, $K=\rho_l/\rho_g$, $Re=\rho_gur/\mu_g$ and $We=\rho_gu^2r/\sigma$ are the viscosity ratio, density ratio, and the Reynolds and Weber numbers of the drop, respectively. In the improved TAB model developed by \cite{Park_2002}, in which the aerodynamic force is modified during the deformation process, the deformation is obtained as
\begin{equation}
\frac{d^2y}{dt^2} + \frac{5N}{ReK}\frac{dy}{dt} + \frac{1}{K}y\left(\frac{8}{We}-2C_F-0.5C_F\right) = \frac{2C_F}{K}
\label{eqn:Park_model}
\end{equation}
\noindent where $C_F=4/19$ is chosen such that the critical Weber number $We_{crt}=6$ is met. The DDB model by \cite{Ibrahim_1993} and its modified version by \cite{Sor_2015} are given below. The two models differ in the calculation of the surface area as well as the center of mass of the half drop; the latter leads to different values of the constant $c$, namely $3\pi/4$ and $8/3$ for DDB and its modified version, respectively.
\begin{equation}
\frac{d^2y}{dt^2} + \frac{4N}{ReK}\frac{1}{y^2}\frac{dy}{dt} + \frac{3c}{4KWe} \frac{dA_s}{da} = \frac{3}{8K}c_p
\end{equation}
\noindent where $c_p$ is a pressure coefficient that takes into account the variations in the gas pressure acting on the droplet surface during deformation. This parameter can be adjusted based on available experimental data or accurate fully resolved DNS results. $dA_s/da$ for both models is given by the following expression; unlike the original DDB, where a simplified version of this term was used, its accurate calculation is employed in the modified DDB by \cite{Sor_2015}.
\begin{equation}
\frac{dA_s}{da} =
\begin{cases}
\begin{split}
& 4a - \frac{4}{a^5\epsilon} \ln \left( \frac{1+\epsilon}{1-\epsilon}\right) + \frac{3}{a^{11}\epsilon}[ \frac{2}{\epsilon(1-\epsilon^2)} \\ & -\frac{1}{\epsilon^2} \ln \left( \frac{1+\epsilon}{1-\epsilon}\right) ] \quad \text{Modified DDB}\\
\end{split} \\
4a(1-2(a)^{-6}) \quad \text{DDB}
\end{cases}
\end{equation}
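The two branches above translate directly into code; the sketch below (illustrative only) evaluates $dA_s/da$ for both variants:

```python
import math

def dAs_da(a, modified=True):
    """Surface-area derivative dA_s/da for the DDB family;
    a = c*y is the normalised major semi-axis (modified branch valid for a > 1)."""
    if not modified:
        return 4.0 * a * (1.0 - 2.0 * a**-6)          # original DDB
    eps = math.sqrt(1.0 - a**-6)
    log_term = math.log((1.0 + eps) / (1.0 - eps))
    return (4.0 * a
            - 4.0 / (a**5 * eps) * log_term
            + 3.0 / (a**11 * eps) * (2.0 / (eps * (1.0 - eps**2))
                                     - log_term / eps**2))

print(dAs_da(1.5, modified=False), dAs_da(1.5))
```

Note that the modified branch is singular at $a=1$ (the undeformed drop, where $\epsilon=0$), so it should only be evaluated once the drop has flattened.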
\noindent where $a=cy$ is the normalized major semi-axis of the half drop and $\epsilon = \sqrt{1-a^{-6}}$. The deformation equation in the BTB model developed by \cite{wang2014} is expressed as
\begin{equation}
\frac{dy}{dt} = \frac{yC_L}{(KN)^{1/3}}\left( \frac{C_d}{2} - \frac{2C_f}{We} \left[ a^{-1} + a^{5} -2a^{-4}\right]\right)
\end{equation}
\noindent where $C_L=C_{d,sph}=0.45$ accounts for changes in the pressure on the drop surface during the deformation from sphere to disk. By comparison with experiments, $C_f=1/600$ was found to best close the model \cite{wang2014}. The drag coefficient, $C_d$, is obtained as
\begin{equation}
C_d =
\begin{cases}
C_{d,sph} \quad \text{for} \quad (We<10)\\
2.1 - 13.63/We^{0.95} \quad \text{for} \quad (We\geq10)
\end{cases}
\label{eqn:cd_wang}
\end{equation}
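Eq. \ref{eqn:cd_wang} is a simple piecewise relation; a direct transcription (for illustration only) reads:

```python
def drag_coefficient(we, cd_sph=0.45):
    """Piecewise drag coefficient used in the BTB/MMB models."""
    return cd_sph if we < 10.0 else 2.1 - 13.63 / we**0.95
```

Note that the relation is discontinuous at $We=10$, jumping from 0.45 to about 0.57.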
The MMB model by \cite{wang2015} is expressed as
\begin{equation}
\begin{split}
\frac{d^2y}{dt^2} &= \frac{12N}{KRe} [ -\frac{1}{y}\frac{dy}{dt} + \\ & \frac{C_L}{(KN)^{1/3}}\left( \frac{C_d}{2} - \frac{2C_f}{We} \left[ a^{-1} + a^{5} -2a^{-4}\right]\right) ]
\end{split}
\end{equation}
\noindent where $C_f=0.005$, $C_d$ is obtained as in Eq. \ref{eqn:cd_wang}, and $C_L$ is given by
\begin{equation}
C_L =
\begin{cases}
C_{\mu}(360 - 413\,We^{-0.057}) \quad (15<We\leq40)\\
\begin{split}
&C_{\mu}[18.72\exp(5.29\times10^{-3}We) \\
&+ 0.1125\exp(5.8\times10^{-2}We)] \quad (40<We\leq80)
\end{split}
\end{cases}
\end{equation}
\noindent where
\begin{equation}
C_{\mu} = 7.024 \times 10^{-3}Oh^{-4/3}K^{-1/3}
\end{equation}
\noindent and the Ohnesorge number is $Oh=\mu_l/\sqrt{\rho_ld_0\sigma}$. Note that in the last two models (BTB and MMB), unlike the others, the Weber and Reynolds numbers are calculated based on the drop diameter.
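The piecewise lift coefficient above can be transcribed as follows (an illustrative sketch; outside the fitted range $15<We\leq80$ the correlation is simply undefined):

```python
import math

def lift_coefficient(we, oh, k):
    """Piecewise C_L of the MMB model; valid only for 15 < We <= 80.
    oh is the Ohnesorge number, k the liquid/gas density ratio."""
    c_mu = 7.024e-3 * oh**(-4.0 / 3.0) * k**(-1.0 / 3.0)
    if 15.0 < we <= 40.0:
        return c_mu * (360.0 - 413.0 * we**-0.057)
    if 40.0 < we <= 80.0:
        return c_mu * (18.72 * math.exp(5.29e-3 * we)
                       + 0.1125 * math.exp(5.8e-2 * we))
    raise ValueError("C_L correlation defined only for 15 < We <= 80")

# multimode-bag case of Table 2 (diameter-based We = 18)
print(lift_coefficient(18.0, 1.4e-3, 789.0))
```

Guarding the valid range explicitly is useful in a solver that switches models by regime, since silently extrapolating these fits outside $15<We\leq80$ has no physical basis.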
\section*{Results and Discussion}
The numerical approach used in this work has been extensively applied and validated for different applications \cite{Shams2011,Finn2011,Cihonski2013,Finn2016,Pakseresht2017_asme,pakseresht2017_aps,pakseresht2017_ilass,He2018}. As a first step in modeling dense spray atomization, the volumetric displacement effects of the carrier phase are isolated by masking the shape deformation, coalescence, breakup and evaporation of the liquid phase in an LES of a dense particle-laden turbulent jet flow. Different particle Stokes numbers are studied to examine their influence on these effects. The studied cases and the corresponding flow parameters are listed in Table \ref{tab:cases_jet}. Detailed explanations and further results can be found in our recent work \cite{pakseresht2019}. For each case, the results of the standard and volumetric two-way couplings are compared. Note that although inter-particle collisions are included for each case (i.e., four-way coupling), the two-way coupling terminology is used in order to focus solely on the particle-fluid interactions. To the best of our knowledge, no experimental data exist for such dense cases; thus, only these two formulations are compared in order to investigate the displacement effects of the carrier phase.
\begin{table}[h]
\begin{center}
\begin{adjustbox}{width=\columnwidth}
\begin{tabular} {lccccc}
\hline
Case & $d_p(\mu m)$ & $Re_{j}$ & S.G. & St & $[\overline{\theta_p}]_{inlet}$ \\
A & 105 & 5712 & 2122.24 & 11.6 & 37.6(\%) \\
B & 105 & 5712 & 7 & 0.0383 & 37.6(\%) \\
\hline
\end{tabular}
\end{adjustbox}
\caption{Flow parameters for different particle-laden turbulent jet cases.}
\label{tab:cases_jet}
\end{center}
\end{table}
Figure \ref{fig:u} shows the results of these two formulations for the mean and r.m.s. velocities in the near field of the jet for case A. As shown, the volumetric coupling predicts higher velocities very close to the nozzle due to the volumetric displacement effects. Further downstream, however, owing to the jet spread and the dispersion of particles, the local volume fraction of the dispersed phase decreases, and so do the displacement effects. The increase in the prediction of the volumetric coupling is attributed to the continuity source term, $S_{v,cont}$, which drives the higher velocity in the regions with low fluid volume fraction \cite{pakseresht2019}.
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{./figures/all_icmf19.eps}
\vspace{-0.03\textheight}
\caption{Stream-wise mean and r.m.s. velocities of the carrier phase for case A based on the standard and volumetric two-way couplings}
\label{fig:u}
\end{figure}
As plotted in Fig. \ref{fig:force_fluid}, the contribution of the volumetric displacement force ($S_{v,mom}$) to the displacement effects is quite negligible. As shown, the point-particle force in the volumetric coupling, $F_{p,vol}$, is almost twice that of the standard two-way coupling, $F_{p,2w}$. This is due to the higher velocity prediction caused by the continuity source term, which in turn exerts higher forces on the particles in this formulation.
\begin{figure}[!htpb!]
\begin{center}
\includegraphics[width=\columnwidth,keepaspectratio=true,trim={0 0 0 0},clip]{./figures/fluid_force_icmf19.eps}\\
\caption{Radial profile of the normalized time-averaged stream-wise forces in both formulations at the nozzle exit ($x/d_j=0.04$). Shown includes the volumetric displacement force ($S_{v,mom}$) in the volumetric coupling formulation. }
\label{fig:force_fluid}
\end{center}
\end{figure}
The influence of the particle Stokes number on the displacement effects is illustrated in Fig. \ref{fig:stokes_error} through the percentage difference between the results of the two formulations. As depicted, decreasing the Stokes number extends the volumetric displacement effects further downstream. In addition, the dispersed phase is more strongly affected by these effects. This is attributed to the fact that particles with a lower relaxation time absorb changes in the background flow and react more rapidly to the displacement effects.
We observed that these effects become important when the inlet average volume loading of the jet is greater than 5\% \cite{pakseresht2019}. In this regime, the standard two-way coupling approach is conjectured to be insufficient to accurately capture the particle-turbulence interactions. Therefore, for a real atomizing spray, where the local volume fraction in the dense regime is of order unity, $\theta_p \sim O(1)$, the volumetric displacement effects would be even more remarkable and need to be accounted for.
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth,trim={0 0 0 0},clip]{./figures/error_stokes_icmf19.eps}
\vspace{-0.01\textheight}
\caption{Relative increase in the centreline mean velocity prediction of volumetric coupling for both phases of cases A and B.}
\label{fig:stokes_error}
\end{figure}
As the next step, the deformation effect on the spray characteristics is investigated. It is widely observed that in a spray atomization process, depending on the Weber and Ohnesorge numbers, droplets undergo different phases of deformation and breakup \cite{Krzeczkowski_1980,Hsiang_1992}. For $We<1$, no deformation occurs, while drops experience non-oscillatory or oscillatory deformation for $1<We<10$. Increasing the Weber number further results in more distortion and, depending on the Weber number, one of the bag, multimode, transition or shear breakup regimes takes place \cite{Hsiang_1992}. Moreover, the deformation in each breakup regime is observed to be quite different \cite{Krzeczkowski_1980}.
Several models have been proposed to predict the deformation, yet a single model capable of handling all regimes remains elusive. In this part, the capabilities of the available models are compared against the experimental data of \cite{Krzeczkowski_1980} in order to identify the best possible model for each breakup regime. The bag, multimode bag, transition and shear breakup regimes corresponding to the experiment are listed in Table \ref{tab:cases_drop}.
\begin{table}[h]
\begin{center}
\begin{adjustbox}{width=\columnwidth}
\begin{tabular} {lccccc}
\hline
Case & $Re$ & $We$ & $Oh$ & $N$ & $K$ \\
Bag & 3323.16 & 13.5 & $1.88\times10^{-3}$ & 47.9 & 789 \\
Mult. bag & 5161.93 & 18 & $1.4\times10^{-3}$ & 47.9 & 789 \\
Transition & 8794.4 & 52.6 & $1.4\times10^{-3}$ & 47.9 & 789 \\
Shear & 12235.69 & 101 & $1.4\times10^{-3}$ & 47.9 & 789 \\
\hline
\end{tabular}
\end{adjustbox}
\caption{Different breakup regimes based on the experimental work of \cite{Krzeczkowski_1980}.}
\label{tab:cases_drop}
\end{center}
\end{table}
The deformation models were solved numerically using the classical fourth-order Runge-Kutta method. Note that the ratio of the deformed drop's major semi-axis to the initial radius, $a/r_o$, is defined differently among the models: in TAB and its modified version, $a/r_o=1+0.5y$, whereas in the other models $a/r_o=cy$. As shown in Figs. \ref{fig:krz_bag1} and \ref{fig:krz_bag2}, the MMB model by \cite{wang2015} predicts the deformation best, with good agreement with the experiment. The TAB and DDB models and their modifications fail to predict the large deformations involved in these cases. The modified TAB model by \cite{Park_2002} predicts the deformation better than TAB and DDB; however, it underpredicts for $t>100$. Accordingly, it can be inferred that the MMB model developed by \cite{wang2015} is suitable for modeling drop deformation in the bag and multimode bag breakup regimes. Regarding the transition regime shown in Fig. \ref{fig:krz_transition}, both the TAB and DDB models show better agreement with the experiment, whereas the BTB, MMB and modified TAB models all overpredict the large deformations, i.e., for $t>80$. For the shear-type breakup regime plotted in Fig. \ref{fig:krz_shear}, the modified TAB model strongly overpredicts the experimental observation and does not follow the experimental trend. In addition, as mentioned in the original works, both the BTB and MMB models are suited for the bag breakup regime, and applying them to the shear regime is not justified \cite{wang2014,wang2015}. Both the TAB and DDB models remain within the range of the experiment for the shear breakup regime; however, the downward trend observed in the experiment is only captured by the DDB model and its modification by \cite{Sor_2015}. This shows that for the shear breakup regime, one can employ the energy-based deformation model by \cite{Ibrahim_1993}. It is worth mentioning that the robustness and predictive capability of these models would be better established if they were compared with more experimental data in each regime.
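The integration described above can be sketched as follows for the TAB equation. This is an illustration only, with assumed initial conditions $y(0)=\dot{y}(0)=0$, and it treats the Table \ref{tab:cases_drop} values as the radius-based groups TAB expects (an assumption; the table states the diameter convention only for BTB and MMB):

```python
def tab_rhs(t, state, n, k, re, we):
    """TAB equation as a first-order system:
    y'' = 2/(3K) - (5N/(Re K)) y' - (8/(We K)) y."""
    y, v = state
    return (v, 2.0/(3.0*k) - 5.0*n/(re*k)*v - 8.0/(we*k)*y)

def rk4_step(f, t, state, dt, *args):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(t, state, *args)
    k2 = f(t + 0.5*dt, [s + 0.5*dt*q for s, q in zip(state, k1)], *args)
    k3 = f(t + 0.5*dt, [s + 0.5*dt*q for s, q in zip(state, k2)], *args)
    k4 = f(t + dt, [s + dt*q for s, q in zip(state, k3)], *args)
    return [s + dt/6.0*(a + 2*b + 2*c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

n, k, re, we = 47.9, 789.0, 3323.16, 13.5   # bag case of Table 2
state, t, dt, ymax = [0.0, 0.0], 0.0, 0.05, 0.0
while t < 200.0:
    state = rk4_step(tab_rhs, t, state, dt, n, k, re, we)
    t += dt
    ymax = max(ymax, state[0])
print(ymax)
```

With this very light damping ($5N/(ReK)\approx 9\times10^{-5}$), the oscillator overshoots to nearly twice the static deflection $y_{\infty}=We/12\approx1.125$, illustrating why a TAB drop at this Weber number comfortably exceeds the breakup threshold $y=1$.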
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{./figures/krz_bag_1.eps}
\vspace{-0.025\textheight}
\caption{Drop deformation in bag breakup regime based on different models compared to the experiment}
\label{fig:krz_bag1}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{./figures/krz_bag_2.eps}
\vspace{-0.01\textheight}
\caption{Drop deformation in multimode bag breakup regime based on different models compared to the experiment}
\label{fig:krz_bag2}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{./figures/krz_transition.eps}
\vspace{-0.025\textheight}
\caption{Drop deformation in transition breakup regime based on different models compared to the experiment}
\label{fig:krz_transition}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{./figures/krz_shear.eps}
\vspace{-0.01\textheight}
\caption{Drop deformation in shear breakup regime based on different models compared to the experiment}
\label{fig:krz_shear}
\end{figure}
Moreover, as observed by \cite{Sor_2015}, the pressure term may vary during deformation, which can be accounted for by introducing a pressure coefficient, $c_p$. They found $c_p=0.93$ to best predict the corresponding experiment in the context of ice accretion; however, this value may change for different flows and regimes. The effect of this parameter on the deformation of a droplet in the shear breakup regime is shown in Fig. \ref{fig:krz_shear_cp}. $c_p=0.7$ gives better results for this regime, revealing that further modification and tuning are required for this model and that the assumption of a constant pressure on the drop surface may be invalid.
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{./figures/krz_shear_cp.eps}
\vspace{-0.01\textheight}
\caption{Effect of pressure coefficient on the prediction of DDB model}
\label{fig:krz_shear_cp}
\end{figure}
It should be mentioned that these models have to be implemented for real cases with more accurate Reynolds and Weber numbers rather than the constant values typically employed in the literature. Depending on the drop relaxation and deformation time scales, one can estimate whether the drop is displaced significantly during deformation; for cases where deformation occurs much faster than displacement, assuming a constant slip velocity during deformation is acceptable. For a real spray, where a wide range of Weber and Reynolds numbers exists, a strategy is required to switch between these models, since employing a single model may yield inaccurate deformation results and affect the subsequent breakup.
These models are intended to be tested on a case wherein a series of liquid drops is injected into a cross flow and the drops undergo deformation before breakup occurs. As an initial test case, in order to isolate the deformation effect, a single liquid droplet is injected into a uniform flow with parameters similar to the bag breakup regime. Due to the small volume loading of the drop, one-way coupling is chosen, and the volumetric displacement effect for this case is conjectured to be insignificant. \cite{Hsiang_1992} observed that the drag coefficient increases linearly from that of a sphere to that of a disk during the deformation process if internal circulation is negligible. This shows that deformation directly influences the dynamics of the drop through a modified drag coefficient. \cite{Liu_etal_1993} obtained a linear relation for the drag coefficient as a function of the deformation parameter based on the TAB model as
\begin{equation}
C_d = C_{d,sph}(1+2.632y)
\end{equation}
\noindent while \cite{liang2017} derived a power law relation for this coefficient as
\begin{equation}
C_d = 0.7y^{0.516}+0.47
\end{equation}
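To compare the two corrections, the sketch below evaluates both over the deformation range (an illustrative transcription of the relations above):

```python
def cd_liu(y, cd_sph=0.45):
    """Linear deformation correction of Liu et al."""
    return cd_sph * (1.0 + 2.632 * y)

def cd_liang(y):
    """Power-law deformation correction of Liang et al."""
    return 0.7 * y**0.516 + 0.47

for y in (0.0, 0.25, 0.5, 1.0):
    print(f"y={y:.2f}  Cd_Liu={cd_liu(y):.3f}  Cd_Liang={cd_liang(y):.3f}")
```

Both relations start near the spherical value ($0.45$ and $0.47$) at $y=0$, but the linear correction grows considerably faster, reaching $C_d\approx1.63$ versus $1.17$ at full deformation ($y=1$).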
In order to couple the deformation and its effect on the motion of a single droplet injected into the cross flow, cases with and without the deformation effect on the drag are compared. The bag-type breakup condition of Table \ref{tab:cases_drop} is examined before breakup occurs ($t<300$). Fig. \ref{fig:u_drop_sh} shows the results with the drag coefficient modified according to the above formulations. A significant deviation is observed in the motion of the droplet relative to the case where deformation is not accounted for. This could potentially alter the breakup process and affect the size and velocity of the product drops once breakup and disintegration take place. In our future investigations, the deformation effects on the trajectories of a series of liquid drops injected into a cross flow will be examined, where a combination of the different models will be employed to more accurately capture these effects. Then, the volumetric displacement effects of a deforming liquid jet in cross flow will be studied in order to obtain better predictive tools for modeling dense sprays. In addition, the effect of internal circulation, which is conjectured to decrease the drag coefficient, is left for further investigation.
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{./figures/u_drop_bg.eps}
\vspace{-0.02\textheight}
\caption{Drop deformation effect on the motion of a liquid droplet in a bag-type breakup regime}
\label{fig:u_drop_sh}
\end{figure}
\section*{Conclusions}
Volumetric displacement effects of deforming liquid droplets were investigated. In order to isolate the volumetric displacement effects, deformation, coalescence, breakup and evaporation were all masked by simulating a turbulent jet flow laden with a dense loading of solid particles. The standard EL two-way coupling approach was modified by accounting for the spatio-temporal variations in the local volume fraction of the carrier phase. The results of the standard and modified EL approaches were compared to quantify the volumetric displacement effects and the regions where these effects become important. It was found that these effects increase both the mean and r.m.s. velocities of the carrier phase in the region very close to the nozzle; however, they decrease further downstream due to the jet spread and the dispersion of particles. Lowering the particle Stokes number increases the displacement effects on both phases. As a result, we conclude that accounting for the spatio-temporal variations in the volume fraction of the carrier phase is necessary in EL approaches for modeling dense flows. The developed model can also be used for other configurations such as jet impingement \cite{azimi2015slot}.
In addition, in order to investigate the deformation effects, different models were tested against experimental data. It was observed that the MMB model performs best for the bag and multimode bag breakup regimes, while the original TAB model agreed well with the experiment in the transition regime. The modified DDB model with an adjusted pressure coefficient was observed to match the shear-regime data well. It was conjectured that a hybrid model based on a combination of these models is required for real spray atomization flows, wherein a wide range of Weber numbers and breakup regimes exists. Such closures can also be informed by fully resolved simulations \cite{azimi2018_journal,azimi2018_aps}. In order to isolate the deformation effects, as a first step, a single droplet injected into a cross flow was investigated. It was observed that accounting for the deformation effect results in a significant increase in the velocity of the droplet. Accordingly, we hypothesize that if the volumetric coupling approach is systematically extended to dense atomizing sprays, similar results with more pronounced displacement and deformation effects will be obtained.
\section*{Acknowledgments}
Financial support was provided under the NASA Contract Number NNX16AB07A monitored by program manager Dr. Jeff Moder, NASA Glenn Research Center. In addition, the authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin as well as San Diego Supercomputer Center (SDSC) at University of California San Diego for providing HPC resources that have contributed to the results reported here.
\bibliographystyle{ilass}
\section{Introduction}
Electron scattering data are strongly desirable and very useful in many scientific areas and technological applications \cite{Sanc09,Maso14,Altw16,Chri04}.
In the last decades increasing attention has been devoted to understanding elementary phenomena that accompany the interaction of electrons with biologically relevant molecules \cite{Brun17,Kohanoff17}.
Pyridine [C$_5$H$_5$N] is an aromatic heterocyclic compound structurally related to benzene [C$_6$H$_6$], with one CH group in the ring replaced by a nitrogen atom (cf. Fig.~1).
The pyridine unit occurs in many compounds of biological importance, for example, in nicotine and B-group vitamins.
Also, pyridine is used as a precursor to agrochemicals and pharmaceuticals and as a reagent and solvent.
Numerous pyridine derivatives have importance for modern clinical applications \cite{Alta15}.
Study on the electron scattering from the pyridine molecule has quite a long tradition.
Early experiments focused on the study of temporary negative ion states formed at low impact energies \cite{Hueb68,Pisa73,Nenn75,Math76,Mode83};
the electron induced electronic transitions in the gas-phase pyridine \cite{Jons69,Doer72,Walk89};
and the electron-impact ionization efficiency near the threshold \cite{Arim84}.
Electron transmission experiment with thin film of solid target suggested the formation of compound states also in solid pyridine \cite{Sanc79}.
The features observed in the aforementioned works were located on the energy scale, but the intensities of the investigated processes were given only in arbitrary units, which makes such results inconvenient for practical applications.
Renewed interest in the electron scattering with biomolecules, among them in pyridine, appeared after observation that low-energy electrons can lead to the break-up of DNA \cite{Boud00}.
Formation of resonant states in the electron-pyridine scattering was examined also in computations using the Schwinger multichannel method \cite{Barb13} and R-matrix method \cite{Sier14}.
Cross sections for electron induced ionization of pyridine were calculated \cite{Bull14,Gupt14,Sier14} and measured in absolute scale \cite{Jiao06,Bull14}.
Examination of the measured electron energy-loss spectra allowed the observation and assignment of the triplet excited states of pyridine \cite{Line16}.
Recently, the formation of anionic species resulting from the dissociative attachment of low-energy electrons to pyridine has been investigated \cite{Rysz17}.
Just in the course of the present experiment, absolute TCS data have been published \cite{Trao18}: measured in a transmission experiment from 13 to 902~eV and computed between 1 and 1000~eV.
\begin{figure}[h]
\begin{center}
\includegraphics[width=16cm,height=12cm,angle=0]{Graph1.eps}
\caption
{Structure of molecules studied. From left to right: pyridine, benzene, 2-chloropyridine and 2-bromopyridine.
}
\end{center}
\end{figure}
The deficiency of absolute electron-scattering cross sections for the pyridine molecule prompted us to measure a reliable absolute \textit{grand}\,-total cross section (TCS) for this compound, over an impact-energy range from low to intermediate energies.
The total cross section is the quantity describing electron scattering which can be obtained without any normalization procedure, with good accuracy over a wide energy range.
Therefore, it may be used as a calibration standard or an upper limit for the normalization of particular scattering quantities measured only in arbitrary units, as well as for the estimation of quantities which are difficult to obtain.
The TCS may also serve as an experimental test of the reliability of theoretical models and computational procedures used in electron-scattering calculations.
Present absolute \textit{grand}\,-total cross sections (TCS) were measured at electron-impact energies ranging from 0.6 to 300~eV using the linear electron-transmission method.
To our knowledge, the experimental TCS data below 13~eV are not available in the literature.
The observed low-energy features in our TCS were explained based on findings of previous experiments \cite{Hueb68,Pisa73,Nenn75,Math76,Mode83} and computations \cite{Barb13,Sier14}.
The current electron-scattering TCS results for the pyridine [C$_5$H$_5$N] molecule were then compared to experimental TCS data \cite{Moze96,Gull98} for its isoelectronic 6-membered ring counterpart benzene [C$_6$H$_6$]; the substituent effect is discussed.
We present also elastic (ECS) and ionization (ICS) cross sections for pyridine and its two halogenated derivatives (2-chloropyridine and 2-bromopyridine) computed at intermediate and high electron-impact energies in the additivity rule approximation and the binary-encounter-Bethe approach, respectively.
\section{Experiment}
\subsection{\textit{Experimental procedure}}
The total cross sections for electron scattering from pyridine molecules reported in this work have been obtained using a linear electron-transmission method in single-collision conditions.
The idea of the transmission method is based on the measurements of the attenuation of a projectile particle-beam passing through the scattering medium under study (for details see e.g. Ref.~\cite{Bede71}).
The total cross section (TCS), $Q(E)$, for the scattering of projectiles of given energy $E$ from target particles, is related to the attenuation of the transmitted beam intensity through the Bouguer-de~Beer-Lambert (BBL) relationship:
\[
I_{n}(E) = I_{0}(E)\; {\rm exp}[- Q(E)nL] .
\]
\noindent
Here, $I_{n}(E)$ is the measured intensity of the projectile beam after traversing a length $L$ of the target medium whose absolute number density is $n$, and $I_{0}(E)$ is the intensity of the beam taken in the absence of the target in the reaction volume.
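As a sketch of how the TCS follows from the measured quantities, the BBL relationship can be inverted for $Q(E)$; the numbers below are illustrative only, not measured values from this experiment.

```python
import math

def total_cross_section(i0, i_n, n, length):
    """Invert the Bouguer-de Beer-Lambert law,
    I_n = I_0 * exp(-Q * n * L), for the total cross section Q.

    i0, i_n : beam intensities without / with the target (same units)
    n       : target number density [m^-3]
    length  : effective path length [m]
    """
    return math.log(i0 / i_n) / (n * length)

# Illustrative: a 10% attenuation over L = 30.5 mm at n = 5e19 m^-3
q = total_cross_section(1.0, 0.9, 5.0e19, 30.5e-3)
print(q)  # ~6.9e-20 m^2, the order of magnitude of the TCSs in Table I
```

Since only the ratio $I_0/I_n$ enters, the beam intensities need not be calibrated absolutely.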
The experimental set-up and the measurement procedure used in the present electron-pyridine scattering experiment have been described in detail elsewhere \cite{Szmy97}, so only a brief outline is given here.
A tunable-energy electron beam, formed in a system of electrostatic lenses coupled to an energy dispersing 127$^{\circ}$ electrostatic deflector, is directed into a reaction cell where its intensity is attenuated by the presence of the vapor sample under investigation.
Those electrons which leave the cell through the exit aperture are energy discriminated by the retarding-field filter and eventually detected with the Faraday cup.
Electron optics of the spectrometer is housed in a vacuum chamber evacuated to a base pressure of about 40~$\mu$Pa.
The magnetic field along the whole electron trajectory is reduced below 0.1~$\mu$T with the system of Helmholtz coils.
The quantities necessary for TCS derivation are taken directly in the present experiment and therefore cross section values reported in this work are given in absolute units, without any normalization procedure.
$L$ was taken equal to the distance (30.5~mm) between the entrance and exit apertures of the reaction cell,
while the target density value, $n$, is evaluated from the ideal gas formula corrected for the thermal transpiration effect \cite{Knud10}
\[
n = \frac{p_{t}}{k \sqrt{T_{t}T_{m}}} ,
\]
\noindent
where: $p_{t}$ means the pressure of the vapor-target in the cell as measured by a capacitance manometer and
$k$ denotes the Boltzmann constant;
$T_{t}$ is the temperature of the target cell determined using a thermocouple;
$T_{m}=322$~K~$ >T_{t}$ is the temperature at which the manometer head is held.
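A minimal numerical sketch of this density evaluation (the function name and sample conditions are ours, chosen within the ranges quoted in the text):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant [J/K]

def target_density(p_t, t_t, t_m=322.0):
    """Number density from the ideal-gas law corrected for thermal
    transpiration (Knudsen): n = p_t / (k * sqrt(T_t * T_m)).

    p_t : target-vapor pressure read by the manometer [Pa]
    t_t : temperature of the target cell [K]
    t_m : temperature of the manometer head [K] (322 K in this work)
    """
    return p_t / (K_B * math.sqrt(t_t * t_m))

# Illustrative conditions: 100 mPa in a cell held at about 315 K
n = target_density(100e-3, 315.0)
print(n)  # ~2.3e19 m^-3
```

For $T_t = T_m$ the expression reduces to the plain ideal-gas result $n = p_t/(kT)$.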
The energy scale has been calibrated against the oscillatory structure visible around 2.3~eV in the transmitted current when molecular nitrogen was admixed to the target under study.
The declared inaccuracy of the energy scale ($\sim$ 0.1~eV) is higher than that resulting directly from the calibration, due to the shift in energy perceptible in the course of the long-lasting experiment.
A commercially supplied sample of pyridine from Sigma-Aldrich, with a stated purity of 99.8\%, was distilled by freeze-pump-thaw repetitive cycles before use to remove volatile impurities.
The target vapor was admitted into the spectrometer via a variable leak valve, alternately into the reaction cell and the outer vacuum volume, thus the pressure in the region of the electron optics was maintained constant (below 0.6~mPa) whether or not the target was present in the cell;
that ensured a stable primary electron-beam intensity during both phases of the intensity measurements.
Due to a low vapor pressure of pyridine at room temperature, the sample handling system has been maintained at elevated temperature about 315~K.
The TCS measurements have been carried out at target-vapor pressures in the reaction cell between 70 and 200~mPa.
Under these conditions no systematic variation of the measured TCSs with the target pressure was observed.
\subsection{\textit{Uncertainty analysis}}
The final TCS value at each electron impact energy was derived as a weighted mean of results obtained in different runs.
The statistical variations of the reported TCS for pyridine, estimated as one standard deviation of the weighted mean value from TCS values obtained in different runs, do not exceed 1\% below 100~eV and gradually increase up to nearly 2\% at the highest electron-impact energies applied.
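The averaging described above can be sketched as follows; the run values and weights are hypothetical, and the spread definition is one plausible reading of the procedure:

```python
def weighted_mean_tcs(values, weights):
    """Weighted mean of TCS values from different runs, and one standard
    deviation of the run values about that mean as a measure of the
    statistical spread."""
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    var = sum(w * (v - mean) ** 2 for w, v in zip(weights, values)) / wsum
    return mean, var ** 0.5

# Hypothetical repeated runs at one energy, in units of 1e-20 m^2:
mean, spread = weighted_mean_tcs([64.3, 64.7, 64.5], [1.0, 1.0, 2.0])
print(mean, spread)  # 64.5, with a spread well below 1% of the mean
```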
However, the accuracy of the measured TCSs is mainly determined by the possible systematical uncertainties immanently associated with the transmission-type experiment \cite{Bede71}.
The most serious problem arises due to the inability to discriminate against electrons which are scattered elastically, or with small energy losses, through small angles in the forward direction and which contribute to the measured transmitted current, resulting in the lowering of the measured TCS \cite{Sull11}.
The retarding field filter prevents only the electrons scattered inelastically in the forward direction to be detected together with those unscattered.
The amount by which the experimental TCS might be lowered due to the \textit{forward-angle scattering effect} can be roughly estimated based on an angle distribution of the scattered electrons, measured or calculated.
Taking into account theoretical elastic differential cross sections \cite{Barb13,Sier14,Trao18}, we found that the measured TCS can be lowered by 4--5\% around 3~eV, about 2--3\% within 30--100~eV, and 3--4\% above 200~eV.
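The estimate described above amounts to integrating the elastic DCS over the undiscriminated forward cone and comparing it with the TCS. A rough sketch, with a purely illustrative forward-peaked DCS (not data from Refs.~\cite{Barb13,Sier14,Trao18}) and a hypothetical acceptance half-angle:

```python
import math

def missed_fraction(dcs, theta_max, tcs, n_steps=1000):
    """Fraction of the TCS lost to the forward-angle scattering effect:
    elastically scattered electrons within the acceptance half-angle
    theta_max reach the detector as if unscattered.

    dcs(theta) : elastic differential cross section [m^2/sr]
    theta_max  : acceptance half-angle [rad]
    tcs        : total cross section [m^2]
    """
    # midpoint-rule integral of dcs * 2*pi*sin(theta) over [0, theta_max]
    h = theta_max / n_steps
    integral = sum(dcs((i + 0.5) * h) * 2.0 * math.pi * math.sin((i + 0.5) * h)
                   for i in range(n_steps)) * h
    return integral / tcs

# Toy DCS decaying over ~6 degrees, 2-degree acceptance, TCS of 50e-20 m^2:
frac = missed_fraction(lambda th: 5e-18 * math.exp(-th / 0.1),
                       math.radians(2.0), 50e-20)
print(frac)  # ~0.03, i.e. a few percent, like the estimates quoted above
```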
Another troublesome uncertainty in the electron-transmission experiment relates to unavoidable effusion of the target molecules through orifices of the reaction cell leading to:
(i) inhomogeneous target distribution, $n$, (especially in the vicinity of cell apertures) along the electron trajectory in the cell, and
(ii) incorrect determination of the effective path length, $L$, of electrons across the sample volume.
To estimate the uncertainty related to the factor $nL$ in the BBL formula we followed the method adopted from Ref.~\cite{Nels73} to the present experimental conditions.
The calculations show that both aforementioned effects nearly compensate and the estimated uncertainty of $nL$ is about 2--3\%, taking also into account the uncertainty in the pressure measurements.
The overall systematical uncertainty in our absolute TCSs, estimated as the sum of potential systematic errors of all quantities taken in the experiment, amounts up to 9--11\% between 0.6 and 2~eV, decreasing gradually to 6--8\% within 2--5~eV, and to about 5\% between 5 and 100~eV, increasing again to 7--8\% at higher energies.
It is to be noted that the reported TCS data are not corrected for the \textit{forward-angle scattering effect}.
Finally, we note that the observed shift in energy may cause the low-energy structures in the measured TCS to become less pronounced, especially if they are located on a steeply rising or falling side of the curve.
\section{Results and discussion}
\subsection{Pyridine, C$_5$H$_5$N}
Figure 2 shows the variation of absolute \textit{grand}\,-total electron-scattering cross section (TCS) for pyridine [C$_5$H$_5$N] measured in this work over the impact energy from 0.6 to 300~eV.
A comparison is made with recent experimental TCS data obtained above 13~eV \cite{Trao18}.
The numerical values of our TCSs are listed in Table I.
\begin{figure}[h]
\begin{center}
\includegraphics[width=16cm,height=12cm,angle=0]{Graph2.EPS}
\caption
{(Color online) Experimental total cross sections for the electron scattering from the pyridine (C$_5$H$_5$N) molecule:
full (red) circles, present, error bars correspond to overall experimental uncertainties;
open (olive) triangles, from Ref.~\cite{Trao18}.
}
\end{center}
\end{figure}
\begin{table}[h]
\caption{Absolute experimental electron-scattering total cross sections (TCSs) for the pyridine [C$_5$H$_5$N] molecule; in units of $10^{-20}$~m$^{2}$.}
\label{prop}
\begin{center}
\begin{ruledtabular}
\begin{tabular}{@{}*{10}{r}}
E (eV)&TCS & E (eV)& TCS &E (eV)&TCS & E (eV)& TCS & E (eV)& TCS\\
\hline
0.6 & 133 & 2.0 & 64.5 & 4.3 & 65.5 & 11.5& 64.9 & 60 & 44.6\\
0.7 & 131 & 2.1 & 64.2 & 4.6 & 65.9 & 12.5& 61.9 & 70 & 41.4\\
0.8 & 124 & 2.2 & 61.9 & 5.0 & 65.3 & 14.5 & 58.6 & 80 & 40.1\\
0.9 & 112 & 2.3 & 62.4 & 5.5 & 64.9 & 16 & 56.6 & 90 & 38.3\\
1.0 & 100 & 2.4 & 62.1 & 6.0 & 66.2 & 18 & 55.0 & 100 & 37.2\\
1.1 & 95.5 & 2.5 & 61.6 & 6.5 & 67.8 & 20 & 54.9 & 110 & 35.3\\
1.2 & 91.5 & 2.6 & 61.2 & 7.0 & 69.1 & 23 & 54.6 & 120 & 33.6\\
1.3 & 86.8 & 2.8 & 61.1 & 7.5 & 71.3 & 25 & 54.7 & 140 & 31.3\\
1.4 & 82.8 & 3.0 & 61.1 & 8.0 & 71.0 & 28 & 53.4 & 160 & 28.2\\
1.5 & 77.5 & 3.2 & 61.9 & 8.5 & 71.1 & 30 & 52.0 & 180 & 26.3\\
1.6 & 73.3 & 3.4 & 62.1 & 9.0 & 70.8 & 35 & 50.9 & 200 & 24.7\\
1.7 & 71.8 & 3.6 & 61.9 & 9.5 & 69.4 & 40 & 49.0 & 220 & 23.6\\
1.8 & 68.6 & 3.8 & 62.7 & 10 & 68.3 & 45 & 48.2 & 250 & 21.4\\
1.9 & 66.3 & 4.1 & 63.7 & 10.5 & 66.1 & 50 & 46.6 & 300 & 18.4\\
\end{tabular}
\end{ruledtabular}
\end{center}
\end{table}
The most distinct feature of the TCS energy curve for pyridine (Fig.~2) is visible below 2~eV;
at 0.6~eV, the lowest impact energy used, the magnitude of TCS reaches the highest value of above $130 \times 10^{-20}$~m$^{2}$ and then rapidly decreases to about $61 \times 10^{-20}$~m$^{2}$ in the vicinity of 2.8~eV, where TCS curve has its local minimum.
The rapid rise in TCS towards low impact energies may be predominantly due to the strong permanent electric dipole moment of the C$_{5}$H$_{5}$N molecule ($\mu = 2.215$~D, see Table II).
For polar targets the direct long-range (point-charge---electric-dipole) forces are dominant and at low impact energies contribute significantly to the scattering -- the higher the dipole moment, the more distinct the increase of the cross section towards thermal energies \cite{Itik77}.
Because the C$_{5}$H$_{5}$N molecule has a quite pronounced polarizability ($\alpha$ = $9.18 \times 10^{-30}$m$^3$, see Table II), some contribution to the TCS is also related to the interaction of the electron with the induced electric dipole moment of the target molecule.
A closer inspection of the descending side of the TCS curve, below 2~eV, reveals that around 0.7 and 1.2~eV the TCS energy function slightly changes its slope.
These two hardly perceptible TCS features are located in the energy range where distinct structures were visible in the experiments more suitable for the detection of weak variations of cross section \cite{Hueb68,Pisa73,Nenn75,Math76,Mode83}.
These structures were attributed to the formation of two shape resonant states, $\pi^{*}_1$(b$_1$) and $\pi^{*}_2$(a$_2$), taking place when the incoming electron is temporarily accommodated on the lowest normally unfilled $\pi^{*}$ orbitals of the C$_5$H$_5$N molecule in its electronic ground state.
Resonant maxima close to this energy range were also visible in the computed integral elastic cross sections \cite{Barb13,Sier14}.
We suppose that two weak features visible in our TCS between 0.6 and 1.5~eV are the demonstration of these resonant processes.
Above 3~eV the TCS energy curve shows a very broad enhancement peaking within 7.5 and 9~eV with the value of about $71 \times 10^{-20}$~m$^{2}$.
On the low-energy side of this enhancement, between 4 and 5~eV, a weak hump in the TCS curve is perceptible.
This feature corresponds to the 4--5~eV structure observed in the low-energy transmission experiments \cite{Nenn75,Math76,Mode83} and is related to the formation of the third resonant state, $\pi^{*}_3$(b$_1$), located around 4.5~eV.
In the computed elastic cross sections \cite{Barb13,Sier14} that resonant structure appears at higher energy, within 5--6~eV.
Worth noting is also some flattening of the TCS curve between 7 and 9.5~eV.
In this energy regime, the transmission spectrum \cite{Math76} and anion yield curves \cite{Rysz17} also suggested the presence of some resonances.
At electron impact energies above 10~eV, the TCS decreases systematically with energy increase down to about $18 \times 10^{-20}$~m$^{2}$ at 300~eV.
Only around 25~eV some change in the slope of the TCS curve is discernible.
The shoulder in this energy region of TCS is a feature which is common for complex hydrocarbons.
Figure 2 shows also that, in the common energy range of compared experiments, the present TCS results are in reasonable agreement with the recent data from Ref.~\cite{Trao18}, although some differences do exist.
The most pronounced difference in the magnitude is visible between 30 and 60~eV where our TCS is higher by nearly 20\%, somewhat more than the combined declared uncertainties.
\subsection{Comparative studies}
\subsubsection{\textit{Experimental total cross sections for pyridine and benzene}}
In this section, we examine how the replacement of one CH group in the benzene ring with the nitrogen atom is reflected in the TCS energy dependence.
For this purpose, in Fig.~3, the present experimental TCS for pyridine is compared with the experimental TCS curves for benzene: at very low energies the TCS was taken by Gulley et al.~\cite{Gull98}, and that above 0.6~eV was obtained in our laboratory \cite{Moze96}.
The schematic geometry of both compared compounds is shown in Fig.~1; some of their parameters are given in Table~II.
\begin{figure}[h]
\begin{center}
\includegraphics[width=16cm,height=12cm,angle=0]{Graph3.EPS}
\caption{(Color online) Comparison of experimental total cross sections for electron scattering from:
pyridine (C$_5$H$_5$N), full (red) circles, present and
benzene (C$_6$H$_6$), open (olive) triangles -- interpolation to guide the eyes, based on Ref.~\cite{Gull98}; open (blue) boxes, from Ref.~\cite{Moze96}.
}
\end{center}
\end{figure}
Figure 3 shows that, with respect to the shape, the TCS energy functions for benzene and pyridine look quite similar.
Below 2~eV, the compared curves rapidly decrease with an energy increase, and above 3~eV, they show a broad enhancement peaking near 9~eV.
This similarity in the shape of TCS curves for benzene and pyridine is at low impact energies somewhat intriguing.
For pyridine, one would expect a rapid rise in TCS toward the lowest energies because such behavior is rather typical for targets with high permanent electric dipole moment ($\mu_{pyridine} \sim 2.2$~D).
On the other hand, for benzene (with $\mu \simeq 0$~D) early TCS experiments \cite{Sueo88,Moze96,Mako03} indicated that slightly below 1~eV (up to 0.6~eV) the TCS is nearly constant, like for many nonpolar targets.
However, the low-energy experiment \cite{Gull98} clearly showed that below 0.6~eV
the TCS for C$_6$H$_6$ starts to increase rapidly with the energy decrease to the thermal energy region.
Such low-energy TCS behavior in benzene might be attributed to the formation of parent negative ions with a lifetime of about 1~$\mu$s, which (like for the SF$_6$ molecule; cf. Ref.~\cite{Chri04}) would lead to a large cross section at near-zero energy.
However, a detailed study of electron attachment to benzene \cite{Fiel01} provides no evidence for the formation of relatively long-lived benzene anions close to zero energy.
An alternative explanation for such a rapid rise in the cross section for benzene is based on the virtual-state scattering model \cite{Gian98,Fiel01,Barb17}.
Both TCS energy curves for benzene show one feature located between 1 and 2~eV related to electron capture into the lowest unfilled degenerate $\pi^{*}$(e$_{2u}$) orbital yielding temporary anion state in this energy regime (see Ref.~\cite{Barb17} and references therein).
The replacement of the CH group by the nitrogen atom in the benzene ring removes this degeneracy and two shape resonant states are formed in pyridine, which is reflected in the change of the slope of the TCS curve near 0.7 and 1.2~eV.
For benzene the TCS shows a shoulder between 4 and 6~eV, while a weak hump is located around 4.6~eV in the TCS for pyridine.
Both these structures were also associated with the formation of a resonant state of mixed shape and core-excited character.
With respect to the magnitude, the TCS for pyridine is generally higher than that for benzene.
At 0.6~eV, the TCS for pyridine is larger by a factor of about 5 and that difference is mainly related to the direct interaction of the incoming electron with the polar pyridine molecule.
For higher energies the difference in TCS magnitude systematically decreases and above 200~eV both TCS curves tend to merge.
\subsubsection{\textit{Calculated cross sections for pyridine [C$_5$H$_5$N] and its halogenated derivatives: 2-chloropyridine [2-C$_5$H$_4$ClN] and 2-bromopyridine [2-C$_5$H$_4$BrN]}}
Figure~4 shows integral elastic (ECS) and ionization (ICS) cross sections computed for pyridine and its derivatives in which one H atom (the next to N atom) was substituted with the Cl or Br atom: 2-chloropyridine [2-C$_5$H$_4$ClN] and 2-bromopyridine [2-C$_5$H$_4$BrN];
for the schematic geometry of those compounds see Fig.~1.
\begin{figure}[h]
\begin{center}
\includegraphics[width=16cm,height=12cm,angle=0]{Graph4.eps}
\caption{(Color online) Cross sections calculated for pyridine and its halogenated derivatives.
Elastic cross section: dash-dot (red) line, pyridine; dash-dot-dot (blue) line, 2-chloropyridine; short dash (green) line, 2-bromopyridine;
ionization cross section: full (red) line with bullets, pyridine; dashed (blue) line with triangles, 2-chloropyridine; dotted (green) line with boxes, 2-bromopyridine;
total: full (red) line pyridine; dashed (green) line, 2-chloropyridine; dotted (blue) line, 2-bromopyridine.
Computed total cross section for pyridine is compared to respective experimental data, full (red) circles, present, and open (green) triangles from Ref.~\cite{Trao18}.
}
\end{center}
\end{figure}
The computed cross sections were obtained using simple methods: the ECS in the additivity rule approximation \cite{Raj91} with the static+polarization interaction taken into account, and the ICS with the binary-encounter-Bethe approach \cite{Hwan96}.
All quantities necessary in the calculations have been obtained at the HF and OVGF levels with the Gaussian code \cite{Fris09} and the 6-311++G(2d,2p) Gaussian basis set.
The theoretical approaches and computational procedures used were described in detail in our previous works \cite{Moze12,Szmy18}, and therefore are not repeated here.
For considered molecules the sum of ECS and ICS, which represents the computed total cross section, is also depicted in Fig.~4.
A detailed comparison of the calculated ionization and total (\mbox{ECS+ICS}) cross sections for the pyridine molecule with available theoretical and experimental data is shown in Figure~5.
Our computed ionization cross section is in reasonable agreement with experimental data of Bull et al.~\cite{Bull14}.
Although our data are lower than experimental results of Jiao et al.~\cite{Jiao06} and theoretical data~\cite{Gupt14,Trao18} they are still within declared combined experimental and computational uncertainties.
For pyridine, above 40~eV, the computed total cross section (\mbox{ECS+ICS}) is in quite good agreement with our experimental TCS; it is also in reasonable accord with the high-energy experimental and theoretical data from Ref.~\cite{Trao18}.
\begin{figure}[h]
\begin{center}
\includegraphics[width=16cm,height=12cm,angle=0]{Graph5.eps}
\caption{(Color online) Comparison of cross sections calculated for pyridine molecule.
Theoretical ionization cross section: full (red) line, present; dotted line~\cite{Gupt14}; dashed line~\cite{Trao18}.
Experimental ionization cross section: full (black) circles~\cite{Jiao06}; full line with squares~\cite{Bull14}.
Total cross section: full (red) circles, present experimental data; open (green) triangles, experimental data from Ref.~\cite{Trao18}; dash-dot (red) line, present calculations; short dotted line, IAM-SCAR calculations~\cite{Trao18}; short dashed line, IAM-SCAR+I+R calculations~\cite{Trao18}.
}
\end{center}
\end{figure}
\begin{table}[h]
\caption {Location of the low-energy resonant-like features, $E_{\rm r}$, perceptible in the TCS curves compared in Fig.~3.
Listed are also selected electric parameters for considered compounds (from Ref.~\cite{Lide95}):
the permanent dipole moment, $\mu$, and the static dipole polarizability, $\alpha$.
}
\label{prop}
\begin{center}
\begin{ruledtabular}
\begin{tabular}{@{}*{4}{l}}
& $E_{\rm r}$ & $\mu$ &$\alpha$ \\
Molecule & (eV) &(Debye) &($10^{-30}$m$^3$) \\
\hline
pyridine [C$_5$H$_5$N] &($\sim$0.7, $\sim$1.2, 4.6, 7--9.5)\footnotemark[1] & 2.215 & 9.18 \\
benzene [C$_6$H$_6$] & (1.15, 4.8, 8--9)\footnotemark[2] & 0 & 10.32 \\
\end{tabular}
\end{ruledtabular}
\footnotetext[1]{Present work.}
\footnotetext[2]{Reference~\cite{Moze96}.}
\end{center}
\end{table}
\section{Conclusions}
We have presented our experimental absolute total cross sections for electron scattering by the pyridine [C$_5$H$_5$N] molecule over wide energy range from 0.6 to 300~eV.
TCS data measured at impact energies below 13~eV are presented for the first time.
At low impact energies, the TCS rapidly decreases with energy increase and has a minimum located near 2.8~eV.
Below the minimum, in the vicinity of 0.7 and 1.2~eV, weak changes in the slope of the TCS energy curve are discernible.
Above 3~eV the TCS energy dependence shows a very broad, highly asymmetric enhancement peaking between 7.5 and 9~eV.
On the low-energy side of this enhancement, between 4 and 5~eV, a weak hump is clearly visible.
Based on results of earlier low-energy experiments and calculations, the TCS features observed around 0.7, 1.2 and 4.6~eV were attributed to the formation of short-living negative ion states.
Our TCS energy dependence is in reasonable agreement with very recent TCS measurements \cite{Trao18}.
To study how the replacement of the CH group in the benzene ring with the nitrogen atom is reflected in the cross-section energy dependence, we have compared the TCSs for the pyridine and benzene molecules.
In addition, for pyridine and its halogenated derivatives: 2-chloropyridine [2-C$_5$H$_4$ClN] and 2-bromopyridine [2-C$_5$H$_4$BrN], integral elastic (ECS) and ionization (ICS) cross sections have been calculated at intermediate and high electron-impact energies in the additivity rule approximation and the binary-encounter-Bethe approach, respectively.
For pyridine the sum of ECS and ICS is in good agreement with the measured TCS above 40~eV.
That agreement suggests that our calculated \mbox{ECS+ICS} results can reasonably represent total cross section values also for 2-chloropyridine and 2-bromopyridine which may be useful for further applications.
Monohalopyridines are considered as guest molecules in the preparation of superconducting crystals \cite{Prok15}.
\begin{acknowledgments}
This work has been supported in part by the Polish Ministry of Science and Higher Education (MNiSzW Project 2017-2018).
Sylwia Stefanowska kindly acknowledges the support of the Polish Ministry of Science and Higher Education within the Diamond Grant program (Project no. DI2015 018945).
Numerical computations have been performed at the Academic Computer Center (TASK) in Gda{\'n}sk.
\end{acknowledgments}
\section{Introduction}
\label{sec:intro}
Metallic surfaces become subjected to unprecedentedly high electric fields in high-power devices of improved efficiency but compact dimensions. Vacuum is known for its very high insulating properties: the higher the vacuum, the higher the electric fields that can be applied between two metal plates before an arc bridges them. Hence there is always a certain threshold voltage (known as the breakdown voltage) at which a conducting medium carrying strong arcing currents appears even in ultra-high vacuum. The Compact Linear Collider~(CLIC)~\cite{clic2016updated}, a proposed next-generation particle accelerator at CERN, is one of the important examples where tiny vacuum arcs may affect the performance efficiency of the entire machine. This room-temperature Cu ``tube'', spanning from a few millimeters in inner diameter to~50 kilometers in length, is designed to enable collisions between electrons and positrons at energies of up to~3\,TeV. Both types of particles are accelerated to the required energies by high-gradient electromagnetic fields within Cu accelerating structures. The bursts of vacuum arcs consume power and divert bunches of accelerated particles, as well as damage the accelerating structures themselves. The desirable reduction of the occurrence of these current bursts is difficult, since their mechanisms are not completely known.
Under a high electric field some electrons always leak into vacuum through the field emission process. These initially low currents rise by many orders of magnitude when a plasma builds up above the surface~\cite{zhou2020spectroscopic}. To form a plasma, particles of both negative (electrons) and positive (ions) charge are needed. The positive ions are thought to originate from the surfaces exposed to the electric field. The electric field magnitudes applied in vacuum arcing experiments---in the hundreds of~MV\,m$^{-1}$~\cite{saressalo2020classification}---are too low for direct field evaporation, which takes place in the~10--50\,GV\,m$^{-1}$ range~\cite{kelly2012atom}. Hence it has been suggested that a feedback loop of self-reinforcing growth of a surface protrusion must exist~\cite{pohjonen2011dislocation, kyritsakis2018thermal}. The sharper the tip, the stronger the field at its top. The enhanced field induces stronger currents, eventually leading to melting and subsequent evaporation of neutral atoms and atom clusters into vacuum. However, the protrusion growth is expected to be too fast for experimental observation; hence, theoretical and computational models are developed to understand the mechanisms governing the process of surface protrusion growth.
Microscopy of surfaces that have experienced multiple breakdowns reveals a large number of breakdown spots in the shape of solidified molten regions known as craters, see e.g. Refs.~\cite{shipman2015experimental,saressalo2020classification,saressalo2021plasma}. The crater edges are jagged, and thus they themselves can function as field-enhancing features~\cite{saressalo2020classification}. However, the field enhancement on such features is weak, as the features are generally blunt---the aspect ratio of the crater-edge features has not been reported to be sufficiently high. Hence, these frozen-in features cannot initiate a feedback loop that can result in a subsequent breakdown. Thus some additional mechanisms of growth and sharpening of protrusions must exist.
Under an applied electric field, any surface asperity induces a local field enhancement that is estimated as $F_{\text{loc}}=\beta F_0$, where $\beta \approx h/r$, the geometric aspect ratio of the surface asperity~\cite{edgcombe2001enhancement,edgcombe2001microscopy,djurabekova2011atomistic}. Naturally, the enhanced field may result in an enhanced Maxwell tensile stress, which will locally affect the atomic dynamics at this field-enhancing surface feature~\cite{parviainen2015atomistic}. However, this is not the only effect which can be caused by the applied electric field at the surface. Already in~1975, Tsong et al. proposed a mechanism where atomic diffusion is biased toward stronger electric fields in the presence of an electric field gradient. This bias is expected due to the alteration of the polarization characteristics of surface atoms, such as dipole moments and polarizability~\cite{tsong1975direct}. Recently we have improved this approach by applying the theory to changes in the dipole moment and the polarizability of the entire surface due to a single jump of a migrating adatom~\cite{kyritsakis2019atomistic}. The bias can be expressed as a change in the migration energy barrier $E_\mathrm{m}$:
\begin{equation}
\label{eq:modbarrier}
\Delta E_\mathrm{m} = -\mathcal{M}_\mathrm{sl} F - \frac{\mathcal{A}_\mathrm{sl}}{2}F^2 - \mathcal{M}_\mathrm{sr}\gamma l - \mathcal{A}_\mathrm{sr}\gamma l F
\end{equation}
Here, $F$ is the strength of the electric field at the initial lattice site of the atom, and $l$ is the distance between the initial site and the saddle point, i.e. the highest-energy point along the minimum energy path of the jump. $\mathcal{M}$ is the dipole moment and $\mathcal{A}$ is the polarizability of the system; these electrical parameters are material-dependent. The subscript sl denotes the difference between the saddle point and the lattice site (e.g. $\mathcal{M}_\mathrm{sl}=\mathcal{M}_\mathrm{s}-\mathcal{M}_\mathrm{l}$), and sr the difference between the saddle point and the reference system of a flat substrate without the adatom. Note that while the electric field acts on the material along the surface normal (the direction of the local field), the electric field gradient acts on the surface in the direction perpendicular to the surface normal and, hence, to the electric field. The barrier is lowered for jumps in the direction of the gradient, and raised for jumps in the opposite direction.
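As a concrete illustration, the barrier shift of equation~\eref{eq:modbarrier} can be evaluated directly. The sketch below is not part of the simulation code; the parameter values are hypothetical placeholders, chosen only to show that a positive gradient lowers the barrier for jumps toward the stronger field and raises it for the reverse jump.

```python
def barrier_shift(F, gamma, l, M_sl, A_sl, M_sr, A_sr):
    """Change in the migration barrier (eV) for local field F (V/A), gradient
    gamma (V/A^2) and jump projection l (A); M and A are the dipole-moment and
    polarizability differences defined in the text (placeholder values here)."""
    return -M_sl * F - 0.5 * A_sl * F ** 2 - M_sr * gamma * l - A_sr * gamma * l * F

# Hypothetical parameter values, for illustration only:
params = dict(M_sl=0.1, A_sl=0.5, M_sr=0.2, A_sr=0.3)
down = barrier_shift(F=0.5, gamma=0.015, l=1.8, **params)   # jump along the gradient
up = barrier_shift(F=0.5, gamma=0.015, l=-1.8, **params)    # reverse jump
```

Reversing the jump direction flips the sign of $l$, so the gradient terms change sign and the barrier becomes higher for the reverse jump.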
By the mechanism of biased diffusion under electric field gradient, a field enhancing feature (a tip for short) would tend to grow taller and sharper, thus also producing a higher factor $\beta$. This phenomenon has been observed in kinetic Monte Carlo~(KMC) simulations of W surface by Jansson et al.~\cite{jansson2020tungsten}.
Although a KMC approach to model surface diffusion processes is very attractive, we have previously identified numerous challenges in KMC simulations of the Cu surface~\cite{baibuz2018migration,kimari2020application}. A significant part of these issues is caused by the assumption of a rigid lattice with fixed lattice sites; on the fcc~\hkl{111} surface, for instance, the off-lattice hexagonal close-packed~(hcp) sites often have a stability comparable with a regular lattice site. Nevertheless, the understanding of the breakdown phenomenon requires simulation results under these conditions.
In this work, we study the drift of Cu adatoms under an inhomogeneous electric field, with the field gradient arising from existing surface features. We carry out these simulations using molecular dynamics~(MD), since it offers the flexibility of a dynamically evolving system with all positions accessible within the simulation cell. To introduce the effect of electric fields and to overcome length- and time-scale limitations, we modified the classical MD by coupling it to a finite element method~(FEM) field solver~\cite{veske2018dynamic} and by applying collective variable-driven hyperdynamics acceleration~\cite{bal2015merging}. To verify the validity of the~MD-FEM electrostatic model, we estimate the dipole moment and polarizability characteristics based on the diffusion results, and compare them to the corresponding values calculated with density functional theory~(DFT). All simulation details are described in section~\ref{sec:methods}. Section~\ref{sec:results} describes the simulation results, which are further discussed in section~\ref{sec:discussions}. Finally, conclusions are drawn in section~\ref{sec:conclusions}.
\section{Methods}
\label{sec:methods}
\subsection{Molecular dynamics}
\label{subsec:methods}
We simulated Cu self-diffusion along a nanowire surface with molecular dynamics~(MD), using the LAMMPS software~\cite{plimpton1995fast}. The MD region consisted of a slice of nanowire with periodic boundary conditions along the length of the wire, which was set in the~\hkl<110> direction in terms of Miller indices. The thickness of the nanowire slice was~8 interatomic distances, i.e.~\raisebox{0.5ex}{\texttildelow} 20\,\AA, and the radius of the wire ranged from~10 to~20\,\AA. Two adatoms were added on the surface of the wire and tracked for the total distance they traveled along the wire length during the simulation. We studied diffusion on the three surfaces with the lowest surface energy: the~\hkl{100}, the~\hkl{110} and the~\hkl{111} surface. The cross-section of the wire was roughly circular in the~\hkl{100} case, but modified to increase the area of the facet of interest in the~\hkl{110} and~\hkl{111} cases, to reduce the probability of atoms leaving this facet, which might affect the statistical analysis. The geometries with different cross-sections of the wires are shown in figure~\ref{fig:sections}. Ten repetitions of each case were conducted with different random seeds and randomized positions of the adatoms on the desired surfaces.
\begin{figure*}
\centering
\hfill
\begin{subfigure}{0.2\linewidth}
\includegraphics[width=\linewidth,trim={2cm 0 2cm 0}, clip]{100_section.png}
\end{subfigure}
\hfill
\begin{subfigure}{0.21\linewidth}
\includegraphics[width=\linewidth,trim={2cm 0 2cm 0}, clip]{110_section.png}
\end{subfigure}
\hfill
\begin{subfigure}{0.28\linewidth}
\includegraphics[width=\linewidth]{111_section.png}
\end{subfigure}
\hfill{}
\caption{Cross-sections of the MD-simulated nanowires of this work. The added adatoms are colored red, with the rest of the atoms copper-colored. Adatoms were added either on the~\hkl{100} (left), the~\hkl{110} (middle), or the~\hkl{111} (right) surfaces of the wire.}
\label{fig:sections}
\end{figure*}
The atomic interactions were defined by an MD/MC-CEM potential by Stave et al.~\cite{stave1990corrected}, reportedly optimized specifically for Cu surfaces~\cite{sinnott1991corrected}. We used a~4\,fs timestep, and a Nosé-Hoover thermostat set to~300\,K. Simulation systems were thermalized for~400\,ps before any other modifications. The atoms immediately around the axis of the nanowire were fixed to prevent any overall drift due to external forces.
The total runtimes of the simulations were~10--700\,\textmu s when collective variable-driven hyperdynamics~(CVHD; see section~\ref{subsec:cvhd}) was used, and~10--100\,ns otherwise.
Beyond the~20\,\AA\ thick MD region, the simulated system was extended with a continuum surface mesh, to allow a more realistic calculation of the electric field with the finite elements method~(FEM), described in the section \ref{subsec:fem}.
\subsection{Finite elements method}
\label{subsec:fem}
We obtained the distribution of the electric field in our simulations by using the Femocs library~\cite{veske2018dynamic}, a finite element method~(FEM) solver. The library has interfaces to the LAMMPS and Parcas~\cite{nordlund1997point} MD codes and to the Kimocs~\cite{jansson2016long} kinetic Monte Carlo~(KMC) code, and it can also be compiled as a standalone program. Femocs translates the applied electric field into a potential that follows the metallic equipotential surfaces, and calculates the surface charges. The charge is distributed to surface atoms, which consequently experience electrostatic forces added within the MD algorithm.
Femocs can easily extend the solver mesh beyond the atomic system, which significantly increases the simulation domain compared to the limited length scales of MD. To emulate surface diffusion under an electric field \emph{gradient}, we built the Cu system with a nanotip placed on a surface. The nanotip is sufficiently tall to enhance the electric field toward its sharp top, imposing a ``natural'' gradient along its length. Specifically, we are interested in the $rz$-component of the gradient tensor $\gamma$, i.e. the partial derivative of the radial component of the electric field with respect to the (axial) $z$-coordinate:
\begin{equation}
\label{eq:gammatensor}
\gamma_{rz} = \frac{\partial F_r}{\partial z}.
\end{equation}
Two different nanotip geometries were used: a tall,~93\,nm tip, and a short,~5\,nm one, with a cross-section shape similar to the MD region in each case. In the case of the nearly elliptical~\hkl{100} nanowire, we used an elliptical extension cross-section; for the more flattened~\hkl{110} wire, we applied a similar flattening to the extension cross-section; and for the~\hkl{111} wire, we used a diamond-shaped cross-section for the extension. The MD region was placed either at the bottom, the middle or the top of the tall nanotip, or in the middle of the short one. See figure~\ref{fig:extension} for an example of the extended tall nanotip system. While the MD region has periodic boundaries in the $z$-direction and open boundaries in the horizontal directions, \emph{the extended simulation box has open boundaries in the $z$-direction, and periodic boundaries in the horizontal directions}.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{100_extension.png}
\caption{Extended simulation system, with the MD region (enlarged in the inset) in copper color and the static, continuous extension in gray. The system has periodic boundaries in the horizontal directions, and open boundaries in the $z$-direction. The MD region, on the other hand, has a periodic boundary across its own $z$-span, independently of the extension.}
\label{fig:extension}
\end{figure}
We want to frame the simulation setting in the following way. The property of interest is the drift of single adatoms along the $z$-coordinate. To collect statistics, one option would be to add adatoms e.g. in the middle of the wire, and remove any atoms that exit through the top or the bottom of the MD region. What we have done here, instead, is a more convenient way to accomplish this by utilizing periodic boundaries. Atoms crossing the MD boundary can be, for all intents and purposes, considered to be removed from the system. At the same instant, a new adatom is added in the system from the opposing boundary---the fact that this is technically the same adatom does not make a difference. We are \emph{not} simulating the diffusion of atoms along a \emph{long} extended region, but rather investigating the \emph{local bias} that is imposed on the adatoms by the electric field gradient in the thin MD slice.
We want to emphasize that when an atom is arbitrarily close to the top (bottom) of the MD region, it does not see any fields present at the bottom (top) of the region. Instead, it sees the fields present in the extended system above (below) the MD region. Once the crossing of the border happens, the atom lands in a new environment, forgetting the fields it just left. Even if the migration behavior were anomalous precisely at the border of the MD region, this would constitute a small error in the total drift through numerous MD region heights. Furthermore, regardless of whether the possible boundary anomaly was attractive (adatoms are less likely to move away from the boundary) or repulsive (adatoms are less likely to cross the boundary), the error is expected to be in the \emph{downward} direction (reducing the drift). Thus, any error in the boundary will not result in a false positive result for the observation of biased diffusion.
To solve the electric field, Femocs first constructs a surface mesh based on the atomic coordinates and the continuous extension. It also constructs a 3D mesh for the vacuum. Neglecting any space charge that would be due to electron emission or Cu ions detached from the surface, the Laplace equation holds:
\begin{equation}
\label{eq:laplace}
\nabla^2 \Phi = 0
\end{equation}
$\nabla^2$ is the Laplace operator, which gives the divergence of the gradient of the electrostatic potential $\Phi$, i.e. (minus) the divergence of the corresponding electric field. In the absence of any net charge in the vacuum, this divergence has to equal zero everywhere.
The boundary conditions~(BC) used for the Laplace equation are in this case:
\begin{enumerate}
\item Constant $\Phi$ everywhere on the surface, since the Cu is a conductor (Dirichlet BC).
\item Constant $\nabla \Phi$ at the top of the extended simulation box due to a far-away anode (Neumann BC).
\item No electric flux through the extended simulation box boundaries.
\end{enumerate}
All these boundary conditions are implemented in Femocs. It solves the Laplace equation with FEM in the 3D mesh generated in the vacuum, bounded by the surface.
Solving for the potential $\Phi$ lets us distribute the surface charges to individual surface atoms (see Ref.~\cite{veske2018dynamic} for details) and calculate the electric field
\begin{equation}
\label{eq:field}
\mathbf{F} = -\nabla \Phi
\end{equation}
The electric field exerts forces on the charged atoms. These forces are finally exported back to the MD algorithm to modify the atomic dynamics accordingly.
The applied electric field range studied in this work starts from~100\,MV/m. The upper limit of the field depends on the geometry: due to the field enhancement near the tip of the~93\,nm tall protrusion, the atomic structure disintegrates at applied electric fields above~1\,GV/m. At the bottom of the protrusion, as well as in the~5\,nm protrusion, the field could be increased up to~5\,GV/m. For reference, the nominal accelerating gradient in the CLIC device is~100\,MV/m, corresponding to fields of~200\,MV/m or more on the surfaces surrounding the beam~\cite{saressalo2020classification}.
The direction of the applied field was exactly opposite to the $z$-axis, so that the simulation model acted as the cathode of a two-electrode electrostatic system. However, we note that we neglect the processes of electronic heating and space charge effects at the cathode to be able to focus on biased diffusion due to an electric field; hence, the choice of the field direction is not critical for the current simulations.
\subsection{Collective variable-driven hyperdynamics}
\label{subsec:cvhd}
Diffusion process timescales are often beyond the range accessible by MD. In the scope of this work, the timescale problem applies to the Cu~\hkl{100} surface. On the~\hkl{110} and the~\hkl{111} surfaces the potential energy surface felt by adatoms is smooth enough for fairly easy transitions between the lattice sites. However, on the~\hkl{100} surface the adatoms sit in deep potential energy wells, with approx.~0.5\,eV activation energy for migration.
To assess the effect of biased diffusion on all three most commonly appearing surfaces, we use the collective variable-driven hyperdynamics~(CVHD) acceleration~\cite{bal2015merging} for the surface with the~\hkl{100} orientation. Since the details of the algorithm vary slightly between different implementations~\cite{fukuhara2020accelerated}, we briefly review the basics of this acceleration method and its parameters for better reproducibility of the presently reported results.
In CVHD, the potential energy of the system is biased by a term that depends on a one-dimensional collective variable~(CV) $\eta$:
\begin{equation}
\label{eq:Vbias}
V^*(\mathbf{R}) = V(\mathbf{R}) + \Delta V(\eta)
\end{equation}
The term $V(\mathbf{R})$ is the regular interatomic potential of the system, a function of the atomic coordinates $\mathbf{R}$, and $\Delta V(\eta)$ is the added CVHD bias. The $\eta$ variable is chosen such that it is able to detect the rare events of interest, i.e. $\eta$ should be close to zero when the system as a whole is near equilibrium, and close to unity when the system is almost at the boundary between the two states separated by the event; $\Delta V(\eta)$ is an approximately monotonically decreasing function of $\eta$, pushing the system away from equilibrium. In the present case, the events are migration jumps of individual adatoms between neighboring lattice sites. We adopt the ``bond-breaking'' based CV following the suggestion by Bal and Neyts~\cite{bal2015merging}. Starting from the interatomic distances between all nearest neighbors~(NN) in the system, $r_i$, the CV is defined in the following way. First, each $r_i$ is associated with a \emph{local distortion}:
\begin{equation}
\label{local_distortion}
\chi_i =
\left\{\begin{array}{@{}rl@{}}
0 , &\text{if}\quad r_i \leq r_\mathrm{min}\\
\frac{r_i-r_\mathrm{min}}{r_\mathrm{max}-r_\mathrm{min}} , &\text{if}\quad r_\mathrm{min} < r_i \leq r_\mathrm{max}\\
1 , &\text{if}\quad r_i > r_\mathrm{max}
\end{array}\right.
\end{equation}
Here, $r_\mathrm{min}$ and $r_\mathrm{max}$ are user-defined parameters, bounding the interval within which $r_i$ is expected to stretch due to thermal motion. $r_\mathrm{min}$ is usually set to the equilibrium distance between the nearest neighbors at the given temperature of the simulated domain, and $r_\mathrm{max}$ is set to the ``broken bond'' distance, at which one of the atoms in the pair $i$ has jumped away from its original lattice site.
The next step toward a \emph{collective} variable is to define the global distortion as a $p$-norm of the local distortions:
\begin{equation}
\label{eq:chi}
\chi = \left( \sum_i \chi_i^p \right)^\frac{1}{p}
\end{equation}
$p>1$ is another user-defined parameter, designed to emphasize the effect of large individual local distortions $\chi_i$ within the global distortion, i.e. to make the method more sensitive to individual processes anywhere in the system. The higher the value of $p$, the more sensitive the method is to significant distortions.
Finally, to bound the collective variable to interval $[0,\,1]$, the global distortion is passed through the cosine function:
\begin{equation}
\label{eq:eta}
\eta =
\left\{\begin{array}{@{}rl@{}}
\frac{1}{2}\left[1 - \cos(\pi \chi^2) \right], & \text{if}\quad\chi \leq 1\\
1 , & \text{if}\quad\chi > 1
\end{array}\right.
\end{equation}
This is the CV that appears in the bias potential $\Delta V(\eta)$. The nature of this CV is such that it will have small values when \emph{all} the NN distances $r_i$ in the system are close to $r_\mathrm{min}$, and large values when at least one $r_i$ is stretched. When any $r_i > r_\mathrm{max}$, the CV will cap to exactly~1. By monitoring this capping, transitions in the system can be detected and the NNs re-assigned when necessary.
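The chain of equations~\eref{local_distortion}--\eref{eq:eta} condenses into a few lines. The sketch below is illustrative and independent of the actual LAMMPS implementation; it uses the parameter values of table~\ref{tab:CVHD} and fabricated bond-length lists.

```python
import numpy as np

R_MIN, R_MAX, P = 2.56466, 3.30, 20.0   # r_min, r_max (A) and p, as in table 1

def local_distortion(r):
    """chi_i: 0 below r_min, 1 above r_max, linear in between."""
    return np.clip((np.asarray(r) - R_MIN) / (R_MAX - R_MIN), 0.0, 1.0)

def collective_variable(r):
    """p-norm of local distortions, mapped through the cosine to eta in [0, 1]."""
    chi = (local_distortion(r) ** P).sum() ** (1.0 / P)
    return 1.0 if chi > 1.0 else 0.5 * (1.0 - np.cos(np.pi * chi ** 2))

# One stretched bond among many equilibrium bonds dominates eta,
# while a single broken bond (r > r_max) caps eta at exactly 1:
eta_stretched = collective_variable([2.57] * 100 + [3.2])
eta_broken = collective_variable([2.57] * 100 + [3.5])
```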
The bias potential is constructed dynamically during the simulation. In the beginning, $\Delta V(\eta) = 0$ everywhere. At user-defined intervals $\tau$, the current value of $\eta=\eta_\tau$ is calculated. At this location, \emph{a small Gaussian hill} is added, and $\Delta V(\eta)$ will become
\begin{equation}
\label{eq:DeltaV}
\Delta V(\eta) = w \exp\left[ -\frac{(\eta - \eta_\tau)^2}{2\delta^2} \right]
\end{equation}
After this, at every step $k\tau$, ($k=1,\,2,\,3,\,\ldots$), a similar Gaussian hill is added at the location $\eta_{k\tau}$. Parameters $w$ and $\delta$ are defined by the user and they control the height and the width of the potential energy ``packages'' that are added to $\Delta V$. This way, the bias potential grows slowly over time as illustrated in figure~\ref{fig:DeltaV}.
\begin{figure}
\centering
\input{DeltaV.tex}
\caption{Schematic of the evolution of the bias potential over $10\tau$ steps. Small Gaussian hills are summed together to form the total bias potential function (the topmost line, in solid red). The bias pushes the system toward higher values of $\eta$, i.e. away from the current state.}
\label{fig:DeltaV}
\end{figure}
In the well-tempered metadynamics~\cite{barducci2008well} variant of CVHD, used also in this work, the height of the Gaussian hills is modified so that lower
energies are added in $\eta$ regions where the bias is already high:
\begin{equation}
\label{eq:temper}
w_k = w \exp\left(-\frac{\Delta V(\eta_{k\tau})}{k_\mathrm{B} \Delta T}\right)
\end{equation}
where $k_\mathrm{B}$ is the Boltzmann constant and $\Delta T$ is an algorithmic bias temperature---it has no connection to the physical temperature of the system.
The final equation required for the CVHD acceleration is the stretching of time due to the bias potential. Every timestep of length $\Delta t_\mathrm{MD}$ (used in the integration of the equations of motion) is multiplied by a bias-dependent factor to account for the time that would elapse if the transition had happened naturally due to thermal vibrations:
\begin{equation}
\label{eq:CVHD_time}
\Delta t_\mathrm{CVHD} = \Delta t_\mathrm{MD} \left\langle \exp\left(\frac{\Delta V(\eta)}{k_\mathrm{B}T}\right) \right\rangle
\end{equation}
where $T$ is the temperature of the simulated system. In other words, the more bias has been accumulated, the faster the time advances. This somewhat resembles KMC dynamics, where the system outright skips any movement between interesting jumps.
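A minimal 1D sketch of the bias build-up and the time stretching (equations~\eref{eq:DeltaV}--\eref{eq:CVHD_time}) may clarify the bookkeeping. This is not the Bal--Neyts LAMMPS implementation: the sampled $\eta$ trajectory is fabricated, one boosted increment is taken per deposition interval for brevity, and the per-step exponential is used in place of the ensemble average of equation~\eref{eq:CVHD_time}. Parameter values follow table~\ref{tab:CVHD}.

```python
import math

kB = 8.617e-5                     # Boltzmann constant, eV/K
w, delta = 0.005, 1.0             # Gaussian hill height (eV) and width
T, dT = 300.0, 2000.0             # physical temperature and bias temperature, K
hills = []                        # deposited (centre, height) pairs

def bias(eta):                    # total bias potential Delta V(eta)
    return sum(hk * math.exp(-(eta - ck) ** 2 / (2 * delta ** 2)) for ck, hk in hills)

def deposit(eta):                 # well-tempered hill: height shrinks where bias is high
    hills.append((eta, w * math.exp(-bias(eta) / (kB * dT))))

dt_md = 4e-3                      # deposition interval tau = 4 ps, in ns
t_cvhd = 0.0
for eta in [0.05, 0.06, 0.05, 0.07, 0.06, 0.08]:       # fabricated CV samples
    deposit(eta)
    t_cvhd += dt_md * math.exp(bias(eta) / (kB * T))   # boosted CVHD clock
```

As the bias accumulates near the sampled $\eta$ values, the boosted clock advances faster than the plain MD clock, while the tempering of equation~\eref{eq:temper} shrinks each new hill.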
In the beginning of the simulation, all NN distances are close to the equilibrium value $r_\mathrm{min}$, and thus $\eta$ will have values near zero. This will cause the bias $\Delta V(\eta)$ to grow specifically at low values of $\eta$. The bias potential will exert a force that drives $\eta$ to higher values, i.e. where at least one NN distance---``bond''---is stretched. Due to the exponent $p$ in Eq.~\eref{eq:chi}, small distortions will have little effect on $\eta$, and thus the bias will not significantly affect atoms that are tightly bound in the lattice sites, i.e. whose bond lengths do not deviate significantly from equilibrium due to thermal motion. Atoms on the surface, on the other hand, have more space to move, and feel the bond-stretching force of the bias.
Over time, $\eta$ will be sampled at higher and higher values, making the bias push stretching bonds even further, until one atom in the system finally makes a jump further away from its NN than the distance $r_\mathrm{max}$. This causes $\eta$ to saturate at value~1 ``indefinitely''. If $\eta=1$ for $\tau_\mathrm{threshold}$ timesteps, the system is considered to have moved to a new state. At this point,
\begin{enumerate}
\item The NN atoms will be recalculated. Any atoms that have distance less than $r_\mathrm{cut}$ are considered NNs with each other.
\item All bias potential is removed, i.e. the accumulation of $\Delta V$ begins anew.
\end{enumerate}
This ensures that no bias is added in the transition states of the system, which would skew the dynamics. The bias resetting allows CVHD to avoid the problem of small barriers that many other acceleration methods face~\cite{bal2015merging}. Even if the dynamics of the system is unknown beforehand, with widely varying migration energy barriers, all processes with a barrier higher than the Gaussian height $w$ of Eq.~\eref{eq:DeltaV} will be handled correctly.
The resetting of the bias may decrease the acceleration efficiency in simulations where events happen very frequently. This is another reason why we did not use CVHD in the~\hkl{110} and~\hkl{111} surface simulations, where diffusion is very fast---the obtained boost was less than the additional cost of the CVHD algorithm. The~\hkl{100} surface, on the other hand, is truly ideal for adjusting the expected frequency of jumps to utilize dynamic CVHD to its full potential. Extending the MD region with the FEM mesh allows us to further decrease the number of atoms (and hence the frequency of jumps) in the system without encountering detrimental finite-size effects.
The LAMMPS software includes the CV framework as a standard feature. The implementation of the bond-breaking CV and the CVHD acceleration (bias potential and the time factor) was written by Bal and Neyts~\cite{bal2015merging} and updated for a newer version of LAMMPS by Kurki~\cite{kurki2020performance}. The parameters used in this work are tabulated in table~\ref{tab:CVHD}.
\begin{table}
\centering
\caption{Parameters used in the CVHD acceleration.}
\label{tab:CVHD}
\begin{tabular}{lc}
\toprule
Parameter & Value \\
\midrule
$r_\mathrm{min}$ & 2.56466\,\AA \\
$r_\mathrm{max}$ & 3.30\,\AA \\
$r_\mathrm{cut}$ & 3.00\,\AA \\
$p$ & 20 \\
$w$ & 0.005\,eV \\
$\delta$ & 1.0 \\
$\tau$ & 4\,ps \\
$\tau_\mathrm{threshold}$ & 10\,ps \\
$\Delta T$ & 2000\,K \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Calculation of surface polarization characteristics by density functional theory}
\label{DFT}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{DFT_systems.pdf}
\caption{Schematic of the unit cells of the three different surface systems simulated by DFT.}
\label{fig:DFT_systems}
\end{figure}
For the calculation of the polarization characteristics of single adatoms placed on Cu surfaces of three different orientations, we used the Vienna Ab Initio Simulation Package~(VASP) with its corresponding pseudopotential database~\cite{kresse1993abinitio,kresse1994abinitio, kresse1996efficient}. We employed the projector-augmented wave~(PAW) potential method~\cite{blochl1994projector,kresse1999from} along with the Perdew-Burke-Ernzerhof~(PBE) Generalized Gradient Approximation~(GGA) functional~\cite{perdew1996generalized} to describe the electronic exchange and correlation effects. The cut-off energy was set to 600\,eV. A~$4 \times 4 \times 1$ k-point mesh was used to sample the Brillouin zone according to the Monkhorst-Pack scheme~\cite{monkhorst1976special}. This mesh satisfies $N_k \cdot a \geq 35$\,\AA\ for all directions, where $N_k$ is the number of k-point samples and $a$ is the simulation box size in a given direction. The structures were relaxed until the residual forces were lower than 0.01\,eV/\AA. A vacuum space of at least 25\,\AA\ was appended to the cell and dipole corrections were applied between the slabs. The slab thicknesses were~6,~8, and~8 monoatomic layers for the~\hkl{100},~\hkl{110}, and~\hkl{111} system, respectively.
The polarization characteristics of diffusing adatoms were deduced from the DFT calculations following the methodology developed previously in Ref.~\cite{kyritsakis2019atomistic}. For this, we simulated three different Cu slabs, one for each of the simulated surfaces, as shown in figure~\ref{fig:DFT_systems}. An adatom was placed on the top surface of each slab. To obtain the saddle point for the hopping barrier on the~\hkl{100} and~\hkl{110} surfaces, we used the same symmetry considerations as in Ref.~\cite{kyritsakis2019atomistic}, fixing the atom with respect to the lateral directions ($x,\,y$) at the bridge site, while allowing it to relax vertically ($z$). For the~\hkl{111} surface, obtaining the saddle point is much more complex, due to an intermediate local minimum that occurs at the hcp site~\cite{baibuz2018migration}. Since in this work we are interested only in an order-of-magnitude comparison between the polarization characteristics of adatoms deduced from the MD and DFT simulations, we can approximate these characteristics on the~\hkl{111} surface with a good level of confidence by their values at the lattice site.
Each cell with a specific position of the adatom was calculated for five different values of the electric field between $-2$ and $2$\,GV/m. From these calculations we extracted the systemic dipole moment $\mathcal{M}_s$ ($\mathcal{M}_l$ for the~\hkl{111} surface) and polarizability $\mathcal{A}_s$ ($\mathcal{A}_l$ for the~\hkl{111} surface) by fitting a parabola to the corresponding field-energy curve, as prescribed in Ref.~\cite{kyritsakis2019atomistic}. Additionally, we calculated the reference values $\mathcal{M}_r$ and $\mathcal{A}_r$ for all the surfaces with no adatoms present.
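The extraction step can be sketched as follows. Assuming the field dependence of the total energy $E(F) = E_0 - \mathcal{M}F - \frac{\mathcal{A}}{2}F^2$ used in Ref.~\cite{kyritsakis2019atomistic}, a quadratic fit over the five field points returns $\mathcal{M}$ and $\mathcal{A}$. The energies below are synthetic, generated from assumed values of $\mathcal{M}$ and $\mathcal{A}$, to demonstrate the fit only.

```python
import numpy as np

F = np.linspace(-0.2, 0.2, 5)            # five applied fields in V/A (0.2 V/A = 2 GV/m)
M_true, A_true, E0 = 0.3, 1.2, -100.0    # assumed dipole moment, polarizability, energy
E = E0 - M_true * F - 0.5 * A_true * F ** 2   # synthetic field-energy curve

c2, c1, c0 = np.polyfit(F, E, 2)         # parabola E ~ c2*F^2 + c1*F + c0
M_fit, A_fit = -c1, -2.0 * c2            # invert the fit coefficients
```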
\section{Results}
\label{sec:results}
\subsection{Accelerated Molecular Dynamics}
Examples of the fields and the electrostatic potentials of the molecular dynamics~(MD) region in different extended geometries are shown in figure~\ref{fig:fieldscharges}. The electric field becomes stronger toward the positive $z$-coordinate, demonstrating the electric field gradient. In the figure, the atoms are colored according to the value of the electrostatic potential and the arrows according to the magnitude of the electric field. The arrows are directed toward the surface, although the arrowheads are omitted for clarity of the image.
As one can see, the color of the atoms varies, which indicates that the potential on different atoms has different values despite the Dirichlet boundary condition applied at the material surface. This is due to the potential at each atomic position being calculated as a weighted average over the nodes of the mesh cell where the atom resides. The vacuum nodes contribute non-zero values to the potential, raising its value above the one assumed in the FEM solver. Moreover, there is a tilt in the vectors of the electric field with respect to the normal of the surface, especially evident in the short nanotip system; while the electric field is always perpendicular to the FEM mesh surface, the field at the atomic positions obtains a parallel component from the vacuum nodes (see appendix~\ref{sec:curl}). These effects are unavoidable at the junction between the discrete atomic system and the finite elements of a continuum mesh.
\begin{figure}
\centering
\begin{subfigure}{0.65\linewidth}
\includegraphics[width=\linewidth]{100_fieldscharges_tall_top.png}
\end{subfigure}
\par\hrulefill\par
\begin{subfigure}{0.65\linewidth}
\includegraphics[width=\linewidth]{100_fieldscharges_tall_mid.png}
\end{subfigure}
\par\hrulefill\par
\begin{subfigure}{0.65\linewidth}
\includegraphics[width=\linewidth]{100_fieldscharges_tall_bottom.png}
\end{subfigure}
\par\hrulefill
\par\bigskip
\begin{subfigure}{0.65\linewidth}
\includegraphics[width=\linewidth]{100_fieldscharges_short_mid.png}
\end{subfigure}
\caption{Electric fields and the electrostatic potential in the~\hkl{100} systems with external electric field $F_\mathrm{ext}=100$\,MV/m. The top three panels are from the tall nanotip (93\,nm) system, and the bottom-most is from the short nanotip (5\,nm). The field vectors point toward the surface atoms, with the arrowheads not shown. Atoms are colored according to their potential, and the field vectors are colored according to their magnitude. In the top three panels, the scales of the potential and the field vectors are kept the same, to emphasize how the absolute value of the field also increases in the tall nanotip system. In the bottom-most panel, the magnitudes are scaled up so that the field vectors and the potential coloring are better visible.}
\label{fig:fieldscharges}
\end{figure}
Finally, we remind the reader that the MD region is wrapped by a periodic boundary condition in the $z$-direction, leading to a discontinuity in the electric field when the boundary is crossed. The effect of this discontinuity is expected to be insignificant, as explained in section~\ref{subsec:fem}.
Table~\ref{tab:factors} summarizes the electric field condition in the nanotips of different geometries. The field enhancement $\beta$ is calculated as
\begin{equation}
\label{eq:beta}
\beta = \frac{F_\mathrm{MD}}{F_\mathrm{ext}}
\end{equation}
where $F_\mathrm{MD}$ is the mean value of the electric field experienced by the surface atoms in the entire MD region and $F_\mathrm{ext}$ is the magnitude of the applied external electric field. Note that at the bottom of the tall nanotip the field is in fact suppressed, by a factor of~\textapprox0.4, while in the other geometries it is enhanced.
We calculate the field gradients as
\begin{equation}
\label{eq:gamma}
\gamma = \frac{F_{r,\,\mathrm{top}}-F_{r,\,\mathrm{bottom}}}{h}
\end{equation}
where $F_{r,\,\mathrm{top}}$ is the mean radial component of the electric field in the top atomic layer of the MD region, $F_{r,\,\mathrm{bottom}}$ the same in the bottom layer, and $h$ the height of the MD region. These gradients differ between different parts of the tip, as they depend on how close to the protrusion apex the MD-simulated layer is. This value approximately equals $\gamma_{rz}$ of equation~\eref{eq:gammatensor}. The gradients are proportional to the applied electric field $F_\mathrm{ext}$ in a given geometry; the tabulated gradients are in fact the slopes $s$ of $\gamma = sF_\mathrm{ext}$ (see figure \ref{fig:slope}).
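The analysis of equations~\eref{eq:beta} and~\eref{eq:gamma} amounts to simple averaging and a linear fit. The sketch below uses synthetic stand-ins for the averaged FEM fields; the numbers are chosen to reproduce the tall-tip middle-region values of table~\ref{tab:factors} and are not taken from the simulations.

```python
import numpy as np

F_ext = np.array([0.01, 0.02, 0.05, 0.10])   # applied fields, V/A
F_md = 6.03 * F_ext                          # synthetic mean surface field in the MD region
beta = float(np.mean(F_md / F_ext))          # field enhancement factor

h = 20.0                                     # MD-region height, A
Fr_top = 0.35 * F_ext                        # synthetic mean radial field, top layer
Fr_bot = 0.05 * F_ext                        # same for the bottom layer
gamma = (Fr_top - Fr_bot) / h                # field gradient across the MD region
s = np.polyfit(F_ext, gamma, 1)[0]           # slope of the linear fit gamma = s * F_ext
```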
\begin{table}
\centering
\caption{Field enhancement factors $\beta$ and the radial components of the field gradient slopes $s$ (see Fig.~\ref{fig:slope}) in different geometries. The gradients are expressed in terms of the magnitude of the external electric field $F_\mathrm{ext}$: e.g. the gradient across the~\hkl{100} surface MD region in the middle of the nanotip in~$F_\mathrm{ext}=1$\,V/\AA\ would be~0.015\,V/\AA$^2$.}
\label{tab:factors}
\begin{tabular}{llrr}
\toprule
Surface & Geometry & $\beta$ & $s \left(\mathrm{\AA}^{-1}\right)$ \\
\midrule
\multirow{4}{*}{\hkl{100}} & Tall, top & 18.98 & 0.308 \\
& Tall, middle & 6.03 & 0.015 \\
& Tall, bottom & 0.37 & 0.012 \\
& Short, middle & 1.05 & 0.044 \\
\midrule
\multirow{4}{*}{\hkl{110}} & Tall, top & 21.21 & 0.248 \\
& Tall, middle & 5.96 & 0.010 \\
& Tall, bottom & 0.41 & 0.012 \\
& Short, middle & 1.15 & 0.050 \\
\midrule
\multirow{4}{*}{\hkl{111}} & Tall, top & 19.06 & 0.217 \\
& Tall, middle & 5.61 & 0.007 \\
& Tall, bottom & 0.38 & 0.011 \\
& Short, middle & 1.08 & 0.050 \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}
\centering
\input{slope.tex}
\caption{The gradient of the electric field $\gamma$ in the MD system with adatoms on the~\hkl{100} surfaces, placed in the middle of the tall nanotip, as a function of the applied external electric field $F_\mathrm{ext}$ (1\,V/\AA\ =~10\,GV/m). Magenta points are the simulation data with almost invisible error bars of one standard deviation, and the green line is the linear fit. The slope $s$ of the fitted line is~0.015\,\AA$^{-1}$. All slopes are tabulated in table~\ref{tab:factors}.}
\label{fig:slope}
\end{figure}
Since we aim to analyze the surface atom diffusion bias in terms of drift velocity, it is important to know the exact time elapsed in each simulation. While this information can be obtained directly from MD simulations for the nanotips with~\hkl{110} and~\hkl{111} side facets, the time advance during adatom diffusion on the~\hkl{100} surface requires special attention, as discussed in section~\ref{subsec:cvhd}. In figure~\ref{fig:acceleration} we show the advantage of using collective variable-driven hyperdynamics~(CVHD). One can see that for diffusion on the~\hkl{100} surface, the simulation time advances approximately~50--60 times faster than $N_\mathrm{step}\Delta t_\mathrm{MD}$. We use this accelerated time in the analysis of the drift velocity of the diffusion biased by the electric field.
\begin{figure}
\centering
\input{acceleration.tex}
\caption{Accelerated ``CVHD time'' as a function of the elapsed ``MD time'', i.e. the number of timesteps $N_\mathrm{step}$ times the MD timestep length $\Delta t_\mathrm{MD}$.}
\label{fig:acceleration}
\end{figure}
Examples of the evolution of the average $z$-coordinate of the adatoms on different surfaces are shown in figure~\ref{fig:z}. The different jump rates can be clearly seen here: on the~\hkl{100} surface, individual jumps show up in the average $z$-coordinate despite the very long time scale, while on the other two surfaces the jumps seem to blend together. We note that in the~\hkl{110} and~\hkl{111} surface simulations, it was necessary to track all atoms of the surface layer, not only the adatoms initially placed on these surfaces, to study the diffusion behavior. The reason is that many transitions on the~\hkl{110} and~\hkl{111} surfaces happen via an exchange event: the diffusing adatom occupies a neighboring lattice site that is temporarily freed up due to thermal vibrations. The adatom becomes trapped and turns into a regular surface atom, while the freed-up atom continues migrating on the surface until the next exchange event takes place. Such exchange events affected the diffusion mainly on the~\hkl{110} surfaces, since on the close-packed~\hkl{111} surface hopping diffusion is very fast, and exchange events only take place when the adatom reaches the sharp edges joining the sides separated by the acute angle of the diamond-shaped cross-section (see figure~\ref{fig:sections}). In some simulations on the~\hkl{111} surface, the two adatoms met and formed a dimer, although they were originally placed on opposite sides of the nanowire. This changed the drift velocity, as can be seen from the change of slope of three curves in figure~\ref{fig:z}c. These simulations were excluded from the mean drift velocity analysis. Moreover, some of the nanotips lost their integrity during the simulations because of too high electric fields and were also excluded from the analysis.
\begin{figure}
\begin{subfigure}{\linewidth}
\centering
\input{100_z.tex}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\input{110_z.tex}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\input{111_z.tex}
\end{subfigure}
\caption{Examples of the average $z$-coordinate evolution of the two adatoms diffusing on the surface. Panel~(a) shows all~10 runs on the~\hkl{100} surface in the middle of the~93\,nm nanotip system, under an applied field of~100\,MV/m. Panel~(b) shows the same on the~\hkl{110} surface at the top of the nanotip, at~1\,GV/m, and panel~(c) on~\hkl{111} surface of the~5\,nm nanotip, at~5\,GV/m field. The three lines that diverge from the overall trend in panel (c) are the simulations where the two adatoms formed a dimer and continued diffusion with a lower bias.}
\label{fig:z}
\end{figure}
In figure~\ref{fig:drift} we summarize the main results of the current study. Here we show the drift velocity as a function of the electric field gradient $\gamma$ as calculated in Eq.~\eref{eq:gamma}. The drift velocity is calculated as the $z$-displacement of the adatoms at the end of the simulation divided by the simulation time. The obtained velocities are averaged over~10 repetitions in each case. The three top rows show results for a tall nanotip with a height of~93\,nm, with the diffusion processes simulated in different parts of the tip: the top, the middle and the bottom, from the top row down, respectively. The last row shows the results for a short nanotip of~5\,nm in height. The results are also organized in columns: the drift velocities for surface diffusion on the~\hkl{100},~\hkl{110} and~\hkl{111} surfaces are shown in the columns from left to right, respectively.
\begin{figure*}
\centering
\begin{subfigure}{0.32\textwidth}
\raggedleft
\input{100_tall_top_drift_grad.tex}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\raggedleft
\input{110_tall_top_drift_grad.tex}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\raggedleft
\input{111_tall_top_drift_grad.tex}
\end{subfigure}
\hrule
\begin{subfigure}{0.32\textwidth}
\raggedleft
\input{100_tall_mid_drift_grad.tex}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\raggedleft
\input{110_tall_mid_drift_grad.tex}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\raggedleft
\input{111_tall_mid_drift_grad.tex}
\end{subfigure}
\hrule
\begin{subfigure}{0.32\textwidth}
\raggedleft
\input{100_tall_bottom_drift_grad.tex}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\raggedleft
\input{110_tall_bottom_drift_grad.tex}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\raggedleft
\input{111_tall_bottom_drift_grad.tex}
\end{subfigure}
\hrule
\begin{subfigure}{0.32\textwidth}
\raggedleft
\input{100_short_mid_drift_grad.tex}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\raggedleft
\input{110_short_mid_drift_grad.tex}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\raggedleft
\input{111_short_mid_drift_grad.tex}
\end{subfigure}
\caption{Average surface diffusion drift velocity in different geometries and on different surfaces. Error bars are one standard deviation. Note that also the local electric field varies linearly with the gradient in each plot.}
\label{fig:drift}
\end{figure*}
We observe that on the~\hkl{110} and the~\hkl{111} surfaces (the two rightmost columns), the diffusion bias grows as a function of gradient $\gamma$ in all systems except for the bottom of the 93\,nm nanotip (the second row from the bottom). On the~\hkl{100} surface (the first column), a clear bias can only be seen at high gradient at the top of the 93\,nm nanotip.
The differences within each surface (each column) can be attributed to the different absolute values of the electric field in these systems. As shown in table~\ref{tab:factors}, the field enhancement $\beta$ is highest at the top of the 93\,nm tip and smallest (less than 1) at the bottom. The differences between surfaces can be explained by different jump rates; see section~\ref{sec:discussions} for further discussion.
\section{Discussions}
\label{sec:discussions}
In our simulations, we observed a clear bias in the migration of adatoms on the three lowest-index (\hkl{100},~\hkl{110}, and~\hkl{111}) Cu surfaces under an electric field gradient. This mechanism promotes sharpening of field-enhancing surface features by adding a bias to the random-walk migration of atoms on the material surface toward places where the field is higher, such as corners and vertices of protrusions. Sharpening will further strengthen field enhancement and increase gradients, thus creating a positive feedback loop. On the cathode side of the system, the sharpening mechanism could promote the growth of small surface roughness into field emitters, with another feedback loop (see Ref.~\cite{kyritsakis2018thermal} for details) activated: field emission current generates heat that increases the resistivity of the structure, leading to stronger heating and eventually a runaway evaporation of the tip. After evaporation, the biased diffusive process would start to regenerate the sharpness of the remaining protrusion.
The diffusion bias can be explained by the modification of the migration energy barriers by the electric field and its gradient, through the dipole moment and polarizability characteristics of the surface (equation~\eref{eq:modbarrier}). The unbiased ($\gamma=0$) jump rate $\Gamma$ is defined by the Arrhenius equation
\begin{equation}
\label{eq:arrhenius}
\Gamma = \nu \exp\left(-\frac{E_\mathrm{m}-\mathcal{M}_\mathrm{sl}F - \frac{\mathcal{A}_\mathrm{sl}}{2}F^2}{k_\mathrm{B}T}\right),
\end{equation}
where $\nu$ is assumed to be a field-independent prefactor and $E_\mathrm{m}$ is the migration energy barrier in the absence of electric field. The expected displacement $\left\langle x \right\rangle_\mathrm{b}$ of the adatom after time $\tau$ is
\begin{equation}
\label{eq:totalbias}
\left\langle x \right\rangle_\mathrm{b} = 2\tau l\Gamma\sinh\left(l\gamma\frac{\mathcal{M}_\mathrm{sr} + \mathcal{A}_\mathrm{sr}F}{2k_\mathrm{B}T}\right).
\end{equation}
It can be seen that the mean displacement is proportional to the unbiased jump rate $\Gamma$. This explains the large differences in the observed drift velocity between surfaces (see Fig.~\ref{fig:drift}): $E_\mathrm{m}$ on the~\hkl{100}, the~\hkl{110}, and the~\hkl{111} surface, given by the MD-MC-CEM potential we used, is~0.52\,eV,~0.25\,eV, and~0.04\,eV, respectively. Without an applied electric field, at a temperature of 300\,K, migration on the~\hkl{110} surface would be approximately~35\,000 times faster, and on the~\hkl{111} surface~$10^8$ times faster, than on the~\hkl{100} surface; thus, the bias can be expected to differ by multiple orders of magnitude between surfaces.
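The quoted rate ratios follow directly from the Arrhenius equation with the barriers above, assuming identical field-independent prefactors (which then cancel). A quick numerical check:

```python
import math

k_B = 8.617333e-5                        # Boltzmann constant (eV/K)
T = 300.0                                # temperature (K)
E_100, E_110, E_111 = 0.52, 0.25, 0.04   # migration barriers (eV) quoted above

# Zero-field Arrhenius rate ratios relative to the {100} surface.
ratio_110 = math.exp((E_100 - E_110) / (k_B * T))
ratio_111 = math.exp((E_100 - E_111) / (k_B * T))
print(f"{ratio_110:.2e}")   # roughly 3.4e+04, i.e. ~35 000 times faster
print(f"{ratio_111:.2e}")   # roughly 1e8 times faster
```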
The differences between the bias on the same surface in different geometries are due to the different values of local electric field $F$ in these systems. For instance, the field enhancement factor $\beta$ is 20 times higher at the top of the 93\,nm tip than in the 5\,nm tip, leading to 5--10 times larger drift velocity in the~\hkl{110} and~\hkl{111} systems.
As can be seen from Eq.~\eref{eq:totalbias}, the mean displacement of the adatom depends on the polarization characteristics of the surface, namely the permanent dipole moment difference $\mathcal{M}_\mathrm{sr}\equiv\mathcal{M}_\mathrm{s}-\mathcal{M}_\mathrm{r}$ and the polarizability difference $\mathcal{A}_\mathrm{sr}\equiv\mathcal{A}_\mathrm{s}-\mathcal{A}_\mathrm{r}$. Subscript s stands for the saddle point of the migration event, and r for the flat surface reference system. We can estimate these characteristics from the diffusion bias in MD simulations, and compare them to values calculated directly in DFT. This estimation can be done independently of the migration rate $\Gamma$ and the simulation time, by dividing Eq.~\eref{eq:totalbias} by the mean square displacement $\left\langle x^2 \right\rangle = \tau l^2\Gamma$:
\begin{equation}
\label{eq:bias}
\frac{\langle x\rangle_\mathrm{b}}{\left\langle x^2\right\rangle} = \frac{2}{l}\sinh\left(l\gamma\frac{\mathcal{M}_\mathrm{sr}+\mathcal{A}_\mathrm{sr}F}{2k_\mathrm{B}T}\right)
\end{equation}
The fitting results for $\mathcal{M}_\mathrm{sr}$ and $\mathcal{A}_\mathrm{sr}$ on the three surfaces are shown in table~\ref{tab:msr_asr}, along with the corresponding values calculated directly by DFT according to the methods described in section~\ref{DFT}. Unfortunately, we were able to obtain the results for the~\hkl{100} surface only within very large error bars (\textapprox100\,\%). This is explained by the limited statistics resulting from the low jump rate on this surface, even when accelerated with CVHD. A further study with higher statistics and longer time spans, and/or stronger electric field gradients, would likely permit fitting $\mathcal{M}_\mathrm{sr}$ and $\mathcal{A}_\mathrm{sr}$ more accurately. In this study, the agreement for these parameters on the~\hkl{100} surface is only qualitative, in contrast to the~\hkl{110} and~\hkl{111} surfaces.
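As a sketch of how $\mathcal{M}_\mathrm{sr}$ and $\mathcal{A}_\mathrm{sr}$ can be extracted from measured bias ratios, the snippet below linearizes Eq.~\eref{eq:bias} for small sinh arguments and solves the resulting least-squares problem. All numbers (jump length, gradients, fields, parameter values) are hypothetical, chosen only to illustrate the procedure:

```python
import numpy as np

k_B, T, l = 8.617333e-5, 300.0, 2.55   # eV/K, K, jump length in A (assumed)

def bias_ratio(gamma, F, M_sr, A_sr):
    """<x>_b / <x^2> as in Eq. (bias)."""
    return (2.0 / l) * np.sinh(l * gamma * (M_sr + A_sr * F) / (2.0 * k_B * T))

# Hypothetical measurements: gradients (V/A^2), local fields (V/A), and the
# displacement ratios they would produce for known parameter values.
M_true, A_true = -0.01, 0.03           # e*A and e*A^2/V, illustrative only
gamma = np.array([1e-4, 2e-4, 3e-4, 4e-4])
F = np.array([0.5, 1.0, 1.5, 2.0])
ratio = bias_ratio(gamma, F, M_true, A_true)

# For small sinh arguments the model linearizes to
#   ratio ~= gamma * (M_sr + A_sr * F) / (k_B * T),
# so both parameters follow from an ordinary least-squares fit.
design = np.column_stack([gamma, gamma * F]) / (k_B * T)
M_fit, A_fit = np.linalg.lstsq(design, ratio, rcond=None)[0]
```

For larger sinh arguments a nonlinear fit would be required instead of the linearization.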
We see that our method underestimates $\mathcal{A}_\mathrm{sr}$ by about an order of magnitude and generally predicts $\mathcal{M}_\mathrm{sr}$ with the opposite sign. This reflects the physical nature of the two quantities. The permanent dipole moments of adatoms on a surface depend on the surface chemistry, and it is therefore impossible to capture them with a purely electrostatic approach. Likewise, the low-field diffusion bias toward the weaker field on the cathode side, predicted by theory as a consequence of the permanent dipole moment, will not be observed in this model. In high fields, the bias is turned toward the stronger field by the effective adatom polarizability $\mathcal{A}_\mathrm{sr}$, which is directly related to the field-free volume induced by the presence of the adatom, as explained in Ref.~\cite{kyritsakis2019atomistic}. This is a purely electrostatic effect that must appear in simulations coupled with electrostatic field calculations. The underestimation of $\mathcal{A}_\mathrm{sr}$ in MD compared to DFT can be partially explained by the different geometry: in MD, we used a cylindrical atomic system with a needle-like FEM extension to maximize the electric field gradient, while the DFT calculation does not need a gradient at all, and thus we used a slab system for computational efficiency. The cylindrical MD system can be expected to polarize differently from a slab. The correct tendency that we observe in table~\ref{tab:msr_asr} confirms that the developed approach is indeed able to reproduce the bias of surface diffusion in the presence of electrostatic field gradients.
\begin{table}
\centering
\caption{The dipole moment $\mathcal{M}_\mathrm{sr}$ and the polarizability $\mathcal{A}_\mathrm{sr}$ on different surfaces in this study, obtained by fitting Eq.~\eref{eq:bias} to the mean atomic displacement and its variance. The values calculated directly by DFT are given in parentheses for comparison. We note that the DFT values for the~\hkl{111} case correspond to the lattice position ($\mathcal{M}_\mathrm{lr},\,\mathcal{A}_\mathrm{lr}$). The error bars are given by the standard error of the mean. The large error bars for the~\hkl{100} surface are due to limited statistics caused by the lower atomic jump rate.}
\begin{tabular}{@{}l@{\,}c@{\ }c@{\ }c@{\ }c@{}}
\toprule
\multirow{2}{*}{Surface} & \multicolumn{2}{c}{$\mathcal{M}_\mathrm{sr}$ (e\AA)} & \multicolumn{2}{c}{$\mathcal{A}_\mathrm{sr}$ (e\AA$^2$/V)} \\
\cmidrule{2-5}
& MD & DFT & MD & DFT \\
\midrule
\hkl{100} & $ 0.0 \pm 0.2$ & $0.106 \pm 0.003$ & $0.1 \pm 0.1$ & $0.27 \pm 0.02$ \\
\hkl{110} & $-0.008 \pm 0.005$ & $0.094 \pm 0.006$ & $0.034 \pm 0.008$ & $0.30 \pm 0.04$ \\
\hkl{111}* & $-0.016 \pm 0.006$ & $0.162 \pm 0.003$ & $0.02 \pm 0.01$ & $0.23 \pm 0.02$ \\
\bottomrule
\end{tabular}
\label{tab:msr_asr}
\end{table}
\section{Conclusions}
\label{sec:conclusions}
We have observed biased self-diffusion on Cu surfaces in the presence of electric field gradients in molecular dynamics simulations coupled to a finite element solver for the calculation of the electric field distribution at the metal surface. We found that the bias is always toward stronger electric fields, which is consistent with previously reported theoretical predictions. The bias was observed to be stronger on the~\hkl{111} and~\hkl{110} surfaces, while diffusion on~\hkl{100} was found to be closer to regular unbiased diffusion, since the migration energy barriers for self-diffusion on this surface are much higher than on the other two. The mechanism of biased diffusion can contribute to the sharpening and regrowth of field-enhancing nanotips that are proposed to provide the particles necessary for plasma formation in vacuum arc breakdowns. The good agreement between the polarization characteristics deduced from the MD simulations and the direct density functional theory calculations confirms the validity of the developed approach.
\section*{Acknowledgements}
We would like to thank Ekaterina Baibuz for her contributions to data collection and analysis for the manuscript. J. Kimari was supported by a CERN K-contract. Computing resources were provided by the Finnish IT Center for Science~(CSC) and the Finnish Grid and Cloud Infrastructure (persistent identifier urn:nbn:fi:research-infras-2016072533). Y. Wang, A. Kyritsakis and V. Zadin were funded by the European Union’s Horizon 2020 research and innovation programme under grant agreement No 856705.
\section{Introduction}
\subsection{Clustering Analysis}
A vast array of literature has explored clustering techniques and missing data issues in both mathematics and public health research. Clustering refers to the separation of data into meaningful groups so that data within each group is similar.
The Latent Profile Analysis (LPA) method is a common approach in health behavior research to identify unobserved classes of participants and explain the pattern of responses~\cite{Vermunt02, Lubke05, Wang12, Marsh09}. Many current software packages use an iterative expectation maximization (EM) algorithm to estimate the parameters~\cite{Mplus}. The EM algorithm and other variants have both advantages and drawbacks for estimation of the LPA parameters. The algorithms are sensitive to the initial values of the parameters with the potential for local solutions, and the EM approach does not estimate standard errors. Model identification, the issue of whether there is sufficient information to estimate the parameters~\cite{Vermunt02}, and subjective model fit selection are also drawbacks to these approaches.
Spectral clustering (SC) is a geometric method that can identify non-linear relationships in the data; here we consider $n$ individuals, each with $d$ variables~\cite{lloyd1982least, shi2000normalized, wu2009top}. One designs a similarity measure to form a \emph{Laplacian} matrix from the data. A typical normalized Laplacian matrix $\mathbf{L}\in\mathbb{R}^{n\times n}$ is defined by
\begin{equation} \label{eqn1}
L = D^{-1/2}(D-W)D^{-1/2},
\end{equation}
where $W$ is the symmetric weight matrix whose $(i,j)$th entry corresponds to the similarity between individuals $i$ and $j$, and the degree matrix $D$ has diagonal entries $D_{ii} = \sum_j W_{ij}$. Spectral clustering computes the eigenvectors of this Laplacian, which form a lower-dimensional, linearly separable representation of the dataset~\cite{Shi00}.
In the dataset from Section~\ref{sec:clu}, the unsorted and sorted eigenvector (from the second largest eigenvalue) entries are shown in Figure~\ref{fig-1}. Since each entry of the eigenvector corresponds to an individual, we may use the values of these entries to separate the individuals into clusters. In this case, the threshold to designate different clusters seems to be $y = 0$, as plotted in red. For more than two clusters, one can instead run $k$-means~\cite{lloyd1982least,wu2009top} on this data to identify the clusters.
\begin{figure}[ht]
\includegraphics[width=3.5in]{eig.eps}
\caption{Left: unsorted eigenvector. Right: Sorted eigenvector. The red horizontal line indicates the separation. \label{fig-1}}
\end{figure}
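A minimal self-contained sketch of this pipeline on synthetic two-blob data (the blob separation and the Gaussian kernel bandwidth are illustrative choices). Note that for the Laplacian $L = D^{-1/2}(D-W)D^{-1/2}$ the cluster structure sits in the eigenvectors of the \emph{smallest} eigenvalues; this mirrors the largest eigenvalues of the similarity-based operator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: two well-separated Gaussian blobs in the plane.
n = 100
X = np.vstack([rng.normal(0.0, 1.0, size=(n, 2)),
               rng.normal(8.0, 1.0, size=(n, 2))])

# Gaussian similarity W and the normalized Laplacian L = D^{-1/2}(D - W)D^{-1/2}.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
W = np.exp(-d2 / 8.0)                              # kernel bandwidth sigma = 2
deg = W.sum(axis=1)
L = np.eye(2 * n) - W / np.sqrt(np.outer(deg, deg))

# The eigenvector of the second-smallest eigenvalue separates the clusters;
# thresholding its entries at zero assigns the labels.
vals, vecs = np.linalg.eigh(L)
labels = (vecs[:, 1] > 0).astype(int)
```

For more than two clusters, $k$-means would be run on the leading eigenvectors instead of the sign threshold.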
In Figure~\ref{fig0}, we compare the spectral clustering results (left) with the actual (randomly generated) clusters (right).
\begin{figure}[ht]
\includegraphics[width=3in]{rgclu.eps}
\caption{Results from spectral clustering (on the left) and the actual clusters (on the right). Red and green show different clusters. \label{fig0}}
\end{figure}
\subsection{Missing Data}
In many large scale applications, data is incomplete. For example, participants may be unable or unwilling to complete an ongoing survey, or participants may be randomly assigned different blocks of questions to increase the variety of constructs assessed.
\subsubsection{FIML}
Full Information Maximum Likelihood (FIML)~\cite{Graham12,Enders10} aims to maximize the likelihood of the data by auditioning combinations of parameter estimates~\cite{Ender01}. The procedure relies on assumptions such as normality, which when violated can result in biased parameters. There is also a risk of convergence to local maxima resulting in poor parameter estimates.
The FIML estimator implemented in common statistical software packages maximizes a likelihood function that is the sum of $n$ case-wise likelihood functions. A likelihood function is calculated for each observation or individual. The function measures the discrepancy between the current parameter estimates and the observed data for the $i$th case. The function is maximized assuming multivariate normality:
\begin{equation*}
\log L_i = K_i - \frac{1}{2}\log |\Sigma_i| - \frac{1}{2}(x_i-\mu_i)' \Sigma_i^{-1} (x_i-\mu_i)
\end{equation*}
The vector of complete data for case $i$ is represented by the term $x_i$, and the vector of estimated means for those variables that are observed for case $i$ is the term $\mu_i$. A constant $K_i$ depends on the number of complete data points for case $i$. Only those variables that are observed for case $i$ are used to calculate the determinant and inverse of $\Sigma_i$. The likelihood function for the entire sample is calculated by summing over the $n$ case-wise functions:
\begin{equation*}
\log L(\mu,\Sigma) = \sum_{i=1}^{n}\log L_i.
\end{equation*}
It is assumed that missing values for $X$ are conditionally dependent on other observed variables in the data. Probability values for the missing data are implied during the parameter estimation process by incorporating vectors of partially complete data in the individual-level likelihood functions. This is analogous to using multiple regression of $X$ on other variables to generate predicted scores for the missing data. The FIML estimator does not impute missing values, however, but uses all available raw data to directly estimate parameters and standard errors for the model.
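The case-wise function above can be sketched directly: for each case, keep only the observed variables and evaluate the multivariate normal log-density with the corresponding sub-vector of $\mu$ and sub-matrix of $\Sigma$. The data values below are toy numbers for illustration:

```python
import numpy as np

def casewise_loglik(x, mu, Sigma):
    """Log-likelihood of one case, using only its observed variables.

    x is a 1-D array with np.nan marking missing entries; mu and Sigma are
    the current mean vector and covariance matrix estimates."""
    obs = ~np.isnan(x)
    xo, mo = x[obs], mu[obs]
    So = Sigma[np.ix_(obs, obs)]
    k = obs.sum()
    K = -0.5 * k * np.log(2.0 * np.pi)          # the constant K_i
    _, logdet = np.linalg.slogdet(So)
    resid = xo - mo
    return K - 0.5 * logdet - 0.5 * resid @ np.linalg.solve(So, resid)

# Total sample log-likelihood: sum over the case-wise functions.
X = np.array([[1.0, 2.0], [0.5, np.nan]])       # second case has a missing value
mu = np.zeros(2)
Sigma = np.eye(2)
total = sum(casewise_loglik(row, mu, Sigma) for row in X)
```

In FIML this total would be maximized over $\mu$ and $\Sigma$ (and the structural parameters), rather than evaluated at fixed values as here.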
\subsubsection{Compressive Sensing}
Compressive sensing (CS) is a new and fast growing field in applied mathematics. The CS application \textit{matrix completion} demonstrates that a (nearly) low-rank matrix can be completed accurately and robustly from observation of only a few of its entries by solving a nuclear-norm minimization problem~\cite{DSPweb,CandeRT_Stable,Donoho06}. A typical format of this optimization problem is
\begin{eqnarray*}
\textrm{minimize} & ||X||_{*} \\
\textrm{subject to} & X_{ij} = M_{ij} \quad (i,j) \in \Omega
\end{eqnarray*}
where the nuclear norm $||X||_* = \sum_{k=1}^{n}\sigma_k(X)$, $M$ is the matrix we wish to recover, and $\Omega$ is the set of locations of observed matrix entries in $M$. This popular convex relaxation of the rank minimization problem is feasible and commonly used in matrix completion, since minimization of the rank of $X$ is NP-hard due to its combinatorial nature.
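Full nuclear-norm solvers are available in dedicated optimization packages. As a self-contained illustration of the same low-rank completion idea, the sketch below instead uses the closely related hard-impute heuristic (alternate between a rank-$r$ SVD projection and restoring the observed entries); it assumes the target rank is known and is not the exact nuclear-norm solution:

```python
import numpy as np

def complete_low_rank(M_obs, mask, rank, iters=300):
    """Fill missing entries of a low-rank matrix by alternating between a
    rank-r SVD projection and restoring the observed entries (hard-impute).
    M_obs has zeros at unobserved positions; mask is True where observed."""
    X = M_obs.copy()
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # project onto rank-r matrices
        X = np.where(mask, M_obs, X)               # keep known entries fixed
    return X

# Rank-2 test matrix with ~40% of entries removed at random.
rng = np.random.default_rng(1)
A = rng.normal(size=(60, 2)) @ rng.normal(size=(2, 40))
mask = rng.random(A.shape) < 0.6
X_hat = complete_low_rank(A * mask, mask, rank=2)
rel_err = np.linalg.norm(X_hat - A) / np.linalg.norm(A)
```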
When the underlying matrix is low-rank, matrix completion recovers the data matrix provably well from a small number of possibly noisy observations~\cite{candes2010matrix,nep:rank,recht2007guaranteed}. To quantify how accurately the method recovers a matrix, we generate two $1000\times 300$ matrices of rank $2$ and rank $10$ and deliberately remove $20\%$, $40\%$, $60\%$, and $80\%$ of the entries. The random values follow a standard normal distribution. The matrix completion results are presented in Table~\ref{table-1} and Table~\ref{table0}. We measure the recovery error between the actual matrix $X$ and the recovered matrix $\hat{X}$ by the Frobenius norm $\|X-\hat{X}\|_F$, the relative Frobenius norm $\|X-\hat{X}\|_F/\|X\|_F$, and the spectral norm $\|X-\hat{X}\|$. As expected, the error increases slightly with more missing data, and the higher-rank matrix has slightly higher recovery error.
\begin{table}
\begin{minipage}{\linewidth}
\centering
\caption{Rank 2 Matrix Completion Results}
\begin{tabular}{cccc} \toprule[1.5pt]
Rank 2 & Frobenius & Relative Frob. & Spectral \\ \hline
missing 20\% & 0.0687 & 8.84E-05 & 0.0505 \\
missing 40\% & 0.0376 & 4.85E-05 & 0.0246 \\
missing 60\% & 0.0651 & 8.38E-05 & 0.0422 \\
missing 80\% & 0.0959 & 1.23E-04 & 0.0645 \\
\bottomrule[1.25pt]
\end{tabular}\par
\label{table-1}
\bigskip
\caption{Rank 10 Matrix Completion Results}
\begin{tabular}{cccc} \toprule[1.5pt]
Rank 10 & Frobenius & Relative Frob. & Spectral \\ \hline
missing 20\% & 0.0896 & 5.37E-05 & 0.0297 \\
missing 40\% & 0.145 & 8.69E-05 & 0.0512 \\
missing 60\% & 0.186 & 1.12E-04 & 0.0643 \\
missing 80\% & 0.350 & 2.10E-04 & 0.1890 \\
\bottomrule[1.25pt]
\end{tabular}\par
\label{table0}
\end{minipage}
\end{table}
In public health data, especially the data from surveys or investigations, one expects the data to be low-rank or approximately low-rank for certain variables, because there are a small number of underlying factors that influence specific human opinions and behaviors.
In this paper, we empirically investigate the use of matrix completion with spectral clustering to cluster incomplete data, and compare to standard FIML and LPA methods. From these studies, we find that the combination of compressive sensing and spectral clustering methods can offer better performance than standard methods currently used in health data research.
\section{Empirical Results}
\subsection{Experiments on Clustering Analysis} \label{sec:clu}
We first use simulated data to compare the clustering performance of spectral clustering and LPA. First, we generate two-dimensional points whose $x$ and $y$ values follow a normal distribution with mean $0$ (for one cluster) or $a>0$ (for the other cluster) and variance $1$. As $a$ increases, we expect clustering to be more successful, because the difference between clusters is more obvious.
We define the correct classification rate (CCR) as the ratio of the number of correctly clustered points over the total number of points in each trial.
We simulate 40 different data sets for each value of $a$, and use these 40 trials to compute the rates of each method.
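Since cluster indices returned by a clustering algorithm are arbitrary, the CCR has to be computed up to a relabeling of the predicted clusters. A sketch:

```python
import numpy as np
from itertools import permutations

def correct_classification_rate(true, pred):
    """CCR: fraction of points assigned to the right cluster, maximized
    over all relabelings of the predicted cluster indices."""
    true, pred = np.asarray(true), np.asarray(pred)
    ks = np.unique(pred)
    best = 0.0
    for perm in permutations(ks):
        mapping = dict(zip(ks, perm))
        relabeled = np.array([mapping[p] for p in pred])
        best = max(best, float(np.mean(relabeled == true)))
    return best

print(correct_classification_rate([0, 0, 0, 1, 1, 1],
                                  [1, 1, 0, 0, 0, 0]))  # -> 0.8333...
```

For many clusters the permutation search would be replaced by a Hungarian-style assignment, but for two clusters the exhaustive search is trivial.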
In Figure~\ref{fig1-1}, we show the mean CCR for datasets that contain two equally sized clusters, with approximately $250$ observations in each cluster. Figure~\ref{fig1-2}, however, illustrates the CCR from datasets that contain unequal sizes of clusters, where one has approximately $25$ ($5\%$) observations and the other has approximately $475$ ($95\%$) observations. This experiment aims to test how well spectral clustering and LPA classify observations in situations with different clustering complexity and relative size of clusters.
\begin{figure}[ht]
\includegraphics[width=3in]{clus50.eps}
\caption{Clustering results of spectral clustering and LPA methods for equally sized clusters. \label{fig1-1}}
\bigskip
\includegraphics[width=3in]{clus95.eps}
\caption{Clustering results of spectral clustering and LPA methods for unequally sized clusters. \label{fig1-2}}
\end{figure}
\begin{table}
\begin{minipage}{\linewidth}
\centering
\caption{Correct Classification Rate Results}
In this table, $a$ determines the second centroid $(a,a)$, the first being $(0,0)$. CCR abbreviates correct classification rate, and N is the number of observations. In this test the sample size is 500, with approximately 250 observations in each cluster. The remaining items are summary statistics of the correct classification rate estimated from 40 trials.
\begin{tabular}{cccccccc} \toprule[1.5pt]
$a$ & CCR & N & Mean & S.D. & Min & Med. & Max \\ \hline
1 & SC & 500 & 0.755 & 0.0198 & 0.692 & 0.756 & 0.812 \\
& LPA & 500 & 0.707 & 0.0550 & 0.508 & 0.072 & 0.796 \\ \hline
2 & SC & 500 & 0.918 & 0.0132 & 0.868 & 0.918 & 0.958 \\
& LPA & 500 & 0.918 & 0.0130 & 0.882 & 0.916 & 0.962 \\ \hline
3 & SC & 500 & 0.981 & 0.0065 & 0.958 & 0.982 & 0.998 \\
& LPA & 500 & 0.982 & 0.0060 & 0.960 & 0.982 & 0.998 \\ \hline
5 & SC & 500 & 1.00 & 0.0008 & 0.994 & 1.00 & 1 \\
& LPA & 500 & 1.00 & 0.0010 & 0.996 & 1.00 & 1.00 \\
\bottomrule[1.25pt]
\end{tabular}\par
\label{table1}
\end{minipage}
\end{table}
We observe that both methods have increasing CCR for larger $a$, and that spectral clustering has a higher correct classification rate when the distance between centroids is small, both for equally and for unequally sized clusters. Overall, spectral clustering seems to offer improvements over LPA in this setting. A detailed summary of results for equal cluster sizes is shown in Table~\ref{table1}. When $a\geq 2$, where the distance between centroids is greater than $2.828$, the CCRs of both methods exceed $90\%$, but in most cases spectral clustering has a lower standard deviation of estimation.
\subsection{Experiments on Missing Data}\label{sec:missdata}
Missing data is an important part of real-world health data analysis.
There is an obvious difference in how FIML and compressive sensing handle this problem. FIML does not recover missing data or interpolate unknown values to incomplete data entries, while matrix completion does precisely this. Given this difference, it is hard to compare the performance of FIML and compressive sensing directly, so we instead use their results to cluster. We compare the correction rates of the following three methods, a) FIML combined with LPA, b) matrix completion followed by LPA, and c) matrix completion followed by spectral clustering.
By comparing methods a) and b), we can compare the relative performance of FIML and matrix completion, because the clustering method is the same (LPA); the correct classification rates will therefore reflect how well the two methods handle missing data. Comparing b) and c), where the missing-data handling is the same, likewise compares the two clustering methods and strengthens the conclusions.
We generate $40$ data sets of $1000$ (the number of individuals) by $100$ (the number of variables) matrices. One can imagine the $500\times 100$ top half of the matrix corresponding to one cluster, and the bottom half to the other. The bottom half has standard normally distributed entries, whereas the top half has normally distributed entries with variance $1$ but with varying means; the first ten columns have mean $0.1$, the next $10$ have mean $0.2$, and so on, so that the last ten have mean $1.0$ (we do this to introduce more variety within the cluster). We remove entries from the matrix uniformly at random to create missing data.
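The data-generation scheme above can be sketched as follows (one trial, with an example missing-data fraction of 0.3):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 100

# Top half (cluster 1): unit-variance normals whose means rise block-wise,
# 0.1 for the first ten columns up to 1.0 for the last ten.
means = np.repeat(np.arange(1, 11) / 10.0, 10)       # length-100 mean vector
top = rng.normal(size=(n // 2, d)) + means
bottom = rng.normal(size=(n // 2, d))                # cluster 2: standard normal
data = np.vstack([top, bottom])

# Remove entries uniformly at random to create missing data.
observed = rng.random(data.shape) >= 0.3             # True where observed
data_missing = np.where(observed, data, np.nan)
```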
\begin{figure}[ht]
\includegraphics[width=3in]{missdatagraph.eps}
\caption{Mean correct classification rate as a function of the proportion of missing data for method a), method b), and method c) (red, green, and blue, respectively). The methods are described in section~\ref{sec:missdata}.
\label{fig2}}
\end{figure}
In Figure~\ref{fig2}, we observe that the correct classification rate is higher for method b) than for method a). This indicates that matrix completion may handle missing data better than FIML in this setting. Additionally, method c) is better than both, suggesting that matrix completion coupled with spectral clustering may offer even better performance. There is of course a computational tradeoff, but for methods where accuracy is the priority, matrix completion coupled with spectral clustering may offer improved performance over standard methods.
\subsection{Application to Real Public Health Data}
We next compare the above methods on real public health data. The data is obtained from the teen California Health Interview Survey (CHIS) from 2009. CHIS is one of the largest surveys in the nation and is conducted and maintained by the UCLA Center for Health Policy Research and its collaborators.
CHIS obtains data via phone interviews on extensive health related items such as health status, health conditions, health-related behaviors, health insurance coverage, access to health care services, and other health and health related issues~\cite{CHIS}.
One major difficulty of analyzing the clustering techniques on real data is that there is not an obvious ground truth to which to compare.
To overcome this, we first eliminate irrelevant variables such as individuals' serial numbers and zip codes. This yields a data matrix with 3379 individuals and 144 variables. Then we apply both spectral clustering and LPA, and identify those individuals who were clustered in the same way by both methods. This leaves 2836 individuals as the ``consistent population'', which is 83.93\% of the original data. Next, we sample $1000$ individuals (without replacement) from this consistent population, and repeat this process $40$ times. In each of these $40$ trials, we randomly remove $10\%$, $30\%$, and $50\%$ of the entries to mimic missing data. Finally, we apply matrix completion/spectral clustering and FIML/LPA and compute the mean correct classification rate (CCR) using the consistent clusters as ground truth for each approach.
The averaged rates are illustrated in Figure~\ref{fig3}. Though, as expected, the correct classification rates decrease monotonically, the CCR for compressive sensing/spectral clustering decays at a much slower rate than that of FIML/LPA. Regardless, both approaches generate quite reliable outcomes overall, even when the proportion of missing data is as high as $50\%$.
\begin{figure}[ht]
\includegraphics[width=3in]{realdata.eps}
\caption{Missing data completion and clustering analysis on CHIS data.
\label{fig3}}
\end{figure}
\section{Summary}
Using two groups of simulated data, we observe that spectral clustering may be preferable to LPA, and that compressive sensing methods may have an advantage over FIML in giving the recovered data matrix explicitly, taking advantage of nearly low-rank data.
The contribution of this paper is to bring two methods from applied mathematics into health behavior research, and verify their advantages over traditionally used methods. Our future research direction is to further compare the performance of these methods on real health (and other types of) data, and aim to identify in what settings each type of method is preferred. This identification can aid in the design of health data surveys allowing for intentional missing data, thereby reducing participant burden and cost.
\section*{Acknowledgement}
Support for this research was provided by a BLAIS Challenge Award (221-2170045) received from Claremont Graduate University, Claremont, CA.
\bibliographystyle{plain}
\section{Introduction}
Inflationary cosmology, first proposed by Guth in 1980 \cite{Starobinsky:1980te,Sato:1981ds,Sato:1980yn,Kazanas:1980tx,Guth:1980zm,Linde:1981mu,Albrecht:1982wi}, remains a widely studied and successful approach for understanding the early universe, solving at a stroke the flatness and horizon problems, as well as providing a mechanism to generate the observed primordial power spectrum \cite{Starobinsky:1979ty,Mukhanov:1981xt,Mukhanov:2003xw,Linde:1983gd,Hawking:1982cz,Hawking:1982my,Starobinsky:1982ee,Guth:1982ec,Bardeen:1983qw}. Inflation relates the evolution of the universe to one or more scalar \textit{inflaton} fields, the properties of which dictate the dynamics of the period of rapidly accelerating expansion, which terminates locally in a period of reheating, followed by radiation-dominated expansion. While we cannot precisely determine the specific form of the potential for the inflaton field or fields, different choices of potential result in different values for cosmological parameters, which are distinguishable by observation \cite{Dodelson:1997hr,Kinney:1998md}. Recent data, in particular the Planck measurement of Cosmic Microwave Background (CMB) anisotropy and polarization \cite{Ade:2015lrj,Ade:2015xua,Aghanim:2015xee,Aghanim:2018eyx,Aghanim:2019ame,Akrami:2019bkn,Akrami:2019izv,Akrami:2018odb} and the BICEP/Keck measurement of CMB polarization \cite{Ade:2015fwj,Ade:2015tva}, now place strong constraints on the inflationary parameter space, falsifying many previously viable inflationary potentials, including some of the simplest and most theoretically attractive models.
One such class of models is Natural Inflation, put forward in 1990 by Freese, Frieman and Olinto \cite{Freese:1990rb} as a solution to certain theoretical challenges inherent to slow-roll inflation models, which are limited by the fact that, in order to generate an adequate amount of inflation, the inflaton potential must be very nearly flat. This creates fine-tuning problems; in particular, quantum corrections in the absence of a symmetry generically spoil the flatness of the potential, which is known as the \textit{$\eta$-problem}.
Natural inflation (NI) models avoid this by using an axionic field to drive inflation, where the term ``axionic'' refers in the most general sense to a field which has a flat potential as a result of a shift symmetry. During the early Universe, explicit breaking of the shift symmetry gives rise to slow-roll expansion. In this sense the inflaton in NI is a pseudo-Nambu-Goldstone boson, with a nearly flat potential, exactly as inflation requires.
When Freese, Frieman and Olinto proposed their original model of Natural Inflation in 1990, they modeled the inflaton field directly on the QCD axion, albeit with a different mass scale. As with the QCD axion, the potential was an ordinary cosine, with a height of $\approx 10^{16}\ \mathrm{GeV}$ and a width of at least $10^{19}\ \mathrm{GeV}$, to match CMB observations \cite{Freese:2014nla,Freese:1990rb}. Subsequently, many other variants have been proposed, such as axion monodromy \cite{Silverstein:2008sg,Kobayashi:2014ooa,Higaki:2014sja}, but for the purposes of this paper, we shall confine ourselves to discussing the original cosine potential, though the principle can be extended to cover many other potentials.
The two primary observable parameters of the primordial power spectrum (as seen in the CMB) used to determine the viability of inflationary models are: 1. $r$, the ratio of tensor (gravitational wave) to scalar (density) perturbations, ($r\equiv P_T/P_R$), and 2. $n_s$, the spectral index, which describes the degree of scale dependence of the fluctuation amplitude. While neither of these carry any direct dependence on post-inflationary dynamics, they do depend on the value of $N_k$, the number of e-folds of expansion between the point when fluctuation modes on the pivot scale (generally taken to be $k=0.002\ \mathrm{Mpc}^{-1}$) exited the horizon and the end of inflation. This is typically around 60 e-folds, but the actual value depends on the evolution of the universe between the end of inflation and nucleosynthesis, a dependence we explain in some detail in Sec. 2.2 of this paper. We find that, assuming conventional post-inflationary dynamics, current observations entirely rule out standard Natural Inflation, but that by positing a period between the end of inflation and nucleosynthesis during which the universe expands at a rate slower than radiation domination, we can bring it back into agreement with the data. (Other modifications to improve agreement with data have been proposed, for example thermal dissipative effects \cite{Reyimuaji:2020bkm} and non-minimal coupling to gravity \cite{Reyimuaji:2020goi}.)
The structure of this paper is as follows: in Sec. 2, we cover inflationary theory, first in terms of the mechanics of Natural Inflation, and then explain how the inflationary epoch parameters relate to modern-day observables; in Sec. 3, we discuss the way the reheating parameters influence these observables, and then the methodology of our calculations; Sec. 4 presents our results, first in the conventional case and then in the more general scenario, and we present our conclusions in Sec. 5.
\section{Theory}
\subsection{Natural Inflation}
In order to generate sufficient inflation while still satisfying observational constraints on anisotropy in the cosmological microwave background (CMB), the inflaton field must be characterized by an extremely flat potential; assuming a single-field inflationary model, the ratio of the potential's height to its width must satisfy \cite{Adams:1990pn}
\begin{equation}
\chi \equiv \frac{\Delta V}{\left(\Delta \phi \right)^4}\leq \mathcal{O}\left(10^{-6}-10^{-8}\right),
\label{nim1}
\end{equation}
where $\Delta V$ is the change in the inflationary potential $V(\phi)$ and $\Delta \phi$ is the change in the inflaton field $\phi$ during the slow roll portion of the inflationary period. The inflaton self-coupling must therefore be exceedingly weak, with an effective quartic self-coupling constant $\lambda_{\phi} <10^{-12}$ for reasonable models. \cite{Freese:2014nla} This extreme ratio between mass scales is referred to as the ``fine-tuning'' problem in inflation, a review of which can be found in Ref. \cite{Bassett:2005xm,Kinney:2009vz,Baumann:2009ds}.
Natural Inflation approaches this problem by positing that the inflaton potential is flat due to the presence of a shift symmetry, that is, $V(\phi)=V(\phi+\mathrm{const.})$. If the symmetry were perfect, such an inflaton could not roll and drive inflation, so we require a further explicit symmetry breaking, rendering the inflaton a pseudo-Nambu-Goldstone boson (PNGB) with the desired very nearly (but not \textit{exactly}) flat potential. Such a model naturally generates the small mass scale ratio specified in Eq. (\ref{nim1}); for comparison, the QCD axion has a corresponding ratio of order $10^{-64}$, significantly smaller than inflation requires \cite{Freese:2014nla}. Furthermore, this ratio of scales is stable to radiative corrections because of the underlying global symmetry of the Lagrangian.
The original NI model, which we address in this paper, is characterized by a potential of the form \cite{Freese:1990rb}
\begin{equation}
V(\phi)=\Lambda^4\left[1\pm \cos\left(N\phi/f\right)\right],
\end{equation}
where we will be considering the positive root, and defining $\frac{f}{N}\equiv\mu$, both of which can be done without loss of generality. Thus, the actual form of the potential we will be discussing is
\begin{equation}
V(\phi)=\Lambda^4\left[1+ \cos\left(\phi/\mu\right)\right].
\label{nim2}
\end{equation}
Given appropriate scales for $\Lambda$ and $\mu$ ($\approx m_{\mathrm{GUT}}$ and $\approx m_{\mathrm{Pl}}$, respectively), such an inflaton potential can drive inflation, producing an appropriately small value of $\chi$ to satisfy Eq. (\ref{nim1}) with an inflaton mass of $m_{\phi}=\Lambda ^2 /\mu \approx \mathcal{O} \left( 10^{11}-10^{13}\ \mathrm{GeV}\right)$ \cite{Freese:2014nla}.
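As a quick consistency check on these scales (a sketch using the representative values $\Lambda = 10^{16}\ \mathrm{GeV}$ and $\mu = m_{\mathrm{Pl}} \approx 1.22\times 10^{19}\ \mathrm{GeV}$; these particular numbers are illustrative choices, not fits):

```python
Lambda_ = 1e16  # height scale Lambda in GeV (~ m_GUT)
mu = 1.22e19    # width scale mu in GeV (~ m_Pl)

m_phi = Lambda_**2 / mu  # inflaton mass m_phi = Lambda^2 / mu
print(f"m_phi = {m_phi:.2e} GeV")  # ~ 8e12 GeV, inside the quoted 1e11-1e13 GeV window
```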
\subsection{The Generation of Observables}
As the inflaton field rolls along the potential, quantum fluctuations generate perturbations in the metric, which rapidly increase in size until their wavelength exceeds the horizon size, at which point they `freeze out' and cease to evolve until they re-enter the horizon after inflation ends. The two primary types of perturbations are scalar modes, which represent fluctuations in density, and tensor modes, which represent gravitational wave fluctuations. The perturbation amplitude of these two types of fluctuations are given by
\begin{equation}
P_{\mathcal{R}}^{1/2}\left(k\right)=\frac{H^2\left(k\right)}{2\pi \dot{\phi}_k},
\end{equation}
for scalar modes, and
\begin{equation}
P_{\mathcal{T}}^{1/2}\left(k\right)=\frac{4H\left(k\right)}{\sqrt{\pi} m_{\mathrm{Pl}}},
\end{equation}
for tensor modes. In both cases, the left-hand side of the equation denotes the perturbation amplitude when a specific wavelength (pivot scale $k$) re-enters the Hubble radius after inflation, while the right-hand side is evaluated at the point during inflation where that same comoving wavelength froze out. \cite{Freese:2014nla} These two amplitudes are critical for evaluating the viability of inflationary models, as (under the slow roll approximation) $P_{\mathcal{R}}[k=0.002]\equiv A_s$ fixes the height of the inflationary potential,
\begin{equation}
A_s = \frac{H^2}{8 \pi^2 M_{\mathrm{P}}^2\epsilon}\bigg\vert_{k=aH},
\end{equation}
and the spectral index $n_s$ reflects the scale dependence of $P_{\mathcal{R}}$,
\begin{equation}
n_s-1 \equiv \frac{\mathrm{d}\ln P_{\mathcal{R}}}{\mathrm{d}\ln k},
\end{equation}
while the tensor amplitude is generally expressed in terms of the ratio between it and the scalar amplitude, $r\equiv P_{\mathcal{T}} / P_{\mathcal{R}}$.
Since we use $A_s$ to normalize our potential, this leaves us $r$ and $n_s$ as observables whose values we can use to determine the viability of our models.
\FloatBarrier
\section{After Natural Inflation}
Standard inflationary cosmology assumes that at the end of inflation, the universe is characterized by a temperature of approximately $\mathcal{O}\left(10^{16}\ \mathrm{GeV}\right)$ and equation of state $w=-1/3,$ after which the universe undergoes a period of reheating, during which the inflaton field decays. Reheating can be instantaneous or protracted, and is generally defined by two parameters, the average equation of state, $\overline{w}$, and either the number of e-folds before the universe enters a thermal-equilibrium, radiation-dominated epoch, $N_{\mathrm{re}}$, or, equivalently, the temperature at which this transition occurs, $T_{\mathrm{re}}$. This transition necessarily occurs before big bang nucleosynthesis (BBN), and therefore $T_{\mathrm{re}}>\mathcal{O}\left(1\ \mathrm{MeV}\right)$.\footnote{This is a rough estimate. For a more accurate treatment, see Refs. \cite{Kawasaki:2000en,Sabir:2019xwk}.} However, while BBN bounds place a lower bound on the onset of radiation domination at a temperature of $T_{\mathrm{re}}=1\ \mathrm{MeV}$, unconventional dynamics below the scale of electroweak symmetry breaking at about $100\ \mathrm{GeV}$ could potentially have interesting implications for baryogenesis, and thus values of $T_{\mathrm{re}}$ below $100\ \mathrm{GeV}$ should be handled with some caution, particularly if the equation of state during reheating is greater than $1/3$ \cite{Cook:2015vqa,Tanin:2020qjw}.
Standard reheating assumes the weakly coupled decay of an oscillatory inflaton field at the end of inflation, and thus a reheating period characterized by $\overline{w}=0$, but many more complex models have been put forward \cite{Kofman:1997yn,Kofman:1994rk,Felder:2001kt,Felder:2000hj,Abolhasani:2009nb,Dufaux:2006ee,Shuhmaher:2005mf,Cook:2015vqa}. For such models, the average equation of state $\overline{w}$ can go as high as $\overline{w}=1$; any equation of state greater than 1 would violate the dominant energy condition of general relativity (and hence causality), so we only consider $\overline{w} \leq 1.$ For our purposes, we make the simplifying assumption that the equation of state is effectively constant, or at least that we can approximate it by its average value.
To determine the viability of inflationary models, we must relate present day observations to the dynamics of the inflaton field during the inflationary period. This is done by relating a comoving scale, $k$, observed today, to the point during inflation when fluctuations on that scale exited the horizon, defined by
\begin{equation}
N_{k} \equiv \ln{\left(\frac{a_{\mathrm{end}}}{a_k}\right)}.
\label{Nkdef}
\end{equation}
To find the relationship between scale $k$ and $N_k$, we begin with the expression relating a given wavenumber $k$ to the comoving Hubble scale, $\left(aH\right)$, at the moment it froze out during inflation, $k = \left(aH\right)_k$. We rewrite this as
\begin{equation}
\ln{\left(\frac{k}{a_0H_0}\right)}=\ln{\left[\frac{k}{\left(aH\right)_k}\frac{\left(aH\right)_k}{a_0H_0}\right]}=\ln{\left(\frac{\left(aH\right)_k}{a_0H_0}\right)},
\end{equation}
expanding the log to cover the various evolutionary epochs as
\begin{equation}
\ln{\left(\frac{k}{a_0H_0}\right)}= \ln{\left(\frac{\left(aH\right)_k}{\left(aH\right)_{\mathrm{end}}}\right)}+\ln{\left(\frac{\left(aH\right)_{\mathrm{end}}}{\left(aH\right)_{\mathrm{RD}}}\right)}+\ln{\left(\frac{\left(aH\right)_{\mathrm{RD}}}{\left(aH\right)_{\mathrm{eq}}}\right)}+\ln{\left(\frac{\left(aH\right)_{\mathrm{eq}}}{\left(aH\right)_{\mathrm{0}}}\right)},
\label{epochexpand}
\end{equation}
where the subscript ``end'' represents the value at the end of inflation, the subscript ``RD'' is equivalent to ``re'', indicating the value at the end of reheating/the beginning of radiation domination, and ``eq'' indicates the value at matter-radiation equality. Assuming a constant equation of state, we next use the identity
\begin{equation}
aH \propto a^{-(1+3w)/2},
\end{equation}
and the definition of $N_k$ given in Eq. \ref{Nkdef}.
Substituting these two identities into \ref{epochexpand}, we have
\begin{equation}
\ln{\left(\frac{k}{a_0H_0}\right)}=-N_k+ \ln{\left(\frac{H_k}{H_{\mathrm{end}}}\right)}+\left(\frac{1+3\overline{w}}{2}\right)N_{\mathrm{re}}-N_{\mathrm{RD}}+\ln{\left(\frac{\left(aH\right)_{\mathrm{eq}}}{\left(aH\right)_{\mathrm{0}}}\right)}.
\label{Nk1}
\end{equation}
Here, $N_{\mathrm{RD}}$ is the number of e-folds of expansion between the onset of radiation domination and matter-radiation equality. To evaluate this, we rewrite it as
\begin{equation}
N_{\mathrm{RD}}=\ln\left(\frac{a_{\mathrm{RD}}}{a_{\mathrm{eq}}}\right)=-\ln{\left(\frac{T_{re}}{T_{\mathrm{eq}}}\right)}-\frac{1}{3}\ln{\left(\frac{g_{*S}\left[T_{re}\right]}{g_{*S}\left[T_{\mathrm{eq}}\right]}\right)}.
\end{equation}
From the Planck values for the matter and photon densities, \cite{Aghanim:2018eyx}, we can write the redshift of matter/radiation equality as
\begin{equation}
1+z_{\mathrm{eq}}=\left(\frac{a_0}{a_{\mathrm{eq}}}\right)=\left(\frac{T_{\mathrm{eq}}}{T_{\mathrm{0}}}\right)=\left(\frac{\Omega_mh^2}{\Omega_{\gamma}h^2}\right)=3404,
\end{equation}
so that
\begin{equation}
T_{\mathrm{eq}}=3404T_0=9295\mathrm{K} =8.01\times 10^{-10}\ \mathrm{ GeV}.
\end{equation}
We next derive an equation for $\left(aH\right)_z / \left(aH\right)_{\mathrm{0}}$ from the Friedmann equation, $H^2 / H_0^2=\Omega_ma^{-3}+\Omega_{\gamma}a^{-4}+\Omega_{\Lambda},$
\begin{equation}
\frac{\left(aH\right)^2_z}{\left(aH\right)^2_0}=\frac{1}{(1+z)^2}\left[\Omega_{m0}(1+z)^3+\Omega_{\gamma 0}(1+z)^4+\Omega_{\Lambda 0}\right].
\end{equation}
Taking $z=z_{\mathrm{eq}}=3404$, $\Omega_{m0}=0.3166,$ $\Omega_{\gamma 0}=9.32\times 10^{-5},$ and $\Omega_{\Lambda 0}=1-\Omega_{m 0}-\Omega_{\gamma 0}=0.6833,$ we find
\begin{equation}
\ln\left(\frac{\left(aH\right)_{\mathrm{eq}}}{\left(aH\right)_0}\right)=3.839.
\end{equation}
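As a sanity check, this value is easy to reproduce directly from the Friedmann equation with the density parameters quoted above (a short Python sketch, not part of the original analysis):

```python
import math

z_eq = 3404
Omega_m, Omega_gamma = 0.3166, 9.32e-5
Omega_Lambda = 1 - Omega_m - Omega_gamma  # flat universe

# (aH)_z / (aH)_0 = sqrt(Omega_m (1+z)^3 + Omega_gamma (1+z)^4 + Omega_Lambda) / (1+z)
ratio = math.sqrt(Omega_m * (1 + z_eq)**3
                  + Omega_gamma * (1 + z_eq)**4
                  + Omega_Lambda) / (1 + z_eq)
ln_ratio = math.log(ratio)
print(round(ln_ratio, 3))  # -> 3.839
```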
Plugging these values into \ref{Nk1}, we find a straightforward equation for $N_k,$
\begin{equation}
N_k=-\ln{\left(\frac{k}{a_0H_0}\right)}+ \ln{\left(\frac{T_{\mathrm{re}}}{10^{25}\ \mathrm{ eV}}\right)}+\frac{1}{3}\ln{\left(\frac{g_{*S}\left[T_{re}\right]}{g_{*S}\left[T_{\mathrm{eq}}\right]}\right)}+\left(\frac{1+3\overline{w}}{2}\right)N_{\mathrm{re}}+\ln{\left(\frac{H_{k}}{H_{\mathrm{end}}}\right)}+61.62,
\label{Nk2}
\end{equation}
an equation which explicitly shows the dependence of $N_k$ on the reheating epoch. Since the values of $r$, $n_S$, and the normalization of the potential (and therefore $V_{\mathrm{end}}$) all depend on evaluating certain inflationary parameters at the point when the pivot-scale modes froze out, changing $N_k$ changes that evaluation point, and results in shifting all of these, shifts which need to be taken into account when evaluating the observable parameters of inflationary models.
\subsection{The effects of $\overline{w}$ and $T_{\mathrm{re}}$. }
In this section, we discuss how $\overline{w}$ influences $N_k$ (and consequently $r$ and $n_S$) in terms of two cases: $\overline{w}>1/3$ and $\overline{w}<1/3$. This distinction reflects the fact that $\overline{w}=1/3$ corresponds to instantaneous reheating followed by radiation domination, as it implies radiation domination for the entire period between the end of inflation and BBN. The cases $\overline{w}<1/3$ and $\overline{w}>1/3$ exhibit drastically different behaviors, which require separate discussions. The parameter $T_{\mathrm{re}}$ has no other direct impact, but decreasing it lengthens the transition period, amplifying the effect relative to instantaneous reheating by extending the duration of expansion with equation of state $\overline{w}$.
To obtain a bound, we take the limit of the longest possible $\overline{w}$-period, so that $T_{\mathrm{re}}=T_{\mathrm{BBN}}$.
The first equation we use to parametrize this period is:
\begin{equation}
\dfrac{\rho_{\mathrm{end}}}{\rho_{\mathrm{BBN}}} =\left(\frac{a_{\mathrm{end}}}{a_{\mathrm{BBN}}}\right)^{-3\left(1+\overline{w}\right)}.
\label{writ1}
\end{equation}
We further take equations for $\rho_{\mathrm{end}}$ and $\rho_{\mathrm{BBN}}$ \cite{Cook:2015vqa}:
\begin{equation}
\rho_{\mathrm{end}}=\frac{3}{2}V_{\mathrm{end}},
\label{writ2}
\end{equation}
and
\begin{equation}
\rho_{\mathrm{BBN}}=\frac{\pi^2}{30}g_{\mathrm{BBN}}T_{\mathrm{BBN}}^4,
\label{writ2.5}
\end{equation}
where ``$g_{\mathrm{BBN}}$'' represents the number of relativistic degrees of freedom at BBN.
\begin{figure}
\includegraphics[scale=0.35]{NatInflNNvsw.pdf}
\caption{Number of e-folds of inflation vs equation of state during $\overline{w}$ period. }
\label{figNNvsw}
\end{figure}
While the value of $V_{\mathrm{end}}$ depends on both the model and the number of e-folds of inflation, for simplicity we first hold it constant, so we can substitute these into Equation (\ref{writ1}). For ease of notation, we define a parameter $\Gamma \equiv \rho_{\mathrm{end}} / \rho_{\mathrm{BBN}}$, and combine Eqs. (\ref{writ1}), (\ref{writ2}), and (\ref{writ2.5}):
\begin{equation}
\dfrac{\rho_{\mathrm{end}}}{\rho_{\mathrm{BBN}}} =\dfrac{\frac{3}{2}V_{\mathrm{end}}}{\frac{\pi^2}{30}g_{\mathrm{BBN}}T_{\mathrm{BBN}}^4}=\left(\frac{a_{\mathrm{end}}}{a_{\mathrm{BBN}}}\right)^{-3\left(1+\overline{w}\right)}=\Gamma.
\label{writ3}
\end{equation}
Meanwhile, we know that $\left(aH\right)^{-1}\propto a^{\left(1+3w\right)/2}$, so that we can define the change in the size of the comoving horizon during the transition period as
\begin{equation}
\frac{\left(aH\right)^{-1}|_{\mathrm{BBN}}}{\left(aH\right)^{-1}|_{\mathrm{end}}}=\left(\frac{a_{\mathrm{BBN}}}{a_{\mathrm{end}}}\right)^{\frac{1}{2}\left(1+3\overline{w}\right)}.
\label{writ4}
\end{equation}
By combining equations \ref{writ3} and \ref{writ4}, we write the change in $\left(aH\right)^{-1}$ during the $\overline{w}$ period as
\begin{equation}
\frac{\left(aH\right)^{-1}|_{\mathrm{BBN}}}{\left(aH\right)^{-1}|_{\mathrm{end}}}=\Gamma^{\frac{1+3\overline{w}}{6\left(1+\overline{w}\right)}}\equiv\Gamma_{\overline{w}},
\end{equation}
which we can compare to instantaneous reheating by taking the ratio of $\Gamma_{\overline{w}}$ to $\Gamma_{\gamma}$ (the value of $\Gamma$ assuming instantaneous reheating and subsequent radiation domination):
\begin{equation}
\frac{\Gamma_{\overline{w}}}{\Gamma_{\gamma}}=\Gamma^{\frac{1}{6}\left(\frac{1+3\overline{w}}{1+\overline{w}}-\frac{3}{2}\right)}.
\end{equation}
Examining this expression, we can see that, for any fixed value of $\Gamma$, having $\overline{w}>1/3$ leads to an \textit{increased} change in the size of the horizon between the end of inflation and Big Bang Nucleosynthesis, whereas $\overline{w}<1/3$ results in a \textit{reduced} change in the size of the horizon during this period. This explains the impact $\overline{w}$ has on $N_k$, and consequently on $n_s$ and $r$: our reconstruction of the horizon size at the end of inflation depends on the evolution of the horizon between the end of inflation and BBN, and to match the pivot scale, $\overline{w}>\frac{1}{3}$ requires \textit{more} e-folds, while $\overline{w}<\frac{1}{3}$ requires \textit{fewer} e-folds, as shown in Figure \ref{figNNvsw}.
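The sign of the exponent of $\Gamma$ in this ratio is easy to check numerically (a small sketch; the function name is ours):

```python
def exponent(w):
    """Exponent of Gamma in the ratio Gamma_wbar / Gamma_gamma."""
    return ((1 + 3 * w) / (1 + w) - 1.5) / 6

print(exponent(1 / 3))  # -> 0.0: w = 1/3 reproduces instantaneous reheating
print(exponent(1.0))    # positive: larger change in horizon size before BBN
print(exponent(0.0))    # negative: smaller change in horizon size before BBN
```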
\FloatBarrier
\subsection{Methodology}
In order to exactly calculate the $r$ and $n_S$ values generated by each combination of inflationary potential and set of reheating parameters, we created a Mathematica code which takes an un-normalized inflationary potential and a corresponding mass scale $\mu$ (in this case, $V(\theta)=1+\cos{\theta}$, where $\theta \equiv \phi / \mu$) and the two reheating parameters (average equation of state during reheating, $\overline{w}$, and the temperature at the onset of radiation domination, $T_{\mathrm{re}}$) and outputs predicted values of $N_k$, $r$ and $n_S$.
For the first step, the desired output is the potential at the end of inflation ($V_{\mathrm{end}}$) and $H_k$, both of which depend on the number of e-folds of inflation ($N_k$). The formula for $V_{\mathrm{end}}$ can be written in terms of $\theta_k$ and $\theta_{\mathrm{end}}$, where $\theta_{\mathrm{end}}$ is the value of the field at the end of inflation, and $\theta_k$ is the value at $N_k$ e-folds, as \cite{Freese:2014nla}:
\begin{equation}
V_{\mathrm{end}} = \frac{3\mathrm{A_s}M^2}{128 \pi}\left(\frac{V'\left(\theta_k\right)}{V\left(\theta_k\right)}\right)^2\left(\frac{V\left(\theta_{\mathrm{end}}\right)}{V\left(\theta_k \right)}\right).
\label{Vend}
\end{equation}
Here $M \equiv m_{\mathrm{Pl}} / \mu$, $\mathrm{A_s}$ is the initial amplitude of the pivot-scale curvature fluctuations (taken here to be $2.105\times 10^{-9}$, in accordance with the most recent Planck results, \cite{Aghanim:2018eyx}), and $\theta_{\mathrm{end}}$ is found by numerically solving the equation for $\epsilon(\theta_{\mathrm{end}})=1,$ while $\theta_k$ is found by numerically solving
\begin{equation}
N_k = \frac{8 \pi}{M^2}\int_{\theta_k}^{\theta_{\mathrm{end}}}\left[\frac{V(\theta)}{V'(\theta)}\right]\mathrm{d}\theta
\end{equation}
for $\theta_k$ given a certain value of $N_k$. (Here a prime indicates a derivative taken with respect to $\theta,$ rather than $\phi.$)
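As an illustration of this procedure, the following sketch reproduces the core of the pipeline in Python rather than Mathematica, using the standard slow-roll expressions $n_s = 1-6\epsilon+2\eta$ and $r = 16\epsilon$, with $\epsilon = (M^2/16\pi)(V'/V)^2$ and $\eta = (M^2/8\pi)V''/V$ in the $m_{\mathrm{Pl}}$ conventions above. The choices $\mu = m_{\mathrm{Pl}}$ and $N_k = 60$ are illustrative assumptions, and the e-fold integral is evaluated in closed form rather than numerically:

```python
import math

M = 1.0   # M = m_Pl / mu; take mu = m_Pl for illustration
N_k = 60  # e-folds between pivot-scale horizon exit and the end of inflation

# V(theta) = 1 + cos(theta).  Slow-roll parameters:
#   eps = (M^2/16 pi)(V'/V)^2 = (M^2/16 pi)(1 - cos t)/(1 + cos t)
#   eta = (M^2/8 pi) V''/V    = -(M^2/8 pi) cos t/(1 + cos t)
eps = lambda t: (M**2 / (16 * math.pi)) * (1 - math.cos(t)) / (1 + math.cos(t))
eta = lambda t: -(M**2 / (8 * math.pi)) * math.cos(t) / (1 + math.cos(t))

# End of inflation, eps(theta_end) = 1, solved in closed form:
A = 16 * math.pi / M**2
cos_end = (1 - A) / (1 + A)

# The e-fold integral N_k = (8 pi/M^2) int V/|V'| dtheta integrates to
# (8 pi/M^2) ln[(1 - cos theta_end)/(1 - cos theta_k)], so theta_k follows directly:
cos_k = 1 - (1 - cos_end) * math.exp(-N_k * M**2 / (8 * math.pi))
theta_k = math.acos(cos_k)

n_s = 1 - 6 * eps(theta_k) + 2 * eta(theta_k)
r = 16 * eps(theta_k)
print(f"n_s = {n_s:.4f}, r = {r:.4f}")  # roughly n_s ~ 0.95, r ~ 0.03 for mu = m_Pl
```

For $\mu = m_{\mathrm{Pl}}$ this gives $n_s \approx 0.95$ and $r \approx 0.03$, in line with the familiar result that Natural Inflation predicts a low spectral index for width scales near the Planck mass.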
We write the equation for $H_k$ in terms of the same parameters, as
\begin{equation}
H_k = m_{\mathrm{Pl}}\sqrt{\frac{\mathrm{A_s}}{16}}\left(\frac{V'\left(\theta_k\right)}{V\left(\theta_k\right)}\right).
\label{Hk}
\end{equation}
By rearranging equation 2.4 from \cite{Cook:2015vqa} for $N_{\mathrm{re}}$,
\begin{equation}
N_{\mathrm{re}}=\frac{1}{3(1+\overline{w})}\ln{\frac{45V_{\mathrm{end}}}{\pi^2g_{\mathrm{*S}}\left(T_{\mathrm{re}}\right) T_{\mathrm{re}}^4}},
\label{Nre1}
\end{equation} and combining it with equations \ref{Nk2}, \ref{Vend} and \ref{Hk}, we find an equation for $T_{\mathrm{re}}$:
\begin{eqnarray}
T_{\mathrm{re}} =&&\left(\frac{45V_{\mathrm{end}}}{g_{\gamma}\pi^2}\right)^{1/4} \exp\Bigg\{-\frac{3\left(1+\overline{w}\right)}{2(1+3\overline{w})} \times \cr
&&\left[N_k-\ln{\frac{T_{\mathrm{re}}}{10^{25}\ \mathrm{eV}}}-\frac{1}{3}\ln{\left(\frac{g_{*S}\left(T_{re}\right)}{g_{*S}\left(T_{\mathrm{eq}}\right)}\right)} +\ln{\frac{k}{a_0H_0}}-\ln{\frac{H_k}{H_{\mathrm{end}}}}-61.62\right]\Bigg\},
\label{Tre}
\end{eqnarray}
which we finally solve numerically to find the corresponding value of $N_k$ for the specified initial value parameters, giving us all the information we require to calculate the actual observables, $r$ and $n_s$.
We compare the model predictions to the regions of the $r$ / $n_s$ plane which are allowed by the Planck 2018 TT/TE/EE temperature and polarization data~\cite{Aghanim:2018eyx,Aghanim:2019ame,Akrami:2019bkn,Akrami:2019izv,Akrami:2018odb}, and the BICEP2/Keck Array 2015 combined polarization data~\cite{Ade:2015tva}. We include Baryon Acoustic Oscillation (BAO) data from the Sloan Digital Sky Survey Data Release 12 \cite{Alam:2015mbd}, the 6dF Data Release 3 \cite{Jones:2009yz}, and the Sloan Digital Sky Survey Data Release 7 main galaxy sample (MGS) \cite{Ross:2014qpa}. The allowed regions are calculated numerically using the \texttt{CosmoMC} Markov Chain Monte Carlo (MCMC) sampler~\cite{Lewis:2002ah} and the \texttt{CAMB} Boltzmann code. We fit to a seven-parameter $\Lambda$CDM+$r$ model with the following parameters:
\begin{itemize}
\item{Baryon density $\Omega_{\rm b} h^2$.}
\item{Dark matter density $\Omega_{\rm C} h^2$.}
\item{Angular scale of acoustic horizon $\theta$ at decoupling.}
\item{Reionization optical depth $\tau$.}
\item{Power spectrum normalization $A_s$.}
\item{Tensor-to-scalar ratio $r$, calculated at a pivot scale of $k = 0.05\ h \mathrm{Mpc}^{-1}$.}
\item{Scalar spectral index $n_{\rm S}$.}
\end{itemize}
In our analysis, we assume curvature $\Omega_{\rm k}$ is zero, and the Dark Energy equation of state is $w = -1$. We set the number of neutrino species $N_\nu = 3.046$, and neutrino mass $m_\nu = 0.06\ {\rm eV}$. We apply Metropolis-Hastings sampling to 8 chains running in parallel, and use a convergence criterion for the Gelman-Rubin parameter $R$ of $R - 1 < 0.05$.
\FloatBarrier
\section{Results}
\subsection{Case 1: Conventional Post-Inflationary Dynamics}
\begin{figure}
\centering
\includegraphics[width=0.9 \textwidth]{NatInflTvarRef.pdf}
\caption{ Tensor/scalar ratio $r$ vs spectral index $n_s$ for a variety of values of $T_{\mathrm{re}}$, assuming $\overline{w}=0$. The blue shaded region represents the allowed region for Planck 2018 + BICEP/Keck(BK15) Polarization + BAO, while the black line contours represent the addition of the SHOES $H_0$ data. (Given the statistically significant tension between the SHOES constraint on $H_0$ and the constraint from Planck, combining the two in a Bayesian fit is likely of limited value. We include the SHOES constraint only to show that our conclusion is not sensitive to the assumed value of $H_0$.) Instantaneous reheating comes closest to matching observations, but is still excluded from the 95\% confidence region. Decreasing $T_{\mathrm{re}}$ only increases the tension, moving the $r-n_s$ curves up and to the left, away from the allowed region. }
\label{FigTvarref}
\end{figure}
We first consider the simple case of conventional reheating directly to a radiation-dominated universe after inflation.
Since, near the minimum, the mass term dominates the potential, the average equation of state during reheating is $\overline{w}=0$, though the duration of reheating can vary. This case, shown in Fig. \ref{FigTvarref}, is disfavored at greater than 95\% confidence, and decreasing $T_{\mathrm{re}}$ decreases the number of e-folds of inflation, exacerbating the inconsistency with data. Note that in the limit of instantaneous reheating, $\overline{w}$ becomes irrelevant; the instantaneous reheating lines in Figures \ref{FigTvarref} and \ref{FigTvar} are identical, and equivalent to the scenario where $\overline{w}=1/3$.
\subsection{Case 2: General Post-Inflationary Dynamics}
While a cosine potential results in an equation of state during reheating of $\overline{w}=0$, `reheating' need not be the only process at work during the $\overline{w}$ period. If we posit some other interaction or field domination between the end of inflation and BBN, either during or after the decay of the inflaton field itself, then it could be possible to create a situation where $\overline{w}$ is greater than $1/3$ for an arbitrarily long period between inflation and BBN. If we allow $\overline{w}>\frac{1}{3}$, we can increase $N_{k}$, which shifts the $r-n_s$ curves down and to the right, reducing the tension with observations. Ignoring SHOES, if we take the limit $\overline{w}=1$, we find that Natural Inflation agrees with the 95\% range of current observations for $T_{\mathrm{re}}<10^{13}\ \mathrm{GeV}$, and the 68\% region if we extend the $\overline{w}$ period past the electroweak scale at $100\ \mathrm{GeV}$, as shown in Figure \ref{FigTvar}. Similarly, if we take the limit of $T_{\mathrm{re}}=1\ \mathrm{MeV}$, using a value of $\overline{w}\gtrsim 0.38$ brings the $r-n_s$ curve into the 95\% range, and $\overline{w}\gtrsim 0.75$ takes us into the 68\% region, as shown in Figure \ref{Figwvar}. However, as shown in Ref. \cite{Tanin:2020qjw}, we must be careful when extending reheating into late times, especially when it represents a `stiff' epoch (i.e. $\overline{w}>1/3$), since the energy density of the gravitational waves is amplified during such a period, and we can tightly constrain the stochastic GW background at BBN.
\begin{figure}
\centering
\includegraphics[width=0.9 \textwidth]{NatInflTvar.pdf}
\caption{Curves in the $r-n_s$ plane for a variety of values of $T_{\mathrm{re}}$ in the limit of $\overline{w}=1$. The blue shaded region represents Planck 2018 + BICEP/Keck(BK15) Polarization + BAO, while the black line contours represent the addition of the SHOES $H_0$ data. Note that, although the range in $n_s$ is different, the curve representing instantaneous reheating here is exactly equivalent to its counterpart on Figure \ref{FigTvarref}.}
\label{FigTvar}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.9 \textwidth]{NatInflwvar.pdf}
\caption{Curves in the $r-n_s$ plane for a variety of values of $\overline{w}$ in the limit of $T_{\mathrm{re}}=1\ \mathrm{MeV}$. The blue shaded region represents Planck 2018 + BICEP/Keck(BK15) Polarization + BAO, while the black line contours represent the addition of the SHOES $H_0$ data. }
\label{Figwvar}
\end{figure}
\FloatBarrier
\section{Conclusions}
In this paper, we have shown that the original model of Natural Inflation using a cosine potential is inconsistent with constraints on $r$ and $n_s$ at greater than 95\% confidence. However, if we allow for unconventional reheating characterized by a period of $\overline{w}>1/3$, Natural Inflation can be brought into agreement with current observations. While it is difficult to bring these models to within the 68\% confidence region without extending reheating into temperatures between electroweak symmetry breaking and nucleosynthesis, it is entirely possible to bring them into the 95\% confidence region, and by extending reheating below the electroweak scale, Natural Inflation with nonstandard reheating can generate $r-n_s$ values well within the 68\% confidence range, down to a minimum $r$ value of $r \sim \mathcal{O}(0.01)$. However, if future CMB experiments fail to detect tensor modes on that scale, even this extension of the parameter space accessible to Natural Inflation models will be insufficient, and this choice of scalar field potential will be ruled out entirely.
\section*{Acknowledgments}
This work is supported by the National Science Foundation under grants NSF-PHY-1719690 and NSF-PHY-2014021. This work was performed in part at the University at Buffalo Center for Computational Research.
\FloatBarrier
\bibliographystyle{JHEP}
\section{Introduction. Potential theory on multi-trees}
\label{intro}
Embedding theorems on graphs are interesting in particular because they are related to the structure of spaces of holomorphic functions. For the Dirichlet space on the disc $\mathbb D:=\{z: |z|<1\}$ this fact has been explored in \cite{ARS2002}, \cite{ARSW},
\cite{ARSW11}, and for the Dirichlet space on the bi-disc $\mathbb D^2$ in \cite{AMPS}, \cite{AMPVZ-K}, \cite{AHMV}. The bi-disc case is much harder, as the corresponding graph has cycles. One particularly interesting case is studied in \cite{Saw1}, where a small piece of the bi-tree is considered.
The difference between the one-parameter theory (the graph is a tree) and the two-parameter theory (the graph is a bi-tree) is huge.
One explanation is that in a multi-parameter theory all the notions of singular integrals, paraproducts, BMO, Hardy classes, etc. become much more subtle than in the one-parameter setting. There are many examples of this effect; it was demonstrated in results of S.-Y. A. Chang, R. Fefferman and L. Carleson, see \cite{Carleson}, \cite{Chang}, \cite{ChF1}, \cite{RF1}, \cite{TaoCar}.
The papers dealing with the poly-disc and multi-trees mentioned above all have a common feature: they are based on potential theory on multi-trees. Let us recall the main notation and facts of such a theory. We will do this for the bi-tree just for the sake of simplicity.
Let $T$ denote the dyadic rooted tree with root $o$, we can associate the vertices with dyadic sub-intervals of $I^0:=[0,1]$, and $o$ with $I^0$ itself. Similarly, let $T^2$ denote the dyadic rooted bi-tree with root $o$, we can associate the vertices with dyadic sub-rectangles of $Q^0:=[0,1]\times [0,1]$, and $o$ with $Q^0$ itself. Both objects have partial order, which is the same as inclusion for intervals, rectangles correspondingly.
Both objects have a natural integration operator, if $f$ is a non-negative function on $T$ or $T^2$, and $\alpha$ is a vertex of $T$ or $T^2$, then
$$
\mathbb I f(\alpha) := \sum_{\alpha'\ge \alpha} f(\alpha')\,.
$$
We can call $\mathbb I$ the Hardy operator on the corresponding graph: it sums the values of $f$ over all vertices lying on directed paths from $o$ to $\alpha$. For $T$ such a path is unique for any $\alpha$; for $T^2$ there are many such paths.
The formally adjoint operator is $\mathbb I^*$ and
$$
\mathbb I^* f(\alpha) := \sum_{\alpha'\le \alpha} f(\alpha')\,.
$$
Let us make a convention that always our $T$ and/or $T^2$ are {\it finite graphs}, maybe very deep, but finite, and leaves are dyadic intervals of size $2^{-N}$ in the case of $T$ or dyadic squares of size $2^{-N}\times 2^{-N}$ in the case of $T^2$.
Then $\mathbb I^*$ is always defined. The set of leaves is a ``boundary'' of the graph and is denoted $\partial T$ or $\partial T^2$ correspondingly.
Now we want to introduce the potential of a measure. Again for simplicity (this is not at all important) let us call a {\it measure} a function $\mu$ on $T^2$ that is identically zero on $T^2\setminus \partial T^2$ and is an arbitrary non-negative function on $\partial T^2$. We define measures on $T$ in the same way. Of course, what we are really doing is defining {\it granular measures} on $Q^0$ and $I^0$ correspondingly. Here {\it granular} means that our measures have constant density with respect to dyadic squares of size $2^{-N}\times 2^{-N}$ or dyadic intervals of size $2^{-N}$ correspondingly.
We wish all the estimates in our theory to be {\it independent} of $N$. Then, taking the limit $N\to\infty$, we can eventually treat {\it all} measures on $Q^0$ or $I^0$.
Given such a measure $\mu$ we define its potential at a vertex $\alpha$ of $T$ or $T^2$ as
$$
\mathbb V^\mu (\alpha) := \mathbb I\circ \mathbb I^*(\mu) (\alpha)\,.
$$
Notice that since $\alpha$ is actually a dyadic rectangle $R=I\times J$ inside $Q^0$ (or a dyadic interval $I$ inside $I^0$), $\mathbb I^*(\mu)(\alpha)$ is just $\mu(R)$ ($\mu(I)$ correspondingly).
\bigskip
But $\mathbb V^\mu(\alpha)$ is a more complicated object, it is the sum of $\mu(R')$ over all $R'$ containing $R$, where $R$ is associated with vertex $\alpha\in T^2$ (correspondingly the sum of $\mu(I')$ over all $I'$ containing $I$, where $I$ is associated with vertex $\alpha\in T$).
\bigskip
Let us work on $T$ and let $\mathbb V^\mu \le 1$ on $\supp\mu$ (these are the vertices of $\partial T$ where $\mu>0$). Then we can easily see that $\mathbb V^\mu\le 1$ everywhere. In fact, without loss of generality $\mu\neq 0$; let $\beta\in \partial T$ be such that $\mu(\beta)=0$.
\medskip
We can find the unique smallest predecessor $\gamma>\beta$ such that there is $\alpha\in \partial T$ with $\mu(\alpha)>0$ and $\alpha<\gamma$. The key point is that the smallest such $\gamma > \beta$ is unique because we are on a simple tree $T$. Now $\mathbb V^\mu(\gamma) \le \mathbb V^\mu (\alpha)\le 1$, as $\alpha\in \supp\mu$ and the potential $\mathbb V$ of any positive measure on $T$ (and on $T^2$) is always a decreasing function.
\medskip
But $\mathbb V^\mu(\beta)=\mathbb V^\mu(\gamma)$ because $\mathbb I^*\mu(\tau) =0$ for all $\tau$ with $\beta\le \tau <\gamma$: indeed, $\gamma$ is the smallest interval containing the interval $\beta$ that meets $\supp\mu$.
So we have proved that $\mathbb V^\mu \le 1$ on $\supp\mu$ implies $\mathbb V^\mu\le 1$ everywhere on $\partial T$. Then by the monotonicity of potentials it is $\le 1$ everywhere on $T$.
\bigskip
This claim is blatantly false on $T^2$. The problem is that there can be a huge family $\Gamma$ of $\gamma>\beta$ such that $\mathbb I^*\mu(\gamma)>0$ and for any pair $\gamma_1, \gamma_2\in \Gamma$ neither is smaller than the other. The reasoning above fails, and moreover there are plenty of simple examples of $\mu$ on $\partial T^2$ such that
$$
\mathbb V^\mu\le 1\,\,\text{on}\,\, \supp\mu,\,\,\text{but} \,\, \sup_{T^2}\mathbb V^\mu\ge C,
$$
where $C$ is as large as one wishes (if $N$ is chosen large enough).
\bigskip
This phenomenon is called {\it the lack of the maximum principle}, and it reveals itself prominently in the following effect.
Let $\mathcal T$ denote either $T$ or $T^2$.
Let us fix $\delta>0$ (not necessarily small but can be small) and consider
$$
E_\delta:=\{\alpha \in \mathcal T: \mathbb V^\mu(\alpha)\le \delta\}\,.
$$
Let
$$
\mathbb V_\delta^\mu(\alpha) = \mathbb I(\mathbf{1}_{E_\delta} \mathbb I^*\mu)(\alpha)\,.
$$
The expression (integration in the second equality is with respect to counting measure on $\mathcal T$)
$$
\mathcal E[\mu] =\int_{\mathcal T} \mathbb V^\mu\, d\mu= \int_{\mathcal T} (\mathbb I^*(\mu))^2
$$
is called the {\it energy of} $\mu$. The expression
$$
\mathcal E_\delta[\mu] =\int_{\mathcal T} \mathbb V_\delta^\mu\, d\mu= \int_{E_\delta} (\mathbb I^*(\mu))^2
$$
is called the {\it partial energy of} $\mu$.
\bigskip
It is trivial that if $\mathcal T=T$ then
\begin{equation}
\label{de}
\mathbb V_\delta^\mu\le \delta
\end{equation}
uniformly. The reasoning is exactly the same as the one above for the maximum principle. The consequence is the following partial energy estimate:
\begin{equation}
\label{en}
\mathcal E_\delta[\mu] \le \delta\, |\mu|\,.
\end{equation}
\bigskip
But \eqref{de} can easily fail if $\mathcal T=T^2$. We will show below an example where even \eqref{en} fails.
All the estimates in the papers \cite{AMPS}, \cite{AMPVZ-K}, \cite{AHMV} are based on a weaker version of \eqref{en}, which is true and which we call {\it the surrogate maximum principle}:
\begin{equation}
\label{smp0}
\mathcal E_\delta[\mu] \le C \delta^\tau\, \mathcal E[\mu]^{1-\tau}|\mu|^\tau\,.
\end{equation}
Here $C$ is universal.
For $\mathcal T=T^3$ we can prove \eqref{smp0} with $\tau=1/3$; for $\mathcal T=T^2$ we could originally prove it for $\tau =1/2$ and later for any $\tau<1$. For $\tau=1$ it is false on $T^2$, see below. For $\mathcal T=T^4$ we cannot prove \eqref{smp0} at all, even for a very small $\tau$.
\section{Statement of the problem}
\label{state}
Suppose we are living on a rooted directed graph, for example on the dyadic tree $T$, or on $T^2=T\times T$. The latter can be viewed as the graph of all dyadic rectangles inside the unit square (the root). Let $\mathbb I$ be the operator of summation ``up the graph''. It has a formally adjoint operator $\mathbb I^*$ of summation ``down the graph''. We use the same notation for these operators on the rooted dyadic tree $T$ and on the rooted bi-tree $T^2$. It is convenient to think that our graphs are finite, but very deep; the estimates and the constants must not depend on the depth.
On the dyadic tree $T$ we have the following key ``majorization theorem with small energy'':
\begin{theorem}
\label{d1}
Let $f, g: T\to \mathbb R_+$, and 1) $g$ is superadditive, 2) $\supp f \subset \{\mathbb I g \le \delta\}$. Let $\lambda \ge 10\delta$. Then there exists
$\varphi: T\to \mathbb R_+$ such that
\begin{enumerate}
\item $\mathbb I \varphi \ge \mathbb I f$ on $\{2\lambda \le \mathbb I g \le 4\lambda\}$;
\item $\int_T \varphi^2 \le C\frac{\delta^2}{\lambda^2} \int_T f^2$.
\end{enumerate}
\end{theorem}
For a while we tried to prove a similar statement for $T^2$. Namely, we conjectured
\begin{conj}
\label{d2}
Let $f, g: T^2\to \mathbb R_+$, and 1) $g$ is superadditive in each variable, 2) $\supp f \subset \{\mathbb I g \le \delta\}$. Let $\lambda \ge 10\delta$. Then there exists
$\varphi: T^2\to \mathbb R_+$ such that
\begin{enumerate}
\item $\mathbb I \varphi \ge \mathbb I f$ on $\{2\lambda \le \mathbb I g \le 4\lambda\}$;
\item $\int_{T^2} \varphi^2 \le C\frac{\delta}{\lambda} \int_{T^2} f^2$.
\end{enumerate}
\end{conj}
For some very special cases, e.g. for $f=g$, this has been proved, and it turned out to be a key result in describing the embedding measures for the Dirichlet space on the tri-disc into $L^2(\mathbb D^3, d\rho)$. See
\cite{MPV}, \cite{AHMV}.
\section{Counterexample to small energy majorization on bi-tree}
\label{sem-cex}
Now we will show that this is not true in general.
Moreover, below $f, g$ have a special form, namely
$$
f=\mathbb I^*\mu, \,g=\mathbb I^*\nu,
$$
with certain positive measures on $T^2$. The measure $\mu$ is trivial: it is the delta measure of mass $1$ at the root $o$ of $T^2$. In particular, $f(o)=1$ and $f(v)=0$ for all $v\neq o$. Also $\mathbb I f \equiv 1$ on $T^2$.
\bigskip
The choice of $\nu$ is more sophisticated. Choose $n$ large with $\log n= 2^s$, and denote $2^M:= \frac{n}{\log n}$.
In the unit square $Q^0$ consider the dyadic sub-squares $Q_1, \dots, Q_{2^M}$ of size $2^{-M}\times 2^{-M}$ lying on the South-West to North-East diagonal.
In each $Q_j$ choose $\omega_j$, the South-West corner dyadic square of size $2^{-n} \cdot 2^{-M}$.
The measure $\nu$ is the sum of delta measures at $\omega_j$, $j=1, \dots, \frac{n}{\log n}$, each of mass $\frac1{n^2}$. Obviously
$$
g(o)= \mathbb I^* \nu(o) = \|\nu\|= \frac1{n^2}\cdot \frac{n}{\log n}= \frac1{n\log n}=:\delta.
$$
So we have chosen $\delta$, and $f, g$ satisfy $\supp f =\{o\}\subset \{ \mathbb I g \le \delta\}$. Also $g$ is superadditive in both variables on $T^2$: this is true for any function of the form $\mathbb I^*\nu$.
\medskip
Now what is $\lambda$, and what is the set $\{2\lambda \le \mathbb I g \le 4\lambda\}$?
\medskip
Consider (by symmetry this will be enough) $Q_1$ and $\omega_1$ and consider the family $\mathcal F_1$ of dyadic rectangles containing $\omega_1$ and contained in $Q_1$ of the following sort:
$$
[0, 2^{-n} 2^{-M}]\times [0, 2^{-M}], [0, 2^{-n/2} 2^{-M}]\times [0, 2^{-2}2^{-M}], \dots, [0, 2^{-n/2^k} 2^{-M}]\times [0, 2^{-2^k}2^{-M}],
$$
there are approximately $\log n$ of them, and they are called $q_{10}, q_{11},\dots, q_{1k}$, $k\asymp \log n$. We do the same for each $\omega_j, Q_j$ and we get $q_{j0}, q_{j1},\dots, q_{jk}$.
\begin{lemma}
\label{g}
$\mathbb I g(q_{ji}) \asymp \frac1{n}\quad \forall j, i$.
\end{lemma}
It is proved in \cite{MPV}.
\bigskip
Let
\begin{equation}
\label{F}
F:=\cup_{ik} q_{ik}\,.
\end{equation}
\bigskip
So we choose $\lambda=\frac{c}n$ with appropriate $c$. Then
$$
F\subset \{2\lambda \le \mathbb I g \le 4\lambda\}\,.
$$
As noted above, $\mathbb I f\equiv 1$, so if $\varphi$ as in Conjecture \ref{d2} existed, we would have $\mathbb I \varphi \ge 1$ on $F$ and (by the second claim of Conjecture \ref{d2})
$$
\int_{T^2} \varphi^2 \le \frac{C}{\log n} \int_{T^2} f^2 = \frac{C}{\log n}\,.
$$
By the definition of capacity this would mean that
$$
\text{cap} (F) \le \frac{C}{\log n}\,.
$$
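Here and below we use the standard notion of capacity on $T$ and $T^2$ (the one used in the papers cited in the Introduction); for the reader's convenience we record it:

```latex
\begin{equation*}
\text{cap}(F) \,:=\, \inf\Big\{ \int_{\mathcal T} \varphi^2 \;:\;
\varphi\ge 0, \,\, \mathbb I \varphi \ge 1 \,\, \text{on}\,\, F \Big\},
\qquad \mathcal T = T \,\,\text{or}\,\, T^2\,.
\end{equation*}
```

In particular, any admissible $\varphi$ gives the upper bound $\text{cap}(F)\le \int\varphi^2$, which is how the estimate $\text{cap}(F)\le \frac{C}{\log n}$ above was obtained.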
In the next subsection, \ref{capFss}, we show that $\text{cap} (F) \asymp 1$. Hence, the conjecture is false.
\begin{remark}
\label{anyrate}
Moreover, this example shows that even a weaker estimate
$$
\int_{T^2} \varphi^2 \le C\Big(\frac{\delta}{\lambda}\Big)^\varepsilon \int_{T^2} f^2
$$
is unattainable. Even more, for any estimate of the type
$\int_{T^2} \varphi^2 \le Ch\Big(\frac{\delta}{\lambda}\Big) \int_{T^2} f^2$, where $\lim_{t\to 0}h(t)=0$, the construction above serves as a counterexample.
\end{remark}
\subsection{Capacity of $F$ is equivalent to $1$}
\label{capFss}
Let $\rho$ be a capacitary measure of $F$, and let $\rho_{jk}$ be its mass on $q_{jk}$.
By symmetry $\rho_{jk}$ does not depend on $j=1, \dots, \frac{n}{\log n}$.
\medskip
We will prove below that the numbers
$$
\rho_k:= \rho_{jk},\,\, j=1, \dots, \frac{n}{\log n},
$$
have average at least $\frac{c_0}{n}$, that is,
\begin{equation}
\label{E}
\mathbb E \rho = \frac{\sum_{k=0}^{\log n} \rho_k}{\log n} \ge \frac{c_0}{n}\,.
\end{equation}
\bigskip
In turn, this gives the required
\begin{equation}
\label{capF}
\text{cap} F \asymp 1\,.
\end{equation}
\medskip
Let us first derive \eqref{capF} from \eqref{E}.
The measure $\mu:=\rho$, which charges $\rho_k$ on each $q_{jk}$, $j=1,\dots, \frac{n}{\log n}$, $k= \frac{\log n}{2},\dots, \frac{3\log n}{4}$, is an equilibrium measure, so $\mathbb V^\mu \equiv 1$ on each $q_{jk}$.
Then \eqref{capF} follows like this: $\text{cap} F= \|\mu\|\ge \frac{n}{\log n} \sum_{k=0}^{\log n} \rho_k = n \mathbb E \rho$. Hence $\text{cap} F= \|\mu\|\ge c_0$ if \eqref{E} is proved.
\medskip
Now let us prove \eqref{E}.
Everything is symmetric in $j$, so let $j=1$ and let us fix $k$ in $[\frac{\log n}{2}, \frac{3\log n}{4}]$. We know that
$$
1\le \mathbb V^\mu \quad \text{on} \,\, q_{1k},
$$
and now let us estimate this potential from above. For that we split $\mathbb V^\mu$ to
$\mathbb V_1$, this is the contribution of rectangles containing $Q_1$, to
$\mathbb V_2$, the contribution of rectangles containing $q_{1k}$ and contained in $Q_1$, and
$\mathbb V_3$, the contribution of rectangles containing $q_{1k}$ that strictly intersect $Q_1$ and that are ``vertical'', meaning that their vertical side contains the vertical side of $Q_1$.
(There is $\mathbb V_4$ totally symmetric to $\mathbb V_3$.)
Two of those are easy. $\mathbb V_1$ ``almost'' consists of the ``diagonal'' squares containing $Q_1$; not quite, but the other rectangles are also easy to take care of.
Denote
$$
r=\|\mu\|, \quad M=\log\frac{n}{\log n}\,.
$$
Then we write diagonal part first and then the rest:
$$
\mathbb V_1= r+\frac{r}{2} +\frac{r}{4} + \dots + \frac{r}{2^M} +\frac{r}{2} +\frac{r}{2} + 2\frac{r}{4} + 2\frac{r}{4} +
\dots + k\frac{r}{2^k} + 2\frac{r}{2^k} +\dots = C_1 r
$$
To estimate $\mathbb V_2$ notice that there are at most $c n$ rectangles containing $q_{1k}$ and contained in $Q_1$ that do not contain any other $q$; there are $\frac{c n}{2}$ rectangles that contain $q_{1k}$ and one of its siblings (and lie in $Q_1$), there are $\frac{c n}{4}$ rectangles that contain $q_{1k}$ and two of its siblings (and lie in $Q_1$), et cetera.
Hence,
$$
\mathbb V_2 \le Cn\rho_k + \frac{Cn}{2} \rho_{k\pm 1} + \frac{Cn}{4} \rho_{k\pm 2}+\dots
$$
Now consider $\mathbb V_3$. The horizontal size of $q_{1k}$ is $2^{-M}\cdot 2^{-n2^{-k}}$. Its vertical size
is $2^{-M}\cdot 2^{-2^k}$. So for the rectangles of the third type that do not contain the siblings, their number is
at most (we are using that $k\ge \frac12 \log n$)
$$
n2^{-k} ( 2^k+M) \le n + \sqrt{n}\log n\,.
$$
Those that contain $q_{1k}$ and one sibling: their number is at most
$$
n2^{-k} ( 2^{k-1}+M) \le \frac{n}2 + \sqrt{n}\log n\,.
$$
We continue, and get that
$$
\mathbb V_3 \le n \rho_k + \frac{n}2 \rho_{k\pm1} + \frac{n}4 \rho_{k\pm 2}+\dots + \sqrt{n}\log n (\sum \rho_s)\,.
$$
Add all $\mathbb V_i$:
$$
1 \le \mathbb V_1+\mathbb V_2 +\mathbb V_3 +\mathbb V_4 \le C_1 r+ C n \rho_k+ \frac{Cn}2 \rho_{k\pm1} + \frac{Cn}4 \rho_{k\pm 2}+\dots + \sqrt{n}\log n (\sum \rho_s)\,.
$$
Now average over $k$. Notice that
$$
r=\|\mu\|= \frac{n}{\log n} \sum\rho_s = n\, \mathbb E \rho\,.
$$
Hence,
$$
1 \le C' n \mathbb E\rho + Cn \mathbb E\rho + \frac{Cn}2 \mathbb E\rho+ \frac{Cn}4 \mathbb E\rho + \dots + \frac12\sqrt{n}\log^2 n \,\mathbb E\rho\,.
$$
Therefore, $\mathbb E\rho\ge \frac{c_0}{n}$ and \eqref{E} is proved.
\section{The lack of $\int_{T^2} \mathbb V^\nu_\varepsilon \, d\nu \le C\varepsilon |\nu|$ estimate}
\label{firstPower}
Let us recall the notation of \cite{MPV, AHMV, MPVZ}. When we write $\int_{T^2}\dots$ without indicating the measure of integration, we mean the counting measure on $T^2$.
$$
\mathbb V^\nu := \mathbb I (\mathbb I^*\nu)\,.
$$
$$
E_\varepsilon:= \{(\alpha, \beta)\in T^2: \mathbb V^\nu(\alpha, \beta) \le \varepsilon\}\,.
$$
$$
\mathbb V^\nu_\varepsilon := \mathbb I (\mathbf{1}_{E_\varepsilon}\mathbb I^*\nu)\,.
$$
$$
\mathcal E[\nu]:= \int_{T^2} \mathbb V^\nu\, d\nu= \int_{T^2} (\mathbb I^*\nu)^2\,.
$$
This is the energy of the measure $\nu$.
$$
\mathcal E_\varepsilon[\nu]:= \int_{T^2} \mathbb V^\nu_\varepsilon\, d\nu= \int_{E_\varepsilon} (\mathbb I^*\nu)^2\,.
$$
This is the partial energy, concentrated where the potential has already become small.
\bigskip
If we were on $T$ instead of $T^2$, we would have the trivial uniform estimate
\begin{equation}
\label{triv}
\mathbb V^\nu_\varepsilon \le \varepsilon \Rightarrow \mathcal E_\varepsilon[\nu]:= \int_{T} \mathbb V^\nu_\varepsilon\, d\nu \le C\varepsilon |\nu|\,.
\end{equation}
And here $C=1$, of course.
The pointwise bound in \eqref{triv} and, what is more interesting, even the integrated bound on its right-hand side generally fail to hold on $T^2$ with any finite $C$. This section is devoted to a corresponding example.
\bigskip
On the other hand, \cite{MPVZ} proves that something like \eqref{triv} (but weaker) holds on $T^2$. In fact, we have
\begin{theorem}[Surrogate Maximum Principle]
\label{smp}
If $\mathcal E[\nu] \ge 2 \varepsilon |\nu|$ then
$$
\mathcal E_\varepsilon[\nu] \le \varepsilon e^{c_0\sqrt{\log\frac{\mathcal E[\nu]}{\varepsilon |\nu|}}}|\nu|\,.
$$
\end{theorem}
From Theorem \ref{smp} it is easy to deduce the following more transparent estimates:
\begin{equation}
\label{sqlog}
|\nu|\le \mathcal E[\nu]\Rightarrow \mathcal E_\varepsilon[\nu] \le C\varepsilon e^{c\sqrt{\log\frac1{\varepsilon}}} \,\mathcal E[\nu]\,.
\end{equation}
\begin{equation}
\label{tau}
\mathcal E_\varepsilon[\nu] \le C_\tau \varepsilon^{1-\tau} \mathcal E[\nu]^\tau |\nu|^{1-\tau}, \quad \forall \tau\in (0,1)\,.
\end{equation}
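A sketch of how \eqref{tau} follows from Theorem \ref{smp}: if $\mathcal E[\nu] < 2\varepsilon|\nu|$, then trivially $\mathcal E_\varepsilon[\nu]\le \mathcal E[\nu] = \mathcal E[\nu]^\tau \mathcal E[\nu]^{1-\tau} \le 2^{1-\tau}\varepsilon^{1-\tau}\mathcal E[\nu]^\tau|\nu|^{1-\tau}$; otherwise, with $A:=\frac{\mathcal E[\nu]}{\varepsilon|\nu|}\ge 2$, the elementary bound $e^{c_0\sqrt{t}}\le C_\tau e^{\tau t}$, $t\ge 0$, applied to $t=\log A$ gives

```latex
\begin{equation*}
\mathcal E_\varepsilon[\nu] \,\le\, \varepsilon\, e^{c_0\sqrt{\log A}}\, |\nu|
\,\le\, C_\tau\, \varepsilon\, A^{\tau}\, |\nu|
\,=\, C_\tau\, \varepsilon^{1-\tau}\, \mathcal E[\nu]^{\tau}\, |\nu|^{1-\tau}\,.
\end{equation*}
```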
\bigskip
Let us build an example of a sequence of pairs $(\nu, \varepsilon)$ showing that $\tau=0$ cannot be chosen in \eqref{tau}.
\bigskip
In other words let us build a counterexample to this ``universal'' estimate on $T^2$:
\begin{equation}
\label{fd}
``\int_{T^2} \mathbb V_\varepsilon^\nu\, d\nu \le C \varepsilon |\nu|"\,.
\end{equation}
\begin{remark}
\label{FF}
One remark is in order. Notice that the change of variables $\varepsilon\to t\varepsilon, \nu\to t\nu$ multiplies both the left-hand side and the right-hand side of \eqref{fd} by the same factor $t^2$. Thus we can normalize the measure and always assume that $|\nu|=1$.
The inequality above becomes $\int_{T^2} \mathbb V_\varepsilon^\nu\, d\nu \le C \varepsilon$ for probability measures $\nu$. We show in this Section that it is false. But, in fact, in Section \ref{open} we show that {\it absolutely any} inequality
$$
\int_{T^2} \mathbb V_x^\nu\, d\nu \le C F(x)
$$
is false regardless of the function $F$. Notice that on $T$ the function $F(x)=x$ makes the above inequality valid. So the counterexample in Section \ref{open} supersedes the counterexample we are going to explain in the current section. But in fact, a simple inspection shows that both counterexamples are based on the same idea.
\end{remark}
To disprove \eqref{fd} we go back to the previous Sections: given $n$ (a large power of $2$) we consider $\nu$ as in the previous Sections, and
$$
g=\mathbb I^* \nu, \, \varepsilon =\frac{c}{n},\, G:= g \cdot \mathbf{1}_{\mathbb I g \le \frac{c}{n}}\,.
$$
Consider the set of vertices (rectangles) $F$ introduced in \eqref{F}. Lemma \ref{g} claims that $\mathbb I g\asymp \frac1n$ on $F$.
But then (with the right choice of $c$) $\mathbf{1}_{\mathbb I g\le \frac{c}{n}} \mathbb I g \asymp \frac1n$ on $F$. But it is easy to see that
$$
\mathbb I g \cdot \mathbf{1}_{\mathbb I g \le \lambda} \le \mathbb I (g \cdot \mathbf{1}_{\mathbb I g \le \lambda} )\,.
$$
Thus
\begin{equation}
\label{G}
\mathbb I (G) = \mathbb I (g\cdot \mathbf{1}_{\mathbb I g \le \frac{c}{n}}) \ge \frac{c_0}{n}, \quad\text{on}\,\, F\,.
\end{equation}
Suppose now, aiming at a contradiction, that
\begin{equation}
\label{contr}
\int_{T^2} G^2=\int \mathbb V^\nu_{\frac{c}{n}} \, d \nu\le \frac{C}{n} |\nu|= \frac{C}{n} \cdot\frac1{n\log n}=\frac{C}{n^2\log n}\,.
\end{equation}
The definition of capacity and \eqref{G}, \eqref{contr} combined give us
$$
\text{cap}(F) \lesssim \frac1{\log n}\,.
$$
We come to a contradiction, because we proved in the previous Section that $\text{cap} (F) \asymp 1$. The contradiction shows that the inequality in \eqref{contr} fails.
\section{The shape of the graph of function $x\to \text{cap}( \mathbb V^\nu \ge x)$}
\label{graph}
Below trees and bi-trees are finite, but unboundedly deep.
Let $E$ be a subset of $T$ or $T^2$ and let $\nu$ be a capacitary measure for $E$:
$$
\text{cap}(E) =|\nu|, \quad \mathbb V^\nu=1 \,\, \text{on} \,\, \supp\nu, \quad g:= \mathbb I^* \nu, \,\,\text{where}\,\, \int g^2=\min\Big\{\int f^2:\,\, \mathbb I f \ge 1\,\, \text{on}\,\, E\Big\}\,.
$$
\bigskip
First consider the case of $T$. Let $x\in [|\nu|, 1]$ and we study the set
$$
D_x:=\{\alpha\in T: \mathbb V^\nu(\alpha)\ge x\}\,.
$$
We want to understand a bit the shape of the graph of
$$
C(x) :=\text{cap}(D_x)\,.
$$
We start with $x=|\nu|=\text{cap}(E)$. Notice that the root $o$ of $T$ obviously satisfies $\mathbb V^\nu(o) =|\nu|$, so $o\in D_{|\nu|}$. But $\text{cap}(\{o\}) = \text{cap}(T)=1$. Thus
$$
C(|\nu|) =1\,.
$$
Now consider $x=1$. On $E$ we have $\mathbb V^\nu=1$, and the maximum principle (we are on $T$, so it holds) says that $E=\{\alpha: \mathbb V^\nu \ge 1\}$. Therefore,
$$
C(1) = \text{cap}(E) =|\nu|\,.
$$
Now let $|\nu|<x<1$.
We know (again this is maximum principle) that
\begin{equation}
\label{MP}
\int_T \mathbf{1}_{\mathbb I g \le x} \cdot g^2 = \int_T \mathbb V^\nu_x \, d\nu \le x |\nu|\,.
\end{equation}
Notice that if $\mathbb I g(\alpha)\le x$ and $\mathbb I g (\text{son}\, \alpha)>x$ then $\mathbb I g(\alpha)\ge x/2$ just because $g=\mathbb I^*\nu$ is monotonically increasing on $T$. But this means that
\begin{equation}
\label{cut}
\mathbb I ( \mathbf{1}_{\mathbb I g \le x} \cdot g) \ge x/2, \quad \text{on} \,\, D_x=\{ \mathbb I g = \mathbb V^\nu \ge x\}\,.
\end{equation}
The definition of capacity and relationships \eqref{MP}, \eqref{cut} show the following:
\begin{theorem}
\label{capT}
On a simple tree $T$ the capacity of the level set $D_x=\{\alpha\in T: \mathbb V^\nu(\alpha)\ge x\}$ for any capacitary measure $\nu$ of a set $E$ satisfies the following inequality
$$
C(x)=\text{cap}(\{\alpha\in T: \mathbb V^\nu(\alpha)\ge x\}) \le \frac{4\text{cap}(E)}{x}= \frac{4|\nu|}{x},\,\, \text{cap}(E)\le x\le 1\,.
$$
\end{theorem}
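For completeness, the short argument behind Theorem \ref{capT}: by \eqref{cut} the function $\varphi:=\frac{2}{x}\,\mathbf{1}_{\mathbb I g\le x}\cdot g$ satisfies $\mathbb I\varphi\ge 1$ on $D_x$, so it is admissible in the definition of capacity, and \eqref{MP} gives

```latex
\begin{equation*}
C(x) = \text{cap}(D_x) \,\le\, \int_T \varphi^2
\,=\, \frac{4}{x^2} \int_T \mathbf{1}_{\mathbb I g \le x}\cdot g^2
\,\le\, \frac{4}{x^2}\cdot x\,|\nu| \,=\, \frac{4\,|\nu|}{x}\,.
\end{equation*}
```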
\bigskip
This is absolutely not the case for $T^2$. The capacities of level sets of capacitary potentials on $T^2$ behave in a much stranger and wilder way. We saw this in Subsection \ref{capFss}. In fact, our measure $\nu$ from the previous Sections is a capacitary measure,
$$
|\nu| =\frac1{n\log n}\,.
$$
We put
$$
x=\frac{c}n\,.
$$
We saw in Subsection \ref{capFss} that if the absolute constant $c$ is chosen correctly, then
\begin{equation}
\label{capT2}
\text{cap}(\{(\alpha, \beta) \in T^2: \mathbb V^\nu(\alpha, \beta) \ge \frac{c}n\}) \asymp 1\gg \frac{|\nu|}{x}\,.
\end{equation}
This means that Theorem \ref{capT} is false for $T^2$: if it were true, we would have $\text{cap}(\{(\alpha, \beta) \in T^2: \mathbb V^\nu(\alpha, \beta) \ge \frac{c}n\})\lesssim \frac1{\log n}$.
\bigskip
\subsection {The reason for the effect \eqref{capT2}}
\label{reason}
On $T^2$ we do not have \eqref{triv}, which is \eqref{MP} above. Instead we have \eqref{tau}, which makes the capacity estimate blow up much faster than in Theorem \ref{capT}. In fact, \eqref{tau} gives
$$
\text{cap}(\{\mathbb V^\nu\ge x\}) \le \frac{C_\tau\text{cap}(E)}{x^{1+\tau}}\,.
$$
and we saw that $\tau$ is indispensable. Of course the capacity of any subset of $T^2$ is bounded by $1$, so we have
$$
\text{cap}(\{\mathbb V^\nu\ge x\}) \le \min\Big( 1, \frac{C_\tau\text{cap}(E)}{x^{1+\tau}}\Big)\,.
$$
This explains the flat piece of the graph, $C(x)\asymp 1$, when $x$ is between $\frac1{n\log n}$ and $\frac1n$.
In fact, looking at \eqref{sqlog} we may expect that the flat piece of the graph can be much wider.
\section{One more counterexample}
\label{open}
Here is a question asked by Fedor Nazarov, who also hinted to us a possible construction of a counterexample.
\noindent{\bf Question.} Consider normalized measures on the unit square, $|\mu|=1$, and let $x\gg1$. Is it always possible to have the estimate
$$
\int_{T^2} \mathbb V^\mu_x \, d\mu=\int_{\mathbb V^\mu\le x} (\mathbb I^*\mu)^2 \le F(x)\,?
$$
\medskip
The meaning of this question is that we always (see Theorem \ref{smp}, and \eqref{sqlog}, \eqref{tau}) have some trace of the total energy
on the right-hand side of our estimates of the partial energy. What if the total energy is huge or ``infinite''? Maybe one does not need this total energy contribution on the right-hand side, in the form it is present in Theorem \ref{smp} and in \eqref{sqlog}, \eqref{tau}? Maybe the partial energy is always bounded by a function of its ``cut-off'' parameter $x$, for all normalized measures?
We will show that no estimate as above exists (while on $T$ it does exist, with the simplest $F(x)=x$).
\bigskip
Let us fix two large integers $n, M$ ($n$ will be much bigger than $M$) and consider a small modification of the construction
of the previous Sections. Namely,
consider $2^M$ dyadic squares located on the diagonal of $Q^0:=[0,1]^2$, each of size $2^{-M}\times 2^{-M}$. We call them $Q_1, \dots, Q_{2^M}$.
In the South-West corner of each $Q_j$ choose a dyadic square of size $2^{-n-M}\times 2^{-n-M}$. Call them $\omega_1, \dots, \omega_{2^M}$. We charge each $\omega_j$ with mass $2^{-M}$; this forms a measure $\mu$ of mass $1$. Now consider $\omega_1$ and
the family $\mathcal F_1$ of dyadic rectangles containing $\omega_1$ and contained in $Q_1$ of the following sort:
$$
[0, 2^{-n} 2^{-M}]\times [0, 2^{-M}], [0, 2^{-n/2} 2^{-M}]\times [0, 2^{-2}2^{-M}], \dots, [0, 2^{-n/2^k} 2^{-M}]\times [0, 2^{-2^k}2^{-M}],
$$
there are approximately $\log n$ of them, and they are called $q_{10}, q_{11},\dots, q_{1k}$, $k\approx \log n$.
We repeat this for the other $\omega_j$, obtaining $q_{j0}, q_{j1},\dots, q_{jk}$, $j=1, \dots, 2^M$, $k\approx \log n$, with $q_{ji}$ containing $\omega_j$ and contained in $Q_j$.
Now put
$$
x= n 2^{-M}
$$
and choose $n$ so that $x\gg1$.
\medskip
We claim that
\begin{equation}
\label{Vx}
\mathbb V^\mu(q_{ji}) \le Cx\quad \forall j, i\,.
\end{equation}
We know from \cite{MPV,MPVZ} that given $j, i$ there are approximately $n$ dyadic rectangles containing $q_{ji}$ and contained in $Q_j$. Each gives a contribution $2^{-M}$ to $\mathbb V^\mu(q_{ji})$. So if we counted only these in $\mathbb V^\mu(q_{ji})$, we would get a total of $\approx n2^{-M}$, and \eqref{Vx} would follow. Let us call this contribution of $\approx n2^{-M}$ {\bf the main contribution}. Let us now justify that it is indeed the main one.
But there are many more dyadic rectangles containing $q_{ji}$ and contained in $Q^0$. Let us bookkeep their contributions to
$\mathbb V^\mu(q_{ji})$, hoping that they are not too big, in order to prove \eqref{Vx}.
Notice that once \eqref{Vx} is proved, we have many rectangles $R$ with $\mathbb V^\mu(R) \le Cx$; so many that we can hope to prove that
\begin{equation}
\label{large}
\sum_{R: \mathbb V^\mu(R) \le Cx} \mu(R)^2 \ge F(x)\,.
\end{equation}
\medskip
So we fix, say, $q_{0i} =[0, 2^{-n/2^i} 2^{-M}]\times [0, 2^{-2^i}2^{-M}]$, and we can see that apart from the $\approx n$ rectangles between $q_{0i}$ and $Q_0$, there are also
a) tall rectangles $[0, 2^{-n/2^{i'}} 2^{-M}]\times [0, 2^m 2^{-M}]$, $i\le i' \le \log n$, $1\le m\le M$, containing $q_{0i}$;
b) long rectangles $[0, 2^{m} 2^{-M}]\times [0, 2^{-2^{j'}}2^{-M}]$, $0\le j' \le i$, $1\le m\le M$, containing $q_{0i}$;
c) $m$-large rectangles, containing $q_{0i}$: these are rectangles containing dyadic square $Q_0^{(m)}$ with side $2^m 2^{-M}$
that contains $Q_0$, but not containing $Q_0^{(m+1)}$, $m=2, \dots, M$.
The contribution of tall rectangles into $\mathbb V(q_{0i})$ is bounded by $M2^{-M} \log n$, and the same is for the contribution of the
long rectangles.
The contribution of the $M$-large rectangles is $1$: in fact, there is only one such rectangle, namely our initial unit square $Q^0$.
The contribution of the $(M-1)$-large rectangles is $\frac12 \cdot (1+1+1)$: in fact there are $3$ rectangles in the family of $(M-1)$-large rectangles, the square $Q_0^{(M-1)}$ itself and its two predecessors, one long and one tall.
The contribution of the $(M-2)$-large rectangles is $\frac14 \cdot (1+2+2)$: in fact there are $5$ rectangles in the family of $(M-2)$-large rectangles, the square $Q_0^{(M-2)}$ itself and its four predecessors, two long and two tall.
The contribution of the $(M-3)$-large rectangles is $\frac18 \cdot (1+3+3)$: in fact there are $7$ rectangles in the family of $(M-3)$-large rectangles, the square $Q_0^{(M-3)}$ itself and its six predecessors, three long and three tall.
Et cetera. The total contribution of all $m$-large rectangles containing $q_{0i}$ is at most
$$
\sum_{m=1}^M \frac1{2^{m}} (2m+1)\le C_1\,.
$$
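Indeed, the bound on the series is elementary: extending the sum to all $m\ge 1$ and splitting it, one may take, for instance, $C_1=5$, since
$$
\sum_{m=1}^{\infty} \frac{2m+1}{2^{m}} = 2\sum_{m=1}^{\infty} \frac{m}{2^{m}} + \sum_{m=1}^{\infty} \frac{1}{2^{m}} = 2\cdot 2 + 1 = 5\,.
$$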
This is definitely smaller than the main contribution $\approx n2^{-M}=x\gg 1$ and can simply be absorbed into it.
The contribution from the long and tall rectangles listed in a) and b) above is at most $2M\log n\, 2^{-M} \lesssim n^\tau 2^{-\tau M} \lesssim (n 2^{-M})^\tau = x^\tau \ll x$ for any fixed $\tau\in(0,1)$, for example $\tau=\frac12$; indeed, $\log n \lesssim n^\tau$ and $M2^{-M} \lesssim 2^{-\tau M}$. Hence the contribution from the long and tall rectangles is also much smaller than the main contribution of order $x$ and can be absorbed into it.
We have finally proved \eqref{Vx}. Now let us estimate $\sum_{R: \mathbb V^\mu(R) \le Cx} \mu(R)^2 $ from below. From \cite{MPV, MPVZ} we know that for each $q_{ji}$ there is a family of dyadic rectangles $\mathcal F_{ji}$ such that: 1) every $R\in \mathcal F_{ji}$ contains $q_{ji}$ and is contained in $Q_j$, $j=1, \dots, 2^M$; 2) the cardinality of $\mathcal F_{ji}$ is at least $c\,n$, $c>0$; 3) the families $\mathcal F_{ji}$ are pairwise disjoint, $j=1, \dots, 2^M$, $i\le C\log n$. Each rectangle $R$ of $\cup_j\cup_i \mathcal F_{ji}$ has the property that
$$
\mathbb V^\mu(R) \le Cx\,.
$$
We proved this in \eqref{Vx}. So each such $R$ contributes to the sum $\sum_{R: \mathbb V^\mu(R) \le Cx} \mu(R)^2 $, and each contribution is $2^{-2M}$. Therefore,
$$
\sum_{R: \mathbb V^\mu(R) \le Cx} \mu(R)^2 \ge 2^{-2M}\cdot \sharp j \cdot \sharp i \cdot \sharp (\mathcal F_{ji}) \ge c\, 2^{-2M}\cdot 2^M \cdot \log n \cdot n = c\,2^{-M} n \log n = c\, x \, (\log x +M)\,,
$$
where in the last step we used $n=x\,2^M$, so that $\log n=\log x+M$ (logarithms base $2$).
Now, given $x\gg 1$, we can freely choose $M$, e.g. $M=x, x^2, 2^x, F(x),\dots$, then choose $n$ from $n2^{-M}=x$ and run the construction above. So \eqref{large} is proved.
\section{Introduction}
\IEEEPARstart{W}{elcome} to the updated and simplified documentation to using the IEEEtran \LaTeX \ class file. The IEEE has examined hundreds of author submissions using this package to help formulate this easy to follow guide. We will cover the most commonly used elements of a journal article. For less common elements we will refer back to the ``IEEEtran\_HOWTO.pdf''.
This document applies to version 1.8b of IEEEtran.
The IEEEtran template package contains the following example files:
\begin{list}{}{}
\item{bare\_jrnl.tex}
\item{bare\_conf.tex}
\item{bare\_jrnl\_compsoc.tex}
\item{bare\_conf\_compsoc.tex}
\item{bare\_jrnl\_comsoc.tex}
\end{list}
These are ``bare bones'' templates to quickly understand the document structure.
It is assumed that the reader has a basic working knowledge of \LaTeX. Those who are new to \LaTeX \ are encouraged to read Tobias Oetiker's ``The Not So Short Introduction to \LaTeX '', available at: \url{http://tug.ctan.org/info/lshort/english/lshort.pdf} which provides an overview of working with \LaTeX.
\section{The Design, Intent and \\ Limitations of the Templates}
\noindent The templates are intended to {\bf{approximate the final look and page length of the articles/papers}}. Therefore, {\bf{they are NOT intended to be the final produced work that is displayed in print or on IEEEXplore\textsuperscript{\textregistered}}}. They will help to give the authors an approximation of the number of pages that will be in the final version. The structure of the \LaTeX\ files, as designed, enables easy conversion to XML for the composition systems used by the IEEE's outsource vendors. The XML files are used to produce the final print/IEEEXplore\textsuperscript{\textregistered} pdf and then converted to HTML for IEEEXplore\textsuperscript{\textregistered}. Have you looked at your article/paper in the HTML version?
\section{\LaTeX \ Distributions: Where to Get Them}
\noindent IEEE recommends using the distribution from the \TeX\ User Group at \url{http://www.tug.org}. You can join TUG and obtain a DVD distribution or download for free from the links provided on their website: \url{http://www.tug.org/texlive/}. The DVD includes distributions for Windows, Mac OS X and Linux operating systems.
\section{Where to get the IEEEtran Templates}
\noindent The {\bf{IEEE Template Selector}} will always have the most up-to-date versions of the \LaTeX\ and MSWord templates. Please see: \url{https://template-selector.ieee.org/} and follow the steps to find the correct template for your intended publication. Many publications use the IEEEtran \LaTeX\ templates; however, some publications have their own special templates. Many of these are based on IEEEtran, but may have special instructions that vary slightly from those in this document.
\section{Where to get \LaTeX \ help - user groups}
\noindent The following on-line groups are very helpful to beginning and experienced \LaTeX\ users. A search through their archives can provide many answers to common questions.
\begin{list}{}{}
\item{\url{http://www.latex-community.org/}}
\item{\url{https://tex.stackexchange.com/} }
\end{list}
\section{Document Class Options in IEEEtran}
\noindent At the beginning of your \LaTeX\ file you will need to establish what type of publication style you intend to use. The following list shows appropriate documentclass options for each of the types covered by IEEEtran.
\begin{list}{}{}
\item{Regular Journal Article}
\item{{\tt{$\backslash$documentclass[journal]{IEEEtran}}}}\\
\item{{Conference Paper}}
\item{{\tt{$\backslash$documentclass[conference]{IEEEtran}}}}\\
\item{Computer Society Journal Article}
\item{{\tt{$\backslash$documentclass[10pt,journal,compsoc]{IEEEtran}}}}\\
\item{Computer Society Conference Paper}
\item{{\tt{$\backslash$documentclass[conference,compsoc]{IEEEtran}}}}\\
\item{{Communications Society Journal Article}}
\item{{\tt{$\backslash$documentclass[journal,comsoc]{IEEEtran}}}}\\
\item{{Brief, Correspondence or Technote}}
\item{{\tt{$\backslash$documentclass[9pt,technote]{IEEEtran}}}}
\end{list}
There are other options available for each of these when submitting for peer review or other special requirements. IEEE recommends to compose your article in the base 2-column format to make sure all your equations, tables and graphics will fit the final 2-column format. Please refer to the document ``IEEEtran\_HOWTO.pdf'' for more information on settings for peer review submission if required by your EIC.
\section{How to Create Common Front Matter}
\noindent The following sections describe general coding for these common elements. Computer Society publications and Conferences may have their own special variations and will be noted below.
\subsection{Paper Title}
\noindent The title of your paper is coded as:
\begin{verbatim}
\title{The Title of Your Paper}
\end{verbatim}
\noindent Please try to avoid the use of math or chemical formulas in your title if possible.
\subsection{Author Names and Affiliations}
\noindent The author section should be coded as follows:
\begin{verbatim}
\author{Masahito Hayashi
\IEEEmembership{Fellow, IEEE}, Masaki Owari
\thanks{M. Hayashi is with Graduate School
of Mathematics, Nagoya University, Nagoya,
Japan}
\thanks{M. Owari is with the Faculty of
Informatics, Shizuoka University,
Hamamatsu, Shizuoka, Japan.}
}
\end{verbatim}
Be sure to use the $\backslash$IEEEmembership command to identify IEEE membership status.
Please see the ``IEEEtran\_HOWTO.pdf'' for specific information on coding authors for Conferences and Computer Society publications. Note that the closing curly brace for the author group comes at the end of the thanks group. This will prevent you from creating a blank first page.
\subsection{Running Heads}
\noindent The running heads are declared by using the $\backslash${\tt{markboth}} command. There are two arguments to this command: the first contains the journal name information and the second contains the author names and paper title.
\begin{verbatim}
\markboth{Journal of Quantum Electronics,
Vol. 1, No. 1, January 2021}
{Author1, Author2,
\MakeLowercase{\textit{(et al.)}:
Paper Title}
\end{verbatim}
\subsection{Copyright Line}
\noindent For Transactions and Journals papers, this is not necessary to use at the submission stage of your paper. The IEEE production process will add the appropriate copyright line. If you are writing a conference paper, please see the ``IEEEtran\_HOWTO.pdf'' for specific information on how to code ``Publication ID Marks''.
\subsection{Abstracts}
\noindent The abstract is the first element of a paper after the $\backslash${\tt{maketitle}} macro is invoked. The coding is simply:
\begin{verbatim}
\begin{abstract}
Text of your abstract.
\end{abstract}
\end{verbatim}
Please try to avoid mathematical and chemical formulas in the abstract.
\subsection{Index Terms}
\noindent The index terms are used to help other researchers discover your paper. Each society may have its own keyword set. Contact the EIC of your intended publication for this list.
\begin{verbatim}
\begin{IEEEkeywords}
Broad band networks, quality of service
\end{IEEEkeywords}
\end{verbatim}
\section{How to Create Common Body Elements}
\noindent The following sections describe common body text elements and how to code them.
\subsection{Initial Drop Cap Letter}
\noindent The first text paragraph uses a ``drop cap'' followed by the first word in ALL CAPS. This is accomplished by using the $\backslash${\tt{IEEEPARstart}} command as follows:
\begin{verbatim}
\IEEEPARstart{T}{his} is the first paragraph
of your paper. . .
\end{verbatim}
\subsection{Sections and Subsections}
\noindent Section headings use standard \LaTeX\ commands: $\backslash${\tt{section}}, $\backslash${\tt{subsection}} and $\backslash${\tt{subsubsection}}. Numbering is handled automatically for you and varies according to type of publication. It is common to not indent the first paragraph following a section head by using $\backslash${\tt{noindent}} as follows:
\begin{verbatim}
\section{Section Head}
\noindent The text of your paragraph . . .
\end{verbatim}
\subsection{Citations to the Bibliography}
\noindent The coding for the citations are made with the \LaTeX\ $\backslash${\tt{cite}} command. This will produce individual bracketed reference numbers in the IEEE style. At the top of your \LaTeX\ file you should include:
\begin{verbatim}
\usepackage{cite}
\end{verbatim}
For a single citation code as follows:
\begin{verbatim}
see \cite{ams}
\end{verbatim}
This will display as: see \cite{ams}\\
For multiple citations code as follows:
\begin{verbatim}
\cite{ams,oxford,lacomp}
\end{verbatim}
This will display as \cite{ams,oxford,lacomp}
\subsection{Figures}
\noindent Figures are coded with the standard \LaTeX\ commands as follows:
\begin{verbatim}
\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{fig1}
\caption{This is the caption for one fig.}
\label{fig1}
\end{figure}
\end{verbatim}
The [!t] argument enables floats to the top of the page to follow IEEE style. Make sure you include:
\begin{verbatim}
\usepackage{graphicx}
\end{verbatim}
\noindent at the top of your \LaTeX file with the other package declarations.
To cross-reference your figures in the text use the following code example:
\begin{verbatim}
See figure \ref{fig1} ...
\end{verbatim}
This will produce:\\
See figure \ref{fig1} . . .
\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{fig1}
\caption{This is the caption for one fig.}
\label{fig1}
\end{figure}
\subsection{Tables}
\noindent Tables should be coded with the standard \LaTeX\ coding. The following example shows a simple table.
\begin{verbatim}
\begin{table}
\begin{center}
\caption{Filter design equations ...}
\label{tab1}
\begin{tabular}{| c | c | c |}
\hline
Order & Arbitrary coefficients &
coefficients\\
of filter & $e_m$ & $b_{ij}$ \\
\hline
1& $b_{ij}=\hat{e}.\hat{\beta_{ij}}$,
& $b_{00}=0$\\
\hline
2&$\beta_{22}=(~1,-1,-1,~~1,~~1,~~1)$ &\\
\hline
3& $b_{ij}=\hat{e}.\hat{\beta_{ij}}$,
& $b_{00}=0$,\\
\hline
\end{tabular}
\end{center}
\end{table}
\end{verbatim}
To reference the table in the text, code as follows:
\begin{verbatim}Table~\ref{tab1} lists the closed-form...\end{verbatim}
to produce:
Table~\ref{tab1} lists the closed-form . . .
\begin{table}
\begin{center}
\caption{A Simple Table Example.}
\label{tab1}
\begin{tabular}{| c | c | c |}
\hline
Order & Arbitrary coefficients & coefficients\\
of filter & $e_m$ & $b_{ij}$ \\
\hline
1& $b_{ij}=\hat{e}.\hat{\beta_{ij}}$, & $b_{00}=0$\\
\hline
2&$\beta_{22}=(~1,-1,-1,~~1,~~1,~~1)$ &\\
\hline
3& $b_{ij}=\hat{e}.\hat{\beta_{ij}}$, & $b_{00}=0$,\\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Lists}
\noindent In this section, we will consider three types of lists: simple unnumbered, numbered and bulleted. There have been numerous options added to IEEEtran to enhance the creation of lists. If your lists are more complex than those shown below, please refer to the ``IEEEtran\_HOWTO.pdf'' for additional options.\\
\noindent{\bf A plain unnumbered list}
\begin{list}{}{}
\item{bare\_jrnl.tex}
\item{bare\_conf.tex}
\item{bare\_jrnl\_compsoc.tex}
\item{bare\_conf\_compsoc.tex}
\item{bare\_jrnl\_comsoc.tex}
\end{list}
\noindent coded as:
\begin{verbatim}
\begin{list}{}{}
\item{bare\_jrnl.tex}
\item{bare\_conf.tex}
\item{bare\_jrnl\_compsoc.tex}
\item{bare\_conf\_compsoc.tex}
\item{bare\_jrnl\_comsoc.tex}
\end{list}
\end{verbatim}
\noindent{\bf A simple numbered list}
\begin{enumerate}
\item{bare\_jrnl.tex}
\item{bare\_conf.tex}
\item{bare\_jrnl\_compsoc.tex}
\item{bare\_conf\_compsoc.tex}
\item{bare\_jrnl\_comsoc.tex}
\end{enumerate}
\noindent coded as:
\begin{verbatim}
\begin{enumerate}
\item{bare\_jrnl.tex}
\item{bare\_conf.tex}
\item{bare\_jrnl\_compsoc.tex}
\item{bare\_conf\_compsoc.tex}
\item{bare\_jrnl\_comsoc.tex}
\end{enumerate}
\end{verbatim}
\noindent{\bf A simple bulleted list}
\begin{itemize}
\item{bare\_jrnl.tex}
\item{bare\_conf.tex}
\item{bare\_jrnl\_compsoc.tex}
\item{bare\_conf\_compsoc.tex}
\item{bare\_jrnl\_comsoc.tex}
\end{itemize}
\noindent coded as:
\begin{verbatim}
\begin{itemize}
\item{bare\_jrnl.tex}
\item{bare\_conf.tex}
\item{bare\_jrnl\_compsoc.tex}
\item{bare\_conf\_compsoc.tex}
\item{bare\_jrnl\_comsoc.tex}
\end{itemize}
\end{verbatim}
\subsection{Other Elements}
\noindent For other less common elements such as Algorithms, Theorems and Proofs, and Floating Structures such as page-wide tables, figures or equations, please refer to the ``IEEEtran\_HOWTO.pdf'' section on ``Double Column Floats.''
\section{How to Create Common Back Matter Elements}
\noindent The following sections demonstrate common back matter elements such as Acknowledgments, Bibliographies, Appendices and Author Biographies.
\subsection{Acknowledgments}
\noindent This should be a simple paragraph before the bibliography to thank those individuals and institutions who have supported your work on this article.
\begin{verbatim}
\section{Acknowledgments}
\noindent Text describing those who
supported your paper.
\end{verbatim}
\subsection{Bibliographies}
\noindent {\bf{References Simplified:}} A simple way of composing references is to use the $\backslash${\tt{bibitem}} macro to define the beginning of a reference as in the following examples:\\
\noindent [6] H. Sira-Ramirez. ``On the sliding mode control of nonlinear systems,'' \textit{Systems \& Control Letters}, vol. 19, pp. 303--312, 1992.
\noindent coded as:
\begin{verbatim}
\bibitem{Sira3}
H. Sira-Ramirez. ``On the sliding mode
control of nonlinear systems,''
\textit{Systems \& Control Letters},
vol. 19, pp. 303--312, 1992.
\end{verbatim}
\noindent [7] A. Levant.``Exact differentiation of signals with unbounded higher derivatives,'' in \textit{Proceedings of the 45th IEEE Conference on Decision and Control}, San Diego, California, USA, pp. 5585--5590, 2006.
\noindent coded as:
\begin{verbatim}\bibitem{Levant}
A. Levant. ``Exact differentiation of
signals with unbounded higher
derivatives,'' in \textit{Proceedings
of the 45th IEEE Conference on
Decision and Control}, San Diego,
California, USA, pp. 5585--5590, 2006.
\end{verbatim}
\noindent [8] M. Fliess, C. Join, and H. Sira-Ramirez. ``Non-linear estimation is easy,'' \textit{International Journal of Modelling, Identification and Control}, vol. 4, no. 1, pp. 12--27, 2008.
\noindent coded as:
\begin{verbatim}
\bibitem{Cedric}
M. Fliess, C. Join, and H. Sira-Ramirez.
``Non-linear estimation is easy,''
\textit{International Journal of Modelling,
Identification and Control}, vol. 4,
no. 1, pp. 12--27, 2008.
\end{verbatim}
\noindent [9] R. Ortega, A. Astolfi, G. Bastin, and H. Rodriguez. ``Stabilization of food-chain systems using a port-controlled Hamiltonian description,'' in \textit{Proceedings of the American Control Conference}, Chicago, Illinois, USA, pp. 2245--2249, 2000.
\noindent coded as:
\begin{verbatim}
\bibitem{Ortega}
R. Ortega, A. Astolfi, G. Bastin, and H.
Rodriguez. ``Stabilization of food-chain
systems using a port-controlled Hamiltonian
description,'' in \textit{Proceedings of the
American Control Conference}, Chicago,
Illinois, USA, pp. 2245--2249, 2000.
\end{verbatim}
\subsection{Accented Characters in References}
\noindent When using accented characters in references, please use the standard LaTeX coding for accents. {\bf{Do not use math coding for character accents}}. For example:
\begin{verbatim}
\'e, \"o, \`a, \~e
\end{verbatim}
will produce: \'e, \"o, \`a, \~e
\subsection{Use of BibTeX}
\noindent If you wish to use BibTeX, please see the documentation that accompanies the IEEEtran Bibliography package.
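\noindent As a minimal sketch (assuming your reference data live in a file named {\tt{mybib.bib}}; adjust the name to your own file), the usual invocation at the end of the document is:
\begin{verbatim}
\bibliographystyle{IEEEtran}
\bibliography{IEEEabrv,mybib}
\end{verbatim}
Here {\tt{IEEEabrv}} supplies the abbreviated IEEE journal-name strings shipped with the IEEEtran bibliography package. Run \LaTeX, then BibTeX, then \LaTeX\ twice more so that all citations resolve.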
\subsection{Biographies and Author Photos}
\noindent Authors may have options to include their photo or not. Photos should be a bit-map graphic (.tif or .jpg) and sized to fit in the space allowed. Please see the coding samples below:
\begin{verbatim}
\begin{IEEEbiographynophoto}{Jane Doe}
Biography text here without a photo.
\end{IEEEbiographynophoto}
\end{verbatim}
or a biography with a photo
\begin{verbatim}
\begin{IEEEbiography}[{\includegraphics
[width=1in,height=1.25in,clip,
keepaspectratio]{fig1.png}}]
{IEEE Publications Technology Team}
In this paragraph you can place
your educational, professional background
and research and other interests.
\end{IEEEbiography}
\end{verbatim}
Please see the end of this document to see the output of these coding examples.
\section{Mathematical Typography \\ and Why It Matters}
\noindent Typographical conventions for mathematical formulas have been developed to {\bf provide uniformity and clarity of presentation across mathematical texts}. This enables the readers of those texts to both understand the author's ideas and to grasp new concepts quickly. While software such as \LaTeX \ and MathType\textsuperscript{\textregistered} can produce aesthetically pleasing math when used properly, it is also very easy to misuse the software, potentially resulting in incorrect math display.
IEEE aims to provide authors with the proper guidance on mathematical typesetting style and assist them in writing the best possible article.
As such, IEEE has assembled a set of examples of good and bad mathematical typesetting. You will see how various issues are dealt with. The following publications have been referenced in preparing this material:
\begin{list}{}{}
\item{\emph{Mathematics into Type}, published by the American Mathematical Society}
\item{\emph{The Printing of Mathematics}, published by Oxford University Press}
\item{\emph{The \LaTeX\ Companion}, by F. Mittelbach and M. Goossens}
\item{\emph{More Math into LaTeX}, by G. Gr\"atzer}
\item{AMS-StyleGuide-online.pdf, published by the American Mathematical Society}
\end{list}
Further examples can be seen at \url{http://journals.ieeeauthorcenter.ieee.org/wp-content/uploads/sites/7/IEEE-Math-Typesetting-Guide.pdf}
\subsection{Display Equations}
\noindent A simple display equation example shown below uses the ``equation'' environment. To number the equations, use the $\backslash${\tt{label}} macro to create an identifier for the equation. LaTeX will automatically number the equation for you.
\begin{equation}
\label{deqn_ex1}
x = \sum_{i=0}^{n} 2{i} Q.
\end{equation}
\noindent is coded as follows:
\begin{verbatim}
\begin{equation}
\label{deqn_ex1}
x = \sum_{i=0}^{n} 2{i} Q.
\end{equation}
\end{verbatim}
To reference this equation in the text use the $\backslash${\tt{ref}} macro.
Please see (\ref{deqn_ex1})\\
\noindent is coded as follows:
\begin{verbatim}
Please see (\ref{deqn_ex1})\end{verbatim}
\subsection{Equation Numbering}
\noindent {\bf{Consecutive Numbering:}} Equations within an article are numbered consecutively from the beginning of the
article to the end, i.e., (1), (2), (3), (4), (5), etc. Do not use roman numerals or section numbers for equation numbering.\\
\noindent {\bf{Appendix Equations:}} The continuation of consecutively numbered equations is best in the Appendix, but numbering
as (A1), (A2), etc., is permissible.\\
\noindent {\bf{Hyphens and Periods}}: Hyphens and periods should not be used in equation numbers, i.e., use (1a) rather than
(1-a) and (2a) rather than (2.a) for sub-equations. This should be consistent throughout the article.
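\noindent As an illustration of the appendix scheme (a sketch only; confirm the preferred style with your EIC), one common way to obtain (A1), (A2), etc.\ is to reset and redefine the equation counter at the start of the appendix:
\begin{verbatim}
\appendix
\setcounter{equation}{0}
\renewcommand{\theequation}{A\arabic{equation}}
\end{verbatim}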
\subsection{Multi-line equations and alignment}
\noindent Here we show several examples of multi-line equations and proper alignments.
\noindent {\bf{A single equation that must break over multiple lines due to length with no specific alignment.}}
\begin{multline}
\text{The first line of this example}\\
\text{The second line of this example}\\
\text{The third line of this example}
\end{multline}
\noindent is coded as:
\begin{verbatim}
\begin{multline}
\text{The first line of this example}\\
\text{The second line of this example}\\
\text{The third line of this example}
\end{multline}
\end{verbatim}
\noindent {\bf{A single equation with multiple lines aligned at the = signs}}
\begin{align}
a &= c+d \\
b &= e+f
\end{align}
\noindent is coded as:
\begin{verbatim}
\begin{align}
a &= c+d \\
b &= e+f
\end{align}
\end{verbatim}
The {\tt{align}} environment can align on multiple points as shown in the following example:
\begin{align}
x &= y & X & =Y & a &=bc\\
x' &= y' & X' &=Y' &a' &=bz
\end{align}
\noindent is coded as:
\begin{verbatim}
\begin{align}
x &= y & X & =Y & a &=bc\\
x' &= y' & X' &=Y' &a' &=bz
\end{align}
\end{verbatim}
\subsection{Subnumbering}
\noindent The amsmath package provides a {\tt{subequations}} environment to facilitate subnumbering. An example:
\begin{subequations}\label{eq:2}
\begin{align}
f&=g \label{eq:2A}\\
f' &=g' \label{eq:2B}\\
\mathcal{L}f &= \mathcal{L}g \label{eq:2c}
\end{align}
\end{subequations}
\noindent is coded as:
\begin{verbatim}
\begin{subequations}\label{eq:2}
\begin{align}
f&=g \label{eq:2A}\\
f' &=g' \label{eq:2B}\\
\mathcal{L}f &= \mathcal{L}g \label{eq:2c}
\end{align}
\end{subequations}
\end{verbatim}
\subsection{Matrices}
\noindent There are several useful matrix environments that can save you some keystrokes. See the example coding below and the output.
\noindent {\bf{A simple matrix:}}
\begin{equation}
\begin{matrix} 0 & 1 \\
1 & 0 \end{matrix}
\end{equation}
is coded as:
\begin{verbatim}
\begin{equation}
\begin{matrix} 0 & 1 \\
1 & 0 \end{matrix}
\end{equation}
\end{verbatim}
\noindent {\bf{A matrix with parenthesis}}
\begin{equation}
\begin{pmatrix} 0 & -i \\
i & 0 \end{pmatrix}
\end{equation}
is coded as:
\begin{verbatim}
\begin{equation}
\begin{pmatrix} 0 & -i \\
i & 0 \end{pmatrix}
\end{equation}
\end{verbatim}
\noindent {\bf{A matrix with square brackets}}
\begin{equation}
\begin{bmatrix} 0 & -1 \\
1 & 0 \end{bmatrix}
\end{equation}
is coded as:
\begin{verbatim}
\begin{equation}
\begin{bmatrix} 0 & -1 \\
1 & 0 \end{bmatrix}
\end{equation}
\end{verbatim}
\noindent {\bf{A matrix with curly braces}}
\begin{equation}
\begin{Bmatrix} 1 & 0 \\
0 & -1 \end{Bmatrix}
\end{equation}
is coded as:
\begin{verbatim}
\begin{equation}
\begin{Bmatrix} 1 & 0 \\
0 & -1 \end{Bmatrix}
\end{equation}\end{verbatim}
\noindent {\bf{A matrix with single verticals}}
\begin{equation}
\begin{vmatrix} a & b \\
c & d \end{vmatrix}
\end{equation}
is coded as:
\begin{verbatim}
\begin{equation}
\begin{vmatrix} a & b \\
c & d \end{vmatrix}
\end{equation}\end{verbatim}
\noindent {\bf{A matrix with double verticals}}
\begin{equation}
\begin{Vmatrix} i & 0 \\
0 & -i \end{Vmatrix}
\end{equation}
is coded as:
\begin{verbatim}
\begin{equation}
\begin{Vmatrix} i & 0 \\
0 & -i \end{Vmatrix}
\end{equation}\end{verbatim}
\subsection{Arrays}
\noindent The {\tt{array}} environment allows you some options for matrix-like equations. You will have to manually key the fences, but you'll have options for alignment of the columns and for setting horizontal and vertical rules. The argument to {\tt{array}} controls alignment and placement of vertical rules.
A simple array
\begin{equation}
\left(
\begin{array}{cccc}
a+b+c & uv & x-y & 27\\
a+b & u+v & z & 134
\end{array}\right)
\end{equation}
is coded as:
\begin{verbatim}
\begin{equation}
\left(
\begin{array}{cccc}
a+b+c & uv & x-y & 27\\
a+b & u+v & z & 134
\end{array} \right)
\end{equation}
\end{verbatim}
A slight variation on this to better align the numbers in the last column
\begin{equation}
\left(
\begin{array}{cccr}
a+b+c & uv & x-y & 27\\
a+b & u+v & z & 134
\end{array}\right)
\end{equation}
is coded as:
\begin{verbatim}
\begin{equation}
\left(
\begin{array}{cccr}
a+b+c & uv & x-y & 27\\
a+b & u+v & z & 134
\end{array} \right)
\end{equation}
\end{verbatim}
An array with vertical and horizontal rules
\begin{equation}
\left( \begin{array}{c|c|c|r}
a+b+c & uv & x-y & 27\\ \hline
a+b & u+v & z & 134
\end{array}\right)
\end{equation}
is coded as:
\begin{verbatim}
\begin{equation}
\left(
\begin{array}{c|c|c|r}
a+b+c & uv & x-y & 27\\
a+b & u+v & z & 134
\end{array} \right)
\end{equation}
\end{verbatim}
Note the argument now has the pipe "$\vert$" included to indicate the placement of the vertical rules.
\subsection{Cases Structures}
\noindent Many times we find cases coded using the wrong environment, i.e., {\tt{array}}. Using the {\tt{cases}} environment will save keystrokes (from not having to type the $\backslash${\tt{left}}$\backslash${\tt{lbrace}}) and automatically provide the correct column alignment.
\begin{equation*}
{z_m(t)} = \begin{cases}
1,&{\text{if}}\ {\beta }_m(t) \\
{0,}&{\text{otherwise.}}
\end{cases}
\end{equation*}
\noindent is coded as follows:
\begin{verbatim}
\begin{equation*}
{z_m(t)} =
\begin{cases}
1,&{\text{if}}\ {\beta }_m(t),\\
{0,}&{\text{otherwise.}}
\end{cases}
\end{equation*}
\end{verbatim}
\noindent Note that the ``\&'' is used to mark the tabular alignment. This is important to get proper column alignment. Do not use $\backslash${\tt{quad}} or other fixed spaces to try and align the columns. Also, note the use of the $\backslash${\tt{text}} macro for text elements such as ``if'' and ``otherwise''.
\subsection{Function Formatting in Equations}
In many cases there is an easy way to properly format most common functions. Use of the $\backslash$ in front of the function name will, in most cases, provide the correct formatting. When this does not work, the following example provides a solution using the $\backslash${\tt{text}} macro.
\begin{equation*}
d_{R}^{KM} = \underset {d_{l}^{KM}} {\text{arg min}} \{ d_{1}^{KM},\ldots,d_{6}^{KM}\}.
\end{equation*}
\noindent is coded as follows:
\begin{verbatim}
\begin{equation*}
d_{R}^{KM} = \underset {d_{l}^{KM}}
{\text{arg min}} \{ d_{1}^{KM},
\ldots,d_{6}^{KM}\}.
\end{equation*}
\end{verbatim}
\subsection{Text Acronyms inside equations}
\noindent This example shows where the acronym ``MSE'' is coded using $\backslash${\tt{text\{\}}} to match how it appears in the text.
\begin{equation*}
\text{MSE} = \frac {1}{n}\sum _{i=1}^{n}(Y_{i} - \hat {Y_{i}})^{2}
\end{equation*}
\begin{verbatim}
\begin{equation*}
\text{MSE} = \frac {1}{n}\sum _{i=1}^{n}
(Y_{i} - \hat {Y_{i}})^{2}
\end{equation*}
\end{verbatim}
\subsection{Obsolete Coding}
\noindent Avoid the use of outdated environments, such as {\tt{eqnarray}} and \$\$ math delimiters, for display equations. The \$\$ display math delimiters are left over from PlainTeX and should not be used in \LaTeX, ever. Poor vertical spacing will result.
\subsection{Use Appropriate Delimiters for Display Equations}
\noindent Some improper mathematical coding advice has been given in various YouTube\textsuperscript{TM} videos on how to write scholarly articles, so please follow these good examples:\\
For {\bf{single-line unnumbered display equations}}, please use the following delimiters:
\begin{verbatim}\[ . . . \] or \end{verbatim}
\begin{verbatim}\begin{equation*} . . . \end{equation*}\end{verbatim}
Note that the * in the environment name turns off equation numbering.\\
For {\bf{multiline unnumbered display equations}} that have alignment requirements, please use the following delimiters:
\begin{verbatim}
\begin{align*} . . . \end{align*}
\end{verbatim}
For {\bf{single-line numbered display equations}}, please use the following delimiters:
\begin{verbatim}
\begin{equation} . . . \end{equation}
\end{verbatim}
For {\bf{multiline numbered display equations}}, please use the following delimiters:
\begin{verbatim}
\begin{align} . . . \end{align}
\end{verbatim}
\section{LaTeX Package Suggestions}
\noindent Immediately after your documentclass declaration at the top of your \LaTeX\ file is the place where you should declare any packages that are being used. The following packages were used in the production of this document.
\begin{verbatim}
\usepackage{amsmath,amsfonts}
\usepackage{algorithmic}
\usepackage{array}
\usepackage[caption=false,font=normalsize,
labelfont=sf,textfont=sf]{subfig}
\usepackage{textcomp}
\usepackage{stfloats}
\usepackage{url}
\usepackage{verbatim}
\usepackage{graphicx}
\usepackage{balance}
\end{verbatim}
\section{Additional Advice}
Please use ``soft'' (e.g., \verb|\eqref{Eq}|) or \verb|(\ref{Eq})|
cross references instead of ``hard'' references (e.g., \verb|(1)|).
That will make it possible to combine sections, add equations, or
change the order of figures or citations without having to go through
the file line by line.
Please note that the \verb|{subequations}| environment in {\LaTeX}
will increment the main equation counter even when there are no
equation numbers displayed. If you forget that, you might write an
article in which the equation numbers skip from (17) to (20), causing
the copy editors to wonder if you've discovered a new method of
counting.
{\BibTeX} does not work by magic. It doesn't get the bibliographic
data from thin air but from .bib files. If you use {\BibTeX} to produce a
bibliography you must send the .bib files.
{\LaTeX} can't read your mind. If you assign the same label to a
subsubsection and a table, you might find that Table I has been cross
referenced as Table IV-B3.
{\LaTeX} does not have precognitive abilities. If you put a
\verb|\label| command before the command that updates the counter it's
supposed to be using, the label will pick up the last counter to be
cross referenced instead. In particular, a \verb|\label| command
should not go before the caption of a figure or a table.
Please do not use \verb|\nonumber| or \verb|\notag| inside the
\verb|{array}| environment. It will not stop equation numbers inside
\verb|{array}| (there won't be any anyway) and it might stop a wanted
equation number in the surrounding equation.
\balance
\section{A Final Checklist}
\begin{enumerate}
\item{Make sure that your equations are numbered sequentially and there are no equation numbers missing or duplicated. Avoid hyphens and periods in your equation numbering. Stay with IEEE style, i.e., (1), (2), (3) or for sub-equations (1a), (1b). For equations in the appendix (A1), (A2), etc.}.
\item{Are your equations properly formatted? Text, functions, alignment points in cases and arrays, etc. }
\item{Make sure all graphics are included.}
\item{Make sure your references are included either in your main LaTeX file or a separate .bib file if calling the external file.}
\end{enumerate}
\section{Problem Formulation}
\label{sec:problemformulation}
We use the following definitions, some of which are similar to~\cite{shirani8849392,bakirtas2021database,noiselesslonger}, to formally describe our problem.
\begin{defn}{\textbf{(Unlabeled Markov Database)}}\label{defn:markovdb}
An ${(m_n,n,\mathbf{P})}$ \emph{unlabeled Markov database} is a randomly generated ${m_n\times n}$ matrix ${\mathbf{D}=\{X_{i,j}\in\mathfrak{X}:i\in[m_n],j\in[n]\}}$ whose rows are \emph{i.i.d.} and follow a first-order stationary Markov process defined over the alphabet ${\mathfrak{X}=\{1,\dots,|\mathfrak{X}|\}}$ with probability transition matrix $\mathbf{P}$ such that
\iftoggle{singlecolumn}{
\begin{gather}
\mathbf{P} = \gamma \mathbf{I} + (1-\gamma) \mathbf{U}\label{eq:markovtransitionmatrix}\\
U_{i,j} = u_j>0, \: \forall (i,j)\in \mathfrak{X}^2\\
\sum\limits_{j\in\mathfrak{X}} u_j =1\\
\gamma \in [0,1)
\end{gather}
}{
\begin{gather}
\mathbf{P} = \gamma \mathbf{I} + (1-\gamma) \mathbf{U}\label{eq:markovtransitionmatrix}\\
U_{i,j} = u_j>0, \: \forall (i,j)\in \mathfrak{X}^2\\
\sum\limits_{j\in\mathfrak{X}} u_j =1\\
\gamma \in [0,1)
\end{gather}
}
where $\mathbf{I}$ is the identity matrix. It is assumed that ${X_{i,1}\overset{\text{i.i.d.}}{\sim}\pi=[u_1,\dots,u_{|\mathfrak{X}|}]}$, $i=1,\dots,m_n$, where $\pi$ is the stationary distribution associated with $\mathbf{P}$.
\end{defn}
Note that the parameter $\gamma$ determines the correlation among the columns of $\mathbf{D}$. Specifically, $\gamma=0$ corresponds to the case where the $X_{i,j}$ are \emph{i.i.d.}
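Definition~\ref{defn:markovdb} is straightforward to simulate. The sketch below (function and parameter names are ours, not from the paper) exploits the decomposition ${\mathbf{P} = \gamma \mathbf{I} + (1-\gamma) \mathbf{U}}$: each transition either repeats the previous symbol with probability $\gamma$ or resamples from $u$.

```python
import random

def markov_database(m, n, gamma, u, rng):
    """Sample an (m, n, P) unlabeled Markov database with
    P = gamma*I + (1-gamma)*U, where every row of U equals u.
    Rows are i.i.d.; each row is a stationary first-order Markov
    chain started from the stationary distribution pi = u."""
    symbols = list(range(len(u)))
    D = []
    for _ in range(m):
        row = [rng.choices(symbols, weights=u)[0]]   # X_{i,1} ~ pi = u
        for _ in range(n - 1):
            if rng.random() < gamma:                 # repeat previous symbol
                row.append(row[-1])
            else:                                    # resample from u
                row.append(rng.choices(symbols, weights=u)[0])
        D.append(row)
    return D

rng = random.Random(0)
D = markov_database(m=4, n=10, gamma=0.3, u=[0.5, 0.3, 0.2], rng=rng)
```

Since every row of $\mathbf{U}$ equals $u$, the stationary distribution is $\pi = u$, which is why the first symbol of each row is drawn directly from $u$.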
\begin{defn}{\textbf{(Repetition Matrix)}}\label{defn:repetitionpattern}
The \emph{repetition matrix} $\mathbf{S}$ is a random matrix of size ${m_n\times n}$ with the following structure:
\begin{itemize}
\item $\mathbf{S}$ consists of independent mutually exclusive blocks of $W_n=\Theta(n^{d_{\text{rep}}})$ consecutive rows.
\item Each block of size ${W_n\times n}$ is obtained by repeating a row vector $W_n$ times, where the row vector consists of $n$ \emph{i.i.d.} entries drawn from a discrete probability distribution $p_S$ with a finite integer support ${\{0,\dots,s_{\max}\}}$.
\end{itemize}
Here $W_n$ and $d_{\text{rep}}$ are called \emph{repetition block size} and \emph{repetition order}, respectively. Furthermore, the parameter ${\delta\triangleq p_S(0)}$ is called the \emph{deletion probability}.
\end{defn}
In most of the paper, we assume a random repetition pattern as in Definition~\ref{defn:repetitionpattern}. In Section~\ref{subsec:adversarialrepetition}, we will discuss the effects of adversarial worst-case repetition patterns.
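A minimal sketch of sampling the repetition matrix of Definition~\ref{defn:repetitionpattern} (helper names are illustrative): draw one row of $n$ \emph{i.i.d.} samples from $p_S$ per block and tile it $W_n$ times.

```python
import random

def repetition_matrix(m, n, W, p_s, rng):
    """Sample the repetition matrix S: independent blocks of W consecutive
    identical rows; each block repeats a single row of n i.i.d. draws
    from p_s on {0, ..., s_max}.  p_s[0] = delta is the deletion prob."""
    support = list(range(len(p_s)))          # {0, 1, ..., s_max}
    S = []
    while len(S) < m:
        base = rng.choices(support, weights=p_s, k=n)
        S.extend([list(base) for _ in range(W)])
    return S[:m]

rng = random.Random(1)
# Illustrative parameters: delta = 0.2, s_max = 2, block size W = 3.
S = repetition_matrix(m=6, n=8, W=3, p_s=[0.2, 0.5, 0.3], rng=rng)
```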
\begin{defn}{\textbf{(Correlated Repeated Database, Labeling Function)}}\label{defn:labeleddb}
Let $\mathbf{D}^{(1)}$ be an ${(m_n,n,\mathbf{P})}$ unlabeled Markov database, $\mathbf{S}$ a repetition matrix, and $\boldsymbol{\Theta}_n$ a uniform permutation of $[m_n]$, where $\mathbf{D}^{(1)}$, $\mathbf{S}$ and $\boldsymbol{\Theta}_n$ are mutually independent and chosen as in Definitions~\ref{defn:markovdb} and \ref{defn:repetitionpattern}. Also, let $p_{Y|X}$ be a conditional probability distribution with both $X$ and $Y$ taking values from $\mathfrak{X}$. Given $\mathbf{D}^{(1)}$, $\mathbf{S}$ and $p_{Y|X}$, the pair ${(\mathbf{D}^{(2)},\boldsymbol{\Theta}_n)}$ is called the \emph{labeled repeated database} if the $(i,j)$\textsuperscript{th} entry $X_{i,j}$ of $\mathbf{D}^{(1)}$ and the $(\boldsymbol{\Theta}_n(i),j)$\textsuperscript{th} entry $Y_{\boldsymbol{\Theta}_n(i),j}$ of $\mathbf{D}^{(2)}$ have the following relation:
\iftoggle{singlecolumn}{
\begin{align}
Y_{\boldsymbol{\Theta}_n(i),j}&=
\begin{cases}
E, & \text{if } S_{\boldsymbol{\Theta}_n(i),j}=0\\
Y^{S_{\boldsymbol{\Theta}_n(i),j}}, & \text{if } S_{\boldsymbol{\Theta}_n(i),j}\ge 1
\end{cases}
\end{align}
}{
\begin{align}
Y_{\boldsymbol{\Theta}_n(i),j}&=
\begin{cases}
E, & \text{if } S_{\boldsymbol{\Theta}_n(i),j}=0\\
Y^{S_{\boldsymbol{\Theta}_n(i),j}}, & \text{if } S_{\boldsymbol{\Theta}_n(i),j}\ge 1
\end{cases}
\end{align}
}
for $ (i,j)\in[m_n]\times[n]$, where $Y^{S_{\boldsymbol{\Theta}_n(i),j}}$ is a random row vector of length $S_{\boldsymbol{\Theta}_n(i),j}$ with the following probability distribution, conditioned on $X_{i,j}$
\iftoggle{singlecolumn}{
\begin{align}
\Pr\left(Y^{S_{\boldsymbol{\Theta}_n(i),j}}=y^{S_{\boldsymbol{\Theta}_n(i),j}}\Big|X_{i,j}=x\right)&=\prod\limits_{l=1}^{S_{\boldsymbol{\Theta}_n(i),j}} p_{Y|X}(y_l |x) \label{eq:noiseiid}
\end{align}
}{
\begin{align}
\Pr\left(Y^{S_{\boldsymbol{\Theta}_n(i),j}}=y^{S_{\boldsymbol{\Theta}_n(i),j}}\Big|X_{i,j}=x\right)&=\prod\limits_{l=1}^{S_{\boldsymbol{\Theta}_n(i),j}} p_{Y|X}(y_l |x) \label{eq:noiseiid}
\end{align}
}
and ${Y_{\boldsymbol{\Theta}_n(i),j}=E}$ corresponds to $Y_{\boldsymbol{\Theta}_n(i),j}$ being the empty string. $\boldsymbol{\Theta}_n$ and $\mathbf{D}^{(2)}$ are called the \emph{labeling function} and \emph{correlated repeated database}, respectively.
Note that $S_{\boldsymbol{\Theta}_n(i),j}$ indicates the number of times $X_{i,j}$ is repeated (including deletions and replications). When $S_{\boldsymbol{\Theta}_n(i),j}=0$, $X_{i,j}$ is said to be \emph{deleted} (repeated zero times) and when $S_{\boldsymbol{\Theta}_n(i),j}>1$, $X_{i,j}$ is said to be \emph{replicated} $S_{\boldsymbol{\Theta}_n(i),j}$ times (repeated $S_{\boldsymbol{\Theta}_n(i),j}$ times).
The respective rows $X_{i_1}^n$ and $Y_{i_2}^{K_n}$ of $\mathbf{D}^{(1)}$ and $\mathbf{D}^{(2)}$ are said to be \emph{matching rows}, if ${\boldsymbol{\Theta}_n(i_1)=i_2}$, where $K_n\triangleq\sum_{j=1}^n S_{i_2,j}$.
\end{defn}
In our model, the correlated repeated database $\mathbf{D}^{(2)}$ is obtained by permuting the rows of the unlabeled Markov database $\mathbf{D}^{(1)}$ with the uniform permutation $\boldsymbol{\Theta}_n$ followed by repetition based on the repetition matrix $\mathbf{S}$ and introduction of noise through $p_{Y|X}$. The relationship between $\mathbf{D}^{(1)}$ and $\mathbf{D}^{(2)}$, as described in Definition~\ref{defn:labeleddb}, is illustrated in Figure~\ref{fig:dmc}. As we formalize later, the goal is to recover the labeling function $\boldsymbol{\Theta}_n$ based on the observations of $\mathbf{D}^{(1)}$ and $\mathbf{D}^{(2)}$.
Note that \eqref{eq:noiseiid} states that we can treat $Y_{\boldsymbol{\Theta}_n(i),j}$ as the output of the discrete memoryless channel (DMC) $p_{Y|X}$ with input sequence consisting of $S_{\boldsymbol{\Theta}_n(i),j}$ copies of $X_{i,j}$ concatenated together. We stress that $p_{Y|X}$ is a general model, capturing any distortion and noise on the database entries, though we refer to this as \say{noise} in this paper.
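The generative relation of Definition~\ref{defn:labeleddb} can be sketched as follows (function names and toy parameters are ours): each entry $X_{i,j}$ is repeated $S_{\boldsymbol{\Theta}_n(i),j}$ times, and every copy is passed independently through the DMC $p_{Y|X}$, as in \eqref{eq:noiseiid}.

```python
import random

def correlated_database(D1, S, theta, p_y_given_x, rng):
    """Permute the rows of D1 by theta, repeat entry X_{i,j} exactly
    S[theta[i]][j] times (0 = deletion, i.e. the empty string E), and
    pass each repeated symbol independently through the DMC
    p_y_given_x, yielding the correlated repeated database D2."""
    alphabet = list(range(len(p_y_given_x[0])))
    D2 = [None] * len(D1)
    for i, row in enumerate(D1):
        out = []
        for j, x in enumerate(row):
            for _ in range(S[theta[i]][j]):
                out.append(rng.choices(alphabet, weights=p_y_given_x[x])[0])
        D2[theta[i]] = out                       # label moves to theta(i)
    return D2

rng = random.Random(2)
D1 = [[0, 1], [1, 0]]
S = [[2, 0], [1, 0]]
theta = [1, 0]                                   # theta(0)=1, theta(1)=0
p_y_given_x = [[0.9, 0.1], [0.1, 0.9]]           # BSC(0.1)
D2 = correlated_database(D1, S, theta, p_y_given_x, rng)
```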
We are mainly interested in two extremes for the repetition block size $W_n$:
\begin{itemize}
\item Every row of $\mathbf{D}^{(1)}$ experiences the same repetition which we call \emph{identical repetition}. More formally, $\mathbf{S}$ is a matrix consisting of $m_n$ identical copies of a row vector $S^n$ called the \emph{column repetition pattern} and $W_n=m_n\sim2^{n R}$ which we denote by $d_{\text{rep}}=\infty$.
\item Rows of $\mathbf{D}^{(1)}$ experience \emph{i.i.d.} repetition which we call \emph{independent repetition}. More formally, $\mathbf{S}$ has \emph{i.i.d.} rows and $W_n=1$ which we denote by $d_{\text{rep}}=0$.
\end{itemize}
We will observe that these two models pose different challenges to matching and in turn necessitate different solutions with different implications. After focusing on the two extremes described above in Sections~\ref{sec:matchingcapacityWm} and \ref{sec:matchingcapacityW1}, we will discuss the intermediate regimes for the repetition block size $W_n$ in Sections~\ref{subsec:achievabilityW1} and \ref{subsec:converseW1}.
As discussed in Sections~\ref{sec:matchingcapacityWm} and \ref{sec:matchingcapacityW1}, inferring the repetition pattern, particularly deletions, is a difficult, if not impossible, task. Therefore, we assume the availability of \emph{seeds} to help with the inference of the underlying repetition pattern, similar to database matching~\cite{bakirtas2021database} and graph matching \cite{shirani2017seeded, fishkind2019seeded} settings.
\begin{defn}{\textbf{(Seeds)}}
\label{defn:seeds}
A \emph{seed} is a pair of matching rows whose labels are known universally. A \emph{batch of $\Lambda_n$ seeds} $(\mathbf{G}^{(1)},\mathbf{G}^{(2)})$ is a batch of $\Lambda_n$ correctly-matched row pairs. $\Lambda_n$ is called the \emph{seed size}.
\end{defn}
The relation between the repetition patterns of the database pair $(\mathbf{D}^{(1)},\mathbf{D}^{(2)})$ and the seeds $(\mathbf{G}^{(1)},\mathbf{G}^{(2)})$ will be clarified in the relevant sections. Furthermore, note that in Definition~\ref{defn:seeds}, for convenience the seeds are assumed to be additional to the databases.
\begin{comment}
Alternatively, one can define the seed matrices $\mathbf{G}^{(1)}$ and $\mathbf{G}^{(2)}$ as sub-databases of $\mathbf{D}^{(1)}$ and $\mathbf{D}^{(2)}$, respectively, by using a slightly different matching error probability than the one to be specified in Definition~\ref{defn:matchingscheme}.
\end{comment}
Throughout Sections~\ref{sec:matchingcapacityWm} and \ref{sec:matchingcapacityW1}, we assume a double logarithmic seed size $\Lambda_n=\Omega(\log\log m_n)$. We will discuss the effects of not having seeds in Section~\ref{subsec:seedlessWm}.
Besides the seeds, we assume that the locations of some deleted entries are revealed. This is formalized in the following definition:
\begin{defn}{\textbf{(Partial Deletion Location Information)}} \label{defn:partialdellocinfo}
Given the repetition matrix $\mathbf{S}$ with the repetition block size $W_n$, the \emph{partial deletion location information} $\mathbf{A}$ is an ${m_n\times n}$ random matrix, with the following conditional distribution on $\mathbf{S}$ and its structure:
\iftoggle{singlecolumn}{
\begin{align}
& \Pr(A_{i W_n+1,j}= 1|\mathbf{S}) = \alpha \mathbbm{1}_{[S_{i W_n+1,j}=0]}\\
& A_{i W_n+l,j}= A_{i W_n+1,j}\\
&\forall i\in\left\{0,\dots,\left\lfloor\frac{m_n}{W_n}\right\rfloor\right\},\forall j\in[n],\forall l\in[W_n-1]
\end{align}
}{
\begin{align}
& \Pr(A_{i W_n+1,j}= 1|\mathbf{S}) = \alpha \mathbbm{1}_{[S_{i W_n+1,j}=0]}\\
& A_{i W_n+l,j}= A_{i W_n+1,j}\\
&\forall i\in\left\{0,\dots,\left\lfloor\frac{m_n}{W_n}\right\rfloor\right\},\forall j\in[n],\forall l\in[W_n-1]
\end{align}
}
where ${A_{i,j}=1}$ corresponds to $\mathbf{D}^{(1)}_{\Theta_n(i),j}$ being revealed as deleted and ${A_{i,j}=0}$ corresponds to either $\mathbf{D}^{(1)}_{\Theta_n(i),j}$ not being deleted or not being revealed after deletion. The parameter ${\alpha\in[0,1]}$ is called the \emph{deletion detection probability}.
\end{defn}
Definition~\ref{defn:partialdellocinfo} states that in a given ${W_n\times n}$ block in which the repetition matrix has identical rows, the location of each deleted column is revealed with probability $\alpha$. Since the columns of $\mathbf{S}$ are i.i.d. in a given such block and $\mathbf{S}$ and $\mathbf{D}^{(1)}$ are independent, each deleted column is revealed independently of the other columns of $\mathbf{S}$ and $\mathbf{D}^{(1)}$. Furthermore, for any ${i_1\neq i_2}$, ${\forall j_1,j_2 \in[n]}$ and ${\forall l_1,l_2 \in [W_n]}$, ${S_{i_1 W_n+l_1,j_1}}$ and ${S_{i_2 W_n+l_2,j_2}}$ are independent, which in turn implies the independence of ${A_{i_1 W_n+l_1,j_1}}$ and ${A_{i_2 W_n+l_2,j_2}}$. In other words, any two entries of $\mathbf{A}$ located in different columns and/or different such ${W_n\times n}$ blocks are independent.
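A sketch of sampling $\mathbf{A}$ from $\mathbf{S}$ as in Definition~\ref{defn:partialdellocinfo} (helper names are ours): within each block of $W_n$ identical rows, each deleted column is revealed with probability $\alpha$, and the revealed pattern is copied to all rows of the block.

```python
import random

def deletion_location_info(S, W, alpha, rng):
    """Within each block of W identical rows of S, reveal each deleted
    column (S = 0) independently with probability alpha, and copy the
    revealed pattern to all rows of the block."""
    m, n = len(S), len(S[0])
    A = []
    for b in range(0, m, W):
        revealed = [1 if S[b][j] == 0 and rng.random() < alpha else 0
                    for j in range(n)]
        A.extend([list(revealed) for _ in range(min(W, m - b))])
    return A

rng = random.Random(3)
S = [[0, 2, 0, 1]] * 2 + [[1, 0, 1, 1]] * 2      # W = 2, two blocks
A = deletion_location_info(S, W=2, alpha=0.9, rng=rng)
```

By construction, $A_{i,j}=1$ is only possible where $S_{i,j}=0$, and the pattern is constant within each block, matching the definition.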
\begin{figure}[t]
\iftoggle{singlecolumn}{
\centerline{\includegraphics[width=0.7\textwidth]{Figures/DMCdiagram.pdf}}
}{
\centerline{\includegraphics[width=0.5\textwidth]{Figures/DMCdiagram.pdf}}
}
\caption{Relation between the unlabeled database $\mathbf{D}^{(1)}$ and the correlated repeated database $\mathbf{D}^{(2)}$.}
\label{fig:dmc}
\end{figure}
\begin{sloppypar}
\begin{defn}{\textbf{(Successful Matching Scheme)}}\label{defn:matchingscheme}
A \emph{matching scheme} is a sequence of mappings ${\phi_n: (\mathbf{D}^{(1)},\mathbf{D}^{(2)},\mathbf{G}^{(1)},\mathbf{G}^{(2)},\mathbf{A})\mapsto \hat{\boldsymbol{\Theta}}_n }$ where $\mathbf{D}^{(1)}$ is the unlabeled Markov database, $\mathbf{D}^{(2)}$ is the correlated repeated database, $(\mathbf{G}^{(1)},\mathbf{G}^{(2)})$ are seeds, $\mathbf{A}$ is the partial deletion location information and $\hat{\boldsymbol{\Theta}}_n$ is the estimate of the correct labeling function $\boldsymbol{\Theta}_n$. The scheme $\phi_n$ is \emph{successful} if
\iftoggle{singlecolumn}{
\begin{align}
\Pr\left(\hat{\boldsymbol{\Theta}}_n(J)\neq\boldsymbol{\Theta}_n(J)\right)&\to 0 \text{ as }n\to\infty \label{eq:proberror}
\end{align}}
{
\begin{align}
\Pr\left(\hat{\boldsymbol{\Theta}}_n(J)\neq\boldsymbol{\Theta}_n(J)\right)&\to 0 \text{ as }n\to\infty \label{eq:proberror}
\end{align}
}
where the index $J$ is drawn uniformly from $[m_n]$.
\end{defn}
\end{sloppypar}
We stress that in both the database matching and the correlation detection settings, the relationship among the row size $m_n$, the column size $n$ and the database distribution parameters is of central interest~\cite{kunisky2022strong,zeynepdetecting2022,tamir2022joint}. Note that for a fixed column size $n$, matching becomes harder as the row size $m_n$ increases, since a larger candidate row set increases the probability of mismatch. Furthermore, as stated in~\cite[Theorem 1.2]{kunisky2022strong}, for distributions with parameters constant in $n$ and $m_n$, the regime of interest is the logarithmic regime where $n\sim \log m_n$. Thus, we utilize the \emph{database growth rate} introduced in~\cite{shirani8849392} to characterize the relationship between the row size $m_n$ and the column size $n$.
\begin{defn}\label{defn:dbgrowthrate}{\textbf{(Database Growth Rate)}}
The \emph{database growth rate} $R$ of an unlabeled Markov database with $m_n$ rows and $n$ columns is defined as
\iftoggle{singlecolumn}{
\begin{align}
R&=\lim\limits_{n\to\infty} \frac{1}{n}\log m_n
\end{align}
}{
\begin{align}
R&=\lim\limits_{n\to\infty} \frac{1}{n}\log m_n
\end{align}
}
\end{defn}
In Sections~\ref{sec:matchingcapacityWm} and \ref{sec:matchingcapacityW1}, we assume that the database growth rate $R$ is positive. We will discuss the zero-rate regime $R=0$ in Section~\ref{subsec:zerorateWm}.
\begin{defn}{\textbf{(Achievable Database Growth Rate)}}\label{defn:achievableWm}
Consider a sequence of ${(m_n,n,\mathbf{P})}$ unlabeled Markov databases, a repetition probability distribution $p_S$, a repetition order $d_{\text{rep}}$, a noise distribution $p_{Y|X}$ and the resulting sequence of correlated repeated databases. For a seed size $\Lambda_n$ and a deletion detection probability $\alpha$, a database growth rate $R$ is said to be \emph{achievable} if there exists a successful matching scheme when the unlabeled database has growth rate $R$.
\end{defn}
\begin{defn}{\textbf{(Matching Capacity)}}\label{defn:matchingcapacity}
The \emph{matching capacity} $C(d_{\text{rep}},\alpha)$ is the supremum of the set of all achievable rates corresponding to a probability transition matrix $\mathbf{P}$, repetition probability distribution $p_S$, repetition order $d_{\text{rep}}$, noise distribution $p_{Y|X}$, seed size $\Lambda_n$ and a deletion detection probability $\alpha$.
\end{defn}
In this paper, our goal is to characterize the matching capacity $C(d_{\text{rep}},\alpha)$ in different regimes of the parameters by providing database matching schemes as well as upper bounds on all achievable database growth rates.
\section{Matching Capacity For Independent Repetition}
\label{sec:matchingcapacityW1}
In this section, we investigate the upper and the lower bounds on the matching capacity $C(d_{\text{rep}},\alpha)$ for independent repetition ($W_n=1$, $d_{\text{rep}}=0$), where we assume a repetition pattern which is independent across all rows.
Due to the independence of the repetition pattern and the independence of the database rows, the seeds do not offer any additional information on the repetition pattern or row matching. Thus, we focus on the matching capacity $C(0,\alpha)$ in the regime with no seeds, \emph{i.e.,} $\Lambda_n=0$. Furthermore, for tractability, we focus on the special case where $\gamma=0$, resulting in an \emph{i.i.d.} database distribution $p_X(x)=u_x$, $\forall x\in\mathfrak{X}$.
We state our main result on the matching capacity for independent repetition in the following theorem:
\begin{thm}{\textbf{(Matching Capacity Bounds for Independent Repetition)}}\label{thm:mainresultW1}
Consider a probability transition matrix $\mathbf{P}$ with $\gamma=0$, a noise distribution $p_{Y|X}$ and a repetition distribution $p_S$. Then the matching capacity satisfies
\iftoggle{singlecolumn}{
\begin{align}
C(0,\alpha)&\ge\left[\frac{\mathbb{E}[S]}{s_{\max}} H(X)-(1-\alpha\delta)H_b\left( \frac{\mathbb{E}[S]}{(1-\alpha\delta)s_{\max}}\right)-\mathbb{E}[S]H(X|Y)\right]^+\label{eq:rowwiseachievable}\\
C(0,\alpha)&\le \inf\limits_{n\ge 1}\frac{1}{n} I({X}^n;{Y}^{K_n},{A}^n)\label{eq:rowwiseconverse}
\end{align}
}{
\begin{align}
C(0,\alpha)&\ge\Big[\frac{\mathbb{E}[S]}{s_{\max}} H(X)-\mathbb{E}[S]H(X|Y)\notag\\&\hspace{2em}-(1-\alpha\delta)H_b\left( \frac{\mathbb{E}[S]}{(1-\alpha\delta)s_{\max}}\right)\Big]^+\label{eq:rowwiseachievable}\\
C(0,\alpha)&\le \inf\limits_{n\ge 1}\frac{1}{n} I({X}^n;{Y}^{K_n},{A}^n)\label{eq:rowwiseconverse}
\end{align}
}
where $\delta$ and $\alpha$ are the deletion and the deletion detection probabilities, respectively and $s_{\max}\triangleq \max \mathrm{supp}(p_S)$.
Furthermore, for repetition distributions with $\frac{1}{s_{\max}}\mathbb{E}[S]\ge \frac{1-\alpha\delta}{|\mathfrak{X}|}$, the lower bound in equation~\eqref{eq:rowwiseachievable} can be tightened as
\iftoggle{singlecolumn}{
\begin{align}
C(0,\alpha)\ge\Big[(1-\alpha\delta) H(X)-\Big(1&-\alpha\delta-\frac{\mathbb{E}[S]}{s_{\max}} \Big)\min\{H(X),\log(|\mathfrak{X}|-1)\}\notag\\
&-(1-\alpha\delta)H_b\left( \frac{\mathbb{E}[S]}{(1-\alpha\delta)s_{\max}}\right)-\mathbb{E}[S]H(X|Y)\Big]^+\label{eq:rowwiseachievable2}
\end{align}
}{
\begin{align}
C(0,\alpha&)\ge\Big[(1-\alpha\delta) H(X)-\mathbb{E}[S]H(X|Y)\notag\\
&-\Big(1-\alpha\delta-\frac{\mathbb{E}[S]}{s_{\max}} \Big)\min\{H(X),\log(|\mathfrak{X}|-1)\}\notag\\
&-(1-\alpha\delta)H_b\left( \frac{\mathbb{E}[S]}{(1-\alpha\delta)s_{\max}}\right)\Big]^+\label{eq:rowwiseachievable2}
\end{align}
}
\end{thm}
We note that the upper bound given in Theorem~\ref{thm:mainresultW1} (equation~\eqref{eq:rowwiseconverse}) is an infimum over the column size $n$. Therefore, its evaluation for any $n\in\mathbb{N}$ yields an upper bound on the matching capacity.
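As a concrete illustration, the lower bound \eqref{eq:rowwiseachievable} is easy to evaluate numerically. The sketch below (parameter values are illustrative only, not from the paper) considers a deletion-only binary example with $X\sim\text{Bernoulli}(\nicefrac{1}{2})$, $p_{Y|X}\sim\text{BSC}(0.05)$, $p_S\sim\text{Bernoulli}(0.8)$ and $\alpha=0.7$.

```python
import math

def Hb(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p*math.log2(p) - (1-p)*math.log2(1-p)

def lower_bound(HX, HXgY, p_s, alpha):
    """Evaluate the achievable rate of (eq:rowwiseachievable):
    [ E[S]/s_max * H(X)
      - (1 - alpha*delta) * H_b( E[S] / ((1 - alpha*delta) * s_max) )
      - E[S] * H(X|Y) ]^+                                             """
    ES = sum(s * p for s, p in enumerate(p_s))   # E[S]
    s_max = len(p_s) - 1
    delta = p_s[0]                               # deletion probability
    c = 1 - alpha * delta
    return max(ES / s_max * HX - c * Hb(ES / (c * s_max)) - ES * HXgY, 0.0)

# H(X) = 1 bit for uniform binary X; H(X|Y) = H_b(0.05) for BSC(0.05).
R_lb = lower_bound(HX=1.0, HXgY=Hb(0.05), p_s=[0.2, 0.8], alpha=0.7)
```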
With independent repetition, we cannot perform repetition detection as in Section~\ref{sec:matchingcapacityWm}, and hence we are restricted to using a single-step rowwise matching scheme as done in~\cite{shirani8849392}. This builds an analogy between database matching and channel decoding. In particular, our approach to database matching for independent repetition is related to decoding in the noisy synchronization channel~\cite{cheraghchi2020overview}.
We stress that there are several important differences between the database matching problem and the synchronization channel literature: \emph{i)} in database matching the database distribution is fixed and cannot be designed or optimized, whereas in channel coding the main goal is to optimize the input distribution to find the channel capacity; \emph{ii)} the synchronization channel literature mostly focuses on code design, with few works, such as~\cite{diggavi1603788}, focusing on random codebook arguments, and only for a few types of synchronization errors such as deletion~\cite{diggavi1603788} and duplication~\cite{drinea2007improved}; and finally \emph{iii)} our database matching result provides an achievability argument for all repetition distributions with finite support, whereas the synchronization channel literature mainly focuses on some families of repetition distributions. As a result, for input-constrained noisy synchronization channels, our generalized random codebook argument, presented in Section~\ref{subsec:achievabilityW1}, is novel and might be of independent interest.
In Section~\ref{subsec:achievabilityW1}, we prove the achievability part of Theorem~\ref{thm:mainresultW1} (equation~\eqref{eq:rowwiseachievable}) by proposing a rowwise matching scheme. Then, in Section~\ref{subsec:converseW1} we prove the converse part (equation~\eqref{eq:rowwiseconverse}). Then, we present strictly tighter upper bounds for a special case with only deletions, \emph{i.e.,} $s_{\max}=1$.
\subsection{Row Matching Scheme and Achievability}
\label{subsec:achievabilityW1}
To prove the achievability, we consider the following matching scheme:
\begin{enumerate}[label=\textbf{\arabic*)},leftmargin=1.3\parindent]
\item Given the $i$\textsuperscript{th} row $Y_i^{K_n}$ of $\mathbf{D}^{(2)}$ and the corresponding row $A_i^n$ of the partial deletion location information $\mathbf{A}$, we discard the $j$\textsuperscript{th} column of $\mathbf{D}^{(1)}$ for every $j\in[n]$ with $A_{i,j}=1$ to obtain $\bar{\mathbf{D}}^{(1)}$, since a column known to be deleted offers no additional information due to the independence of the database entries.
\item We convert the problem into a deletion-only one by elementwise repeating all the columns of $\bar{\mathbf{D}}^{(1)}$ $s_{\max}$ times, which we call \say{stretching by $s_{\max}$}, to obtain $\tilde{\mathbf{D}}^{(1)}$. At this step, $Y_i^{K_n}$ can be seen as the output of the noisy deletion channel where the $\boldsymbol{\Theta}_n^{-1}(i)$\textsuperscript{th} row of $\tilde{\mathbf{D}}^{(1)}$ is the input.
\item We perform a generalized version of the decoding algorithm introduced in~\cite{bakirtas2021database} for the noiseless deletions with deletion detection probability. Note that the latter itself is an extension of the one proposed in~\cite{diggavi1603788}.
\end{enumerate}
The full proof of the achievability part (equations~\eqref{eq:rowwiseachievable} and \eqref{eq:rowwiseachievable2}) via the matching scheme described above can be found in Appendix~\ref{proof:achievabilityW1}.
\begin{figure}[t]
\centerline{\includegraphics[width=0.5\textwidth]{Figures/stretching.pdf}}
\caption{An illustrative example of the column discarding and the stretching of ${X}^n$ into $\tilde{{X}}^{(n-|{A}^n|) s_{\max}}$, for a given deletion detection pattern ${A}^n$. First, we discard each element of $X^n$ known to be deleted to obtain $\bar{{X}}^{n-|{A}^n|}$. Then, each element of $\bar{{X}}^{n-|{A}^n|}$ is repeated $s_{\max}$ times to obtain $\tilde{{X}}^{(n-|{A}^n|) s_{\max}}$.}
\label{fig:elemwiserep}
\end{figure}
An illustrative example of the \say{stretching} is given in Figure~\ref{fig:elemwiserep}. The idea behind this stretching is that since each entry can be repeated at most $s_{\max}$ times when we stretch $X^n$ $s_{\max}$ times to obtain $\tilde{X}^{n s_{\max}}$, the output of the synchronization channel (before the noise $p_{Y|X}$) is guaranteed to be a subsequence of $\tilde{X}^{n s_{\max}}$. This way, we can convert the general noisy synchronization problem into a noisy deletion-only problem. We note that when $s_{\max}$ becomes large compared to the alphabet size $|\mathfrak{X}|$, the lower bound given in~\eqref{eq:rowwiseachievable} goes to zero, even when $p_S(s_{\max})$ is very small.
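Steps 1 and 2 of the scheme, column discarding followed by stretching, can be sketched as follows (a deterministic toy example; the function name is ours):

```python
def discard_and_stretch(x, a, s_max):
    """Drop the entries of x revealed as deleted (a == 1), then repeat
    every surviving entry s_max times ("stretching by s_max").  Any
    repetition with at most s_max copies per entry then yields a
    subsequence of the stretched row."""
    kept = [xj for xj, aj in zip(x, a) if aj == 0]   # column discarding
    return [xj for xj in kept for _ in range(s_max)] # elementwise repetition

x = [0, 1, 1, 0]
a = [0, 1, 0, 0]                 # second entry revealed as deleted
stretched = discard_and_stretch(x, a, s_max=2)
# stretched == [0, 0, 1, 1, 0, 0]
```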
For any repetition structure described in Definition~\ref{defn:repetitionpattern}, one can simply ignore the structure and apply the matching scheme described above. Therefore, the lower bound of Theorem~\ref{thm:mainresultW1} (equation~\eqref{eq:rowwiseachievable}) is achievable for any repetition order.
\begin{cor}{\textbf{(Capacity Lower Bound for Arbitrary Repetition Order)}}
\label{cor:achievableWarbitrary}
For any repetition order $d_{\text{rep}}$ and any $\alpha\in[0,1]$, the matching capacity $C(d_{\text{rep}},\alpha)$ satisfies
\iftoggle{singlecolumn}{
\begin{align}
C(d_{\text{rep}},\alpha) &\ge\left[\frac{\mathbb{E}[S]}{s_{\max}} H(X)-(1-\alpha\delta)H_b\left( \frac{\mathbb{E}[S]}{(1-\alpha\delta)s_{\max}}\right)-\mathbb{E}[S]H(X|Y)\right]^+\label{eq:arbitraryachievable}
\end{align}
}{
\begin{align}
C(&d_{\text{rep}},\alpha)\ge\Big[\frac{\mathbb{E}[S]}{s_{\max}} H(X)-\mathbb{E}[S]H(X|Y)\notag\\
&\hspace{3.1em}-(1-\alpha\delta)H_b\left( \frac{\mathbb{E}[S]}{(1-\alpha\delta)s_{\max}}\right)\Big]^+\label{eq:arbitraryachievable}
\end{align}
}
\end{cor}
\subsection{Converse}
\label{subsec:converseW1}
In this subsection, we prove the converse part of Theorem~\ref{thm:mainresultW1} and evaluate the resulting upper bound for some special cases. First, we observe that by following the genie argument provided in the converse of Theorem~\ref{thm:mainresultWm}, the upper bound of Theorem~\ref{thm:mainresultWm} also upper bounds $C(d_{\text{rep}},\alpha)$ for any $\alpha$ and any $d_{\text{rep}}$.
\begin{cor}{\textbf{(Capacity Upper Bound for Arbitrary Repetition Order)}}\label{cor:converseWarbitrary}
For any repetition order $d_{\text{rep}}$ and any $\alpha\in[0,1]$, the matching capacity $C(d_{\text{rep}},\alpha)$ satisfies
\iftoggle{singlecolumn}{
\begin{align}
C(d_{\text{rep}},\alpha) \le I(X;Y^S,S)
\end{align}
}{
\begin{align}
C(d_{\text{rep}},\alpha) \le I(X;Y^S,S)
\end{align}
}
\end{cor}
We next prove the converse of Theorem~\ref{thm:mainresultW1} (equation \eqref{eq:rowwiseconverse}). We then analytically evaluate this for some $n\in\mathbb{N}$ and we argue that the evaluated upper bounds are strictly tighter than that in Corollary~\ref{cor:converseWarbitrary}.
\begin{proofconverseW1}
We start with the modified Fano's inequality used in Section~\ref{subsec:converseWm}. Let
\iftoggle{singlecolumn}{
\begin{align}
P_e&\triangleq \Pr\left(\boldsymbol{\Theta}_n(J)\neq\hat{\boldsymbol{\Theta}}_n(J)\right),\hspace{1em} J\sim\text{Unif}([m_n])
\end{align}
}{
\begin{align}
P_e&\triangleq \Pr\left(\boldsymbol{\Theta}_n(J)\neq\hat{\boldsymbol{\Theta}}_n(J)\right),\hspace{1em} J\sim\text{Unif}([m_n])
\end{align}
}
Then, we have
\iftoggle{singlecolumn}{
\begin{align}
H(\boldsymbol{\Theta}_n)\le 1+m_n P_e\log m_n+I&(\boldsymbol{\Theta}_n;\mathbf{D}^{(1)},\mathbf{D}^{(2)},\mathbf{A})
\end{align}
}{
\begin{align}
H(\boldsymbol{\Theta}_n)\le 1+m_n P_e\log m_n+I&(\boldsymbol{\Theta}_n;\mathbf{D}^{(1)},\mathbf{D}^{(2)},\mathbf{A})
\end{align}
}
where
\iftoggle{singlecolumn}{
\begin{align}
I(\boldsymbol{\Theta}_n;\mathbf{D}^{(1)},\mathbf{D}^{(2)},\mathbf{A})
&= I(\boldsymbol{\Theta}_n;\mathbf{D}^{(1)}|\mathbf{D}^{(2)},\mathbf{A})\label{eq:converseW1first}\\
&\le I(\boldsymbol{\Theta}_n,\mathbf{D}^{(2)},\mathbf{A};\mathbf{D}^{(1)})\label{eq:converseW1seedsnoinfo}\\
&=\sum\limits_{i=1}^{m_n} I(X_i^n;Y_{\Theta_n(i)}^{K_n},A_{\Theta_n(i)}^n) \label{eq:converseW1indeprows}\\
&= m_n I(X^n;Y^{K_n},A^n)\label{eq:converseW1idrows}
\end{align}
}{
\begin{align}
I(\boldsymbol{\Theta}_n;\mathbf{D}^{(1)},\mathbf{D}^{(2)},\mathbf{A})
&= I(\boldsymbol{\Theta}_n;\mathbf{D}^{(1)}|\mathbf{D}^{(2)},\mathbf{A})\label{eq:converseW1first}\\
&\le I(\boldsymbol{\Theta}_n,\mathbf{D}^{(2)},\mathbf{A};\mathbf{D}^{(1)})\label{eq:converseW1seedsnoinfo}\\
&=\sum\limits_{i=1}^{m_n} I(X_i^n;Y_{\Theta_n(i)}^{K_n},A_{\Theta_n(i)}^n) \label{eq:converseW1indeprows}\\
&= m_n I(X^n;Y^{K_n},A^n)\label{eq:converseW1idrows}
\end{align}
}
where \eqref{eq:converseW1indeprows} follows since non-matching rows, together with their probabilistic side information on deletion locations, are independent, and \eqref{eq:converseW1idrows} follows since they are identically distributed.
Following similar steps to Section~\ref{subsec:converseWm}, we obtain
\iftoggle{singlecolumn}{
\begin{align}
R&\le \lim\limits_{n\to\infty} \frac{I(X^n;Y^{K_n},A^n)}{n}
\end{align}
}{
\begin{align}
R&\le \lim\limits_{n\to\infty} \frac{I(X^n;Y^{K_n},A^n)}{n}
\end{align}
}
whenever $P_e\to 0$ as $n\to\infty$.
Note that from Fekete's lemma~\cite{fekete1923verteilung}, for any subadditive sequence $\{a_n\}_{n\in\mathbb{N}}$, we have
\iftoggle{singlecolumn}{
\begin{align}
\lim\limits_{n\to\infty}\frac{a_n}{n} = \inf\limits_{n\ge 1}\frac{a_n}{n}
\end{align}
}{
\begin{align}
\lim\limits_{n\to\infty}\frac{a_n}{n} = \inf\limits_{n\ge 1}\frac{a_n}{n}
\end{align}
}
Therefore, it is sufficient to prove the subadditivity of $I(X^n;Y^{K_n},A^n)$.
Choose an arbitrary $r\in[n-1]$ and let $M_r\triangleq \sum_{j=1}^r S_j$ where $S^n$ is the repetition pattern through which $Y^{K_n}$ is obtained from $X^n$. Note that $M_r$ denotes a marker, stating which part of $Y^{K_n}$ depends on the first $r$ elements of $X^n$, denoted by $X_1^r$. Therefore we have a bijective relation between $(Y^{K_n},M_r)$ and $(Y_1^{\sum_{j=1}^r S_j},Y_{\sum_{j=1}^r S_j +1}^{K_n})$ where the subscripts and the superscripts denote the starting and the ending points of the vectors, respectively. Thus,
\iftoggle{singlecolumn}{
\begin{align}
I(X^n;Y^{K_n},A^n)&\le I(X^n;Y^{K_n},M_r,A^n)\\
&= I(X^n;Y_1^{\sum_{j=1}^r S_j},Y_{\sum_{j=1}^r S_j +1}^{K_n},A^n)\\
&= I(X_1^r,X_{r+1}^n;Y_1^{\sum_{j=1}^r S_j},Y_{\sum_{j=1}^r S_j +1}^{K_n},A_1^r,A_{r+1}^n)\\
&= I(X_1^r;Y_1^{\sum_{j=1}^r S_j},A_1^r) +I(X_{r+1}^n;Y_{\sum_{j=1}^r S_j +1}^{K_n},A_{r+1}^n)\label{eq:subadditivityupperbound}
\end{align}
}{
\begin{align}
I(&X^n;Y^{K_n},A^n)\notag\\&\le I(X^n;Y^{K_n},M_r,A^n)\\
&= I(X^n;Y_1^{\sum_{j=1}^r S_j},Y_{\sum_{j=1}^r S_j +1}^{K_n},A^n)\\
&= I(X_1^r,X_{r+1}^n;Y_1^{\sum_{j=1}^r S_j},Y_{\sum_{j=1}^r S_j +1}^{K_n},A_1^r,A_{r+1}^n)\\
&= I(X_1^r;Y_1^{\sum_{j=1}^r S_j},A_1^r) +I(X_{r+1}^n;Y_{\sum_{j=1}^r S_j +1}^{K_n},A_{r+1}^n)\label{eq:subadditivityupperbound}
\end{align}
}
where \eqref{eq:subadditivityupperbound} follows from the fact that $X^n$ and $A^n$ have \emph{i.i.d.} entries and the noise $p_{Y|X}$ acts independently on the entries. Thus, $I(X^n;Y^{K_n},A^n)$ is a subadditive sequence.
Hence,
\iftoggle{singlecolumn}{
\begin{align}
R&\le \inf\limits_{n\ge 1} \frac{I(X^n;Y^{K_n},A^n)}{n}\label{eq:W1infimum}
\end{align}
}{
\begin{align}
R&\le \inf\limits_{n\ge 1} \frac{I(X^n;Y^{K_n},A^n)}{n}\label{eq:W1infimum}
\end{align}
}
whenever $P_e\to 0$ as $n\to\infty$, concluding the proof.
\end{proofconverseW1}
We note that since the upper bound given in Theorem~\ref{thm:mainresultW1} is the infimum over all $n\ge1$, its evaluation at any $n\in\mathbb{N}$ yields an upper bound on the matching capacity. In Corollaries~\ref{cor:W1conversenoiselessub} and \ref{cor:W1conversenoisybinaryub}, we analytically evaluate this upper bound at $n=2$ under some assumptions on $p_{X,Y}$ when $s_{\max}=1$, \emph{i.e.,} when we only have deletions, and explicitly demonstrate the gap between the upper bounds given in Corollary~\ref{cor:converseWarbitrary} and Theorem~\ref{thm:mainresultW1}.
First, we consider a noiseless deletion setting with arbitrary database distribution $p_X$ in Corollary~\ref{cor:W1conversenoiselessub}.
\begin{sloppypar}
\begin{cor}{\textbf{(Upper Bound for Noiseless Deletion)}}\label{cor:W1conversenoiselessub}
Consider a noiseless deletion setting where ${p_{Y|X}(y|x) = \mathbbm{1}_{[x=y]}}$, ${\forall(x,y)\in\mathfrak{X}^2}$ and ${S\sim\text{Bernoulli}(1-\delta)}$. Then for any input distribution $p_X$, we have
\iftoggle{singlecolumn}{
\begin{align}
C(0,\alpha) &\le \frac{1}{2} I(X^2;Y^K,A^2)\\&= (1-\delta)H(X)-(1-\alpha)\delta(1-\delta)\left(1-\hat{q}\right)\label{eq:W1noiselessub}
\end{align}
}{
\begin{align}
C(0,\alpha) &\le \frac{1}{2} I(X^2;Y^K,A^2)\\
&= (1-\delta)H(X)-(1-\alpha)\delta(1-\delta)\left(1-\hat{q}\right)\label{eq:W1noiselessub}
\end{align}
}
where $\hat{q} \triangleq \sum_{x\in\mathfrak{X}} p_X(x)^2$.
\end{cor}
\end{sloppypar}
\begin{proof}
See Appendix~\ref{proof:rowwiseconversenoiselessub}.
\end{proof}
Note that for any $\mathfrak{X}$ with $|\mathfrak{X}|\ge 2$ and $\alpha\in[0,1)$ the upper bound given in Corollary~\ref{cor:W1conversenoiselessub} is strictly lower than the one provided in Corollary~\ref{cor:converseWarbitrary} which is
\iftoggle{singlecolumn}{
\begin{align}
I(X;Y,S) = (1-\delta) H(X).
\end{align}
}{
\begin{align}
I(X;Y,S) = (1-\delta) H(X).
\end{align}
}
\begin{figure}[t]
\centerline{\includegraphics[width=0.50\textwidth]{Figures/rowwiseratesBW.pdf}}\caption{The evaluation of the lower and upper bounds on the matching capacity for the binary noisy deletion case with $p_X\sim\text{Bernoulli}(\nicefrac{1}{2})$, $p_{S}\sim \text{Bernoulli}(1-\delta)$, $\alpha=0.7$ and $p_{Y|X}\sim \text{BSC}(0.05)$. The blue curve is the achievable rate stated in Theorem~\ref{thm:mainresultW1}. The yellow and the red curves are the evaluations of the upper bound stated in Theorem~\ref{thm:mainresultW1}, at $n=10$ and $n=2$, respectively. The purple curve shows the loose upper bound given in Corollary~\ref{cor:converseWarbitrary}. We see that the gap between the lower and the upper bounds shrinks as $n$ increases.}
\label{fig:entryrates}
\end{figure}
Next, we consider a noisy deletion setting with binary $X$ and arbitrary noise $p_{Y|X}$ in Corollary~\ref{cor:W1conversenoisybinaryub}.
\begin{cor}{\textbf{(Upper Bound for Binary Noisy Deletion)}}\label{cor:W1conversenoisybinaryub}
Consider a binary noisy deletion setting where $X\sim\text{Bernoulli(p)}$ and ${S\sim\text{Bernoulli}(1-\delta)}$. Then, for any binary DMC $p_{Y|X}$, we have
\iftoggle{singlecolumn}{
\begin{align}
C(0,\alpha) &\le \frac{1}{2} I(X^2;Y^K,A^2) \\&= (1-\delta) I(X;Y)-2 (1-\alpha) \delta(1-\delta)p(1-p) I(U;V) \label{eq:W1noisybinaryub}
\end{align}
}{
\begin{align}
C(0,\alpha) &\le \frac{1}{2} I(X^2;Y^K,A^2)\\ & = (1-\delta) I(X;Y)\notag\\&\hspace{0.5em}-2 (1-\alpha) \delta(1-\delta)p(1-p) I(U;V) \label{eq:W1noisybinaryub}
\end{align}
}
where $U$ and $V$ are binary random variables with $U\sim\text{Bernoulli}(\nicefrac{1}{2})$ and ${p_{V|U}=p_{Y|X}}$.
\end{cor}
\begin{proof}
See Appendix~\ref{proof:W1conversenoisybinaryub}.
\end{proof}
Again, for any $p\in(0,1)$ and $\alpha\in[0,1)$, the upper bound given in Corollary~\ref{cor:W1conversenoisybinaryub} is strictly lower than the one provided in Corollary~\ref{cor:converseWarbitrary} which is
\iftoggle{singlecolumn}{
\begin{align}
I(X;Y,S)=(1-\delta)I(X;Y).
\end{align}
}{
\begin{align}
I(X;Y,S)=(1-\delta)I(X;Y).
\end{align}
}
We note that the tighter upper bounds in Corollaries~\ref{cor:W1conversenoiselessub} and \ref{cor:W1conversenoisybinaryub} become generalizations of the upper bound on the noiseless deletion channel mutual information, given in~\cite[Corollary 1]{drmota6283980}. Specifically,~\cite{drmota6283980} considers noiseless deletion channel with \emph{i.i.d.} Bernoulli inputs. Corollary~\ref{cor:W1conversenoiselessub} extends the results to noiseless deletion channels with arbitrary alphabet sizes. Furthermore, Corollary~\ref{cor:W1conversenoisybinaryub} extends the results to binary noisy deletion channels with arbitrary noise.
For the binary noisy case considered in Corollary~\ref{cor:W1conversenoisybinaryub}, the numerical comparison of the lower bound and the two upper bounds on the matching capacity is provided in Figure~\ref{fig:entryrates}. Note that the upper bound provided by Corollary~\ref{cor:W1conversenoisybinaryub} is not tight: it can be shown that a larger value of $n$ yields a tighter upper bound, implying that the gap between the lower and upper bounds in Theorem~\ref{thm:mainresultW1} is smaller than the one shown in Figure~\ref{fig:entryrates}.
\section{Matching Capacity For Identical Repetition}
\label{sec:matchingcapacityWm}
In this section, we present the matching capacity $C(d_{\text{rep}},\alpha)$ for an identical repetition pattern ($W_n=m_n$, $d_{\text{rep}}=\infty$) with seed size $\Lambda_n=\Omega(\log\log m_n)$. We will show that when $\Lambda_n=\Omega(\log\log m_n)$, the repetition pattern, including the deletion locations, can be inferred. Therefore, the partial deletion location information $\mathbf{A}$ becomes obsolete in this case, and our results hold for any $\alpha\ge 0$.
We state the main result of this section in Theorem~\ref{thm:mainresultWm} and prove its achievability by proposing a three-step approach: \emph{i)} noisy replica detection and \emph{ii)} deletion detection using seeds, followed by \emph{iii)} row matching. Then, we prove the converse part. Finally, we focus on the noiseless setting as a special case where we prove that we can devise a new detection algorithm specific to the noiseless model which renders the seeds obsolete.
\begin{thm}{\textbf{(Matching Capacity for Identical Repetition})}\label{thm:mainresultWm}
Consider a probability transition matrix $\mathbf{P}$, a column repetition distribution $p_S$ with an identical repetition pattern and a noise distribution $p_{Y|X}$. Then, for any deletion detection probability $\alpha\ge 0$ and a seed size ${\Lambda_n=\Omega(\log\log m_n)}$, the matching capacity is
\iftoggle{singlecolumn}{
\begin{align}
C(\infty,\alpha) &= \lim\limits_{n\to\infty} \frac{I(X^n;Y^{K_n},S^n)}{n}\label{eq:matchingcap}
\end{align}
}{
\begin{align}
C(\infty,\alpha) &= \lim\limits_{n\to\infty} \frac{I(X^n;Y^{K_n},S^n)}{n}\label{eq:matchingcap}
\end{align}
}
where $X^n$ is a Markov chain with probability transition matrix $\mathbf{P}$ and stationary distribution $\mu$, $S_i\overset{\text{iid}}{\sim} p_S$ and ${Y^{K_n}=Y^{S_1}_1,\dots,Y^{S_n}_n}$ with $K_n=\sum_{j=1}^n S_j$ such that
\iftoggle{singlecolumn}{
\begin{align}
\Pr(Y^{S_i}_i=y^{S_i}|X_i=x_i)&=\begin{cases}
\prod\limits_{j=1}^{S_i} p_{Y|X}(y_j|x_i)&\text{if }S_i>0\\
\mathbbm{1}_{[y^{s_i} = E]} &\text{if }S_i=0
\end{cases},
\end{align}
}{
\begin{align}
\Pr(Y^{S_i}_i=y^{S_i}|X_i=x_i)&=\begin{cases}
\prod\limits_{j=1}^{S_i} p_{Y|X}(y_j|x_i)&\text{if }S_i>0\\
\mathbbm{1}_{[y^{s_i} = E]} &\text{if }S_i=0
\end{cases},
\end{align}
}
for all $i\in[n]$ with $E$ denoting the empty string.
\end{thm}
Because of the independence of $X^n$ and $S^n$, \eqref{eq:matchingcap} can also be represented as
\iftoggle{singlecolumn}{
\begin{align}
C(\infty,\alpha) &= \lim\limits_{n\to\infty} \frac{I(X^n;Y^{K_n}|S^n)}{n}.
\end{align}
}{
\begin{align}
C(\infty,\alpha) &= \lim\limits_{n\to\infty} \frac{I(X^n;Y^{K_n}|S^n)}{n}.
\end{align}
}
Hence, Theorem~\ref{thm:mainresultWm} states that although the repetition pattern $S^n$ is not known a-priori, for a seed size $\Lambda_n=\Omega(\log\log m_n)$, we can achieve a database growth rate as if we knew $S^n$. Since the utility of seeds increases with the seed size $\Lambda_n$, we will focus on $\Lambda_n=\Theta(\log\log m_n)$, which we show is sufficient to achieve the matching capacity.
The rest of this section is on the proof of Theorem~\ref{thm:mainresultWm}. In Section~\ref{subsec:replicadetection}, we discuss our noisy replica detection algorithm which does not utilize the seeds and prove its asymptotic performance. In Section~\ref{subsec:seededdeletiondetection}, we introduce a deletion detection algorithm which uses seeds and derive a seed size sufficient for an asymptotic performance guarantee. Then, in Section~\ref{subsec:matchingschemeWm}, we combine these two algorithms and prove the achievability of Theorem~\ref{thm:mainresultWm} by proposing a typicality-based matching scheme for rows, which is performed once replicas and deletions are detected. In Section~\ref{subsec:converseWm}, we prove the converse part of Theorem~\ref{thm:mainresultWm}. Finally, in Section~\ref{subsec:noiselessWm}, we focus on the special case of no noise on the repeated entries and provide a single repetition (replica and deletion) detection algorithm which does not require any seeds.
Note that when the two databases are independent, Theorem~\ref{thm:mainresultWm} states that the matching capacity is zero, hence our results trivially hold. As a result, throughout this section, we assume that the two databases are not independent. Furthermore, our achievability proof assumes $\alpha=0$ and, since the decoder may simply ignore $\mathbf{A}$, it holds for any $\alpha\ge0$.
\subsection{Noisy Replica Detection}\label{subsec:replicadetection}
We propose to detect the replicas by extracting permutation-invariant features of the columns of $\mathbf{D}^{(2)}$. Our algorithm only considers the columns of $\mathbf{D}^{(2)}$ and as such, can only detect replicas, not deletions. Note that our replica detection algorithm does not require any seeds unlike seeded deletion detection discussed in Section~\ref{subsec:seededdeletiondetection}.
Our proposed replica detection algorithm adopts the \emph{Hamming distance between consecutive columns} of $\mathbf{D}^{(2)}$ as a permutation-invariant feature of the columns. The permutation-invariance allows us to perform replica detection on $\mathbf{D}^{(2)}$ with no a-priori information on $\boldsymbol{\Theta}_n$.
Let $K_n$ denote the number of columns of $\mathbf{D}^{(2)}$ and let $C^{m_n}_j$ denote its $j$\textsuperscript{th} column, $j=1,\dots,K_n$. The replica detection algorithm works as follows: We first compute the Hamming distances $d_H(C^{m_n}_j,C^{m_n}_{j+1})$ between consecutive columns $C^{m_n}_j$ and $C^{m_n}_{j+1}$, for $j\in[K_n-1]$.
For some average Hamming distance threshold $\tau\in(0,1)$ chosen based on $\mathbf{P}$ and $p_{Y|X}$ (See Appendix~\ref{proof:noisyreplicadetection}), the algorithm decides that $C^{m_n}_{j}$ and $C^{m_n}_{j+1}$ are replicas only if $d_H(C^{m_n}_{j},C^{m_n}_{j+1})<m_n \tau$, and correspond to distinct columns of $\mathbf{D}^{(1)}$ otherwise. In the following lemma, we show that this algorithm can infer the replicas with high probability.
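A minimal sketch of this replica detection rule, assuming the columns are given as equal-length lists and the threshold $\tau$ has already been chosen as in Appendix~\ref{proof:noisyreplicadetection}, could read:

```python
def detect_replicas(columns, tau):
    """Decide, for each consecutive column pair of D^(2), whether the pair is a
    noisy-replica pair: replicas iff the Hamming distance is below m_n * tau."""
    flags = []
    for j in range(len(columns) - 1):
        m_n = len(columns[j])
        d = sum(a != b for a, b in zip(columns[j], columns[j + 1]))
        flags.append(d < m_n * tau)   # True: replicas; False: distinct columns
    return flags
```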
\begin{lem}{\textbf{(Noisy Replica Detection)}}\label{lem:noisyreplicadetection}
Let $E_j$ denote the event that the Hamming distance-based algorithm described above fails to infer the correct replica relationship between the columns $C^{m_n}_{j}$ and $C^{m_n}_{j+1}$ of $\mathbf{D}^{(2)}$, $j=1,\dots,K_n-1$. The total probability of replica detection error diminishes as $n\to\infty$, that is
\iftoggle{singlecolumn}{
\begin{align}
\Pr(\bigcup\limits_{j=1}^{K_n-1} E_j)&\to 0\text{ as }n\to\infty. \label{eq:replicadetection}
\end{align}
}{
\begin{align}
\Pr(\bigcup\limits_{j=1}^{K_n-1} E_j)&\to 0\text{ as }n\to\infty. \label{eq:replicadetection}
\end{align}
}
\end{lem}
\begin{proof}
See Appendix~\ref{proof:noisyreplicadetection}.
\end{proof}
\subsection{Deletion Detection Using Seeds}\label{subsec:seededdeletiondetection}
Since the replica detection algorithm discussed in Section~\ref{subsec:replicadetection} only uses $\mathbf{D}^{(2)}$ and thus only the retained columns, it cannot detect column deletions. We next propose a deletion detection algorithm which uses seeds.
Let $\smash{(\mathbf{G}^{(1)},\mathbf{G}^{(2)})}$ be a batch of $\smash{\Lambda_n=\Theta(\log\log m_n)}$ seeds with the identical repetition pattern $S^n$. In other words, let $\smash{(\mathbf{G}^{(1)},\mathbf{G}^{(2)})}$ have the same repetition pattern as $\smash{(\mathbf{D}^{(1)},\mathbf{D}^{(2)})}$. Our deletion detection algorithm works as follows: After finding the replicas as in Section~\ref{subsec:replicadetection}, we discard all but one of the noisy replicas from $\mathbf{G}^{(2)}$, to obtain $\smash{\tilde{\mathbf{G}}^{(2)}}$, whose column size is denoted by $\hat{K}_n$. At this step, we only have deletions.
Next, for each index pair $(i,j)\in[n]\times[\hat{K}_n]$, we compute the Hamming distance $\smash{d_H(C_i^{(1)},C_j^{(2)})}$ between the $i$\textsuperscript{th} column $\smash{C_i^{(1)}}$ of $\smash{\mathbf{G}^{(1)}}$ and the $j$\textsuperscript{th} column $\smash{C_j^{(2)}}$ of $\smash{\tilde{\mathbf{G}}^{(2)}}$. More formally, we compute
\iftoggle{singlecolumn}{
\begin{align}
d_H(C_i^{(1)},C_j^{(2)}) &= \sum_{t=1}^{\Lambda_n} \mathbbm{1}_{\left[{G}^{(1)}_{t,i}\neq \tilde{{G}}^{(2)}_{t,j}\right]}.
\end{align}
}{
\begin{align}
d_H(C_i^{(1)},C_j^{(2)}) &= \sum_{t=1}^{\Lambda_n} \mathbbm{1}_{\left[{G}^{(1)}_{t,i}\neq \tilde{{G}}^{(2)}_{t,j}\right]}.
\end{align}
}
Then, for each index $i\in[n]$, the algorithm decides $\smash{C_i^{(1)}}$ is retained (not deleted) only if there exists a column $\smash{C_j^{(2)}}$ in $\smash{\tilde{\mathbf{G}}^{(2)}}$ with $\smash{d_H(C_i^{(1)},C_j^{(2)})\le \Lambda_n \bar{\tau}}$, for some average Hamming distance threshold $\bar{\tau}\in(0,1)$ chosen based on $\mathbf{P}$ and $p_{Y|X}$. In this case, we assign $\hat{I}_i=0$. Otherwise, the algorithm decides $\smash{C_i^{(1)}}$ is deleted, assigning $\hat{I}_i= 1$. At the end of this procedure, the algorithm outputs an estimate $\hat{I}^n=(\hat{I}_1,\dots,\hat{I}_n)$ of the true deletion pattern $I^n_{\text{del}}=(I_1,\dots,I_n)$. Here, for each $i\in[n]$ we have
\iftoggle{singlecolumn}{
\begin{align}
I_i&\triangleq \mathbbm{1}_{[S_i=0]}\\
\hat{I}_i &\triangleq \mathbbm{1}_{\left[\exists j\in [\hat{K}_n]:\: d_H(C_i^{(1)},C_j^{(2)})\le \Lambda_n \bar{\tau}\right]}
\end{align}
}{
\begin{align}
I_i&\triangleq \mathbbm{1}_{[S_i=0]}\\
\hat{I}_i &\triangleq \mathbbm{1}_{\left[\exists j\in [\hat{K}_n]:\: d_H(C_i^{(1)},C_j^{(2)})\le \Lambda_n \bar{\tau}\right]}
\end{align}
}
Note that such a Hamming distance-based strategy depends on pairs of matching entries in a pair of seed rows of $\smash{\mathbf{G}^{(1)}}$ and $\smash{\tilde{\mathbf{G}}^{(2)}}$ having a higher probability of being equal than non-matching entries. More formally, WLOG, let $S_j\neq 0$ and let $\tilde{X}_{i,j}$ and $\tilde{Y}_{i,j}$ denote the respective $(i,j)$\textsuperscript{th} entries of $\mathbf{G}^{(1)}$ and $\tilde{\mathbf{G}}^{(2)}$. Given a matching pair ${(\tilde{X}_{i,j},\tilde{Y}_{i,j})}$ of entries and any non-matching pair ${(\tilde{X}_{i,l},\tilde{Y}_{i,j})}$, $l\neq j$, we need
\iftoggle{singlecolumn}{
\begin{align}
\Pr(\tilde{Y}_{i,j}\neq \tilde{X}_{i,j})<\Pr(\tilde{Y}_{i,j}\neq \tilde{X}_{i,l})\label{eq:conditiondeletiondetection}
\end{align}
}{
\begin{align}
\Pr(\tilde{Y}_{i,j}\neq \tilde{X}_{i,j})<\Pr(\tilde{Y}_{i,j}\neq \tilde{X}_{i,l})\label{eq:conditiondeletiondetection}
\end{align}
}
which may not be true in general.
For example, suppose we have a binary uniform \emph{i.i.d.} distribution, \emph{i.e.,} ${\mathfrak{X}=\{0,1\}}$ with $\gamma=0$ and ${u_1=\nicefrac{1}{2}}$ (recall Definition~\ref{defn:markovdb}). Further assume that $p_{Y|X}$ follows BSC($q$), \emph{i.e.} ${p_{Y|X}(x|x)=1-q}$, ${x=0,1}$. Note that when ${q>\nicefrac{1}{2}}$, equation~\eqref{eq:conditiondeletiondetection} is not satisfied. However, in this example, we can flip the labels in $Y$ by applying the bijective remapping ${\sigma=\left(\begin{smallmatrix}
0 & 1\\
1 & 0
\end{smallmatrix}\right)}$ to $Y$ in order to satisfy equation~\eqref{eq:conditiondeletiondetection}.
Thus, as long as such a permutation ${\sigma}$ of $\mathfrak{X}$ satisfying equation~\eqref{eq:conditiondeletiondetection} exists, we can use the aforementioned deletion detection algorithm.
Now, suppose that such a mapping $\sigma$ exists. We apply $\sigma$ to the entries of $\smash{\tilde{\mathbf{G}}^{(2)}}$ to construct $\smash{\tilde{\mathbf{G}}_{\sigma}^{(2)}}$. Then, our deletion detection algorithm follows the above steps computing $\smash{d_H(C_i^{(1)},C_j^{(2)}(\sigma))}$ for each index pair $(i,j)\in[n]\times[\hat{K}_n]$ and outputs the deletion pattern estimate $\hat{I}^n(\sigma)=(\hat{I}_1(\sigma),\dots,\hat{I}_n(\sigma))$ where
\iftoggle{singlecolumn}{
\begin{align}
\hat{I}_i(\sigma) &\triangleq \mathbbm{1}_{\left[\exists j\in [\hat{K}_n]:\: d_H(C_i^{(1)},C_j^{(2)}(\sigma))\le \Lambda_n \bar{\tau}\right]}
\end{align}
}{
\begin{align}
\hat{I}_i(\sigma) &\triangleq \mathbbm{1}_{\left[\exists j\in [\hat{K}_n]:\: d_H(C_i^{(1)},C_j^{(2)}(\sigma))\le \Lambda_n \bar{\tau}\right]}
\end{align}
}
and $C_j^{(2)}(\sigma)$ is the $j$\textsuperscript{th} column of
$\smash{\tilde{\mathbf{G}}_{\sigma}^{(2)}}$.
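The seeded deletion detection rule, including the optional remapping $\sigma$, can be sketched as follows (hypothetical Python; \texttt{sigma} is a dictionary representing the bijection, and the columns of $\mathbf{G}^{(1)}$ and $\smash{\tilde{\mathbf{G}}^{(2)}}$ are equal-length lists of $\Lambda_n$ symbols):

```python
def detect_deletions(g1_cols, g2_cols, tau_bar, sigma=None):
    """Deletion detection from seeds: column i of G^(1) is declared retained
    (I_hat_i = 0) iff some column of the de-duplicated seed matrix G~^(2)
    (remapped entrywise by sigma, if given) lies within Hamming distance
    Lambda_n * tau_bar of it; otherwise it is declared deleted (I_hat_i = 1)."""
    if sigma is not None:
        g2_cols = [[sigma[v] for v in col] for col in g2_cols]
    lam = len(g1_cols[0])   # seed size Lambda_n
    i_hat = []
    for c1 in g1_cols:
        matched = any(
            sum(a != b for a, b in zip(c1, c2)) <= lam * tau_bar
            for c2 in g2_cols)
        i_hat.append(0 if matched else 1)
    return i_hat
```

In the binary example above, passing the label-flipping $\sigma$ recovers the same deletion estimate as the noiseless case.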
\begin{sloppypar}
The following lemma states that such a bijective mapping $\sigma$ always exists and that, for a seed size ${\Lambda_n=\Theta(\log n)=\Theta(\log\log m_n)}$, this algorithm can infer the deletion locations with high probability.
\end{sloppypar}
\begin{lem}{\textbf{(Seeded Deletion Detection)}}\label{lem:seededdeletiondetection}
For a repetition pattern ${S}^n$, let ${I_\text{del}=\{j\in[n]|S_j=0\}}$. Then there exists a bijective mapping $\sigma$ such that equation~\eqref{eq:conditiondeletiondetection} holds after the remapping. In addition, for a seed size $\Lambda_n=\Theta(\log n)=\Theta(\log\log m_n)$, using the algorithm above, we have
\iftoggle{singlecolumn}{
\vspace{-0.5em}
\begin{align}
\Pr\left(\hat{I}(\sigma)=I_\text{del}\right)&\to 1\text{ as }n\to\infty.
\end{align}
}{
\vspace{-0.5em}
\begin{align}
\Pr\left(\hat{I}(\sigma)=I_\text{del}\right)&\to 1\text{ as }n\to\infty.
\end{align}
}
\end{lem}
\begin{proof}
See Appendix~\ref{proof:seededdeletiondetection}.
\end{proof}
\subsection{Row Matching Scheme and Achievability}\label{subsec:matchingschemeWm}
We are now ready to prove the achievability of Theorem~\ref{thm:mainresultWm}.
\begin{proofachievableWm}
Let $S^n$ be the underlying column repetition pattern and $K_n\triangleq\sum_{j=1}^n S_j$ be the number of columns in $\mathbf{D}^{(2)}$. The matching scheme we propose follows these steps:
\begin{enumerate}[label=\textbf{ \arabic*)},leftmargin=1.3\parindent]
\item Perform replica detection as in Section~\ref{subsec:replicadetection}. The probability of error in this step is denoted by $\rho_n$.
\item Perform deletion detection using seeds as in Section~\ref{subsec:seededdeletiondetection}. The probability of error is denoted by $\mu_n$. At this step, we have an estimate $\hat{S}^n$ of $S^n$.
\item Using $\hat{S}^n$, place markers between the noisy replica runs of different columns to obtain $\tilde{\mathbf{D}}^{(2)}$. If a run has length 0, \emph{i.e.,} the corresponding column is deleted, introduce a column consisting of the erasure symbol $\ast\notin\mathfrak{X}$. Note that, provided the detection algorithms in Steps~1 and 2 have performed correctly, there are exactly $n$ such runs, where the $j$\textsuperscript{th} run in $\tilde{\mathbf{D}}^{(2)}$ corresponds to the noisy copies of the $j$\textsuperscript{th} column of $\Theta_n\circ\tilde{\mathbf{D}}^{(1)}$ if $S_j\neq 0$, and an erasure column otherwise.
\begin{figure}[t]
\centerline{\includegraphics[width=0.50\textwidth]{Figures/noisygrouping.pdf}}
\caption{An example of the construction of $\tilde{\mathbf{D}}^{(2)}$, as described in Step~3 of the proof of Theorem~\ref{thm:mainresultWm} in Section~\ref{subsec:matchingschemeWm}, illustrated over a pair of rows $X^n$ of $\mathbf{D}^{(1)}$ and $Y^K$ of $\mathbf{D}^{(2)}$. After these steps, in Step~4 we check the joint typicality of the rows $X^n$ of $\mathbf{D}^{(1)}$ and $\tilde{Y}$ of $\tilde{\mathbf{D}}^{(2)}$.}
\label{fig:marker}
\end{figure}
\item Fix $\epsilon>0$. Match the $l$\textsuperscript{th} row $Y^{K_n}_{l}$ of $\tilde{\mathbf{D}}^{(2)}$ with the $i$\textsuperscript{th} row $X^n_i$ of $\tilde{\mathbf{D}}^{(1)}$ if $X_i^n$ is the only row of $\tilde{\mathbf{D}}^{(1)}$ jointly $\epsilon$-typical with $Y^{K_n}_l$ according to $p_{X^n,Y^{K_n},S^n}$,
where $S_i\overset{\text{iid}}{\sim} p_S$ and ${Y^{K_n}=Y^{S_1}_1,\dots,Y^{S_n}_n}$ such that
\iftoggle{singlecolumn}{
\begin{align}
p_{X^n,{Y}^K|S^n}(x^n,y^k|s^n)&=p_{X^n}(x^n) \prod\limits_{i: s_i>0}\left(\prod\limits_{j=1}^{s_i} p_{Y|X}((y^{s_i})_j|x_i)\right)
\prod\limits_{i: s_i=0} \mathbbm{1}_{[y^{s_i} = \ast]}
\end{align}
}{
\begin{align}
p&_{X^n,{Y}^K|S^n}(x^n,y^k|s^n)\notag\\&=p_{X^n}(x^n) \prod\limits_{i: s_i>0}\left(\prod\limits_{j=1}^{s_i} p_{Y|X}((y^{s_i})_j|x_i)\right)\notag\\
&\hspace{4.35em} \prod\limits_{i: s_i=0} \mathbbm{1}_{[y^{s_i} = \ast]}
\end{align}
}
with $y^k=y^{s_1}\dots y^{s_n}$.
Assign $\hat\Theta_n(i)=l$. If there is no such jointly typical row, or if there is more than one, declare an error.
\end{enumerate}
The column discarding and the marker addition as described in Steps~3-4, are illustrated in Figure~\ref{fig:marker}.
Using the union bound and the generalized Asymptotic Equipartition Property (AEP)~\cite[Proposition 3]{shirani8849392}, the total probability of error of this scheme (as in~\eqref{eq:proberror}) can be bounded as follows
\iftoggle{singlecolumn}{
\begin{align}
P_e
&\le 2^{n R} 2^{-n(\bar{I}(X;Y^S,S)-3 \epsilon)}+\epsilon+\rho_n+\mu_n\label{eq:perrunion}
\end{align}
}{
\begin{align}
P_e
&\le 2^{n R} 2^{-n(\bar{I}(X;Y^S,S)-3 \epsilon)}+\epsilon+\rho_n+\mu_n\label{eq:perrunion}
\end{align}
}
where $\bar{I}(X;Y^S,S)$ is the mutual information rate~\cite{graymutualinforate} defined as
\iftoggle{singlecolumn}{
\begin{align}
\bar{I}(X;Y^S,S)&\triangleq\lim\limits_{n\to\infty} \frac{1}{n} I(X^n;Y^{K_n},S^n).
\end{align}
}{
\begin{align}
\bar{I}(X;Y^S,S)&\triangleq\lim\limits_{n\to\infty} \frac{1}{n} I(X^n;Y^{K_n},S^n).
\end{align}
}
Note that since $m_n$ is exponential in $n$, from Lemma~\ref{lem:noisyreplicadetection} we have $\rho_n\to0$. Furthermore, since $\Lambda_n=\Theta(\log n)$, from Lemma~\ref{lem:seededdeletiondetection} we have $\mu_n\to0$ as $n\to\infty$. Thus $P_e\le \epsilon$ as $n\to\infty$ if
\iftoggle{singlecolumn}{
\begin{align}
R&<\lim\limits_{n\to\infty} \frac{1}{n} I(X^n;Y^{K_n},S^n)
\end{align}
}{
\begin{align}
R&<\lim\limits_{n\to\infty} \frac{1}{n} I(X^n;Y^{K_n},S^n)
\end{align}
}
concluding the proof of the achievability part.
\end{proofachievableWm}
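Step~3 of the scheme above, the marker placement, admits a compact description; the following sketch (hypothetical Python, with columns abstracted to arbitrary objects) groups the $K_n$ received columns into $n$ runs according to the estimate $\hat{S}^n$ and inserts erasure columns for detected deletions:

```python
def insert_markers(y_cols, s_hat, erasure="*"):
    """Group the K_n received columns into n runs according to the estimated
    repetition pattern s_hat; a run of length 0 (a detected deletion) is
    replaced by a single erasure column."""
    runs, ptr = [], 0
    for s in s_hat:
        if s == 0:
            runs.append([erasure])            # deleted column -> erasure marker
        else:
            runs.append(y_cols[ptr:ptr + s])  # s noisy replicas of one column
            ptr += s
    return runs
```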
\subsection{Converse}
\label{subsec:converseWm}
In this subsection, we prove that the database growth rate achieved in Theorem~\ref{thm:mainresultWm} is in fact tight using a genie-aided proof where the column repetition pattern $S^n$ is known. Since the rows are \emph{i.i.d.} conditioned on the repetition pattern $S^n$, the seeds $(\mathbf{G}^{(1)},\mathbf{G}^{(2)})$ do not offer any additional information when $S^n$ is given. Thus, the genie-aided proof holds for any seed size $\Lambda_n$.
\begin{proofconverseWm}
While Theorem~\ref{thm:mainresultWm} is stated for $\Lambda_n=\Omega(\log\log m_n)$, in the converse we allow an arbitrary seed size $\Lambda_n$. We prove the converse using the modified Fano's inequality presented in~\cite{shirani8849392}.
Let $R$ be the database growth rate and $P_e$ be the probability that the scheme is unsuccessful for a uniformly-selected row pair. More formally,
\iftoggle{singlecolumn}{
\begin{align}
P_e&\triangleq \Pr\left(\boldsymbol{\Theta}_n(J)\neq\hat{\boldsymbol{\Theta}}_n(J)\right),\hspace{1em} J\sim\text{Unif}([m_n])
\end{align}
}{
\begin{align}
P_e&\triangleq \Pr\left(\boldsymbol{\Theta}_n(J)\neq\hat{\boldsymbol{\Theta}}_n(J)\right),\hspace{1em} J\sim\text{Unif}([m_n])
\end{align}
}
Suppose $P_e\to0$ as $n\to\infty$. Furthermore, let $S^n$ be the repetition pattern and $K_n=\sum_{j=1}^n S_j$. Since $\boldsymbol{\Theta}_n$ is a uniform permutation, from Fano's inequality, we have
\iftoggle{singlecolumn}{
\begin{align}
H(\boldsymbol{\Theta}_n)&\le 1+m_n P_e\log m_n+I(\boldsymbol{\Theta}_n;\mathbf{D}^{(1)},\mathbf{D}^{(2)},\mathbf{G}^{(1)},\mathbf{G}^{(2)},\mathbf{A},S^n)\label{eq:converseWmfirst}
\end{align}
}{
\begin{align}
H(\boldsymbol{\Theta}_n)\le & 1+m_n P_e\log m_n\notag\\
&+I(\boldsymbol{\Theta}_n;\mathbf{D}^{(1)},\mathbf{D}^{(2)},\mathbf{G}^{(1)},\mathbf{G}^{(2)},\mathbf{A},S^n)\label{eq:converseWmfirst}
\end{align}
}
From the independence of $\boldsymbol{\Theta}_n$, $\mathbf{D}^{(2)}$, $S^n$, $(\mathbf{G}^{(1)},\mathbf{G}^{(2)})$ and $\mathbf{A}$, we get
\iftoggle{singlecolumn}{
\begin{align}
I(\boldsymbol{\Theta}_n;\mathbf{D}^{(1)},\mathbf{D}^{(2)},\mathbf{G}^{(1)},\mathbf{G}^{(2)},\mathbf{A},S^n)
&= I(\boldsymbol{\Theta}_n;\mathbf{D}^{(1)}|\mathbf{D}^{(2)},\mathbf{G}^{(1)},\mathbf{G}^{(2)},\mathbf{A},S^n)\\
&\le I(\boldsymbol{\Theta}_n,\mathbf{D}^{(2)},\mathbf{G}^{(1)},\mathbf{G}^{(2)},\mathbf{A},S^n;\mathbf{D}^{(1)})\\
&\le I(\boldsymbol{\Theta}_n,\mathbf{D}^{(2)},S^n;\mathbf{D}^{(1)})\label{eq:converseassumeS}\\
&= I(\boldsymbol{\Theta}_n,\mathbf{D}^{(2)};\mathbf{D}^{(1)}|S^n)\\
&=\sum\limits_{i=1}^{m_n} I(X_i^n;Y_{\Theta_n(i)}^{K_n}|S_{\Theta_n(i)}^n) \label{eq:converseWmindeprows}\\
&= m_n I(X^n;Y^{K_n}|S^n)\label{eq:converseWmidrows}\\
&= m_n I(X^n;Y^{K_n},S^n)\label{eq:converseWmidrows2}
\end{align}
}{
\begin{align}
I(\boldsymbol{\Theta}_n;\mathbf{D}&^{(1)},\mathbf{D}^{(2)},\mathbf{G}^{(1)},\mathbf{G}^{(2)},\mathbf{A},S^n)
\notag\\&= I(\boldsymbol{\Theta}_n;\mathbf{D}^{(1)}|\mathbf{D}^{(2)},\mathbf{G}^{(1)},\mathbf{G}^{(2)},\mathbf{A},S^n)\\
&\le I(\boldsymbol{\Theta}_n,\mathbf{D}^{(2)},\mathbf{G}^{(1)},\mathbf{G}^{(2)},\mathbf{A},S^n;\mathbf{D}^{(1)})\\
&\le I(\boldsymbol{\Theta}_n,\mathbf{D}^{(2)},S^n;\mathbf{D}^{(1)})\label{eq:converseassumeS}\\
&= I(\boldsymbol{\Theta}_n,\mathbf{D}^{(2)};\mathbf{D}^{(1)}|S^n)\\
&=\sum\limits_{i=1}^{m_n} I(X_i^n;Y_{\Theta_n(i)}^{K_n}|S_{\Theta_n(i)}^n) \label{eq:converseWmindeprows}\\
&= m_n I(X^n;Y^{K_n}|S^n)\label{eq:converseWmidrows}\\
&= m_n I(X^n;Y^{K_n},S^n)\label{eq:converseWmidrows2}
\end{align}
}
where \eqref{eq:converseassumeS} follows from the fact that given the repetition pattern $S^n$, the seeds $(\mathbf{G}^{(1)},\mathbf{G}^{(2)})$ and $\mathbf{A}$ do not offer any additional information on $\boldsymbol{\Theta}_n$. Equation~\eqref{eq:converseWmindeprows} follows from the conditional independence of the non-matching rows given $S^n$. Equation~\eqref{eq:converseWmidrows} follows from the fact that the matching rows are identically distributed conditioned on the repetition pattern ${S}^n$. Finally, \eqref{eq:converseWmidrows2} follows from the independence of $X^n$ and $S^n$.
Note that from Stirling's approximation~\cite[Chapter 3.2]{cormen2022introduction} and the uniformity of $\boldsymbol{\Theta}_n$, we get
\iftoggle{singlecolumn}{
\begin{align}
H(\boldsymbol{\Theta}_n)&= \log m_n!\\
&= m_n\log m_n - m_n \log e + O(\log m_n)
\end{align}
\begin{align}
\lim\limits_{n\to\infty}\frac{1}{m_n n}H(\boldsymbol{\Theta_n})&=\lim\limits_{n\to\infty}\frac{1}{m_n n} \left[m_n\log m_n - m_n \log e + O(\log m_n)\right]\\
&= \lim\limits_{n\to\infty}\frac{1}{n}\log m_n\\
&= R \label{eq:converseWmlast}
\end{align}
}{
\begin{align}
H(\boldsymbol{\Theta}_n)&= \log m_n!\\
&= m_n\log m_n - m_n \log e + O(\log m_n)
\end{align}
\begin{align}
\lim\limits_{n\to\infty}\frac{1}{m_n n}H(\boldsymbol{\Theta_n})&=\lim\limits_{n\to\infty}\frac{1}{m_n n} [m_n\log m_n \notag\\&\hspace{2.5em} - m_n \log e + O(\log m_n)]\\
&= \lim\limits_{n\to\infty}\frac{1}{n}\log m_n\\
&= R \label{eq:converseWmlast}
\end{align}
}
Finally, from \eqref{eq:converseWmfirst}-\eqref{eq:converseWmlast} we obtain
\iftoggle{singlecolumn}{
\begin{align}
R&= \lim\limits_{n\to\infty}\frac{1}{m_n n}H(\boldsymbol{\Theta_n})\\
&\le \lim\limits_{n\to\infty}\left[ \frac{1}{m_n n}+P_e R+\frac{1}{n} I(X^n;Y^{K_n},S^n)\right]\\
&= \lim\limits_{n\to\infty}\frac{I(X^n;Y^{K_n},S^n)}{n} \label{eqn:converselast}
\end{align}
}{
\begin{align}
R&= \lim\limits_{n\to\infty}\frac{1}{m_n n}H(\boldsymbol{\Theta_n})\\
&\le \lim\limits_{n\to\infty}\left[ \frac{1}{m_n n}+P_e R+\frac{1}{n} I(X^n;Y^{K_n},S^n)\right]\\
&= \lim\limits_{n\to\infty}\frac{I(X^n;Y^{K_n},S^n)}{n} \label{eqn:converselast}
\end{align}
}
where \eqref{eqn:converselast} follows from the fact that $P_e\to 0$ as $n\to\infty$.
\end{proofconverseWm}
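The Stirling step in the converse, $\log m_n! = m_n\log m_n - m_n \log e + O(\log m_n)$, can be checked numerically; the sketch below (hypothetical helper using \texttt{math.lgamma}, base-2 logarithms) confirms that the normalized gap vanishes as $m$ grows:

```python
import math

def stirling_gap(m):
    """Relative gap between log2(m!) and the Stirling approximation
    m*log2(m) - m*log2(e); the gap is Theta(log m) / Theta(m log m)."""
    exact = math.lgamma(m + 1) / math.log(2)            # log2(m!)
    approx = m * math.log2(m) - m * math.log2(math.e)
    return (exact - approx) / exact
```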
\subsection{Noiseless Setting}
\label{subsec:noiselessWm}
Lemmas~\ref{lem:noisyreplicadetection} and \ref{lem:seededdeletiondetection} state that given a seed size $\Lambda_n$ that is double logarithmic in the number of rows $m_n$, the repetition pattern can be inferred through the aforementioned replica and deletion detection algorithms for any noise distribution $p_{Y|X}$. Thus, the results of Section~\ref{subsec:replicadetection} through Section~\ref{subsec:matchingschemeWm} trivially apply to the noiseless setting where
\iftoggle{singlecolumn}{
\begin{align}
p_{Y|X}(y|x) &=\mathbbm{1}_{[y=x]}\:\forall (x,y)\in\mathfrak{X}^2.
\end{align}
}{
\begin{align}
p_{Y|X}(y|x) &=\mathbbm{1}_{[y=x]}\:\forall (x,y)\in\mathfrak{X}^2.
\end{align}
}
We note that when there is no noise, the capacity expression of Theorem~\ref{thm:mainresultWm}, given in \eqref{eq:matchingcap}, can be further simplified as
\iftoggle{singlecolumn}{
\begin{align}
C(\infty,0) &= (1-\delta)^2 \sum\limits_{r=0}^\infty \delta^r H(X_0|X_{-r-1}).
\end{align}
}{
\begin{align}
C(\infty,0) &= (1-\delta)^2 \sum\limits_{r=0}^\infty \delta^r H(X_0|X_{-r-1}).
\end{align}
}
In this subsection, we show that in the noiseless setting, seeds can be made obsolete by the use of a novel detection algorithm.
In other words, in the noiseless setting, we show that Theorem~\ref{thm:mainresultWm} can be extended to any seed size $\Lambda_n$.
\begin{thm}{\textbf{(Noiseless Matching Capacity for Identical Repetition)}}\label{thm:noiselesscapacityWm}
Consider a probability transition matrix $\mathbf{P}$ and a repetition probability distribution $p_S$. Suppose there is no noise, \emph{i.e.,}
\iftoggle{singlecolumn}{
\begin{align}
p_{Y|X}(y|x) &=\mathbbm{1}_{[y=x]}\:\forall (x,y)\in\mathfrak{X}^2.
\end{align}
}{
\begin{align}
p_{Y|X}(y|x) &=\mathbbm{1}_{[y=x]}\:\forall (x,y)\in\mathfrak{X}^2.
\end{align}
}
Then, the matching capacity is
\iftoggle{singlecolumn}{
\begin{align}
C(\infty,\alpha) &= (1-\delta)^2 \sum\limits_{r=0}^\infty \delta^r H(X_0|X_{-r-1})\label{eq:noiselesscapacity}
\end{align}
}{
\begin{align}
C(\infty,\alpha) &= (1-\delta)^2 \sum\limits_{r=0}^\infty \delta^r H(X_0|X_{-r-1})\label{eq:noiselesscapacity}
\end{align}
}
for any seed size $\Lambda_n$ and deletion detection probability $\alpha\ge0$. Here $\delta\triangleq p_S(0)$ is the deletion probability and $H(X_0|X_{-r-1})$ is the conditional entropy associated with the $(r+1)$-step probability transition matrix
\iftoggle{singlecolumn}{
\begin{align}
\mathbf{P}^{r+1}&=\gamma^{r+1} \mathbf{I}+(1- \gamma^{r+1}) \mathbf{U} \label{eq:Ppower}
\end{align}
}{
\begin{align}
\mathbf{P}^{r+1}&=\gamma^{r+1} \mathbf{I}+(1- \gamma^{r+1}) \mathbf{U} \label{eq:Ppower}
\end{align}
}
The capacity can further be simplified as
\iftoggle{singlecolumn}{
\begin{align}
C(\infty,\alpha) &= \frac{(1-\delta)(1-\gamma)}{(1-\gamma\delta)} [H(\pi)+\sum\limits_{i\in\mathfrak{X}} u_i^2\log u_i]- (1-\delta)^2 \sum\limits_{r=0}^\infty \delta^r \sum\limits_{i\in \mathfrak{X}} u_i \eta_{r,i} \log \eta_{r,i}\label{eq:thm2eval}
\end{align}
}{
\begin{align}
C(\infty,\alpha) &= \frac{(1-\delta)(1-\gamma)}{(1-\gamma\delta)} [H(\pi)+\sum\limits_{i\in\mathfrak{X}} u_i^2\log u_i]\notag\\&\hspace{1.75em}- (1-\delta)^2 \sum\limits_{r=0}^\infty \delta^r \sum\limits_{i\in \mathfrak{X}} u_i \eta_{r,i} \log \eta_{r,i}\label{eq:thm2eval}
\end{align}
}
where
\iftoggle{singlecolumn}{
\begin{align}
\eta_{r,i}\triangleq (1-u_i) \gamma^{r+1}+ u_i.
\end{align}
}{
\begin{align}
\eta_{r,i}\triangleq (1-u_i) \gamma^{r+1}+ u_i.
\end{align}
}
\end{thm}
Observe that the RHS of~\eqref{eq:noiselesscapacity} is the mutual information rate for an erasure channel with erasure probability $\delta$ with first-order Markov $(\mathbf{P})$ inputs, as stated in~\cite[Corollary II.2]{li2014input}. Thus, Theorem~\ref{thm:noiselesscapacityWm} states that we can achieve the erasure bound, which assumes a-priori knowledge of the column repetition pattern.
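The closed form \eqref{eq:thm2eval} is straightforward to evaluate by truncating the geometric series in $r$; the sketch below (hypothetical helper; it takes the stationary distribution $u$, from which $H(\pi)$ is computed, along with $\gamma$ and $\delta$) does so:

```python
import math

def noiseless_capacity(u, gamma, delta, tol=1e-12):
    """Evaluate eq. (thm2eval): first term uses H(pi) with pi = u; the series
    over r is truncated once delta**r drops below tol (requires delta < 1)."""
    h_pi = -sum(p * math.log2(p) for p in u if p > 0)
    first = (1 - delta) * (1 - gamma) / (1 - gamma * delta) * (
        h_pi + sum(p * p * math.log2(p) for p in u if p > 0))
    series, r = 0.0, 0
    while delta ** r > tol:
        for ui in u:
            eta = (1 - ui) * gamma ** (r + 1) + ui   # eta_{r,i}
            series += delta ** r * ui * eta * math.log2(eta)
        r += 1
    return first - (1 - delta) ** 2 * series
```

As a sanity check, for $\gamma=0$ (\emph{i.i.d.} rows) the expression collapses to $(1-\delta)H(X)$, the familiar erasure-channel value.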
The proof of Theorem~\ref{thm:noiselesscapacityWm} hinges on the observation that in the noiseless setting deletion and replica detection can be performed without seeds. Inspired by the idea of extracting permutation-invariant features as done in Section~\ref{subsec:replicadetection}, our noiseless repetition detection algorithm uses the histogram (and equivalently the type) of each column of $\mathbf{D}^{(1)}$ and $\mathbf{D}^{(2)}$ as the permutation-invariant feature. Our repetition detection algorithm works as follows: First, for tractability, we \say{collapse} the Markov chain into a binary-valued one. We pick a symbol $x$ from the alphabet $\mathfrak{X}$, WLOG $x=1$, and define the \emph{collapsed} databases $\tilde{\mathbf{D}}^{(1)}$ and $\tilde{\mathbf{D}}^{(2)}$ as follows:
\iftoggle{singlecolumn}{
\begin{align}
\tilde{\mathbf{D}}^{(r)}_{i,j} &= \begin{cases}
1 & \text{if } {\mathbf{D}}^{(r)}_{i,j} = 1\\
2 & \text{if } {\mathbf{D}}^{(r)}_{i,j} \neq 1
\end{cases}, \: \forall (i,j),\: r=1,2
\end{align}
}{
\begin{align}
\tilde{\mathbf{D}}^{(r)}_{i,j} &= \begin{cases}
1 & \text{if } {\mathbf{D}}^{(r)}_{i,j} = 1\\
2 & \text{if } {\mathbf{D}}^{(r)}_{i,j} \neq 1
\end{cases}, \: \forall (i,j),\: r=1,2
\end{align}
}
Next, we construct the collapsed histogram vectors $\tilde{{H}}^{(1),n}$ and $\tilde{{H}}^{(2),{K_n}}$ as
\iftoggle{singlecolumn}{
\begin{align}
\tilde{H}_j^{(r)}&=\sum\limits_{i=1}^{m_n} \mathbbm{1}_{\left[\tilde{D}^{(r)}_{i,j}=2 \right]},\quad
\begin{cases}
\forall j\in [n],&\text{if } r=1 \\
\forall j\in [{K_n}] & \text{if } r=2
\end{cases}\label{eq:histogramdefn}
\end{align}
}{
\begin{align}
\tilde{H}_j^{(r)}&=\sum\limits_{i=1}^{m_n} \mathbbm{1}_{\left[\tilde{D}^{(r)}_{i,j}=2 \right]},\quad
\begin{cases}
\forall j\in [n],&\text{if } r=1 \\
\forall j\in [{K_n}] & \text{if } r=2
\end{cases}\label{eq:histogramdefn}
\end{align}
}
Then, the algorithm declares the $j$\textsuperscript{th} column deleted if $\tilde{H}^{(1)}_j$ is absent in $\tilde{{H}}^{(2),{K_n}}$ and declares the $j$\textsuperscript{th} column replicated $s$ times if $\tilde{H}^{(1)}_j$ is present $s\ge 1$ times in $\tilde{{H}}^{(2),{K_n}}$.
Note that as long as column histograms $\tilde{H}^{(1)}_j$ of the collapsed database $\tilde{\mathbf{D}}^{(1)}$ are unique, this detection process is error-free.
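The detection procedure above is simple enough to sketch in a few lines of NumPy; the function names are ours and purely illustrative, and the sketch assumes, as in the noiseless setting, that replicas are exact copies, so that the collapsed column histograms are invariant to the row permutation:

```python
import numpy as np

def collapse(D, x=1):
    # Collapse onto a binary alphabet: entries equal to x map to 1,
    # all other symbols map to 2.
    return np.where(D == x, 1, 2)

def column_histograms(D):
    # Permutation-invariant feature of each column: the number of
    # collapsed entries equal to 2 (the histogram H-tilde of the text).
    return (collapse(D) == 2).sum(axis=0)

def detect_repetitions(D1, D2):
    # Declare column j of D1 deleted (S_j = 0) if its histogram is
    # absent among the columns of D2, and replicated s times (S_j = s)
    # if it appears s times; error-free whenever the histograms of
    # D1's columns are all distinct.
    h1 = column_histograms(D1)
    h2 = column_histograms(D2).tolist()
    return np.array([h2.count(h) for h in h1])
```
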
The following lemma provides conditions for the asymptotic uniqueness of column histograms ${\tilde{H}_j^{(1)}}$, ${j\in[n]}$.
\begin{lem}{\textbf{(Asymptotic Uniqueness of the Column Histograms)}}\label{lem:histogram}
Let $\tilde{H}^{(1)}_j$ denote the histogram of the $j$\textsuperscript{th} column of $\tilde{\mathbf{D}}^{(1)}$, as in~\eqref{eq:histogramdefn}.
Then, for $m_n=\omega(n^4)$, we have
\iftoggle{singlecolumn}{
\begin{align}
\Pr\left(\exists i,j\in [n],\: i\neq j,\tilde{H}^{(1)}_i=\tilde{H}^{(1)}_j\right)\to 0 \text{ as }n\to \infty.
\end{align}
}{
\begin{align}
\Pr\left(\exists i,j\in [n],\: i\neq j,\tilde{H}^{(1)}_i=\tilde{H}^{(1)}_j\right)\to 0 \text{ as }n\to \infty.
\end{align}
}
\end{lem}
\begin{proof}
See Appendix~\ref{proof:histogram}.
\end{proof}
When the databases are not collapsed, the order relation given in Lemma~\ref{lem:histogram} can be tightened. See Section~\ref{subsec:zerorateWm} for more details.
Note that by Definition~\ref{defn:dbgrowthrate}, the row size $m_n$ is exponential in the column size $n$ and the order relation of Lemma~\ref{lem:histogram} is automatically satisfied.
Next, we present the proof of the achievability part of Theorem~\ref{thm:noiselesscapacityWm}.
\begin{proofachievablenoiseless}
Let $S^n$ be the underlying repetition pattern and ${K_n}\triangleq\sum_{j=1}^n S_j$ be the number of columns in $\mathbf{D}^{(2)}$. Our matching scheme consists of the following steps:
\begin{enumerate}[label=\textbf{\arabic*)},leftmargin=1.3\parindent]
\item Construct the collapsed histogram vectors $\tilde{{H}}^{(1),n}$ and $\tilde{{H}}^{(2),{K_n}}$ as in~\eqref{eq:histogramdefn}.
\item Check the uniqueness of the entries $\tilde{H}^{(1)}_j$, $j\in[n]$, of $\tilde{{H}}^{(1),n}$. If at least two of them are identical, declare a \emph{detection error}, whose probability is denoted by $\mu_n$. Otherwise, proceed with Step~3.
\item If $\tilde{H}^{(1)}_j$ is absent in $\tilde{{H}}^{(2),{K_n}}$, declare it deleted, assigning $\hat{S}_j=0$. Note that, conditioned on the uniqueness of the column histograms $\tilde{H}^{(1)}_j$, $\forall j\in[n]$, this step is error-free.
\item If $\tilde{H}^{(1)}_j$ is present $s\ge 1$ times in $\tilde{{H}}^{(2),{K_n}}$, assign $\hat{S}_j=s$. Again, if there is no detection error in Step~2, this step is error-free. Note that at the end of this step, provided there are no detection errors, we recover $S^n$, \emph{i.e.}, $\hat{{S}}^n={S}^n$.
\item Based on $\hat{{S}}^n$, $\mathbf{D}^{(1)}$ and $\mathbf{D}^{(2)}$, construct $\bar{\mathbf{D}}^{(2)}$ as follows:
\begin{itemize}
\item If $\hat{S}_j = 0$, the $j$\textsuperscript{th} column of $\bar{\mathbf{D}}^{(2)}$ is a column consisting of erasure symbol $\ast\notin\mathfrak{X}$.
\item If $\hat{S}_j \ge 1$, the $j$\textsuperscript{th} column of $\bar{\mathbf{D}}^{(2)}$ is the first of the corresponding $\hat{S}_j$ replica columns of $\mathbf{D}^{(2)}$.
\end{itemize}
Note that after the removal of the additional replicas and the introduction of the erasure symbols, $\bar{\mathbf{D}}^{(2)}$ has $n$ columns.
\item Fix $\epsilon>0$. Let $q_{\bar{Y}|X}$ be the probability transition matrix of an erasure channel with erasure probability $\delta$; that is, for all $(x,\bar{y})\in\mathfrak{X}\times(\mathfrak{X}\cup \{\ast\})$,
\iftoggle{singlecolumn}{
\begin{align}
q_{\bar{Y}|X}(\bar{y}|x) &= \begin{cases}
1-\delta &\text{if }\bar{y}=x\\
\delta &\text{if }\bar{y}=\ast
\end{cases}. \label{eq:erasure}
\end{align}
}{
\begin{align}
q_{\bar{Y}|X}(\bar{y}|x) &= \begin{cases}
1-\delta &\text{if }\bar{y}=x\\
\delta &\text{if }\bar{y}=\ast
\end{cases}. \label{eq:erasure}
\end{align}
}
We consider the input to the memoryless erasure channel as the $i$\textsuperscript{th} row $X^n_i$ of $\mathbf{D}^{(1)}$. The output $\bar{Y}^n$ is the matching row of $\bar{\mathbf{D}}^{(2)}$. For our row matching algorithm, we match the $l$\textsuperscript{th} row $\bar{{Y}}^n_{l}$ of $\bar{\mathbf{D}}^{(2)}$ with the $i$\textsuperscript{th} row $X^n_i$ of $\mathbf{D}^{(1)}$ if $X^n_i$ is the only row of $\mathbf{D}^{(1)}$ jointly $\epsilon$-typical~\cite[Chapter 3]{cover2006elements} with $\bar{{Y}}^n_l$ with respect to $p_{X^n,\bar{Y}^n}$, where
\iftoggle{singlecolumn}{
\begin{align}
p_{X^n,\bar{Y}^n}(x^n,\bar{y}^n) &= p_{X^n}(x^n) \prod\limits_{j=1}^n q_{\bar{Y}|X}(\bar{y}_j|x_j)\label{eq:markovinput}
\end{align}
}{
\begin{align}
p_{X^n,\bar{Y}^n}(x^n,\bar{y}^n) &= p_{X^n}(x^n) \prod\limits_{j=1}^n q_{\bar{Y}|X}(\bar{y}_j|x_j)\label{eq:markovinput}
\end{align}
}
where $X^n$ denotes the Markov chain of length $n$ with probability transition matrix $\mathbf{P}$. This results in $\hat{\Theta}_n(i)=l$.
Otherwise, declare \emph{collision error}.
\end{enumerate}
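Step~5 of the scheme (stripping extra replicas and inserting erasure columns, so that each row of $\bar{\mathbf{D}}^{(2)}$ is the matching row of $\mathbf{D}^{(1)}$ seen through a memoryless erasure channel) can be sketched as follows; the integer marker standing in for $\ast$ is an illustrative choice:

```python
import numpy as np

ERASURE = -1  # illustrative stand-in for the erasure symbol outside the alphabet

def strip_replicas(D2, S_hat):
    # Given the estimated repetition pattern S_hat, keep one replica
    # per surviving column of D2 and replace each deleted column with
    # erasures, so every row of the result looks like the matching row
    # of D1 passed through a memoryless erasure channel.
    m, n = D2.shape[0], len(S_hat)
    Dbar2 = np.full((m, n), ERASURE, dtype=int)
    pos = 0
    for j, s in enumerate(S_hat):
        if s >= 1:
            Dbar2[:, j] = D2[:, pos]  # first of the s replicas
        pos += s
    return Dbar2
```
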
Similar to~\eqref{eq:perrunion}, using the union bound and the generalized AEP, the total probability of error of this scheme can be bounded as follows
\iftoggle{singlecolumn}{
\begin{align}
P_e &\le \mu_n + \epsilon + 2^{n(R-\bar{I}(X;\bar{Y})+3\epsilon)}
\end{align}
}{
\begin{align}
P_e &\le \mu_n + \epsilon + 2^{n(R-\bar{I}(X;\bar{Y})+3\epsilon)}
\end{align}
}
Since $m_n$ is exponential in $n$, by Lemma~\ref{lem:histogram}, ${\mu_n\to0}$ as ${n\to\infty}$. Thus
\iftoggle{singlecolumn}{
\begin{align}
P_e&< 3 \epsilon \text{ as }n\to\infty
\end{align}
}{
\begin{align}
P_e&< 3 \epsilon \text{ as }n\to\infty
\end{align}
}
if
$R<\bar{I}(X;\bar{Y})-3\epsilon$. Thus, we can argue that any database growth rate $R$ satisfying
\iftoggle{singlecolumn}{
\begin{align}
R&<\bar{I}(X;\bar{Y})\label{eq:achievable}
\end{align}
}{
\begin{align}
R&<\bar{I}(X;\bar{Y})\label{eq:achievable}
\end{align}
}
is achievable, by taking $\epsilon$ small enough. From~\cite[Corollary II.2]{li2014input} we have
\iftoggle{singlecolumn}{
\begin{align}
\bar{I}(X;\bar{Y})&=(1-\delta)^2 \sum\limits_{r=0}^\infty \delta^r H(X_0|X_{-r-1}) \label{eq:MIrate}
\end{align}
}{
\begin{align}
\bar{I}(X;\bar{Y})&=(1-\delta)^2 \sum\limits_{r=0}^\infty \delta^r H(X_0|X_{-r-1}) \label{eq:MIrate}
\end{align}
}
where $H(X_0|X_{-r-1})$ is the entropy rate associated with the probability transition matrix $\mathbf{P}^{r+1}$.
Now, we argue that \eqref{eq:Ppower} can be proven via induction on $r$ by taking \eqref{eq:markovtransitionmatrix} as the base case and observing that $\mathbf{U}^2 = \mathbf{U}$. Finally, plugging $\pi$ and $\mathbf{P}^{r+1}$ directly into~\cite[Theorem 4.2.4]{cover2006elements} yields \eqref{eq:thm2eval}, concluding the achievability part of the proof.
\end{proofachievablenoiseless}
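As a numerical sanity check, the series in~\eqref{eq:MIrate} can be evaluated by truncation. The sketch below (ours, with illustrative names) takes a stationary chain with transition matrix $\mathbf{P}$ and stationary distribution $\pi$; for an i.i.d. chain it collapses to $(1-\delta)H(X)$, as expected:

```python
import numpy as np

def erasure_mi_rate(P, pi, delta, terms=200):
    # Truncated evaluation of
    #   (1 - delta)^2 * sum_{r>=0} delta^r * H(X_0 | X_{-r-1})
    # where H(X_0 | X_{-r-1}) is the conditional entropy (in bits)
    # computed from the (r+1)-step transition matrix P^(r+1).
    def cond_entropy(Q):
        Qs = np.where(Q > 0, Q, 1.0)  # zero entries contribute 0
        return -np.sum(pi[:, None] * Q * np.log2(Qs))
    total, Qr = 0.0, np.array(P, dtype=float)
    for r in range(terms):
        total += delta**r * cond_entropy(Qr)
        Qr = Qr @ P
    return (1 - delta) ** 2 * total
```
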
Next, we move on to prove the converse part of Theorem~\ref{thm:noiselesscapacityWm}.
\begin{proofconversenoiseless}
Since the converse part of Theorem~\ref{thm:mainresultWm} holds for any seed size $\Lambda_n$, in the noiseless setting, we trivially have
\iftoggle{singlecolumn}{
\begin{align}
C(\infty,\alpha) &\le \lim\limits_{n\to\infty} \frac{I(X^n;Y^{K_n},S^n)}{n}.
\end{align}
}{
\begin{align}
C(\infty,\alpha) &\le \lim\limits_{n\to\infty} \frac{I(X^n;Y^{K_n},S^n)}{n}.
\end{align}
}
Next, note that there is a bijective mapping between $(Y^{K_n},S^n)$ and $(\bar{{Y}}^n,{S}^n)$. Therefore, we have
\iftoggle{singlecolumn}{
\begin{align}
I({X}^n;{Y}^{K_n},{S}^n) &= I({X}^n;\bar{{Y}}^n,{S}^n)\label{eq:noadditionalinfo1}\\
&= I({X}^n;\bar{{Y}}^n) + I({X}^n;{S}^n|\bar{{Y}}^n)\\
&= I({X}^n;\bar{{Y}}^n) \label{eq:noadditionalinfo2}
\end{align}
}{
\begin{align}
I({X}^n;{Y}^{K_n},{S}^n) &= I({X}^n;\bar{{Y}}^n,{S}^n)\label{eq:noadditionalinfo1}\\
&= I({X}^n;\bar{{Y}}^n) + I({X}^n;{S}^n|\bar{{Y}}^n)\\
&= I({X}^n;\bar{{Y}}^n) \label{eq:noadditionalinfo2}
\end{align}
}
where \eqref{eq:noadditionalinfo2} follows from the independence of ${S}^n$ and ${X}^n$ conditioned on $\bar{{Y}}^n$. This is because $\bar{{Y}}^n$ is stripped of all extra replicas, so from $(X^n,\bar{{Y}}^n)$ we can only infer the zeros of $S^n$, which are already known through $\bar{{Y}}^n$ via the erasure symbols. Thus, we have
\iftoggle{singlecolumn}{
\begin{align}
C(\infty,\alpha) &\le \bar{I}(X;\bar{Y})
\end{align}
}{
\begin{align}
C(\infty,\alpha) &\le \bar{I}(X;\bar{Y})
\end{align}
}
where $\bar{I}(X;\bar{Y})$ is defined in~\eqref{eq:MIrate}, concluding the proof of the converse part.
\end{proofconversenoiseless}
\section{Extensions}
\label{sec:discussion}
In this section, we discuss extensions to the system model and results. Specifically, in Section~\ref{subsec:adversarialrepetition}, we investigate the adversarial repetition case instead of random repetitions, where the repetitions are not due to random sampling of the time-indexed data, but due to a constrained privacy mechanism. In Section~\ref{subsec:seedlessWm}, we consider the identical repetition model with no seeds. In Section~\ref{subsec:zerorateWm}, we discuss the zero-rate regime, where the row size $m_n$ is not necessarily exponential in the column size $n$, and derive conditions necessary for the detection algorithms discussed in Section~\ref{sec:matchingcapacityWm} to work.
\subsection{What If Repetitions Are Intentional?}
\label{subsec:adversarialrepetition}
So far, as stated in Definition~\ref{defn:repetitionpattern}, we have assumed that the repetitions occur randomly according to a discrete probability distribution $p_S$ with finite integer support. In this subsection, we study the case of an adversary who controls the repetition pattern (under some constraints) to make matching as difficult as possible. This could arise, for example, when a privacy-preserving mechanism denies the sampling of geolocation data at times when that data contains the most information about the users, such as their home addresses. We consider the adversarial setting under the identical repetition assumption.
We stress that in the identical repetition setting, \emph{i.e.,} $W_n=m_n$, the replicas either have no effect on the matching capacity, as in the noiseless case (Theorem~\ref{thm:noiselesscapacityWm}), or offer additional information acting as a repetition code of random length, in turn increasing the matching capacity (Theorem~\ref{thm:mainresultWm}). Hence, an adversary trying to hinder the matching process is expected to avoid replicating entries. Therefore, in the adversarial repetition setting, it is natural to focus on the deletion-only case. We assume an adversary with a $\delta$-\emph{deletion budget}, who can delete up to a $\delta$ fraction of the columns to maximize the mismatch probability. For tractability, we focus on the noiseless case with \emph{i.i.d.} database entries. More formally, we assume $X_i\overset{\text{iid}}{\sim}p_X$ and a noiseless observation channel, \emph{i.e.,}
\iftoggle{singlecolumn}{
\begin{align}
p_{Y|X}(y|x) &=\mathbbm{1}_{[y=x]},\hspace{1em}\forall (x,y)\in\mathfrak{X}^2
\end{align}
}{
\begin{align}
p_{Y|X}(y|x) &=\mathbbm{1}_{[y=x]},\hspace{1em}\forall (x,y)\in\mathfrak{X}^2
\end{align}
}
Under these assumptions, we define the adversarial matching capacity as follows:
\begin{defn}{\textbf{(Adversarial Matching Capacity)}}\label{defn:matchingcapacityadversarial}
The \emph{adversarial matching capacity} $C^{\text{adv}}(\delta)$ is the supremum of the set of all achievable rates corresponding to a database distribution $p_X$ and an adversary with a $\delta$-\emph{deletion budget} when there is identical repetition. More formally,
\iftoggle{singlecolumn}{
\begin{align}
C^{\text{adv}}(\delta) &\triangleq \sup \{R: \forall I_{\text{del}}=(i_1,\dots,i_{n\delta})\subseteq [n], \Pr(\hat{\Theta}_n(J)\neq \Theta_n(J))\overset{n\to\infty}{\longrightarrow} 0, J\sim\text{Uniform}([m_n])\}
\end{align}
}{
\begin{align}
C^{\text{adv}}(\delta) &\triangleq \sup \{R: \forall I_{\text{del}}=(i_1,\dots,i_{n\delta})\subseteq [n],\notag\\& \hspace{3em}\Pr(\hat{\Theta}_n(J)\neq \Theta_n(J))\overset{n\to\infty}{\longrightarrow} 0,\notag\\ &\hspace{3em}J\sim\text{Uniform}([m_n])\}
\end{align}
}
where the dependence of the matching scheme $\hat{\Theta}_n$ on the database growth rate $R$ and the column deletion index set $I_{\text{del}}$ is omitted for brevity.
\end{defn}
Note that in this setting, although the deletions are not random, the matching error is still a random variable due to the random natures of $\mathbf{D}^{(1)}$ and $\boldsymbol{\Theta}_n$.
In the proof of Theorem~\ref{thm:adversarialWm} below (Appendix~\ref{proof:adversarialWm}), we argue that in the adversarial setting, we can still convert deletions into erasures via the histogram-based repetition detection algorithm of Section~\ref{subsec:noiselessWm}. After the detection part, we use the following matching scheme: We first remove deleted columns from $\mathbf{D}^{(1)}$, and then perform exact sequence matching.
We state our main result on the adversarial matching capacity in the following theorem:
\begin{thm}{\textbf{(Adversarial Matching Capacity)}}\label{thm:adversarialWm}
Consider a database distribution $p_X$ and an adversary with a $\delta$-\emph{deletion budget} when there is identical repetition ($W_n=m_n$). Then the adversarial matching capacity is
\iftoggle{singlecolumn}{
\begin{align}
C^{\text{adv}}(\delta) &= \begin{cases}
D(\delta\|1-\hat{q}),&\text{if } \delta\le 1-\hat{q}\\
0, &\text{if } \delta> 1-\hat{q}
\end{cases}
\end{align}
}{
\begin{align}
C^{\text{adv}}(\delta) &= \begin{cases}
D(\delta\|1-\hat{q}),&\text{if } \delta\le 1-\hat{q}\\
0, &\text{if } \delta> 1-\hat{q}
\end{cases}
\end{align}
}
where $\hat{q} \triangleq \sum_{x\in\mathfrak{X}} p_X(x)^2$.
\end{thm}
\begin{proof}
See Appendix~\ref{proof:adversarialWm}.
\end{proof}
\begin{figure}[t]
\centerline{\includegraphics[width=0.5\textwidth]{Figures/adversarialmatchingcapacity.pdf}}
\caption{Matching capacities $C$ vs. deletion probability/budget ($\delta$) when $X\sim \text{Unif}(\mathfrak{X})$, $\mathfrak{X}=[5]$. Notice that in this case $\hat{q}=0.2$ and for $\delta>1-\hat{q}=0.8$ the adversarial matching capacity $C^{\text{adv}}(\delta)$ is zero, while the matching capacity with random deletions $C(\infty,0)$ is positive.}
\label{fig:adversarialcapacity}
\end{figure}
\begin{sloppypar}
The matching capacities for random and adversarial deletions as a function of the deletion probability/budget are illustrated in Figure~\ref{fig:adversarialcapacity}. Note that for $\delta>1-\hat{q}$, we have $C^{\text{adv}}(\delta)=0$ whereas ${C(\infty,0)=(1-\delta) H(X)>0}$. Furthermore, when $\delta\le 1-\hat{q}$ the matching capacity is significantly reduced when the column deletions are intentional rather than random.
\end{sloppypar}
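The two curves of Figure~\ref{fig:adversarialcapacity} are straightforward to reproduce numerically; a short sketch (ours, in bits) for the uniform example with $|\mathfrak{X}|=5$, where $\hat{q}=0.2$:

```python
import numpy as np

def binary_kl(a, b):
    # D(a || b) in bits for Bernoulli parameters a and b.
    out = 0.0
    for p, q in ((a, b), (1 - a, 1 - b)):
        if p > 0:
            out += p * np.log2(p / q)
    return out

def adversarial_capacity(delta, p):
    # C_adv(delta) = D(delta || 1 - q_hat) for delta <= 1 - q_hat and 0
    # otherwise, with q_hat = sum_x p(x)^2 (Theorem: adversarial capacity).
    q_hat = float(np.sum(np.asarray(p) ** 2))
    return binary_kl(delta, 1 - q_hat) if delta <= 1 - q_hat else 0.0

def random_deletion_capacity(delta, p):
    # Noiseless random-deletion benchmark C(infty, 0) = (1 - delta) H(X).
    p = np.asarray(p)
    return (1 - delta) * float(-np.sum(p * np.log2(p)))
```
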
\subsection{What If There Were No Seeds?}
\label{subsec:seedlessWm}
In Section~\ref{sec:matchingcapacityWm}, we assumed the availability of seeds with a seed size $\Lambda_n=\Omega(\log\log m_n)$. Now, we focus on the identical repetition scenario with no seeds.
Note that the replica detection algorithm of Section~\ref{subsec:replicadetection} does not require any seeds. Therefore in the seedless scenario, we can still detect the replicas with a vanishing probability of error. On the other hand, in the general noisy setting, the deletion detection algorithm of Section~\ref{subsec:seededdeletiondetection} necessitates seeds. Therefore, in the case of no seeds, we cannot perform deletion detection and we need to modify the matching scheme of Section~\ref{subsec:matchingschemeWm} to obtain lower bounds on the matching capacity $C^{\text{seedless}}(\infty,0)$.
For tractability, we focus on the case with \emph{i.i.d.} database entries, \emph{i.e.,} $\gamma=0$. More formally, we assume $X_i\overset{\text{iid}}{\sim}p_X$.
Under this assumption, we state a lower bound on the unseeded matching capacity with identical repetition in the following theorem.
\begin{thm}{\textbf{(Seedless Matching Capacity with Identical Repetition})}\label{thm:mainresultseedless}
Consider a database distribution $p_X$, a noise distribution $p_{Y|X}$, a repetition distribution $p_S$ and an identical repetition pattern. Then, in the seedless case, the matching capacity $C^{\text{seedless}}(\infty,0)$ satisfies
\iftoggle{singlecolumn}{
\begin{align}
C^{\text{seedless}}(\infty,0)&\ge \left[I(X;Y^S,S)-H_b(\delta)\right]^+\label{eq:seedlessachievable}\\
C^{\text{seedless}}(\infty,0) &\le I(X;Y^S,S)\label{eq:seedlessconverse}
\end{align}
}{
\begin{align}
C^{\text{seedless}}(\infty,0)&\ge \left[I(X;Y^S,S)-H_b(\delta)\right]^+\label{eq:seedlessachievable}\\
C^{\text{seedless}}(\infty,0) &\le I(X;Y^S,S)\label{eq:seedlessconverse}
\end{align}
}
where $\delta\triangleq p_S(0)$ is the deletion probability, $S\sim p_S$, and, conditioned on $X$, ${Y^S}$ is distributed as
\iftoggle{singlecolumn}{
\begin{align}
\Pr(Y^{S}=y^{S}|X=x)&=\begin{cases}
\prod\limits_{j=1}^{S} p_{Y|X}(y_j|x)&\text{if }S>0\\
\mathbbm{1}_{[y^{S} = E]} &\text{if }S=0
\end{cases}
\end{align}
}{
\begin{align}
\Pr(Y^{S}=y^{S}|X=x)&=\begin{cases}
\prod\limits_{j=1}^{S} p_{Y|X}(y_j|x)&\text{if }S>0\\
\mathbbm{1}_{[y^{S} = E]} &\text{if }S=0
\end{cases}
\end{align}
}
where $E$ denotes the empty string.
Furthermore, for repetition distributions with $\delta\le 1-\nicefrac{1}{|\mathfrak{X}|}$, the lower bound can be tightened as
\iftoggle{singlecolumn}{
\begin{align}
C^{\text{seedless}}(\infty,0)&\ge \left[I(X;Y^S,S)+\delta[H(X)-\log(|\mathfrak{X}|-1)]^+-H_b(\delta)\right]^+
\end{align}
}{
\begin{align}
C^{\text{seedless}}(\infty,0)&\ge [I(X;Y^S,S)-H_b(\delta)\notag\\&\hspace{0.5em}+\delta[H(X)-\log(|\mathfrak{X}|-1)]^+]^+
\end{align}
}
\end{thm}
\begin{proof}
See Appendix~\ref{proof:mainresultseedless}.
\end{proof}
We note that although the converse results of Theorems~\ref{thm:mainresultWm} and \ref{thm:mainresultseedless} match, the achievable rates differ by $H_b(\delta)$. In other words, Theorem~\ref{thm:mainresultseedless} implies that the gap between the lower and upper bounds on the seedless matching capacity is at most $H_b(\delta)$. This gap stems from our inability to detect deletions in the achievability part. Hence, we conjecture that the lower bound in Theorem~\ref{thm:mainresultseedless} is loose while the converse is tight; indeed, in the noiseless setting, as discussed in Section~\ref{subsec:noiselessWm}, deletion detection can be performed without seeds and the achievability bound is improved and tight.
\subsection{Zero-Rate Regime}
\label{subsec:zerorateWm}
In Section~\ref{sec:matchingcapacityWm}, we considered the matching capacity $C(\infty,0)$ for $\Lambda_n=\Omega(\log\log m_n)$ when the database growth rate $R$ is positive. In other words, so far, we have assumed
\iftoggle{singlecolumn}{
\begin{align}
\lim\limits_{n\to\infty} \frac{1}{n}\log m_n &>0
\end{align}
}{
\begin{align}
\lim\limits_{n\to\infty} \frac{1}{n}\log m_n &>0
\end{align}
}
The detection algorithms we presented in Sections~\ref{subsec:replicadetection} through \ref{subsec:noiselessWm} depended on the row size $m_n$ being large compared to the column size $n$. In this section, we further investigate these algorithms to derive the sufficient and/or necessary conditions on the relation between $m_n$ and $n$ in order for them to work in the zero-rate regime where
\iftoggle{singlecolumn}{
\begin{align}
\lim\limits_{n\to\infty} \frac{1}{n}\log m_n &=0.
\end{align}
}{
\begin{align}
\lim\limits_{n\to\infty} \frac{1}{n}\log m_n &=0.
\end{align}
}
Since $R=0$, we define the non-asymptotic database growth rate $R_n$ as
\iftoggle{singlecolumn}{
\begin{align}
R_n &\triangleq \frac{1}{n}\log m_n.
\end{align}
}{
\begin{align}
R_n &\triangleq \frac{1}{n}\log m_n.
\end{align}
}
Here, $R=0$ trivially implies $R_n\to0$ as $n\to\infty$. Below we investigate the sufficient conditions on $R_n$ such that the results of Sections~\ref{sec:matchingcapacityWm} and \ref{sec:matchingcapacityW1} hold.
\subsubsection{Noisy Replica Detection}
We consider the replica detection algorithm discussed in Section~\ref{subsec:replicadetection}. Note that the RHS of equation~\eqref{eq:replicadetectionlast} of Appendix~\ref{proof:noisyreplicadetection} has $2K-2\le 2 n s_{\max} = O(n)$ additive terms, each decaying exponentially in $m_n$. Thus, for a given average Hamming distance threshold $\tau\in(p_1,p_0)$, which is chosen based on $\mathbf{P}$ and $p_{Y|X}$ and is thus constant with respect to $n$,
\iftoggle{singlecolumn}{
\begin{align}
m_n &\ge \frac{\log(n s_{\max})}{\min\{D(\tau\|p_0),D(1-\tau\|1-p_1)\}}=\Theta(\log n)
\end{align}
}{
\begin{align}
m_n &\ge \frac{\log(n s_{\max})}{\min\{D(\tau\|p_0),D(1-\tau\|1-p_1)\}}\notag\\&=\Theta(\log n)
\end{align}
}
is enough to ensure a vanishing replica detection error probability. In other words, as long as $m_n =\Omega(\log n)$ and in turn
\iftoggle{singlecolumn}{
\begin{align}
R_n &= \Omega\left(\frac{\log \log n}{n}\right)
\end{align}
}{
\begin{align}
R_n &= \Omega\left(\frac{\log \log n}{n}\right)
\end{align}
}
our replica detection algorithm works.
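To make the bound concrete, the sketch below (ours) evaluates the minimal row size for illustrative values of $\tau$, $p_0$ and $p_1$ with $p_1<\tau<p_0$, exhibiting the $\Theta(\log n)$ growth:

```python
import numpy as np

def binary_kl(a, b):
    # D(a || b) in bits for Bernoulli parameters; assumes 0 < a, b < 1.
    return a * np.log2(a / b) + (1 - a) * np.log2((1 - a) / (1 - b))

def min_rows_for_replica_detection(n, s_max, tau, p0, p1):
    # Smallest row size m_n making the union bound over the O(n)
    # pairwise Hamming-distance tests vanish:
    #   m_n >= log(n * s_max) / min{ D(tau||p0), D(1-tau||1-p1) },
    # assuming p1 < tau < p0 so that both exponents are positive.
    exponent = min(binary_kl(tau, p0), binary_kl(1 - tau, 1 - p1))
    return np.log2(n * s_max) / exponent
```
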
\subsubsection{Seeded Deletion Detection}
We study the seeded deletion detection algorithm discussed in Section~\ref{subsec:seededdeletiondetection}. Since we run the deletion detection algorithm only on the seeds $(\mathbf{G}^{(1)},\mathbf{G}^{(2)})$ and not on the database pair $(\mathbf{D}^{(1)},\mathbf{D}^{(2)})$ directly, the relationship between $m_n$ and $n$ does not affect the success of the deletion detection. Thus, as long as the seed size satisfies $\Lambda_n=\Omega(\log n)$, our deletion detection algorithm works for any database growth rate, including the zero-rate regime. This in turn implies that $m_n\ge \Lambda_n=\Omega(\log n)$ and
\iftoggle{singlecolumn}{
\begin{align}
R_n &= \Omega\left(\frac{\log \log n}{n}\right).
\end{align}
}{
\begin{align}
R_n &= \Omega\left(\frac{\log \log n}{n}\right).
\end{align}
}
\subsubsection{Noiseless Joint Deletion-Replication Detection}
We investigate the histogram-based joint deletion-replication detection algorithm introduced in Section~\ref{subsec:noiselessWm} for the noiseless scenario. By Lemma~\ref{lem:histogram}, $m_n=\omega(n^4)$ is sufficient. Thus, as long as $\log m_n - 4 \log n\to\infty$, the histogram-based detection can be performed with a performance guarantee. In turn, for any
\iftoggle{singlecolumn}{
\begin{align}
R_n&= \Omega\left(\frac{\log n}{n}\right)
\end{align}
}{
\begin{align}
R_n&= \Omega\left(\frac{\log n}{n}\right)
\end{align}
}
the histogram-based detection algorithm has a vanishing probability of error.
Therefore, in the noiseless setting, database growth rate $\smash{R_n=\Omega\left(\nicefrac{\log n}{n}\right)}$ provides enough granularity on the column histograms and we can perform detection with a decaying probability of error which then leads to asymptotically-zero mismatch probability.
Note that, for tractability, so far we have collapsed the databases into binary-valued ones. Further, in Lemma~\ref{lem:histogram}, we showed that for the collapsed databases $m_n=\omega(n^4)$ is enough for the asymptotic uniqueness of the column histograms.
We now tighten this order relation for the special case $\gamma=0$, which results in an \emph{i.i.d.} database distribution $X_{i,j}\overset{\text{i.i.d.}}{\sim}p_X$ with support $\mathfrak{X}$.
\begin{lem}{\textbf{(Asymptotic Uniqueness of the Uncollapsed Histograms)}}\label{lem:histogramuncollapsed}
Consider an \emph{i.i.d.} database distribution $p_X$. Let ${H}^{(1)}_j$ denote the histogram of the $j$\textsuperscript{th} column of $\mathbf{D}^{(1)}$.
Then,
\iftoggle{singlecolumn}{
\begin{align}
\Pr\left(\exists i,j\in [n],\: i\neq j,H^{(1)}_i=H^{(1)}_j\right)\to 0 \text{ as }n\to \infty
\end{align}
}{
\begin{align}
\Pr\left(\exists i,j\in [n],\: i\neq j,H^{(1)}_i=H^{(1)}_j\right)\to 0 \text{ as }n\to \infty
\end{align}
}
if $m_n=\omega(n^\frac{4}{|\mathfrak{X}|-1})$.
\end{lem}
\begin{proof}
See Appendix~\ref{proof:histogramuncollapsed}.
\end{proof}
Note that in the binary setting the results of Lemmas~\ref{lem:histogram} and \ref{lem:histogramuncollapsed} agree.
Lemma~\ref{lem:histogramuncollapsed} implies that we only need a row size $m_n$ polynomial in $n$ to guarantee enough granularity for the uniqueness of $H^{(1)}_i$, and that the degree of the polynomial scales inversely with the alphabet size $|\mathfrak{X}|$. Furthermore, to demonstrate the tightness of the requirement $m_n=\omega\big(n^{\frac{4}{|\mathfrak{X}|-1}}\big)$, we consider the special case where $p_X$ is uniform over $\mathfrak{X}$. This leads to the following proposition:
\begin{prop}\label{prop:uniformhistogram}
Let ${H}^{(1)}_j$ denote the histogram of the $j$\textsuperscript{th} column of $\mathbf{D}^{(1)}$. If ${p_X(x)=\frac{1}{|\mathfrak{X}|},\:\forall x\in\mathfrak{X}}$, then
\iftoggle{singlecolumn}{
\begin{align}
\Pr\left(\exists i,j\in [n],\: i\neq j,H^{(1)}_i=H^{(1)}_j\right)&=n^2 m_n^{\frac{1-|\mathfrak{X}|}{2}} C_{|\mathfrak{X}|} (1+o_n(1))
\end{align}
}{
\begin{align}
\Pr\Big(\exists i,j\in [n],\: &i\neq j,H^{(1)}_i=H^{(1)}_j\Big)\notag\\&=n^2 m_n^{\frac{1-|\mathfrak{X}|}{2}} C_{|\mathfrak{X}|} (1+o_n(1))
\end{align}
}
where ${C_{|\mathfrak{X}|}=\left(4\pi\right)^{\frac{1-|\mathfrak{X}|}{2}} |\mathfrak{X}|^{\frac{|\mathfrak{X}|}{2}}}$.
\end{prop}
\begin{proof}
See Appendix~\ref{proof:uniformhistogram}.
\end{proof}
Proposition~\ref{prop:uniformhistogram} states that in the setting with \emph{i.i.d.} uniform database distribution, for the asymptotic uniqueness of the column histograms $m_n=\omega(n^{\frac{4}{|\mathfrak{X}|-1}})$ is not only sufficient but also necessary.
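For intuition on the $m_n^{\frac{1-|\mathfrak{X}|}{2}}$ scaling, in the binary uniform case the per-pair collision probability can be computed exactly and compared against its Stirling approximation $C_2\, m_n^{-\nicefrac{1}{2}}$ with $C_2=\nicefrac{1}{\sqrt{\pi}}$, matching the constant of Proposition~\ref{prop:uniformhistogram} for $|\mathfrak{X}|=2$; a short sketch (ours):

```python
from math import comb, pi, sqrt

def pair_collision_prob(m):
    # Exact probability that two independent Bin(m, 1/2) column
    # histograms coincide: sum_k C(m,k)^2 / 4^m = C(2m, m) / 4^m.
    return comb(2 * m, m) / 4**m

def stirling_estimate(m):
    # Asymptotic value 1/sqrt(pi*m), i.e. the m^{(1-|X|)/2} scaling
    # specialized to a binary uniform alphabet.
    return 1.0 / sqrt(pi * m)
```
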
\subsubsection{Independent Repetition Row Matching Scheme}
In the independent repetition scenario, we have no detection algorithms which depend on the large-$m_n$ assumption. Therefore, so long as the RHS of \eqref{eq:rowwiseachievable} is positive, any $R_n=o_n(1)$ is achievable. We stress that this observation trivially applies to the identical repetition case as well since one can simply ignore any underlying structure and perform the matching scheme given in Section~\ref{subsec:achievabilityW1}.
\section{Introduction}
\label{sec:intro}
\IEEEPARstart{W}{ith} the exponential boom in smart devices and the growing popularity of big data, companies and institutions have been gathering more and more personal data from users which is then either published or sold for research or commercial purposes. Although the published data is typically \emph{anonymized}, \emph{i.e.,} explicit identifiers of the users, such as names and dates of birth are removed, there has been a growing concern over potential privacy leakage from anonymized data, approached from legal~\cite{ohm2009broken} and corporate~\cite{bigdata} points of view. These concerns are also articulated in the respective literature through successful practical de-anonymization attacks on real data~\cite{naini2015you,datta2012provable,narayanan2008robust,sweeney1997weaving,takbiri2018matching,wondracek2010practical,su2017anonymizing,shusterman2019robust,gulmezoglu2017perfweb,bilge2009all,srivatsa2012deanonymizing,cheng2010you,kinsella2011m,kim2016inferring,de2013unique}. \emph{Obfuscation}, which refers to the deliberate addition of noise to the database entries, has been suggested as an additional measure to protect privacy~\cite{sweeney1997weaving}. While extremely valuable, this line of work does not provide a fundamental and rigorous understanding of the conditions under which anonymized and obfuscated databases are prone to privacy attacks.
In light of the above practical privacy attacks on databases, several groups initiated rigorous analyses of the graph matching problem~\cite{erdos1960evolution,babai1980random,janson2011random,czajka2008improved,yartseva2013performance,pedarsani2013bayesian,fiori2013robust,lyzinski2014seeded,onaran2016optimal,cullina2016improved}. Correlated graph matching has applications beyond privacy, such as image processing~\cite{sanfeliu2002graph}, computer vision~\cite{galstyan2021optimal}, single-cell biological data alignment~\cite{zhu2021robust,tran2020benchmark} and DNA sequencing, which is shown to be equivalent to matching bipartite graphs~\cite{blazewicz2002dna}. Matching of correlated databases, also equivalent to bipartite graph matching, has also been investigated from information-theoretic~\cite{cullina,shirani8849392,dai2019database,bakirtas2021database,bakirtas2022seeded,noiselesslonger} and statistical~\cite{kunisky2022strong} perspectives. In \cite{cullina}, Cullina \emph{et al.} introduced \textit{cycle mutual information} as a correlation metric and derived sufficient conditions for successful matching and a converse result using perfect recovery as the error criterion. In~\cite{shirani8849392}, Shirani \emph{et al.} considered a pair of anonymized and obfuscated databases and drew analogies between database matching and channel decoding. By doing so, they derived necessary and sufficient conditions on the \emph{database growth rate} for reliable matching, in the presence of noise on the database entries. In~\cite{dai2019database}, Dai \emph{et al.} considered the matching of a pair of databases with jointly Gaussian attributes with a perfect recovery constraint. Similarly, in~\cite{kunisky2022strong}, Kunisky and Niles-Weed considered the same problem from the statistical perspective in different regimes of database size and under several recovery criteria.
In~\cite{zeynepdetecting2022}, Kahraman and Nazer investigated the necessary and sufficient conditions for detecting whether two Gaussian databases are correlated. More recently, motivated by the need for aligning single-cell data obtained from multiple biological sources/experiments~\cite{zhu2021robust,tran2020benchmark}, in~\cite{chen2022one} Chen \emph{et al.} investigated the matching of two databases that are noisy observations of a single underlying database under the fractional-error criterion, where the noise is assumed to be Gaussian. They proposed a data-driven approach and analytically derived minimax lower bounds for successful matching.
\begin{figure}[t]
\iftoggle{singlecolumn}{
\centerline{\includegraphics[width=0.75\textwidth,trim={0 13.5cm 2cm 0},clip]{Figures/intro.pdf}}
}{
\centerline{\includegraphics[width=0.5\textwidth,trim={0 13.5cm 2cm 0},clip]{Figures/intro.pdf}}
}
\caption{An illustrative example of database matching under identical repetition, where each row experiences the same synchronization error. The columns circled in red are deleted, whereas the fourth column, which is circled in blue, is repeated twice, \emph{i.e.,} replicated. For each $(i,j)$, $Y_{i,j}$ is the noisy observation of $X_{i,j}$. Furthermore, for each $i$, $Y_{i,4}(1)$ and $Y_{i,4}(2)$ are noisy replicas of $X_{i,4}$. Our goal is to estimate the row permutation $\boldsymbol{\Theta}_n$, which in this example is given as $\boldsymbol{\Theta}_n(1)=5$, $\boldsymbol{\Theta}_n(2)=1$, $\boldsymbol{\Theta}_n(3)=4$,
$\boldsymbol{\Theta}_n(4)=3$ and $\boldsymbol{\Theta}_n(5)=2$, by matching the rows of $\mathbf{D}^{(1)}$ and $\mathbf{D}^{(2)}$. Here the $i$\textsuperscript{th} row of $\mathbf{D}^{(1)}$ corresponds to the $\Theta_n(i)$\textsuperscript{th} row of $\mathbf{D}^{(2)}$.}
\label{fig:intro}
\end{figure}
Motivated by synchronization errors in the sampling of time-series datasets, in this paper we present a unified, general framework for the database matching problem under noisy synchronization errors with a near-exact recovery criterion. Specifically, we investigate the matching of Markov databases under arbitrary noise and synchronization errors. Our goal is to derive necessary and sufficient conditions on the database growth rate~\cite{shirani8849392} for the successful matching of database rows. The generalized Markov database model captures correlations between the attributes (columns), where synchronization errors, in the form of random entry deletions and replications, are followed by noise. As such, this paper generalizes the aforementioned work on database matching under noise only. Our setting is illustrated in Figure~\ref{fig:intro}.
We consider two extreme regimes regarding the nature of the synchronization errors, as results derived for these corner cases provide insights into the intermediate regimes. To this end, we first focus on the \emph{identical repetition} setting, where the repetition pattern is constant across rows. In other words, in this setting, deletions and replications only take place columnwise. We consider a two-phase matching scheme, where we first infer the underlying repetition structure using permutation-invariant features of the columns. This is followed by a matching phase that relies on the known replica and deletion locations. We show that as long as the databases are not independent, in the first phase replicas can be found with high probability through a series of hypothesis tests on the Hamming distances between columns. Furthermore, assuming \emph{seed} rows whose identities are known in both databases~\cite{shirani2017seeded,fishkind2019seeded}, we show that if the seed size $\Lambda_n$ grows double-logarithmically with the number of rows $m_n$, where $n$ denotes the column size, deletion locations can also be extracted. In the absence of noise, seeds are not needed and column histograms can be used to detect both replicas and deletions. Once the repetition (including deletion and replication) locations are identified, in the second phase we propose a joint typicality-based row matching scheme to derive sufficient conditions on the database growth rate for successful matching. Finally, we prove a tight converse result through a modified version of Fano's inequality, completely characterizing the matching capacity when the repetition pattern is constant across rows.
Next, we focus on the other extreme, namely the \emph{independent repetition} setting where the repetition pattern is independent in each row and there is no underlying repetition structure across rows. Under probabilistic side information on the deletion locations, we propose a row matching scheme and derive an achievable database growth rate. This, together with an outer bound obtained through Fano's inequality, provides upper and lower bounds on the matching capacity in the independent repetition setting. Comparing the bounds in the two extremes, we show that the matching capacity is lower and hence matching is more difficult under the independent repetition model. Finally, based on these two extreme models, we state bounds on the matching capacity for any intermediate repetition structure.
We also discuss the adversarial repetition model, where we assume that synchronization errors, in the form of column deletions, are chosen by a constrained adversary whose goal is to hinder the matching of databases; the constraint takes the form of a fractional column deletion budget, which naturally induces a trade-off between utility and privacy. Since this adversarial model forces us to focus on the worst-case scenario and, in turn, prohibits the use of typicality and Fano's inequality, we propose an exact sequence matching scheme and perform a more careful analysis of the worst-case error in our achievability and converse analyses, focusing on the Hamming distances between the rows (users) of the databases, as is done in the adversarial channel literature~\cite{bassily2014causal}. Under the identical repetition model, we completely characterize the adversarial matching capacity.
In addition to the characterization of the matching capacity under various assumptions, our results provide sufficient conditions on the number and size of the columns for the column histograms to be asymptotically unique. Since histograms show up frequently in information theory, probability theory and statistics, this result could be of independent interest. In addition, our novel matching scheme in the independent repetition case can be directly converted into a decoding strategy for input-constrained noisy synchronization channels, a well-investigated model in the information theory literature~\cite{gallager1961sequential,5629489,6915855,9056064}.
\subsection{Paper Organization}
\label{subsec:organization}
The organization of this paper is as follows: Section~\ref{sec:problemformulation} contains the problem formulation. In Section~\ref{sec:matchingcapacityWm}, our main results on the matching capacity under the identical repetition model are presented. Section~\ref{sec:matchingcapacityW1} contains our main results on the matching capacity under the independent repetition assumption. In Section~\ref{sec:discussion}, we discuss the underlying model assumptions and investigate how variations on these assumptions impact some of the results. Finally, in Section~\ref{sec:conclusion} the results and ongoing work are discussed.
\subsection{Notations}
\label{subsec:notations}
In this paper we use the following notations:
\begin{itemize}
\item $[n]$ denotes the set of integers $\{1,\dots,n\}$.
\item Matrices are denoted with uppercase bold letters. For a matrix $\mathbf{D}$, $D_{i,j}$ denotes the $(i,j)$\textsuperscript{th} entry.
\item $a^n$ denotes a row vector consisting of scalars $a_1,\dots,a_n$.
\item Random variables are denoted by uppercase letters while their realizations are denoted by lowercase ones.
\item The indicator of event $E$ is denoted by $\mathbbm{1}_E$.
\item $H$ and $H_b$ denote the Shannon entropy and the binary entropy functions~\cite[Chapter 2]{cover2006elements}, respectively.
\item $O$, $o$, $\Theta$, $\omega$ and $\Omega$ denote the standard asymptotic growth notations~\cite[Chapter 3]{cormen2022introduction}.
\item $D_{KL}(p_X\|q_X)$ denotes the Kullback-Leibler divergence~\cite[Chapter 2.3]{cover2006elements} between the probability distributions $p_X$ and $q_X$. For scalars $p,q\in(0,1)$, $D(p\|q)$ denotes the Kullback-Leibler divergence between two Bernoulli distributions with respective parameters $p$ and $q$. More formally,
\iftoggle{singlecolumn}{
\begin{align}
D(p\| q) &= (1-p) \log\frac{1-p}{1-q}+ p\log\frac{p}{q}
\end{align} }
{
\begin{align}
D(p\| q) &= (1-p) \log\frac{1-p}{1-q}+ p\log\frac{p}{q}
\end{align}
}
\item The logarithms, unless stated otherwise, are in base $2$.
\end{itemize}
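For concreteness, the Bernoulli divergence $D(p\|q)$ defined above is simple to compute; the routine below is an illustrative sketch (the function name is ours, not part of the paper's notation), using base-2 logarithms as stated.

```python
import math

def binary_kl(p: float, q: float) -> float:
    """D(p || q): KL divergence between Bernoulli(p) and Bernoulli(q),
    in bits (base-2 logarithms, matching the convention above)."""
    assert 0 < q < 1, "q must lie strictly inside (0, 1)"
    d = 0.0
    if p > 0:
        d += p * math.log2(p / q)          # p * log(p / q) term
    if p < 1:
        d += (1 - p) * math.log2((1 - p) / (1 - q))  # (1-p) * log((1-p)/(1-q)) term
    return d

# Sanity checks: D(p || p) = 0 and D(p || q) > 0 for p != q.
assert binary_kl(0.3, 0.3) == 0.0
assert binary_kl(0.1, 0.5) > 0
```

This quantity governs the exponents in the Chernoff-type bounds used in the proofs below.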
\section{Conclusion}
\label{sec:conclusion}
In this work, we have presented a unified information-theoretic foundation for database matching under noise and synchronization errors. We have shown that when the repetition pattern is constant across rows, the running Hamming distances between consecutive columns of the correlated repeated database can be used to detect replicas. In addition, given seeds whose size grows double-logarithmically with the number of rows, a Hamming distance-based threshold test, after an adequate remapping of the database entries, can be used to infer the locations of the deletions. Using the proposed detection algorithms and a joint typicality-based rowwise matching scheme, we have derived an achievable database growth rate, which we have proved to be tight. Therefore, we have completely characterized the database matching capacity under noisy column repetitions. Furthermore, for the setting where the repetition takes place entrywise, we have derived achievable database growth rates by proposing a typicality-based matching scheme, together with a converse result, building an analogy between database matching and synchronization channel decoding. We have also discussed some extensions, such as the adversarial column deletion setting rather than the random one.
Other natural extensions beyond those studied in this paper include the finite column size regime, where tools from finite-blocklength information theory could be useful, and practical algorithms with theoretical guarantees. An extensive analysis of the parallels between database matching under synchronization errors and two-dimensional synchronization channels~\cite{yaakobitit,yaakobiisit} and the construction of codes tailored to correct the error patterns investigated in this paper could be an interesting line of future work. Finally, one can extend our adversarial setting into a noisy one where the privacy-preserving mechanism not only deletes columns but also introduces intentional noise on the microdata, and investigate the adversarial matching capacity through a worst-case analysis.
\section{Proof of Lemma~\ref{lem:noisyreplicadetection}}
\label{proof:noisyreplicadetection}
Observe that since the rows of $\mathbf{D}^{(2)}$ are \emph{i.i.d.} conditioned on the column repetition pattern $S^n$, the Hamming distance $d_H(C^{m_n}_j,C^{m_n}_{j+1})$ between consecutive columns $C^{m_n}_j$ and $C^{m_n}_{j+1}$ follows a Binomial distribution whose success parameter depends on whether $C^{m_n}_j$ and $C^{m_n}_{j+1}$ are noisy replicas or not.
\begin{sloppypar}
Let $H_1$ denote the hypothesis that two random variables $Y_1,Y_2\sim p_Y$ are noisy replicas of the same entry, and $H_0$ the hypothesis that they are not. Further, let
\iftoggle{singlecolumn}{
\begin{align}
p_0&\triangleq \Pr(Y_1\neq Y_2|H_0)\\
p_1&\triangleq \Pr(Y_1\neq Y_2|H_1)
\end{align}
}{
\begin{align}
p_0&\triangleq \Pr(Y_1\neq Y_2|H_0)\\
p_1&\triangleq \Pr(Y_1\neq Y_2|H_1)
\end{align}
}
Then we have $d_H(C^{m_n}_j,C^{m_n}_{j+1})\sim \text{Binom}(m_n,p_1)$ if $C^{m_n}_j$ and $C^{m_n}_{j+1}$ are noisy replicas and ${d_H(C^{m_n}_j,C^{m_n}_{j+1})\sim \text{Binom}(m_n,p_0)}$ otherwise. Thus, proving that $p_0$ and $p_1$ are bounded away from one another will allow us to use the running Hamming distance based threshold test discussed in Section~\ref{subsec:replicadetection}.
\end{sloppypar}
\begingroup
\allowdisplaybreaks
Our goal is to prove that $p_0>p_1$. First, we can formally rewrite $p_0$ as
\iftoggle{singlecolumn}{
\begin{align}
p_0&= \sum\limits_{x_1\in\mathfrak{X}}\sum\limits_{x_2\in\mathfrak{X}}\sum\limits_{y\in\mathfrak{X}} \Pr(X_1=x_1)\Pr(X_2=x_2|X_1=x_1)\Pr(Y_1=y|X_1=x_1)\Pr(Y_2\neq y|X_2=x_2)\\
&=\sum\limits_{x_1\in\mathfrak{X}}\sum\limits_{x_2\in\mathfrak{X}}\sum\limits_{y\in\mathfrak{X}} \Pr(X_1=x_1)\Pr(X_2=x_2|X_1=x_1) p_{Y|X}(y|x_1)[1-p_{Y|X}(y|x_2)]\\
&= \sum\limits_{i=1}^{|\mathfrak{X}|} \sum\limits_{j=1}^{|\mathfrak{X}|} \sum\limits_{k=1}^{|\mathfrak{X}|}
u_i\: P_{i,j}\: p_{Y|X}(k|i)\: [1-p_{Y|X}(k|j)]\\
&=\sum\limits_{i=1}^{|\mathfrak{X}|} \sum\limits_{j=1}^{|\mathfrak{X}|} \sum\limits_{k=1}^{|\mathfrak{X}|}
u_i\: [(1-\gamma)u_j+\gamma \delta_{i j}]\: p_{Y|X}(k|i) \:[1-p_{Y|X}(k|j)]\\
&= \sum\limits_{i=1}^{|\mathfrak{X}|} \sum\limits_{k=1}^{|\mathfrak{X}|}
u_i\: [(1-\gamma)u_i+\gamma]\: p_{Y|X}(k|i) \:[1-p_{Y|X}(k|i)]\notag\\
&\hspace{5em}+ \sum\limits_{i=1}^{|\mathfrak{X}|} \sum\limits_{j\neq i} \sum\limits_{k=1}^{|\mathfrak{X}|}
u_i\: [(1-\gamma)u_j]\: p_{Y|X}(k|i) \:[1-p_{Y|X}(k|j)]\\
&= (1-\gamma) \sum\limits_{i=1}^{|\mathfrak{X}|} \sum\limits_{j=1}^{|\mathfrak{X}|} \sum\limits_{k=1}^{|\mathfrak{X}|} u_i\: u_j\: p_{Y|X}(k|i) \:[1-p_{Y|X}(k|j)] + \gamma \sum\limits_{i=1}^{|\mathfrak{X}|} \sum\limits_{k=1}^{|\mathfrak{X}|}
u_i\: p_{Y|X}(k|i) \:[1-p_{Y|X}(k|i)]\\
&= (1-\gamma) p_0^{\prime} + \gamma p_1^{\prime}
\end{align}}
{
\begin{align}
p_0&= \sum\limits_{x_1\in\mathfrak{X}}\sum\limits_{x_2\in\mathfrak{X}}\sum\limits_{y\in\mathfrak{X}} \Pr(X_1=x_1)\Pr(X_2=x_2|X_1=x_1)\notag\\
&\hspace{4em}\Pr(Y_1=y|X_1=x_1)\Pr(Y_2\neq y|X_2=x_2)\\
&=\sum\limits_{x_1\in\mathfrak{X}}\sum\limits_{x_2\in\mathfrak{X}}\sum\limits_{y\in\mathfrak{X}} \Pr(X_1=x_1)\Pr(X_2=x_2|X_1=x_1)\notag\\
&\hspace{6em} p_{Y|X}(y|x_1)[1-p_{Y|X}(y|x_2)]\\
&= \sum\limits_{i=1}^{|\mathfrak{X}|} \sum\limits_{j=1}^{|\mathfrak{X}|} \sum\limits_{k=1}^{|\mathfrak{X}|}
u_i\: P_{i,j}\: p_{Y|X}(k|i)\: [1-p_{Y|X}(k|j)]\\
&=\sum\limits_{i=1}^{|\mathfrak{X}|} \sum\limits_{j=1}^{|\mathfrak{X}|} \sum\limits_{k=1}^{|\mathfrak{X}|}
u_i\: [(1-\gamma)u_j+\gamma \delta_{i j}]\notag\\
&\hspace{6em} p_{Y|X}(k|i) \:[1-p_{Y|X}(k|j)]\\
&= \sum\limits_{i=1}^{|\mathfrak{X}|} \sum\limits_{k=1}^{|\mathfrak{X}|}
u_i\: [(1-\gamma)u_i+\gamma]\notag\\&\hspace{6em} p_{Y|X}(k|i) \:[1-p_{Y|X}(k|i)]\notag\\
&\hspace{1em}+ \sum\limits_{i=1}^{|\mathfrak{X}|} \sum\limits_{j\neq i} \sum\limits_{k=1}^{|\mathfrak{X}|}
u_i\: [(1-\gamma)u_j]\notag\\
&\hspace{7em} p_{Y|X}(k|i) \:[1-p_{Y|X}(k|j)]\\
&= (1-\gamma) \sum\limits_{i=1}^{|\mathfrak{X}|} \sum\limits_{j=1}^{|\mathfrak{X}|} \sum\limits_{k=1}^{|\mathfrak{X}|} u_i\: u_j\notag\\&\hspace{8em}p_{Y|X}(k|i) \:[1-p_{Y|X}(k|j)]\notag\\
&\hspace{2em}+ \gamma \sum\limits_{i=1}^{|\mathfrak{X}|} \sum\limits_{k=1}^{|\mathfrak{X}|}
u_i\: p_{Y|X}(k|i) \:[1-p_{Y|X}(k|i)]\\
&= (1-\gamma) p_0^{\prime} + \gamma p_1^{\prime}
\end{align}
}
\endgroup
where
\iftoggle{singlecolumn}{
\begin{align}
p_0^{\prime}&\triangleq \sum\limits_{i=1}^{|\mathfrak{X}|} \sum\limits_{j=1}^{|\mathfrak{X}|} \sum\limits_{k=1}^{|\mathfrak{X}|} u_i\: u_j\: p_{Y|X}(k|i) \:[1-p_{Y|X}(k|j)]\\
p_1^{\prime} &\triangleq \sum\limits_{i=1}^{|\mathfrak{X}|} \sum\limits_{k=1}^{|\mathfrak{X}|}
u_i\: p_{Y|X}(k|i) \:[1-p_{Y|X}(k|i)]
\end{align}
}{
\begin{align}
p_0^{\prime}&\triangleq \sum\limits_{i=1}^{|\mathfrak{X}|} \sum\limits_{j=1}^{|\mathfrak{X}|} \sum\limits_{k=1}^{|\mathfrak{X}|} u_i\: u_j\: p_{Y|X}(k|i) \:[1-p_{Y|X}(k|j)]\\
p_1^{\prime} &\triangleq \sum\limits_{i=1}^{|\mathfrak{X}|} \sum\limits_{k=1}^{|\mathfrak{X}|}
u_i\: p_{Y|X}(k|i) \:[1-p_{Y|X}(k|i)]
\end{align}
}
Similarly, we rewrite $p_1$ as
\iftoggle{singlecolumn}{
\begin{align}
p_1 &= \sum\limits_{x\in\mathfrak{X}}\sum\limits_{y\in\mathfrak{X}} \Pr(X=x) \Pr(Y_1=y|X=x) \Pr(Y_2\neq y|X=x) \\
&=\sum\limits_{i=1}^{|\mathfrak{X}|} \sum\limits_{k=1}^{|\mathfrak{X}|}
u_i\: p_{Y|X}(k|i) \:[1-p_{Y|X}(k|i)]\\
&= p_1^{\prime}
\end{align}
}{
\begin{align}
p_1 &= \sum\limits_{x\in\mathfrak{X}}\sum\limits_{y\in\mathfrak{X}} \Pr(X=x)\notag\\&\hspace{2em} \Pr(Y_1=y|X=x) \Pr(Y_2\neq y|X=x) \\
&=\sum\limits_{i=1}^{|\mathfrak{X}|} \sum\limits_{k=1}^{|\mathfrak{X}|}
u_i\: p_{Y|X}(k|i) \:[1-p_{Y|X}(k|i)]\\
&= p_1^{\prime}
\end{align}
}
Thus, for any $\gamma\in[0,1)$ we have
\iftoggle{singlecolumn}{
\begin{align}
p_0>p_1 \iff p_0^{\prime}>p_1^{\prime}
\end{align}
}{
\begin{align}
p_0>p_1 \iff p_0^{\prime}>p_1^{\prime}
\end{align}
}
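The decomposition $p_0=(1-\gamma)p_0^{\prime}+\gamma p_1^{\prime}$ derived above can also be checked directly for the assumed transition structure $P_{i,j}=(1-\gamma)u_j+\gamma\delta_{ij}$. The following sketch evaluates $p_0$ by brute force over $(x_1,x_2,y)$ and compares it to the mixture; the alphabet size, $\gamma$, and the distributions are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
k, gamma = 4, 0.35                           # hypothetical |X| and gamma
u = rng.dirichlet(np.ones(k))                # stationary distribution u
Q = rng.dirichlet(np.ones(k), size=k)        # noise channel Q[x, y] = p_{Y|X}(y|x)
# One-step transition matrix P_{i,j} = (1 - gamma) u_j + gamma * delta_{ij}.
P = (1 - gamma) * np.tile(u, (k, 1)) + gamma * np.eye(k)

# p0 = Pr(Y1 != Y2 | H0), summing over (x1, x2, y) as in the derivation.
p0 = sum(u[i] * P[i, j] * Q[i, y] * (1 - Q[j, y])
         for i in range(k) for j in range(k) for y in range(k))
p0_prime = sum(u[i] * u[j] * Q[i, y] * (1 - Q[j, y])
               for i in range(k) for j in range(k) for y in range(k))
p1_prime = sum(u[i] * Q[i, y] * (1 - Q[i, y])
               for i in range(k) for y in range(k))

# Mixture identity: p0 = (1 - gamma) p0' + gamma p1'.
assert abs(p0 - ((1 - gamma) * p0_prime + gamma * p1_prime)) < 1e-12
```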
Note that $p_0^{\prime}$ and $p_1^{\prime}$ would correspond to
\iftoggle{singlecolumn}{
\begin{align}
p_0^{\prime}&= \Pr(Y_1\neq Y_2|H_0)\\
p_1^{\prime}&= \Pr(Y_1\neq Y_2|H_1)
\end{align}
}{
\begin{align}
p_0^{\prime}&= \Pr(Y_1\neq Y_2|H_0)\\
p_1^{\prime}&= \Pr(Y_1\neq Y_2|H_1)
\end{align}
}
if the entries $X_{i,j}$ of $\mathbf{D}^{(1)}$ were drawn \emph{i.i.d.} from the stationary distribution $\pi$ of $\mathbf{P}$, instead of a Markov process. Thus, to consider the \emph{i.i.d.} database entries case, we introduce the discrete random variable $W$ with
\iftoggle{singlecolumn}{
\begin{align}
p_W(i) &= u_i,\:\forall i\in\mathfrak{X}\label{eq:iidW1}\\
p_{Y|W}(y|w)&=p_{Y|X}(y|w),\:\forall (w,y)\in\mathfrak{X}^2 \label{eq:iidW2}
\end{align}
}{
\begin{align}
p_W(i) &= u_i,\:\forall i\in\mathfrak{X}\label{eq:iidW1}\\
p_{Y|W}(y|w)&=p_{Y|X}(y|w),\:\forall (w,y)\in\mathfrak{X}^2 \label{eq:iidW2}
\end{align}
}
Then, we can rewrite $p_0^{\prime}$ and $p_1^{\prime}$ as
\iftoggle{singlecolumn}{
\begin{align}
p_0^{\prime} &=\sum\limits_{w_1\in\mathfrak{X}}\sum\limits_{w_2\in\mathfrak{X}}\sum\limits_{y\in\mathfrak{X}} p_W(w_1) p_W(w_2) p_{Y|W}(y|w_1) \left[1-p_{Y|W}(y|w_2)\right]\\
&=\sum\limits_{w_1\in\mathfrak{X}}\sum\limits_{y\in\mathfrak{X}} p_W(w_1) p_{Y|W}(y|w_1)\sum\limits_{w_2\in\mathfrak{X}}p_W(w_2) \left[1-p_{Y|W}(y|w_2)\right]\\
&=\sum\limits_{w\in\mathfrak{X}}\sum\limits_{y\in\mathfrak{X}} p_W(w) p_{Y|W}(y|w)\left[1-p_Y(y)\right]\\
p_1^{\prime} &=\sum\limits_{w\in\mathfrak{X}}\sum\limits_{y\in\mathfrak{X}} p_W(w) p_{Y|W}(y|w) \left[1-p_{Y|W}(y|w)\right]
\end{align}
}{
\begin{align}
p_0^{\prime} &=\sum\limits_{w_1\in\mathfrak{X}}\sum\limits_{w_2\in\mathfrak{X}}\sum\limits_{y\in\mathfrak{X}} p_W(w_1) p_W(w_2)\notag\\&\hspace{5em} p_{Y|W}(y|w_1) \left[1-p_{Y|W}(y|w_2)\right]\\
&=\sum\limits_{w_1\in\mathfrak{X}}\sum\limits_{y\in\mathfrak{X}} p_W(w_1) p_{Y|W}(y|w_1)\notag\\&\hspace{4em}\sum\limits_{w_2\in\mathfrak{X}}p_W(w_2) \left[1-p_{Y|W}(y|w_2)\right]\\
&=\sum\limits_{w\in\mathfrak{X}}\sum\limits_{y\in\mathfrak{X}} p_W(w) p_{Y|W}(y|w)\left[1-p_Y(y)\right]\\
p_1^{\prime} &=\sum\limits_{w\in\mathfrak{X}}\sum\limits_{y\in\mathfrak{X}} p_W(w) p_{Y|W}(y|w) \left[1-p_{Y|W}(y|w)\right]
\end{align}
}
Thus, we have
\iftoggle{singlecolumn}{
\begin{align}
p_0^{\prime} - p_1^{\prime} &=\sum\limits_{w\in\mathfrak{X}}\sum\limits_{y\in\mathfrak{X}} p_{W,Y}(w,y)\left[p_{Y|W}(y|w)-p_Y(y)\right].
\end{align}
}{
\begin{align}
p_0^{\prime} - p_1^{\prime} &=\sum\limits_{w\in\mathfrak{X}}\sum\limits_{y\in\mathfrak{X}} p_{W,Y}(w,y)\left[p_{Y|W}(y|w)-p_Y(y)\right].
\end{align}
}
For every $y\in\mathfrak{X}$, let
\iftoggle{singlecolumn}{
\begin{align}
\psi(y) &\triangleq \sum\limits_{w\in\mathfrak{X}}p_W(w)\left[p_{Y|W}(y|w)-p_Y(y)\right]^2\\
&=\sum\limits_{w\in\mathfrak{X}}p_W(w)\left[p_{Y|W}(y|w)-\sum\limits_{z\in\mathfrak{X}}p_{Y|W}(y|z)p_W(z)\right]^2\\
&\ge 0 \label{eq:psinonnegative}
\end{align}
}{
\begin{align}
\psi(y) &\triangleq \sum\limits_{w\in\mathfrak{X}}p_W(w)\left[p_{Y|W}(y|w)-p_Y(y)\right]^2\\
&=\sum\limits_{w\in\mathfrak{X}}p_W(w)\notag\\&\hspace{1.1em}\left[p_{Y|W}(y|w)-\sum\limits_{z\in\mathfrak{X}}p_{Y|W}(y|z)p_W(z)\right]^2\\
&\ge 0 \label{eq:psinonnegative}
\end{align}
}
where \eqref{eq:psinonnegative} follows from the non-negativity of the square term in the summation. It must be noted that $\psi(y)=0$ only if $p_{Y|W}(y|w)=p_Y(y),\: \forall w\in\mathfrak{X}$ with $p_W(w)=u_w>0$.
Expanding the square term, we obtain
\iftoggle{singlecolumn}{
\begin{align}
\psi(y)&= \sum\limits_{w\in\mathfrak{X}}p_W(w) p_{Y|W}(y|w)^2-2 p_Y(y) \sum\limits_{w\in\mathfrak{X}}p_W(w) p_{Y|W}(y|w)+\sum\limits_{w\in\mathfrak{X}}p_W(w) p_{Y}(y)^2\\
&= \sum\limits_{w\in\mathfrak{X}}p_W(w) p_{Y|W}(y|w)^2 - 2 p_Y(y)^2 +p_Y(y)^2\\
&= \sum\limits_{w\in\mathfrak{X}}p_W(w) p_{Y|W}(y|w)^2 - p_Y(y)^2
\end{align}
}{
\begin{align}
\psi(y)&= \sum\limits_{w\in\mathfrak{X}}p_W(w) p_{Y|W}(y|w)^2\notag\\&\hspace{3em}-2 p_Y(y) \sum\limits_{w\in\mathfrak{X}}p_W(w) p_{Y|W}(y|w)\notag\\&\hspace{3em}+\sum\limits_{w\in\mathfrak{X}}p_W(w) p_{Y}(y)^2\\
&= \sum\limits_{w\in\mathfrak{X}}p_W(w) p_{Y|W}(y|w)^2 - 2 p_Y(y)^2 +p_Y(y)^2\\
&= \sum\limits_{w\in\mathfrak{X}}p_W(w) p_{Y|W}(y|w)^2 - p_Y(y)^2
\end{align}
}
Next, we rewrite $p_0^\prime-p_1^\prime$ as
\iftoggle{singlecolumn}{
\begin{align}
p_0^\prime-p_1^\prime&=\sum\limits_{y\in\mathfrak{X}}\sum\limits_{w\in\mathfrak{X}} p_{W,Y}(w,y)\left[p_{Y|W}(y|w)-p_Y(y)\right]\\
&= \sum\limits_{y\in\mathfrak{X}}\left[\left(\sum\limits_{w\in\mathfrak{X}} p_W(w) p_{Y|W}(y|w)^2\right)-p_Y(y)^2\right]\\
&= \sum\limits_{y\in\mathfrak{X}} \psi(y)\\
&\ge 0
\end{align}
}{
\begin{align}
p_0^\prime-p_1^\prime&=\sum\limits_{y\in\mathfrak{X}}\sum\limits_{w\in\mathfrak{X}} p_{W,Y}(w,y)\left[p_{Y|W}(y|w)-p_Y(y)\right]\\
&= \sum\limits_{y\in\mathfrak{X}}\left[\left(\sum\limits_{w\in\mathfrak{X}} p_W(w) p_{Y|W}(y|w)^2\right)-p_Y(y)^2\right]\\
&= \sum\limits_{y\in\mathfrak{X}} \psi(y)\\
&\ge 0
\end{align}
}
with $p_0^\prime-p_1^\prime=0$ only when $p_{Y|W}(y|w)=p_Y(y)$, $\forall (w,y)\in\mathfrak{X}^2$. In other words, $p_0^\prime>p_1^\prime$, and in turn $p_0>p_1$, as long as the two databases are not independent.
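The chain of identities above is easy to sanity-check numerically. The sketch below, with an arbitrary hypothetical stationary distribution and noise channel, evaluates $p_0^{\prime}$, $p_1^{\prime}$, and $\sum_{y}\psi(y)$ directly, confirming the identity $p_0^{\prime}-p_1^{\prime}=\sum_{y}\psi(y)$ and the fact that the gap vanishes exactly when the output is independent of the input.

```python
import numpy as np

def replica_test_params(u, Q):
    """Return p0', p1' and sum_y psi(y) for stationary distribution u
    and noise channel Q, where Q[w, y] = p_{Y|W}(y | w)."""
    pY = u @ Q                                             # marginal p_Y(y)
    p1p = float(np.sum(u[:, None] * Q * (1.0 - Q)))        # p1'
    p0p = float(np.sum(u[:, None] * Q * (1.0 - pY[None, :])))  # p0'
    psi = float(np.sum(u[:, None] * (Q - pY[None, :]) ** 2))   # sum_y psi(y)
    return p0p, p1p, psi

rng = np.random.default_rng(0)
u = rng.dirichlet(np.ones(4))                # hypothetical stationary distribution
Q = rng.dirichlet(np.ones(4), size=4)        # hypothetical row-stochastic channel

p0p, p1p, psi = replica_test_params(u, Q)
assert abs((p0p - p1p) - psi) < 1e-12        # p0' - p1' = sum_y psi(y)
assert p0p > p1p                             # strict, since the channel rows differ

# If Y is independent of W (identical channel rows), the gap vanishes.
Q_ind = np.tile(rng.dirichlet(np.ones(4)), (4, 1))
p0i, p1i, _ = replica_test_params(u, Q_ind)
assert abs(p0i - p1i) < 1e-12
```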
We next choose any $\tau\in(p_1,p_0)$ bounded away from both $p_0$ and $p_1$. Let $A_j$ denote the event that $C^{m_n}_{j}$ and $C^{m_n}_{j+1}$ are noisy replicas, let $B_j$ denote the event that the algorithm detects $C^{m_n}_{j}$ and $C^{m_n}_{j+1}$ as replicas, and let $E_j\triangleq (A_j\cap B_j^c)\cup(A_j^c\cap B_j)$ denote the replica detection error event for the $j$\textsuperscript{th} consecutive column pair. Via the union bound, we can upper bound the total probability of replica detection error as
\iftoggle{singlecolumn}{
\begin{align}
\Pr(\bigcup\limits_{j=1}^{{K_n}-1} E_j)&\le \sum\limits_{j=1}^{{K_n}-1} \Pr(A_j ^c) \Pr(B_j|A_j ^c)+ \Pr(A_j) \Pr(B_j^c|A_j)\label{eq:replicadetectionbound}
\end{align}
}{
\begin{align}
\Pr(\bigcup\limits_{j=1}^{{K_n}-1} E_j)&\le \sum\limits_{j=1}^{{K_n}-1} \Pr(A_j ^c) \Pr(B_j|A_j ^c) \notag\\&\hspace{5em}+\Pr(A_j) \Pr(B_j^c|A_j)\label{eq:replicadetectionbound}
\end{align}
}
\begin{sloppypar}
Note that conditioned on $A_j^c$, $d_H(C^{m_n}_j,C^{m_n}_{j+1})\sim\text{Binom}(m_n,p_0)$ and conditioned on $A_j$, $d_H(C^{m_n}_j,C^{m_n}_{j+1})\sim\text{Binom}(m_n,p_1)$. Then, from the Chernoff bound~\cite[Lemma 4.7.2]{ash2012information}, we get
\iftoggle{singlecolumn}{
\begin{align}
\Pr(B_j|A_j ^c)&\le 2^{-m_n D\left(\tau\|p_0\right)}\label{eq:chernoff1}\\
\Pr(B_j^c|A_j)&\le 2^{-m_n D\left(1-\tau \|1-p_1\right)}\label{eq:chernoff2}
\end{align}
}{
\begin{align}
\Pr(B_j|A_j ^c)&\le 2^{-m_n D\left(\tau\|p_0\right)}\label{eq:chernoff1}\\
\Pr(B_j^c|A_j)&\le 2^{-m_n D\left(1-\tau \|1-p_1\right)}\label{eq:chernoff2}
\end{align}
}
Thus, we get
\iftoggle{singlecolumn}{
\begin{align}
\Pr(\bigcup\limits_{j=1}^{{K_n}-1} E_j)&\le ({K_n}-1)\left[ 2^{-m_n D\left(\tau\|p_0\right)}+ 2^{-m_n D\left(1-\tau\|1-p_1\right)}\right]\label{eq:replicadetectionlast}
\end{align}
}{
\begin{align}
\Pr(\bigcup\limits_{j=1}^{{K_n}-1} E_j)&\le ({K_n}-1)\Big[ 2^{-m_n D\left(\tau\|p_0\right)}\notag\\&\hspace{5.1em}+ 2^{-m_n D\left(1-\tau\|1-p_1\right)}\Big]\label{eq:replicadetectionlast}
\end{align}
}
Observe that since the RHS of \eqref{eq:replicadetectionlast} has $2{K_n}-2=O(n)$ terms decaying exponentially in~$m_n$, for any $m_n=\omega(\log n)$ we have
\iftoggle{singlecolumn}{
\begin{align}
\Pr(\bigcup\limits_{j=1}^{{K_n}-1} E_j) \to 0 \:\text{ as } n\to\infty.
\end{align}
}{
\begin{align}
\Pr(\bigcup\limits_{j=1}^{{K_n}-1} E_j) \to 0 \:\text{ as } n\to\infty.
\end{align}
}
Finally, observing that $n\sim\log m_n$, so that $m_n$ is indeed $\omega(\log n)$, concludes the proof.\qed
\end{sloppypar}
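As a quick illustration of the threshold test analyzed above, the following Monte Carlo sketch simulates the two hypotheses for a binary alphabet with symmetric noise; all parameters are hypothetical, and the $H_0$ case is simplified to independent uniform columns. Under $H_1$ the two observed columns are noisy replicas of a common column, and the empirical Hamming fraction is compared against a threshold $\tau\in(p_1,p_0)$.

```python
import numpy as np

rng = np.random.default_rng(1)
m, eps, trials = 500, 0.1, 2000          # hypothetical: rows, BSC noise, repetitions

p1 = 2 * eps * (1 - eps)                 # P(Y1 != Y2 | noisy replicas), BSC(eps)
p0 = 0.5                                 # P(Y1 != Y2 | independent uniform columns)
tau = (p0 + p1) / 2                      # any threshold strictly between p1 and p0

def hamming_fraction(replica: bool) -> float:
    """Empirical d_H(C_j, C_{j+1}) / m for one simulated column pair."""
    x1 = rng.integers(0, 2, m)
    x2 = x1 if replica else rng.integers(0, 2, m)
    y1 = x1 ^ (rng.random(m) < eps).astype(int)   # BSC(eps) on each entry
    y2 = x2 ^ (rng.random(m) < eps).astype(int)
    return float(np.mean(y1 != y2))

miss = np.mean([hamming_fraction(True) > tau for _ in range(trials)])
false_alarm = np.mean([hamming_fraction(False) <= tau for _ in range(trials)])
assert miss < 0.01 and false_alarm < 0.01  # both error types are negligible here
```

Both empirical error rates are essentially zero at this column size, consistent with the exponential decay in $m_n$ given by \eqref{eq:chernoff1}-\eqref{eq:chernoff2}.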
\section{Proof of Lemma~\ref{lem:seededdeletiondetection}}
\label{proof:seededdeletiondetection}
Let ${(\tilde{X}_{i,j},\tilde{Y}_{i,j})}$ be a pair of matching entries. Since the database distribution is stationary, WLOG, we can assume $(i,j)=(1,1)$. Now, given ${(\tilde{X}_{1,1},\tilde{Y}_{1,1})}$, and the non-matching pair ${(\tilde{X}_{1,j},\tilde{Y}_{1,1})}$ with $j-1=r\neq 0$,
we first prove the existence of a bijective mapping $\sigma:\mathfrak{X}\to\mathfrak{X}$ such that for any $r\in[n-1]$
\iftoggle{singlecolumn}{
\begin{align}
\Pr(\sigma(\tilde{Y}_{1,1})\neq \tilde{X}_{1,1})<\Pr(\sigma(\tilde{Y}_{1,1})\neq \tilde{X}_{1,r+1}).
\end{align}
}{
\begin{align}
\Pr(\sigma(\tilde{Y}_{1,1})\neq \tilde{X}_{1,1})<\Pr(\sigma(\tilde{Y}_{1,1})\neq \tilde{X}_{1,r+1}).
\end{align}
}
For given $\sigma$ and $r\in[n-1]$ let
\iftoggle{singlecolumn}{
\begin{align}
q_{0,\sigma}^{(r)}&\triangleq\Pr(\sigma(\tilde{Y}_{1,1})\neq \tilde{X}_{1,r+1})\\
q_{1,\sigma}&\triangleq\Pr(\sigma(\tilde{Y}_{1,1})\neq \tilde{X}_{1,1})
\end{align}
}{
\begin{align}
q_{0,\sigma}^{(r)}&\triangleq\Pr(\sigma(\tilde{Y}_{1,1})\neq \tilde{X}_{1,r+1})\\
q_{1,\sigma}&\triangleq\Pr(\sigma(\tilde{Y}_{1,1})\neq \tilde{X}_{1,1})
\end{align}
}
Here, our goal is to show that there exists at least one $\sigma$ satisfying
\iftoggle{singlecolumn}{
\begin{align}
q_{0,\sigma}^{(r)}>q_{1,\sigma}\label{eq:q0biggerthanq1},\:\forall r\in[n-1].
\end{align}
}{
\begin{align}
q_{0,\sigma}^{(r)}>q_{1,\sigma}\label{eq:q0biggerthanq1},\:\forall r\in[n-1].
\end{align}
}
We can rewrite $q_{0,\sigma}^{(r)}$ as
\iftoggle{singlecolumn}{
\begin{align}
q_{0,\sigma}^{(r)}&=\sum\limits_{x_1\in\mathfrak{X}}\sum\limits_{x_2\in\mathfrak{X}} \Pr(\tilde{X}_{1,1}=x_1) \Pr(\tilde{X}_{1,r+1}=x_2|\tilde{X}_{1,1}=x_1) \Pr(\sigma(\tilde{Y}_{1,1})\neq x_2|\tilde{X}_{1,1} = x_1) \\
&= \sum\limits_{i=1}^{|\mathfrak{X}|}\sum\limits_{j=1}^{|\mathfrak{X}|} u_i\: (\mathbf{P}^r)_{i,j}\: [1-p_{Y|X}(\sigma^{-1}(j)|i)]\\
&= \sum\limits_{i=1}^{|\mathfrak{X}|}\sum\limits_{j=1}^{|\mathfrak{X}|} u_i\: [(1-\gamma^r)u_j+\gamma^r \delta_{i j}]\: [1-p_{Y|X}(\sigma^{-1}(j)|i)]\\
&=(1-\gamma^r) \sum\limits_{i=1}^{|\mathfrak{X}|}\sum\limits_{j=1}^{|\mathfrak{X}|} u_i\: u_j\: [1-p_{Y|X}(\sigma^{-1}(j)|i)] + \gamma^r \sum\limits_{i=1}^{|\mathfrak{X}|} u_i\: [1-p_{Y|X}(\sigma^{-1}(i)|i)]\\
&= (1-\gamma^r) q_{0,\sigma}^\prime+ \gamma^r q_{1,\sigma}^\prime
\end{align}
}{
\begin{align}
q_{0,\sigma}^{(r)}&=\sum\limits_{x_1\in\mathfrak{X}}\sum\limits_{x_2\in\mathfrak{X}} \Pr(\tilde{X}_{1,1}=x_1) \notag\\&\hspace{4em}\Pr(\tilde{X}_{1,r+1}=x_2|\tilde{X}_{1,1}=x_1)\notag\\&\hspace{4em} \Pr(\sigma(\tilde{Y}_{1,1})\neq x_2|\tilde{X}_{1,1} = x_1) \\
&= \sum\limits_{i=1}^{|\mathfrak{X}|}\sum\limits_{j=1}^{|\mathfrak{X}|} u_i\: (\mathbf{P}^r)_{i,j}\: [1-p_{Y|X}(\sigma^{-1}(j)|i)]\\
&= \sum\limits_{i=1}^{|\mathfrak{X}|}\sum\limits_{j=1}^{|\mathfrak{X}|} u_i\: [(1-\gamma^r)u_j+\gamma^r \delta_{i j}]\notag\\&\hspace{6em} [1-p_{Y|X}(\sigma^{-1}(j)|i)]\\
&=(1-\gamma^r) \sum\limits_{i=1}^{|\mathfrak{X}|}\sum\limits_{j=1}^{|\mathfrak{X}|} u_i\: u_j\: [1-p_{Y|X}(\sigma^{-1}(j)|i)] \notag\\ &\hspace{3.5em}+ \gamma^r \sum\limits_{i=1}^{|\mathfrak{X}|} u_i\: [1-p_{Y|X}(\sigma^{-1}(i)|i)]\\
&= (1-\gamma^r) q_{0,\sigma}^\prime+ \gamma^r q_{1,\sigma}^\prime
\end{align}
}
where
\iftoggle{singlecolumn}{
\begin{align}
q_{0,\sigma}^\prime&\triangleq \sum\limits_{i=1}^{|\mathfrak{X}|}\sum\limits_{j=1}^{|\mathfrak{X}|} u_i\: u_j\: [1-p_{Y|X}(\sigma^{-1}(j)|i)]\\
q_{1,\sigma}^\prime &\triangleq \sum\limits_{i=1}^{|\mathfrak{X}|} u_i\: [1-p_{Y|X}(\sigma^{-1}(i)|i)]
\end{align}
}{
\begin{align}
q_{0,\sigma}^\prime&\triangleq \sum\limits_{i=1}^{|\mathfrak{X}|}\sum\limits_{j=1}^{|\mathfrak{X}|} u_i\: u_j\: [1-p_{Y|X}(\sigma^{-1}(j)|i)]\\
q_{1,\sigma}^\prime &\triangleq \sum\limits_{i=1}^{|\mathfrak{X}|} u_i\: [1-p_{Y|X}(\sigma^{-1}(i)|i)]
\end{align}
}
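The key step in the derivation above is the closed form $(\mathbf{P}^r)_{i,j}=(1-\gamma^r)u_j+\gamma^r \delta_{ij}$ for the $r$-step transition matrix, which follows since $\mathbbm{1}u^\top$ is idempotent. This can be verified numerically, as in the sketch below with hypothetical parameters.

```python
import numpy as np

rng = np.random.default_rng(4)
k, gamma, r = 4, 0.35, 5                    # hypothetical |X|, gamma, and lag r
u = rng.dirichlet(np.ones(k))               # stationary distribution u
# One-step matrix P_{i,j} = (1 - gamma) u_j + gamma * delta_{ij}.
P = (1 - gamma) * np.tile(u, (k, 1)) + gamma * np.eye(k)

# Closed form for the r-step transition matrix.
Pr_closed = (1 - gamma**r) * np.tile(u, (k, 1)) + gamma**r * np.eye(k)
assert np.allclose(np.linalg.matrix_power(P, r), Pr_closed)
```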
Similarly, we rewrite $q_{1,\sigma}$ as
\iftoggle{singlecolumn}{
\begin{align}
q_{1,\sigma}&=\sum\limits_{x\in\mathfrak{X}} \Pr(\tilde{X}_{1,1}=x) \Pr(\sigma(\tilde{Y}_{1,1})\neq x|\tilde{X}_{1,1} = x) \\
&=\sum\limits_{i=1}^{|\mathfrak{X}|} u_i [1-p_{Y|X}(\sigma^{-1}(i)|i)]\\
&= q_{1,\sigma}^\prime
\end{align}
}{
\begin{align}
q_{1,\sigma}&=\sum\limits_{x\in\mathfrak{X}} \Pr(\tilde{X}_{1,1}=x) \Pr(\sigma(\tilde{Y}_{1,1})\neq x|\tilde{X}_{1,1} = x) \\
&=\sum\limits_{i=1}^{|\mathfrak{X}|} u_i [1-p_{Y|X}(\sigma^{-1}(i)|i)]\\
&= q_{1,\sigma}^\prime
\end{align}
}
Thus, for any $\gamma\in[0,1)$, we have
\iftoggle{singlecolumn}{
\begin{align}
\exists \sigma,\: \forall r\in[n-1], \: q_{0,\sigma}^{(r)}>q_{1,\sigma} \iff \exists \sigma,\: q_{0,\sigma}^\prime>q_{1,\sigma}^\prime
\end{align}
}{
\begin{align}
\exists \sigma,\: \forall r\in[n-1], \: q_{0,\sigma}^{(r)}>q_{1,\sigma} \iff \exists \sigma,\: q_{0,\sigma}^\prime>q_{1,\sigma}^\prime
\end{align}
}
Note that $q_{0,\sigma}^\prime$ and $q_{1,\sigma}^\prime$ would correspond to
\iftoggle{singlecolumn}{
\begin{align}
q_{0,\sigma}^\prime&=\Pr(\sigma(\tilde{Y}_{1,1})\neq \tilde{X}_{1,j}),\hspace{1em} j\neq 1\\
q_{1,\sigma}^\prime&=\Pr(\sigma(\tilde{Y}_{1,1})\neq \tilde{X}_{1,1})
\end{align}
}{
\begin{align}
q_{0,\sigma}^\prime&=\Pr(\sigma(\tilde{Y}_{1,1})\neq \tilde{X}_{1,j}),\hspace{1em} j\neq 1\\
q_{1,\sigma}^\prime&=\Pr(\sigma(\tilde{Y}_{1,1})\neq \tilde{X}_{1,1})
\end{align}
}
if the entries $\tilde{X}_{i,j}$ of $\mathbf{G}^{(1)}$ were drawn \emph{i.i.d.} from the distribution $\pi=[u_1,\dots,u_{|\mathfrak{X}|}]$, instead of a Markov process. Thus, we recall the discrete random variable $W$, defined in equations~\eqref{eq:iidW1}-\eqref{eq:iidW2}, with
\iftoggle{singlecolumn}{
\begin{align}
p_W(i) &= u_i,\:\forall i\in\mathfrak{X}\\
p_{Y|W}(y|w)&=p_{Y|X}(y|w),\:\forall (w,y)\in\mathfrak{X}^2
\end{align}
}{
\begin{align}
p_W(i) &= u_i,\:\forall i\in\mathfrak{X}\\
p_{Y|W}(y|w)&=p_{Y|X}(y|w),\:\forall (w,y)\in\mathfrak{X}^2
\end{align}
}
Then, we can rewrite $q_{0,\sigma}^\prime$ and $q_{1,\sigma}^\prime$ as
\iftoggle{singlecolumn}{
\begin{align}
q_{0,\sigma}^\prime&= \sum\limits_{w_1\in\mathfrak{X}}\sum\limits_{w_2\in\mathfrak{X}}p_W(w_1) p_W(w_2)[1-p_{Y|W}(\sigma^{-1}(w_2)|w_1)]\\
q_{1,\sigma}^\prime
&= \sum\limits_{w\in\mathfrak{X}}p_W(w) [1-p_{Y|W}(\sigma^{-1}(w)|w)]
\end{align}
}{
\begin{align}
q_{0,\sigma}^\prime&= \sum\limits_{w_1\in\mathfrak{X}}\sum\limits_{w_2\in\mathfrak{X}}p_W(w_1) p_W(w_2)\notag\\ &\hspace{5em}[1-p_{Y|W}(\sigma^{-1}(w_2)|w_1)]\\
q_{1,\sigma}^\prime
&= \sum\limits_{w\in\mathfrak{X}}p_W(w) [1-p_{Y|W}(\sigma^{-1}(w)|w)]
\end{align}
}
We first prove the following:
\iftoggle{singlecolumn}{
\begin{align}
\sum\limits_{\sigma} \left(q_{0,\sigma}^\prime-q_{1,\sigma}^\prime\right)=0\label{eq:q0q1overphizero}
\end{align}
}{
\begin{align}
\sum\limits_{\sigma} \left(q_{0,\sigma}^\prime-q_{1,\sigma}^\prime\right)=0\label{eq:q0q1overphizero}
\end{align}
}
where the summation is over all permutations of $\mathfrak{X}$. For brevity, let
\iftoggle{singlecolumn}{
\begin{align}
Q_{i,j}\triangleq p_{Y|W}(j|i)\quad \forall i,j\in\mathfrak{X} \label{eq:definep}
\end{align}
}{
\begin{align}
Q_{i,j}\triangleq p_{Y|W}(j|i)\quad \forall i,j\in\mathfrak{X} \label{eq:definep}
\end{align}
}
Note that from \eqref{eq:definep}, we have
\iftoggle{singlecolumn}{
\begin{align}
\sum\limits_{j=1}^{|\mathfrak{X}|}& Q_{i,j}=1\quad \forall i\in \mathfrak{X}\label{eq:pijsumto1}\\
\sum\limits_{i=1}^{|\mathfrak{X}|}&\sum\limits_{j=1}^{|\mathfrak{X}|} Q_{i,j}=|\mathfrak{X}|\label{eq:pijsumtoX}
\end{align}
}{
\begin{align}
\sum\limits_{j=1}^{|\mathfrak{X}|}& Q_{i,j}=1\quad \forall i\in \mathfrak{X}\label{eq:pijsumto1}\\
\sum\limits_{i=1}^{|\mathfrak{X}|}&\sum\limits_{j=1}^{|\mathfrak{X}|} Q_{i,j}=|\mathfrak{X}|\label{eq:pijsumtoX}
\end{align}
}
Taking the sum over all $\sigma$, we obtain
\iftoggle{singlecolumn}{
\begin{align}
\sum\limits_{\sigma} \left(q_{1,\sigma}^\prime-q_{0,\sigma}^\prime\right)
&= \sum\limits_{\sigma}\sum\limits_{i=1}^{|\mathfrak{X}|}\sum\limits_{j=1}^{|\mathfrak{X}|} p_W(i) p_W(j) Q_{i,\sigma^{-1}(j)}-\sum\limits_{\sigma}\sum\limits_{i=1}^{|\mathfrak{X}|} p_W(i) Q_{i,\sigma^{-1}(i)}\label{eq:q0q1sumoverphi}
\end{align}
}{
\begin{align}
\sum\limits_{\sigma} \left(q_{1,\sigma}^\prime-q_{0,\sigma}^\prime\right)
&= \sum\limits_{\sigma}\sum\limits_{i=1}^{|\mathfrak{X}|}\sum\limits_{j=1}^{|\mathfrak{X}|} p_W(i) p_W(j) Q_{i,\sigma^{-1}(j)}\notag\\&\hspace{3em}-\sum\limits_{\sigma}\sum\limits_{i=1}^{|\mathfrak{X}|} p_W(i) Q_{i,\sigma^{-1}(i)}\label{eq:q0q1sumoverphi}
\end{align}
}
Combining \eqref{eq:pijsumto1}-\eqref{eq:q0q1sumoverphi}, we now show that both terms on the RHS of \eqref{eq:q0q1sumoverphi} are equal to $(|\mathfrak{X}|-1)!$.
We first look at the second term on the RHS of \eqref{eq:q0q1sumoverphi}.
\iftoggle{singlecolumn}{
\begin{align}
\sum\limits_{\sigma}\sum\limits_{i=1}^{|\mathfrak{X}|} p_W(i) Q_{i,\sigma^{-1}(i)}&=\sum\limits_{i=1}^{|\mathfrak{X}|} p_W(i) \sum\limits_{\sigma} Q_{i,\sigma^{-1}(i)}\\
&=(|\mathfrak{X}|-1)!\sum\limits_{j=1}^{|\mathfrak{X}|}\sum\limits_{i=1}^{|\mathfrak{X}|} p_W(i) Q_{i,j}\label{eq:permlhs}\\
&= (|\mathfrak{X}|-1)!\sum\limits_{i=1}^{|\mathfrak{X}|}\sum\limits_{j=1}^{|\mathfrak{X}|} p_{W,Y}(i,j)\\
&= (|\mathfrak{X}|-1)!\label{eq:philhs}
\end{align}
}{
\begin{align}
\sum\limits_{\sigma}&\sum\limits_{i=1}^{|\mathfrak{X}|} p_W(i) Q_{i,\sigma^{-1}(i)}\notag\\&=\sum\limits_{i=1}^{|\mathfrak{X}|} p_W(i) \sum\limits_{\sigma} Q_{i,\sigma^{-1}(i)}\\
&=(|\mathfrak{X}|-1)!\sum\limits_{j=1}^{|\mathfrak{X}|}\sum\limits_{i=1}^{|\mathfrak{X}|} p_W(i) Q_{i,j}\label{eq:permlhs}\\
&= (|\mathfrak{X}|-1)!\sum\limits_{i=1}^{|\mathfrak{X}|}\sum\limits_{j=1}^{|\mathfrak{X}|} p_{W,Y}(i,j)\\
&= (|\mathfrak{X}|-1)!\label{eq:philhs}
\end{align}
}
where \eqref{eq:permlhs} follows from the fact that for any $j\in\mathfrak{X}$, we have exactly $(|\mathfrak{X}|-1)!$ permutations assigning $j$ to $i$ (or equivalently $\sigma^{-1}(i)=j$).
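The counting argument above can be checked numerically. The following sketch is an illustrative aside, not part of the proof; the alphabet size, $p_W$, and the row-stochastic matrix $Q$ are arbitrary choices. It enumerates all permutations of a small alphabet and verifies that both terms on the RHS of \eqref{eq:q0q1sumoverphi} equal $(|\mathfrak{X}|-1)!$, hence that their difference sums to zero over all $\sigma$:

```python
# Sanity check (not part of the proof): for a random p_W and a random
# row-stochastic Q, both terms on the RHS of eq:q0q1sumoverphi equal
# (|X|-1)! when summed over all permutations sigma.
from itertools import permutations
import math
import random

random.seed(0)
X = 3  # alphabet size |X| (arbitrary small choice)
pW = [random.random() for _ in range(X)]
s = sum(pW)
pW = [p / s for p in pW]
# Q[i][j] plays the role of p_{Y|W}(j|i): random row-stochastic matrix
Q = []
for _ in range(X):
    row = [random.random() for _ in range(X)]
    r = sum(row)
    Q.append([v / r for v in row])

term1 = term2 = 0.0
for sigma in permutations(range(X)):
    inv = [0] * X
    for k, v in enumerate(sigma):
        inv[v] = k  # inv = sigma^{-1}
    term1 += sum(pW[i] * pW[j] * Q[i][inv[j]]
                 for i in range(X) for j in range(X))
    term2 += sum(pW[i] * Q[i][inv[i]] for i in range(X))

fact = math.factorial(X - 1)  # (|X|-1)!
```

Both `term1` and `term2` come out equal to `fact`, mirroring \eqref{eq:philhs} and \eqref{eq:phirhs}.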
Now we look at the first term.
\iftoggle{singlecolumn}{
\begin{align}
\sum\limits_{\sigma}\sum\limits_{i=1}^{|\mathfrak{X}|}\sum\limits_{j=1}^{|\mathfrak{X}|} p_W(i)p_W(j) Q_{i,\sigma^{-1}(j)}&= \sum\limits_{i=1}^{|\mathfrak{X}|}\sum\limits_{j=1}^{|\mathfrak{X}|} p_W(i) p_W(j) \sum\limits_{\sigma} Q_{i,\sigma^{-1}(j)}\\
&=\sum\limits_{i=1}^{|\mathfrak{X}|}\sum\limits_{j=1}^{|\mathfrak{X}|} p_W(i) p_W(j) (|\mathfrak{X}|-1)!\sum\limits_{k=1}^{|\mathfrak{X}|}Q_{i,k}\label{eq:permrhs}\\
&= (|\mathfrak{X}|-1)!\label{eq:phirhs}
\end{align}
}{
\begin{align}
\sum\limits_{\sigma}&\sum\limits_{i=1}^{|\mathfrak{X}|}\sum\limits_{j=1}^{|\mathfrak{X}|} p_W(i)p_W(j) Q_{i,\sigma^{-1}(j)}\notag\\&= \sum\limits_{i=1}^{|\mathfrak{X}|}\sum\limits_{j=1}^{|\mathfrak{X}|} p_W(i) p_W(j) \sum\limits_{\sigma} Q_{i,\sigma^{-1}(j)}\\
&=\sum\limits_{i=1}^{|\mathfrak{X}|}\sum\limits_{j=1}^{|\mathfrak{X}|} p_W(i) p_W(j) (|\mathfrak{X}|-1)!\sum\limits_{k=1}^{|\mathfrak{X}|}Q_{i,k}\label{eq:permrhs}\\
&= (|\mathfrak{X}|-1)!\label{eq:phirhs}
\end{align}
}
Again, \eqref{eq:permrhs} follows from the fact that for each $k\in\mathfrak{X}$, there are exactly $(|\mathfrak{X}|-1)!$ permutations $\sigma$ which map $k$ to $j$ (or equivalently $\sigma^{-1}(j)=k$).
Thus, we have shown that both terms on the RHS of \eqref{eq:q0q1sumoverphi} are equal to $(|\mathfrak{X}|-1)!$, proving~\eqref{eq:q0q1overphizero}. Now, we only need to show that
\iftoggle{singlecolumn}{
\begin{align}
\exists \sigma\quad q_{0,\sigma}^\prime-q_{1,\sigma}^\prime\neq 0. \label{eq:phifinal}
\end{align}
}{
\begin{align}
\exists \sigma\quad q_{0,\sigma}^\prime-q_{1,\sigma}^\prime\neq 0. \label{eq:phifinal}
\end{align}
}
This suffices because, by~\eqref{eq:q0q1overphizero}, unless $q_{0,\sigma}^\prime-q_{1,\sigma}^\prime= 0$ for all $\sigma$, there automatically exists a $\sigma$ for which this difference is strictly positive. Indeed, if there exists a $\sigma$ with $q_{0,\sigma}^\prime-q_{1,\sigma}^\prime\neq 0$, we have either
\begin{itemize}
\item $q_{0,\sigma}^\prime-q_{1,\sigma}^\prime> 0$, which is the desired result, or
\item $q_{0,\sigma}^\prime-q_{1,\sigma}^\prime<0$, which from~\eqref{eq:q0q1overphizero} requires the existence of another permutation $\tilde{\sigma}$ with $q_{0,\tilde{\sigma}}^\prime-q_{1,\tilde{\sigma}}^\prime> 0$.
\end{itemize}
We will prove~\eqref{eq:phifinal} by arguing that
\iftoggle{singlecolumn}{
\begin{align}
q_{0,\sigma}^\prime-q_{1,\sigma}^\prime= 0\quad \forall \sigma \iff p_{Y|W}(y|w)=p_Y(y)\quad \forall (w,y)\in\mathfrak{X}^2 \label{eq:phicondition}
\end{align}
}{
\begin{align}
q_{0,\sigma}^\prime&-q_{1,\sigma}^\prime= 0\hspace{0.5em} \forall \sigma \notag\\&\iff p_{Y|W}(y|w)=p_Y(y)\hspace{0.5em} \forall (w,y)\in\mathfrak{X}^2 \label{eq:phicondition}
\end{align}
}
which contradicts our $p_{Y|W}\neq p_Y$ assumption.
We first prove that the RHS of \eqref{eq:phicondition} implies the LHS. Suppose ${p_{Y|W}(y|w)=p_Y(y)}$, $\forall (w,y)\in\mathfrak{X}^2$. In other words, $Q_{i,k}=Q_{j,k}$, $\forall{(i,j,k)\in\mathfrak{X}^3}$. Then for any $\sigma$, we have
\iftoggle{singlecolumn}{
\begin{align}
q_{0,\sigma}^\prime&=\sum\limits_{i=1}^{|\mathfrak{X}|}\sum\limits_{j=1}^{|\mathfrak{X}|} p_W(i) p_W(j) Q_{i,\sigma^{-1}(j)}\\&= \sum\limits_{i=1}^{|\mathfrak{X}|}\sum\limits_{j=1}^{|\mathfrak{X}|} p_W(i) p_W(j) Q_{k,\sigma^{-1}(j)}, \hspace{1em} k\neq i\\
&=\sum\limits_{j=1}^{|\mathfrak{X}|} p_W(j) Q_{k,\sigma^{-1}(j)}\\
&=\sum\limits_{j=1}^{|\mathfrak{X}|} p_W(j) Q_{j,\sigma^{-1}(j)}\\
&= q_{1,\sigma}^\prime
\end{align}
}{
\begin{align}
q_{0,\sigma}^\prime&=\sum\limits_{i=1}^{|\mathfrak{X}|}\sum\limits_{j=1}^{|\mathfrak{X}|} p_W(i) p_W(j) Q_{i,\sigma^{-1}(j)}\\&= \sum\limits_{i=1}^{|\mathfrak{X}|}\sum\limits_{j=1}^{|\mathfrak{X}|} p_W(i) p_W(j) Q_{k,\sigma^{-1}(j)}, \hspace{1em} k\neq i\\
&=\sum\limits_{j=1}^{|\mathfrak{X}|} p_W(j) Q_{k,\sigma^{-1}(j)}\\
&=\sum\limits_{j=1}^{|\mathfrak{X}|} p_W(j) Q_{j,\sigma^{-1}(j)}\\
&= q_{1,\sigma}^\prime
\end{align}
}
finishing the proof of this direction.
Now, we prove the reverse implication. Suppose the LHS of \eqref{eq:phicondition} holds. In other words, for any $\sigma$
\iftoggle{singlecolumn}{
\begin{align}
\sum\limits_{i=1}^{|\mathfrak{X}|} p_W(i) Q_{i,\sigma^{-1}(i)} = \sum\limits_{i=1}^{|\mathfrak{X}|}\sum\limits_{j=1}^{|\mathfrak{X}|} p_W(i) p_W(j) Q_{i,\sigma^{-1}(j)} \label{eq:lhsequivalence}
\end{align}
}{
\begin{align}
\sum\limits_{i=1}^{|\mathfrak{X}|} p_W(i) Q_{i,\sigma^{-1}(i)} = \sum\limits_{i=1}^{|\mathfrak{X}|}\sum\limits_{j=1}^{|\mathfrak{X}|} p_W(i) p_W(j) Q_{i,\sigma^{-1}(j)} \label{eq:lhsequivalence}
\end{align}
}
First, we look at the binary case $\mathfrak{X}=\{1,2\}$. In this case, we obtain
\iftoggle{singlecolumn}{
\begin{align}
p_W(1) Q_{1,1}+p_W(2) Q_{2,2} &= p_W(1)^2 Q_{1,1}+ p_W(1) p_W(2) Q_{1,2}\notag\\
&\hspace{4em}+ p_W(2) p_W(1) Q_{2,1}+ p_W(2)^2 Q_{2,2}\\
Q_{1,1}+Q_{2,2}&=Q_{1,2}+Q_{2,1}\\
Q_{1,1}+Q_{2,2}&=1-Q_{1,1}+1-Q_{2,2}\\
Q_{1,1}+Q_{2,2}&= 1
\end{align}
}{
\begin{align}
p_W(1)& Q_{1,1}+p_W(2) Q_{2,2} \notag\\&= p_W(1)^2 Q_{1,1}+ p_W(1) p_W(2) Q_{1,2}\notag\\
&+ p_W(2) p_W(1) Q_{2,1}+ p_W(2)^2 Q_{2,2}
\end{align}
\begin{align}
Q_{1,1}+Q_{2,2}&=Q_{1,2}+Q_{2,1}\\
Q_{1,1}+Q_{2,2}&=1-Q_{1,1}+1-Q_{2,2}\\
Q_{1,1}+Q_{2,2}&= 1
\end{align}
}
for the identity permutation. This implies that $Q_{1,1}=Q_{2,1}$ and $Q_{1,2}=Q_{2,2}$, which in turn implies ${p_{Y|W}(y|w)=p_Y(y)}$ ${\forall (w,y)\in\mathfrak{X}^2}$, concluding the proof for the binary case.
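In the binary case, the identity-permutation computation above collapses to a single algebraic identity, $q_{1,\sigma_{\text{id}}}^\prime-q_{0,\sigma_{\text{id}}}^\prime=2\,p_W(1)p_W(2)\left(Q_{1,1}+Q_{2,2}-1\right)$, a simplification we derive here for illustration; it is not stated explicitly in the text. A quick numerical check with arbitrary placeholder values:

```python
# Sanity check of the binary case: q1' - q0' for the identity permutation
# equals 2 * pW(1) * pW(2) * (Q11 + Q22 - 1), so it vanishes exactly when
# Q11 + Q22 = 1. Indices are 0-based: Q[0][0] plays the role of Q_{1,1}.
pW = [0.3, 0.7]                  # arbitrary pmf of W
Q = [[0.6, 0.4], [0.25, 0.75]]   # arbitrary row-stochastic matrix
q1 = sum(pW[i] * Q[i][i] for i in range(2))
q0 = sum(pW[i] * pW[j] * Q[i][j] for i in range(2) for j in range(2))
lhs = q1 - q0
rhs = 2 * pW[0] * pW[1] * (Q[0][0] + Q[1][1] - 1)
```

Since $p_W(1)p_W(2)>0$, the difference is zero exactly when $Q_{1,1}+Q_{2,2}=1$, matching the displayed chain of equalities.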
Now, we investigate the larger alphabet sizes ($|\mathfrak{X}|\ge 3$). Since the equality holds for all $\sigma$, we now carefully select some one-cycle permutations $\sigma$ to construct a system of linear equations.
Let $\sigma_{\text{id}}$ be the identity permutation and $\sigma_{i-j},\sigma_{i-k},\sigma_{i-j-k}$ denote the one-cycle permutations with the respective cycles $(i\: j)$, $(i\: k)$ and $(i\: j\: k)$ for some distinct $(i,j,k)$ triplet. For the rest of this proof, we will jointly solve the system of equations put forward by these permutations.
Recall that $p_W(l)=u_l$, $\forall l\in\mathfrak{X}$. Then, $\sigma_{\text{id}}$ leads to
\iftoggle{singlecolumn}{
\begin{align}
u_i Q_{i,i}+u_j Q_{j,j}
+u_k Q_{k,k}&+\sum\limits_{l\neq i,j,k}u_l Q_{l,l} \notag\\
&=u_i \sum\limits_{t=1}^{|\mathfrak{X}|} u_t Q_{t,i}+ u_j \sum\limits_{t=1}^{|\mathfrak{X}|} u_t Q_{t,j} + u_k \sum\limits_{t=1}^{|\mathfrak{X}|} u_t Q_{t,k} + \sum\limits_{l\neq i,j,k} u_l \sum\limits_{t=1}^{|\mathfrak{X}|} u_t Q_{t,l}\label{eq:phiid}
\end{align}
}{
\begin{align}
u_i Q_{i,i}&+u_j Q_{j,j}
+u_k Q_{k,k}+\sum\limits_{l\neq i,j,k}u_l Q_{l,l} \notag\\
&=u_i \sum\limits_{t=1}^{|\mathfrak{X}|} u_t Q_{t,i}+ u_j \sum\limits_{t=1}^{|\mathfrak{X}|} u_t Q_{t,j} \notag\\&\hspace{2em}+ u_k \sum\limits_{t=1}^{|\mathfrak{X}|} u_t Q_{t,k} + \sum\limits_{l\neq i,j,k} u_l \sum\limits_{t=1}^{|\mathfrak{X}|} u_t Q_{t,l}\label{eq:phiid}
\end{align}
}
Similarly, $\sigma_{i-j}$ leads to
\iftoggle{singlecolumn}{
\begin{align}
u_i Q_{i,j}+u_j Q_{j,i}+u_k Q_{k,k}&+\sum\limits_{l\neq i,j,k}u_l Q_{l,l}\notag\\
&= u_i \sum\limits_{t=1}^{|\mathfrak{X}|} u_t Q_{t,j}+ u_j \sum\limits_{t=1}^{|\mathfrak{X}|} u_t Q_{t,i}
+ u_k \sum\limits_{t=1}^{|\mathfrak{X}|} u_t Q_{t,k} + \sum\limits_{l\neq i,j,k} u_l \sum\limits_{t=1}^{|\mathfrak{X}|} u_t Q_{t,l}\label{eq:phiij}
\end{align}
}{
\begin{align}
u_i Q_{i,j}&+u_j Q_{j,i}+u_k Q_{k,k}+\sum\limits_{l\neq i,j,k}u_l Q_{l,l}\notag\\
&= u_i \sum\limits_{t=1}^{|\mathfrak{X}|} u_t Q_{t,j}+ u_j \sum\limits_{t=1}^{|\mathfrak{X}|} u_t Q_{t,i}
\notag\\&\hspace{2em} + u_k \sum\limits_{t=1}^{|\mathfrak{X}|} u_t Q_{t,k} + \sum\limits_{l\neq i,j,k} u_l \sum\limits_{t=1}^{|\mathfrak{X}|} u_t Q_{t,l}\label{eq:phiij}
\end{align}
}
When we subtract \eqref{eq:phiij} from \eqref{eq:phiid}, we obtain
\iftoggle{singlecolumn}{
\begin{align}
u_i (Q_{i,i}-Q_{i,j})-u_j (Q_{j,i}-Q_{j,j}) &= (u_i-u_j) \sum\limits_{t=1}^{|\mathfrak{X}|} u_t (Q_{t,i}-Q_{t,j})
\end{align}
}{
\begin{align}
u_i (Q_{i,i}-Q_{i,j})&-u_j (Q_{j,i}-Q_{j,j}) \notag\\&= (u_i-u_j) \sum\limits_{t=1}^{|\mathfrak{X}|} u_t (Q_{t,i}-Q_{t,j})
\end{align}
}
Equivalently, we have
\iftoggle{singlecolumn}{
\begin{align}
p_{W,Y}(i,i)&-p_{W,Y}(i,j)-p_{W,Y}(j,i)+p_{W,Y}(j,j)\notag\\ &=p_W(i) p_Y(i)-p_W(i) p_Y(j)-p_W(j) p_Y(i)+p_W(j) p_Y(j)
\end{align}
}{
\begin{align}
p_{W,Y}(i,i)&-p_{W,Y}(i,j)-p_{W,Y}(j,i)+p_{W,Y}(j,j)\notag\\ &=p_W(i) p_Y(i)-p_W(i) p_Y(j)\notag\\&\hspace{2em}-p_W(j) p_Y(i)+p_W(j) p_Y(j)
\end{align}
}
Following the same steps, from $\sigma_{i-k}$ we get
\iftoggle{singlecolumn}{
\begin{align}
p_{W,Y}&(i,i)-p_{W,Y}(i,k)-p_{W,Y}(k,i)+p_{W,Y}(k,k)\notag\\ &=p_W(i) p_Y(i)-p_W(i) p_Y(k)-p_W(k) p_Y(i)+p_W(k) p_Y(k)\label{eq:phiikrearrange}
\end{align}
}{
\begin{align}
p_{W,Y}(i,i)&-p_{W,Y}(i,k)-p_{W,Y}(k,i)+p_{W,Y}(k,k)\notag\\ &=p_W(i) p_Y(i)-p_W(i) p_Y(k)\notag\\&\hspace{2em}-p_W(k) p_Y(i)+p_W(k) p_Y(k)\label{eq:phiikrearrange}
\end{align}
}
We can rearrange the terms in \eqref{eq:phiikrearrange} to obtain
\iftoggle{singlecolumn}{
\begin{align}
p_{W,Y}&(i,k)= p_{W,Y}(i,i)-p_{W,Y}(k,i)+p_{W,Y}(k,k)\notag\\ &-p_W(i) p_Y(i)+p_W(i) p_Y(k)+p_W(k) p_Y(i)-p_W(k) p_Y(k) \label{eq:pxyik}
\end{align}
}{
\begin{align}
p_{W,Y}(i,k)&= p_{W,Y}(i,i)-p_{W,Y}(k,i)+p_{W,Y}(k,k)\notag\\&\hspace{2em}-p_W(i) p_Y(i)+p_W(i) p_Y(k)\notag\\&\hspace{2em}+p_W(k) p_Y(i)-p_W(k) p_Y(k) \label{eq:pxyik}
\end{align}
}
Furthermore, $\sigma_{i-j-k}$ gives us
\iftoggle{singlecolumn}{
\begin{align}
u_i Q_{i,k}+u_j Q_{j,i}+u_k Q_{k,j}&+\sum\limits_{l\neq i,j,k}u_l Q_{l,l}\notag\\&= u_i \sum\limits_{t=1}^{|\mathfrak{X}|} u_t Q_{t,k}+ u_j \sum\limits_{t=1}^{|\mathfrak{X}|} u_t Q_{t,i}
+ u_k \sum\limits_{t=1}^{|\mathfrak{X}|} u_t Q_{t,j} + \sum\limits_{l\neq i,j,k} u_l \sum\limits_{t=1}^{|\mathfrak{X}|} u_t Q_{t,l}\label{eq:phiijk}
\end{align}
}{
\begin{align}
u_i Q_{i,k}&+u_j Q_{j,i}+u_k Q_{k,j}+\sum\limits_{l\neq i,j,k}u_l Q_{l,l}\notag\\&= u_i \sum\limits_{t=1}^{|\mathfrak{X}|} u_t Q_{t,k}+ u_j \sum\limits_{t=1}^{|\mathfrak{X}|} u_t Q_{t,i}
\notag\\&\hspace{2em}+ u_k \sum\limits_{t=1}^{|\mathfrak{X}|} u_t Q_{t,j} + \sum\limits_{l\neq i,j,k} u_l \sum\limits_{t=1}^{|\mathfrak{X}|} u_t Q_{t,l}\label{eq:phiijk}
\end{align}
}
Subtracting \eqref{eq:phiijk} from \eqref{eq:phiij} yields
\iftoggle{singlecolumn}{
\begin{align}
u_i(Q_{i,j}-Q_{i,k})+u_k(Q_{k,k}-Q_{k,j})&= (u_i-u_k)\left[ \sum\limits_{t=1}^{|\mathfrak{X}|} u_t Q_{t,j} - \sum\limits_{t=1}^{|\mathfrak{X}|} u_t Q_{t,k}\right]
\end{align}
}{
\begin{align}
u_i(Q_{i,j}&-Q_{i,k})+u_k(Q_{k,k}-Q_{k,j})\notag\\&= (u_i-u_k)\left[ \sum\limits_{t=1}^{|\mathfrak{X}|} u_t Q_{t,j} - \sum\limits_{t=1}^{|\mathfrak{X}|} u_t Q_{t,k}\right]
\end{align}
}
Equivalently,
\iftoggle{singlecolumn}{
\begin{align}
p_{W,Y}&(i,j)-p_{W,Y}(i,k)-p_{W,Y}(k,j)+p_{W,Y}(k,k)\notag\\ &=p_W(i) p_Y(j)-p_W(i) p_Y(k)-p_W(k) p_Y(j)+p_W(k) p_Y(k)\label{eq:pxyiksub}
\end{align}
}{
\begin{align}
p_{W,Y}(i,j)&-p_{W,Y}(i,k)-p_{W,Y}(k,j)+p_{W,Y}(k,k)\notag\\ &=p_W(i) p_Y(j)-p_W(i) p_Y(k)\notag\\&\hspace{2em}-p_W(k) p_Y(j)+p_W(k) p_Y(k)\label{eq:pxyiksub}
\end{align}
}
Plugging $p_{W,Y}(i,k)$ from \eqref{eq:pxyik} into \eqref{eq:pxyiksub} yields
\iftoggle{singlecolumn}{
\begin{align}
p_{W,Y}&(i,j)-p_{W,Y}(i,i)-p_{W,Y}(k,j)+p_{W,Y}(k,i)\notag\\ &=p_W(i) p_Y(j)-p_W(i) p_Y(i)-p_W(k) p_Y(j)+p_W(k) p_Y(i) \label{eq:phisumk}
\end{align}
}{
\begin{align}
p_{W,Y}(i,j)&-p_{W,Y}(i,i)-p_{W,Y}(k,j)+p_{W,Y}(k,i)\notag\\ &=p_W(i) p_Y(j)-p_W(i) p_Y(i)\notag\\&\hspace{2em}-p_W(k) p_Y(j)+p_W(k) p_Y(i) \label{eq:phisumk}
\end{align}
}
Taking a summation over $k$ in \eqref{eq:phisumk} gives us
\iftoggle{singlecolumn}{
\begin{align}
|\mathfrak{X}| p_{W,Y}&(i,j)-|\mathfrak{X}| p_{W,Y}(i,i)-p_{Y}(j)+p_{Y}(i)\notag\\ &=|\mathfrak{X}| p_W(i) p_Y(j)-|\mathfrak{X}| p_W(i) p_Y(i)- p_Y(j)+ p_Y(i)\\
p_{W,Y}&(i,j)- p_{W,Y}(i,i)=p_W(i) p_Y(j)- p_W(i) p_Y(i)\label{eq:pxyiisub}
\end{align}
}{
\begin{align}
|\mathfrak{X}| p_{W,Y}(i,j)&-|\mathfrak{X}| p_{W,Y}(i,i)-p_{Y}(j)+p_{Y}(i)\notag\\ &=|\mathfrak{X}| p_W(i) p_Y(j)-|\mathfrak{X}| p_W(i) p_Y(i)\notag\\&\hspace{2em}- p_Y(j)+ p_Y(i)\\
p_{W,Y}(i,j)&- p_{W,Y}(i,i)\notag\\&=p_W(i) p_Y(j)- p_W(i) p_Y(i)\label{eq:pxyiisub}
\end{align}
}
Similarly, taking a summation over $j$ in \eqref{eq:pxyiisub} yields
\iftoggle{singlecolumn}{
\begin{align}
p_{W}(i)-|\mathfrak{X}| p_{W,Y}(i,i)&=p_W(i) -|\mathfrak{X}| p_W(i) p_Y(i)\\
p_{W,Y}(i,i) &= p_W(i) p_Y(i)\label{eq:pxyii}
\end{align}
}{
\begin{align}
p_{W}(i)-|\mathfrak{X}| p_{W,Y}(i,i)&=p_W(i) -|\mathfrak{X}| p_W(i) p_Y(i)\\
p_{W,Y}(i,i) &= p_W(i) p_Y(i)\label{eq:pxyii}
\end{align}
}
Plugging \eqref{eq:pxyii} into \eqref{eq:pxyiisub} yields
\iftoggle{singlecolumn}{
\begin{align}
p_{W,Y}(i,j)- p_{W,Y}(i,i)&=p_W(i) p_Y(j)- p_W(i) p_Y(i)\\
p_{W,Y}(i,j)&=p_W(i) p_Y(j)
\end{align}
}{
\begin{align}
p_{W,Y}(i,j)- p_{W,Y}(i,i)&=p_W(i) p_Y(j)- p_W(i) p_Y(i)\\
p_{W,Y}(i,j)&=p_W(i) p_Y(j)
\end{align}
}
\begin{sloppypar}
Note that $i$ and $j$ are chosen arbitrarily. Therefore the condition given in \eqref{eq:lhsequivalence} implies that ${p_{Y|W}(y|w)=p_Y(y)}$, ${\forall(w,y)\in\mathfrak{X}^2}$, concluding the proof of \eqref{eq:phicondition}.
\end{sloppypar}
Hence, we have proved~\eqref{eq:phifinal}. Thus, there exists a deterministic bijective mapping $\sigma$ satisfying
$q_{0,\sigma}^\prime>q_{1,\sigma}^\prime$
and in turn $q_{0,\sigma}^{(r)}>q_{1,\sigma}^\prime$, $\forall r\in [n-1]$.
Now choose such a mapping $\sigma$ and note that for any $\gamma\in[0,1)$
\iftoggle{singlecolumn}{
\begin{align}
q_{0,\sigma}^{(r)}-q_{1,\sigma}^\prime&= (1-\gamma^r) [q_{0,\sigma}^\prime-q_{1,\sigma}^\prime]\\
&\ge (1-\gamma) [q_{0,\sigma}^\prime-q_{1,\sigma}^\prime],\: \forall r\in [n-1]\label{eq:reducetoiid}\\
&> 0 ,\: \forall r\in [n-1]
\end{align}
}{
\begin{align}
q_{0,\sigma}^{(r)}-q_{1,\sigma}^\prime&= (1-\gamma^r) [q_{0,\sigma}^\prime-q_{1,\sigma}^\prime]\\
&\ge (1-\gamma) [q_{0,\sigma}^\prime-q_{1,\sigma}^\prime],\: \forall r\in [n-1]\label{eq:reducetoiid}\\
&> 0 ,\: \forall r\in [n-1]
\end{align}
}
Next, define
\iftoggle{singlecolumn}{
\begin{align}
q_{0,\sigma}^{\text{min}}&\triangleq (1-\gamma) q_{0,\sigma}^{\prime}+\gamma q_{1,\sigma}^{\prime}
\end{align}
}{
\begin{align}
q_{0,\sigma}^{\text{min}}&\triangleq (1-\gamma) q_{0,\sigma}^{\prime}+\gamma q_{1,\sigma}^{\prime}
\end{align}
}
and choose a $\bar{\tau}\in\left(q_{1,\sigma}^{\prime},q_{0,\sigma}^{\text{min}}\right)$ bounded away from both ends of the interval.
Let $\hat{K}_n\triangleq n-\sum_{j=1}^n I_j$ and $L_j$ denote the $j$\textsuperscript{th} $0$ in $I^n$, $j=1,\dots,\hat{K}_n$. In other words, $L_j$ holds the index of the $j$\textsuperscript{th} retained column $C_j^{(2)}(\sigma)$ of $\tilde{\mathbf{G}}_{\sigma}^{(2)}$ in $\mathbf{G}^{(1)}$. Similarly, for $i$ with $I_i=0$, let $R_i\triangleq i- \sum_{l=1}^i I_l$ store the index of $C_i^{(1)}$ in $\tilde{\mathbf{G}}_{\sigma}^{(2)}$.
\begin{sloppypar}
Now note that when we have $I_i=1$, ${d_H(C_i^{(1)},C_j^{(2)}(\sigma))\sim\text{Binom}(\Lambda_n,q_{0,\sigma}^{(|i-L_j|)})}$ and when $I_i=0$, ${d_H(C_i^{(1)},C_{R_i}^{(2)}(\sigma))\sim\text{Binom}(\Lambda_n,q_{1,\sigma}^{\prime})}$.
\end{sloppypar}
Next, we write the misdetection probability $P_{e,i}$ of $C^{(1)}_i$ as
\iftoggle{singlecolumn}{
\begin{align}
P_{e,i} &= \Pr\left(\exists j\in[\hat{K}_n]: \Delta_{i,j}(\sigma) \le \Lambda_n \bar{\tau}, I_i=1\right)\notag\\
&\hspace{3em}+\Pr\left(\forall j\in[\hat{K}_n]: \Delta_{i,j}(\sigma) > \Lambda_n \bar{\tau}, I_i=0\right)\\
&\le \Pr\left(\exists j\in[\hat{K}_n]: \Delta_{i,j}(\sigma) \le \Lambda_n \bar{\tau}, I_i=1\right)\notag\\
&\hspace{3em}+\Pr\left( \Delta_{i,R_i}(\sigma) > \Lambda_n \bar{\tau}, I_i=0\right)
\end{align}
}{
\begin{align}
P_{e,i} &= \Pr\left(\exists j\in[\hat{K}_n]: \Delta_{i,j}(\sigma) \le \Lambda_n \bar{\tau}, I_i=1\right)\notag\\
&\hspace{0.6em}+\Pr\left(\forall j\in[\hat{K}_n]: \Delta_{i,j}(\sigma) > \Lambda_n \bar{\tau}, I_i=0\right)\\
&\le \Pr\left(\exists j\in[\hat{K}_n]: \Delta_{i,j}(\sigma) \le \Lambda_n \bar{\tau}, I_i=1\right)\notag\\
&\hspace{0.6em}+\Pr\left( \Delta_{i,R_i}(\sigma) > \Lambda_n \bar{\tau}, I_i=0\right)
\end{align}
}
where
\iftoggle{singlecolumn}{
\begin{align}
\Delta_{i,j}(\sigma) &\triangleq d_H(C_i^{(1)},C_j^{(2)}(\sigma)).
\end{align}
}{
\begin{align}
\Delta_{i,j}(\sigma) &\triangleq d_H(C_i^{(1)},C_j^{(2)}(\sigma)).
\end{align}
}
From the union bound and Chernoff bound~\cite[Lemma 4.7.2]{ash2012information}, we obtain
\iftoggle{singlecolumn}{
\begin{align}
P_{e,i}&\le \sum\limits_{j=1}^{\hat{K}_n} \Pr\left(\Delta_{i,j}(\sigma) \le \Lambda_n \bar{\tau}, I_i=1\right)+\Pr\left( \Delta_{i,R_i}(\sigma) > \Lambda_n \bar{\tau}, I_i=0\right)\\
&\le \sum\limits_{j=1}^{\hat{K}_n} 2^{-\Lambda_n D(\bar{\tau}\|q_{0,\sigma}^{(|i-L_j|)})}+ 2^{-\Lambda_n D(1-\bar{\tau}\|1-q_{1,\sigma}^{\prime})}
\end{align}
}{
\begin{align}
P_{e,i}&\le \sum\limits_{j=1}^{\hat{K}_n} \Pr\left(\Delta_{i,j}(\sigma) \le \Lambda_n \bar{\tau}, I_i=1\right)\notag\\
&\hspace{3em}+\Pr\left( \Delta_{i,R_i}(\sigma) > \Lambda_n \bar{\tau}, I_i=0\right)\\
&\le \sum\limits_{j=1}^{\hat{K}_n} 2^{-\Lambda_n D(\bar{\tau}\|q_{0,\sigma}^{(|i-L_j|)})}+ 2^{-\Lambda_n D(1-\bar{\tau}\|1-q_{1,\sigma}^{\prime})}
\end{align}
}
It is straightforward to show that $D(\bar{\tau}\|p)$ is an increasing function of $p$ for $p>\bar{\tau}$. Thus $\forall i\in[n], j\in[\hat{K}_n]$, we have
\iftoggle{singlecolumn}{
\begin{align}
q_{0,\sigma}^{(|i-L_j|)} &\ge q_{0,\sigma}^{\text{min}}\\
D(\bar{\tau}\|q_{0,\sigma}^{(|i-L_j|)}) &\ge D(\bar{\tau}\|q_{0,\sigma}^{\text{min}})\\
2^{-\Lambda_n D(\bar{\tau}\|q_{0,\sigma}^{(|i-L_j|)})} &\le 2^{-\Lambda_n D(\bar{\tau}\|q_{0,\sigma}^{\text{min}})}
\end{align}
}{
\begin{align}
q_{0,\sigma}^{(|i-L_j|)} &\ge q_{0,\sigma}^{\text{min}}\\
D(\bar{\tau}\|q_{0,\sigma}^{(|i-L_j|)}) &\ge D(\bar{\tau}\|q_{0,\sigma}^{\text{min}})\\
2^{-\Lambda_n D(\bar{\tau}\|q_{0,\sigma}^{(|i-L_j|)})} &\le 2^{-\Lambda_n D(\bar{\tau}\|q_{0,\sigma}^{\text{min}})}
\end{align}
}
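The monotonicity claim above, that $D(\bar{\tau}\|p)$ is increasing in $p$ for $p>\bar{\tau}$, is easy to verify on a grid. The snippet below is an illustrative sanity check (the values of $\bar{\tau}$ and the grid are arbitrary), not part of the argument:

```python
# Check that the binary KL divergence D(tau || p) increases in p for
# p > tau; this is what lets us lower-bound the exponents above by
# replacing q0^(|i-Lj|) with q0_min.
from math import log2

def D(a, b):
    """Binary KL divergence D(a||b) in bits, for a, b in (0, 1)."""
    return a * log2(a / b) + (1 - a) * log2((1 - a) / (1 - b))

tau_bar = 0.3
ps = [0.35 + 0.05 * k for k in range(12)]      # grid inside (tau_bar, 1)
vals = [D(tau_bar, p) for p in ps]
increasing = all(vals[k] < vals[k + 1] for k in range(len(vals) - 1))
```

Analytically, $\frac{\partial}{\partial p}D(\bar{\tau}\|p)=\frac{1}{\ln 2}\frac{p-\bar{\tau}}{p(1-p)}>0$ for $p>\bar{\tau}$, which the grid reproduces.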
Thus, we have
\iftoggle{singlecolumn}{
\begin{align}
P_{e,i} &\le \sum\limits_{j=1}^{\hat{K}_n} 2^{-\Lambda_n D(\bar{\tau}\|q_{0,\sigma}^{(|i-L_j|)})} + 2^{-\Lambda_n D(1-\bar{\tau}\|1-q_{1,\sigma}^{\prime})}\\
&\le \sum\limits_{j=1}^{\hat{K}_n} 2^{-\Lambda_n D(\bar{\tau}\|q_{0,\sigma}^{\text{min}})} + 2^{-\Lambda_n D(1-\bar{\tau}\|1-q_{1,\sigma}^{\prime})}\\
&= \hat{K}_n 2^{-\Lambda_n D(\bar{\tau}\|q_{0,\sigma}^{\text{min}})} + 2^{-\Lambda_n D(1-\bar{\tau}\|1-q_{1,\sigma}^{\prime})}
\end{align}
}{
\begin{align}
P_{e,i} &\le \sum\limits_{j=1}^{\hat{K}_n} 2^{-\Lambda_n D(\bar{\tau}\|q_{0,\sigma}^{(|i-L_j|)})} + 2^{-\Lambda_n D(1-\bar{\tau}\|1-q_{1,\sigma}^{\prime})}\\
&\le \sum\limits_{j=1}^{\hat{K}_n} 2^{-\Lambda_n D(\bar{\tau}\|q_{0,\sigma}^{\text{min}})} + 2^{-\Lambda_n D(1-\bar{\tau}\|1-q_{1,\sigma}^{\prime})}\\
&= \hat{K}_n 2^{-\Lambda_n D(\bar{\tau}\|q_{0,\sigma}^{\text{min}})} + 2^{-\Lambda_n D(1-\bar{\tau}\|1-q_{1,\sigma}^{\prime})}
\end{align}
}
Thus, by a simple union bound, the total misdetection probability $P_{e,total}$ can be bounded as
\iftoggle{singlecolumn}{
\begin{align}
P_{e,total} &\le \sum\limits_{i=1}^n P_{e,i}\\
&\le \sum\limits_{i=1}^n \left[\hat{K}_n 2^{-\Lambda_n D(\bar{\tau}\|q_{0,\sigma}^{\text{min}})} + 2^{-\Lambda_n D(1-\bar{\tau}\|1-q_{1,\sigma}^{\prime})}\right]\\
&= n \hat{K}_n 2^{-\Lambda_n D(\bar{\tau}\|q_{0,\sigma}^{\text{min}})} + n 2^{-\Lambda_n D(1-\bar{\tau}\|1-q_{1,\sigma}^{\prime})}\\
&\le n^2 2^{-\Lambda_n D(\bar{\tau}\|q_{0,\sigma}^{\text{min}})} + n 2^{-\Lambda_n D(1-\bar{\tau}\|1-q_{1,\sigma}^{\prime})}
\end{align}
}{
\begin{align}
P_{e,total} &\le \sum\limits_{i=1}^n P_{e,i}\\
&\le \sum\limits_{i=1}^n \left[\hat{K}_n 2^{-\Lambda_n D(\bar{\tau}\|q_{0,\sigma}^{\text{min}})} + 2^{-\Lambda_n D(1-\bar{\tau}\|1-q_{1,\sigma}^{\prime})}\right]\\
&= n \hat{K}_n 2^{-\Lambda_n D(\bar{\tau}\|q_{0,\sigma}^{\text{min}})} + n 2^{-\Lambda_n D(1-\bar{\tau}\|1-q_{1,\sigma}^{\prime})}\\
&\le n^2 2^{-\Lambda_n D(\bar{\tau}\|q_{0,\sigma}^{\text{min}})} + n 2^{-\Lambda_n D(1-\bar{\tau}\|1-q_{1,\sigma}^{\prime})}
\end{align}
}
Hence, $P_{e,total}\to 0$ as $n\to\infty$ if the seed size $\Lambda_n$ satisfies
\iftoggle{singlecolumn}{
\begin{align}
\Lambda_n D(\bar{\tau}\|q_{0,\sigma}^{\text{min}})- 2\log n &>0\\
\Lambda_n D(1-\bar{\tau}\|1-q_{1,\sigma}^{\prime}) - \log n&>0
\end{align}
}{
\begin{align}
\Lambda_n D(\bar{\tau}\|q_{0,\sigma}^{\text{min}})- 2\log n &>0\\
\Lambda_n D(1-\bar{\tau}\|1-q_{1,\sigma}^{\prime}) - \log n&>0
\end{align}
}
Thus any seed size $\Lambda_n$ satisfying
\iftoggle{singlecolumn}{
\begin{align}
\Lambda_n &>\frac{\log n}{\min \left\{\frac{1}{2}D(\bar{\tau}\|q_{0,\sigma}^{\text{min}}), D(1-\bar{\tau}\|1-q_{1,\sigma}^{\prime})\right\}}
\end{align}
}{
\begin{align}
\Lambda_n &>\frac{\log n}{\min \left\{\frac{1}{2}D(\bar{\tau}\|q_{0,\sigma}^{\text{min}}), D(1-\bar{\tau}\|1-q_{1,\sigma}^{\prime})\right\}}
\end{align}
}
is sufficient to drive $P_{e,total}$ to 0. Thus a seed size $\Lambda_n=\Omega(\log n)=\Omega(\log \log m_n)$ is enough for successful deletion detection.
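As an illustrative aside, the sufficient seed length can be computed directly from the two divergence exponents. All numerical values below are assumed placeholders, not quantities derived from the model:

```python
# Compute the seed-size threshold log(n) / min{D(tau|q0_min)/2,
# D(1-tau|1-q1')} for assumed placeholder values of q1', q0_min, tau_bar.
from math import log2

def D(a, b):
    """Binary KL divergence D(a||b) in bits."""
    return a * log2(a / b) + (1 - a) * log2((1 - a) / (1 - b))

q1, q0min = 0.2, 0.5     # assumed q'_{1,sigma} < q0_min
tau_bar = 0.35           # threshold bounded away from both endpoints
n = 10 ** 6
exponent = min(0.5 * D(tau_bar, q0min), D(1 - tau_bar, 1 - q1))
seed = log2(n) / exponent    # any Lambda_n exceeding this suffices
```

Since the threshold scales as $\log n$ divided by a constant exponent, this reproduces the $\Lambda_n=\Omega(\log n)$ scaling.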
\qed
\section{Proof of Lemma~\ref{lem:histogram}}
\label{proof:histogram}
First, observe that from~\cite[Theorem 3]{burke1958markovian} and \eqref{eq:markovtransitionmatrix}, the rows of the collapsed database $\tilde{\mathbf{D}}^{(1)}$ become \emph{i.i.d.} first-order stationary binary Markov chains, with the following probability transition matrix and stationary distribution:
\iftoggle{singlecolumn}{
\begin{align}
\tilde{\mathbf{P}}&=\begin{bmatrix}\gamma+(1-\gamma) u_1 & (1-\gamma)(1-u_1)\\
(1-\gamma)u_1 & 1-(1-\gamma)u_1
\end{bmatrix}\\
\tilde{\pi}&=\begin{bmatrix}
u_1 & 1-u_1
\end{bmatrix}
\end{align}
}{
\begin{align}
\tilde{\mathbf{P}}&=\begin{bmatrix}\gamma+(1-\gamma) u_1 & (1-\gamma)(1-u_1)\\
(1-\gamma)u_1 & 1-(1-\gamma)u_1
\end{bmatrix}\\
\tilde{\pi}&=\begin{bmatrix}
u_1 & 1-u_1
\end{bmatrix}
\end{align}
}
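A quick sanity check, with arbitrary placeholder values of $\gamma$ and $u_1$, that the stated $\tilde{\pi}$ is indeed stationary for $\tilde{\mathbf{P}}$ and that the rows of $\tilde{\mathbf{P}}$ are stochastic:

```python
# Verify pi * P = pi and that each row of the collapsed transition
# matrix P sums to one; gamma and u1 are arbitrary illustrative values.
gamma, u1 = 0.4, 0.3
P = [[gamma + (1 - gamma) * u1, (1 - gamma) * (1 - u1)],
     [(1 - gamma) * u1, 1 - (1 - gamma) * u1]]
pi = [u1, 1 - u1]
piP = [pi[0] * P[0][j] + pi[1] * P[1][j] for j in range(2)]
row_sums = [sum(row) for row in P]
```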
For brevity, we let ${\mu_n\triangleq \Pr(\exists i,j\in [n],\: i\neq j,\tilde{{H}}^{(1)}_{i}=\tilde{{H}}^{(1)}_j)}$. Next, from the union bound, we obtain
\iftoggle{singlecolumn}{
\begin{align}
\mu_n&\le \sum\limits_{{(i,j)\in[n]^2:i<j}} \Pr(\tilde{{H}}^{(1)}_{i}=\tilde{{H}}^{(1)}_{j})\\
&\le n^2 \max\limits_{{(i,j)\in[n]^2:i<j}} \Pr(\tilde{{H}}^{(1)}_{i}=\tilde{{H}}^{(1)}_{j})
\end{align}
}{
\begin{align}
\mu_n&\le \sum\limits_{{(i,j)\in[n]^2:i<j}} \Pr(\tilde{{H}}^{(1)}_{i}=\tilde{{H}}^{(1)}_{j})\\
&\le n^2 \max\limits_{{(i,j)\in[n]^2:i<j}} \Pr(\tilde{{H}}^{(1)}_{i}=\tilde{{H}}^{(1)}_{j})
\end{align}
}
Due to stationarity of $\tilde{\mathbf{P}}$, this maximum is equal to $\Pr(\tilde{{H}}^{(1)}_1=\tilde{{H}}^{(1)}_{s+1})$ for some $s$. For brevity, let $\mathbf{Q}\triangleq\tilde{\mathbf{P}}^s$ and $q\triangleq \Pr(\tilde{{H}}^{(1)}_1=\tilde{{H}}^{(1)}_{s+1})$. Observe that
$\tilde{{H}}^{(1)}_1$ and $\tilde{{H}}^{(1)}_{s+1}$ are correlated Binom($m_n,1-u_1$) random variables and, for any $s$, $\mathbf{Q}$ has strictly positive entries, \emph{i.e.,} the collapsed Markov chain is irreducible. Now, we have
\iftoggle{singlecolumn}{
\vspace{-1em}
\begin{adjustwidth}{-0.25cm}{0pt}
\begin{align}
q&= \sum\limits_{r=0}^{m_n} \Pr(\tilde{{H}}^{(1)}_1=r) \Pr(\tilde{{H}}^{(1)}_{s+1}=r|\tilde{{H}}^{(1)}_1=r)\\
&= \sum\limits_{r=0}^{m_n} \binom{m_n}{r}(1-u_1)^r u_1^{m_n-r} \Pr(\tilde{{H}}^{(1)}_{s+1}=r|\tilde{{H}}^{(1)}_1=r)
\end{align}
\end{adjustwidth}
}{
\vspace{-1em}
\begin{adjustwidth}{-0.25cm}{0pt}
\begin{align}
q&= \sum\limits_{r=0}^{m_n} \Pr(\tilde{{H}}^{(1)}_1=r) \Pr(\tilde{{H}}^{(1)}_{s+1}=r|\tilde{{H}}^{(1)}_1=r)\\
&= \sum\limits_{r=0}^{m_n} \binom{m_n}{r}(1-u_1)^r u_1^{m_n-r} \Pr(\tilde{{H}}^{(1)}_{s+1}=r|\tilde{{H}}^{(1)}_1=r)
\end{align}
\end{adjustwidth}
}
Note that since the rows of $\tilde{\mathbf{D}}^{(1)}$ are \emph{i.i.d.}, we have
\iftoggle{singlecolumn}{
\begin{align}
\Pr(\tilde{{H}}^{(1)}_{s+1}=r|\tilde{{H}}^{(1)}_1=r) = \Pr(M+N=r)
\end{align}
}{
\begin{align}
\Pr(\tilde{{H}}^{(1)}_{s+1}=r|\tilde{{H}}^{(1)}_1=r) = \Pr(M+N=r)
\end{align}
}
where $M\sim\text{Binom}(r,Q_{2,2})$ and $N\sim\text{Binom}(m_n-r,Q_{1,2})$ are independent. Note that there are two ways of reaching state $2$ in the collapsed column after $s$ steps: either state $2$ remains in the same state after $s$ steps, or state $1$ is converted to state $2$ after $s$ steps. The Binomial random variables $M$ and $N$ count the former and the latter, respectively.
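The convolution structure of $M+N$ can be checked numerically. The sketch below uses arbitrary placeholder values for $m_n$, $r$, $Q_{2,2}$, $Q_{1,2}$ and verifies that the convolved pmf is a valid distribution with mean $r\,Q_{2,2}+(m_n-r)\,Q_{1,2}$:

```python
# Build the pmf of M + N, where M ~ Binom(r, Q22) counts ones that stay
# ones after s steps and N ~ Binom(m - r, Q12) counts zeros flipped to
# one; check it sums to 1 and has mean r*Q22 + (m - r)*Q12.
from math import comb

def binom_pmf(n, p, k):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

m, r = 12, 5              # placeholder m_n and conditioning count r
Q22, Q12 = 0.7, 0.2       # placeholder entries of Q = P^s
pmf = [sum(binom_pmf(r, Q22, k - i) * binom_pmf(m - r, Q12, i)
           for i in range(max(0, k - r), min(k, m - r) + 1))
       for k in range(m + 1)]
mean = sum(k * p for k, p in enumerate(pmf))
```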
Then, from Stirling's approximation~\cite[Chapter 3.2]{cormen2022introduction} on the factorial terms in the Binomial coefficient and~\cite[Theorem 11.1.2]{cover2006elements}, we get
\iftoggle{singlecolumn}{
\begin{align}
q
&= \sum\limits_{r=0}^{m_n} \binom{{m_n}}{r}(1-u_1)^r u_1^{{m_n}-r} \Pr(M+N=r)\\
&\le \frac{e}{\sqrt{2\pi}} {m_n}^{-1/2} \sum\limits_{r=0}^{m_n} \Pi_r^{-1} 2^{-{m_n} D(\frac{r}{{m_n}}\|(1-u_1))}\Pr(M+N=r)
\end{align}
}{
\begin{align}
q
&= \sum\limits_{r=0}^{m_n} \binom{{m_n}}{r}(1-u_1)^r u_1^{{m_n}-r} \Pr(M+N=r)\\
&\le \frac{e}{\sqrt{2\pi}} {m_n}^{-1/2} \sum\limits_{r=0}^{m_n} \Pi_r^{-1} 2^{-{m_n} D(\frac{r}{{m_n}}\|(1-u_1))}\notag\\&\hspace{11em}\Pr(M+N=r)
\end{align}
}
where $\Pi_r=\frac{r}{{m_n}}(1-\frac{r}{{m_n}})$. Let
\iftoggle{singlecolumn}{
\begin{align}
T &= \sum\limits_{r=0}^{m_n} \Pi_r^{-1} 2^{-{m_n} D(\frac{r}{{m_n}}\|(1-u_1))}\Pr(M+N=r)\\ &= T_1+T_2
\end{align}
}{
\begin{align}
T &= \sum\limits_{r=0}^{m_n} \Pi_r^{-1} 2^{-{m_n} D(\frac{r}{{m_n}}\|(1-u_1))}\Pr(M+N=r)\\ &= T_1+T_2
\end{align}
}
where
\iftoggle{singlecolumn}{
\begin{align}
T_1 &= \sum_{\mathclap{\hspace{4em} r:D(\frac{r}{{m_n}}\|1-u_1)> \frac{\epsilon_n^2}{2\log_e 2}}} \hspace{2em}\Pi_r^{-1} 2^{-{m_n} D(\frac{r}{{m_n}}\|(1-u_1))}\Pr(M+N=r)\label{eq:T1}\\
T_2 &= \sum_{\mathclap{\hspace{4em} r:D(\frac{r}{{m_n}}\|1-u_1)\le \frac{\epsilon_n^2}{2\log_e 2}}} \hspace{2em} \Pi_r^{-1} 2^{-{m_n} D(\frac{r}{{m_n}}\|(1-u_1))}\Pr(M+N=r),\label{eq:T2}
\end{align}
}{
\begin{align}
T_1 &= \sum_{\mathclap{\hspace{4em} r:D(\frac{r}{{m_n}}\|1-u_1)> \frac{\epsilon_n^2}{2\log_e 2}}} \hspace{2em}\Pi_r^{-1} 2^{-{m_n} D(\frac{r}{{m_n}}\|(1-u_1))}\Pr(M+N=r)\label{eq:T1}\\
T_2 &= \sum_{\mathclap{\hspace{4em} r:D(\frac{r}{{m_n}}\|1-u_1)\le \frac{\epsilon_n^2}{2\log_e 2}}} \hspace{2em} \Pi_r^{-1} 2^{-{m_n} D(\frac{r}{{m_n}}\|(1-u_1))}\Pr(M+N=r),\label{eq:T2}
\end{align}
}
and $\epsilon_n>0$, whose precise choice is described below, satisfies $\epsilon_n\to0$ as $n\to\infty$.
First, we look at $T_1$. Note that for any $r\in[m_n-1]$, we have ${\Pi_r^{-1}\le {m_n}^2}$, so the multiplicative term in the summation in~\eqref{eq:T1} is polynomial in ${m_n}$. The boundary cases $r=0$ and $r={m_n}$ can be treated separately, since their probabilities vanish exponentially in ${m_n}$. Therefore, as long as ${{m_n} \epsilon_n^2\to\infty}$, $T_1$ consists of polynomially many terms, each decaying exponentially with ${m_n}$. Thus
\iftoggle{singlecolumn}{
\begin{align}
T_1\to0\text{ as }n\to\infty\label{eq:t1}
\end{align}
}{
\begin{align}
T_1\to0\text{ as }n\to\infty\label{eq:t1}
\end{align}
}
as long as ${{m_n} \epsilon_n^2\to\infty}$.
Now, we focus on $T_2$. From Pinsker's inequality~\cite[Lemma 11.6.1]{cover2006elements}, we have
\iftoggle{singlecolumn}{
\begin{align}
D\left(\frac{r}{{m_n}}\Big\|1-u_1\right)\le \frac{\epsilon_n^2}{2\log_e 2}\Rightarrow \text{TV}\left(\frac{r}{{m_n}}, 1-u_1\right)\le \epsilon_n
\end{align}
}{
\begin{align}
D\Big(\frac{r}{{m_n}}\Big\|1-u_1\Big)&\le \frac{\epsilon_n^2}{2\log_e 2}\notag\\&\Longrightarrow \text{TV}\left(\frac{r}{{m_n}}, 1-u_1\right)\le \epsilon_n
\end{align}
}
where TV denotes the total variation distance between the Bernoulli distributions with given parameters. Therefore
\iftoggle{singlecolumn}{
\begin{align}
\Big|\{r:D\Big(\frac{r}{{m_n}}\Big\|1-u_1\Big)\le \frac{\epsilon_n^2}{2\log_e 2}\}\Big|
&\le \Big|\{r:\text{TV}\Big(\frac{r}{{m_n}}, 1-u_1\Big)\le \epsilon_n\}\Big| \\
&= O({m_n}\epsilon_n)
\end{align}
}{
\begin{align}
\Big|\{r:D\Big(\frac{r}{{m_n}}&\Big\|1-u_1\Big)\le \frac{\epsilon_n^2}{2\log_e 2}\}\Big|\notag\\
&\le \Big|\{r:\text{TV}\Big(\frac{r}{{m_n}}, 1-u_1\Big)\le \epsilon_n\}\Big| \\
&= O({m_n}\epsilon_n)
\end{align}
}
for small $\epsilon_n$. Furthermore, if $\text{TV}\left(\frac{r}{{m_n}}, 1-u_1\right)\le \epsilon_n$, we have
\iftoggle{singlecolumn}{
\begin{align}
\Pi_{r}^{-1} &\le \frac{1}{(1-u_1) u_1}
\end{align}
}{
\begin{align}
\Pi_{r}^{-1} &\le \frac{1}{(1-u_1) u_1}
\end{align}
}
Now, we investigate $\Pr(M+N=r)$ for the values of $r$ in the interval $[{{m_n}(1-u_1-\epsilon_n)},{{m_n}(1-u_1+\epsilon_n)}]$.
\iftoggle{singlecolumn}{
\begin{align}
\Pr(M+N=r)&=\sum\limits_{i=1}^r \Pr(M=r-i) \Pr(N=i)+ \Pr(M=r)\Pr(N=0)\\
&=Q_{2,2}^r Q_{1,1}^{{m_n}-r}+ \sum\limits_{i=1}^r \binom{r}{i} Q_{2,2}^{r-i} (1-Q_{2,2})^i \binom{{m_n}-r}{i} Q_{1,2}^i (1-Q_{1,2})^{{m_n}-r-i}\label{eq:binom2}
\end{align}
}{
\begin{align}
\Pr(M+N=r)&=\sum\limits_{i=1}^r \Pr(M=r-i) \Pr(N=i)\notag\\&\hspace{2em}+ \Pr(M=r)\Pr(N=0)\\
&=Q_{2,2}^r Q_{1,1}^{{m_n}-r}\notag\\&\hspace{0.5em}+ \sum\limits_{i=1}^r \binom{r}{i} Q_{2,2}^{r-i} (1-Q_{2,2})^i \notag\\&\hspace{2em}\binom{{m_n}-r}{i} Q_{1,2}^i (1-Q_{1,2})^{{m_n}-r-i}\label{eq:binom2}
\end{align}
}
Again, from Stirling's approximation~\cite[Chapter 3.2]{cormen2022introduction} on the factorial terms in the Binomial coefficient in \eqref{eq:binom2} and from~\cite[Theorem 11.1.2]{cover2006elements}, we have
\iftoggle{singlecolumn}{
\begin{align}
\Pr(M+N=r)&\le Q_{2,2}^r Q_{1,1}^{{m_n}-r} + \frac{e^2}{2\pi}[r({m_n}-r)]^{-1/2} U
\end{align}
}{
\begin{align}
\Pr(M+N=r)&\le Q_{2,2}^r Q_{1,1}^{{m_n}-r} + \frac{e^2}{2\pi}[r({m_n}-r)]^{-1/2} U
\end{align}
}
where
\iftoggle{singlecolumn}{
\begin{align}
U = \sum\limits_{i=1}^r \Pi^{-1}_{i/r} \Pi^{-1}_{i/({m_n}-r)} 2^{-r D(1-\frac{i}{r}\|Q_{2,2}) -({m_n}-r) D(\frac{i}{{m_n}-r}\|Q_{1,2})}
\end{align}
}{
\begin{align}
U = \sum\limits_{i=1}^r \Pi^{-1}_{i/r} \Pi^{-1}_{i/({m_n}-r)} 2^{-r D(1-\frac{i}{r}\|Q_{2,2}) -({m_n}-r) D(\frac{i}{{m_n}-r}\|Q_{1,2})}
\end{align}
}
Then, from $r\in[{{m_n}(1-u_1-\epsilon_n)},{{m_n}(1-u_1+\epsilon_n)}]$ we obtain
\iftoggle{singlecolumn}{
\begin{align}
\Pr(M+N=r)&\le Q_{2,2}^r Q_{1,1}^{{m_n}-r} + \frac{e^2}{2\pi}\frac{{m_n}^{-1}}{\sqrt{(1-u_1-\epsilon_n)(u_1-\epsilon_n)}} U
\end{align}
}{
\begin{align}
\Pr(M+N=r)&\le Q_{2,2}^r Q_{1,1}^{{m_n}-r} \notag\\&\hspace{1em}+ \frac{e^2}{2\pi}\frac{{m_n}^{-1}}{\sqrt{(1-u_1-\epsilon_n)(u_1-\epsilon_n)}} U
\end{align}
}
and
\iftoggle{singlecolumn}{
\begin{align}
U&\le \sum\limits_{i=1}^r \Pi^{-1}_{i/r} \Pi^{-1}_{i/({m_n}-r)} 2^{-{m_n} (1-u_1-\epsilon_n) D(1-\frac{i}{r}\|Q_{2,2})}2^{-{m_n} (u_1-\epsilon_n) D(\frac{i}{{m_n}-r}\|Q_{1,2})}\\
&= \sum\limits_{i\notin \mathcal{R}(\epsilon_n)} \Pi^{-1}_{i/r} \Pi^{-1}_{i/({m_n}-r)} 2^{-{m_n} (1-u_1-\epsilon_n) D(1-\frac{i}{r}\|Q_{2,2})} 2^{-{m_n} (u_1-\epsilon_n) D(\frac{i}{{m_n}-r}\|Q_{1,2})}\notag\\
&\hspace{2em}+ \sum\limits_{i\in \mathcal{R}(\epsilon_n)} \Pi^{-1}_{i/r} \Pi^{-1}_{i/({m_n}-r)} 2^{-{m_n} (1-u_1-\epsilon_n) D(1-\frac{i}{r}\|Q_{2,2})} 2^{-{m_n} (u_1-\epsilon_n) D(\frac{i}{{m_n}-r}\|Q_{1,2})}\label{eq:U}
\end{align}
}{
\begin{align}
U&\le \sum\limits_{i=1}^r \Pi^{-1}_{i/r} \Pi^{-1}_{i/({m_n}-r)} 2^{-{m_n} (1-u_1-\epsilon_n) D(1-\frac{i}{r}\|Q_{2,2})}\notag\\&\hspace{8em}2^{-{m_n} (u_1-\epsilon_n) D(\frac{i}{{m_n}-r}\|Q_{1,2})}\\
&= \sum\limits_{i\notin \mathcal{R}(\epsilon_n)} \Pi^{-1}_{i/r} \Pi^{-1}_{i/({m_n}-r)} 2^{-{m_n} (1-u_1-\epsilon_n) D(1-\frac{i}{r}\|Q_{2,2})} \notag\\&\hspace{8em}2^{-{m_n} (u_1-\epsilon_n) D(\frac{i}{{m_n}-r}\|Q_{1,2})}\notag\\
&\hspace{1em}+ \sum\limits_{i\in \mathcal{R}(\epsilon_n)} \Pi^{-1}_{i/r} \Pi^{-1}_{i/({m_n}-r)} 2^{-{m_n} (1-u_1-\epsilon_n) D(1-\frac{i}{r}\|Q_{2,2})} \notag\\&\hspace{8em}2^{-{m_n} (u_1-\epsilon_n) D(\frac{i}{{m_n}-r}\|Q_{1,2})}\label{eq:U}
\end{align}
}
where we define the set $\mathcal{R}(\epsilon_n)$ as
\iftoggle{singlecolumn}{
\begin{align}
\mathcal{R}(\epsilon_n) &\triangleq\Big\{i\in[r]:D\Big(1-\frac{i}{r}\Big\|Q_{2,2}\Big),D\Big(\frac{i}{{m_n}-r}\Big\|Q_{1,2}\Big)\le \frac{\epsilon_n^2}{2\log_e 2}\Big\}
\end{align}
}{
\begin{align}
\mathcal{R}(\epsilon_n) &\triangleq\Big\{i\in[r]:D\Big(1-\frac{i}{r}\Big\|Q_{2,2}\Big)\le \frac{\epsilon_n^2}{2\log_e 2},\notag\\
&\hspace{3em}D\Big(\frac{i}{{m_n}-r}\Big\|Q_{1,2}\Big)\le \frac{\epsilon_n^2}{2\log_e 2}\Big\}
\end{align}
}
Note that, similarly to $T_1$, the first summation in \eqref{eq:U} vanishes exponentially in ${m_n}$ whenever ${m_n}\epsilon_n^2\to\infty$. Using Pinsker's inequality once more, the second summation can be upper bounded by
\iftoggle{singlecolumn}{
\begin{align}
O(|\mathcal{R}(\epsilon_n)|)=O({m_n}\epsilon_n)
\end{align}
}{
\begin{align}
O(|\mathcal{R}(\epsilon_n)|)=O({m_n}\epsilon_n)
\end{align}
}
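The size of this near-typical index set can be checked numerically. The following sketch (the values of $m_n$, $r$, $Q_{2,2}$, and $\epsilon_n$ are illustrative assumptions, and only the first divergence condition is used) counts the indices whose binary KL divergence falls below the threshold $\epsilon_n^2/(2\log_e 2)$ and confirms the $O(m_n\epsilon_n)$ bound implied by Pinsker's inequality:

```python
import math

def kl_binary(p, q):
    """Binary KL divergence D(p||q) in bits."""
    total = 0.0
    for a, b in ((p, q), (1 - p, 1 - q)):
        if a > 0:
            total += a * math.log2(a / b)
    return total

# Illustrative values (assumptions for this sketch, not taken from the proof)
m_n, r, q22 = 1000, 600, 0.3
eps = 0.05
threshold = eps ** 2 / (2 * math.log(2))   # eps^2 / (2 ln 2)

# Indices i in [r] with D(1 - i/r || Q_{2,2}) below the threshold
small = [i for i in range(1, r + 1) if kl_binary(1 - i / r, q22) <= threshold]

# Pinsker: D <= eps^2/(2 ln 2) bits implies |(1 - i/r) - q22| <= eps/2,
# so i/r lies in an interval of width eps, i.e. at most r*eps + 1 indices.
assert len(small) <= m_n * eps
```

For these values, each qualifying index satisfies $|(1-i/r)-Q_{2,2}|\le\epsilon_n/2$ by Pinsker's inequality, so at most $r\epsilon_n+1\le m_n\epsilon_n$ indices qualify, matching the $O(m_n\epsilon_n)$ bound.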
Now, we choose ${\epsilon_n={m_n}^{-\frac{1}{2}} V_n}$ for some $V_n$ satisfying ${V_n=\omega(1)}$ and ${V_n=o(m_n^{1/2})}$. Thus, $T_1$ vanishes exponentially fast since ${m_n\epsilon_n^2=V_n^2\to\infty}$ and
\iftoggle{singlecolumn}{
\begin{gather}
\Pr(M+N=r) = O(\epsilon_n)\\
T = O({m_n} \epsilon_n^2)=O(V_n^2)\\
\mu_n = O(n^2 {m_n}^{-1/2} V_n^2)
\end{gather}
}{
\begin{gather}
\Pr(M+N=r) = O(\epsilon_n)\\
T = O({m_n} \epsilon_n^2)=O(V_n^2)\\
\mu_n = O(n^2 {m_n}^{-1/2} V_n^2)
\end{gather}
}
By the assumption ${m_n=\omega(n^4)}$, we have ${m_n=n^4 Z_n}$ for some $Z_n$ satisfying ${\lim\limits_{n\to\infty} Z_n=\infty}$. Now, taking ${V_n=o(Z_n^{1/4})}$ (e.g. ${V_n=Z_n^{1/6}}$), we get
\iftoggle{singlecolumn}{
\begin{align}
\mu_n&\le O( Z_n^{-1/2} V_n^2)
= o(1)
\end{align}
}{
\begin{align}
\mu_n&\le O( Z_n^{-1/2} V_n^2)
= o(1)
\end{align}
}
Thus $m_n=\omega(n^4)$ is sufficient to have $\mu_n\to0$ as $n\to\infty$, concluding the proof. \qed
\section{Proof of Achievability of Theorem~\ref{thm:mainresultW1}}
\label{proof:achievabilityW1}
The proof of the achievability part follows from successive union bounds exploiting the following:
\begin{itemize}
\item For any typical row $Y^{K_n}$ of $\mathbf{D}^{(2)}$, there are approximately $2^{{K_n} H(X|Y)}$ jointly typical sequences with respect to $p_{X,Y}$.
\item If the output of the synchronization channel has length ${K_n}$ then there are at least $k_{\min}=\left\lceil \frac{{K_n}}{s_{\max}}\right\rceil$ retained (not deleted) elements.
\item For the number of columns $n$, the number of deletion patterns with $k_{\min}$ retained elements is
\iftoggle{singlecolumn}{
\begin{align}
\binom{n}{k_{\min}}&\le 2^{n H_b(k_{\min}/n)}
\end{align}
}{
\begin{align}
\binom{n}{k_{\min}}&\le 2^{n H_b(k_{\min}/n)}
\end{align}
}
\item Any stretched row has the same probability as the original row.
\item If the original length-$n$ sequence and the retained length-$k_{\min}$ sequence after the deletion channel are $\epsilon$-typical with respect to $p_X$, then the complementary length-$(n-k_{\min})$ subsequence is $\tilde{\epsilon}$-typical with respect to $p_X$, where $\tilde{\epsilon}=\frac{n+k_{\min}}{n-k_{\min}}\epsilon$.
\item The cardinality of the set of $\tilde{\epsilon}$-typical sequences of length $n-k_{\min}$ with respect to $p_X$ is approximately $2^{(n-k_{\min}) H(X)}$.
\end{itemize}
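The entropy bound on the binomial coefficient in the bullets above, $\binom{n}{k_{\min}}\le 2^{nH_b(k_{\min}/n)}$, is the standard bound from the method of types; a quick numerical sanity check with arbitrary illustrative sizes:

```python
import math

def Hb(p):
    """Binary entropy function in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Arbitrary illustrative sizes (assumptions for this check)
n, k_min = 20, 7
assert math.comb(n, k_min) <= 2 ** (n * Hb(k_min / n))
```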
We need to show that for a given pair of matching rows, WLOG, $X^n_1$ of $\mathbf{D}^{(1)}$ and $Y^{K_n}_l$ of $\mathbf{D}^{(2)}$ with $\boldsymbol{\Theta}_n(1)=l$, the probability of error ${P_e\triangleq\Pr(\hat{\boldsymbol{\Theta}}_n (1)\neq l)}$ of the following matching scheme can be made arbitrarily small asymptotically, where ${K_n}=\sum_{j=1}^n S_{1,j}$ is the random variable corresponding to the length of $Y^{K_n}_{l}$. The matching scheme we propose follows these steps:
\begin{enumerate}[label=\textbf{\arabic*)},leftmargin=1.3\parindent]
\item For all $j\in[n]$, discard the $j$\textsuperscript{th} column of $\mathbf{D}^{(1)}$ if $A_j=1$ to obtain $\bar{\mathbf{D}}^{(1)}$ whose column size is $n-A$ where $A=\sum_{j=1}^n A_j$.
\item Stretch each row $\bar{X}^{n-A}_i=\bar{X}_{i,1},\dots,\bar{X}_{i,n-A}$ of $\bar{\mathbf{D}}^{(1)}$ into $\tilde{X}_{i}^{(n-A) s_{\max}}$, by repeating each element of $\bar{X}^{n-A}_i$ $s_{\max}$ times as follows
\iftoggle{singlecolumn}{
\begin{align}
\tilde{X}_{i}^{(n-A) s_{\max}}&= 1^{s_{\max}}\otimes \bar{X}_{i,1},\dots,1^{s_{\max}}\otimes \bar{X}_{i,n-A}
\end{align}
}{
\begin{align}
\tilde{X}_{i}^{(n-A) s_{\max}}&= 1^{s_{\max}}\otimes \bar{X}_{i,1},\dots,1^{s_{\max}}\otimes \bar{X}_{i,n-A}
\end{align}
}
where $1^{s_{\max}}$ is an all-one row vector of length $s_{\max}$ and $\otimes$ denotes the Kronecker product.
\item Fix $\epsilon>0$. If $K_n<k\triangleq n(\mathbb{E}[S]-\epsilon)$, declare an error, whose probability is denoted by $\kappa_n$; for computational simplicity, $k$ is assumed to be an integer. Otherwise, proceed with the next step.
\item If $A<a=n(\alpha\delta-\epsilon)$, declare an error, whose probability is denoted by $\mu_n$. Otherwise, proceed with the next step.
\item Match the $l$\textsuperscript{th} row $Y^{K_n}_{l}$ of $\mathbf{D}^{(2)}$ with the row $X^n_{1}$ of $\mathbf{D}^{(1)}$, assigning $\hat{\boldsymbol{\Theta}}_n(1)=l$, if $i=1$ is the only index in $[m_n]$ such that \emph{i)} $\bar{X}^{n-A}_i$ is $\epsilon$-typical and \emph{ii)} $\tilde{X}_i^{(n-A) s_{\max}}$ contains a subsequence jointly $\epsilon$-typical with $Y^{K_n}_{l}$ with respect to $p_{X,Y}$. Otherwise, declare a \emph{collision} error.
\end{enumerate}
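Steps 1 and 2 of the scheme can be sketched as follows; the stretching $1^{s_{\max}}\otimes \bar{X}_{i,j}$ amounts to repeating each retained symbol $s_{\max}$ times (the toy inputs are illustrative):

```python
def discard_and_stretch(x_row, a_row, s_max):
    """Drop the entries flagged as detected deletions (A_j = 1), then
    repeat each surviving symbol s_max times (the Kronecker stretching)."""
    retained = [x for x, a in zip(x_row, a_row) if a == 0]
    return [x for x in retained for _ in range(s_max)]

# Toy example: n = 4, one detected deletion at position 2, s_max = 2
print(discard_and_stretch([1, 0, 2, 1], [0, 1, 0, 0], 2))  # [1, 1, 2, 2, 1, 1]
```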
Since additional columns in $\mathbf{D}^{(2)}$ and additional detected deleted columns in $\mathbf{D}^{(1)}$ would decrease the collision probability, we have
\iftoggle{singlecolumn}{
\begin{align}
\Pr(\text{collision between 1 and }i|K_n\ge k,A\ge a)\le \Pr(\text{collision between 1 and }i|K_n=k,A=a)
\end{align}
}{
\begin{align}
\Pr&(\text{collision between 1 and }i|K_n\ge k,A\ge a)\notag\\&\le \Pr(\text{collision between 1 and }i|K_n=k,A=a)
\end{align}
}
for any $i\in[m_n]\setminus\{1\}$. Thus, we can focus on the case $K_n=k$, $A=a$, as it yields an upper bound on the error probability of our matching scheme.
Let $A_\epsilon^{(n-a)}(X)$ denote the set of $\epsilon$-typical (with respect to $p_X$) sequences of length $n-a$ and $A_\epsilon(X^k|Y^k_l)$ denote the set of sequences of length $k$ jointly $\epsilon$-typical (with respect to $p_{X,Y}$) with $Y^k_l$. For the matching rows $X^n_1$, $Y^k_l$ of $\mathbf{D}^{(1)}$ and $\mathbf{D}^{(2)}$, define the pairwise collision probability between $X^n_1$ and $X^n_i$ for any $i\in[m_n]\setminus\{1\}$ as
\iftoggle{singlecolumn}{
\begin{align}
P_{\text{col,i}}\triangleq \Pr(\exists {z}^k: {z}^k\in A_\epsilon({X}^k|{Y}^k_l) \text{ and }{z}^k \text{ is a subsequence of } \tilde{X}_{i}^{(n-a) s_{\max}})
\end{align}
}{
\begin{align}
P_{\text{col,i}}&\triangleq \Pr(\exists {z}^k: {z}^k\in A_\epsilon({X}^k|{Y}^k_l) \text{ and }{z}^k \text{ is a} \notag\\&\hspace{4em} \text{subsequence of } \tilde{X}_{i}^{(n-a) s_{\max}})
\end{align}
}
Therefore given the correct labeling for $Y^k_l\in\mathbf{D}^{(2)}$ is $X^n_1\in\mathbf{D}^{(1)}$, the probability of error $P_e$ can be bounded as
\iftoggle{singlecolumn}{
\begin{align}
P_e
&\le \Pr(\nexists {z}^k: {z}^k\in A_\epsilon({X}^k|{Y}^k_l) \text{ and }{z}^k \text{ is a subsequence of } \tilde{X}_{1}^{(n-a) s_{\max}})\notag\\
&\qquad +\Pr(X^n_1\notin A_\epsilon^{(n)}(X))+\sum\limits_{i=2}^{2^{n R}} P_{\text{col,i}}+ \kappa_n+\mu_n\\
&\le 2\epsilon+\sum\limits_{i=2}^{2^{n R}} P_{\text{col,i}}+\kappa_n+\mu_n\\
&\le 2\epsilon+ 2^{n R} P_{\text{col,2}}+\kappa_n+\mu_n \label{eq:Perowwise}
\end{align}
}{
\begin{align}
P_e
&\le \Pr(\nexists {z}^k: {z}^k\in A_\epsilon({X}^k|{Y}^k_l) \text{ and }{z}^k \text{ is a} \notag\\&\hspace{4em}\text{subsequence of } \tilde{X}_{1}^{(n-a) s_{\max}})\notag\\
&\hspace{2em} +\Pr(X^n_1\notin A_\epsilon^{(n)}(X))\notag\\
&\hspace{2em}+\sum\limits_{i=2}^{2^{n R}} P_{\text{col,i}}+ \kappa_n+\mu_n\\
&\le 2\epsilon+\sum\limits_{i=2}^{2^{n R}} P_{\text{col,i}}+\kappa_n+\mu_n\\
&\le 2\epsilon+ 2^{n R} P_{\text{col,2}}+\kappa_n+\mu_n \label{eq:Perowwise}
\end{align}
}
where \eqref{eq:Perowwise} follows from the fact that the rows are \emph{i.i.d.} and thus $P_{\text{col,i}}=P_{\text{col,2}}$, $\forall i\in[m_n]\setminus\{1\}$.
We now upper bound $P_{\text{col,2}}$.
First, we investigate repetition distributions with $\frac{1}{s_{\max}}\mathbb{E}[S]\ge \frac{1-\alpha\delta}{|\mathfrak{X}|}$. Let $F(n,k,|\mathfrak{X}|)$ denote the number of $|\mathfrak{X}|$-ary sequences of length $n$ which contain a fixed $|\mathfrak{X}|$-ary sequence of length $k$. We note that $F(n,k,|\mathfrak{X}|)$ is the same for any $|\mathfrak{X}|$-ary sequence of length $k$ \cite[Lemma 1]{chvatal1975longest}. Now we define $G_{z^k}(n s_{\max},k,|\mathfrak{X}|)$ as the number of $s_{\max}$-times stretched sequences of length $n s_{\max}$ containing an $|\mathfrak{X}|$-ary sequence $z^k$ of length $k$. We stress that, unlike the counting function $F$, the counting function $G_{z^k}$ is not independent of $z^k$. For example, let $s_{\max}=2$, $\mathfrak{X}=\{0,1\}$, $n=2$, $k=2$, $z^k_1=01$ and $z^k_2=00$. Then we have $G_{z^k_1}(n s_{\max},k,|\mathfrak{X}|)=1$ since only $0011$ contains $z^k_1=01$, whereas $G_{z^k_2}(n s_{\max},k,|\mathfrak{X}|)=3$ since $0000$, $0011$ and $1100$ all contain $z^k_2=00$.
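The worked example can be reproduced by brute force: the sketch below enumerates all $s_{\max}$-times stretched sequences of length $n s_{\max}$ and counts those containing $z^k$ as a subsequence.

```python
from itertools import product

def is_subsequence(z, x):
    """True if z is a (not necessarily contiguous) subsequence of x."""
    it = iter(x)
    return all(c in it for c in z)

def G(z, n, s_max, alphabet):
    """Count the s_max-times stretched sequences of length n*s_max
    that contain z as a subsequence."""
    count = 0
    for base in product(alphabet, repeat=n):
        stretched = tuple(c for c in base for _ in range(s_max))
        if is_subsequence(z, stretched):
            count += 1
    return count

print(G((0, 1), 2, 2, (0, 1)))  # 1: only 0011 contains 01
print(G((0, 0), 2, 2, (0, 1)))  # 3: 0000, 0011 and 1100 contain 00
```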
Observe that the maximum value of $G_{z^k}(n s_{\max},k,|\mathfrak{X}|)$ is attained when $z^k$ consists of a single symbol repeated $k$ times, as this grouping of the elements of $z^k$ yields the maximum number of possible elementwise replicated sequences. WLOG, let $z^k=00\dots0$. Then, to count $G_{z^k}(n s_{\max},k,|\mathfrak{X}|)$, we group the consecutive $s_{\max}$ 0's in $z^k$ together, allowing the last group to have possibly fewer than $s_{\max}$ elements. It is clear that there are $\left\lceil \frac{k}{s_{\max}}\right\rceil$ such groups of 0's. Since we impose a stretching constraint on the sequences of length $n s_{\max}$ when counting $G_{z^k}(n s_{\max},k,|\mathfrak{X}|)$, we are in effect looking for sequences of length $n$ containing a fixed subsequence of length $\left\lceil \frac{k}{s_{\max}}\right\rceil$. Thus, counting this number is the same as counting $F\left(n,\left\lceil \frac{k}{s_{\max}}\right\rceil,|\mathfrak{X}|\right)$. Hence, we have
\iftoggle{singlecolumn}{
\begin{align}
G_{z^k}(n s_{\max},k,|\mathfrak{X}|)\le F\left(n,\left\lceil \nicefrac{k}{s_{\max}}\right\rceil,|\mathfrak{X}|\right),\quad \forall z^k\in \mathfrak{X}^k\label{eqn:ineqFG}
\end{align}
}{
\begin{align}
G_{z^k}(n s_{\max},k,|\mathfrak{X}|)\le F\left(n,\left\lceil \nicefrac{k}{s_{\max}}\right\rceil,|\mathfrak{X}|\right),\quad \forall z^k\in \mathfrak{X}^k\label{eqn:ineqFG}
\end{align}
}
We note that the inequality given in \eqref{eqn:ineqFG} is the tightest upper bound independent of $z^k$, equality being achieved when $z^k$ is a constant (\textit{e.g.,} all-zeros) sequence.
Now, let
\iftoggle{singlecolumn}{
\begin{align}
T(z^k,A^n)&\triangleq \{x^n\in\mathfrak{X}^n:\bar{x}^{(n-a)}\in A_\epsilon^{(n-a)}(X) \text{ and }\tilde{x}^{(n-a) s_{\max}} \text{ contains } z^k\}
\end{align}
}{
\begin{align}
T(z^k,A^n)&\triangleq \{x^n\in\mathfrak{X}^n:\bar{x}^{(n-a)}\in A_\epsilon^{(n-a)}(X)\notag\\&\hspace{2em} \text{ and }\tilde{x}^{(n-a) s_{\max}} \text{ contains } z^k\}
\end{align}
}
Then, we obtain
\iftoggle{singlecolumn}{
\begin{align}
|T(z^k,A^n)|&\le G_{z^k}((n-a) s_{\max},k,|\mathfrak{X}|)\\
&\le F\left(n-a,\left\lceil\nicefrac{k}{s_{\max}}\right\rceil,|\mathfrak{X}|\right)\label{eq:tset}
\end{align}
}{
\begin{align}
|T(z^k,A^n)|&\le G_{z^k}((n-a) s_{\max},k,|\mathfrak{X}|)\\
&\le F\left(n-a,\left\lceil\nicefrac{k}{s_{\max}}\right\rceil,|\mathfrak{X}|\right)\label{eq:tset}
\end{align}
}
For the sake of computational simplicity, suppose $\frac{k}{s_{\max}}$ is an integer. Since ${\frac{1}{s_{\max}}\mathbb{E}[S]\ge \frac{1-\alpha\delta}{|\mathfrak{X}|}}$, from~\cite{chvatal1975longest} and~\cite[Chapter 11]{cover2006elements} we have the following upper bound:
\iftoggle{singlecolumn}{
\begin{align}
F\left(n-a,\nicefrac{k}{s_{\max}},|\mathfrak{X}|\right)&\le (n-a) 2^{(n-a) H_b\left(\frac{k}{s_{\max}(n-a)}\right)} (|\mathfrak{X}|-1)^{\left( n-a-\frac{k}{s_{\max}}\right)}
\end{align}
}{
\begin{align}
F\left(n-a,\nicefrac{k}{s_{\max}},|\mathfrak{X}|\right)&\le (n-a) 2^{(n-a) H_b\left(\frac{k}{s_{\max}(n-a)}\right)} \notag\\&\hspace{1em}(|\mathfrak{X}|-1)^{\left( n-a-\frac{k}{s_{\max}}\right)}
\end{align}
}
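Both facts used here, that $F$ does not depend on the particular fixed subsequence and that it satisfies the entropy-based upper bound, can be verified by exhaustive enumeration for small parameters (the values of $n$, $k$, and the alphabet size below are illustrative assumptions):

```python
import math
from itertools import product

def is_subsequence(z, x):
    it = iter(x)
    return all(c in it for c in z)

def count_supersequences(z, n, q):
    """Number of q-ary length-n sequences containing z as a subsequence."""
    return sum(is_subsequence(z, x) for x in product(range(q), repeat=n))

def Hb(p):
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

n, k, q = 6, 3, 3
# F(n, k, q) is the same for every fixed length-k sequence
F = count_supersequences((0, 0, 0), n, q)
assert F == count_supersequences((0, 1, 2), n, q)
# Entropy-based upper bound used above (with a = 0)
assert F <= n * 2 ** (n * Hb(k / n)) * (q - 1) ** (n - k)
print(F)
```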
Furthermore, for any $x^n\in T(z^k,A^n)$, since $\bar{x}^{(n-a)}\in A_\epsilon^{(n-a)}(X)$ by the definition of $T(z^k,A^n)$, we have
\iftoggle{singlecolumn}{
\begin{align}
p_{X^n}(x^n)\le 2^{-(n-a) (H(X)-\epsilon)}
\end{align}
}{
\begin{align}
p_{X^n}(x^n)\le 2^{-(n-a) (H(X)-\epsilon)}
\end{align}
}
and since the rows $X^n_i$ of $\mathbf{D}^{(1)}$ are \emph{i.i.d.}, we have
\iftoggle{singlecolumn}{
\begin{align}
\Pr(X^n_2 \in T(z^k,A^n)|X^n_1 \in T(z^k,A^n))=\Pr(X^n_2 \in T(z^k,A^n))
\end{align}
}{
\begin{align}
\Pr(X^n_2 \in T(z^k,A^n)|X^n_1 &\in T(z^k,A^n))\notag\\&=\Pr(X^n_2 \in T(z^k,A^n))
\end{align}
}
Finally, we have
\iftoggle{singlecolumn}{
\begin{align}
|A_\epsilon(X^k|Y^k_l)| &\le 2^{k(H(X|Y)+\epsilon)}\label{eq:condtypicalset}
\end{align}
}{
\begin{align}
|A_\epsilon(X^k|Y^k_l)| &\le 2^{k(H(X|Y)+\epsilon)}\label{eq:condtypicalset}
\end{align}
}
Combining \eqref{eq:tset}-\eqref{eq:condtypicalset}, we can upper bound $P_{\text{col,2}}$ as
\iftoggle{singlecolumn}{
\begin{align}
P_{\text{col,2}} &\le \sum\limits_{z^k\in A_\epsilon(X^k|Y^k_l)} \Pr(X^n_2 \in T(z^k,A^n))\\
&=\sum\limits_{z^k\in A_\epsilon(X^k|Y^k_l)} \sum\limits_{x^n\in T(z^k,A^n)} p_{X^n}(x^n)\\
&\le \sum\limits_{z^k\in A_\epsilon(X^k|Y^k_l)} \sum\limits_{x^n\in T(z^k,A^n)} 2^{-(n-a)(H(X)-\epsilon)}\\
&= \sum\limits_{z^k\in A_\epsilon(X^k|Y^k_l)} |T(z^k,A^n)| 2^{-(n-a)(H(X)-\epsilon)}\label{eq:tsetseparation}\\
&\le \sum\limits_{z^k\in A_\epsilon(X^k|Y^k_l)} 2^{-(n-a)(H(X)-\epsilon)} F\left(n-a,\nicefrac{k}{s_{\max}},|\mathfrak{X}|\right)\\
&= |A_\epsilon(X^k|Y^k_l)| 2^{-(n-a)(H(X)-\epsilon)} F\left(n-a,\nicefrac{k}{s_{\max}},|\mathfrak{X}|\right)\\
&\le |A_\epsilon(X^k|Y^k_l)| (n-a) 2^{-(n-a)\left[H(X)-\epsilon-H_b(\frac{k}{s_{\max}(n-a)})\right]} (|\mathfrak{X}|-1)^{(n-a-\frac{k}{s_{\max}})}\\
&\le 2^{k(H(X|Y)+\epsilon)} (n-a) 2^{-(n-a) \left[H(X)-\epsilon-H_b(\frac{k}{s_{\max}(n-a)})\right]} (|\mathfrak{X}|-1)^{(n-a-\frac{k}{s_{\max}})}
\end{align}
}{
\begin{align}
P_{\text{col,2}} &\le \sum\limits_{z^k\in A_\epsilon(X^k|Y^k_l)} \Pr(X^n_2 \in T(z^k,A^n))\\
&=\sum\limits_{z^k\in A_\epsilon(X^k|Y^k_l)} \sum\limits_{x^n\in T(z^k,A^n)} p_{X^n}(x^n)\\
&\le \sum\limits_{z^k\in A_\epsilon(X^k|Y^k_l)} \sum\limits_{x^n\in T(z^k,A^n)} 2^{-(n-a)(H(X)-\epsilon)}\\
&= \sum\limits_{z^k\in A_\epsilon(X^k|Y^k_l)} |T(z^k,A^n)| 2^{-(n-a)(H(X)-\epsilon)}\label{eq:tsetseparation}\\
&\le \sum\limits_{z^k\in A_\epsilon(X^k|Y^k_l)} 2^{-(n-a)(H(X)-\epsilon)}\notag\\&\hspace{4em} F\left(n-a,\nicefrac{k}{s_{\max}},|\mathfrak{X}|\right)\\
&= |A_\epsilon(X^k|Y^k_l)| 2^{-(n-a)(H(X)-\epsilon)} \notag\\&\hspace{4em}F\left(n-a,\nicefrac{k}{s_{\max}},|\mathfrak{X}|\right)\\
&\le |A_\epsilon(X^k|Y^k_l)| (n-a) (|\mathfrak{X}|-1)^{(n-a-\frac{k}{s_{\max}})}\notag\\&\hspace{4em}2^{-(n-a)\left[H(X)-\epsilon-H_b(\frac{k}{s_{\max}(n-a)})\right]}\\
&\le 2^{k(H(X|Y)+\epsilon)} (n-a) (|\mathfrak{X}|-1)^{(n-a-\frac{k}{s_{\max}})}\notag\\&\hspace{4em}2^{-(n-a) \left[H(X)-\epsilon-H_b(\frac{k}{s_{\max}(n-a)})\right]}
\end{align}
}
Thus, we have the following upper bound on the error probability
\iftoggle{singlecolumn}{
\begin{align}
P_e &\le 2\epsilon+ 2^{n R} 2^{k(H(X|Y)+\epsilon)} (n-a) 2^{-(n-a) \left[H(X)-\epsilon-H_b(\frac{k}{s_{\max}(n-a)})\right]} (|\mathfrak{X}|-1)^{(n-a-\frac{k}{s_{\max}})} +\kappa_n+\mu_n
\end{align}
}{
\begin{align}
P_e &\le 2\epsilon+\kappa_n+\mu_n\notag\\
&\hspace{1em}+ 2^{n R} 2^{k(H(X|Y)+\epsilon)} (n-a)(|\mathfrak{X}|-1)^{(n-a-\frac{k}{s_{\max}})} \notag\\&\hspace{4em} 2^{-(n-a) \left[H(X)-\epsilon-H_b(\frac{k}{s_{\max}(n-a)})\right]}
\end{align}
}
By the LLN, we have $\kappa_n\to0$ and $\mu_n\to0$ as $n\to\infty$. Hence, we can argue that any database growth rate $R$ satisfying
\iftoggle{singlecolumn}{
\begin{align}
R&<\Big[(1-\alpha\delta)\left(H(X)-H_b\left( \frac{\mathbb{E}[S]}{(1-\alpha\delta)s_{\max}}\right)\right) -\left(1-\alpha\delta-\frac{\mathbb{E}[S]}{s_{\max}} \right)\log\left(|\mathfrak{X}|-1\right)-\mathbb{E}[S]H(X|Y)\Big]^+ \label{eq:rowwiseachievablerate}
\end{align}
}{
\begin{align}
R&<\Big[(1-\alpha\delta)\left(H(X)-H_b\left( \frac{\mathbb{E}[S]}{(1-\alpha\delta)s_{\max}}\right)\right) \notag\\&-\left(1-\alpha\delta-\frac{\mathbb{E}[S]}{s_{\max}} \right)\log\left(|\mathfrak{X}|-1\right)-\mathbb{E}[S]H(X|Y)\Big]^+ \label{eq:rowwiseachievablerate}
\end{align}
}
is achievable, by taking $\epsilon$ small enough.
Now, we focus on general repetition distributions. For any subsequence $z^k$ of $s_{\max}$-times stretched sequence of length $(n-a)s_{\max}$, let $r(z^k)$ be the number of runs in $z^k$ with at most $s_{\max}$ elements and note that $r(z^k)\le n-a$. Then, let $\tilde{z}^{r(z^k)}$ be the sequence storing the values of each run in $z^k$. Observe that for any $z^{k}\in A_\epsilon(X^{k}|Y^k_l)$, we have $\tilde{z}^{r(z^k)}\in A_\epsilon^{(r(z^k))}(X)$.
\begin{sloppypar}
For any such grouping into $r(z^k)$ runs, the $\epsilon$-typicality of $x^n=(x_1,\dots,x_n)\in T(z^k,A^n)$ and $\tilde{z}^{r(z^k)}$ with respect to $p_X$ implies the $\tilde{\epsilon}$-typicality of the remaining sequence of length ${n-a-r(z^k)}$ obtained after discarding $\tilde{z}^{r(z^k)}$ from $\bar{x}^{n-a}$, where $\tilde{\epsilon}=\frac{n-a+r(z^k)}{n-a-r(z^k)}\epsilon$. Furthermore, by an argument similar to the one made above, $|T(z^k,A^n)|$ attains its maximum when $r(z^k)$ is minimal, namely $k_{\min}\triangleq\lceil \frac{k}{s_{\max}}\rceil$, which is attained when $z^k$ is an $s_{\max}$-times stretched sequence itself. Therefore, for any $z^{k}\in A_\epsilon(X^{k}|Y^k_l)$, taking the union bound over all possible groupings with $r(z^k)$ runs, the cardinality of $T(z^{k},A^n)$ can be upper bounded as
\iftoggle{singlecolumn}{
\begin{align}
|T(z^{k},A^n)| &\le \binom{n-a}{k_{\min}} |A_{\tilde{\epsilon}}^{(n-a-k_{\min})}(X)|\\
&\le 2^{(n-a) H_b\left(\frac{k_{\min}}{n-a}\right)} |A_{\tilde{\epsilon}}^{(n-a-k_{\min})}(X)|\\
&\le 2^{(n-a) H_b\left(\frac{k_{\min}}{n-a}\right)} 2^{(n-a-k_{\min})(H(X)+\tilde{\epsilon})}\\
&= 2^{n \left[(1-\frac{a}{n})H_b\left(\frac{k_{\min}}{n-a}\right)+(1-\frac{a}{n}-\frac{k_{\min}}{n})(H(X)+\tilde{\epsilon})\right]}\label{eq:tsettypicalbound2}
\end{align}
}{
\begin{align}
|T(z^{k},A^n)| &\le \binom{n-a}{k_{\min}} |A_{\tilde{\epsilon}}^{(n-a-k_{\min})}(X)|\\
&\le 2^{(n-a) H_b\left(\frac{k_{\min}}{n-a}\right)} |A_{\tilde{\epsilon}}^{(n-a-k_{\min})}(X)|\\
&\le 2^{(n-a) H_b\left(\frac{k_{\min}}{n-a}\right)} 2^{(n-a-k_{\min})(H(X)+\tilde{\epsilon})}\\
&= 2^{n \left[(1-\frac{a}{n})H_b\left(\frac{k_{\min}}{n-a}\right)+(1-\frac{a}{n}-\frac{k_{\min}}{n})(H(X)+\tilde{\epsilon})\right]}\label{eq:tsettypicalbound2}
\end{align}
}
\end{sloppypar}
\begin{sloppypar}
Plugging \eqref{eq:tsettypicalbound2} into \eqref{eq:tsetseparation} and following the same steps, one can show that any rate $R$ satisfying
\iftoggle{singlecolumn}{
\begin{align}
R&<\left[\frac{\mathbb{E}[S]}{s_{\max}} H(X)-(1-\alpha\delta)H_b\left( \frac{\mathbb{E}[S]}{(1-\alpha\delta)s_{\max}}\right)-\mathbb{E}[S]H(X|Y)\right]^+\label{eq:rowwiseachievablerate2}
\end{align}
}{
\begin{align}
R&<\Big[\frac{\mathbb{E}[S]}{s_{\max}} H(X)-\mathbb{E}[S]H(X|Y)\notag\\&\hspace{3em}-(1-\alpha\delta)H_b\left( \frac{\mathbb{E}[S]}{(1-\alpha\delta)s_{\max}}\right)\Big]^+\label{eq:rowwiseachievablerate2}
\end{align}
}
is achievable. Simply taking the maximum of the two proven achievable rates (\eqref{eq:rowwiseachievablerate} and \eqref{eq:rowwiseachievablerate2}) when ${\frac{1}{s_{\max}}\mathbb{E}[S]\ge \frac{1-\alpha\delta}{|\mathfrak{X}|}}$ yields \eqref{eq:rowwiseachievable2}. This concludes the proof. \qed
\end{sloppypar}
\section{Proof of Corollary~\ref{cor:W1conversenoiselessub}}
\label{proof:rowwiseconversenoiselessub}
Let $E$ denote the empty string and $\Tilde{X}$ denote the sequence obtained after discarding the detected deleted entries from $X^2$. The dependence of $\Tilde{X}$ on $X^2$ and $A^2$ and that of $Y$ on $X^2$ and $S^2$ are omitted for brevity.
We start with the fact that since the entries of $X^2$ are independent, the deleted entries do not offer any information. Hence, we can discard them without any loss of information and obtain
\iftoggle{singlecolumn}{
\begin{align}
I(X^2;Y,A^2)&= I(\Tilde{X};Y|A^2)\\
&=H(\Tilde{X}|A^2)-H(\Tilde{X}|Y,{A}^2)
\end{align}
}{
\begin{align}
I(X^2;Y,A^2)&= I(\Tilde{X};Y|A^2)\\
&=H(\Tilde{X}|A^2)-H(\Tilde{X}|Y,{A}^2)
\end{align}
}
\iftoggle{singlecolumn}{
We have
\begin{align}
H(\Tilde{X}|{A}^2)&=\sum\limits_{{a}^2\in\{0,1\}^2}\Pr({A}^2={a}^2) H(\Tilde{{X}}|{A}^2={a}^2)\\
&=\Pr({A}^2=00)H(\Tilde{{X}}|{A}^2=00)+\Pr({A}^2=01)H(\Tilde{{X}}|{A}^2=01)\notag\\
&\hspace{3em}+ \Pr({A}^2=10)H(\Tilde{{X}}|{A}^2=10)+\Pr({A}^2=11)H(\Tilde{{X}}|{A}^2=11)\\
&= (1-\alpha\delta)^2 2H(X)+\alpha\delta(1-\alpha\delta)H(X) + (1-\alpha\delta)\alpha\delta H(X)+0\\
&=2(1-\alpha\delta)H(X)\label{eq:HXtildeA}
\end{align}
}{
\begingroup
\allowdisplaybreaks
We have
\begin{align}
H(\Tilde{X}|{A}^2)&=\sum\limits_{{a}^2\in\{0,1\}^2}\Pr({A}^2={a}^2) H(\Tilde{{X}}|{A}^2={a}^2)\\
&=\Pr({A}^2=00)H(\Tilde{{X}}|{A}^2=00) \notag\\&\hspace{1em} +\Pr({A}^2=01)H(\Tilde{{X}}|{A}^2=01)\notag\\
&\hspace{1em}+ \Pr({A}^2=10)H(\Tilde{{X}}|{A}^2=10) \notag\\&\hspace{1em} +\Pr({A}^2=11)H(\Tilde{{X}}|{A}^2=11)\\
&= (1-\alpha\delta)^2 2H(X) \notag\\&\hspace{1em} +\alpha\delta(1-\alpha\delta)H(X) \notag\\&\hspace{1em} + (1-\alpha\delta)\alpha\delta H(X) \notag\\&\hspace{1em} +0\\
&=2(1-\alpha\delta)H(X)\label{eq:HXtildeA}
\end{align}
\endgroup
}
Furthermore, we have
\iftoggle{singlecolumn}{
\begin{align}
H(\Tilde{{X}}|{Y},{A}^2)&=\sum\limits_{{y},{a}^2} \Pr({Y}={y},{A}^2={a}^2) H(\Tilde{{X}}|{Y}={y},{A}^2={a}^2)\\
&= \Pr({Y}=E,{A}^2=00)H(\Tilde{{X}}|{Y}=E,{A}^2=00)\notag\\
&\hspace{2em} +\Pr({Y}=E,{A}^2=01)H(\Tilde{{X}}|{Y}=E,{A}^2=01)\notag\\
&\hspace{2em} +\Pr({Y}=E,{A}^2=10)H(\Tilde{{X}}|{Y}=E,{A}^2=10)\notag\\
&\hspace{2em} +\sum\limits_{x\in\mathfrak{X}} \Pr({Y}=x,{A}^2=00)H(\Tilde{{X}}|{Y}=x,{A}^2=00)\notag\\
&\hspace{2em} +\sum\limits_{x\in\mathfrak{X}} \Pr({Y}=x,{A}^2=01)H(\Tilde{{X}}|{Y}=x,{A}^2=01)\notag\\
&\hspace{2em} +\sum\limits_{x\in\mathfrak{X}} \Pr({Y}=x,{A}^2=10)H(\Tilde{{X}}|{Y}=x,{A}^2=10)\label{eq:HXtildeYA}
\end{align}
}{
\begin{align}
H&(\Tilde{{X}}|{Y},{A}^2)=\sum\limits_{{y},{a}^2} \Pr({Y}={y},{A}^2={a}^2) \notag\\&\hspace{7em} H(\Tilde{{X}}|{Y}={y},{A}^2={a}^2)\\
&= \Pr({Y}=E,{A}^2=00)H(\Tilde{{X}}|{Y}=E,{A}^2=00)\notag\\
&\hspace{1em} +\Pr({Y}=E,{A}^2=01)H(\Tilde{{X}}|{Y}=E,{A}^2=01)\notag\\
&\hspace{1em} +\Pr({Y}=E,{A}^2=10)H(\Tilde{{X}}|{Y}=E,{A}^2=10)\notag\\
&\hspace{1em} +\sum\limits_{x\in\mathfrak{X}} \Pr({Y}=x,{A}^2=00)H(\Tilde{{X}}|{Y}=x,{A}^2=00)\notag\\
&\hspace{1em} +\sum\limits_{x\in\mathfrak{X}} \Pr({Y}=x,{A}^2=01)H(\Tilde{{X}}|{Y}=x,{A}^2=01)\notag\\
&\hspace{1em} +\sum\limits_{x\in\mathfrak{X}} \Pr({Y}=x,{A}^2=10)H(\Tilde{{X}}|{Y}=x,{A}^2=10)\label{eq:HXtildeYA}
\end{align}
}
Note that in \eqref{eq:HXtildeYA}, we discarded the terms with ${A}^2=11$ for $|{Y}|\ge 1$, since ${\Pr(|{Y}|\ge 1,{A}^2=11)=0}$. We can further discard the terms with $|{Y}|=n=2$, since in that case we have no deletion and ${Y}={Y}^2={X}^2$. Finally, we can also discard the last two terms in \eqref{eq:HXtildeYA} since for any $x\in\mathfrak{X}$ we have
\iftoggle{singlecolumn}{
\begin{align}
H(\Tilde{{X}}|{Y}=x,{A}^2=01)=H(\Tilde{{X}}|{Y}=x,{A}^2=10)=0
\end{align}
}{
\begin{align}
H(\Tilde{{X}}|{Y}=x,{A}^2=01)=H(\Tilde{{X}}|{Y}=x,{A}^2=10)=0
\end{align}
}
\iftoggle{singlecolumn}{
Thus, we have
\begin{align}
H(\Tilde{{X}}|{Y},{A}^2)&=\delta^2(1-\alpha)^2 2H(X)+\delta^2(1-\alpha)\alpha H(X)+\delta^2\alpha(1-\alpha)H(X)\notag\\&\hspace{2em}+\sum\limits_{x\in\mathfrak{X}} \Pr({Y}=x,{A}^2=00)H(\Tilde{{X}}|{Y}=x,{A}^2=00)\\
&= 2\delta^2(1-\alpha)H(X)+\sum\limits_{x\in\mathfrak{X}} \Pr({Y}=x,{A}^2=00)H(\Tilde{{X}}|{Y}=x,{A}^2=00)\label{eq:HXtilde1}
\end{align}
}{
\begingroup
Thus, we have
\begin{align}
H(\Tilde{{X}}|{Y},{A}^2)&=\delta^2(1-\alpha)^2 2H(X) \notag\\&\hspace{1em} +\delta^2(1-\alpha)\alpha H(X) \notag\\&\hspace{1em} +\delta^2\alpha(1-\alpha)H(X)\notag\\&\hspace{1em}+\sum\limits_{x\in\mathfrak{X}} \Pr({Y}=x,{A}^2=00) \notag\\&\hspace{4em} H(\Tilde{{X}}|{Y}=x,{A}^2=00)\\
&= 2\delta^2(1-\alpha)H(X) \notag\\&\hspace{1em} +\sum\limits_{x\in\mathfrak{X}} \Pr({Y}=x,{A}^2=00) \notag\\&\hspace{4em} H(\Tilde{{X}}|{Y}=x,{A}^2=00)\label{eq:HXtilde1}
\end{align}
\endgroup
}
We first compute $\Pr({Y}=x,{A}^2=00)$. For any $x\in\mathfrak{X}$, we have
\iftoggle{singlecolumn}{
\begin{align}
\Pr({Y}=x,{A}^2=00) &= \sum\limits_{{x}^2\in\mathfrak{X}^2}\Pr({Y}=x,{A}^2=00,{X}^2={x}^2)\\
&= \Pr({Y}=x,{A}^2=00,{X}^2=xx) + 2\sum\limits_{y\neq x}\Pr({Y}=x,{A}^2=00,{X}^2=xy)
\\
&= p_X(x)^2 2\delta(1-\delta)(1-\alpha) + 2\sum\limits_{y\neq x} p_X(x) p_X(y) \delta(1-\delta)(1-\alpha)
\\
&= 2\delta(1-\delta)(1-\alpha)p_X(x)\sum\limits_{y\in\mathfrak{X}} p_X(y)\\
&= 2\delta(1-\delta)(1-\alpha)p_X(x)\label{eq:HXtilde2}
\end{align}
}{
\begin{align}
\Pr&({Y}=x,{A}^2=00)\notag \\&= \sum\limits_{{x}^2\in\mathfrak{X}^2}\Pr({Y}=x,{A}^2=00,{X}^2={x}^2)\\
&= \Pr({Y}=x,{A}^2=00,{X}^2=xx) \notag\\&\hspace{1em} + 2\sum\limits_{y\neq x}\Pr({Y}=x,{A}^2=00,{X}^2=xy)
\\
&= p_X(x)^2 2\delta(1-\delta)(1-\alpha) \notag\\&\hspace{1em} + 2\sum\limits_{y\neq x} p_X(x) p_X(y) \delta(1-\delta)(1-\alpha)
\\
&= 2\delta(1-\delta)(1-\alpha)p_X(x)\sum\limits_{y\in\mathfrak{X}} p_X(y)\\
&= 2\delta(1-\delta)(1-\alpha)p_X(x)\label{eq:HXtilde2}
\end{align}
}
Now, we compute $H(\Tilde{{X}}|{Y}=x,{A}^2=00)$. For any $x\in\mathfrak{X}$ we have $2|\mathfrak{X}|-1$ possible patterns for $\Tilde{{X}}$, given that ${Y}=x$. Of these, $2|\mathfrak{X}|-2$ patterns have probabilities proportional to $p_X(x) p_X(y)$ for $y\in \mathfrak{X}\setminus\{x\}$, and the remaining pattern has probability proportional to $2 p_X(x)^2$. Thus we have
\iftoggle{singlecolumn}{
\begin{align}
H(\Tilde{{X}}|{Y}=x,{A}^2=00)&= H\Big(\frac{p_X(1) p_X(x)}{c},\frac{p_X(x) p_X(1)}{c},\dots,\frac{2 (p_X(x))^2}{c},\dots,\frac{p_X(|\mathfrak{X}|) p_X(x)}{c},\frac{p_X(x) p_X(|\mathfrak{X}|)}{c}\Big)
\end{align}
}{
\begin{align}
H&(\Tilde{{X}}|{Y}=x,{A}^2=00)\notag\\&= H\Big(\frac{p_X(1) p_X(x)}{c},\frac{p_X(x) p_X(1)}{c},\notag\\&\hspace{3em}\dots,\frac{2 p_X(x)^2}{c},\dots,\notag\\&\hspace{4em}\frac{p_X(|\mathfrak{X}|) p_X(x)}{c},\frac{p_X(x) p_X(|\mathfrak{X}|)}{c}\Big)
\end{align}
}
where the normalization constant is $c=2p_X(x)$. Thus,
\iftoggle{singlecolumn}{
\begin{align}
H(\Tilde{{X}}|{Y}=x,{A}^2=00)&= H\Big(\frac{p_X(1) }{2},\frac{ p_X(1)}{2},\dots, p_X(x),\dots,\frac{p_X(|\mathfrak{X}|)}{2},\frac{ p_X(|\mathfrak{X}|)}{2}\Big)\\
&= H(X)+1-p_X(x)\label{eq:HXtilde3}
\end{align}
}{
\begin{align}
H(\Tilde{{X}}|{Y}=x,&{A}^2=00)\notag\\&= H\Big(\frac{p_X(1) }{2},\frac{ p_X(1)}{2},\notag\\&\hspace{3em}\dots, p_X(x),\dots,\notag\\&\hspace{4em}\frac{p_X(|\mathfrak{X}|)}{2},\frac{ p_X(|\mathfrak{X}|)}{2}\Big)\\
&= H(X)+1-p_X(x)\label{eq:HXtilde3}
\end{align}
}
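The closed form $H(X)+1-p_X(x)$ can be confirmed numerically: the conditional distribution puts mass $p_X(x)$ on the doubled pattern and splits $p_X(y)$ evenly between the two orderings for each $y\neq x$ (the ternary marginal below is an illustrative assumption):

```python
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

p = [0.5, 0.3, 0.2]      # illustrative marginal p_X
for x in range(len(p)):  # condition on each survivor value Y = x
    dist = [p[x]]        # the doubled pattern (x, x)
    for y in range(len(p)):
        if y != x:
            dist += [p[y] / 2, p[y] / 2]   # the patterns (x, y) and (y, x)
    assert abs(entropy(dist) - (entropy(p) + 1 - p[x])) < 1e-12
```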
\iftoggle{singlecolumn}{
Combining \eqref{eq:HXtilde1}-\eqref{eq:HXtilde3}, we can compute $H(\Tilde{{X}}|{Y},{A}^2)$ as
\begin{align}
H(\Tilde{{X}}|{Y},{A}^2)&=2\delta^2(1-\alpha)H(X)+\sum\limits_{x\in\mathfrak{X}} 2\delta(1-\delta)(1-\alpha)p_X(x) [H(X)+1-p_X(x)]\\
&= 2\delta^2(1-\alpha)H(X)+2\delta(1-\delta)(1-\alpha) \left(H(X)+1-\hat{q}\right)\\
&= 2\delta(1-\alpha)H(X)+2\delta(1-\delta)(1-\alpha)\left(1-\hat{q}\right)\label{eq:HXtildeYAfinal}
\end{align}
}{
\begingroup
\allowdisplaybreaks
Combining \eqref{eq:HXtilde1}-\eqref{eq:HXtilde3}, we can compute $H(\Tilde{{X}}|{Y},{A}^2)$ as
\begin{align}
H(\Tilde{{X}}|{Y},{A}^2)&=2\delta^2(1-\alpha)H(X)\notag\\&\hspace{1em}+\sum\limits_{x\in\mathfrak{X}} 2\delta(1-\delta)(1-\alpha)p_X(x)\notag\\&\hspace{5em} [H(X)+1-p_X(x)]\\
&= 2\delta^2(1-\alpha)H(X)\notag\\&\hspace{1em}+2\delta(1-\delta)(1-\alpha) \notag\\&\hspace{3em}\left(H(X)+1-\hat{q}\right)\\
&= 2\delta(1-\alpha)H(X)\notag\\&\hspace{1em}+2\delta(1-\delta)(1-\alpha)(1-\hat{q})\label{eq:HXtildeYAfinal}
\end{align}
\endgroup
}
Finally, combining \eqref{eq:HXtildeA} and \eqref{eq:HXtildeYAfinal}, we obtain
\iftoggle{singlecolumn}{
\begin{align}
I(\Tilde{{X}};{Y}|{A}^2)
&=H(\Tilde{{X}}|{A}^2)-H(\Tilde{{X}}|{Y},{A}^2)\\
&= 2(1-\alpha\delta)H(X)-2\delta(1-\alpha)H(X)-2\delta(1-\delta)(1-\alpha)\left(1-\hat{q}\right)\\
&=2(1-\delta)H(X)-2\delta(1-\delta)(1-\alpha)\left(1-\hat{q}\right)\label{eqn:MI2}
\end{align}
}{
\begin{align}
I(\Tilde{{X}};{Y}|{A}^2)
&=H(\Tilde{{X}}|{A}^2)-H(\Tilde{{X}}|{Y},{A}^2)\\
&= 2(1-\alpha\delta)H(X)-2\delta(1-\alpha)H(X)\notag\\
&\hspace{1em}-2\delta(1-\delta)(1-\alpha)\left(1-\hat{q}\right)\\
&=2(1-\delta)H(X)\notag\\&\hspace{1em}-2\delta(1-\delta)(1-\alpha)\left(1-\hat{q}\right)\label{eqn:MI2}
\end{align}
}
Thus, we have
\iftoggle{singlecolumn}{
\begin{align}
\frac{1}{2}I({X}^2;{Y}^K,{A}^2)
&=(1-\delta)H(X)-\delta(1-\delta)(1-\alpha)\left(1-\hat{q}\right)
\end{align}
}{
\begin{align}
\frac{1}{2}I({X}^2;{Y}^K,{A}^2)
&=(1-\delta)H(X)\notag\\&\hspace{1em}-\delta(1-\delta)(1-\alpha)\left(1-\hat{q}\right)
\end{align}
}
concluding the proof.
\qed
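As a sanity check on the corollary, the closed-form expression can be compared against a brute-force evaluation of $I(X^2;Y,A^2)$ that enumerates every combination of symbols and of retained/detected-deleted/undetected-deleted entry states, with $\hat{q}=\sum_x p_X(x)^2$ as in the derivation (the parameter values and the ternary marginal are illustrative assumptions):

```python
import math
from itertools import product

# Illustrative parameters (assumptions for this check)
delta, alpha = 0.3, 0.6                   # deletion / detection probabilities
pX = {0: 0.5, 1: 0.3, 2: 0.2}             # example marginal p_X

# Per-entry states: ((deleted, detected), probability)
states = [((0, 0), 1 - delta),            # retained
          ((1, 1), delta * alpha),        # deleted and detected
          ((1, 0), delta * (1 - alpha))]  # deleted, undetected

joint = {}                                # (x2, (y, a2)) -> probability
for x2 in product(pX, repeat=2):
    px2 = pX[x2[0]] * pX[x2[1]]
    for (s1, p1), (s2, p2) in product(states, repeat=2):
        y = tuple(x for x, s in zip(x2, (s1, s2)) if s[0] == 0)  # retained symbols
        a2 = (s1[1], s2[1])                                      # detection pattern
        key = (x2, (y, a2))
        joint[key] = joint.get(key, 0.0) + px2 * p1 * p2

p_obs, p_in = {}, {}                      # marginals of (Y, A^2) and X^2
for (x2, obs), pr in joint.items():
    p_in[x2] = p_in.get(x2, 0.0) + pr
    p_obs[obs] = p_obs.get(obs, 0.0) + pr
I = sum(pr * math.log2(pr / (p_in[x2] * p_obs[obs]))
        for (x2, obs), pr in joint.items() if pr > 0)

HX = -sum(pr * math.log2(pr) for pr in pX.values())
q_hat = sum(pr * pr for pr in pX.values())
closed = 2 * ((1 - delta) * HX - delta * (1 - delta) * (1 - alpha) * (1 - q_hat))
assert abs(I - closed) < 1e-9
```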
\section{Proof of Corollary~\ref{cor:W1conversenoisybinaryub}}
\label{proof:W1conversenoisybinaryub}
We start by observing that
\iftoggle{singlecolumn}{
\begin{align}
I({X}^2;{Y},{A}^2)&= I({X}^2;{Y},|{Y}|,{A}^2)\\
&= H({X}^2)-H({X}^2|{Y},|{Y}|,{A}^2)\\
&= 2H(X)-H({X}^2|{Y},|{Y}|,{A}^2)\label{eq:MIXYAnoisy1}
\end{align}
}{
\begin{align}
I({X}^2;{Y},{A}^2)&= I({X}^2;{Y},|{Y}|,{A}^2)\\
&= H({X}^2)-H({X}^2|{Y},|{Y}|,{A}^2)\\
&= 2H(X)-H({X}^2|{Y},|{Y}|,{A}^2)\label{eq:MIXYAnoisy1}
\end{align}
}
\iftoggle{singlecolumn}{
Furthermore, we have
\begin{align}
H({X}^2|{Y},|{Y}|,{A}^2)&= \sum\limits_{i=0}^2 \Pr(|{Y}|=i) H({X}^2|{Y},|{Y}|=i,{A}^2)\\
&=\delta^2 H({X}^2|{Y},|{Y}|=0,{A}^2) \notag\\
&\hspace{2em}+ 2\delta(1-\delta) H({X}^2|{Y},|{Y}|=1,{A}^2)\notag\\
&\hspace{2em}+ (1-\delta)^2 H({X}^2|{Y},|{Y}|=2,{A}^2)\\
&= \delta^2 2 H(X) \notag\\
&\hspace{2em}+ 2\delta(1-\delta) H({X}^2|{Y},|{Y}|=1,{A}^2)\notag\\
&\hspace{2em}+ (1-\delta)^2 2 H(X|Y)\\
&= \delta^2 2 H(X) \notag\\
&\hspace{2em}+ 2\delta(1-\delta)\alpha [H(X)+H(X|Y)]\notag\\
&\hspace{2em}+ 2\delta(1-\delta)(1-\alpha) H({X}^2|{Y},|{Y}|=1,{A}^2=00)\notag\\
&\hspace{2em}+ (1-\delta)^2 2 H(X|Y)
\end{align}
}{
\begingroup
Furthermore, we have
\begin{align}
H&({X}^2|{Y},|{Y}|,{A}^2)\notag\\&= \sum\limits_{i=0}^2 \Pr(|{Y}|=i) H({X}^2|{Y},|{Y}|=i,{A}^2)\\
&=\delta^2 H({X}^2|{Y},|{Y}|=0,{A}^2) \notag\\
&\hspace{1em}+ 2\delta(1-\delta) H({X}^2|{Y},|{Y}|=1,{A}^2)\notag\\
&\hspace{1em}+ (1-\delta)^2 H({X}^2|{Y},|{Y}|=2,{A}^2)\\
&= \delta^2 2 H(X) \notag\\
&\hspace{1em}+ 2\delta(1-\delta) H({X}^2|{Y},|{Y}|=1,{A}^2)\notag\\
&\hspace{1em}+ (1-\delta)^2 2 H(X|Y)\\
&= \delta^2 2 H(X) \notag\\
&\hspace{1em}+ 2\delta(1-\delta)\alpha [H(X)+H(X|Y)]\notag\\
&\hspace{1em}+ 2\delta(1-\delta)(1-\alpha) H({X}^2|{Y},|{Y}|=1,{A}^2=00)\notag\\
&\hspace{1em}+ (1-\delta)^2 2 H(X|Y)
\end{align}
\endgroup
}
Note that we can rewrite $H({X}^2|{Y},|{Y}|=1,{A}^2=00)$ as
\iftoggle{singlecolumn}{
\begin{align}
H({X}^2|{Y},|{Y}|=1,{A}^2=00)&=H({X}^2|{Y},|{Y}|=1)\\
&= 2H(X)- I({X}^2;{Y}||{Y}|=1)\\
&= 2H(X) - [H(Y)-H({Y}|{X}^2,|{Y}|=1)]
\end{align}
}{
\begin{align}
H&({X}^2|{Y},|{Y}|=1,{A}^2=00)\notag\\&\hspace{1em}=H({X}^2|{Y},|{Y}|=1)\\
&\hspace{1em}= 2H(X)- I({X}^2;{Y}||{Y}|=1)\\
&\hspace{1em}= 2H(X) - [H(Y)-H({Y}|{X}^2,|{Y}|=1)]
\end{align}
}
where we have
\iftoggle{singlecolumn}{
\begin{align}
H({Y}|{X}^2,|{Y}|=1)&= \sum\limits_{{x}^2\in\mathfrak{X}^2} \Pr({X}^2={x}^2) H({Y}|{X}^2={x}^2,|{Y}|=1)\label{eq:HYgivenXlengthY1}
\end{align}
}{
\begin{align}
H&({Y}|{X}^2,|{Y}|=1)\notag\\
\hspace{1em}&= \sum\limits_{{x}^2\in\mathfrak{X}^2} \Pr({X}^2={x}^2)\notag\\[-1em]&\hspace{6em} H({Y}|{X}^2={x}^2,|{Y}|=1)\label{eq:HYgivenXlengthY1}
\end{align}
}
Writing the sum in \eqref{eq:HYgivenXlengthY1} explicitly, we obtain
\iftoggle{singlecolumn}{
\begin{align}
H({Y}|{X}^2,|{Y}|=1)&= (1-p)^2 H({Y}|{X}^2=00,|{Y}|=1) + p^2 H({Y}|{X}^2=11,|{Y}|=1)\notag\\
&\hspace{1em} +p(1-p) H({Y}|{X}^2=01,|{Y}|=1) + p(1-p) H({Y}|{X}^2=10,|{Y}|=1)
\end{align}
}{
\begin{align}
H({Y}|&{X}^2,|{Y}|=1)\notag\\
&= (1-p)^2 H({Y}|{X}^2=00,|{Y}|=1) \notag\\&\hspace{1em}+ p^2 H({Y}|{X}^2=11,|{Y}|=1)\notag\\
&\hspace{1em} +p(1-p) H({Y}|{X}^2=01,|{Y}|=1) \notag\\&\hspace{1em}+ p(1-p) H({Y}|{X}^2=10,|{Y}|=1)
\end{align}
}
Observing the following,
\iftoggle{singlecolumn}{
\begin{align}
H({Y}|{X}^2=00,|{Y}|=1)&= H(Y|X=0)\\
H({Y}|{X}^2=11,|{Y}|=1)&=H(Y|X=1)\\
H({Y}|{X}^2=01,|{Y}|=1)&= H(V)\\
H({Y}|{X}^2=10,|{Y}|=1)&= H(V)\\
H(Y|X=0) + H(Y|X=1)&= 2\left[\frac{1}{2} H(Y|X=0) + \frac{1}{2} H(Y|X=1)\right] \\
&= 2 H(V|U)
\end{align}
}{
\begin{align}
H({Y}|{X}^2=00,|{Y}|=1)&= H(Y|X=0)\\
H({Y}|{X}^2=11,|{Y}|=1)&=H(Y|X=1)\\
H({Y}|{X}^2=01,|{Y}|=1)&= H(V)\\
H({Y}|{X}^2=10,|{Y}|=1)&= H(V)
\end{align}
\begin{align}
H(&Y|X=0) + H(Y|X=1)\notag\\
&= 2\left[\frac{1}{2} H(Y|X=0) + \frac{1}{2} H(Y|X=1)\right] \\
&= 2 H(V|U)
\end{align}
}
we obtain
\iftoggle{singlecolumn}{
\begin{align}
H({Y}|{X}^2,|{Y}|=1)&= (1-p) H(Y|X=0)-p(1-p)H(Y|X=0)\notag\\
&\hspace{1em}+ p H(Y|X=1)-p(1-p)H(Y|X=1)\notag\\
&\hspace{1em} +2p(1-p) H(V)\\
&= H(Y|X)+2p(1-p) I(U;V)
\end{align}
}{
\begin{align}
H({Y}|{X}^2,|{Y}|=1)&= (1-p) H(Y|X=0)\notag\\&\hspace{1em}-p(1-p)H(Y|X=0)\notag\\
&\hspace{1em}+ p H(Y|X=1)\notag\\&\hspace{1em}-p(1-p)H(Y|X=1)\notag\\
&\hspace{1em} +2p(1-p) H(V)\\
&= H(Y|X)+2p(1-p) I(U;V)
\end{align}
}
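The last two steps above are purely algebraic and can be verified numerically. The following sketch (not part of the proof) checks the identity $H({Y}|{X}^2,|{Y}|=1)=H(Y|X)+2p(1-p)I(U;V)$ for a hypothetical binary-input channel, with $U\sim\mathrm{Bernoulli}(1/2)$ and $V$ the corresponding channel output; the transition matrix is an illustrative assumption:

```python
import math

def H(dist):
    """Shannon entropy (bits) of a probability vector."""
    return -sum(q * math.log2(q) for q in dist if q > 0)

p = 0.3                       # input distribution: X ~ Bernoulli(p)
W = [[0.9, 0.1], [0.2, 0.8]]  # assumed transition matrix p_{Y|X}(y|x)

H_Y_given_0 = H(W[0])
H_Y_given_1 = H(W[1])
H_Y_given_X = (1 - p) * H_Y_given_0 + p * H_Y_given_1

# (U, V): the same channel driven by a uniform input U ~ Bernoulli(1/2)
pV = [0.5 * W[0][y] + 0.5 * W[1][y] for y in range(2)]
H_V_given_U = 0.5 * (H_Y_given_0 + H_Y_given_1)
I_UV = H(pV) - H_V_given_U

# left-hand side: H(Y | X^2, |Y| = 1) averaged over the pair X^2
lhs = ((1 - p) ** 2 * H_Y_given_0 + p ** 2 * H_Y_given_1
       + 2 * p * (1 - p) * H(pV))
rhs = H_Y_given_X + 2 * p * (1 - p) * I_UV
assert abs(lhs - rhs) < 1e-12
```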
Hence, we have
\iftoggle{singlecolumn}{
\begin{align}
H({X}^2|{Y},|{Y}|=1,{A}^2=00)&=2H(X)-I(X;Y) + 2p(1-p) I(U;V) \label{eq:MIXYAnoisyn}
\end{align}
}{
\begin{align}
H({X}^2|{Y},|{Y}|=1,{A}^2=00)&=2H(X)-I(X;Y) \notag\\&\hspace{1em}+ 2p(1-p) I(U;V) \label{eq:MIXYAnoisyn}
\end{align}
}
Combining \eqref{eq:MIXYAnoisy1}-\eqref{eq:MIXYAnoisyn}, we have
\iftoggle{singlecolumn}{
\begin{align}
\frac{1}{2} I({X}^2;{Y}^{K},{A}^2) = (1-\delta) I(X;Y)-2 \delta(1-\delta) (1-\alpha)p(1-p) I(U;V)
\end{align}
}{
\begin{align}
\frac{1}{2} I({X}^2;{Y}^{K},{A}^2) &= (1-\delta) I(X;Y)\notag\\&\hspace{0.4em}-2 \delta(1-\delta) (1-\alpha)p(1-p) I(U;V)
\end{align}
}
concluding the proof.\qed
\section{Proof of Theorem~\ref{thm:adversarialWm}}
\label{proof:adversarialWm}
First, we focus on $\delta\le 1-\hat{q}$ and prove the achievability part. For a given pair of matching rows, WLOG, $X_1^n$ of $\mathbf{D}^{(1)}$ and $Y_l^{K_n}$ of $\mathbf{D}^{(2)}$ with $\boldsymbol{\Theta}_n(1)=l$, let $P_e\triangleq \Pr(\hat{\boldsymbol{\Theta}}_n(1)\neq l)$ be the probability of error of the following matching scheme:
\begin{enumerate}[label=\textbf{\arabic*)},leftmargin=1.3\parindent]
\item Construct the collapsed histogram vectors $\tilde{{H}}^{(1),n}$ and $\tilde{{H}}^{(2),K_n}$ with entries
\iftoggle{singlecolumn}{
\begin{align}
\tilde{H}_j^{(r)}&=\sum\limits_{i=1}^{m_n} \mathbbm{1}_{\left[\tilde{D}^{(r)}_{i,j}=2 \right]},\quad
\begin{cases}
\forall j\in [n],&\text{if } r=1 \\
\forall j\in [K_n] & \text{if } r=2
\end{cases}
\end{align}
}{
\begin{align}
\tilde{H}_j^{(r)}&=\sum\limits_{i=1}^{m_n} \mathbbm{1}_{\left[\tilde{D}^{(r)}_{i,j}=2 \right]},\quad
\begin{cases}
\forall j\in [n],&\text{if } r=1 \\
\forall j\in [K_n] & \text{if } r=2
\end{cases}
\end{align}
}
where $K_n$ denotes the column size of $\mathbf{D}^{(2)}$.
\item Check the uniqueness of the entries $\tilde{H}^{(1)}_j$, $j\in[n]$, of $\tilde{{H}}^{(1),n}$. If at least two of them are identical, declare a \emph{detection error}, whose probability is denoted by $\kappa_n$. Otherwise, proceed with Step~3.
\item $\forall i\in[n]$, if $\nexists j\in[K_n]$ such that $\tilde{H}_i^{(1)}=\tilde{H}_j^{(2)}$, declare the $i$\textsuperscript{th} column of $\mathbf{D}^{(1)}$ deleted, assigning $i\in \hat{I}_{\text{del}}$. Note that conditioned on Step~2, this step is error-free.
\item Match the $l$\textsuperscript{th} row $Y^{K_n}_{l}$ of $\mathbf{D}^{(2)}$ with the $1$\textsuperscript{st} row $X^n_1$ of $\mathbf{D}^{(1)}$, assigning $\hat{\boldsymbol{\Theta}}_n(1)=l$ if the $1$\textsuperscript{st} row $\hat{X}_1^{K_n}(\hat{I}_{\text{del}})$ of $\hat{\mathbf{D}}^{(1)}$ is the only row of $\hat{\mathbf{D}}^{(1)}$ equal to $Y^{K_n}_{l}$ where $\hat{X}_i^{K_n}(\hat{I}_{\text{del}})$ is obtained by discarding the elements of $X_i^n$ whose indices lie in $\hat{I}_{\text{del}}$. Otherwise, declare a \emph{collision error}.
\end{enumerate}
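The four steps above can be sketched in code. The following is a simplified, hypothetical illustration for a binary alphabet without column repetitions, where the collapsed histogram of a column reduces to its column sum; the toy databases `D1` and `D2` below are assumptions for illustration, not part of the proof:

```python
def match_rows(D1, D2):
    """Sketch of the histogram-based matching scheme (Steps 1-4),
    simplified to a binary alphabet: histogram = number of 1s per column.
    Returns {row of D2 -> row of D1}, or None on a detection/collision error."""
    m, n = len(D1), len(D1[0])
    K = len(D2[0])
    # Step 1: column histograms of both databases.
    h1 = [sum(row[j] for row in D1) for j in range(n)]
    h2 = {sum(row[j] for row in D2) for j in range(K)}
    # Step 2: histogram entries of D1 must be unique, else a detection error.
    if len(set(h1)) != n:
        return None
    # Step 3: columns of D1 whose histogram is absent from D2 were deleted.
    kept = [j for j in range(n) if h1[j] in h2]
    # Step 4: match each row of D2 to the unique retained row of D1.
    theta = {}
    for l, y in enumerate(D2):
        cand = [i for i in range(m) if [D1[i][j] for j in kept] == list(y)]
        if len(cand) != 1:
            return None  # collision error
        theta[l] = cand[0]
    return theta

# Toy example: column sums of D1 are 0,1,2,3; the first column is deleted.
D1 = [[0, 1, 1, 1],
      [0, 0, 1, 1],
      [0, 0, 0, 1]]
D2 = [[0, 0, 1],   # = row 2 of D1 without column 0
      [1, 1, 1],   # = row 0
      [0, 1, 1]]   # = row 1
assert match_rows(D1, D2) == {0: 2, 1: 0, 2: 1}
```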
Let $I(\delta)$ be the set of all deletion patterns with up to $n\delta$ deletions. For the matching rows $X^n_1$, $Y^k_l$ of $\mathbf{D}^{(1)}$ and $\mathbf{D}^{(2)}$, define the pairwise adversarial collision probability between $X^n_1$ and $X^n_i$ for any $i\in[m_n]\setminus\{1\}$ as
\iftoggle{singlecolumn}{
\begin{align}
P_{\text{col,i}}&\triangleq \Pr(\exists \hat{I}_{\text{del}} \in I(\delta):\: \hat{X}_i^{K_n}(\hat{I}_{\text{del}})=Y_l^{K_n})\\
&=\Pr(\exists \hat{I}_{\text{del}} \in I(\delta):\: \hat{X}_i^{K_n}(\hat{I}_{\text{del}})=\hat{X}_1^{K_n}(\hat{I}_{\text{del}})).
\end{align}
}{
\begin{align}
P_{\text{col,i}}&\triangleq \Pr(\exists \hat{I}_{\text{del}} \in I(\delta):\: \hat{X}_i^{K_n}(\hat{I}_{\text{del}})=Y_l^{K_n})\\
&=\Pr(\exists \hat{I}_{\text{del}} \in I(\delta):\: \hat{X}_i^{K_n}(\hat{I}_{\text{del}})=\hat{X}_1^{K_n}(\hat{I}_{\text{del}})).
\end{align}
}
Note that the statement $\exists \hat{I}_{\text{del}} \in I(\delta):\: \hat{X}_i^{K_n}(\hat{I}_{\text{del}})=\hat{X}_1^{K_n}(\hat{I}_{\text{del}})$ is equivalent to the Hamming distance between $X^n_i$ and $X^n_1$ being upper bounded by $n\delta$. In other words,
\iftoggle{singlecolumn}{
\begin{align}
P_{\text{col,i}} &= \Pr(d_H(X_1^n,X_i^n)\le n\delta)
\end{align}
}{
\begin{align}
P_{\text{col,i}} &= \Pr(d_H(X_1^n,X_i^n)\le n\delta)
\end{align}
}
where
\iftoggle{singlecolumn}{
\begin{align}
d_H(X_1^n,X_i^n) &= \sum\limits_{j=1}^n \mathbbm{1}_{[X_{1,j}\neq X_{i,j}]}
\end{align}
}{
\begin{align}
d_H(X_1^n,X_i^n) &= \sum\limits_{j=1}^n \mathbbm{1}_{[X_{1,j}\neq X_{i,j}]}
\end{align}
}
Note that due to the \emph{i.i.d.} nature of the database elements, $d_H(X_1^n,X_i^n)\sim \text{Binom}(n,1-\hat{q})$. Thus, for any $\delta\le 1-\hat{q}$, using Chernoff bound~\cite[Lemma 4.7.2]{ash2012information}, we have
\iftoggle{singlecolumn}{
\begin{align}
P_{\text{col,i}} &= \Pr(d_H(X_1^n,X_i^n)\le n\delta)\\
&\le 2^{-n D(\delta\|1-\hat{q})}\label{eq:advchernoff}
\end{align}
}{
\begin{align}
P_{\text{col,i}} &= \Pr(d_H(X_1^n,X_i^n)\le n\delta)\\
&\le 2^{-n D(\delta\|1-\hat{q})}\label{eq:advchernoff}
\end{align}
}
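The Chernoff bound \eqref{eq:advchernoff} can be sanity-checked against the exact binomial tail; the parameter values below are illustrative assumptions:

```python
import math

def kl(a, b):
    """Binary KL divergence D(a || b) in bits, for 0 < a, b < 1."""
    return a * math.log2(a / b) + (1 - a) * math.log2((1 - a) / (1 - b))

def binom_cdf(n, q, k):
    """Pr(Binomial(n, q) <= k), computed exactly."""
    return sum(math.comb(n, j) * q**j * (1 - q)**(n - j) for j in range(k + 1))

n, q_hat, delta = 50, 0.5, 0.3               # illustrative; delta <= 1 - q_hat
tail = binom_cdf(n, 1 - q_hat, int(n * delta))
assert tail <= 2 ** (-n * kl(delta, 1 - q_hat))
```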
Therefore, given that the correct labeling for $Y^k_l\in\mathbf{D}^{(2)}$ is $X^n_1\in\mathbf{D}^{(1)}$, the probability of error $P_e$ can be bounded as
\iftoggle{singlecolumn}{
\begin{align}
P_e &\le \Pr(\exists i\in[m_n]\setminus\{1\}: \hat{X}_i^{K_n}=\hat{X}_1^{K_n})\\
&\le \sum\limits_{i=2}^{2^{n R}} P_{\text{col,i}}+ \kappa_n\\
&\le 2^{n R} P_{\text{col,2}}+\kappa_n\label{eq:Perowwiseadv}
\end{align}
}{
\begin{align}
P_e &\le \Pr(\exists i\in[m_n]\setminus\{1\}: \hat{X}_i^{K_n}=\hat{X}_1^{K_n})\\
&\le \sum\limits_{i=2}^{2^{n R}} P_{\text{col,i}}+ \kappa_n\\
&\le 2^{n R} P_{\text{col,2}}+\kappa_n\label{eq:Perowwiseadv}
\end{align}
}
where \eqref{eq:Perowwiseadv} follows from the fact that the rows are \emph{i.i.d.} and thus $P_{\text{col,i}}=P_{\text{col,2}},\:\forall i\in[m_n]\setminus\{1\}$. Combining \eqref{eq:advchernoff}-\eqref{eq:Perowwiseadv}, we get
\iftoggle{singlecolumn}{
\begin{align}
P_e &\le 2^{n R} \Pr(d_H(X_1^n,X_i^n)\le n\delta) + \kappa_n\\
&\le 2^{n R} 2^{-n D(\delta\|1-\hat{q})}+ \kappa_n\\
&= 2^{-n \left[D(\delta\|1-\hat{q})-R\right]}+\kappa_n
\end{align}
}{
\begin{align}
P_e &\le 2^{n R} \Pr(d_H(X_1^n,X_i^n)\le n\delta) + \kappa_n\\
&\le 2^{n R} 2^{-n D(\delta\|1-\hat{q})}+ \kappa_n\\
&= 2^{-n \left[D(\delta\|1-\hat{q})-R\right]}+\kappa_n
\end{align}
}
By Lemma~\ref{lem:histogram}, $\kappa_n\to0$ as $n\to \infty$. Thus, we argue that any rate $R$ satisfying
\iftoggle{singlecolumn}{
\begin{align}
R&<D(\delta\|1-\hat{q})
\end{align}
}{
\begin{align}
R&<D(\delta\|1-\hat{q})
\end{align}
}
is achievable.
Now we prove the converse part. Suppose $P_e\to0$. Then, we have
\iftoggle{singlecolumn}{
\begin{align}
P_e &= \Pr(\exists i\in[m_n]\setminus\{1\}: d_H(X_1^n,X_i^n)\le n\delta)\\
&= 1 - \Pr(\forall i\in[m_n]\setminus\{1\}: d_H(X_1^n,X_i^n)> n\delta)\label{eq:advconverse1}\\
&= 1 -\prod\limits_{i=2}^{m_n} \Pr(d_H(X_1^n,X_i^n)> n\delta)\\
&= 1 -\prod\limits_{i=2}^{m_n} [1-\Pr(d_H(X_1^n,X_i^n)\le n\delta)]\\
&= 1-[1-\Pr(d_H(X_1^n,X_2^n)\le n\delta)]^{m_n-1} \label{eq:advconverse2}
\end{align}
}{
\begin{align}
P_e &= \Pr(\exists i\in[m_n]\setminus\{1\}: d_H(X_1^n,X_i^n)\le n\delta)\\
&= 1 - \Pr(\forall i\in[m_n]\setminus\{1\}: d_H(X_1^n,X_i^n)> n\delta)\label{eq:advconverse1}\\
&= 1 -\prod\limits_{i=2}^{m_n} \Pr(d_H(X_1^n,X_i^n)> n\delta)\\
&= 1 -\prod\limits_{i=2}^{m_n} [1-\Pr(d_H(X_1^n,X_i^n)\le n\delta)]\\
&= 1-[1-\Pr(d_H(X_1^n,X_2^n)\le n\delta)]^{m_n-1} \label{eq:advconverse2}
\end{align}
}
where \eqref{eq:advconverse1}-\eqref{eq:advconverse2} follow from the rows of $\mathbf{D}^{(1)}$ being \emph{i.i.d.} Since $d_H(X_1^n,X_2^n)\sim\text{Binom}(n,1-\hat{q})$, for ${\delta\le 1-\hat{q}}$, from~\cite[Lemma 4.7.2]{ash2012information}, we obtain
\iftoggle{singlecolumn}{
\begin{align}
\Pr(d_H(X_1^n,X_2^n)\le n\delta) &\ge \frac{2^{-n D(\delta\|1-\hat{q})}}{\sqrt{2n}}\label{eq:advChernoffLB}
\end{align}
}{
\begin{align}
\Pr(d_H(X_1^n,X_2^n)\le n\delta) &\ge \frac{2^{-n D(\delta\|1-\hat{q})}}{\sqrt{2n}}\label{eq:advChernoffLB}
\end{align}
}
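The reverse Chernoff bound \eqref{eq:advChernoffLB} admits the same kind of numerical sanity check against the exact binomial tail (illustrative parameter values):

```python
import math

def kl(a, b):
    """Binary KL divergence D(a || b) in bits, for 0 < a, b < 1."""
    return a * math.log2(a / b) + (1 - a) * math.log2((1 - a) / (1 - b))

def binom_cdf(n, q, k):
    """Pr(Binomial(n, q) <= k), computed exactly."""
    return sum(math.comb(n, j) * q**j * (1 - q)**(n - j) for j in range(k + 1))

n, q_hat, delta = 50, 0.5, 0.3               # illustrative; delta <= 1 - q_hat
tail = binom_cdf(n, 1 - q_hat, int(n * delta))
lower = 2 ** (-n * kl(delta, 1 - q_hat)) / math.sqrt(2 * n)
assert lower <= tail
```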
Plugging \eqref{eq:advChernoffLB} into \eqref{eq:advconverse2}, we get
\iftoggle{singlecolumn}{
\begin{align}
P_e&\ge 1-\left[1-\frac{2^{-n D(\delta\|1-\hat{q})}}{\sqrt{2n}}\right]^{m_n-1}
\end{align}
}{
\begin{align}
P_e&\ge 1-\left[1-\frac{2^{-n D(\delta\|1-\hat{q})}}{\sqrt{2n}}\right]^{m_n-1}
\end{align}
}
Now let $y=-\frac{2^{-n D(\delta\|1-\hat{q})}}{\sqrt{2n}}\in(-1,0)$. Then, we get
\iftoggle{singlecolumn}{
\begin{align}
P_e&\ge 1-(1+y)^{m_n-1}
\end{align}
}{
\begin{align}
P_e&\ge 1-(1+y)^{m_n-1}
\end{align}
}
Since $y\ge -1$, and $m_n\in\mathbb{N}$, we have
\iftoggle{singlecolumn}{
\begin{align}
1+y(m_n-1)&\le (1+y)^{m_n-1}\le e^{y (m_n-1)}\label{eq:bernoulli}
\end{align}
}{
\begin{align}
1+y(m_n-1)&\le (1+y)^{m_n-1}\le e^{y (m_n-1)}\label{eq:bernoulli}
\end{align}
}
where the LHS of \eqref{eq:bernoulli} follows from Bernoulli's inequality~\cite[Theorem 1]{brannan2006first} and the RHS of \eqref{eq:bernoulli} follows from the fact that
\iftoggle{singlecolumn}{
\begin{align}
\forall x\in\mathbb{R},\hspace{1em} \forall r\in\mathbb{R}_{\ge0} \hspace{1em} (1+x)^r &\le e^{x r}
\end{align}
}{
\begin{align}
\forall x\in\mathbb{R},\hspace{1em} \forall r\in\mathbb{R}_{\ge0} \hspace{1em} (1+x)^r &\le e^{x r}
\end{align}
}
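Both sides of \eqref{eq:bernoulli} can be checked by brute force over the relevant range $y\in(-1,0)$ and integer exponents (a quick numerical sketch, not needed for the proof):

```python
import math
import random

random.seed(0)
for _ in range(1000):
    y = random.uniform(-0.99, -0.01)   # y in (-1, 0), as in the proof
    r = random.randint(1, 200)         # r plays the role of m_n - 1
    # Bernoulli's inequality and (1 + x)^r <= e^{xr}
    assert 1 + y * r <= (1 + y) ** r <= math.exp(y * r) + 1e-12
```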
Thus, we get
\begin{align}
P_e&\ge 1-(1+y)^{m_n-1}\\
&\ge 1-e^{y (m_n-1)}\\
&\ge 0
\end{align}
since $y<0$, $m_n-1>0$. Note that since $P_e\to 0$, by the Squeeze Theorem~\cite[Theorem 2]{brannan2006first}, we have
\iftoggle{singlecolumn}{
\begin{align}
\lim\limits_{n\to\infty} \left[1-e^{y (m_n-1)}\right]&= 0.
\end{align}
}{
\begin{align}
\lim\limits_{n\to\infty} \left[1-e^{y (m_n-1)}\right]&= 0.
\end{align}
}
This, in turn, implies $y (m_n-1)\to0$, and hence $y m_n\to0$, since the exponential function is continuous everywhere and $y\to 0$. In other words,
\iftoggle{singlecolumn}{
\begin{align}
\lim\limits_{n\to\infty}& -\frac{2^{-n D(\delta\|1-\hat{q})}}{\sqrt{2n}} m_n
=0.
\end{align}
}{
\begin{align}
\lim\limits_{n\to\infty}& -\frac{2^{-n D(\delta\|1-\hat{q})}}{\sqrt{2n}} m_n
=0.
\end{align}
}
Equivalently, from the continuity of the logarithm function, we get
\iftoggle{singlecolumn}{
\begin{align}
&-n D(\delta\|1-\hat{q})+\log m_n - \frac{1}{2}\log (2 n)\to - \infty\\
&-n \left[D(\delta\|1-\hat{q})-\frac{1}{n}\log m_n+\frac{\log (2 n)}{2n}\right]\to-\infty\\
&\lim\limits_{n\to\infty} \left[D(\delta\|1-\hat{q})-\frac{1}{n}\log m_n+\frac{\log (2 n)}{2n}\right]\ge 0
\end{align}
}{
\begin{align}
&-n D(\delta\|1-\hat{q})+\log m_n - \frac{1}{2}\log (2 n)\to - \infty\\
&-n \left[D(\delta\|1-\hat{q})-\frac{1}{n}\log m_n+\frac{\log (2 n)}{2n}\right]\to-\infty\\
&\lim\limits_{n\to\infty} \left[D(\delta\|1-\hat{q})-\frac{1}{n}\log m_n+\frac{\log (2 n)}{2n}\right]\ge 0
\end{align}
}
This implies
\iftoggle{singlecolumn}{
\begin{align}
D(\delta\|1-\hat{q})&\ge \lim\limits_{n\to\infty} \frac{1}{n}\log m_n\\
&= R
\end{align}
}{
\begin{align}
D(\delta\|1-\hat{q})&\ge \lim\limits_{n\to\infty} \frac{1}{n}\log m_n\\
&= R
\end{align}
}
finishing the proof for $\delta\le 1-\hat{q}$. Thus, we have showed that
\iftoggle{singlecolumn}{
\begin{align}
C^{\text{adv}}(\delta)&=D(\delta\|1-\hat{q})
\end{align}
}{
\begin{align}
C^{\text{adv}}(\delta)&=D(\delta\|1-\hat{q})
\end{align}
}
for $\delta\le 1-\hat{q}$.
We argue that for $\delta>1-\hat{q}$, the adversarial matching capacity is zero, by using two facts: \emph{i)} Since the adversarial deletion budget is an upper bound on deletions, the adversarial matching capacity satisfies
\iftoggle{singlecolumn}{
\begin{align}
C^{\text{adv}}(\delta)&\le C^{\text{adv}}(\delta^\prime),\hspace{1em}\forall \delta^\prime\le \delta
\end{align}
}{
\begin{align}
C^{\text{adv}}(\delta)&\le C^{\text{adv}}(\delta^\prime),\hspace{1em}\forall \delta^\prime\le \delta
\end{align}
}
and \emph{ii)} $C^{\text{adv}}(1-\hat{q})=0$. Thus, $\forall \delta>1-\hat{q}$, ${C^{\text{adv}}(\delta)=0}$. This finishes the proof. \qed
\section{Proof of Theorem~\ref{thm:mainresultseedless}}
\label{proof:mainresultseedless}
First, note that the converse part of Theorem~\ref{thm:mainresultseedless} (equation~\eqref{eq:seedlessconverse}) is trivially true since $C(\infty,0)$ is a non-decreasing function of the seed size $\Lambda_n$. Hence it is sufficient to prove the achievability part of Theorem~\ref{thm:mainresultseedless} (equation~\eqref{eq:seedlessachievable}).
For the achievability, we use a matching scheme which \emph{i)} utilizes replica detection and marker addition as done in Section~\ref{subsec:matchingschemeWm} and \emph{ii)} checks the existence of jointly typical subsequences as done in Section~\ref{subsec:achievabilityW1}. The matching scheme we propose is as follows:
\begin{enumerate}[label=\textbf{ \arabic*)},leftmargin=1.3\parindent]
\item Perform replica detection as in Section~\ref{subsec:replicadetection}. The probability of error of this step is denoted by $\rho_n$.
\item Based on the replica detection step, place markers between the noisy replica runs of different columns to obtain $\tilde{\mathbf{D}}^{(2)}$. Note that at this step we cannot detect runs of length 0 as done in Section~\ref{subsec:matchingschemeWm}. Therefore conditioned on the success of the replica detection we have $\tilde{K}_n=\sum_{j=1}^n \mathbbm{1}_{[S_j\neq0]}$ runs separated with markers.
\item Fix $\epsilon>0$. If $K_n<k\triangleq n(\mathbb{E}[S]-\epsilon)$ or $\tilde{K}_n<\hat{k}\triangleq n(1-\delta-\epsilon)$, declare an error, whose probability is denoted by $\kappa_n$, where $k$ and $\hat{k}$ are assumed to be integers for computational simplicity. Otherwise, proceed with the next step.
\item Match the $l$\textsuperscript{th} row $Y^{K_n}_{l}$ of $\mathbf{D}^{(2)}$ with the row $X^n_{i}$ of $\mathbf{D}^{(1)}$, assigning $\hat{\boldsymbol{\Theta}}_n(i)=l$, if $i$ is the only index in $[m_n]$ such that \emph{i)} $X^n_i$ is $\epsilon$-typical with respect to $p_X$ and \emph{ii)} $\tilde{X}_i^{n}$ contains a subsequence of length $\tilde{K}_n$, jointly $\epsilon$-typical with $\tilde{Y}^K_{l}$ with respect to $p_{X,Y,\hat{S}}$ where $\hat{S}\sim p_{\hat{S}}$ with
\iftoggle{singlecolumn}{
\begin{align}
p_{\hat{S}}(s) &=\begin{cases}
\frac{p_S(s)}{1-\delta}&\text{if }s\in \{1,\dots,s_{\max}\}\\
0 & \text{otherwise}
\end{cases}
\end{align}
}{
\begin{align}
p_{\hat{S}}(s) &=\begin{cases}
\frac{p_S(s)}{1-\delta}&\text{if }s\in \{1,\dots,s_{\max}\}\\
0 & \text{otherwise}
\end{cases}
\end{align}
}
and
\iftoggle{singlecolumn}{
\begin{align}
\Pr(Y^S=y^S|X=x,\hat{S}=s)&= \prod\limits_{j=1}^{s} p_{Y|X}(y_j|x).
\end{align}
}{
\begin{align}
\Pr(Y^S=y^S|X=x,\hat{S}=s)&= \prod\limits_{j=1}^{s} p_{Y|X}(y_j|x).
\end{align}
}
Otherwise, declare a \emph{collision} error.
\end{enumerate}
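Here $p_{\hat{S}}$ is simply the repetition distribution conditioned on the column not being deleted ($S\neq 0$); a minimal sketch with an assumed toy pmf:

```python
# Assumed toy repetition pmf with p_S(0) = delta (illustrative values only).
delta = 0.2
p_S = {0: delta, 1: 0.5, 2: 0.3}

# p_{S_hat}: restrict to s >= 1 and renormalize by 1 - delta.
p_S_hat = {s: p / (1 - delta) for s, p in p_S.items() if s >= 1}

assert abs(sum(p_S_hat.values()) - 1.0) < 1e-12
```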
Since additional runs in $\mathbf{D}^{(2)}$ and additional columns in each run would decrease the collision probability, we have
\iftoggle{singlecolumn}{
\begin{align}
\Pr(\text{collision between 1 and }i|K_n\ge k, \tilde{K}_n\ge \hat{k})\le \Pr(\text{collision between 1 and }i|K_n=k,\tilde{K}_n= \hat{k})
\end{align}
}{
\begin{align}
\Pr&(\text{collision between 1 and }i|K_n\ge k, \tilde{K}_n\ge \hat{k})\notag\\&\le \Pr(\text{collision between 1 and }i|K_n=k,\tilde{K}_n= \hat{k})
\end{align}
}
for any $i\in[m_n]\setminus\{1\}$. Thus, for the sake of simplicity, we can focus on the case $K_n=k$, $\tilde{K}_n=\hat{k}$, as it yields an upper bound on the error probability of our matching scheme.
Let $A_\epsilon^{(n)}(X)$ denote the set of $\epsilon$-typical (with respect to $p_X$) sequences of length $n$ and $A_\epsilon(X^{\hat{k}}|Y^k_l,\hat{S}^{\hat{k}})$ denote the set of sequences of length $\hat{k}$ jointly $\epsilon$-typical (with respect to $p_{X,Y,\hat{S}}$) with $Y^k_l$ conditioned on $\hat{S}^{\hat{k}}$. For the matching rows $X^n_1$, $Y^k_l$ of $\mathbf{D}^{(1)}$ and $\mathbf{D}^{(2)}$, define the pairwise collision probability between $X^n_1$ and $X^n_i$ where $i\neq 1$ as
\iftoggle{singlecolumn}{
\begin{align}
P_{\text{col,i}}\triangleq \Pr(X_{i}^{n}\in A_{\epsilon}^{(n)}(X) \text{ and }\exists {z}^{\hat{k}}\in A_\epsilon(X^{\hat{k}}|Y^k_l,\hat{S}^{\hat{k}}) \text{ which is a subsequence of } X_{i}^{n}).
\end{align}
}{
\begin{align}
P_{\text{col,i}}&\triangleq \Pr(X_{i}^{n}\in A_{\epsilon}^{(n)}(X) \text{ and }\exists {z}^{\hat{k}}\in A_\epsilon(X^{\hat{k}}|Y^k_l,\hat{S}^{\hat{k}})\notag\\&\hspace{2.3em} \text{ which is a subsequence of } X_{i}^{n}).
\end{align}
}
Therefore, given that the correct labeling for $Y^k_l\in\mathbf{D}^{(2)}$ is $X^n_1\in\mathbf{D}^{(1)}$, the probability of error $P_e$ can be bounded as
\iftoggle{singlecolumn}{
\begin{align}
P_e
&\le \Pr(\nexists {z}^{\hat{k}}: {z}^{\hat{k}}\in A_\epsilon(X^{\hat{k}}|Y^k_l,\hat{S}^{\hat{k}}) \text{ and }{z}^{\hat{k}} \text{ is a subsequence of } X_{1}^{n})\notag\\
&\qquad +\Pr(X^n_1\notin A_\epsilon^{(n)}(X))+\sum\limits_{i=2}^{2^{n R}} P_{\text{col,i}}+ \kappa_n+\rho_n\\
&\le 2\epsilon+\sum\limits_{i=2}^{2^{n R}} P_{\text{col,i}}+\kappa_n+\rho_n\\
&\le 2\epsilon+ 2^{n R} P_{\text{col,2}}+\kappa_n+\rho_n \label{eq:Perowwiseseedless}
\end{align}
}{
\begin{align}
P_e
&\le \Pr(\nexists {z}^{\hat{k}}: {z}^{\hat{k}}\in A_\epsilon(X^{\hat{k}}|Y^k_l,\hat{S}^{\hat{k}}) \text{ and }{z}^{\hat{k}}\notag\\&\hspace{7em} \text{ is a subsequence of } X_{1}^{n})\notag\\
&\hspace{0.5em} +\Pr(X^n_1\notin A_\epsilon^{(n)}(X))+\sum\limits_{i=2}^{2^{n R}} P_{\text{col,i}}+ \kappa_n+\rho_n\\
&\le 2\epsilon+\sum\limits_{i=2}^{2^{n R}} P_{\text{col,i}}+\kappa_n+\rho_n\\
&\le 2\epsilon+ 2^{n R} P_{\text{col,2}}+\kappa_n+\rho_n \label{eq:Perowwiseseedless}
\end{align}
}
where \eqref{eq:Perowwiseseedless} follows from the fact that the rows are \emph{i.i.d.} and thus $P_{\text{col,i}}=P_{\text{col,2}}$, $\forall i\in[m_n]\setminus\{1\}$.
We now upper bound $P_{\text{col,2}}$. For any $z^{\hat{k}}$ define
\iftoggle{singlecolumn}{
\begin{align}
T(z^{\hat{k}})&\triangleq \{x^n\in\mathfrak{X}^n:x^{n}\in A_\epsilon^{(n)}(X) \text{ and } x^{n} \text{ contains } z^{\hat{k}}\}.
\end{align}
}{
\begin{align}
T(z^{\hat{k}})&\triangleq \{x^n\in\mathfrak{X}^n:x^{n}\in A_\epsilon^{(n)}(X) \text{ and } x^{n} \text{ contains } z^{\hat{k}}\}.
\end{align}
}
Observe that for any $z^{\hat{k}}\in A_\epsilon(X^{\hat{k}}|Y^k_l,\hat{S}^{\hat{k}})$, we have $z^{\hat{k}}\in A_\epsilon^{(\hat{k})}(X)$. Furthermore, for a given deletion pattern with $n-\hat{k}=\Theta(n)$ deletions, WLOG $(\hat{k}+1,\dots,n)$, the $\epsilon$-typicality of $x^n=(x_1,\dots,x_n)$ and $z^{\hat{k}}=(x_1,\dots,x_{\hat{k}})$ with respect to $p_X$ implies the $\tilde{\epsilon}$-typicality of $(x_{\hat{k}+1},\dots,x_n)$, where $\tilde{\epsilon}=\frac{2-\delta-\epsilon}{\delta+\epsilon}\epsilon$. Therefore for any $z^{\hat{k}}\in A_\epsilon(X^{\hat{k}}|Y^k_l,\hat{S}^{\hat{k}})$, taking the union bound over all possible deletion patterns with $n-\hat{k}$ deletions, the cardinality of $T(z^{\hat{k}})$ can be upper bounded as
\iftoggle{singlecolumn}{
\begin{align}
|T(z^{\hat{k}})| &\le \binom{n}{\hat{k}} |A_{\tilde{\epsilon}}^{(n-\hat{k})}(X)|\\
&\le 2^{n H_b(\frac{\hat{k}}{n})} |A_{\tilde{\epsilon}}^{(n-\hat{k})}(X)|\\
&\le 2^{n H_b(\frac{\hat{k}}{n})} 2^{(n-\hat{k})(H(X)+\tilde{\epsilon})}\\
&= 2^{n \left[H_b(\frac{\hat{k}}{n})+(1-\frac{\hat{k}}{n})(H(X)+\tilde{\epsilon})\right]}\label{eq:tsettypicalbound}
\end{align}
}{
\begin{align}
|T(z^{\hat{k}})| &\le \binom{n}{\hat{k}} |A_{\tilde{\epsilon}}^{(n-\hat{k})}(X)|\\
&\le 2^{n H_b(\frac{\hat{k}}{n})} |A_{\tilde{\epsilon}}^{(n-\hat{k})}(X)|\\
&\le 2^{n H_b(\frac{\hat{k}}{n})} 2^{(n-\hat{k})(H(X)+\tilde{\epsilon})}\\
&= 2^{n \left[H_b(\frac{\hat{k}}{n})+(1-\frac{\hat{k}}{n})(H(X)+\tilde{\epsilon})\right]}\label{eq:tsettypicalbound}
\end{align}
}
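The second inequality above uses the standard entropy bound $\binom{n}{\hat{k}}\le 2^{n H_b(\hat{k}/n)}$, which can be checked exhaustively for a fixed, illustrative block length:

```python
import math

def Hb(x):
    """Binary entropy in bits."""
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x) if 0 < x < 1 else 0.0

n = 60  # illustrative block length
for k_hat in range(n + 1):
    assert math.comb(n, k_hat) <= 2 ** (n * Hb(k_hat / n)) + 1e-9
```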
Furthermore, for any $x^n\in T(z^{\hat{k}})$, since ${T(z^{\hat{k}})\subseteq A_\epsilon^{(n)}(X)}$, we have
\iftoggle{singlecolumn}{
\begin{align}
p_{X^n}(x^n)\le 2^{-n(H(X)-\epsilon)}
\end{align}
}{
\begin{align}
p_{X^n}(x^n)\le 2^{-n(H(X)-\epsilon)}
\end{align}
}
and since the rows $X^n_i$ of $\mathbf{D}^{(1)}$ are \emph{i.i.d.}, we have
\iftoggle{singlecolumn}{
\begin{align}
\Pr(X^n_2 \in T(z^{\hat{k}})|X^n_1 \in T(z^{\hat{k}}))=\Pr(X^n_2 \in T(z^{\hat{k}})).
\end{align}
}{
\begin{align}
\Pr(X^n_2 \in T(z^{\hat{k}})|X^n_1 \in T(z^{\hat{k}}))=\Pr(X^n_2 \in T(z^{\hat{k}})).
\end{align}
}
Finally, we note that
\iftoggle{singlecolumn}{
\begin{align}
|A_\epsilon(X^{\hat{k}}|Y^k_l,\hat{S}^{\hat{k}})| &\le 2^{\hat{k}(H(X|Y^{\hat{S}},\hat{S})+\epsilon)}
\end{align}
}{
\begin{align}
|A_\epsilon(X^{\hat{k}}|Y^k_l,\hat{S}^{\hat{k}})| &\le 2^{\hat{k}(H(X|Y^{\hat{S}},\hat{S})+\epsilon)}
\end{align}
}
and
\iftoggle{singlecolumn}{
\begin{align}
H(X|Y^{\hat{S}},\hat{S})&=\sum\limits_{s=1}^{s_{\max}} p_{\hat{S}}(s) H(X|Y^{\hat{S}},\hat{S}=s)\\
&=\frac{1}{1-\delta} \sum\limits_{s=1}^{s_{\max}} p_{S}(s) H(X|Y^{\hat{S}},\hat{S}=s)\\
&=\frac{1}{1-\delta}\left[\sum\limits_{s=0}^{s_{\max}} p_{S}(s) H(X|Y^{\hat{S}},\hat{S}=s)-\delta H(X|Y^{S},S=0)\right]\\
&=\frac{1}{1-\delta} \left[H(X|Y^S,S)-\delta H(X)\right] \\
&=\frac{1}{1-\delta} \left[(1-\delta) H(X)-I(X;Y^S,S)\right]\\
&= H(X)-\frac{I(X;Y^S,S)}{1-\delta}
\end{align}
}{
\begin{align}
H(X|Y^{\hat{S}},\hat{S})&=\sum\limits_{s=1}^{s_{\max}} p_{\hat{S}}(s) H(X|Y^{\hat{S}},\hat{S}=s)\\
&=\frac{1}{1-\delta} \sum\limits_{s=1}^{s_{\max}} p_{S}(s) H(X|Y^{\hat{S}},\hat{S}=s)\\
&=\frac{1}{1-\delta}\Big[\sum\limits_{s=0}^{s_{\max}} p_{S}(s) H(X|Y^{\hat{S}},\hat{S}=s)\notag\\&\hspace{4em}-\delta H(X|Y^{S},S=0)\Big]\\
&=\frac{1}{1-\delta} \left[H(X|Y^S,S)-\delta H(X)\right] \\
&=\frac{1}{1-\delta} \left[(1-\delta) H(X)-I(X;Y^S,S)\right]\\
&= H(X)-\frac{I(X;Y^S,S)}{1-\delta}
\end{align}
}
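The chain of equalities above is purely algebraic and can be verified numerically. The sketch below assumes a hypothetical toy model ($X\sim\mathrm{Bernoulli}$, i.i.d. BSC replicas, an assumed repetition pmf $p_S$); none of these specific choices are part of the proof:

```python
import math
import itertools

def H(dist):
    """Shannon entropy (bits)."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

px, eps = 0.3, 0.1                 # assumed: X ~ Bernoulli(px), BSC(eps) replicas
delta = 0.25
p_S = {0: delta, 1: 0.45, 2: 0.3}  # assumed repetition pmf with p_S(0) = delta

def H_X_given_Ys(s):
    """H(X | Y^s, S = s) for s i.i.d. BSC(eps) observations of X."""
    if s == 0:
        return H([1 - px, px])     # no observation: H(X)
    total = 0.0
    for ys in itertools.product([0, 1], repeat=s):
        w = []
        for x, pxv in ((0, 1 - px), (1, px)):
            lik = 1.0
            for y in ys:
                lik *= (1 - eps) if y == x else eps
            w.append(pxv * lik)    # joint weight p(x, y^s)
        py = sum(w)
        total += py * H([wi / py for wi in w])
    return total

H_X = H([1 - px, px])
H_XYS = sum(p * H_X_given_Ys(s) for s, p in p_S.items())
I_XYS = H_X - H_XYS                            # I(X; Y^S, S)
H_hat = sum(p / (1 - delta) * H_X_given_Ys(s)  # H(X | Y^{S_hat}, S_hat)
            for s, p in p_S.items() if s >= 1)
assert abs(H_hat - (H_X - I_XYS / (1 - delta))) < 1e-9
```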
Thus, we get
\iftoggle{singlecolumn}{
\begin{align}
|A_\epsilon(X^{\hat{k}}|Y^k_l,\hat{S}^{\hat{k}})|&\le 2^{\hat{k}\left[H(X)-\frac{I(X;Y^S,S)}{1-\delta} +\epsilon\right]}.\label{eq:condtypicalset2}
\end{align}
}{
\begin{align}
|A_\epsilon(X^{\hat{k}}|Y^k_l,\hat{S}^{\hat{k}})|&\le 2^{\hat{k}\left[H(X)-\frac{I(X;Y^S,S)}{1-\delta} +\epsilon\right]}.\label{eq:condtypicalset2}
\end{align}
}
Combining \eqref{eq:tsettypicalbound}-\eqref{eq:condtypicalset2}, we can upper bound $P_{\text{col,2}}$ as
\iftoggle{singlecolumn}{
\begin{align}
P_{\text{col,2}} &\le \sum\limits_{z^{\hat{k}}\in A_\epsilon(X^{\hat{k}}|Y^k_l,\hat{S}^{\hat{k}})} \Pr(X^n_2 \in T(z^{\hat{k}}))\\
&=\sum\limits_{z^{\hat{k}}\in A_\epsilon(X^{\hat{k}}|Y^k_l,\hat{S}^{\hat{k}})} \sum\limits_{x^n\in T(z^{\hat{k}})} p_{X^n}(x^n)\\
&\le \sum\limits_{z^{\hat{k}}\in A_\epsilon(X^{\hat{k}}|Y^k_l,\hat{S}^{\hat{k}})} \sum\limits_{x^n\in T(z^{\hat{k}})} 2^{-n(H(X)-\epsilon)}\\
&= \sum\limits_{z^{\hat{k}}\in A_\epsilon(X^{\hat{k}}|Y^k_l,\hat{S}^{\hat{k}})} |T(z^{\hat{k}})| 2^{-n(H(X)-\epsilon)}\label{eq:tsetseparation2}\\
&\le \sum\limits_{z^{\hat{k}}\in A_\epsilon(X^{\hat{k}}|Y^k_l,\hat{S}^{\hat{k}})} 2^{-n(H(X)-\epsilon)} 2^{n \left[H_b(\frac{\hat{k}}{n})+(1-\frac{\hat{k}}{n})(H(X)+\tilde{\epsilon})\right]}\\
&= |A_\epsilon(X^{\hat{k}}|Y^k_l,\hat{S}^{\hat{k}})| 2^{-\left[\hat{k}H(X)- n \epsilon-n H_b(\frac{\hat{k}}{n})-(n-\hat{k})\tilde{\epsilon}\right]} \\
&\le 2^{\hat{k}\left[H(X)-\frac{I(X;Y^S,S)}{1-\delta} +\epsilon\right]} 2^{-\left[\hat{k}H(X)- n \epsilon-n H_b(\frac{\hat{k}}{n})-(n-\hat{k})\tilde{\epsilon}\right]}\\
&= 2^{-n\left[\frac{1-\delta-\epsilon}{1-\delta} I(X;Y^S,S)-H_b(\delta+\epsilon)-(\delta+\epsilon)(\epsilon+\tilde{\epsilon}) \right]}\\
&=2^{-n\left[\frac{1-\delta-\epsilon}{1-\delta} I(X;Y^S,S)-H_b(\delta+\epsilon)-2\epsilon \right]}
\end{align}
}{
\begin{align}
P_{\text{col,2}} &\le \sum\limits_{z^{\hat{k}}\in A_\epsilon(X^{\hat{k}}|Y^k_l,\hat{S}^{\hat{k}})} \Pr(X^n_2 \in T(z^{\hat{k}}))\\
&=\sum\limits_{z^{\hat{k}}\in A_\epsilon(X^{\hat{k}}|Y^k_l,\hat{S}^{\hat{k}})} \sum\limits_{x^n\in T(z^{\hat{k}})} p_{X^n}(x^n)\\
&\le \sum\limits_{z^{\hat{k}}\in A_\epsilon(X^{\hat{k}}|Y^k_l,\hat{S}^{\hat{k}})} \sum\limits_{x^n\in T(z^{\hat{k}})} 2^{-n(H(X)-\epsilon)}\\
&= \sum\limits_{z^{\hat{k}}\in A_\epsilon(X^{\hat{k}}|Y^k_l,\hat{S}^{\hat{k}})} |T(z^{\hat{k}})| 2^{-n(H(X)-\epsilon)}\label{eq:tsetseparation2}\\
&\le \sum\limits_{z^{\hat{k}}\in A_\epsilon(X^{\hat{k}}|Y^k_l,\hat{S}^{\hat{k}})} 2^{-n(H(X)-\epsilon)} \notag\\&\hspace{6.2em} 2^{n \left[H_b(\frac{\hat{k}}{n})+(1-\frac{\hat{k}}{n})(H(X)+\tilde{\epsilon})\right]}\\
&= |A_\epsilon(X^{\hat{k}}|Y^k_l,\hat{S}^{\hat{k}})| 2^{-\left[\hat{k}H(X)- n \epsilon-n H_b(\frac{\hat{k}}{n})-(n-\hat{k})\tilde{\epsilon}\right]} \\
&\le 2^{\hat{k}\left[H(X)-\frac{I(X;Y^S,S)}{1-\delta} +\epsilon\right]}\notag\\&\hspace{4em} 2^{-\left[\hat{k}H(X)- n \epsilon-n H_b(\frac{\hat{k}}{n})-(n-\hat{k})\tilde{\epsilon}\right]}\\
&= 2^{-n\left[\frac{1-\delta-\epsilon}{1-\delta} I(X;Y^S,S)-H_b(\delta+\epsilon)-(\delta+\epsilon)(\epsilon+\tilde{\epsilon}) \right]}\\
&=2^{-n\left[\frac{1-\delta-\epsilon}{1-\delta} I(X;Y^S,S)-H_b(\delta+\epsilon)-2\epsilon \right]}
\end{align}
}
Thus, we have the following upper bound on the error probability
\iftoggle{singlecolumn}{
\begin{align}
P_e &\le 2\epsilon+ 2^{n R} 2^{-n\left[\frac{1-\delta-\epsilon}{1-\delta} I(X;Y^S,S)-H_b(\delta+\epsilon)-2\epsilon \right]} +\kappa_n+\rho_n
\end{align}
}{
\begin{align}
P_e &\le 2\epsilon+ 2^{n R} 2^{-n\left[\frac{1-\delta-\epsilon}{1-\delta} I(X;Y^S,S)-H_b(\delta+\epsilon)-2\epsilon \right]} \notag\\&\hspace{1em}+\kappa_n+\rho_n
\end{align}
}
By the LLN, we have $\kappa_n\to0$, and from Lemma~\ref{lem:noisyreplicadetection}, we have $\rho_n\to0$ as $n\to\infty$. Hence, we can argue that any database growth rate $R$ satisfying
\iftoggle{singlecolumn}{
\begin{align}
R<I(X;Y^S,S)-H_b(\delta)
\end{align}
}{
\begin{align}
R<I(X;Y^S,S)-H_b(\delta)
\end{align}
}
is achievable by taking $\epsilon$ small enough.
Now, we investigate repetition distributions with $\delta\le 1-\frac{1}{|\mathfrak{X}|}$. Recall from Appendix~\ref{proof:achievabilityW1} the counting function $F(n,\hat{k},|\mathfrak{X}|)$ denoting the number of $|\mathfrak{X}|$-ary sequences of length $n$, which contain a fixed $|\mathfrak{X}|$-ary sequence of length $\hat{k}$ as a subsequence. From~\cite{chvatal1975longest,diggavi1603788}, we have
\iftoggle{singlecolumn}{
\begin{align}
F(n,\hat{k},|\mathfrak{X}|)&\le n 2^{n\left[ H_b\left(\frac{\hat{k}}{n}\right)+(1-\frac{\hat{k}}{n})\log(|\mathfrak{X}|-1)\right]}.
\end{align}
}{
\begin{align}
F(n,\hat{k},|\mathfrak{X}|)&\le n 2^{n\left[ H_b\left(\frac{\hat{k}}{n}\right)+(1-\frac{\hat{k}}{n})\log(|\mathfrak{X}|-1)\right]}.
\end{align}
}
Furthermore, disregarding the typicality constraint, we can trivially bound the cardinality of $T(z^{\hat{k}})$ as
\iftoggle{singlecolumn}{
\begin{align}
|T(z^{\hat{k}})|&\le |\{x^n\in\mathfrak{X}^n: x^{n} \text{ contains } z^{\hat{k}} \}|\\
&\le F(n,\hat{k},|\mathfrak{X}|)\\
&\le n 2^{n\left[ H_b\left(\frac{\hat{k}}{n}\right)+(1-\frac{\hat{k}}{n})\log(|\mathfrak{X}|-1)\right]}\label{eq:tsettrivialbound2}
\end{align}
}{
\begin{align}
|T(z^{\hat{k}})|&\le |\{x^n\in\mathfrak{X}^n: x^{n} \text{ contains } z^{\hat{k}} \}|\\
&\le F(n,\hat{k},|\mathfrak{X}|)\\
&\le n 2^{n\left[ H_b\left(\frac{\hat{k}}{n}\right)+(1-\frac{\hat{k}}{n})\log(|\mathfrak{X}|-1)\right]}\label{eq:tsettrivialbound2}
\end{align}
}
Plugging \eqref{eq:tsettrivialbound2} into \eqref{eq:tsetseparation2} and following the same steps, one can show that any rate $R$ satisfying
\iftoggle{singlecolumn}{
\begin{align}
R&<\left[I(X;Y^S,S)+\delta(H(X)-\log(|\mathfrak{X}|-1))-H_b(\delta)\right]^+
\end{align}
}{
\begin{align}
R&<\Big[I(X;Y^S,S)-H_b(\delta)\notag\\&\hspace{5em}+\delta(H(X)-\log(|\mathfrak{X}|-1))\Big]^+
\end{align}
}
is achievable. Simply taking the maximum of the two proven achievable rates when $\delta\le 1-\nicefrac{1}{|\mathfrak{X}|}$ yields the desired achievability result. This concludes the proof. \qed
\section{Proof of Lemma~\ref{lem:histogramuncollapsed}}\label{proof:histogramuncollapsed}
For brevity, we let $\mu_n$ denote ${\Pr(\exists i,j\in [n],\: i\neq j,H^{(1)}_i=H^{(1)}_j)}$. Notice that since the entries of $\mathbf{D}^{(1)}$ are \emph{i.i.d.}, $H^{(1)}_i$ are \emph{i.i.d.} Multinomial$(m_n,p_X)$ random variables. Then,
\iftoggle{singlecolumn}{
\begin{align}
\mu_n&\le n^2 \Pr(H^{(1)}_1=H^{(1)}_2)\\
&=n^2 \sum\limits_{h^{|\mathfrak{X}|}} \Pr(H^{(1)}_1=h^{|\mathfrak{X}|})^2
\end{align}
}{
\begin{align}
\mu_n&\le n^2 \Pr(H^{(1)}_1=H^{(1)}_2)\\
&=n^2 \sum\limits_{h^{|\mathfrak{X}|}} \Pr(H^{(1)}_1=h^{|\mathfrak{X}|})^2
\end{align}
}
where the sum is over all non-negative integer vectors $h^{|\mathfrak{X}|}$ of length $|\mathfrak{X}|$ summing to $m_n$. Let $m_i\triangleq h(i)$, $\forall i\in\mathfrak{X}$. Then,
\iftoggle{singlecolumn}{
\begin{align}
\Pr(H^{(1)}_1=h^{|\mathfrak{X}|})&= \binom{m_n}{m_1,m_2,\dots,m_{|\mathfrak{X}|}} \prod\limits_{i=1}^{|\mathfrak{X}|} p_X(i)^{m_i}
\end{align}
}{
\begin{align}
\Pr(H^{(1)}_1=h^{|\mathfrak{X}|})&= \binom{m_n}{m_1,m_2,\dots,m_{|\mathfrak{X}|}} \prod\limits_{i=1}^{|\mathfrak{X}|} p_X(i)^{m_i}
\end{align}
}
Hence, we have
\iftoggle{singlecolumn}{
\begin{align}
\mu_n &\le n^2\sum\limits_{m_1+\dots+m_{|\mathfrak{X}|}=m_n} \binom{m_n}{m_1,m_2,\dots,m_{|\mathfrak{X}|}}^2 \prod\limits_{i=1}^{|\mathfrak{X}|} p_X(i)^{2 m_i}\label{eq:multinomial}
\end{align}
}{
\begin{align}
\mu_n &\le n^2\sum\limits_{m_1+\dots+m_{|\mathfrak{X}|}=m_n} \binom{m_n}{m_1,m_2,\dots,m_{|\mathfrak{X}|}}^2 \notag\\&\hspace{8em}\prod\limits_{i=1}^{|\mathfrak{X}|} p_X(i)^{2 m_i}\label{eq:multinomial}
\end{align}
}
where $\smash{\binom{m_n}{m_1,m_2,\dots,m_{|\mathfrak{X}|}}}$ is the multinomial coefficient corresponding to the $|\mathfrak{X}|$-tuple $(m_1,\dots,m_{|\mathfrak{X}|})$ and the summation is over all possible non-negative indices $m_1,\dots,m_{|\mathfrak{X}|}$ which add up to $m_n$.
From~\cite[Theorem 11.1.2]{cover2006elements}, we have
\iftoggle{singlecolumn}{
\begin{align}
\prod\limits_{i=1}^{|\mathfrak{X}|} p_X(i)^{2 m_i}=2^{-2m_n(H(\Tilde{p})+D(\Tilde{p}\|p_X))}\label{eq:covertype}
\end{align}
}{
\begin{align}
\prod\limits_{i=1}^{|\mathfrak{X}|} p_X(i)^{2 m_i}=2^{-2m_n(H(\Tilde{p})+D(\Tilde{p}\|p_X))}\label{eq:covertype}
\end{align}
}
where $\Tilde{p}$ is the type corresponding to $|\mathfrak{X}|$-tuple ${(m_1,\dots,m_{|\mathfrak{X}|})}$:
\iftoggle{singlecolumn}{
\begin{align}
\Tilde{p}&=\left(\frac{m_1}{m_n},\dots,\frac{m_{|\mathfrak{X}|}}{m_n}\right).
\end{align}
}{
\begin{align}
\Tilde{p}&=\left(\frac{m_1}{m_n},\dots,\frac{m_{|\mathfrak{X}|}}{m_n}\right).
\end{align}
}
From Stirling's approximation~\cite[Chapter 3.2]{cormen2022introduction}, we get
\iftoggle{singlecolumn}{
\begin{align}
\binom{m_n}{m_1,m_2,\dots,m_{|\mathfrak{X}|}}^2\le \frac{e^2} {(2\pi)^{|\mathfrak{X}|}} m_n^{1-|\mathfrak{X}|} \Pi_{\Tilde{p}}^{-1} 2^{2m_n H(\Tilde{p})}\label{eq:stirling}
\end{align}
}{
\begin{align}
\binom{m_n}{m_1,m_2,\dots,m_{|\mathfrak{X}|}}^2\le \frac{e^2} {(2\pi)^{|\mathfrak{X}|}} m_n^{1-|\mathfrak{X}|} \Pi_{\Tilde{p}}^{-1} 2^{2m_n H(\Tilde{p})}\label{eq:stirling}
\end{align}
}
where $\Pi_{\Tilde{p}}=\prod_{i=1}^{|\mathfrak{X}|} \Tilde{p}(i)$.
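Since the bound \eqref{eq:stirling} drives the rest of the argument, it can be sanity-checked numerically. Below is a quick sketch (the helper function names are ours) comparing the logarithm of the squared multinomial coefficient with the logarithm of the right-hand side of \eqref{eq:stirling}:

```python
import math

def log_multinomial(parts):
    # natural log of the multinomial coefficient (m; m_1, ..., m_k)
    m = sum(parts)
    return math.lgamma(m + 1) - sum(math.lgamma(p + 1) for p in parts)

def stirling_bound_log(parts):
    # natural log of  e^2 (2 pi)^{-k} m^{1-k} Pi^{-1} 2^{2 m H(p~)}
    m, k = sum(parts), len(parts)
    p = [x / m for x in parts]
    H = -sum(pi * math.log2(pi) for pi in p)      # entropy in bits
    log_Pi = sum(math.log(pi) for pi in p)
    return (2 - k * math.log(2 * math.pi) + (1 - k) * math.log(m)
            - log_Pi + 2 * m * H * math.log(2))

# all parts must be positive integers, as in the derivation
for parts in [(10, 20, 30), (1, 1, 58), (25, 25), (5, 5, 5, 45)]:
    assert 2 * log_multinomial(parts) <= stirling_bound_log(parts)
```

The comparison is done in log space to avoid overflow; for the balanced case $(25,25)$ the two sides differ only by a fraction of a percent, showing the bound is essentially tight.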
Combining \eqref{eq:multinomial}-\eqref{eq:stirling}, we get
\iftoggle{singlecolumn}{
\begin{align}
\mu_n\le \frac{e^2} {(2\pi)^{|\mathfrak{X}|}} n^2 m_n^{1-|\mathfrak{X}|} \sum\limits_{\Tilde{p}} \Pi_{\Tilde{p}}^{-1} 2^{-2m_n D_{KL}(\Tilde{p}\|p_X)}
\end{align}
}{
\begin{align}
\mu_n\le \frac{e^2} {(2\pi)^{|\mathfrak{X}|}} n^2 m_n^{1-|\mathfrak{X}|} \sum\limits_{\Tilde{p}} \Pi_{\Tilde{p}}^{-1} 2^{-2m_n D_{KL}(\Tilde{p}\|p_X)}
\end{align}
}
Let
\iftoggle{singlecolumn}{
\begin{align}
\tilde{T}=\sum\limits_{\Tilde{p}} \Pi_{\Tilde{p}}^{-1} 2^{-2m_n D_{KL}(\Tilde{p}\|p_X)} = \tilde{T}_1 + \tilde{T}_2
\end{align} where
\begin{align}
\tilde{T}_1&=\sum\limits_{\Tilde{p}:D_{KL}(\Tilde{p}\|p_X)> \frac{\epsilon_n^2}{2\log_e 2}} \Pi_{\Tilde{p}}^{-1} 2^{-2m_n D_{KL}(\Tilde{p}\|p_X)}\label{eq:T1iid}\\
\tilde{T}_2&=\sum\limits_{\Tilde{p}:D_{KL}(\Tilde{p}\|p_X)\le\frac{\epsilon_n^2}{2\log_e 2}} \Pi_{\Tilde{p}}^{-1} 2^{-2m_n D_{KL}(\Tilde{p}\|p_X)},\label{eq:T2iid}
\end{align}
}{
\begin{align}
\tilde{T}=\sum\limits_{\Tilde{p}} \Pi_{\Tilde{p}}^{-1} 2^{-2m_n D_{KL}(\Tilde{p}\|p_X)} = \tilde{T}_1 + \tilde{T}_2
\end{align} where
\begin{align}
\tilde{T}_1&=\sum\limits_{\Tilde{p}:D_{KL}(\Tilde{p}\|p_X)> \frac{\epsilon_n^2}{2\log_e 2}} \Pi_{\Tilde{p}}^{-1} 2^{-2m_n D_{KL}(\Tilde{p}\|p_X)}\label{eq:T1iid}\\
\tilde{T}_2&=\sum\limits_{\Tilde{p}:D_{KL}(\Tilde{p}\|p_X)\le\frac{\epsilon_n^2}{2\log_e 2}} \Pi_{\Tilde{p}}^{-1} 2^{-2m_n D_{KL}(\Tilde{p}\|p_X)},\label{eq:T2iid}
\end{align}
}
Here $\epsilon_n$, described in more detail below, is a small positive number decaying with $n$.
First, we look at $\tilde{T}_2$. From Pinsker's inequality~\cite[Lemma 11.6.1]{cover2006elements}, we have
\iftoggle{singlecolumn}{
\begin{align}
D_{KL}(\Tilde{p}\|p_X)\le \frac{\epsilon_n^2}{2\log_e 2}\Rightarrow \text{TV}(\Tilde{p},p_X)\le \epsilon_n
\end{align}
}{
\begin{align}
D_{KL}(\Tilde{p}\|p_X)\le \frac{\epsilon_n^2}{2\log_e 2}\Longrightarrow \text{TV}(\Tilde{p},p_X)\le \epsilon_n
\end{align}
}
where TV denotes the total variation distance. Therefore
\iftoggle{singlecolumn}{
\begin{align}
\left|\{\Tilde{p}:D_{KL}(\Tilde{p}\|p_X)\le \frac{\epsilon_n^2}{2\log_e 2}\}\right|&\le |\{\Tilde{p}:\text{TV}(\Tilde{p},p_X)\le \epsilon_n\}|\notag \\
&= O(m_n^{|\mathfrak{X}|-1}\epsilon_n^{|\mathfrak{X}|-1})
\end{align}
}{
\begin{align}
\Big|\{\Tilde{p}:D_{KL}(\Tilde{p}\|p_X)\le &\frac{\epsilon_n^2}{2\log_e 2}\}\Big|\notag\\&\le |\{\Tilde{p}:\text{TV}(\Tilde{p},p_X)\le \epsilon_n\}|\notag \\
&= O(m_n^{|\mathfrak{X}|-1}\epsilon_n^{|\mathfrak{X}|-1})
\end{align}
}
where the last equality follows from the fact that a type has $|\mathfrak{X}|-1$ degrees of freedom, since the sum of the $|\mathfrak{X}|$-tuple $(m_1,\dots,m_{|\mathfrak{X}|})$ is fixed.
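The Pinsker step above, ${D_{KL}(\Tilde{p}\|p_X)\le \epsilon_n^2/(2\log_e 2)\Rightarrow \text{TV}(\Tilde{p},p_X)\le\epsilon_n}$, is the contrapositive of ${D_{KL}(p\|q)\ge \text{TV}(p,q)^2/(2\log_e 2)}$ (with $D_{KL}$ in bits and TV the $L_1$ distance, as in Cover \& Thomas). A quick numerical sanity check on random distributions (a sketch; the helper names are ours):

```python
import math, random

def kl_bits(p, q):
    # D(p || q) in bits
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def tv_l1(p, q):
    # total variation taken as the L1 distance
    return sum(abs(pi - qi) for pi, qi in zip(p, q))

random.seed(1)
for _ in range(1000):
    k = random.randint(2, 6)
    p = [random.random() + 1e-3 for _ in range(k)]
    q = [random.random() + 1e-3 for _ in range(k)]
    p = [x / sum(p) for x in p]
    q = [x / sum(q) for x in q]
    # Pinsker: D_KL >= TV^2 / (2 ln 2), up to float noise
    assert kl_bits(p, q) >= tv_l1(p, q) ** 2 / (2 * math.log(2)) - 1e-12
```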
Furthermore, when $\text{TV}(\Tilde{p},p_X)\le \epsilon_n$, we have
\iftoggle{singlecolumn}{
\begin{align}
\Pi_{\Tilde{p}} &\ge \prod\limits_{i=1}^{|\mathfrak{X}|} (p_X(i)-\epsilon_n)\ge \Pi_{p_X}-\epsilon_n \sum\limits_{i=1}^{|\mathfrak{X}|} \prod\limits_{j\neq i} p_X(j)
\end{align}
}{
\begin{align}
\Pi_{\Tilde{p}} &\ge \prod\limits_{i=1}^{|\mathfrak{X}|} (p_X(i)-\epsilon_n)\ge \Pi_{p_X}-\epsilon_n \sum\limits_{i=1}^{|\mathfrak{X}|} \prod\limits_{j\neq i} p_X(j)
\end{align}
}
Hence
\iftoggle{singlecolumn}{
\begin{align}
\Pi_{\Tilde{p}}^{-1}&\le \frac{1}{\Pi_{p_X}-\epsilon_n \sum\limits_{i=1}^{|\mathfrak{X}|} \prod\limits_{j\neq i} p_X(j)}
\end{align}
}{
\begin{align}
\Pi_{\Tilde{p}}^{-1}&\le \frac{1}{\Pi_{p_X}-\epsilon_n \sum\limits_{i=1}^{|\mathfrak{X}|} \prod\limits_{j\neq i} p_X(j)}
\end{align}
}
and
\iftoggle{singlecolumn}{
\begin{align}
\tilde{T}_2 &\le \frac{1}{\Pi_{p_X}-\epsilon_n \sum\limits_{i=1}^{|\mathfrak{X}|} \prod\limits_{j\neq i} p_X(j)} O(m_n^{|\mathfrak{X}|-1}\epsilon_n ^{|\mathfrak{X}|-1})\\
&= O(m_n^{|\mathfrak{X}|-1}\epsilon_n^{|\mathfrak{X}|-1})
\end{align}
}{
\begin{align}
\tilde{T}_2 &\le \frac{1}{\Pi_{p_X}-\epsilon_n \sum\limits_{i=1}^{|\mathfrak{X}|} \prod\limits_{j\neq i} p_X(j)} O(m_n^{|\mathfrak{X}|-1}\epsilon_n ^{|\mathfrak{X}|-1})\\
&= O(m_n^{|\mathfrak{X}|-1}\epsilon_n^{|\mathfrak{X}|-1})
\end{align}
}
for small $\epsilon_n$.
Now, we look at $\tilde{T}_1$. Note that since $m_i\in \mathbb{Z}_+$, we have ${\Pi_{\Tilde{p}}^{-1}\le m_n^{|\mathfrak{X}|}}$, so the multiplicative term in the summation in~\eqref{eq:T1iid} is at most polynomial in $m_n$. If $m_i=0$ we can simply discard it and return to Stirling's approximation with the reduced number of categories. Furthermore, from~\cite[Theorem 11.1.1]{cover2006elements}, we have
\iftoggle{singlecolumn}{
\begin{align}
\left|\{\Tilde{p}:D_{KL}(\Tilde{p}\|p_X)> \frac{\epsilon_n^2}{2\log_e 2}\}\right|&\le |\{\Tilde{p}\}|\\&\le (m_n+1)^{|\mathfrak{X}|}
\end{align}
}{
\begin{align}
\left|\{\Tilde{p}:D_{KL}(\Tilde{p}\|p_X)> \frac{\epsilon_n^2}{2\log_e 2}\}\right|&\le |\{\Tilde{p}\}|\\&\le (m_n+1)^{|\mathfrak{X}|}
\end{align}
}
suggesting that the number of terms in the summation in~\eqref{eq:T1iid} is polynomial in $m_n$ as well. Therefore, as long as ${m_n \epsilon_n^2\to\infty}$, $\tilde{T}_1$ is a sum of polynomially many terms, each decaying exponentially in $m_n$. Thus
\iftoggle{singlecolumn}{
\begin{align}
\tilde{T}_1\to0\text{ as }n\to\infty.\label{eq:t1iid}
\end{align}
}{
\begin{align}
\tilde{T}_1\to0\text{ as }n\to\infty.\label{eq:t1iid}
\end{align}
}
Define
\iftoggle{singlecolumn}{
\begin{align}
U_i&=e^2 (2\pi)^{-|\mathfrak{X}|} m_n^{1-|\mathfrak{X}|} \tilde{T}_i,\quad i=1,2\label{eq:ui}
\end{align}
}{
\begin{align}
U_i&=e^2 (2\pi)^{-|\mathfrak{X}|} m_n^{1-|\mathfrak{X}|} \tilde{T}_i,\quad i=1,2\label{eq:ui}
\end{align}
}
and choose ${\epsilon_n=m_n^{-\frac{1}{2}} V_n}$ for some $V_n$ satisfying ${V_n=\omega(1)}$ and ${V_n=o(m_n^{1/2})}$. Thus, $U_1$ vanishes exponentially fast since ${m_n\epsilon_n^2=V_n^2\to\infty}$ and \iftoggle{singlecolumn}{
\begin{align}
U_2&=O(\epsilon_n^{|\mathfrak{X}|-1})=O(m_n^{(1-|\mathfrak{X}|)/2} V_n^{(|\mathfrak{X}|-1)}).\label{eq:u2}
\end{align}
}{
\begin{align}
U_2&=O(\epsilon_n^{|\mathfrak{X}|-1})=O(m_n^{(1-|\mathfrak{X}|)/2} V_n^{(|\mathfrak{X}|-1)}).\label{eq:u2}
\end{align}
}
Combining \eqref{eq:t1iid}-\eqref{eq:u2}, we have
\iftoggle{singlecolumn}{
\begin{align}
U=U_1+U_2=O(m_n^{(1-|\mathfrak{X}|)/2} V_n^{(|\mathfrak{X}|-1)})
\end{align}
}{
\begin{align}
U=U_1+U_2=O(m_n^{(1-|\mathfrak{X}|)/2} V_n^{(|\mathfrak{X}|-1)})
\end{align}
}
and we get
\iftoggle{singlecolumn}{
\begin{align}
\mu_n\le n^2 O(m_n^{(1-|\mathfrak{X}|)/2} V_n^{(|\mathfrak{X}|-1)})
\end{align}
}{
\begin{align}
\mu_n\le n^2 O(m_n^{(1-|\mathfrak{X}|)/2} V_n^{(|\mathfrak{X}|-1)})
\end{align}
}
By the assumption ${m_n=\omega(n^\frac{4}{|\mathfrak{X}|-1})}$, we have ${m_n=n^\frac{4}{|\mathfrak{X}|-1} Z_n}$ for some $Z_n$ satisfying ${\lim\limits_{n\to\infty} Z_n=\infty}$. Now, taking ${V_n=o(Z_n^{1/2})}$ (e.g.~$V_n=Z_n^{1/3}$), we get
\iftoggle{singlecolumn}{
\begin{align}
\mu_n&\le O(n^2 n^{-2} Z_n^{(1-|\mathfrak{X}|)/2} V_n^{(|\mathfrak{X}|-1)})
= o(1)
\end{align}
}{
\begin{align}
\mu_n&\le O(n^2 n^{-2} Z_n^{(1-|\mathfrak{X}|)/2} V_n^{(|\mathfrak{X}|-1)})
= o(1).
\end{align}
}
Thus $m_n=\omega(n^\frac{4}{|\mathfrak{X}|-1})$ is enough to have $\mu_n\to0$ as $n\to\infty$. \qed
\section{Proof of Proposition~\ref{prop:uniformhistogram}}\label{proof:uniformhistogram}
For brevity, we let $\mu_n$ denote ${\Pr(\exists i,j\in [n],\: i\neq j,H^{(1)}_i=H^{(1)}_j)}$. Then,
\iftoggle{singlecolumn}{
\begin{align}
\mu_n&= n(n-1)\Pr(H^{(1)}_1=H^{(1)}_2)\\
&= n(n-1)\sum\limits_{h^{|\mathfrak{X}|}} \Pr(H^{(1)}_1=h^{|\mathfrak{X}|})^2\\
&=n(n-1)\sum\limits_{m_1+\dots+m_{|\mathfrak{X}|}=m_n} \binom{m_n}{m_1,\dots,m_{|\mathfrak{X}|}}^2 |\mathfrak{X}|^{-2m_n}\\
&= n(n-1) |\mathfrak{X}|^{-2m_n} \sum\limits_{m_1+\dots+m_{|\mathfrak{X}|}=m_n} \binom{m_n}{m_1,\dots,m_{|\mathfrak{X}|}}^2\\
&= n(n-1) |\mathfrak{X}|^{|\mathfrak{X}|/2}(4\pi m_n)^{(1-|\mathfrak{X}|)/2}(1+o_{m_n}(1))(1-o_n(1)) \label{eqn:multinomialuniform}\\
&= n^2 m_n^{\frac{1-|\mathfrak{X}|}{2}} (4\pi)^{(1-|\mathfrak{X}|)/2} |\mathfrak{X}|^{|\mathfrak{X}|/2} (1+o_{m_n}(1))(1-o_n(1))
\end{align}
}{
\begin{align}
\mu_n&= n(n-1)\Pr(H^{(1)}_1=H^{(1)}_2)\\
&= n(n-1)\sum\limits_{h^{|\mathfrak{X}|}} \Pr(H^{(1)}_1=h^{|\mathfrak{X}|})^2\\
&=n(n-1) \hspace{-1em} \sum\limits_{m_1+\dots+m_{|\mathfrak{X}|}=m_n} \binom{m_n}{m_1,\dots,m_{|\mathfrak{X}|}}^2 |\mathfrak{X}|^{-2m_n}\\
&= n(n-1) |\mathfrak{X}|^{-2m_n} \hspace{-1em} \sum\limits_{m_1+\dots+m_{|\mathfrak{X}|}=m_n} \binom{m_n}{m_1,\dots,m_{|\mathfrak{X}|}}^2\\
&= n(n-1) |\mathfrak{X}|^{|\mathfrak{X}|/2}(4\pi m_n)^{(1-|\mathfrak{X}|)/2}\notag\\&\hspace{7em}(1+o_{m_n}(1))(1-o_n(1)) \label{eqn:multinomialuniform}\\
&= n^2 m_n^{\frac{1-|\mathfrak{X}|}{2}} (4\pi)^{(1-|\mathfrak{X}|)/2} |\mathfrak{X}|^{|\mathfrak{X}|/2}\notag\\&\hspace{7em} (1+o_{m_n}(1))(1-o_n(1))
\end{align}
}
where \eqref{eqn:multinomialuniform} follows from \cite[Theorem 4]{richmond2008counting}.\qed
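The asymptotics in \eqref{eqn:multinomialuniform} rest on ${\sum_{m_1+\dots+m_{|\mathfrak{X}|}=m}\binom{m}{m_1,\dots,m_{|\mathfrak{X}|}}^2 \sim |\mathfrak{X}|^{2m}\,|\mathfrak{X}|^{|\mathfrak{X}|/2}(4\pi m)^{(1-|\mathfrak{X}|)/2}}$, which can be checked against the exact sum for small alphabets (a sketch; for $|\mathfrak{X}|=2$ the sum equals $\binom{2m}{m}$ exactly, by the Vandermonde identity):

```python
import math

def multinomial_sq_sum(m, k):
    # exact sum of squared multinomial coefficients over all
    # compositions m_1 + ... + m_k = m, via the chain-rule recursion
    if k == 1:
        return 1
    return sum(math.comb(m, a) ** 2 * multinomial_sq_sum(m - a, k - 1)
               for a in range(m + 1))

def asymptotic(m, k):
    # k^{2m} k^{k/2} (4 pi m)^{(1-k)/2}, with k playing the role of |X|
    return k ** (2 * m) * k ** (k / 2) * (4 * math.pi * m) ** ((1 - k) / 2)

for k in (2, 3):
    ratio = multinomial_sq_sum(60, k) / asymptotic(60, k)
    assert abs(ratio - 1) < 0.05, (k, ratio)
```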
\def\thebibliography#1{\chapter*{\bibname}
\addcontentsline{toc}{chapter}{\bibname}
\chaptermark{\bibname}
\bgroup\list{\arabic{enumi}.}
{\settowidth\labelwidth{#1}
\leftmargin\labelwidth
\itemindent -\leftmargin
\itemsep 0pt
\parsep \itemsep
\advance\leftmargin\labelsep
\usecounter{enumi}}}
\def\newblock{\hskip .11em plus .33em minus .07em
\sloppy\clubpenalty4000\widowpenalty4000
\sfcode`\.=1000\relax}
\definecolor{deepblue}{rgb}{0,0,0.6}
\definecolor{deepred}{rgb}{0.6,0,0}
\definecolor{deepgreen}{rgb}{0,0.5,0}
\usepackage{listings}
\newcommand\pythonstyle{\lstset{
language=Python,
basicstyle=\ttm,
otherkeywords={self},
keywordstyle=\ttb\color{deepblue},
emph={MyClass,__init__},
emphstyle=\ttb\color{deepred},
stringstyle=\color{deepgreen},
frame=tb,
showstringspaces=false %
}}
\lstnewenvironment{python}[1][]
{
\pythonstyle
\lstset{backgroundcolor=\color{shadecolor}}
}
{}
\lstnewenvironment{pythonout}[1][]
{
\pythonstyle
\lstset{backgroundcolor=\color{shadecolor2}}
}
{}
\lstnewenvironment{pythonout2}[1][]
{\begin{lrbox}{\thmbox}%
\begin{minipage}{\dimexpr\linewidth-2\fboxsep}
}%
{\end{minipage}%
\end{lrbox}%
\begin{trivlist}
\item[]\colorbox{lightgray}{\usebox\thmbox}
\end{trivlist}}
\newcommand\pythonexternal[2][]{{
\pythonstyle
\lstinputlisting[#1]{#2}}}
\newcommand\pythoninline[1]{{\pythonstyle\lstinline!#1!}}
\newcommand\pyi[1]{{\pythonstyle\lstinline!#1!}}
\newcommand\pyin[1]{{\pythonstyle\lstinline!#1!}}
\newcommand\pline[1]{{\pythonstyle\lstinline!#1!}}
\newcommand\mathstyle{\lstset{
language=Mathematica,
basicstyle=\ttfamily\bfseries\small,
otherkeywords={self},
keywordstyle=\ttfamily\bfseries\small\color{deepblue},
emph={MyClass,__init__},
emphstyle=\ttfamily\bfseries\small\color{deepred},
stringstyle=\color{deepgreen},
frame=tb,
showstringspaces=false %
}}
\lstnewenvironment{mathematica}[1][]
{
\mathstyle
\lstset{#1}
}
{}
\newcommand\mline[1]{{\mathstyle\lstinline!#1!}}
\newcommand{\texttt{pinchcr}}{\texttt{pinchcr}}
\ifarxiv
\title{Introduction to the Spectrum of \texorpdfstring{${\mathcal{N}=4}$}{N=4} SYM and the Quantum Spectral Curve}
\author{Nikolay Gromov}
\date{\small Mathematics Department, King's College London,
The Strand, London WC2R 2LS, UK. \& St.Petersburg INP, Gatchina, 188 300, St.Petersburg,
Russia\\ \vspace{10mm}
{\bf Abstract.}
{\it
This review is based on the lectures given by the author at the Les Houches Summer School 2016. It describes the recently developed Quantum Spectral Curve (QSC) for the non-perturbative planar spectrum of N=4 Super Yang-Mills theory in a pedagogical way, starting from the harmonic oscillator and avoiding a long historical path. We give many examples and provide exercises. At the end we give a list of the recent and possible future applications of the QSC. }\\ \vspace{100mm}
{\bf Dedication.} In memory of Ludvig Dmitrievich Faddeev.
}
\begin{document}
\begin{titlingpage}
\maketitle
\end{titlingpage}
\tableofcontents
\else
\title{Les Houches Lecture Notes:\\ Spectrum of N=4 SYM and the Quantum Spectral Curve}
\author{Nikolay Gromov}
\affiliation{Mathematics Department, King's College London,
The Strand, London WC2R 2LS, UK. \& St.Petersburg INP, Gatchina, 188 300, St.Petersburg,
Russia}
\begin{document}
\maketitle
\dedication{In memory of Ludvig Dmitrievich Faddeev.}
\acknowledgements
I am very grateful to M.Alfimov, A.Cavagli\`a, S.Leurent, F.Levkovich-Maslyuk, G.Sizov, D.Volin, and especially to V.Kazakov and P.Vieira for numerous discussions on closely related topics. I am thankful to D.Grabner, D.Lee and J.\footnote{i.e. Julius, who only has a first name} for carefully reading the manuscript.
The work was supported by the European Research Council (Programme
``Ideas" ERC-2012-AdG 320769 AdS-CFT-solvable). We are grateful to Humboldt
University (Berlin) for the hospitality and financial support of this work in the framework of the ``Kosmos" programme. We wish to thank STFC for support from Consolidated grant number ST/J002798/1. This work has received funding from the People Programme (Marie Curie Actions)
of the European Union's Seventh Framework Programme FP7/2007-2013/ under REA Grant Agreement No 317089 (GATIS).\\
{ }\\
Please report typos or send other improvement requests for these lecture notes to \url{[email protected]}.
\tableofcontents
\maintext
\fi
\chapter{Introduction}
The importance of the AdS/CFT correspondence in modern theoretical physics and the role of ${\mathcal N}=4$ SYM in it are hard to overestimate.
In these lecture notes we try to give a pedagogical introduction to the Quantum Spectral Curve (QSC) of ${\mathcal N}=4$ SYM,
a beautiful mathematical structure which describes the non-perturbative spectrum of strings/anomalous dimensions of all single trace operators.
The historical development leading to the discovery of the QSC~\cite{Gromov:2011cx,Gromov:2014caa} is a very long and interesting story by itself, and there are several reviews trying to cover the main steps on this route \cite{Beisert:2010jr,Bombardelli:2016rwb}.
For the purposes of the lectures we took another approach and try to motivate the construction by emphasizing numerous analogies between the QSC construction and
basic quantum integrable systems such as the harmonic oscillator, Heisenberg spin chains, and classical sigma-models. In this way the QSC comes out naturally, bypassing
extremely complicated and technical stages such as derivation of the S-matrix \cite{Beisert:2005tm}, dressing phase \cite{Janik:2006dc}, mirror theory \cite{Ambjorn:2005wa}, Y-system \cite{Gromov:2009tv}, Thermodynamic Bethe Ansatz
\cite{Gromov:2009bc,Bombardelli:2009ns,Arutyunov:2009ur,Cavaglia:2010nm}, NLIE~\cite{Gromov:2011cx,Balog:2012zt} and finally derivation of the QSC~\cite{Gromov:2011cx,Gromov:2014caa}.
We also give examples of analytic solutions of the QSC and in the last chapter describe step-by-step the numerical algorithm allowing us to get the non-perturbative spectrum with almost unlimited precision~\cite{Gromov:2015wca}.
We also briefly discuss the analytic continuation of the anomalous dimension to the Regge (BFKL) limit relevant for more realistic QCD.
The structure is the following: in Chapter~1 we re-introduce the harmonic oscillator and the Heisenberg spin chains in a way suitable for generalization to the QSC.
Chapter~\ref{ch:2} describes classical integrability of strings in a curved background, which gives some important hints about the construction of the QSC. In Chapter~\ref{ch:3} we give a clear formulation of the QSC.
In Chapter~\ref{ch:4} we consider some analytic examples. And in the last Chapter~\ref{ch:5} we present the numerical method.
\newpage
\paragraph*{Acknowledgment}
I am very grateful to M.Alfimov, A.Cavagli\`a, S.Leurent, F.Levkovich-Maslyuk, G.Sizov, D.Volin, and especially to V.Kazakov and P.Vieira for numerous discussions on closely related topics. I am thankful to D.Grabner, D.Lee and J.\footnote{i.e. Julius, who only has a first name} for carefully reading the manuscript.
The work was supported by the European Research Council (Programme
``Ideas" ERC-2012-AdG 320769 AdS-CFT-solvable). We are grateful to Humboldt
University (Berlin) for the hospitality and financial support of this work in the framework of the ``Kosmos" programme. We wish to thank STFC for support from Consolidated grant number ST/J002798/1. This work has received funding from the People Programme (Marie Curie Actions)
of the European Union's Seventh Framework Programme FP7/2007-2013/ under REA Grant Agreement No 317089 (GATIS).\\
{ }\\
Please report typos or send other improvement requests for these lecture notes to \url{[email protected]}.
\chapter{From Harmonic Oscillator to QQ-Relations}\la{ch:2}
\section{Inspiration from the Harmonic Oscillator}
To motivate the construction of the QSC we first consider the 1D harmonic oscillator and concentrate on
the features which, as we will see later, have similarities with the construction for the spectrum of ${\mathcal N}=4$ SYM.
The harmonic oscillator is the simplest integrable model which at the same time exhibits nontrivial features surprisingly similar to ${\mathcal N}=4$ SYM. Our starting point is the Schr\"odinger equation
\begin{equation}\la{SH}
-\frac{\hbar^2}{2m} \psi''(x)+V(x)\psi(x)=E\psi(x)
\end{equation}
where $V(x)=\frac{m\omega^2 x^2}{2}$.
Alternatively, it can be written in terms of the quasi-momentum
\begin{equation}\la{pdef}
p(x)=\frac{\hbar}{i}\frac{\psi'(x)}{\psi(x)}
\end{equation}
as
\begin{equation}\la{ppeq}
p^2-i\hbar p'=2m(E-V)\;.
\end{equation}
This non-linear equation is completely equivalent to \eq{SH}.
Instead of solving this equation directly let us make a simple ansatz for $p(x)$.
We see that for large $x$ the r.h.s. behaves as $-m^2\omega^2 x^2$
implying that at infinity $p\simeq i m \omega x$. Furthermore, $p(x)$ should have simple poles
at the positions of the zeros of the wave function, which we denote $x_i$.
All the residues at these points should be equal to $\hbar/i$ as one can see from $\eq{pdef}$.
We can accommodate all these basic analytical properties with the following ansatz:
\begin{equation}\la{px}
p(x)=i m \omega x+\frac{\hbar}{i}\sum_{i=1}^N\frac{1}{x-x_i}\;.
\end{equation}
We note that at large $x$ the r.h.s. of \eq{px} behaves as $i m\omega x+\frac{\hbar}{i}\frac{N}{x}+{ O}(1/x^2)$.
Plugging this large $x$ approximation of $p(x)$ into the exact equation
\eq{ppeq} we get:
\begin{equation}
\left(i m\omega x+\frac{\hbar}{i}\frac{N}{x}\right)^2
+\hbar ( m\omega)=2 m \left(E-\frac{m\omega^2 x^2}{2}\right)+{ O}(1/x)\;.
\end{equation}
Comparing the coefficients in front of $x^2$ and $x^0$ we get $E=\hbar\omega(N+1/2)$ which is the famous formula for the spectrum of the harmonic oscillator.
In order to reconstruct
the wave function we expand \eq{ppeq} near the pole $x=x_i$. Namely, we require
\begin{equation}\la{baxterHO}
{\rm res}_{x=x_k}\left[\left(i m \omega x+\frac{\hbar}{i}\sum_{i=1}^N\frac{1}{x-x_i}\right)^2+i\hbar
\frac{\hbar}{i}\sum_{i=1}^N\frac{1}{(x-x_i)^2}\right]=0\;,
\end{equation}
obtaining (from the first bracket)
\begin{equation}\la{BAEHO}
x_k=\frac{\hbar}{\omega m}\sum_{j\neq k}^N\frac{1}{x_k-x_j}\;\;,\;\;k=1,\dots,N\;.
\end{equation}
This set of equations determines all $x_k$ in a unique way.
\begin{exercise}
Verify for $1$ and $2$ roots that the solution of equation \eq{BAEHO} is unique up to a permutation, and find it.
\end{exercise}
Finally,
we can integrate \eq{pdef} to obtain
\begin{equation}
\psi(x)=e^{-\frac{m\omega x^2}{2\hbar}}Q(x)\;\;,\;\;Q(x)\equiv\prod_{i=1}^N (x-x_i)\;.
\end{equation}
It is here for the first time we see the Q-function, which is the analog of the main building block of the QSC!
We will refer to equation \eq{BAEHO} for zeros of the Q-functions
as the Bethe ansatz equation. We will call $\{x_i\}$ the Bethe roots.
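In units $\hbar=m=\omega=1$ the Bethe ansatz equation takes the form ${x_k=\sum_{j\neq k}1/(x_k-x_j)}$, and its solution is classical: the Bethe roots are the zeros of the Hermite polynomial $H_N$, so that $Q(x)\propto H_N(x)$. A quick numerical check (a sketch using \texttt{numpy}, whose \texttt{hermgauss} returns the zeros of the physicists' $H_N$):

```python
import numpy as np

for N in (2, 3, 5, 8):
    # Gauss-Hermite nodes = zeros of the physicists' Hermite polynomial H_N
    x, _ = np.polynomial.hermite.hermgauss(N)
    # Bethe ansatz equations in units hbar = m = omega = 1:
    #   x_k = sum_{j != k} 1 / (x_k - x_j)
    for k in range(N):
        rhs = sum(1.0 / (x[k] - x[j]) for j in range(N) if j != k)
        assert abs(x[k] - rhs) < 1e-8
```

For $N=2$ the two roots are $\pm 1/\sqrt 2$, in agreement with the exercise above.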
Let us outline the main features which will be important for what follows:
\begin{itemize}
\item The asymptotics $Q(x)\sim x^N$ contain the quantum numbers of the state.
\item Zeros of the $Q(x)$ function can be determined from the condition of cancellation of poles \eq{baxterHO} (analog of Baxter equation)
which can be explicitly written as \eq{BAEHO} (analog of Bethe equations).
\item The wave function can be completely determined from the Bethe roots
or from $Q(x)$ (by adding a simple factor which is universal for all states).
\item The Schr\"odinger equation has a second (non-normalizable)
solution which behaves as $\psi_2\simeq x^{-N-1}e^{+\frac{m\omega}{2\hbar}x^2}$.
Together with the normalizable solution $\psi_1$ they form a Wronskian
\begin{equation}
W=\left|
\bea{cc}\psi_1(x)& \psi_1'(x)\\
\psi_2(x)& \psi_2'(x)
\eea
\right|
\end{equation}
which is a constant.
\begin{exercise}
Prove that the Wronskian $W$ is a constant for a general Schr\"odinger equation.
\end{exercise}
\end{itemize}
\section{$SU(2)$-Heisenberg Spin Chain}\la{sec:twist}
In this section we discuss how the construction from the
previous section generalizes to integrable spin chains -- a system with a large number of degrees of freedom.
The simplest spin chain is the Heisenberg $SU(2)$ magnet, which
is discussed in great detail in numerous reviews and lectures.
We highly recommend Faddeev's 1982 Les Houches lectures \cite{Faddeev:1996iy} for that. We describe the results most essential for us below.
In short, the Heisenberg spin chain is a chain of $L$ spin-$1/2$ particles with a nearest neighbour interaction.
The Hamiltonian of the system can be written as
\begin{equation}\la{ham}
\hat H=2g^2\sum_{i=1}^L(1-{P}_{i,i+1})
\end{equation}
where $P_{i,i+1}$ is an operator which permutes the particles at the position $i$ and $i+1$ and $g$ is a constant.
We introduce twisted boundary conditions by defining
\beqa
P_{L,L+1}|\uparrow,\dots,\uparrow\rangle=|\uparrow,\dots,\uparrow\rangle\;\;,\;\;
P_{L,L+1}|\uparrow,\dots,\downarrow\rangle=
e^{+2i\phi}|\downarrow,\dots,\uparrow\rangle\;,\\
P_{L,L+1}|\downarrow,\dots,\downarrow\rangle=|\downarrow,\dots,\downarrow\rangle
\;\;,\;\;
P_{L,L+1}|\downarrow,\dots,\uparrow\rangle=e^{-2i\phi}|\uparrow,\dots,\downarrow\rangle\;.
\eeqa
The states can, again, be described by the Baxter function $Q_1(u)=e^{\phi u}\prod_{i=1}^{N_1}(u-u_i)$.
The Bethe roots $u_i$ have a physical meaning -- they represent
the momenta $p_i$ of spin down ``excitations" moving in a sea of spin ups
via $u_i=\frac{1}{2}\cot\frac{p_i}{2}$ (see Fig.\ref{su2chain}).
We find the roots $u_j$ from an equation similar to \eq{BAEHO}\footnote{One should assume all $u_j$ to be distinct, as in the harmonic oscillator case.}
\begin{equation}\la{BAEHO2}
\left(\frac{u_k+i/2}{u_k-i/2}\right)^L=
e^{-2i\phi}\prod_{j\neq k}^{N_1}\frac{u_k-u_j+i}{u_k-u_j-i}
\;\;,\;\;k=1,\dots,N_1\;
\end{equation}
\begin{exercise}
Take log and expand for large $u_k$. You should get exactly the same as \eq{BAEHO} up to a rescaling and shift of $u_j$.
\end{exercise}
from which one gets a discrete set of solutions for $\{u_i\}$.
The energy is then given by
\begin{equation}\la{ene}
E=\sum_{j}^{N_1}\frac{2g^2}{u_j^2+1/4}\;.
\end{equation}
\begin{exercise}
Take $L=2$ and compute the energy spectrum in two different ways: 1) by directly diagonalizing the Hamiltonian \eq{ham}, which becomes a $4\times 4$ matrix of the form
$$2g^2\left(
\begin{array}{cccc}
0 & 0 & 0 & 0 \\
0 & 2 & -1-e^{-2 i \phi } & 0 \\
0 & -1-e^{2 i \phi } & 2 & 0 \\
0 & 0 & 0 & 0 \\
\end{array}
\right)$$
Next solve the Bethe equation \eq{BAEHO2} for $N_1=0,1,2$ and compute the energy from the formula \eq{ene}.
\end{exercise}
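A numerical companion to this exercise (a sketch; the values $g=1$, $\phi=0.7$ are arbitrary sample choices): diagonalize the $4\times4$ matrix above and compare with the energies obtained from the $N_1=1$ Bethe roots, which for $L=2$ solve the quadratic ${(u+i/2)^2=e^{-2i\phi}(u-i/2)^2}$; the $N_1=0$ and $N_1=2$ states both have $E=0$.

```python
import numpy as np

phi, g = 0.7, 1.0
# 4x4 twisted Heisenberg Hamiltonian for L = 2 (the matrix of the exercise)
H = 2 * g**2 * np.array([
    [0, 0, 0, 0],
    [0, 2, -1 - np.exp(-2j * phi), 0],
    [0, -1 - np.exp(2j * phi), 2, 0],
    [0, 0, 0, 0]])
evals = np.sort(np.linalg.eigvalsh(H))      # real: H is Hermitian

# N_1 = 1 Bethe equation: (u + i/2)^2 - e^{-2 i phi} (u - i/2)^2 = 0
c = np.exp(-2j * phi)
roots = np.roots([1 - c, 1j * (1 + c), -(1 - c) / 4])   # two real roots
bethe = np.sort(np.real([2 * g**2 / (u**2 + 0.25) for u in roots]))

# N_1 = 0 and N_1 = 2 contribute E = 0; N_1 = 1 gives the two other levels
assert np.allclose(evals, np.sort(np.concatenate(([0, 0], bethe))), atol=1e-8)
```

The two Bethe roots are $u=-\tfrac12\cot\tfrac\phi2$ and $u=\tfrac12\tan\tfrac\phi2$, giving $E=2g^2(2\mp2\cos\phi)$, in agreement with the direct diagonalization.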
One could ask what the analog of the Schr\"odinger equation is in this case.
The answer is given by the Baxter equation of the form
\begin{equation}\la{baxsu2}
T(u)Q_1(u)=(u+i/2)^L Q_1(u-i)+(u-i/2)^L Q_1(u+i)\;,
\end{equation}
where $T(u)$ is a polynomial which plays the role of the potential, but it is not fixed completely and has to be determined from the self-consistency of \eq{baxsu2}.
\begin{exercise}
Show that the leading large $u$ coefficients of $T(u)$ are $T(u)\simeq 2\cos\phi u^L+
u^{L-1}(N_2-N_1)\sin\phi$ where $N_2=L-N_1$.
\end{exercise}
\noindent In practice we do not even need to know $T(u)$ as it is sufficient to require polynomiality from $T(u)$ to get \eq{BAEHO2}
as a condition of cancellation of the poles.
\begin{exercise}
For generic polynomial $Q(u)$ we see that $T(u)$ is a rational function with poles at $u=u_k$, where $Q(u_k)=0$.
Show that these poles cancel if the Bethe ansatz equation \eq{BAEHO2} is satisfied.
\end{exercise}
\begin{figure}
\begin{center}
\def\svgwidth{\textwidth}
\input{updown.pdf_tex}
\end{center}
\caption{\la{su2chain}Two equivalent representations of the same state. In the first case
we treat spin downs as excitations (magnons) moving with some
momenta $p_i$ and all spin ups correspond to the
reference (vacuum) state. In the second case we treat spin ups as excitations moving
with some momenta $q_i$.}
\end{figure}
Notice that for a given polynomial $T(u)$ there is another solution of the Baxter equation which is also a polynomial, up to a twist factor $e^{-u\phi}$,
just like we had before for the Schr\"odinger equation.
Its asymptotics are $Q_2\simeq e^{-u\phi}u^{N_2}$ where $N_2=L-N_1$.
The roots of $Q_2$ also have a physical interpretation -- they describe the $L-N_1$ spin-up particles moving in a sea of spin downs (i.e. the picture dual to that of $Q_1$, in which the roles of the spin-up and spin-down particles are interchanged).
The second solution together with the initial one should satisfy the Wronskian relation
(in the same way as for the Sch\"odinger equation)\footnote{The $\propto$ sign is used to indicate that the equality holds up to a numerical multiplier (which can be easily recovered from large $u$ limit).}
\begin{equation}\la{QQsu2}
\left|
\bea{cc}
Q_1(u-i/2)& Q_1(u+i/2)\\
Q_2(u-i/2)& Q_2(u+i/2)
\eea
\right|\propto\;Q_{12}(u)
\end{equation}
where $Q_{12}(u)$ satisfies
\begin{equation}\la{Q12c}
\frac{Q_{12}(u+i/2)}{Q_{12}(u-i/2)}=\frac{(u+i/2)^L}{(u-i/2)^L}
\end{equation}
so we conclude that $Q_{12}(u)=-2i\sin\phi\; u^L$.
\newpage
\begin{exercise}
Show that if $Q_1$ and $Q_2$ are two linearly independent solutions of \eq{baxsu2}, then \eq{Q12c} holds.
\end{exercise}
We see that there are striking similarities with the harmonic oscillator.
Furthermore, it is possible to invert the above logic and prove the following statement:
equation \eq{QQsu2}, together with the assumption of polynomiality (up to an exponential prefactor),
by itself implies the Bethe equations from which we departed.
This logic is very close to the philosophy of the QSC.
\begin{exercise}
Show that the Baxter equation is the following ``trivial" statement
\begin{equation}
\left|
\bea{lll}
Q(u-i) & Q(u)& Q(u+i)\\
Q_1(u-i) & Q_1(u)& Q_1(u+i)\\
Q_2(u-i) & Q_2(u)& Q_2(u+i)
\eea
\right|=0\;\;,\;\;{\rm for}\;\;Q=Q_1\;\;{\rm or}\;\;Q=Q_2\;.
\end{equation}
From that determine $T(u)$ in terms of $Q_1$ and $Q_2$.
\end{exercise}
\section{Nested Bethe Ansatz and $QQ$-relations}
The symmetry of the Heisenberg spin chain from the previous section is $SU(2)$. In order to get closer to $PSU(2,2|4)$ (the symmetry of ${\mathcal N}=4$ SYM) we now consider a generalization of the Heisenberg
spin chain for the $SU(3)$ symmetry group. For that we just have to assume that there are $3$ possible states per chain site instead of $2$, otherwise the construction of the Hamiltonian is very similar.
The spectrum of the $SU(3)$ spin chain can be found from the ``Nested" Bethe ansatz equations~\cite{Kulish:1983rd}, which now involve two different unknown (twisted) polynomials $Q_A$ and $Q_B$. They can be written as\footnote{By twisted polynomials we mean functions of the form $e^{\psi u}\prod\limits_i(u-u_i)$ for some number $\psi$.}:
\beqa\la{QAB}
1&=&-\frac{Q_A^{++} Q_B^{-}}{Q_A^{--} Q_B^{+}}\;\;,\;\;u=u_{A,i}\\
\frac{Q_\theta^+}{Q_\theta^-}&=&\nonumber
-\frac{Q_A^- Q_B^{++}}{Q_A^+ Q_B^{--}}\;\;,\;\;u=u_{B,i}
\eeqa
and the energy is given by
\begin{equation}
E=\left.i \partial_u\log \frac{Q^+_B}{Q^-_B}\right|_{u=0}\;.
\end{equation}
We denote $Q_\theta=u^L$.
We also introduced some very convenient notation
\begin{equation}
f^\pm = f(u\pm i/2)\;\;,\;\;f^{\pm\pm}=f(u\pm i)\;\;,\;\;f^{[\pm a]}=f(u\pm a i/2).
\end{equation}
\begin{exercise}
Show that the $SU(3)$ Bethe equations reduce to the $SU(2)$ equations
\eq{BAEHO2} and \eq{ene} when $Q_A=1$.
\end{exercise}
\subsection{Bosonic duality}
From the $SU(2)$ Heisenberg spin chain we learned that the Baxter polynomial $Q_1(u)$ contains as many roots as there are down arrows in the state. In particular, the trivial polynomial $Q_1(u)=e^{-u\phi}$ corresponds to the state $|\uparrow\uparrow\dots\uparrow\rangle$. One can also check that there is only one solution of the Bethe equations where $Q_1(u)$ is a twisted polynomial of degree $L$, and it satisfies
\begin{equation}
e^{i\phi/2}Q_1^--e^{-i\phi/2}Q_1^+=2i\sin\phi u^Le^{-u\phi} \; .
\end{equation}
\begin{exercise}
Solve this equation for $L=1$ and $L=2$ and check that
$Q_1$ also solves the Bethe equations of the $SU(2)$ spin chain. Compute the corresponding energy.
\end{exercise}
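A numerical version of this exercise for $L=2$ (a sketch; the coefficients $a$, $b$ below come from matching powers of $u$ after stripping the common factor $e^{-u\phi}$, and $\phi=0.7$ is a sample value):

```python
import numpy as np

phi, L = 0.7, 2
# ansatz Q_1(u) = e^{-u phi} P(u) with P(u) = u^2 + a u + b; matching
# powers of u in  e^{i phi} P(u - i/2) - e^{-i phi} P(u + i/2)
#               = 2 i sin(phi) u^2   fixes a and b:
a = np.cos(phi) / np.sin(phi)
b = 0.25 + np.cos(phi)**2 / (2 * np.sin(phi)**2)
P = lambda u: u**2 + a * u + b

# verify the functional equation on a grid of sample points
u = np.linspace(-2, 2, 25)
lhs = np.exp(1j * phi) * P(u - 0.5j) - np.exp(-1j * phi) * P(u + 0.5j)
assert np.allclose(lhs, 2j * np.sin(phi) * u**2, atol=1e-12)

# the two zeros of P are Bethe roots: the poles of T(u) in the Baxter
# equation cancel there, which is equivalent to the Bethe equations
Q = lambda u: np.exp(-u * phi) * P(u)
for uk in np.roots([1, a, b]):
    res = (uk + 0.5j)**L * Q(uk - 1j) + (uk - 0.5j)**L * Q(uk + 1j)
    assert abs(res) < 1e-10
```

Note that for generic twist the two roots form a complex-conjugate pair.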
As this equation produces a polynomial of degree $L$ it must correspond to the maximally ``excited" state $|\downarrow\downarrow\dots\downarrow\rangle$.
It is clear that even though physically these states are very similar our current description in terms of the Bethe ansatz singles out one of them.
We will see that there is a ``dual" description where the Q-function corresponding to the state
$|\downarrow\downarrow\dots\downarrow\rangle$ is trivial. In the case of the $SU(3)$ spin chain where we have $3$ different states per node of the spin chain, which we can denote $1,2,3$, there are $3$ equivalent vacuum states $|11\dots 1\rangle$, $|22\dots 2\rangle$, and $|33\dots 3\rangle$, but only one of them corresponds to the trivial solution of the $SU(3)$ nested Bethe ansatz. Below we concentrate on the $SU(3)$ case and demonstrate that there are several equivalent sets of Bethe ansatz equations \eq{QAB}.
\begin{figure}
\begin{center}
\def\svgwidth{0.5\textwidth}
\input{bosonicduality.pdf_tex}
\end{center}
\caption{\la{su2chain2}Bosonic duality applied to the first node of the BA.}
\end{figure}
To build a dual set of Bethe equations we first have to pick a $Q$-function which we are going to dualise. For example we can build a new set of Bethe equations by replacing $Q_{A}$, a twisted polynomial of degree $N_A$,
with another twisted polynomial $Q_{\tilde A}$ of degree $N_{\tilde A}=N_B-N_A$, where $N_B$ is the degree of the polynomial $Q_B$.
For that we find a dual $Q$-function $Q_{\tilde A}$ from
\begin{equation}\la{Bdualitysu2}
\left|
\bea{cc}
Q_A^-& Q_A^+\\
Q_{\tilde A}^-& Q_{\tilde A}^+
\eea
\right|\propto\;Q_{B}(u)\;.
\end{equation}
Let's see that $Q_{\tilde A}$ satisfies the same Bethe equation.
By evaluating \eq{Bdualitysu2} at $u=u_{\tilde A,i}+i/2$
and dividing by the same relation evaluated at $u=u_{\tilde A,i}-i/2$ we get:
\begin{equation}
\frac{Q_A Q_{\tilde A}^{++}-0}
{0-Q_A Q_{\tilde A}^{--}}=\frac{Q_B^+}{Q_B^-}\;\;,\;\;u=u_{\tilde A,i}
\end{equation}
which is exactly the first equation \eq{QAB} with $A$ replaced by $\tilde A$!
To accomplish our goal we should also exclude $Q_A$ from the second equation. For that we notice that at $u=u_{B,i}$
the relation gives
\begin{equation}
\frac{Q_A^-}{Q_A^+}=\frac{Q_{\tilde A}^-}{Q_{\tilde A}^+}\;\;,\;\;u=u_{B,i}\;
\end{equation}
which allows us to rewrite the whole set of equations
\eq{QAB} in terms of $Q_{\tilde A}$.
We call this transformation a Bosonic duality. Similarly one can
apply the dualization procedure to $Q_B$. We determine $Q_{\tilde B}$
from
\begin{equation}\la{Bdualitysu2_2}
\left|
\bea{cc}
Q_B^-& Q_B^+\\
Q_{\tilde B}^-& Q_{\tilde B}^+
\eea
\right|\propto\;Q_{A}(u)Q_{\theta}(u)\;.
\end{equation}
By doing this we will be able to replace $B$ by $\tilde B$ in \eq{QAB}.
Let us also show that we can use $Q_{\tilde B}$ instead of $Q_{B}$ in the expression for the energy \eq{ene}. We recall that $Q_\theta(u)\propto u^L$, so evaluating \eq{Bdualitysu2_2} at $u=0$ we get
\begin{equation}\la{QQ0}
Q_B(-\tfrac{i}{2})Q_{\tilde B}(+\tfrac{i}{2})=Q_B(+\tfrac{i}{2})Q_{\tilde B}(-\tfrac{i}{2})\;.
\end{equation}
We can also differentiate \eq{Bdualitysu2_2} in $u$ once and then set $u=0$, so that
\begin{equation}\la{dirQ}
Q_B'(-\tfrac{i}{2})Q_{\tilde B}(+\tfrac{i}{2})+Q_B(-\tfrac{i}{2})Q'_{\tilde B}(+\tfrac{i}{2})=Q'_B(+\tfrac{i}{2})Q_{\tilde B}(-\tfrac{i}{2})+Q_B(+\tfrac{i}{2})Q'_{\tilde B}(-\tfrac{i}{2})\;.
\end{equation}
Dividing \eq{dirQ} by \eq{QQ0} we get
\begin{equation}
\frac{Q_B'(-\tfrac{i}{2})}{Q_{ B}(-\tfrac{i}{2})}+\frac{Q'_{\tilde B}(+\tfrac{i}{2})}{Q_{\tilde B}(+\tfrac{i}{2})}=
\frac{Q'_B(+\tfrac{i}{2})}{Q_B(+\tfrac{i}{2})}+\frac{Q'_{\tilde B}(-\tfrac{i}{2})}{Q_{\tilde B}(-\tfrac{i}{2})}\;,
\end{equation}
which indeed gives
\begin{equation}
E=\left.i \partial_u\log \frac{Q^+_B}{Q^-_B}\right|_{u=0}=\left.i \partial_u\log \frac{Q^+_{\tilde B}}{Q^-_{\tilde B}}\right|_{u=0}\;.
\end{equation}
\paragraph*{Better notation for $Q$-functions}
One can combine the above duality transformations and, say, dualise $Q_A$
after dualising $Q_B$ and so on. In order to keep track of all possible transformations one should introduce some notation, as otherwise we can end up with multiple tildes.
Another question we will try to answer in this part is how many equivalent BAs we generate by applying the duality repeatedly to various nodes.
In order to keep track of the dualities we place numbers $1,2,3$ in between the nodes of the Dynkin diagram.
We place the $Q$-functions on the nodes of the diagram as in Fig.\ref{su2chain2}.
Then we interpret the duality as an exchange of the corresponding labels sitting on the links of the diagram, so if before the dualization of $Q_A$ we had $1,2,3$, after the duality we have to exchange the indexes $1$ and $2$ obtaining $2,1,3$. If instead we first dualised $Q_B$ we would obtain $1,3,2$. Each duality produces a permutation of the numbers.
We also use these numbers to label the $Q$-functions. Namely we assign the indexes to the $Q$ function in accordance with the numbers appearing above the given node. So, in particular, in the new notation
\begin{equation}
Q_{A}=Q_1\;\;,\;\;Q_{B}=Q_{12}\;.
\end{equation}
Each order of the indexes naturally corresponds to a particular set of Bethe equations. For instance, the initial set of Bethe equations on $Q_{A}, Q_{B}$
corresponds to the order $1,2,3$, the Bethe ansatz (BA) for $Q_{\tilde A}, Q_{B}$ corresponds to $2,1,3$, and so on. Now we can answer the question of how many dual BA systems we could have;
this is given by the number of permutations of $1,2,3$, i.e. for the case of $SU(3)$ we get $6$ equivalent systems of BA equations.
Following our prescription we also denote
\begin{equation}
Q_{\tilde A}=Q_2\;\;,\;\;Q_{\tilde B}=Q_{13}\;.
\end{equation}
We note that we should not distinguish $Q$'s which differ only by the order of their indexes. So, for instance, $Q_{21}$ and $Q_{12}$ are the same $Q_{B}$.
We can count the total number of various $Q$-functions we could possibly generate with the dualities: $2^3-2=6$ different $Q$-functions
which are
\begin{equation}
Q_{i}\;\;,\;\;Q_{[ij]}\;\;,\;\;i,j=1,\dots,3
\end{equation}
For completeness we also add $Q_{\emptyset}\equiv 1$ and $Q_{123}=Q_{[ijk]}\equiv Q_\theta=u^L$ so that in total we have $2^3$. For general $SU(N)$ we will find $2^N$ different Q-functions.
We see that the number of Q-functions grows rapidly with the rank of the symmetry group.
For $PSU(2,2|4)$ we get $256$ functions, and we should study the relations among them in more detail.
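The counting is just the binomial sum $\sum_k \binom{N}{k}=2^N$; as a quick sanity check, in Python (the function name here is our own):

```python
from math import comb  # Python >= 3.8

def count_q_functions(rank):
    """One Q-function for every subset of {1,...,rank},
    including Q_emptyset and the full-set Q-function."""
    return sum(comb(rank, k) for k in range(rank + 1))  # equals 2**rank

assert count_q_functions(3) == 8      # SU(3): 6 nontrivial plus Q_emptyset and Q_123
assert count_q_functions(8) == 256    # SU(4|4), i.e. PSU(2,2|4) up to projections
```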
\paragraph*{$QQ$-relations}
Let us rewrite the Bosonic duality in the new notation.
The relation \eq{Bdualitysu2} becomes
\begin{equation}\la{QQ2}
\left|
\bea{cc}
Q_i^-& Q_i^+\\
Q_j^-& Q_j^+
\eea
\right|\propto\;Q_{ij}Q_{\emptyset}
\end{equation}
where we added $Q_\emptyset=1$ to the r.h.s. to make both the l.h.s. and the r.h.s. bilinear in $Q$. Very similarly \eq{Bdualitysu2_2} gives
\begin{eqnarray}\la{qqq1}
\left|
\bea{cc}
Q_{1{\bf 2}}^-& Q_{1{\bf 2}}^+\\
Q_{1{\bf 3}}^-& Q_{1{\bf 3}}^+
\eea
\right|\propto\;Q_{1}(u)Q_{1{\bf 23}}(u)\;.
\end{eqnarray}
We see that both identities can be written in one go as
\begin{eqnarray}\la{qqq2}
\left|
\bea{cc}
Q_{I{\bf i}}^-& Q_{I{\bf i}}^+\\
Q_{I{\bf j}}^-& Q_{I{\bf j}}^+
\eea
\right|\propto\;Q_{I}(u)Q_{I{\bf ij}}(u)\;,
\end{eqnarray}
where for general $SU(N)$ we would have $i=1,\dots,N$
and $j=1,\dots,N$, and $I$ represents a set of indexes,
such that in the first identity \eq{QQ2} it is the empty set $I=\emptyset$,
while in the second identity \eq{qqq1} it contains the single element $1$.
Note that the indexes inside $I$ are mere spectators in these relations, while
in the r.h.s. the indexes $i$ and $j$ get glued together in the new function. We see that proceeding in this way we can build any $Q$-function starting from the basic $Q_i$ with one index only. For that we can first take $I=\emptyset$ and build $Q_{ij}$, then take $I=\{i\}$
and build $Q_{ijk}$ and so on. It is possible to combine these steps together to get explicitly
\begin{equation}\la{QQ3}
Q_{ijk}Q^+_\emptyset Q^-_\emptyset\propto\left|
\bea{ccc}
Q_i^{--}& Q_i& Q_i^{++}\\
Q_j^{--}& Q_j& Q_j^{++}\\
Q_k^{--}& Q_k& Q_k^{++}
\eea
\right|\;\;.
\end{equation}
Whereas the first identity \eq{QQ2} is obvious from the definition, the second \eq{QQ3} is a simple exercise to prove from
\eq{qqq2}.
\begin{exercise}
Prove \eq{QQ3} using the following Mathematica code
\begin{mathematica}
(*define Q to be absolutely antisymmetric*)
Q[a___] := Signature[{a}] Q @@ Sort[{a}] /; ! OrderedQ[{a}]
(*program bosonic duality*)
Bosonic[J___, i_, j_] := Q[J, i, j][u_] -> (
Q[J, i][u + I/2] Q[J, j][u - I/2] -
Q[J, i][u - I/2] Q[J, j][u + I/2])/Q[J][u];
(*checking the identity*)
Q[1, 2, 3][u] Q[][u + I/2] Q[][u - I/2] /. Bosonic[1, 2, 3] /.
Bosonic[1, 2] /. Bosonic[1, 3] /. Bosonic[2, 3] // Factor
\end{mathematica}
Also derive a similar identity for $Q_{ijkl}$ using the same code.
\end{exercise}
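The same check can also be run numerically without Mathematica. Below is a small verification in plain Python: the three test polynomials are an arbitrary choice of ours, the Wronskian implements \eq{QQ2} with $Q_\emptyset=1$, and with these conventions the proportionality constant in \eq{QQ3} works out to be exactly $1$:

```python
# Numerical check of the determinant formula for Q_{ijk}:
# build Q_{ij} and Q_{ijk} from the QQ-relations (with Q_emptyset = 1)
# and compare with the 3x3 determinant of Q_i^{--}, Q_i, Q_i^{++}.

def Q1(u): return u**2 + 1.0            # arbitrary test polynomials
def Q2(u): return u**3 - 2.0*u + 0.5
def Q3(u): return u + 3.0

def wronskian(f, g, u):
    """|f^- f^+; g^- g^+| = f(u - i/2) g(u + i/2) - f(u + i/2) g(u - i/2)."""
    return f(u - 0.5j)*g(u + 0.5j) - f(u + 0.5j)*g(u - 0.5j)

def Q12(u): return wronskian(Q1, Q2, u)
def Q13(u): return wronskian(Q1, Q3, u)

def Q123(u):                             # QQ-relation with I = {1}
    return wronskian(Q12, Q13, u) / Q1(u)

def det3(u):                             # columns shifted by -i, 0, +i
    (a, b, c), (d, e, f), (g, h, i) = \
        [[Q(u - 1j), Q(u), Q(u + 1j)] for Q in (Q1, Q2, Q3)]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

for u in (0.45, -1.2, 1.3 + 0.2j):
    assert abs(Q123(u) - det3(u)) < 1e-8 * (1 + abs(det3(u)))
```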
From the previous exercise it should be clear that we can generate any $Q_{ij\dots k}$ as a determinant of the
basic $N$ Q-functions $Q_i$. In particular the ``full-set'' Q-function $Q_{12\dots N}$, which is also $Q_\theta=u^L$, can be written as a determinant of $N$ basic polynomials $Q_i$. Interestingly this identity by itself is constraining enough to give rise to the full spectrum of the $SU(N)$ spin chain! Indeed $Q_{12\dots N}$ is a polynomial of degree $L$ and thus we get $L$ nontrivial relations
on the coefficients of the (twisted) polynomials $Q_i$, which together contain exactly $L$ Bethe roots. This means that this relation alone is equivalent to the whole set of Nested Bethe ansatz equations.
So we can put aside the non-unique BA approach, dependent on the choice of the vacuum, and replace it completely by a simple determinant condition like \eq{QQ3}. In other words, the QQ-relations and the condition of polynomiality are all we need to quantize this quantum integrable model.
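As a tiny illustration of this statement (our own toy example, not from the text): for the $SU(2)$ chain with $L=2$ and a single magnon, demanding that the Wronskian of $Q_1=u-u_1$ and a degree-two polynomial $Q_2$ be proportional to $u^2$ forces $u_1=0$, reproducing the unique Bethe root of $\left(\frac{u+i/2}{u-i/2}\right)^2=1$:

```python
# SU(2) chain, L = 2, one magnon: Q_1(u) = u - u1, Q_2(u) = u^2 + b*u + c.
# Demanding  Q_1^+ Q_2^- - Q_1^- Q_2^+  ~ u^2  forces u1 = 0 and c = 1/4;
# b stays free, reflecting the usual freedom Q_2 -> Q_2 + const * Q_1.

def wronskian_su2(u, u1, b, c):
    Q1 = lambda v: v - u1
    Q2 = lambda v: v*v + b*v + c
    return Q1(u + 0.5j)*Q2(u - 0.5j) - Q1(u - 0.5j)*Q2(u + 0.5j)

for b in (0.0, 0.37, -2.0):                 # any b works
    for u in (0.6, -1.3, 2.0 + 0.5j):
        # with u1 = 0 and c = 1/4 the Wronskian equals -i u^2 identically
        assert abs(wronskian_su2(u, 0.0, b, c=0.25) + 1j*u*u) < 1e-12

# moving the root away from u1 = 0 destroys proportionality to u^2
assert abs(wronskian_su2(1.0, 0.3, 0.0, 0.25) + 1j) > 1e-3
```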
We will argue that for ${\mathcal N}=4$ SYM we only have to replace the polynomiality with another, slightly more complicated, analyticity condition but otherwise keep the same QQ-relations. We will, however, have to understand what the QQ-relations look like for supersymmetry algebras like $SU(N|M)$, which is described in the next section.
\subsection{Fermionic duality in $SU(N|M)$}
We will see how the discussion in the previous section generalizes to the super-group case. Our starting point will again be the set of nested Bethe ansatz equations, which follow the pattern of the Cartan matrix.
Let us discuss the construction of the Bethe ansatz. Below we write down the Dynkin diagram, the Cartan matrix and the Bethe ansatz equations for the $SU(3|3)$ super spin chain
\begin{eqnarray}\la{BAEs}
\bea{ccc}
\bea{cc}
Q_A&\bigcirc\\
Q_B&\bigcirc\\
Q_C&\bigotimes\\
Q_D&\bigcirc\\
Q_E&\bigcirc
\eea &
\bea{|c|c|c|c|c|}
\hline
2&-1&0&0&0\\ \hline
-1&2&-1&0&0\\ \hline
0&-1&0&+1&0\\ \hline
0&0&+1&-2&+1\\ \hline
0&0&0&+1&-2\\ \hline
\eea &
\bea{l}
-1=(\;\;\;\;\;Q^{++}_A Q_B^-)/(\;\;\;\;\;Q^{--}_A Q_B^+)\;\;,\;\;u=u_{A,i}\\
-1=(Q_A^-Q^{++}_B Q_C^-)/(Q_A^+Q^{--}_B Q_C^+)\;\;,\;\;u=u_{B,i}\\
+1=(Q^{-}_B\;\;\;\;\;\;\;Q_D^+)/(Q^{+}_B\;\;\;\;\;\;\,Q_D^-)\;\;,\;\;u=u_{C,i}\\
-1=(Q_C^+Q^{--}_D Q_E^+)/(Q_C^-Q^{++}_D Q_E^-)\;\;,\;\;u=u_{D,i}\\
-1=(Q_D^+Q^{--}_E \;\;\;\;\;)/(Q_D^-Q^{++}_E\;\;\;\;\;)\;\;,\;\;u=u_{E,i}\\
\eea
\eea
\end{eqnarray}
The $Q$-functions still correspond to the nodes of the Dynkin diagram,
and the shifts of the arguments of the $Q$-functions entering the numerators of the Bethe equations simply follow the pattern of the Cartan matrix (with the opposite shifts in the denominators).
Since the structure of the equations for the bosonic nodes is the same as before, one can still apply the Bosonic duality transformation, for instance to $Q_B$, replacing it by $Q_{\tilde B}$. However, for the fermionic type nodes (normally denoted by a crossed circle), such as $Q_C$, we get a new type of duality transformation
\begin{equation}\la{Qfdual}
Q_C Q_{\tilde C}\propto
\left|
\bea{cc}
Q_B^-& Q_B^+\\
Q_D^-& Q_D^+
\eea
\right|
\end{equation}
which looks similar to the Bosonic one, with the difference that here we can extract the dual Baxter polynomial $Q_{\tilde C}$ explicitly\footnote{Whereas for the Bosonic duality \eq{Bdualitysu2} the dual Baxter polynomial occurs in a more complicated way, and one has to solve a first order finite difference equation in order to extract it.}.
Let us show that the middle Bethe equation can be obtained from the duality relation \eq{Qfdual}.
Indeed we see again that for both $u=u_{C,i}$ and $u=u_{\tilde C,i}$
we get the middle equation
\begin{equation}
1=\frac{Q^+_{B}Q^-_{D}}{Q^-_{B}Q^+_{D}}\;\;,\;\;u=u_{\tilde C,i}\;\;{\rm or}\;\;u=u_{C,i}.
\end{equation}
Next we should be able to exclude $Q_C$ from the other two equations. For that we set $u=u_{B,i}+i/2$ and $u=u_{B,i}-i/2$ to get
\begin{equation}
Q^+_CQ^+_{\tilde C}=c(0- Q_D Q_B^{++})\;\;,\;\;
Q^-_CQ^-_{\tilde C}=c(Q_D Q_B^{--}-0)\;\;,\;\;u=u_{B,i}\;.
\end{equation}
Dividing one by the other
\begin{equation}
-1=\frac{Q^+_CQ^+_{\tilde C}}
{Q^-_CQ^-_{\tilde C}}\frac{Q_B^{--}}{Q_B^{++}}\;\;,\;\;u=u_{B,i}
\end{equation}
which allows us to exclude $Q_C$ from the second equation of \eq{BAEs}. This then becomes
\begin{equation}
-1=\frac{Q_A^-Q^{++}_B Q_C^-}{Q_A^+Q^{--}_B Q_C^+}\;\;\leftrightarrow\;\;
+1=\frac{Q_A^-Q_{\tilde C}^+}{Q_A^+Q_{\tilde C}^-}
\;\;,\;\;u=u_{B,i}\;.
\end{equation}
As we see, this changes the type of the equation from bosonic to fermionic, and thus we also change the type of the Dynkin diagram. This is expected, since for super algebras the Dynkin diagram is not unique. The fourth equation changes in a similar way. To summarize,
after the duality we get
\begin{eqnarray}\la{BAEs2}
\bea{ccc}
\bea{cc}
Q_A&\bigcirc\\
Q_B&\bigotimes\\
Q_{\tilde C}&\bigotimes\\
Q_D&\bigotimes\\
Q_E&\bigcirc
\eea &
\bea{|c|c|c|c|c|}
\hline
2&-1&0&0&0\\ \hline
-1&0&+1&0&0\\ \hline
0&+1&0&-1&0\\ \hline
0&0&-1&0&+1\\ \hline
0&0&0&+1&-2\\ \hline
\eea &
\bea{l}
-1=(\;\;\;\;\;Q^{++}_A Q_B^-)/(\;\;\;\;\;Q^{--}_A Q_B^+)\;\;,\;\;u=u_{A,i}\\
+1=(Q_A^-\;\;\;\;\;\;\; Q_{\tilde C}^+)/(Q_A^+\;\;\;\;\;\;\; Q_{\tilde C}^-)\;\;,\;\;u=u_{B,i}\\
+1=(Q^{+}_B\;\;\;\;\;\;\;Q_D^-)/(Q^{-}_B\;\;\;\;\;\;\,Q_D^+)\;\;,\;\;u=u_{\tilde C,i}\\
+1=(Q_{\tilde C}^-\;\;\;\;\;\;\;Q_E^+)/(Q_{\tilde C}^+\;\;\;\;\;\;\; Q_E^-)\;\;,\;\;u=u_{D,i}\\
-1=(Q_D^+Q^{--}_E \;\;\;\;\;)/(Q_D^-Q^{++}_E\;\;\;\;\;)\;\;,\;\;u=u_{E,i}\\
\eea
\eea
\end{eqnarray}
\begin{figure}
\begin{center}
\def0.6\textwidth{0.5\textwidth}
\input{fermionicduality.pdf_tex}
\end{center}
\caption{\la{su2chainf}Fermionic duality}
\end{figure}
\paragraph*{Index notation}
Again in order to keep track of all possible combinations of dualities we have to introduce index notation.
In the super case we label the links in the Dynkin diagram by two types of indexes (with hat and without). The type of the index changes each time we cross a fermionic node. For instance our initial set of Bethe equations corresponds to the indexes $123\hat 1\hat 2\hat 3$. The fermionic duality transformation again simply exchanges the labels on the links of the Dynkin diagram (see Fig.\ref{su2chainf}). So after duality we get $12\hat 1 3\hat 2\hat3$, which is consistent with the $\bigcirc-\bigotimes-\bigotimes-\bigotimes-\bigcirc$ grading of the resulting Bethe ansatz equations.
Finally, we label the $Q$-functions by two antisymmetric groups of indexes -- with hat and without again simply listing all indexes appearing above the given node of the Dynkin diagram. In particular we get
\beqa
&Q_A=Q_{1}\;\;,\;\;Q_B=Q_{12}\;\;,\;\;Q_C=Q_{123}\;\;,&\\
\nonumber&Q_{\tilde C}=Q_{12\hat 1}\;\;,\;\;Q_D=Q_{123\hat 1}\;\;,\;\;Q_E=Q_{123\hat 1\hat 2}\;.&
\eeqa
An alternative notation is to omit hats and separate the two sets of indexes by a vertical line:
\beqa
&Q_A=Q_{1|\emptyset}\;\;,\;\;Q_B=Q_{12|\emptyset}\;\;,\;\;Q_C=Q_{123|\emptyset}\;\;,&\\
\nonumber&Q_{\tilde C}=Q_{12|1}\;\;,\;\;Q_D=Q_{123| 1}\;\;,\;\;Q_E=Q_{123|12}\;.&
\eeqa
\begin{exercise}
The fermionic duality transformation changes the type of the Dynkin diagram. The simplest way to understand which diagram one gets after the duality is to follow the indexes attached to the links. Each time the type of the index changes (from hatted to non-hatted)
you should draw a cross. List all possible Dynkin diagrams corresponding to $SU(3|3)$ Lie algebra.
\end{exercise}
\paragraph*{Fermionic QQ-relations}
In index notation \eq{Qfdual} becomes
\begin{equation}\la{fe}
\boxed{
Q_{I b} Q_{I \hat i}\propto
Q_{I}^-Q_{I b\hat i}^+
-
Q_{I}^+Q_{I b\hat i}^-
}\;.
\end{equation}
For completeness let us write here the bosonic duality relations
\begin{equation}\la{bo}
\boxed{
Q_{I}Q_{I a b}\propto
Q_{I a}^+Q_{I b}^-
-
Q_{I b}^+Q_{I a}^- \;\;,\;\;
Q_{I}Q_{I \hat i\hat j}\propto
Q_{I \hat i}^+Q_{I \hat j}^-
-
Q_{I \hat j}^+Q_{I \hat i}^-
}\;.
\end{equation}
In the case of $SU(N|M)$ one can derive all $Q$-functions in terms of the $N+M$ single-index functions $Q_a$
and $Q_{\hat i}$. We will demonstrate this in the next section in the example of $SU(4|4)$.
\section{QQ-relations for $PSU(2,2|4)$ Spin Chain}\la{sec:PSU224QQ}
The global symmetry of ${\mathcal N}=4$ SYM is $PSU(2,2|4)$. The QQ-relations from the previous section associated with this symmetry constitute an important part of the QSC construction.
The symmetry (up to a real form and a projection) is the same as $SU(4|4)$.
In this section we specialize the QQ-relations from the previous part to this case and derive the most important relations among the Q-functions.
In particular we show that all $256$ Q-functions can be derived from just $4+4$ Q-functions with one index
\begin{equation}
Q_{a|\emptyset}\;\;,\;\;Q_{\emptyset|i}\;,
\end{equation}
which are traditionally denoted in the literature as
\begin{equation}
{\bf P}_a\;\;,\;\;{\bf Q}_i\;.
\end{equation}
These are the elementary Q-functions.
For us, another important object is $Q_{a|i}$. According to the general consideration above it
can be obtained from the fermionic duality relation \eq{fe} with $I=\emptyset$
\begin{equation}\la{Qai}
Q_{a| j}^+-Q_{a| j}^-={\bf P}_a{\bf Q}_j\;.
\end{equation}
This is a first order finite difference equation for $Q_{a|i}$, which one should solve; the formal solution to this equation is\footnote{Note that there is a freedom to add a constant to $Q_{a|i}$. This freedom is fixed in the twisted case as we should require that $Q_{a|j}$ has a ``pure'' asymptotics at large $u$ i.e. $e^{\phi_{ai}u}u^\alpha(1+A_1/u+A_2/u^2+\dots)$.}
\begin{equation}\la{Qaiformal}
Q_{a|j}(u)=-\sum_{n=0}^\infty {\bf P}_a(u+i\tfrac{2n+1}{2}){\bf Q}_j(u+i\tfrac{2n+1}{2})\;.
\end{equation}
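The sum \eq{Qaiformal} is easy to test numerically: truncating it at $N$ terms, the difference equation \eq{Qai} is satisfied up to a single boundary term $f(u+iN)$. Here $f$ is a decaying test function standing in for the product ${\bf P}_a{\bf Q}_j$; the choice $1/u^4$ is our own:

```python
# Truncation of the formal solution  Q(u) = -sum_{n>=0} f(u + i(2n+1)/2)
# of the finite difference equation  Q(u + i/2) - Q(u - i/2) = f(u).

def f(u):                       # stands in for P_a(u) Q_j(u)
    return 1.0 / u**4

def Q(u, N=400):
    return -sum(f(u + 0.5j*(2*n + 1)) for n in range(N))

u, N = 0.8 - 0.3j, 400
lhs = Q(u + 0.5j, N) - Q(u - 0.5j, N)
assert abs(lhs - f(u)) < 1e-9                      # f(u + iN) is tiny
assert abs(lhs - (f(u) - f(u + 1j*N))) < 1e-10     # telescoping is exact
```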
\begin{exercise}
Find a solution to the equation \eq{Qai}
for ${\bf P}_a{\bf Q}_j=e^{\phi u}$
and also for ${\bf P}_a{\bf Q}_j=1/u^2$.
\end{exercise}
Once we know $Q_{a|i}$ we can build any $Q$-function explicitly in terms of $Q_{a|i},\;{\bf Q}_i$ and ${\bf P}_a$. For example using the Bosonic duality we can get
\begin{equation}
Q_{{\bf ab}|i}=\frac{Q_{{\bf a}|i}^+Q_{{\bf b}|i}^--Q_{{\bf a}|i}^-Q_{{\bf b}|i}^+}{{\bf Q}_i}\;.
\end{equation}
In this way we can build all Q-functions explicitly in terms of $Q_{a|i},\;{\bf Q}_i$ and ${\bf P}_a$. There is a nice simplification
taking place for Q-functions with equal number of indexes:
\begin{equation}\la{Q22}
Q_{ab|ij}=\left|
\bea{cc}
Q_{a|i}&Q_{a|j}\\
Q_{b|i}&Q_{b|j}
\eea
\right|
\end{equation}
\begin{exercise}
Prove \eq{Q22} using the following Mathematica code
\begin{mathematica}
(*define Q to be absolutely antisymmetric*)
Q[a___][b___][u_] := Signature[{a}] Signature[{b}]
Q[Sequence @@ Sort[{a}]][Sequence @@ Sort[{b}]][u]
/; ! (OrderedQ[{a}] && OrderedQ[{b}])
(*program bosonic and fermionic dualities*)
B1[J___, a_, b_][K___] := Q[J, a, b][K][u_] :>
(Q[J, a][K][u + I/2] Q[J, b][K][u - I/2] -
Q[J, a][K][u - I/2] Q[J, b][K][u + I/2])/Q[J][K][u];
B2[K___][J___, i_, j_] :=
Q[K][J, i, j][u_] :> (Q[K][J, i][u + I/2] Q[K][J, j][u - I/2] -
Q[K][J, i][u - I/2] Q[K][J, j][u + I/2])/Q[K][J][u];
F1[K___, a_][J___, i_][u_] := Q[K, a][J, i][u] :>
(Q[K, a][J, i][u - I] Q[K][J][u - I] +
Q[K, a][J][u - I/2] Q[K][J, i][u - I/2])/Q[K][J][u]
F2[K___, a_][J___, i_][u_] := Q[K, a][J, i][u] :>
(Q[K, a][J, i][u + I] Q[K][J][u + I] -
Q[K, a][J][u + I/2] Q[K][J, i][u + I/2])/Q[K][J][u]
(*deriving the identity*)
Q[a, b][i, j][u] /. B1[a, b][i, j] /. B2[a][i, j] /. B2[b][i, j] /.
Flatten[Table[F1[c][k][u + I], {c, {a, b}}, {k, {i, j}}]] /.
Flatten[Table[F2[c][k][u - I], {c, {a, b}}, {k, {i, j}}]] /.
B2[][i, j] // Simplify
\end{mathematica}
Also derive a similar identity for $Q_{abc|ijk}$ using the same code.
The general strategy is to use the bosonic duality to decompose $Q$'s into $Q$-functions with fewer indexes. Then use \eq{Qai} to bring all $Q_{a|k}(u+i n)$ to the same argument $Q_{a|k}(u)$.
After that the expression should simplify enormously.
Also show the following identities to hold
\begin{eqnarray}\la{Q3n4}
Q_{abc|ijkl}=
{\bf Q}_i Q^+_{abc|jkl}-
{\bf Q}_j Q^+_{abc|kli}+
{\bf Q}_k Q^+_{abc|lij}-
{\bf Q}_l Q^+_{abc|ijk}\;,\\
Q_{abcd|ijk}=\la{Q4n3}
{\bf P}_a Q^+_{bcd|ijk}-
{\bf P}_b Q^+_{cda|ijk}+
{\bf P}_c Q^+_{dab|ijk}-
{\bf P}_d Q^+_{abc|ijk}\;.
\end{eqnarray}
Also check \eq{Q12341234} and \eq{periodicity} below.
\end{exercise}
In particular for the $Q$-function with all indexes $Q_{1234|1234}$ (remember that the $Q$-function with all indexes played an important role in the XXX spin chain, giving the external ``potential'' $Q_\theta=u^L$)
we get
\begin{equation}\la{Q12341234}
Q_{1234| 1 2 3 4}=
\left|
\bea{cccc}
Q_{1|1}&Q_{1|2}&Q_{1|3}&Q_{1|4}\\
Q_{2|1}&Q_{2|2}&Q_{2|3}&Q_{2|4}\\
Q_{3|1}&Q_{3|2}&Q_{3|3}&Q_{3|4}\\
Q_{4|1}&Q_{4|2}&Q_{4|3}&Q_{4|4}
\eea
\right|\;.
\end{equation}
Finally, one can show that
\begin{equation}\la{periodicity}
Q_{1234|1234}^{+}-Q_{1234|1234}^-=\sum_{a,i}{\bf Q}_i{\bf P}_a Q^-_{1234\check a|1234 \check i}
\end{equation}
where the check (inverse hat) denotes an ``index annihilator'' i.e. for example $Q_{1234\check 4|\dots}=Q_{123|\dots}$ and $Q_{1234\check 3|\dots}=-Q_{123\check 3 4|\dots}=-Q_{124|\dots}$ and so on.
\paragraph*{Hodge duality} The $SU(4|4)$ Dynkin diagram has an obvious symmetry -- we can flip it upside down. At the same time the labeling of the $Q$-functions essentially breaks this symmetry as we agreed to list all indexes from above a given node and not below. To fix this we can introduce a Hodge dual set of $Q$-functions by defining
\begin{eqnarray}\la{hdu}
Q^{a_1\dots a_n|i_1\dots i_m}\equiv (-1)^{n m} \epsilon^{a_1\dots a_n b_1\dots b_{4-n}}\epsilon^{i_1\dots i_m j_1\dots j_{4-m}}Q_{b_1\dots b_{4-n}|j_1\dots j_{4-m}}
\end{eqnarray}
with $b_1<\dots<b_{4-n}$ and $j_1<\dots<j_{4-m}$ so that there is only one term in the r.h.s.
One can check that these $Q$-functions with upper indexes satisfy the same QQ-relations as the initial $Q$-functions\footnote{in particular \eq{hdu} implies $Q^{\emptyset|1}=+Q_{1234|234}$ and $Q^{\emptyset|2}=-Q_{1234|134}$ and so on.
}.
Finally, we already set $Q_{\emptyset|\emptyset}=1$ and considering the symmetry of the system we should also set $Q_{1234|1234}=Q^{\emptyset|\emptyset}=1$. In fact that is indeed the case for ${\mathcal N}=4$ SYM whereas for the spin chains we have $Q_\theta=u^L$ attached to one of the ends of the Dynkin diagram, which breaks the symmetry.
Assuming $Q_{1234|1234}=1$ we get some interesting consequences. In particular the l.h.s. of \eq{periodicity} vanishes and we get
\begin{equation}\la{zero}
{\bf Q}_i{\bf P}_aQ^{a|i}=0\;.
\end{equation}
Also we can rewrite \eq{Q3n4} and \eq{Q4n3} in our new notation
\beqa
{\bf P}^a&\equiv& Q^{a|\emptyset}=Q^{a|i}(u+i/2){\bf Q}_i\;,\\
{\bf Q}^i&\equiv& Q^{\emptyset|i}=Q^{a|i}(u+i/2){\bf P}_a\;.
\eeqa
Combining that with \eq{zero} we get
\begin{equation}
{\bf P}_a{\bf P}^a={\bf Q}_i{\bf Q}^i=0\;.
\end{equation}
Finally we can expand the determinant of the $4\times 4$ matrix in \eq{Q12341234} along the first row to get
\begin{equation}
1=Q_{1|1}Q_{234|234}
-Q_{1|2}Q_{234|134}
+Q_{1|3}Q_{234|124}
-Q_{1|4}Q_{234|123}\;,
\end{equation}
which is equivalent to $-1=Q_{1|i}Q^{1|i}$. Also we can replace the first row in \eq{Q12341234} by $Q_{2|i}$ instead of $Q_{1|i}$ to get a vanishing determinant. At the same time, expanding this determinant along the first row results in $0=Q_{2|i}Q^{1|i}$. At the end we get the following general expression
\begin{equation}\la{QupQdn}
Q_{a|i}Q^{b|i}=-\delta_a^b\;
\end{equation}
which implies that, as a $4\times 4$ matrix, $Q^{a|i}$ is (minus) the inverse of $Q_{a|i}$.
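At each value of $u$ the relation \eq{QupQdn} is (up to the sign conventions of \eq{hdu}) nothing but the familiar statement that the signed cofactors of a unit-determinant $4\times4$ matrix form its inverse. A generic-matrix illustration in plain Python, with no special structure of $Q_{a|i}$ assumed:

```python
from itertools import permutations

def det(M):
    n, total = len(M), 0
    for p in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        term = sign
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

def minor(M, r, c):
    return [[M[i][j] for j in range(len(M)) if j != c]
            for i in range(len(M)) if i != r]

# a generic matrix, rescaled so that its determinant
# (the analogue of Q_{1234|1234}) equals one
M = [[2, 1, 0, 1], [0, 1, 1, 0], [1, 0, 1, 1], [1, 1, 0, 3]]
scale = det(M) ** (-0.25)
M = [[x * scale for x in row] for row in M]

# signed cofactor matrix; plays the role of the Hodge-dual Q's
cof = [[(-1) ** (a + i) * det(minor(M, a, i)) for i in range(4)]
       for a in range(4)]

for a in range(4):
    for b in range(4):
        s = sum(M[a][i] * cof[b][i] for i in range(4))
        assert abs(s - (1.0 if a == b else 0.0)) < 1e-9
```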
With these relations, we have completed the task of building the QQ-relations for $SU(4|4)$ symmetry (with an additional condition that $Q_{1234|1234}=1$, which can be associated with `P' in $PSU(2,2|4)$). The next step is to understand the analytical properties of the $Q$-functions.
For the case of the spin chain all $Q$-functions are simply polynomials and it was sufficient to produce the spectrum from the QQ-relations. However, in that construction there is no room for a continuous parameter -- the 't Hooft coupling $g=\frac{\sqrt\lambda}{4\pi}$ and
thus for ${\mathcal N}=4$ SYM the analytical properties should be more complicated, and we will motivate the analyticity in the next section.
The analytical properties are the missing ingredient in the construction, and to deduce them we will have to revisit the
strong coupling limit.
\chapter{Classical String and Strong Coupling Limit of QSC}\la{sec:classics}\la{ch:3}
In this section we briefly describe the action of the superstring in $AdS_5\times S^5$, following closely \cite{Gromov:2007aq}.
We also advise the reader to study the lecture notes of K.~Zarembo from the same Les Houches summer school.
\section{Classical String Action}
The classical action is similar to the action of the principal chiral field (PCF), so let us briefly review it. The fields $g(\sigma,\tau)$ in PCF belong to the $SU(N)$ group. One builds ``currents" out of them by
\begin{equation}\la{current}
J_\mu\equiv -g^{-1} \partial_\mu g
\end{equation}
and then the classical action is simply
\begin{equation}
S=\frac{\sqrt{\lambda}}{4\pi}\int{\rm tr} (J\wedge J)\;.
\end{equation}
The global symmetry of this action is $SU_L(N)\times SU_R(N)$ since we can change $g(\sigma,\tau)\to h_L g(\sigma,\tau) h_R$ for arbitrary $h_L,h_R\in SU(N)$ without changing the action.
The construction for the Green--Schwarz superstring action is very similar. We take $g\in SU(2,2|4)$ and then the current $J$ (taking values in the $su(2,2|4)$ algebra) is built in the same way as in \eq{current}.
The only new ingredient is that we have to decompose the current into $4$ components in order to ensure an extra local $sp(2,2)\times sp(4)$ symmetry in the way described below.
The superalgebra $su(2,2|4)$ can be represented by $8\times 8$ supertraceless supermatrices
\begin{equation}
M=\left(
\bea{c|c}
A&B\\ \hline
C&D
\eea
\right)
\end{equation}
where $A\in u(2,2)$ and $D\in u(4)$ and the fermionic components are related by
\begin{equation}
C=B^\dagger\left(
\bea{cc}1_{2\times 2}&0\\
0&-1_{2\times 2}
\eea
\right)\;.
\end{equation}
An important property of the $su(2,2|4)$ superalgebra is that there is a $Z_4$ automorphism (meaning that one should act $4$ times to get a trivial transformation). This $Z_4$ automorphism has its counterpart in the QSC construction as we discuss later. Its action on an element of the algebra is defined in the following way:
\begin{equation}
\phi[M]\equiv \left(
\bea{c|c}
E A^T E &-E C^T E\\ \hline
E B^T E & E D^T E
\eea
\right)\;\;,\;\;E=\left(
\bea{cccc}
0&-1&0&0\\
1&0&0&0\\
0&0&0&-1\\
0&0&1&0
\eea
\right)\;.
\end{equation}
It is easy to see that $\phi^4=1$. The consequence of this is that any element of the algebra can be decomposed into
the sum $M=M^{(0)}+M^{(1)}+M^{(2)}+M^{(3)}$, such that $\phi[M^{(n)}]=i^n M^{(n)}$.
\begin{exercise}
Find $M^{(n)}$ for $n=0,1,2,3$ explicitly in terms of $A,B,C,D,E$.
\end{exercise}
The invariant part $M^{(0)}$ spans exactly $sp(2,2)\times sp(4)$. Accordingly, we can decompose the current $J=J^{(0)}+J^{(1)}+J^{(2)}+J^{(3)}$ and define the action as
\begin{equation}\la{GS}
S=\frac{\sqrt\lambda}{4\pi}
\int {\rm str} \left(
J^{(2)}\wedge * J^{(2)}-J^{(1)}\wedge J^{(3)}\right)\;.
\end{equation}
\begin{exercise}
Show that $M^{(0)}\in sp(2,2)\times sp(4)$.
\end{exercise}
\begin{exercise}The fact that the action does not contain $J^{(0)}$ guarantees the local invariance of the action w.r.t. $sp(2,2)\times sp(4)$. Explain why.
\end{exercise}
The equations of motion which one can derive from the action \eq{GS} are
\begin{equation}\la{EOM}
\partial_\mu k_\mu=0\;\;,\;\;k_\mu=gK_\mu g^{-1}\;\;,\;\;K=J^{(2)}+\frac{1}{2}* J^{(1)}-\frac{1}{2}* J^{(3)}\;.
\end{equation}
One can also interpret $k_\mu$ as the Noether current w.r.t.\ the global $PSU(2,2|4)$ symmetry $g\to h g$.
\begin{exercise}Derive $k_\mu$ from Noether's theorem.
\end{exercise}
\section{Classical Integrability}
The equations of motion \eq{EOM} and the flatness condition:
\begin{equation}\la{flt}
dJ-J\wedge J=0
\end{equation}
can be packed into the
flatness condition
of the 1-form
\begin{equation}\la{defA}
A(u)=J^{(0)}+\frac{u}{\sqrt{u^2-4g^2}} J^{(2)}-
\frac{2g}{\sqrt{u^2-4 g^2}} * J^{(2)}\;\;,\;\;u\in \mathbb{C}
\end{equation}
where we use that classically we can set $J^{(1)}=J^{(3)}=0$, as these fermionic parts only become relevant at 1-loop level.
\newpage
\begin{exercise}By expanding in a Taylor series in $u$, show that each term in the expansion vanishes as a consequence of \eq{EOM} and \eq{flt}, i.e.
\begin{equation}\la{flt2}
dA(u)-A(u)\wedge A(u)=0\;\;,\;\;\forall u.
\end{equation}
{Hint:} First verify \eq{flt2} for $u=0$.
For that you will have to project the equation \eq{flt} into $Z_4$ components first. For example
\begin{equation}\la{flt3}
dJ^{(0)}-J^{(0)}\wedge J^{(0)}-J^{(2)}\wedge J^{(2)}=0\;.
\end{equation}
\end{exercise}
The existence of the flat connection $A(u)$, depending on a spectral parameter $u$, implies integrability of the model,
at least at the classical level. Note that \eq{flt2}\footnote{which in more familiar notations becomes $F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu +[A_\mu,A_\nu]=0$.} implies that $A(u)$ is a ``pure gauge'' i.e. there exists a matrix valued function $G(\sigma,\tau,u)$ such that
\begin{equation}
A_\mu(u)=-G^{-1}\partial_\mu G\;.
\end{equation}
A way to build $G$ is to compute the Wilson line from some fixed point to $(\sigma,\tau)$
\begin{equation}
G(\sigma,\tau,u)={\rm Pexp}\int^{(\sigma,\tau)} A(u)\;.
\end{equation}
Using $G$ we can build
the monodromy matrix (which is a super matrix $(4+4)\times(4+4)$)
\begin{equation}
\Omega(u,\tau)=G^{-1}(0,\tau,u)G(2\pi,\tau,u)={\rm Pexp}\oint_\gamma A(u)\;.
\end{equation}
where $\gamma$ is a closed path starting and ending at some point
on the worldsheet and wrapping once around the worldsheet cylinder.
The flatness condition allows us to deform the contour freely
provided the endpoints are fixed.
Shifting the whole path in time will produce a similarity
transformation of $\Omega(u,\tau)$.
\begin{exercise}
Show that the eigenvalues of $\Omega(u,\tau)$ do not depend on $\tau$
if $A$ is flat.
\end{exercise}
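This can also be seen numerically. The sketch below (plain Python, with $2\times2$ toy matrices rather than the $(4+4)\times(4+4)$ superstring connection) builds a flat connection as a gauge transform of a constant matrix $M$ by a $\sigma$-periodic rotation, approximates ${\rm Pexp}$ by an ordered product of exponentials, and checks that the eigenvalues of the monodromy come out $\tau$-independent:

```python
import math, cmath

M = ((0.3, 0.7), (0.2, -0.3))            # constant traceless matrix
J = ((0.0, -1.0), (1.0, 0.0))

def mul(A, B):
    return tuple(tuple(sum(A[i][k]*B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def A_sigma(s, t):
    """A = g^{-1} M g + g^{-1} dg/ds for g a rotation by
    theta(s,t) = (1 + 0.5 t) sin s  (periodic in s), so A is flat."""
    th = (1 + 0.5*t)*math.sin(s)
    dth = (1 + 0.5*t)*math.cos(s)
    c, sn = math.cos(th), math.sin(th)
    g, ginv = ((c, -sn), (sn, c)), ((c, sn), (-sn, c))
    gMg = mul(ginv, mul(M, g))
    return tuple(tuple(gMg[i][j] + dth*J[i][j] for j in range(2))
                 for i in range(2))

def monodromy(t, N=4000):
    """Ordered product of exp(A ds) over one period in sigma."""
    ds = 2*math.pi/N
    U = ((1.0, 0.0), (0.0, 1.0))
    for k in range(N):
        A = A_sigma((k + 0.5)*ds, t)      # midpoint rule
        Ad = tuple(tuple(A[i][j]*ds for j in range(2)) for i in range(2))
        Ad2 = mul(Ad, Ad)                 # exp(A ds) to second order
        step = tuple(tuple((1.0 if i == j else 0.0)
                           + Ad[i][j] + 0.5*Ad2[i][j]
                           for j in range(2)) for i in range(2))
        U = mul(U, step)
    return U

def eigenvalues(U):
    tr = U[0][0] + U[1][1]
    dt = U[0][0]*U[1][1] - U[0][1]*U[1][0]
    disc = cmath.sqrt(tr*tr - 4*dt)
    return sorted(((tr + disc)/2, (tr - disc)/2), key=lambda z: z.real)

e0, e1 = eigenvalues(monodromy(0.0)), eigenvalues(monodromy(0.7))
assert all(abs(a - b) < 1e-3*(1 + abs(a)) for a, b in zip(e0, e1))
```

Since the rotation is trivial at $\sigma=0$ and $\sigma=2\pi$, the exact monodromy is $e^{2\pi M}$ for every $\tau$, and the discretized product reproduces its eigenvalues to the stated accuracy.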
We denote the eigenvalues of $\Omega(u,\tau)$ as
\begin{equation}\la{qmomo}
\{
e^{i p_1},
e^{i p_2},
e^{i p_3},
e^{i p_4}|
e^{i p_{\hat 1}},
e^{i p_{\hat 2}},
e^{i p_{\hat 3}},
e^{i p_{\hat 4}}
\}\;.
\end{equation}
These functions of the spectral parameter $u$ are
called quasimomenta. Since they do not depend on time they represent a generating function for conserved quantities. One can, for instance,
expand $p_i(u)$ in a Taylor series at large $u$ to obtain infinitely many integrals of motion, which leads to the integrability of the string theory. Below we study the analytic properties of the quasimomenta.
\paragraph*{``Zhukovsky" square roots}
All the quasimomenta have a square root singularity with the branch points at
$\pm 2g$ (inherited from the definition of $A$ \eq{defA}).
Note that the analytic continuation under the cut changes the sign of the terms
with $J^{(2)}$ in \eq{defA} which is in fact equivalent to applying the $Z_4$ automorphism.
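The sign flip is easy to see in the scalar coefficient of $J^{(2)}$ alone: with branch points at $u=\pm 2g$, the function $u/\sqrt{u^2-4g^2}$ takes opposite values on the two sides of the cut $[-2g,2g]$. A short numerical illustration in Python ($g=0.9$ is an arbitrary test value):

```python
import cmath

g = 0.9

def zh_sqrt(u):
    """sqrt(u^2 - 4 g^2) with the branch cut placed on [-2g, 2g]."""
    return cmath.sqrt(u - 2*g) * cmath.sqrt(u + 2*g)

eps = 1e-9
for x in (-1.2, 0.3, 1.5):                  # points inside the cut (-2g, 2g)
    above = x / zh_sqrt(x + 1j*eps)         # coefficient of J^(2) above the cut
    below = x / zh_sqrt(x - 1j*eps)         # ... and just below it
    assert abs(above + below) < 1e-6        # continuation flips the sign

u = 3.0 + 0.4j                              # away from the cut: single-valued,
assert abs(zh_sqrt(u) - cmath.sqrt(u*u - 4*g*g)) < 1e-12   # the usual sqrt
```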
At the same time one can show that
\begin{equation}\la{COC}
C^{-1}\Omega(u)C=\tilde\Omega^{-ST}(u)\;\;,\;\;C=
\left(
\bea{c|c}
E&0\\ \hline
0& E
\eea
\right)
\end{equation}
where $\tilde\Omega(u)$ denotes the analytic continuation of
$\Omega(u)$ under the cut $[-2g,2g]$.
\begin{exercise}
Show that
\begin{equation}
C^{-1} M C=- \phi[M]^{ST}
\end{equation}
where $ST$ denotes the supertranspose, defined as
$
\left(
\bea{c|c}
A&B\\ \hline
C&D
\eea
\right)^{ST}\equiv \left(
\bea{c|c}
A^T&C^T\\ \hline
-B^T&D^T
\eea
\right)
$.
Use this to show that $C^{-1}AC=-\tilde A^{ST}$ (where tilde denotes analytic continuation under the branch cut $[-2g,2g]$). Then prove \eq{COC}.
\end{exercise}
Equation \eq{COC} implies that the eigenvalues of $\Omega(u)$ are related to the eigenvalues of $\tilde\Omega(u)$ by inversion and possible permutation. This statement in terms of the quasimomenta \eq{qmomo} tells us that the analytic continuation of the quasimomenta i.e. $\tilde p_a(u)$ and $\tilde p_{\hat i}(u)$
results in the change of sign and possible reshuffling.
The exact way they reshuffle can be determined by considering some
particular classical solutions and building the quasimomenta explicitly.
Some examples can be found in \cite{Gromov:2007aq}. Since all the classical solutions
are related to each other continuously one finds that\footnote{It is also possible to shift the quasimomenta by $2\pi m$ where $m$ is integer. This is indeed the case for $p_i$ for the classical solutions which wind in $S^5$ and $m$ gives their winding number. The $AdS^5$ quasimomenta still satisfy \eq{perm}.}
\beqa\la{perm}
\tilde p_{\hat 1}(u)=-p_{\hat 2}(u)\;\;,\;\;
\tilde p_{\hat 2}(u)=-p_{\hat 1}(u)\;\;,\;\;
\tilde p_{\hat 3}(u)=-p_{\hat 4}(u)\;\;,\;\;
\tilde p_{\hat 4}(u)=-p_{\hat 3}(u)\;.
\eeqa
This property will play a crucial role in the QSC construction as we discuss in the next section. One can consider \eq{perm} as a manifestation of $Z_4$ symmetry of the action.
\paragraph*{Large $u$ asymptotics and quantum numbers}
Another important property of the quasimomenta is that the quantum numbers of the state can be read off from their values at infinity.
To see this, notice that at large $u$
\begin{equation}
A\simeq-g^{-1}\left(d+*k\frac{2g}{u}\right)g+O(1/u^2)
\end{equation}
where $k_\mu$ is the Noether current defined in \eq{EOM}.
This implies that, at large $u$,
\begin{equation}
\Omega\simeq-g^{-1}\left(1+\frac{2g}{u}\int_0^{2\pi}d\sigma\, k_\tau\right)g
\end{equation}
Using that the charge is $Q_{\rm Noether}=2g\int_0^{2\pi}k_\tau\, d\sigma$, we immediately get
\begin{equation}
\left(\bea{c}
p_{\hat 1}\\
p_{\hat 2}\\
p_{\hat 3}\\
p_{\hat 4}\\ \hline
p_1\\
p_2\\
p_3\\
p_4\\
\eea\right) \simeq
\frac{1}{2 u}
\left(\bea{l}
+\Delta-S_1+S_2 \\
+\Delta+S_1 -S_2 \\
-\Delta-S_1 -S_2 \\
-\Delta+S_1 +S_2 \\ \hline
+J_1+J_2-J_3 \\
+J_1-J_2+J_3 \\
-J_1+J_2 +J_3 \\
-J_1-J_2-J_3
\eea\right)\,. \label{inf}
\end{equation}
where the r.h.s. comes from the diagonalization of $Q_{\rm Noether}/u$ (in the fundamental representation). Here $J_i$ are integer R-charges (which map to the scalar fields in gauge theory), $S_1,S_2$ are integer Lorentz charges (corresponding to the covariant derivatives)
and $\Delta$ is the dimension of the state, i.e. its energy.
Again we will see the quantum counterpart of this formula when we discuss QSC construction in the next section.
\paragraph*{Action variables and WKB quantization}
Another reason for introducing the quasimomenta is that they allow us to define the action variables very easily. For non-trivial solutions the quasimomenta have additional quadratic branch cuts, which come from the diagonalization procedure. The integrals around these cuts give the action variables~\cite{Dorey:2006zj}\footnote{This property fixes the choice of the spectral parameter $u$, which otherwise could be replaced by any function $f(u)$.}
\begin{equation}\la{Actionv}
I_{ C}=\frac{1}{2\pi i}\oint_C p_A(u) du\;,
\end{equation}
where $C$ is some branch cut of $p_A(u)$.
Here $A$ can take any of the $8$ values.
In the Bohr-Sommerfeld quasi-classical quantization procedure one simply imposes $I_C\in \mathbb{Z}$ to get the first quantum correction. For example, in \cite{Gromov:2007aq} this property was used to obtain the 1-loop quantum spectrum of the string.
\section{Quasimomenta and the Strong Coupling Limit of QSC}
To understand how the quasimomenta introduced above are related to the $Q$-functions
from the previous section, we first get some insight from the harmonic oscillator.
Reconstructing $\psi$ from $p$ by inverting the relation \eq{pdef} we get
\begin{equation}\la{psix}
\psi(x)=e^{-\frac{m\omega x^2}{2\hbar}}Q(x)=e^{\frac{i}{\hbar}\int^x p(x) dx}\;.
\end{equation}
Similarly to what we found in \eq{Actionv} we also had
\begin{equation}\la{HOW}
N=\frac{1}{2\pi i}\frac{i}{\hbar}\oint_C p(x)dx\;,
\end{equation}
which allows us to identify $\frac{i x}{\hbar}\to u$,
so that \eq{HOW} and \eq{Actionv} become identical.
Under this identification we can deduce from \eq{psix}
\begin{equation}
Q_A\simeq\exp\left(\int^u p_A(v) dv\right)\;.
\end{equation}
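As a quick sanity check of \eq{HOW}: for the oscillator $\frac{i}{\hbar}p=\partial_x\log\psi$, the Gaussian prefactor is entire, so the closed-contour integral just counts the zeros of the Hermite polynomial $Q=H_N$ by the argument principle. A minimal numerical sketch (units $\hbar=m=\omega=1$; all function names are ours):

```python
import cmath, math

def hermite(n, z):
    """Physicists' Hermite polynomial H_n(z) via the three-term recurrence."""
    h_prev, h = 1.0 + 0j, 2.0 * z
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, 2.0 * z * h - 2.0 * k * h_prev
    return h

def level_from_contour(n, radius=10.0, steps=20000):
    """(1/2 pi i) * contour integral of d log H_n over a circle enclosing all zeros."""
    total = 0j
    h_prev = hermite(n, radius + 0j)
    for k in range(1, steps + 1):
        h = hermite(n, radius * cmath.exp(2j * math.pi * k / steps))
        total += cmath.log(h / h_prev)   # phase increment is small for fine steps
        h_prev = h
    return total / (2j * math.pi)

N = 5
print(round(level_from_contour(N).real))  # -> 5: the contour integral counts the level N
```

The same winding-number logic is what makes \eq{HOW} insensitive to the entire prefactor of $\psi$.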
This naive argument indeed produces the right identification for the strong coupling
limit (i.e. $g\to\infty$)
of
${\bf P}_a$ and ${\bf Q}_i$ functions introduced earlier. More precisely we get:
\beqa\la{expP}
{\bf P}_a\sim\exp\left(-\int^u p_a(v) dv\right)\;\;&,&\;\;
{\bf P}^a\sim\exp\left(+\int^u p_a(v) dv\right)\\
{\bf Q}_i\sim\exp\left(-\int^u p_{\hat i}(v) dv\right)\;\;&,&\;\;
{\bf Q}^i\sim\exp\left(+\int^u p_{\hat i}(v) dv\right)\;.
\eeqa
Note that at the leading classical level we do not control the pre-exponential factors, which may contain some order-$1$ powers of $u$.
From that we can immediately draw a number of important consequences:
\begin{itemize}
\item We can deduce from \eq{inf} that the large $u$ asymptotics of ${\bf P}_a$ and ${\bf Q}_i$ are
of the form $u^{Q_{\rm Noether}/2}$.
\item We can no longer expect that ${\bf P}_a$ or ${\bf Q}_i$ are polynomials as the expressions
\eq{expP} have Zhukovsky branch cuts $[-2g,2g]$.
\item From \eq{perm} we can deduce the following analytic continuation under the branch cut
\begin{equation}
\tilde {\bf Q}_i \sim\exp\left(+\int^u \tilde p_{\hat i}(v) dv\right)=
\exp\left(-\int^u p_{\hat\phi_i}(v) dv\right)\sim {\bf Q}^{\phi_i}
\end{equation}
where $\phi_i$ is determined by \eq{perm} to be $\phi_1=2,\;\phi_2=1,\;\phi_3=4,\;\phi_4=3$. So more explicitly we should have the following monodromies
\begin{equation}\la{classical}
\tilde {\bf Q}_1={\bf Q}^2\;\;,\;\;\tilde {\bf Q}_2={\bf Q}^1\;\;,\;\;\tilde {\bf Q}_3={\bf Q}^4\;\;,\;\;\tilde {\bf Q}_4={\bf Q}^3\;.
\end{equation}
These relations remain almost intact at the quantum level. The only improvement one should make is to complex conjugate the r.h.s., since at the quantum level the ${\bf Q}_i$ are not real:
\begin{equation}\la{gluing}
\tilde {\bf Q}_1=\bar {\bf Q}^2\;\;,\;\;\tilde {\bf Q}_2=\bar {\bf Q}^1\;\;,\;\;\tilde {\bf Q}_3=\bar {\bf Q}^4\;\;,\;\;\tilde {\bf Q}_4=\bar {\bf Q}^3\;.
\end{equation}
The reason for the complex conjugation will become clear in the next Chapter.
\end{itemize}
To conclude this section we notice that we managed to get all the crucial additional information we have to add to the QQ-relations from just classical limit. Namely, the existence of the Zhukovsky cut and the ``gluing" conditions \eq{gluing}. In the next chapter we combine all the information together and give the complete description of the spectrum of $N=4$ SYM by means of the QSC.
\chapter{QSC Formulation}\la{ch:4}
The goal of this section is to summarize the insights we got from the classical limit and from the spin chains and to motivate further the analytic properties of the basic ${\bf P}_a,\;{\bf P}^a,\;{\bf Q}_i,\;{\bf Q}^i$ Q-functions.
\section{Main QQ-Relations}
The Q-functions of ${\mathcal N}=4$ SYM satisfy exactly the same QQ-relations as those of the $SU(4|4)$ spin chain, so we simply summarize here the most important relations from Sec.\ref{sec:PSU224QQ} to make this section self-contained:
\beqa
\la{allQQ1}&&Q_{a| i}^+-Q_{a| i}^-={\bf P}_a{\bf Q}_i\;\;,\\
&&{\bf P}_a{\bf P}^a={\bf Q}_i {\bf Q}^i=0\;,\\
\la{allQQ2}&&{\bf Q}_i=-{\bf P}^a Q_{a|i}^+
\;\;,\\
\la{allQQ22}
&&{\bf Q}^i=+
{\bf P}_a Q^{a|i+}\;\;,\\
\la{allQQ3}&&Q^{a|i}=-(Q_{a|i})^{-t}\;\;.
\eeqa
We also note that the first identity \eq{allQQ1} can be combined with \eq{allQQ2}
into
\begin{equation}
\la{allQQ0}Q_{a| i}^+-Q_{a| i}^-=-{\bf P}_a{\bf P}^b{Q}^+_{b|i}\;.
\end{equation}
This relation tells us that we can use the $8$ functions ${\bf P}_a$ and ${\bf P}^a$
as a basis to reconstruct all other Q-functions, i.e. we can in principle solve \eq{allQQ0}
in terms of ${\bf P}$'s (we will see an example in the next sections). Then we can use $Q_{a|i}$ to find $Q^{a|i}$ as its inverse \eq{allQQ3}, and finally reconstruct ${\bf Q}_i$ and
${\bf Q}^i$ using \eq{allQQ2} and \eq{allQQ22}.
The advantage of this choice of basis is, as we explain below, due to the fact that the analytic properties of ${\bf P}_a$ and ${\bf P}^a$ are the simplest among all $Q$-functions and they can be very efficiently parameterized.
\section{Large $u$ Asymptotic and the Quantum Numbers of the State}
The large $u$ asymptotics of the ${\bf P}$'s and ${\bf Q}$'s can be deduced from their classical limit
\eq{expP} and \eq{inf}. The main complication is that in the non-twisted theory there are additional powers of $u$ coming from the pre-exponent of \eq{expP}, which modify the asymptotics by $\pm 1$. To fix the asymptotics completely one can compare with the Beisert-Staudacher Asymptotic Bethe Ansatz, which can be derived as a limit of the QSC.
We do not discuss this calculation here; it was done in detail in the original paper \cite{Gromov:2014caa}.
Here we just quote the result
\begin{eqnarray}\label{largeu2}
{\bf P}_a\simeq A_a\, u^{-\tilde M_a}\,,\ \ {\bf Q}_{i}\simeq B_i\,u^{\hat M_i-1}\,,\qquad {\bf P}^a\simeq A^a\, u^{\tilde M_a-1}\,,\ \ {\bf Q}^{i}\simeq B^i\,u^{-\hat M_i}\,,
\end{eqnarray}
where
\begin{align}
\label{relMta}
\tilde M_a=&\left\{\frac{J_1+J_2-J_3+2}{2}
,\frac{J_1-J_2+J_3}{2}
,\frac{-J_1+J_2+J_3+2}{2}
,\frac{-J_1-J_2-J_3}{2}
\right\}\\
\hat M_i=&\left\{\frac{\Delta -S_1-S_2+2}{2},
\frac{\Delta +S_1+S_2}{2}
,\frac{-\Delta
-S_1+S_2+2}{2} ,\frac{-\Delta
+S_1-S_2}{2} \right\}
\label{M-ass}\end{align}
We see that the asymptotics are indeed consistent with what we found in the classical limit.
Another way to understand the shift by $\pm 1$ in the asymptotics is to consider a more general twisted theory. The twists (like the parameter $\phi$ we introduced in the spin chain section) remove many degeneracies\footnote{See \cite{Kazakov:2015efa} for more details about the twisted version of the QSC.}. For example, without the twist
the leading asymptotics on the l.h.s. of \eq{allQQ1} cancel and one needs to know the subleading term to deduce the asymptotics of the r.h.s. This does not happen in the twisted case, where $Q_{a|i}\sim e^{\phi_{a,i}u}u^{M_{a,i}}$ and the asymptotics behave more predictably. As a result, in the twisted theory there are no $\pm 1$ shifts w.r.t. the classical-limit asymptotics, and one can alternatively derive \eq{largeu2} by first considering the twisted ${\mathcal N}=4$ SYM and then removing the twists.
\paragraph*{Finding normalization of ${\bf P}$ and ${\bf Q}$}
We will see in the next section that in the near-BPS limit ${\bf P}$ and ${\bf Q}$ become small,
which allows us to solve the QSC exactly at finite coupling. In order to see this we derive a more general result for the coefficients $A_a,\;A^a$ and $B_i,\;B^i$ from \eq{largeu2}.
\begin{exercise}
Use \eq{allQQ1} and \eq{largeu2} to show that
\begin{eqnarray}\label{Qkjlargeu}
Q_{a|j}\simeq
-i\, A_a\,B_j\,\frac{u^{-\tilde M_a+\hat M_j}}{-\tilde M_a+\hat M_j}\,.
\end{eqnarray}
\end{exercise}
From \eq{Qkjlargeu} we can fix the products of the constants \(A^aA_a\) and \(B^iB_i\)
in terms of \(\tilde M_a\) and \(\hat M_j\). Substituting the asymptotics
\eq{Qkjlargeu} into \eq{allQQ2} we get
\begin{equation}
-A^au^{\tilde M_a-1}\left(-i\, A_a\,B_j\,\frac{u^{-\tilde M_a+\hat M_j}}{-\tilde M_a+\hat M_j}\right)=
B_j u^{\hat M_j-1}\;,
\end{equation}
which simplifies to
\begin{equation}
-1= i\,\sum_{a=1}^4\frac{A^aA_a}{\tilde M_a-\hat M_j}\;,
\end{equation}
This is a linear system for the combinations $A^1A_1,\dots,A^4A_4$.
Solving it we find
\begin{eqnarray}\label{AABB0}
A^{a_0}A_{a_0}=i\frac{\prod\limits_{j}(\tilde M_{a_0}-\hat M_j)}{\prod\limits_{b\neq a_0}(\tilde M_{a_0}-\tilde M_b)}\,,\;\;
B^{j_0}B_{j_0}=i\frac{\prod\limits_{a}(\hat M_{j_0}-\tilde M_a)}{\prod\limits_{k\neq
{j_0}}(\hat M_{j_0}-\hat M_k)}\;,\;\; a_0,j_0=1\dots4
\end{eqnarray}
(with no summation over \(a_0\) or \(j_0\) on the l.h.s.).
\begin{exercise}
Derive the relations \eq{AABB0}.
\end{exercise}
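As a cross-check of \eq{AABB0}, the claimed products must solve the linear system above for every $j$; this is just the partial-fraction decomposition of $\prod_j(x-\hat M_j)/\prod_b(x-\tilde M_b)$. A numerical sketch (the sample values of $\tilde M_a$ and $\hat M_j$ are ours, chosen generic):

```python
# Check that A^a A_a from the closed formula solves
#   -1 = i * sum_a  A^a A_a / (Mt_a - Mh_j)   for every j.
Mt = [2.7, 1.3, -0.4, -3.6]    # sample \tilde M_a (generic, non-degenerate)
Mh = [3.9, 2.1, -1.8, -4.2]    # sample \hat M_j

def prod(factors):
    p = 1.0 + 0j
    for f in factors:
        p *= f
    return p

AA = [1j * prod(Mt[a] - Mh[j] for j in range(4))
          / prod(Mt[a] - Mt[b] for b in range(4) if b != a)
      for a in range(4)]

checks = [1j * sum(AA[a] / (Mt[a] - Mh[j]) for a in range(4)) for j in range(4)]
print(all(abs(c + 1) < 1e-10 for c in checks))  # -> True
```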
Interestingly, the condition $A^{a_0}A_{a_0}=0$ for all $a_0$ singles out the BPS states
with protected dimension (this works for physical and even
non-physical operators, e.g. in the BFKL regime).
\begin{exercise}
Find all solutions of $A^{a_0}A_{a_0}=0,\;\forall a_0$ in terms of $J_i,\;S_i$ and $\Delta$.
\end{exercise}
Next we investigate the cut structure of ${\bf P}_a$ and ${\bf Q}_i$.
\section{Analytic Structure of Q-functions}
In this section we deduce the analytic properties of ${\bf P}_a$ and ${\bf Q}_i$
functions following a maximal simplicity principle, i.e. we assume simplest possible
analytical properties which do not contradict the classical limit and the structure of the QQ-system.
In Sec.\ref{sec:classics} from the
strong coupling analysis we deduced that ${\bf P}_a$
and ${\bf Q}_i$ should have cuts with branch points
at $\pm 2g$ (to recall $g=\frac{\sqrt{\lambda}}{4\pi}$ where $\lambda$ is the 't Hooft coupling).
We can assume that ${\bf P}_a$ should
have just one single cut $[-2g,2g]$. Note that since ${\bf P}^a$
is related to ${\bf P}_a$ by the symmetry of flipping the Dynkin
diagram upside-down it should also have the same analytic properties.
Note that ${\bf Q}_i$ (and ${\bf Q}^i$) cannot have the same analytic properties as the ${\bf P}$'s. Indeed, in general $\Delta$ in the asymptotics of ${\bf Q}_i$ is non-integer and thus we must have a nontrivial monodromy around infinity.\footnote{Depending on the values of $J_i$ there could be a similar issue with ${\bf P}_a$, as the asymptotics could contain half-integer powers. Strictly speaking ${\bf P}_a$ could have an extra cut going to infinity, which would disappear in any bi-linear combination of ${\bf P}$'s.}
The simplest way to gain such a monodromy is to choose the branch cut to close through infinity i.e. we can assume that ${\bf Q}_i$ and ${\bf Q}^i$
have
a ``long" branch-cut $(-\infty,-2g]\cup[+2g,+\infty)$.
This simple argument leads us to the analyticity picture of Fig.\ref{cuts}, which historically was derived using the TBA approach in \cite{Gromov:2014caa}.
\begin{figure}
\begin{center}
\def0.6\textwidth{\textwidth}
\input{PQ.pdf_tex}
\end{center}
\caption{\la{cuts}Cut structure of ${\bf P}_a,\;{\bf P}^a$ and ${\bf Q}_i,\;{\bf Q}^i$.}
\end{figure}
Note that ${\bf P}$ and ${\bf Q}$ are additionally constrained to be part of the same Q-system. This makes it very inconvenient to keep different conventions for the choice of the branch cuts, so we need to understand what ${\bf Q}$ looks like under its long cut.
A simple way to explore the space under the cut of ${\bf Q}$ is to use the QQ-relation \eq{allQQ0} written in the form
\begin{equation}
Q_{a|i}^++{\bf P}_a{\bf P}^b Q_{b|i}^+=Q_{a|i}^-
\end{equation}
which implies that
\beqa\la{Qi}
\nonumber {\bf Q}_i&=&-{\bf P}^aQ^+_{a|i}=-{\bf P}^a(\delta_a^b+{\bf P}^{[+2]}_a {\bf P}^{b[+2]})Q_{b|i}^{[3]}\\
&=&-{\bf P}^a(\delta_a^b+{\bf P}^{[+2]}_a {\bf P}^{b[+2]})
(\delta_b^c+{\bf P}^{[+4]}_b {\bf P}^{c[+4]})Q_{c|i}^{[5]}=\dots\;.
\eeqa
First we note that, given the formal solution \eq{Qaiformal}, we can always assume that $Q_{a|i}$ is regular in the upper half plane. Then \eq{Qi} implies that ${\bf Q}_i$ has an infinite ladder of cuts.
The first term in the last line of \eq{Qi} has a cut
at $[-2g,2g]$,
the second has the cut at $[-2g-i,2g-i]$ and so on. See Fig.\ref{Qishort}.
The puzzle is how to make this structure of cuts compatible
with the initial guess that ${\bf Q}_i$
has only one cut going to infinity. In fact there is no contradiction: in order to see the infinite ladder of cuts we should go
to the right of the branch point at $2g$, i.e. under the long cut.
At the same time, if we want to reach the lower half plane avoiding the long cut, we should go under the first short cut, and there we expect no branch-point singularities below the real axis. Thus, if we denote by $\tilde {\bf Q}_i$ the analytic continuation of ${\bf Q}_i$
under the first short cut, it has no branch-cut singularities below the real axis (see Fig.\ref{Qishort}).
\begin{figure}[ht]
\begin{center}
\def0.6\textwidth{0.7\textwidth}
\input{PQ2.pdf_tex}
\end{center}
\caption{Analytic structure of $Q_i$ under its long cut.\la{Qishort}}
\end{figure}
From Fig.\ref{Qishort} we notice an obvious asymmetry between the upper and lower half planes. Indeed, the function ${\bf Q}_i$, which is part of the Q-system, is analytic in the upper half plane, whereas $\tilde {\bf Q}_i$, which does not necessarily satisfy any QQ-relation, is analytic in the lower half plane. In other words, in building the Q-system we chose to keep all Q-functions analytic above the real axis, and thereby potentially lost the symmetry under complex conjugation, which can be linked to the unitarity of the theory. To restore this symmetry we have to impose the ``gluing conditions".
\section{Gluing Conditions}
In this section we address an imperfection of our construction: from the point of view of the QQ-relations, the upper half plane plays a distinguished role. To exchange the upper and lower half planes we can complex conjugate the Q-functions. This procedure does not affect ${\bf P}_a$ and ${\bf P}^a$ much: depending on the normalization constants they can at most change their signs. For ${\bf Q}_i$ the complex conjugation seems more dramatic, as the ladder of branch cuts going down will now go up. Simply multiplying $\bar {\bf Q}_i$ by a constant would not undo the complex conjugation; however, if we also analytically continue $\bar {\bf Q}_i$ under the first branch cut, we get an analytic structure very similar to that of the initial ${\bf Q}_i$!
That is, complex conjugation followed by analytic continuation should give us back either some ${\bf Q}_i$ or ${\bf Q}^i$. To determine which ${\bf Q}$ could do the job, we recall that in the classical limit we obtained \eq{gluing}, and in accordance with that we impose the following gluing conditions\footnote{For physical operators ${\bf Q}_2$ and ${\bf Q}_4$
can mix with ${\bf Q}_1$ and ${\bf Q}_3$, as they grow faster
and have the same non-integer part in the asymptotics (similarly ${\bf Q}^3$ and ${\bf Q}^1$ are defined modulo ${\bf Q}^2$ and ${\bf Q}^4$). As a result
they are defined ambiguously. Fortunately, we do not have to impose all $4$
gluing conditions and it is sufficient to
use another pair of equations.}
\begin{equation}\la{gluing2}
\boxed{\tilde {\bf Q}_1\propto\bar {\bf Q}^2\;\;,\;\;\tilde {\bf Q}_2\propto\bar {\bf Q}^1\;\;,\;\;\tilde {\bf Q}_3\propto\bar {\bf Q}^4\;\;,\;\;\tilde {\bf Q}_4\propto\bar {\bf Q}^3\;.}
\end{equation}
Together with \eq{allQQ1}-\eq{allQQ3}, \eq{gluing2} constitutes
the closed system of QSC equations. It is rather nontrivial that these equations only have a discrete set of solutions (and so far there is no mathematically rigorous proof of this). To demonstrate this we consider some simple examples in the next section and also implement an algorithm which allows us to find solutions numerically.
\section{Left-Right Symmetric Sub-Sector}
In many situations it is sufficient to restrict ourselves to a subset of states with an additional symmetry. The left-right (LR) symmetric sub-sector, which includes the $su(2)$ and $sl(2)$ sub-sectors, contains the states preserving the upside-down symmetry of the Dynkin diagram, i.e. the states with $J_3=0,\;S_2=0$.
To understand what to expect in this case, consider the bosonic subgroup $SO(4,2)\times SO(6)$. The $SO(6)$ Dynkin diagram has $3$ nodes, and imposing the LR symmetry implies that the nodes $1$ and $3$ are indistinguishable, which reduces the symmetry to $SO(5)$ (see Fig.\ref{so5}).
\begin{figure}
\begin{center}
\def0.6\textwidth{0.6\textwidth}
\input{so5.pdf_tex}
\end{center}
\caption{\la{so5} Under identification of the upper and lower nodes the $SO(6)$ Dynkin diagram (on the left) becomes the $SO(5)$ Dynkin diagram (on the right).}
\end{figure}
In order to break $SO(6)$ to $SO(5)$ it is sufficient to select a preferred direction in the $6$D vector representation. Our ${\bf P}_a$
and ${\bf P}^a$ are in the $4$D fundamental and anti-fundamental representations of $SO(6)$. The vector representation can be realized on anti-symmetric tensors with two fundamental indices $A_{ab}$,
and so we can pick a direction breaking $SO(6)$ to $SO(5)$ by picking a particular anti-symmetric tensor $\chi_{ab}$, which can be used to relate the fundamental and anti-fundamental representations, i.e. to lower the indices. In this sub-sector we will get
\begin{equation}\la{Plower}
{\bf P}_a=\chi_{ab}{\bf P}^b\;.
\end{equation}
Since we have already fixed the order of the ${\bf P}$'s by assigning their asymptotics, the only non-zero components of $\chi$
consistent with the asymptotics of ${\bf P}$ are $14,23,32,41$. Finally, we still have the freedom to rescale ${\bf P}_a$ to bring $\chi_{ab}$ to the conventional form
\begin{equation}
\chi_{ab}=\left(
\bea{cccc}
0&0&0&1\\
0&0&-1&0\\
0&1&0&0\\
-1&0&0&0
\eea
\right)\;.
\end{equation}
By the same argument we should impose
\begin{equation}\la{Qlower}
{\bf Q}_i=\chi_{ij}{\bf Q}^j\;
\end{equation}
for the same tensor $\chi_{ij}$.
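Note that with this antisymmetric $\chi_{ab}$ the constraint ${\bf P}_a{\bf P}^a=0$ holds automatically, since $\chi_{ab}{\bf P}^b{\bf P}^a$ contracts an antisymmetric tensor with a symmetric one. A short numerical sketch (the sample values of ${\bf P}^a$ are ours):

```python
chi = [[0, 0, 0, 1],
       [0, 0, -1, 0],
       [0, 1, 0, 0],
       [-1, 0, 0, 0]]

P_up = [0.3 + 1.1j, -2.0 + 0.4j, 0.7 - 0.2j, 1.5 + 0.9j]   # arbitrary P^a
P_dn = [sum(chi[a][b] * P_up[b] for b in range(4)) for a in range(4)]  # P_a = chi_ab P^b

print(abs(sum(P_dn[a] * P_up[a] for a in range(4))) < 1e-14)  # -> True: P_a P^a = 0
```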
\chapter{QSC - analytic examples}\la{ch:5}
In this section we consider an example where the QSC can be solved analytically at finite coupling. Unfortunately, the analytic solutions for physical operators are rather complicated. It is possible to obtain the solution perturbatively at weak coupling, but this already involves computer algebra. Here instead we consider a non-local operator, which can be understood as an analytic continuation of twist-$J$ states.
The twist operators are the states with $J_1=J$, $J_2=J_3=0$ and $S_1=S,\;S_2=0$. They belong to the LR symmetric sub-sector described in the previous section; in the next section we describe the $sl(2)$ sector, to which these states also belong.
\section{$sl(2)$ Sector}
We discuss the simplifications which arise in the $sl(2)$ sector.
As the $sl(2)$ sector is inside the LR subsector we can restrict ourselves to the Q-functions with lower indexes due to \eq{Qlower} and \eq{Plower}.
The asymptotics of ${\bf P}_a$ \eq{largeu2} become
\begin{equation}\la{Pasm}
{\bf P}_1=A_1 u^{-L/2-1}\;\;,\;\;
{\bf P}_2=A_2 u^{-L/2}\;\;,\;\;
{\bf P}_3=A_3 u^{+L/2-1}\;\;,\;\;
{\bf P}_4=A_4 u^{+L/2}\;.
\end{equation}
Similarly for ${\bf Q}_i$
\beqa
\nonumber&&{\bf Q}_1=B_1 u^{+(\Delta-S)/2}\;\;,\;\;
{\bf Q}_2=B_2 u^{+(\Delta+S)/2-1}\;\;,\\
&&{\bf Q}_3=B_3 u^{-(\Delta+S)/2}\;\;,\;\;
{\bf Q}_4=B_4 u^{-(\Delta-S)/2-1}\;\;.
\eeqa
Also we write \eq{AABB0} explicitly for this case
\beqa\la{AAsl2}
\nonumber A_1A_4&=&-\frac{i (-\Delta +L-S+2) (-\Delta +L+S) (\Delta +L-S+2) (\Delta +L+S)}{16 L (L+1)}\\
A_2A_3&=&-\frac{i (-\Delta +L-S) (-\Delta +L+S-2) (\Delta +L-S) (\Delta +L+S-2)}{16 (L-1) L}
\eeqa
and
\beqa\la{BBsl2}
\nonumber B_1B_4&=&
-\frac{i (-\Delta +L+S-2) (-\Delta +L+S) (\Delta +L-S) (\Delta +L-S+2)}{16 \Delta
(S-1) (-\Delta +S-1)}\\
B_2B_3&=&
+\frac{i (\Delta -L+S-2) (\Delta -L+S) (\Delta +L+S-2) (\Delta +L+S)}{16 \Delta
(S-1) (\Delta +S-1)}\;.
\eeqa
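One can verify that \eq{AAsl2} indeed follows from the general formula \eq{AABB0}: plug the $sl(2)$ quantum numbers $J_1=L$, $J_2=J_3=S_2=0$, $S_1=S$ into \eq{relMta}, and use that in the LR sector $\chi$ gives ${\bf P}^1=-{\bf P}_4$ and ${\bf P}^2={\bf P}_3$, so that $A^1A_1=-A_1A_4$ and $A^2A_2=A_2A_3$. A numerical sketch (the sample values of $\Delta$, $L$, $S$ are ours):

```python
De, L, S = 4.3, 2.0, 0.7   # sample Delta, L, S

Mt = [(L + 2) / 2, L / 2, (-L + 2) / 2, -L / 2]                          # \tilde M_a
Mh = [(De - S + 2) / 2, (De + S) / 2, (-De - S + 2) / 2, (-De + S) / 2]  # \hat M_i

def prod(fs):
    p = 1.0 + 0j
    for f in fs:
        p *= f
    return p

def AA(a):   # A^a A_a from the general formula
    return 1j * prod(Mt[a] - Mh[j] for j in range(4)) \
              / prod(Mt[a] - Mt[b] for b in range(4) if b != a)

# explicit sl(2) expressions
A1A4 = -1j * (-De + L - S + 2) * (-De + L + S) * (De + L - S + 2) * (De + L + S) / (16 * L * (L + 1))
A2A3 = -1j * (-De + L - S) * (-De + L + S - 2) * (De + L - S) * (De + L + S - 2) / (16 * (L - 1) * L)

print(abs(AA(0) + A1A4) < 1e-10, abs(AA(1) - A2A3) < 1e-10)  # -> True True
```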
We can see that both $A_aA^a$ and $B_iB^i$ vanish for $\Delta\to L,\;S\to 0$. The reason is that $S=0$ is the BPS protected state, and the vanishing of the coefficients indicates the shortening of the multiplet. At the same time, when ${\bf P}$ and ${\bf Q}$ are small we get an enormous simplification, as we show in the next section, where we consider the near-BPS limit of small $S$.
\section{Analytic Continuation in $S$}\la{sec:cont}
In this section we introduce an analytic continuation in the Lorentz spin $S_1=S$, which for local operators must be integer. The analytic continuation in spin plays an important role as it links the BFKL and DGLAP regimes of high-energy scattering in QCD\footnote{For the applications of the QSC in this regime see \cite{Alfimov:2014bwa,Gromov:2015vua}.}. We leave the questions related to the physics of high-energy scattering outside these lectures and describe in detail the analytic continuation in $S$ from the QSC point of view.
The simplest way to describe the analytic continuation is by considering the gluing conditions \eq{gluing2}, which for LR-symmetric sector reduce to just two
\begin{equation}\la{gluing}
\tilde {\bf Q}_1\propto\bar {\bf Q}_3\;\;,\;\;
\tilde {\bf Q}_2\propto\bar {\bf Q}_4
\;,
\end{equation}
since the other two gluing conditions follow by complex conjugation and analytic continuation of the above two.
Also, as we will see from the numerical analysis in the next section, these two conditions are not independent, and only one of them is sufficient to build the spectrum.
At the same time, imposing both conditions \eq{gluing} leads to the quantization of the charge $S_1$, whereas keeping only the first condition
$\tilde {\bf Q}_1\propto\bar {\bf Q}_3$ allows $S_1$ to be non-integer\footnote{One can show that the second condition necessarily leads to the quantization of $S$~\cite{GA}. It could be simpler to check this numerically with the code we explain in the next Chapter.}! However,
this will modify the second gluing condition. To constrain the possible form of the modified gluing conditions we denote
\beqa\la{Mdef}
\tilde {\bf Q}_i(u)&=& {M_{i}}^j(u)\bar {\bf Q}_j(u)\;\;,\;\;
{M_i}^j(u)=
\left(\bea{cccc}
0&0&{M_{1}}^3&0\\
{M_{2}}^1&{M_{2}}^2&{M_{2}}^3&{M_{2}}^4\\
{M_{3}}^1&0&0&0\\
{M_{4}}^1&{M_{4}}^2&{M_{4}}^3&{M_{4}}^4
\eea\right)_{ij}\;.
\eeqa
Since the gluing condition tells us that $\tilde {\bf Q}_i$ is essentially the same as ${\bf Q}^i$,
up to a possible symmetry transformation of the Q-system, we should assume that ${M_i}^j(u)$ is an $i$-periodic function of $u$: ${M_i}^j(u+i)={M_i}^j(u)$. Furthermore,
since ${M_i}^j(u)$ relates two functions which are both analytic in the lower half plane, it should itself be analytic there.
\newpage
\begin{exercise}
Use the periodicity of ${M_i}^j$ and equation \eq{Mdef} to find ${M_k}^j$
explicitly in terms of $\tilde {\bf Q}_k(u),\tilde {\bf Q}_k(u+i),\tilde {\bf Q}_k(u+2i),\tilde {\bf Q}_k(u+3i)$ and $\bar {\bf Q}_j(u),\bar {\bf Q}_j(u+i),\bar {\bf Q}_j(u+2i),\bar {\bf Q}_j(u+3i)$. From that relation you can see that ${M_{k}}^j$ does not have any branch cuts, but could possibly have poles. However, the existence of poles would contradict the power-like asymptotics of $\bar {\bf Q}_j(u)$ and the analyticity of $\tilde {\bf Q}_i(u)$, as we would have to conclude that $\bar {\bf Q}_j$ has infinitely many zeros in the lower half plane, which is impossible with power-like asymptotics.
\end{exercise}
Armed with the new knowledge of regularity of $M$ we can analytically continue both sides of \eq{Mdef} and complex conjugate them to find the following condition on the matrix $M$:
\beqa\la{Mreality}
\bar M(u)=M^{-1}(u)\;.
\eeqa
Another constraint comes from the LR-symmetry of the state, which tells us that ${\bf Q}_i=\chi_{ij}{\bf Q}^j$, where ${\bf Q}^j$ is a tri-linear combination of ${\bf Q}_i$ as in \eq{QQ3}. So using that we get from \eq{Mdef}
\begin{equation}\la{Mdef2}
\tilde {\bf Q}^l(u)=(\chi^{-1})^{li} {M_{i}}^j\chi_{jk}\bar {\bf Q}^k(u)\;\;,\;\;
\end{equation}
at the same time we can use \eq{QQ3} and \eq{hdu} to rewrite the r.h.s. as a combination of $3$ $\tilde {\bf Q}_i$ and then apply the initial \eq{Mdef}; this results in the following equation
\begin{equation}\la{Mdef3}
\tilde {\bf Q}^i(u)= -{\rm det}(M){(M^{-1})_{j}}^i\bar {\bf Q}^j(u)\;.
\end{equation}
Comparing \eq{Mdef2} and \eq{Mdef3} we get
\begin{equation}\la{Mconstr}
(\chi^{-1})^{li} {M_{i}}^j\chi_{jk}{M_n}^{k}=-\det(M)\delta^{l}_n\;,
\end{equation}
or in matrix form
\begin{equation}
M\chi M^T=-\chi\,{\rm det}(M)\;,
\end{equation}
which implies in particular that $\det(M)=\pm 1$.
\begin{exercise}
Derive \eq{Mconstr} by combining \eq{Mdef2} and \eq{Mdef3}.
\end{exercise}
Imposing \eq{Mconstr} and \eq{Mreality}
we obtain that $M$ should reduce to the following form
\beqa
{M_i}^j(u)=
\left(\bea{crrr}
0&0&\alpha&0\\
\beta&0&\gamma&-\bar{\alpha}\\
\frac{1}{\bar{\alpha}}&0&0&0\\
\frac{\gamma}{\alpha\bar\alpha}&-\frac{1}{\alpha}&\bar{\beta}&0
\eea\right)_{ij}\;,
\eeqa
with real $\gamma$, which results in the following two independent gluing conditions
\beqa\nonumber\la{gluing}
\tilde{\bf Q}_1&=&\alpha\bar {\bf Q}_3\;,\\ \la{gluingF}
\tilde{\bf Q}_2&=&\beta\bar {\bf Q}_1+\gamma \bar{\bf Q}_3-\bar{\alpha}\bar{\bf Q}_4\;.
\eeqa
Since $\alpha$ appears both in the numerator and the denominator, it cannot be a non-trivial function of $u$, as this would create poles.
At the same time $\beta$ and $\gamma$ can be non-trivial periodic functions of $u$. For the twist-two operators ${\rm tr}\,Z D_-^S Z$\footnote{$Z$ is a complex scalar of the theory, $D_-$ is a light-cone covariant derivative.} with non-integer Lorentz spin $S$ we will verify numerically that $\gamma$ is a constant
and $\beta=\beta_1+\beta_2\cosh(2\pi u)+\beta_3\sinh(2\pi u)$. For integer $S$ both $\gamma$ and $\beta$ vanish.
Using this gluing matrix one can compute the BFKL pomeron/odderon eigenvalue by analytically continuing to $S\sim-1$.
\section{Slope Function}\la{slope}
The possibility of a non-integer Lorentz spin $S$
allows us to study the near-BPS regime $S\to 0$ analytically.
In this section we compute the term linear in $S$, called the slope function~\cite{Basso:2011rs}, analytically to all orders in $g$.
This calculation was originally presented in~\cite{Gromov:2014bva}
in a slightly different form; there also the next term in the small-$S$ expansion was derived. Here we adopt the more widely accepted notation of~\cite{Gromov:2014caa}, which differs from that of~\cite{Gromov:2014bva}.
The main simplification in this limit is due to the scaling of ${\bf P}_a$ and ${\bf Q}_i$ with $S\to 0$ and $\Delta=L+e S$ where $e\sim 1$, which can be deduced from the scaling of $A_a$
and $B_i$ \eq{AAsl2} and \eq{BBsl2}:
\beqa\la{AAvals}
A_1A_4\simeq-B_1B_4\simeq-\frac{i}{2}(1-e)S\;\;,\;\;
A_2A_3\simeq-B_2B_3\simeq-\frac{i}{2}(1+e)S\;.
\eeqa
From that we can deduce that ${\bf P}_a$ and ${\bf Q}_i$ both scale as $\sqrt S$.
This scaling is the main simplification: the equation for $Q_{a|i}$ \eq{allQQ1} becomes simply
\begin{equation}
Q_{a|i}^+-Q_{a|i}^-\simeq 0\;,
\end{equation}
i.e. $Q_{a|i}$ is a constant matrix!
To find which constants these are, we can simply use the general formula \eq{Qkjlargeu}, which in our limit gives
\begin{equation}
Q_{a|j}=\left(
\begin{array}{cccc}
-\frac{2 i A_1 B_1}{(e-1) S} & 0
& 0 & 0 \\
0 & -\frac{2 i A_2 B_2}{(e+1) S}
& 0 & 0 \\
0 & 0 & \frac{2 i A_3 B_3}{(e+1)
S} & 0 \\
0 & 0 & 0 & \frac{2 i A_4
B_4}{(e-1) S} \\
\end{array}
\right)\;.
\end{equation}
Using the rescaling symmetry\footnote{\la{rescaling}We can rescale ${\bf P}_1\to f{\bf P}_1$ and ${\bf P}_2\to g{\bf P}_2$, rescaling simultaneously ${\bf P}_3\to \frac{1}{g}{\bf P}_3$ and ${\bf P}_4\to\frac{1}{f}{\bf P}_4$, and similarly for ${\bf Q}_i$. In addition, for ${\bf P}$'s only, we have the freedom ${\bf P}_3\to{\bf P}_3+\gamma_2{\bf P}_2-\gamma_1{\bf P}_1$
and ${\bf P}_4\to{\bf P}_4+\gamma_3{\bf P}_1+\gamma_1{\bf P}_2$ for some constants $\gamma_n$; this ambiguity is resolved in the twisted theory. These transformations are the most general ones which preserve the $\chi_{ab}$ tensor and do not modify the asymptotics of ${\bf P}$'s.} we can set $B_1=i A_4,\;B_2=i A_3,\;B_3=iA_2,\;B_4=iA_1$ giving
\begin{equation}
Q_{a|j}=\left(
\begin{array}{cccc}
i & 0 & 0 & 0 \\
0 & -i & 0 & 0 \\
0 & 0 & i & 0 \\
0 & 0 & 0 & -i \\
\end{array}
\right)\;.
\end{equation}
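The diagonal entries follow directly from \eq{AAvals}: e.g. with $B_1=iA_4$ we get $Q_{1|1}=-\frac{2iA_1B_1}{(e-1)S}=\frac{2A_1A_4}{(e-1)S}=i$, and similarly for the others. A small numerical sketch (the sample values of $e$ and $S$ are ours; only the products of the constants matter):

```python
e, S = 1.8, 0.05                 # sample values
A1A4 = -0.5j * (1 - e) * S       # products from the small-S limit
A2A3 = -0.5j * (1 + e) * S

Q11 = 2 * A1A4 / ((e - 1) * S)   # = -2i A_1 B_1/((e-1)S) with B_1 = i A_4
Q22 = 2 * A2A3 / ((e + 1) * S)   # = -2i A_2 B_2/((e+1)S) with B_2 = i A_3
print(abs(Q11 - 1j) < 1e-12, abs(Q22 + 1j) < 1e-12)  # -> True True
```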
This implies that ${\bf Q}_i$ and ${\bf P}_a$ are essentially equal in this limit due to \eq{allQQ22}:
\begin{equation}
{\bf Q}_1=i{\bf P}_4\;\;,\;\;{\bf Q}_2=i{\bf P}_3\;\;,\;\;{\bf Q}_3=i{\bf P}_2\;\;,\;\;{\bf Q}_4=i{\bf P}_1\;.
\end{equation}
This makes our calculations much easier, as we can write the gluing conditions \eq{gluingF} directly in terms of ${\bf P}$:
\beqa\la{gluing1}
\tilde{\bf P}_4&=&-\alpha\bar {\bf P}_2\\ \la{gluing2}
\tilde{\bf P}_3-\bar{\alpha}\bar{\bf P}_1&=&-\left[\beta_1+\beta_2\cosh(2\pi u)-\beta_3\sinh(2\pi u)\right]\bar {\bf P}_4-\gamma \bar{\bf P}_2\;.
\eeqa
To solve these equations we have to impose the asymptotics on ${\bf P}_a$.
For simplicity we consider only the $L=2$ case, leaving general $L$ as an exercise. For $L=2$, \eq{Pasm} gives:
\begin{equation}
{\bf P}_1\simeq A_1 \frac{1}{u^2}\;\;,\;\;
{\bf P}_2\simeq A_2 \frac{1}{u}\;\;,\;\;
{\bf P}_3\simeq A_3 \;\;,\;\;
{\bf P}_4\simeq A_4 u\;.
\end{equation}
Since ${\bf P}_a$ is a function with only one branch cut, which can be resolved with the help of the Zhukovsky variable $x(u)=\frac{u+\sqrt{u^2-4g^2}}{2g}$, we can use the following general ansatz\footnote{It can be interpreted as a Laurent series in the $x$ plane, where the functions ${\bf P}$ are analytic in the exterior of the unit circle and the first singularity lies inside the unit circle, ensuring good convergence of the series.}
\beqa\la{ansatz}
{\bf P}_1=\sum_{n=2}^\infty\frac{c_{1,n}}{x^n}\;\;,\;\;
{\bf P}_2=\sum_{n=1}^\infty\frac{c_{2,n}}{x^n}\;\;,\;\;
{\bf P}_3=\sum_{n=0}^\infty\frac{c_{3,n}}{x^n}\;\;,\;\;
{\bf P}_4=\sum_{n=-1}^\infty\frac{c_{4,n}}{x^n}\;\;.
\eeqa
Note that under analytic continuation $\tilde x=1/x$.
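Both statements about the Zhukovsky map are easy to verify numerically: the branch $x(u)=\frac{u}{2g}\left(1+\sqrt{1-4g^2/u^2}\right)$ has its only cut at $u\in[-2g,2g]$, satisfies $|x|\ge 1$ and $g(x+1/x)=u$, and the second branch is exactly $1/x$. A short sketch (the sample $g$ and $u$ are ours):

```python
import cmath

g = 0.8

def x_of(u):
    """Zhukovsky variable, branch with |x| >= 1; only cut is u in [-2g, 2g]."""
    return u / (2 * g) * (1 + cmath.sqrt(1 - 4 * g * g / (u * u)))

u = 0.3 + 0.4j
x = x_of(u)
xt = u / (2 * g) * (1 - cmath.sqrt(1 - 4 * g * g / (u * u)))  # second branch

print(abs(g * (x + 1 / x) - u) < 1e-12)         # -> True: inverts u = g(x + 1/x)
print(abs(x) >= 1 and abs(xt * x - 1) < 1e-12)  # -> True: tilde x = 1/x
```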
Now we can use the condition \eq{gluing1} to deduce ${\bf P}_4$ and
${\bf P}_1$. Plugging the ansatz \eq{ansatz} into \eq{gluing1} we get
\begin{equation}
\frac{c_{4,-1}}{x}+c_{4,0}+{c_{4,1}}{x}+{c_{4,2}}{x^2}+\dots=-\alpha\left(
\frac{\bar c_{2,1}}{x}+\frac{\bar c_{2,2}}{x^2}+
\frac{\bar c_{2,3}}{x^3}+\dots
\right)\;.
\end{equation}
We see that the l.h.s. contains infinitely many positive powers of $x$,
whereas the r.h.s. contains only negative powers, which implies that
$c_{4,n\ge 0}=0$ and $c_{2,n\ge2}=0$, and thus
\begin{equation}
{\bf P}_2=\frac{c_{2,1}}{x}\;\;,\;\;{\bf P}_4=-\alpha\bar c_{2,1}x\;.
\end{equation}
In order to deal with the second equation in a similar way we should use the identities
\begin{equation}\la{ident}
\cosh(2\pi u)=\sum_{n=-\infty}^\infty I_{2n}\left(\sqrt{\lambda}\right)x^{2n}(u)\;\;,\;\;
\sinh(2\pi u)=\sum_{n=-\infty}^\infty I_{2n+1}\left(\sqrt{\lambda}\right)x^{2n+1}(u)
\end{equation}
where $I_n(y)$ is the modified Bessel function of the first kind, defined as
\begin{equation}
I_n(y)=\oint \frac{e^{y/2(z+1/z)}}{z^{1-n}}\frac{dz}{2\pi i}\;.
\end{equation}
\newpage
\begin{exercise}
Prove identities \eq{ident}, you will have to use that $u=g(x+1/x)$
and that $g=\frac{\sqrt{\lambda}}{4\pi}$.
\end{exercise}
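These identities are easy to confirm numerically. The following Python sketch (an independent check, not part of the original notes) builds $I_n$ from its Taylor series $I_n(y)=\sum_{k\ge 0}\frac{(y/2)^{n+2k}}{k!\,(n+k)!}$ and tests both sums at a sample point with $|x|>1$:

```python
import math

def bessel_I(n, y, terms=40):
    """Modified Bessel function of the first kind I_n(y) via its Taylor series.
    For integer order, I_{-n} = I_n."""
    n = abs(n)
    return sum((y / 2) ** (n + 2 * k) / (math.factorial(k) * math.factorial(n + k))
               for k in range(terms))

lam = 2.0                                 # sample 't Hooft coupling
g = math.sqrt(lam) / (4 * math.pi)
x = 1.7                                   # any point outside the unit circle
u = g * (x + 1 / x)                       # Zhukovsky map

# the two identities, truncated at |n| ~ 20 where the terms are negligible
cosh_sum = sum(bessel_I(2 * n, math.sqrt(lam)) * x ** (2 * n) for n in range(-20, 21))
sinh_sum = sum(bessel_I(2 * n + 1, math.sqrt(lam)) * x ** (2 * n + 1) for n in range(-21, 21))
print(cosh_sum - math.cosh(2 * math.pi * u))
print(sinh_sum - math.sinh(2 * math.pi * u))
```

Both differences come out at machine-precision level, since $I_n(\sqrt\lambda)\,x^n$ decays factorially in $|n|$ for fixed $x$.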
After that we can express both sides of \eq{gluing2} as a power series in $x$ and match the coefficients. In particular comparing the coefficients of $x^0$ and $x^{-2}$ we get
\begin{equation}
c_{3,0}=-\alpha\beta_3 c_{2,1}I_1(\sqrt{\lambda})\;\;,\;\;c_{1,2}=
\frac{\bar\alpha\bar{\beta}_3 \bar c_{2,1}}{\alpha }
I_3(\sqrt{\lambda
})\;.
\end{equation}
Finally, forming the combinations
\beqa
A_1 A_4&=&g c_{4,-1}c_{1,2}=-g\bar{\alpha}\bar{\beta_3}\bar c^2_{2,1} I_3(\sqrt{\lambda})\\
A_2 A_3&=&g c_{2,1}c_{3,0}=-g\alpha\beta_3 c_{2,1}^2I_1(\sqrt{\lambda})
\eeqa
which are also given in \eq{AAvals} in terms of the real quantity $e=(\Delta-L)/S$ and of $S$, we conclude that
\begin{equation}
\Delta-L=S \frac{I_1(\sqrt{\lambda})+I_3(\sqrt{\lambda})}{I_1(\sqrt{\lambda})-I_3(\sqrt{\lambda})}\;.
\end{equation}
This reproduces the result of \cite{Basso:2011rs}!
\begin{exercise}
Repeat the above calculation for arbitrary $L$. You have to obtain
\begin{equation}
\Delta-L-S=S\frac{\sqrt\lambda I_{L+1}(\sqrt\lambda)}{L I_{L}(\sqrt\lambda)}\;.
\end{equation}
In the derivation you can assume that $\gamma=\beta_1=\beta_2=0$.
We explain below why that is the case for $L=2$.
\end{exercise}
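Note that the modified Bessel recurrence $I_{n-1}(y)-I_{n+1}(y)=\tfrac{2n}{y}I_n(y)$ lets one rewrite the coefficient of $S$ in the $L=2$ result as $\frac{I_1+I_3}{I_1-I_3}=1+\frac{\sqrt\lambda\,I_3(\sqrt\lambda)}{2\,I_2(\sqrt\lambda)}$, i.e. the canonical $S$ plus Basso's slope contribution. A quick numerical confirmation in Python (an independent check, not part of the notes):

```python
import math

def bessel_I(n, y, terms=40):
    """Modified Bessel function of the first kind I_n(y) via its Taylor series."""
    return sum((y / 2) ** (n + 2 * k) / (math.factorial(k) * math.factorial(n + k))
               for k in range(terms))

lam = 6.0                       # sample value of the 't Hooft coupling
y = math.sqrt(lam)
I1, I2, I3 = (bessel_I(n, y) for n in (1, 2, 3))

ratio = (I1 + I3) / (I1 - I3)   # coefficient of S in the L = 2 result above
rewritten = 1 + y * I3 / (2 * I2)
print(ratio, rewritten)

# the recurrence itself: I_1(y) - I_3(y) = (4/y) I_2(y)
recurrence_gap = (I1 - I3) - 4 * I2 / y
```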
In order to fix the solution for ${\bf P}_a$ we notice that we
also get
\begin{equation}\la{c21}
c_{2,1}^2=\frac{iS}{g\alpha\beta_3\left(I_1(\sqrt{\lambda})-I_3(\sqrt{\lambda})\right)}\;.
\end{equation}
Even though the constants $\gamma,\beta_1$ and $\beta_2$ did not enter the calculation leading to the dimension $\Delta$, they will still appear in the solution for ${\bf P}_a$. Here we fix them from the reality conditions.
Let us show that $\gamma=\beta_2=0$. First, matching the coefficients of $x$ and $1/x$ in \eq{gluing2} gives
\begin{equation}
\beta_1=-\beta_2 I_0(\sqrt{\lambda})\;\;,\;\;\gamma=-\alpha\beta_2 \frac{c_{2,1}}{\bar c_{2,1}}I_2(\sqrt{\lambda})\;.
\end{equation}
Since $c_{2,1}$ is already fixed \eq{c21} we obtain
\begin{equation}\la{gamma}
\frac{\gamma^2|\beta_3|^2}{|\alpha|^2I^2_2(\sqrt{\lambda})}=-\beta_2^2\bar\beta_3^2
\end{equation}
where the l.h.s. is real and positive. At the same time if we compare the coefficients of $x^2$ and $x^3$ in \eq{gluing2} we get
\begin{equation}\la{c3oc2}
\frac{c_{3,3}}{c_{3,2}}\frac{I_1(\sqrt{\lambda})}{I_2(\sqrt{\lambda})}=\frac{\beta_2}{\beta_3}=\frac{\bar\beta_2}{\bar\beta_3}
\end{equation}
where again the l.h.s. should be real due to the complex conjugation property of ${\bf P}_a$ which allows us to complex conjugate the r.h.s.\footnote{Under the complex conjugation ${\bf P}_a\to e^{i\phi_a}{\bf P}_a$, for some real $\phi_a$. This implies that the ratios $c_{a,n}/c_{a,m}$ are real.}. Squaring both sides of \eq{c3oc2} and multiplying by
\eq{gamma} we obtain
\begin{equation}\la{gamma2}
\frac{\gamma^2|\beta_3|^2}{|\alpha|^2I^2_2(\sqrt{\lambda})}=-\beta_2^2\bar\beta_2^2\;,
\end{equation}
which is only possible if $\beta_2=\gamma=0$.
\chapter{Solving QSC at finite coupling Numerically}
\section{Description of the Method}
In this part of the notes we describe the numerical algorithm and analyze some of the numerical results.
We illustrate the general method initially proposed in \cite{Gromov:2015wca} by considering the same states as in the previous section, ${\rm tr}\, Z D_-^S Z$, i.e. twist-$2$ operators.
First we consider the $S=2$ case -- the Konishi operator.
From the beginning we also impose the parity symmetry which these states have, i.e. the symmetry under $u\to -u$, which is reflected in the parity of the ${\bf P}_a$ functions. The {\it Mathematica} code which we used for this lecture can be found as an ancillary file for the arXiv submission 1504.06640.
Below we describe the main steps and ideas for the numerical procedure.
\begin{figure}[t]
\begin{center}
\def0.6\textwidth{0.6\textwidth}
\input{jumps.pdf_tex}
\end{center}
\caption{\la{jumps}To reconstruct $Q_{a|i}$ at the values of $u\sim 1$ we perform several jumps by $i$ using \eq{Qaijump} into the region on the upper half plane where the asymptotic expansion \eq{expansionQai} is applicable.}
\end{figure}
\begin{itemize}
\item Parameterise the system in terms of the truncated series in $x$ of ${\bf P}_a$ as follows:
\beqa\la{Px}
{\bf P}_a=(x g)^{-\tilde M_a}{\bf p}_a\;\;,\;\;{\bf p}_a=\left(A_a+\sum_{n=1}^\infty\frac{c_{a,n}}{x^{2n}}\right)
\eeqa
where in the code we cut the sum at some finite value \mline{Pcut}. We will see that to get $6$ digits of precision we need \mline{Pcut} as small as $3$ (for the relatively small coupling $g=1/5$). This series converges very well even for $|x|=1$. Note that under the analytic continuation to the next sheet we simply replace $x\to 1/x$, so that
\beqa\la{Ptilde}
\tilde {\bf P}_a=(x/g)^{\tilde M_a}\left(A_a+\sum_{n=1}^\infty{c_{a,n}}{x^{2n}}\right)\;.
\eeqa
\item Given ${\bf P}_a$ in terms of $c_{a,n}$ find $Q_{a|i}(u)$ as a series expansion in large $u$:
\begin{equation}\la{expansionQai}
Q_{a|i}(u)=u^{-\tilde M_a+\hat M_i}\sum_{n=0}^\infty\frac{B_{a,i,n}}{u^{2n}}\;.
\end{equation}
We find the coefficients $B_{a,i,n}$ by plugging the expansion \eq{expansionQai} into the finite difference equation \eq{allQQ0}.
The leading term $B_{a,i,0}$ was found before in \eq{Qkjlargeu}.
Expanding the equation at large $u$ we get a linear system for the coefficients $B_{a,i,n}$.
The series \eq{expansionQai} is asymptotic and works well as long as $u$ is large enough.
In our numerical implementation we keep around $12$ terms.
\item Starting from the expansion \eq{expansionQai}
at large ${\rm Im}u$ we can move down to the real axis
using \eq{allQQ0} in the form
\begin{equation}\la{Qaijump}
Q_{a|i}(u-\tfrac{i}{2})=\left(\delta_{a}^b-{\bf P}_a(u){\bf P}_c(u) \chi^{cb}\right)Q_{b|i}(u+\tfrac{i}{2})\;.
\end{equation}
Applying \eq{Qaijump} recursively we can decrease the imaginary part of $u$ from the asymptotic region down to
finite values of $u$ (see Fig.~\ref{jumps}). We will mostly need the values of $Q_{a|i}$
at ${\rm Im}\;u=1/2$ with $-2g<{\rm Re}\;u<2g$.
\item
Having $Q_{a|i}$ computed we reconstruct ${\bf Q}_i$ and $\tilde{\bf Q}_i$ from
\begin{equation}\la{QfromP}
{\bf Q}_i(u)=Q_{a|i}(u+i/2)\chi^{ab}{\bf P}_b(u)\;\;,\;\;
\tilde{\bf Q}_i(u)=Q_{a|i}(u+i/2)\chi^{ab}\tilde{\bf P}_b(u)
\end{equation}
where ${\bf P}_a$ and $\tilde {\bf P}_a$ are given in terms of $c_{a,n}$ in \eq{Px} and \eq{Ptilde}.
\item Finally, we constrain $c_{a,n}$ from the gluing conditions \eq{gluing}. We will see that it is sufficient to impose only half of them. In our numerical implementation we build a function
\begin{equation}
F(\Delta,c_{a,n},u)=\tilde {\bf Q}_3-\alpha \bar {\bf Q}_1
\end{equation}
and then adjust $\Delta$ and $c_{a,n}$ to minimize $F(\Delta,c_{a,n},u)$ at a set of probe points $u_k\in (-2g,2g)$. For this we use standard numerical optimization methods.
\end{itemize}
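The combination of the second and third steps mirrors a classical trick: evaluate an asymptotic series far away, where it works well, then transport the result to finite argument with an exact finite-difference relation. As a toy analogy (not part of the algorithm), the Python sketch below evaluates $\Gamma(u)$ at small $u$ by applying Stirling's series at $u+20$ and then descending with $\Gamma(v)=\Gamma(v+1)/v$, exactly as \eq{Qaijump} descends from large ${\rm Im}\,u$:

```python
import math

def log_gamma_asymptotic(u, terms=6):
    """Stirling's asymptotic series for log Gamma(u), accurate at large u."""
    # Bernoulli numbers B_2, B_4, ..., B_12 entering Stirling's series
    bern = [1/6, -1/30, 1/42, -1/30, 5/66, -691/2730]
    s = (u - 0.5) * math.log(u) - u + 0.5 * math.log(2 * math.pi)
    for k, b in enumerate(bern[:terms], start=1):
        s += b / (2 * k * (2 * k - 1) * u ** (2 * k - 1))
    return s

def gamma_via_jumps(u, shift=20):
    """Analogue of the Q_{a|i} procedure: evaluate the asymptotic series at
    u + shift, then descend with the exact finite-difference relation
    Gamma(v) = Gamma(v + 1) / v applied `shift` times."""
    val = math.exp(log_gamma_asymptotic(u + shift))
    for n in range(shift - 1, -1, -1):
        val /= (u + n)
    return val

print(gamma_via_jumps(1.5), math.gamma(1.5))
```

Here the number of descents, $20$, plays the same role as the parameter \mline{shiftQai} in the implementation.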
In the next section we give more details about the {\it Mathematica} implementation of our method.
\section{Implementation in {\it Mathematica}}
The {\it Mathematica} notebook we describe below, with slight improvements,
can be downloaded from arXiv \cite{Gromov:2015wca}.
First we make basic definitions. We define $x(u)$ in such a way as to ensure
that it has only one cut, $[-2g,2g]$
\begin{mathematica}
X[u_] = (u + g*Sqrt[u/g - 2]*Sqrt[u/g + 2])/(2*g);
chi = {{0, 0, 0, -1}, {0, 0, 1, 0}, {0, -1, 0, 0},
{1, 0, 0, 0}};
\end{mathematica}
\begin{exercise}
What is the branch cut structure of the naive definition\\ \mline{X[u_]=(u+Sqrt[u^2-4g^2])/(2g)}? Consider also the case of complex $g$.
\end{exercise}
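One way to see the difference for real $g$: with principal square roots, $\sqrt{u^2-4g^2}$ has cuts both on $[-2g,2g]$ and along the whole imaginary axis (wherever $u^2-4g^2$ is negative real), while the product form above is continuous away from $[-2g,2g]$. A short numerical illustration in Python (not part of the notebook):

```python
import cmath

g = 0.2  # same coupling as in the notebook

def x_good(u):
    """The definition used above: single cut on [-2g, 2g]."""
    return (u + g * cmath.sqrt(u / g - 2) * cmath.sqrt(u / g + 2)) / (2 * g)

def x_naive(u):
    """The naive definition from the exercise."""
    return (u + cmath.sqrt(u * u - 4 * g * g)) / (2 * g)

eps = 1e-9
# crossing the imaginary axis at u = i, far from [-2g, 2g]:
jump_good = abs(x_good(1j + eps) - x_good(1j - eps))
jump_naive = abs(x_naive(1j + eps) - x_naive(1j - eps))
print(jump_good, jump_naive)  # the naive definition jumps across the imaginary axis

# both definitions share the physical cut on [-2g, 2g]:
cut_good = abs(x_good(0.1 + 1j * eps) - x_good(0.1 - 1j * eps))
print(cut_good)
```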
Next we define $\hat M$ and $\tilde M$ as in \eq{relMta}. We also specialize to the Konishi operator in the $sl(2)$ sector with $J_1=S=2$. The variable \mline{d} denotes the full dimension $\Delta$.
\begin{mathematica}
J1 = 2; J2 = 0; J3 = 0; S1 = 2; S2 = 0;
Mt = {(J1+J2-J3+2)/2,(J1-J2+J3)/2,(-J1+J2+J3+2)/2,(-J1-J2-J3)/2}
Mh = {(d-S1-S2+2)/2,(d+S1+S2)/2,(-d-S1+S2+2)/2,(-d+S1-S2)/2}
powp = -Mt;
powq = Mh - 1;
(*setting the value for the coupling*)
g = 1/5;
\end{mathematica}
The variables \mline{powp} and \mline{powq} give the powers of ${\bf P}_a$ and ${\bf Q}_i$.
We also set the coupling to a particular value $g=1/5$.
\paragraph*{Parameters} There are several parameters which are responsible for the precision of the result.
\begin{mathematica}
cutP = 3;(* number of terms we keep in the expansion of P_a *)
cutQai = 12;(* number of powers in expansion of Qai at large u *)
shiftQai = 20; (* Number of jumps from asymptotic region *)
WP = 50;(* Working precision *)
PO = 12;(* Number of the sampling points on the cut to use *)
\end{mathematica}
\paragraph*{Ansatz for ${\bf P}_a$ and parameters of the problem}
The set of ${\bf p}_a$ from \eq{Px} we define as follows
\begin{mathematica}
ps = {A[1] + I*Sum[c[1, n]/x^(2*n), {n, cutP}],
A[2] + I*Sum[c[2, n]/x^(2*n), {n, cutP}],
A[3] + Sum[c[3, n]/x^(2*n), {n, cutP}],
A[4] + Sum[c[4, n - 1]/x^(2*n), {n, 2, cutP + 1}]};
\end{mathematica}
Note that we set the first sub-leading coefficient in ${\bf P}_4$ to zero;
this is always possible due to the residual symmetry (see footnote \ref{rescaling}).
Whereas the coefficients $c_{a,n}$ will serve as parameters in the optimization problem, the leading coefficients $A_a$ are fixed in terms of the quantum numbers of the state via \eq{AAsl2}
\begin{mathematica}
A[1]=-I Product[(Mt[[1]]-Mh[[j]])/If[j==1,1,Mt[[1]]-Mt[[j]]],{j,4}]
A[2]=+I Product[(Mt[[2]]-Mh[[j]])/If[j==2,1,Mt[[2]]-Mt[[j]]],{j,4}]
A[3] = 1; A[4] = 1;
\end{mathematica}
Similarly we code the leading coefficients of ${\bf Q}_i$ and $Q_{a|i}$
\begin{mathematica}
B[1]=-I Product[(Mh[[1]]-Mt[[j]])/If[j==1,1,Mh[[1]]-Mh[[j]]],{j,4}]
B[2]=+I Product[(Mh[[2]]-Mt[[j]])/If[j==2,1,Mh[[2]]-Mh[[j]]],{j,4}]
B[3] = 1; B[4] = 1;
(* leading order coefficients in Q_ai *)
Do[B[a,i,0] = -I(A[a]B[i])/(powq[[i]]+powp[[a]]+1),{a,4},{i,4}]
\end{mathematica}
The whole Q-system, which we are partially going to reconstruct,
is thus parameterized by the set of $c_{a,n}$ and $d$. The substitution rule \mline{sb} replaces these variables by their values stored in the list \mline{params}
\begin{mathematica}
prm := {d}~Join~Flatten[Table[c[i, n], {i, 4}, {n, cutP}]];
sb := Rule @@@ (Transpose[{prm, SetPrecision[params, WP]}])
\end{mathematica}
We will update the list \mline{params} at each iteration with a better approximation. Since we are going to solve the system with a Newton-like method,
which is very sensitive to the starting point, one should roughly know where to look for the solution. A perturbative solution, available in some cases, is a good starting point, but sometimes even a very rough estimate of $d$ and a few of the first coefficients will lead to a convergent procedure. For \mline{cutP}$=3$ we need in total $1+4\times 3=13$ parameters.
\paragraph*{Finding $Q_{a|i}$ at large $u$}
Having finished with defining the basics we can finally accomplish the first step in the algorithm -- find the large $u$ expansion of $Q_{a|i}$ in the form \eq{expansionQai}. First we re-expand ${\bf P}_a$ at large $u$:
\begin{mathematica}
psu = Series[(g x/u)^powp ps/.x->X[u],{u,Infinity,cutQai+2}];
\end{mathematica}
Next we define separately the non-integer power $u^{-\tilde M_a+\hat M_i}$
and the series in negative powers of $u$ from \eq{expansionQai}
\begin{mathematica}
qaipow = Table[u^powq[[i]]*u^powp[[a]]*u, {a, 4}, {i, 4}];
Bpart = Table[Sum[B[a,i,n]/u^(2*n),{n,0,cutQai/2}],{a,4},{i,4}];
\end{mathematica}
For optimization purposes we pre-expand these parts of the expansion separately with shifts $u\to u\pm i/2$
\begin{mathematica}
powP=Series[(qaipow/.u->u+I/2)/qaipow/.(u+a_)^(b_):>u^b*(1+a/u)^b
, {u, Infinity, cutQai + 2}];
powM=Series[(qaipow/.u->u-I/2)/qaipow/.(u+a_)^(b_):>u^b*(1+a/u)^b
, {u, Infinity, cutQai + 2}];
BpartP = Series[Bpart /. u -> u + I/2, {u, Infinity, cutQai + 2}];
BpartM = Series[Bpart /. u -> u - I/2, {u, Infinity, cutQai + 2}];
\end{mathematica}
Finally we code the function which computes the coefficients $B_{a,i,n}$
\begin{mathematica}
FindQlarge := Block[{},
PP=Series[KroneckerProduct[psu,chi.psu],{u,Infinity,cutQai+2}]/.sb;
eqs=ExpandAll[Series[Normal[
(BpartP/.sb)*powP-(BpartM/.sb)*powM+(1/u)*PP.(BpartP*powP) /. sb]
,{u,Infinity,cutQai+2}]];
slB = Last[Solve[LogicalExpand[eqs == 0]]];
Qailarge = qaipow*Bpart/.slB/.sb]
\end{mathematica}
The function computes the expansion and stores it in the variable \mline{Qailarge}.
\paragraph*{Finding $Q_{a|i},\;{\bf Q}_i$ and $\tilde{{\bf Q}}_i$ on the real axis}
In order to impose the gluing conditions on the Zhukovsky branch cut \eq{gluing} we will use a set of sampling points (\mline{points}), chosen so that their density increases near the ends of the interval $[-2g,2g]$ to guarantee maximal efficiency (we use Chebyshev nodes).
\begin{mathematica}
points = N[Table[-2*g*Cos[Pi*((n - 1/2)/PO)], {n, PO}], WP];
\end{mathematica}
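For readers less familiar with {\it Mathematica}, the same Chebyshev nodes in Python (an equivalent one-liner, shown for illustration):

```python
import math

g, PO = 0.2, 12  # same values as in the notebook (g = 1/5)
# Chebyshev nodes on [-2g, 2g]: denser near the branch points
points = [-2 * g * math.cos(math.pi * (n - 0.5) / PO) for n in range(1, PO + 1)]

edge_gap = points[1] - points[0]                    # spacing near a branch point
centre_gap = points[PO // 2] - points[PO // 2 - 1]  # spacing near the middle of the cut
print(edge_gap, centre_gap)
```

The spacing near the endpoints is visibly smaller than in the middle, which is what makes these nodes efficient for resolving the cut.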
Now for each of the sampling points we have to climb up to the asymptotic region using \eq{Qaijump}.
\begin{mathematica}
SolveQPP[n0_] := Block[{}, Clear[Qai, PP, PS];
PS[uu_] := PS[uu] = SetPrecision[Expand[(x*g)^powp*ps/.sb]
/.x^(a_.)->X[uu]^a /. sb, WP];
PP[(uu_)?NumericQ] := PP[uu] = IdentityMatrix[4]+
KroneckerProduct[PS[uu], chi.PS[uu]];
Qai[n0][uu_] = Qailarge /. u -> uu + I*n0 - I/2;
Qai[n_][u_] := Qai[n][u] = SetPrecision[PP[u+I*n].Qai[n+1][u],WP];
Qaiplist = Table[Qai[1][p], {p, points}]];
\end{mathematica}
This function creates \mline{Qaiplist} which contains values of $Q_{a|i}$
at the sampling points. This allows us to compute ${\bf Q}_i$ using simple matrix multiplication via \eq{QfromP}
\begin{mathematica}
DoQlist := Block[{},
Qilist = Transpose[Table[((x*g)^powp*ps/.x->X[u]/.sb
/.u->points[[i]]).chi .Qaiplist[[i]], {i, PO}]];
Qitlist = Transpose[Table[((x*g)^powp*ps/.x->1/X[u]/.sb
/.u->points[[i]]).chi.Qaiplist[[i]], {i, PO}]];];
\end{mathematica}
Now that we have the values of ${\bf Q}_i$ and $\tilde {\bf Q}_i$
we can define the function $F$, which depends on the parameters $d,c_{a,n}$ and computes the mismatch in the gluing condition
at the sampling points.
\begin{mathematica}
F[Plist_List] := (F[Plist] = Block[{}, Print[Plist];
params = Plist;
FindQlarge;
SolveQPP[shiftQai];
DoQlist;
C1list = Qilist[[1]]/Conjugate[Qilist[[3]]];
C2list = Qitlist[[1]]/Conjugate[Qitlist[[3]]];
c = Mean[Join[C1list, C2list]];
Flatten[{Re[{C1list-c, C2list-c}/c], Im[{C1list-c, C2list-c}/c]}]]
)/; NumericQ[Total[Plist]];
\end{mathematica}
Finally, we have to tune the values of the parameters
so that the square of the function $F$ is minimized.
\begin{mathematica}
(*setting the starting configuration*)
params0 = SetPrecision[{4.5,0,0,0,-1,0,0,1,0,0,0,0,0}, WP];
(*finding optimal parameters*)
FindMinimum[(1/2)*F[prm].F[prm],
Transpose[{prm, params0}],
Method -> {"LevenbergMarquardt", "Residual" -> F[prm]},
WorkingPrecision -> 30,
AccuracyGoal -> 7]
\end{mathematica}
The built-in function \mline{FindMinimum} is rather slow and takes around $10$ minutes to run. It is much better to use the implementation from the
notebook attached to the arXiv submission \cite{Gromov:2015wca}, which uses parallel computing and gives the result in about $1$ minute. It is possible to improve the performance of the above basic code further by roughly a factor of $10$--$100$, but that would also make it more cumbersome.
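The final step is a standard nonlinear least-squares problem: a residual vector sampled at a few points, squared and minimized over the parameters. The Python toy below (purely illustrative: a hand-rolled Gauss-Newton step stands in for \mline{FindMinimum}'s Levenberg-Marquardt, and a hypothetical two-parameter model replaces the QSC residuals) shows the same pattern:

```python
import math

# sampling points, playing the role of the probe points on the cut
points = [0.1 * k for k in range(1, 9)]

def residual(p):
    """Mismatch between a model a*exp(b*u) and target data 2*exp(-u)."""
    a, b = p
    return [a * math.exp(b * u) - 2 * math.exp(-u) for u in points]

def gauss_newton(p, steps=30, h=1e-7):
    """Minimize |residual(p)|^2 with Gauss-Newton and a numerical Jacobian."""
    for _ in range(steps):
        r = residual(p)
        J = []
        for j in range(len(p)):
            q = list(p)
            q[j] += h
            rq = residual(q)
            J.append([(rq[i] - r[i]) / h for i in range(len(r))])
        # 2x2 normal equations A dp = rhs, solved by Cramer's rule
        A = [[sum(J[m][i] * J[n][i] for i in range(len(r))) for n in range(2)]
             for m in range(2)]
        rhs = [-sum(J[m][i] * r[i] for i in range(len(r))) for m in range(2)]
        det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
        dp = [(rhs[0] * A[1][1] - A[0][1] * rhs[1]) / det,
              (A[0][0] * rhs[1] - rhs[0] * A[1][0]) / det]
        p = [p[i] + dp[i] for i in range(2)]
    return p

a, b = gauss_newton([1.0, 0.0])  # rough starting point, as in the text
print(a, b)
```

As in the QSC case, the method converges only from a reasonable starting point; here the fit recovers the exact parameters $a=2$, $b=-1$.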
\begin{exercise}
Use the above code to get the dimension of the Konishi operator
at $g=1/5$. Compare your result with the high precision evaluation $\Delta=4.4188598808023509$ taken from \cite{Gromov:2015wca}.
\end{exercise}
\begin{exercise}
Use the result for $g=1/5$ as a starting point to compute $g=3/10$.
You should get $\Delta=4.826949$. Note that the convergence radius of the
perturbation theory is $|g_*|=1/4$\footnote{The finite convergence radius of the perturbation theory is due to the branch-cut singularity of the spectrum at $g_*=\pm i/4$. This is the value of the coupling at which the branch points of the Zhukovsky cuts $2g+i n$ and $-2g+i n\pm i$ coincide.}, so this value is already outside the range accessible with perturbation theory.
\end{exercise}
\begin{exercise}
Check that the same code will work perfectly for non-integer values of the Lorentz spin $S$. Analytic continuation in the spin is very important for the BFKL applications \cite{Alfimov:2014bwa,Gromov:2015vua}. Try to change $S=S_1$ gradually until it reaches $S=3/2$ for $g=2/10$. You should get $\Delta=3.85815$. Verify numerically \eq{gluingF} and show that $\gamma\simeq 0.0030371$ is indeed a real constant and that $\beta=\beta_1+\beta_2\cosh(2\pi u)+\beta_3\sinh(2\pi u)$ for some constants $\beta_k$.
\end{exercise}
\chapter{Applications, Further Reading and Open Questions}
In this section we attempt to cover most of the recent applications of the QSC methods and offer some open questions.
\paragraph*{QSC for ABJ(M) Theory} The QSC was also developed for ABJ(M) theory
(which is a 3D ${\mathcal N}=6$ Chern-Simons theory) in \cite{Cavaglia:2014exa,Bombardelli:2017vhk}.
A nontrivial feature of this theory is that the positions of the branch points are related to the 't~Hooft coupling through a nontrivial interpolation function $h(\lambda)$. By comparing the results of localization with
the analytic calculation of the slope function (similar to what we did in Section~\ref{slope}), it was possible to obtain an expression for the interpolation function for ABJM theory~\cite{Cavaglia:2016ide} and for the more general ABJ theory~\cite{Gromov:2014eha}. A detailed proof of these expressions is still an open question and would likely require the QSC formulation for the cusped Wilson line in these theories, which is not known yet.
\paragraph*{QSC for Wilson Line with a Cusp}
The anomalous dimension of the Maldacena-Wilson line with a cusp was shown to be integrable in \cite{Drukker:2012de,Correa:2012hh}. In \cite{Gromov:2015dfa} the QSC construction for this observable was formulated, which allowed for precise numerical analysis and non-perturbative analytic results. In \cite{Gromov:2016rrp}, by taking an appropriate limit of the cusp anomalous dimension, the potential between a heavy quark and anti-quark was studied in detail with the help of the QSC.
\paragraph*{QSC for High Order Perturbative Expansion}
The QSC method allows for a very efficient analytic perturbative expansion. A very nice and powerful method for the $sl(2)$ sector was developed in~\cite{Marboe:2014gma}, allowing the computation of $10$-loop analytic coefficients on a standard laptop in just $3$ hours. An alternative method, which can be applied in more general situations, was developed in~\cite{Gromov:2015vua}. In~\cite{Marboe:2017dmb} the project of creating a database of perturbative expansions of low-lying anomalous dimensions was initiated.
\paragraph*{QSC for QCD Pomeron} As we discuss in Section~\ref{sec:cont}, the QSC enables a very simple analytic continuation in quantum numbers such as the Lorentz spin $S$.
As was explained in~\cite{Kotikov:2007cy}, one can approach a regime
where ${\mathcal N}=4$ SYM becomes similar to QCD. This regime can also be studied with the QSC~\cite{Alfimov:2014bwa}. In particular, the most complicated, highest-transcendentality parts of the planar QCD result
at $3$ loops were obtained for the first time
in~\cite{Gromov:2015vua} by using the QSC. This was later confirmed by an independent calculation in~\cite{Caron-Huot:2016tzz}.
\paragraph*{QSC for Deformations of ${\mathcal N}=4$ SYM}
${\mathcal N}=4$ SYM admits numerous deformations. Some of them are
analogous to the twists we discussed in Section~\ref{sec:twist} and can be easily introduced into the QSC formalism simply by modifying the asymptotics of the Q-functions; for some examples see~\cite{Gromov:2015dfa,Kazakov:2015efa}. Another deformation is
the $\eta$-deformation~\cite{Arutyunov:2013ega}, which most likely can be described by the QSC as well, by replacing the single cut in the ${\bf P}_a$ functions with a periodised set of cuts\footnote{This case was considered very recently in~\cite{Klabbers:2017vtw}.}.
\paragraph*{QSC for Fishnet Graphs}
In the limit when one of the twist parameters becomes large and the 't Hooft coupling simultaneously scales to zero, one gets a significant simplification
in the perturbation theory, which becomes dominated by ``fishnet'' scalar graphs. This limit was first considered for the cusp anomalous dimension in~\cite{Correa:2012hh,Erickson:1999qv}, and it was possible to reproduce the result analytically from the QSC.
A more systematic study of the ``fishnet'' limit of ${\mathcal N}=4$ SYM
was initiated in~\cite{Gurdogan:2015csr}, where it was demonstrated that
many more observables can be studied by considering a special type of diagram. In~\cite{Gromov:2017cja} it was shown how the QSC methods can be used to evaluate these types of Feynman graphs.
\paragraph*{Open Questions} Even though a number of longstanding problems were resolved with the help of the QSC there are still a number of open questions which could potentially be solved using the QSC. Some of them are likely to be solved soon, others may never be solved. Below we give an incomplete list of such problems, focusing on those more likely to be solved before the next ice age.
It would be very useful to be able to extract the strong coupling expansion of the spectrum analytically from the QSC. Some first steps were taken in~\cite{Hegedus:2016eop}.
The structure of the QSC is very constraining, and at the moment we only
know two QSCs, for ${\mathcal N}=4$ SYM and ABJ(M). It would be useful to make a complete classification of the QSCs starting from the symmetry group.
In this way one should find the QSC for $AdS_3/CFT_2$ and also possibly for a mysterious $6D$ theory -- a mother theory of 6D integrable fishnet graphs. Similarly, different asymptotics and gluing conditions represent different observables in ${\mathcal N}=4$ SYM; it would be
useful to have a complete classification of such asymptotics and gluing conditions.
For more mathematically oriented readers there is the question of
proving the existence/countability of the solutions of the QSC.
A big open conceptual question is how
to derive the QSC from the gauge theory perspective, without reference to the AdS/CFT correspondence, as this would allow
us to prove AdS/CFT to some extent by taking the classical limit of the QSC and deriving the Green-Schwarz classical spectral curve.
Some of the problems which are within immediate reach include:
studying the odderon dimension in a way similar to the BFKL pomeron~\cite{Gromov:2015vua} (for the setting see~\cite{Brower:2014wha});
constructing the QSC for the recently proposed integrability framework for the
Hagedorn phase transition~\cite{Harmark:2017yrv}, which would enable an analytic weak coupling expansion and numerical analysis for this observable;
and integrable boundary problems with
non-diagonal twist, like those recently considered in~\cite{Guica:2017mtd},
which could most likely be treated in a way similar to~\cite{Gromov:2016rrp}. The latter problem also seems to be related to the
problem of finding the spectrum of tachyons~\cite{Bajnok:2013wsa}, which is another place where a QSC reformulation could help to advance further.
A more complicated but very important problem is to extend the QSC formalism to the computation of $n$-point correlation functions.
The existing, beautiful integrability-based hexagon formalism~\cite{Basso:2015zoa} should give important hints on re-summing wrapping corrections. This problem seems to be linked to the problem of finding separated variables in AdS/CFT; for some first steps at weak coupling see~\cite{Gromov:2016itr}. The
one-point functions of~\cite{Buhl-Mortensen:2015gfd} could be the perfect framework for developing a new QSC-based formalism for the correlators.
See also~\cite{Buhl-Mortensen:2017ind} for more exotic observables which could also potentially be governed by integrability.
Finally, the other main open questions are whether we could also use integrability to obtain non-planar corrections and to get closer to real-world QCD.
\\{ }\\
If you have questions, please feel free to email to \url{[email protected]}. You are also welcome to email any answers to the above questions to \url{[email protected]}!
\thebibliography{0}
\bibitem{Gromov:2007aq}
N.~Gromov and P.~Vieira,
``The AdS(5) x S**5 superstring quantum spectrum from the algebraic curve,''
Nucl.\ Phys.\ B {\bf 789} (2008) 175
doi:10.1016/j.nuclphysb.2007.07.032
[hep-th/0703191 [HEP-TH]].
\bibitem{Gromov:2009tv}
N.~Gromov, V.~Kazakov and P.~Vieira,
``Exact Spectrum of Anomalous Dimensions of Planar N=4 Supersymmetric Yang-Mills Theory,''
Phys.\ Rev.\ Lett.\ {\bf 103} (2009) 131601
doi:10.1103/PhysRevLett.103.131601
[arXiv:0901.3753 [hep-th]].
\bibitem{Gromov:2009bc}
N.~Gromov, V.~Kazakov, A.~Kozak and P.~Vieira,
``Exact Spectrum of Anomalous Dimensions of Planar N = 4 Supersymmetric Yang-Mills Theory: TBA and excited states,''
Lett.\ Math.\ Phys.\ {\bf 91} (2010) 265
doi:10.1007/s11005-010-0374-8
[arXiv:0902.4458 [hep-th]].
\bibitem{Gromov:2009zb}
N.~Gromov, V.~Kazakov and P.~Vieira,
``Exact Spectrum of Planar ${\mathcal N}=4$ Supersymmetric Yang-Mills Theory: Konishi Dimension at Any Coupling,''
Phys.\ Rev.\ Lett.\ {\bf 104} (2010) 211601
doi:10.1103/PhysRevLett.104.211601
[arXiv:0906.4240 [hep-th]].
\bibitem{Gromov:2009tq}
N.~Gromov,
``Y-system and Quasi-Classical Strings,''
JHEP {\bf 1001} (2010) 112
doi:10.1007/JHEP01(2010)112
[arXiv:0910.3608 [hep-th]].
\bibitem{Beisert:2010jr}
N.~Beisert {\it et al.},
``Review of AdS/CFT Integrability: An Overview,''
Lett.\ Math.\ Phys.\ {\bf 99}, 3 (2012)
doi:10.1007/s11005-011-0529-2
[arXiv:1012.3982 [hep-th]].
\bibitem{Gromov:2011cx}
N.~Gromov, V.~Kazakov, S.~Leurent and D.~Volin,
``Solving the AdS/CFT Y-system,''
JHEP {\bf 1207}, 023 (2012)
doi:10.1007/JHEP07(2012)023
[arXiv:1110.0562 [hep-th]].
\bibitem{Gromov:2013pga}
N.~Gromov, V.~Kazakov, S.~Leurent and D.~Volin,
``Quantum Spectral Curve for Planar $\mathcal{N} =$ Super-Yang-Mills Theory,''
Phys.\ Rev.\ Lett.\ {\bf 112}, no. 1, 011602 (2014)
doi:10.1103/PhysRevLett.112.011602
[arXiv:1305.1939 [hep-th]].
\bibitem{Gromov:2014bva}
N.~Gromov, F.~Levkovich-Maslyuk, G.~Sizov and S.~Valatka,
``Quantum spectral curve at work: from small spin to strong coupling in $ \mathcal{N} $ = 4 SYM,''
JHEP {\bf 1407}, 156 (2014)
doi:10.1007/JHEP07(2014)156
[arXiv:1402.0871 [hep-th]].
\bibitem{Cavaglia:2014exa}
A.~Cavaglià, D.~Fioravanti, N.~Gromov and R.~Tateo,
``Quantum Spectral Curve of the $\mathcal N=$ 6 Supersymmetric Chern-Simons Theory,''
Phys.\ Rev.\ Lett.\ {\bf 113}, no. 2, 021601 (2014)
doi:10.1103/PhysRevLett.113.021601
[arXiv:1403.1859 [hep-th]].
\bibitem{Gromov:2014caa}
N.~Gromov, V.~Kazakov, S.~Leurent and D.~Volin,
``Quantum spectral curve for arbitrary state/operator in AdS$_{5}$/CFT$_{4}$,''
JHEP {\bf 1509}, 187 (2015)
doi:10.1007/JHEP09(2015)187
[arXiv:1405.4857 [hep-th]].
\bibitem{Alfimov:2014bwa}
M.~Alfimov, N.~Gromov and V.~Kazakov,
``QCD Pomeron from AdS/CFT Quantum Spectral Curve,''
JHEP {\bf 1507}, 164 (2015)
doi:10.1007/JHEP07(2015)164
[arXiv:1408.2530 [hep-th]].
\bibitem{Gromov:2015wca}
N.~Gromov, F.~Levkovich-Maslyuk and G.~Sizov,
``Quantum Spectral Curve and the Numerical Solution of the Spectral Problem in AdS5/CFT4,''
JHEP {\bf 1606}, 036 (2016)
doi:10.1007/JHEP06(2016)036
[arXiv:1504.06640 [hep-th]].
\bibitem{Gromov:2015vua}
N.~Gromov, F.~Levkovich-Maslyuk and G.~Sizov,
``Pomeron Eigenvalue at Three Loops in $\mathcal N=$ 4 Supersymmetric Yang-Mills Theory,''
Phys.\ Rev.\ Lett.\ {\bf 115}, no. 25, 251601 (2015)
doi:10.1103/PhysRevLett.115.251601
[arXiv:1507.04010 [hep-th]].
\bibitem{Gromov:2015dfa}
N.~Gromov and F.~Levkovich-Maslyuk,
``Quantum Spectral Curve for a cusped Wilson line in $ \mathcal{N}=4 $ SYM,''
JHEP {\bf 1604}, 134 (2016)
doi:10.1007/JHEP04(2016)134
[arXiv:1510.02098 [hep-th]].
\bibitem{Gromov:2016rrp}
N.~Gromov and F.~Levkovich-Maslyuk,
``Quark-anti-quark potential in $ \mathcal{N} =$ 4 SYM,''
JHEP {\bf 1612}, 122 (2016)
doi:10.1007/JHEP12(2016)122
[arXiv:1601.05679 [hep-th]].
\bibitem{Bombardelli:2017vhk}
D.~Bombardelli, A.~Cavaglià, D.~Fioravanti, N.~Gromov and R.~Tateo,
``The full Quantum Spectral Curve for $AdS_4/CFT_3$,''
arXiv:1701.00473 [hep-th].
\bibitem{Bombardelli:2016rwb}
D.~Bombardelli {\it et al.},
``An integrability primer for the gauge-gravity correspondence: An introduction,''
J.\ Phys.\ A {\bf 49} (2016) no.32, 320301
doi:10.1088/1751-8113/49/32/320301
[arXiv:1606.02945 [hep-th]].
\bibitem{Beisert:2005tm}
N.~Beisert,
``The SU(2|2) dynamic S-matrix,''
Adv.\ Theor.\ Math.\ Phys.\ {\bf 12} (2008) 945
doi:10.4310/ATMP.2008.v12.n5.a1
[hep-th/0511082].
\bibitem{Janik:2006dc}
R.~A.~Janik,
``The AdS(5) x S**5 superstring worldsheet S-matrix and crossing symmetry,''
Phys.\ Rev.\ D {\bf 73}, 086006 (2006)
doi:10.1103/PhysRevD.73.086006
[hep-th/0603038].
\bibitem{Beisert:2006ez}
N.~Beisert, B.~Eden and M.~Staudacher,
``Transcendentality and Crossing,''
J.\ Stat.\ Mech.\ {\bf 0701}, P01021 (2007)
doi:10.1088/1742-5468/2007/01/P01021
[hep-th/0610251].
\bibitem{Ambjorn:2005wa}
J.~Ambjorn, R.~A.~Janik and C.~Kristjansen,
``Wrapping interactions and a new source of corrections to the spin-chain/string duality,''
Nucl.\ Phys.\ B {\bf 736} (2006) 288
doi:10.1016/j.nuclphysb.2005.12.007
[hep-th/0510171].
\bibitem{Cavaglia:2010nm}
A.~Cavaglia, D.~Fioravanti and R.~Tateo,
``Extended Y-system for the $AdS_5/CFT_4$ correspondence,''
Nucl.\ Phys.\ B {\bf 843} (2011) 302
doi:10.1016/j.nuclphysb.2010.09.015
[arXiv:1005.3016 [hep-th]].
\bibitem{Bombardelli:2009ns}
D.~Bombardelli, D.~Fioravanti and R.~Tateo,
``Thermodynamic Bethe Ansatz for planar AdS/CFT: A Proposal,''
\endthebibliography
\end{document}
\section{Introduction}
Recently, the wireless-based localization has attracted great attention
due to its adaptability to the existing wireless infrastructure and
capacity for assisting communications \cite{survey_localization1,survey_localization2}.
It has been widely employed in multisensory extended reality (XR)
\cite{XR}, smart transportation \cite{smarttrans1,smarttrans2},
and connected robotics and autonomous systems (CRAS) \cite{CRAS}.
These applications generally require submeter-level localization accuracy
and even centimeter-level localization accuracy. Initially, the global
navigation satellite system (GNSS) has been employed to provide location
services, but it has low accuracy in indoor environments \cite{GPS}.
Thus, the cellular-based localization and wireless local area networks
(WLAN)-based localization have been developed as alternatives to GNSS
\cite{wireless_loc1,wireless_locTOA2,wireless_locTOA3,Wifi_local1}.
These approaches estimate the positions of agents by utilizing the
features inferred from the radio frequency (RF) signals, which include
time of arrival (TOA), angle of arrival (AoA), angle of departure
(AoD), time difference of arrival (TDOA), and received signal strength
(RSS). In particular, the TOA-based localization is a widely studied
wireless localization method \cite{wireless_locTOA2,wireless_locTOA3,Wifi_local1,wireless_locTOA4,wireless_locTOA5},
which estimates the distance by multiplying the delay of the line-of-sight
(LoS) path with the light speed.
However, the estimation accuracy of the multipath channel delays is
limited by the bandwidth of the transmitted signal. To address this
issue, the multiband delay estimation schemes have been proposed in
\cite{XuHuilin1,nsdi,CS2019,CS2020,ESPRIT1,ESPRIT2,MUSIC_1,LTESAGE},
which make use of the channel state information (CSI) measurements
across multiple frequency bands to obtain high accuracy delay estimation.
Compared to single band delay estimation, multiband delay estimation
can obtain extra multiband gains, which consist of two parts: (i)
a subcarrier apertures gain, since multiband CSI samples provide more
subcarriers; (ii) a frequency band apertures gain brought by the difference
of carrier frequencies between subbands \cite{ESPRIT2}. The subcarrier apertures
and the frequency band apertures are shown in Fig. \ref{fig:The-distribution-of}.
As can be seen, the spectrum resource used for localization is non-contiguous,
which consists of a number of subbands. The green regions are the frequency
subbands allocated to other applications (e.g., wireless communication)
and thus cannot be utilized for localization. Despite the existence
of the apertures gains, the multiband delay estimation method still
faces new challenges. One challenge is the phase distortion in the
channel frequency response (CFR) samples caused by hardware imperfections
\cite{nonidealfoctor1,nonidealfoctor2,nonidealfoctor3}, which has
a severe effect on delay estimation and needs to be calibrated. The
other challenge is that the high frequency carrier in the CFR samples
leads to severe oscillation of the likelihood function,
which causes numerous bad local optimums \cite{ESPRIT1,ESPRIT2}.
Therefore, it is very difficult to exploit the frequency band apertures
gain. Some related works are summarized below.
\begin{figure}[t]
\centering{}\includegraphics[width=10cm]{band_distribution2}\caption{\label{fig:The-distribution-of}An illustration of subcarrier and
frequency band apertures.}
\end{figure}
\textbf{Maximum likelihood (ML) based methods:} The traditional approaches
to estimate the delay parameters are ML based estimation methods,
of which the space-alternating generalized expectation-maximization
(SAGE) algorithm stands out for its versatility and robustness in
harsh multipath environments \cite{SAGE_base}. In \cite{LTESAGE},
a multiband TOA estimation method has been carried out by the SAGE
algorithm for long term evolution (LTE) downlink systems, which provides
a reduced standard deviation for the delay estimation. However, the
authors in \cite{ML_localoptimun} pointed out that the ML based methods
tend to converge to a local optimum and need to find the global optimum
in a large searching space. To reduce the computational complexity,
the authors in \cite{XuHuilin1} proposed a low-complexity approach,
which recovers the channel impulse response (CIR) with equally spaced
taps and approximately estimates the first path by mitigating the
energy leakage. However, this method estimates the delay of the first
path coarsely and results in a limited performance improvement.
\textbf{Subspace based estimation methods:} There have been works
attempting to solve the multiband delay estimation problems by using
subspace based methods, such as multiple signal classification (MUSIC)
\cite{MUSIC_1} and estimating signal parameters via rotational invariance
techniques (ESPRIT) \cite{ESPRIT1,ESPRIT2}. In \cite{MUSIC_1}, the
classical MUSIC algorithm has been applied to delay estimation. However,
this approach simply exploits the subcarrier apertures gain while
the frequency band apertures gain is not exploited, which results
in a performance loss. The authors in \cite{ESPRIT1,ESPRIT2} employed
the multiple shift-invariance structure in the multiband channel measurements
and thus the proposed algorithm achieves a high accuracy delay estimation.
Nevertheless, the subcarrier spacing of different subbands is assumed
to be equivalent, which restricts its applications for practical systems.
Moreover, the subspace based methods generally require multiple snapshots
of orthogonal frequency division multiplexing (OFDM) pilot symbols
to guarantee the performance \cite{MUSICbase}, which consumes lots
of pilot resources.
\textbf{Compressed sensing (CS) based methods:} In an indoor environment,
the CIR is sparse since it consists of only a small number of paths.
Motivated by the sparsity of CIR over the delay domain, many state-of-the-art
delay estimation algorithms have been proposed based on CS methods
\cite{nsdi,CS2019,CS2020}. In \cite{nsdi,CS2019}, the authors formulated
$l_{1}$-norm minimization problems for capturing the signal sparsity.
In \cite{CS2020}, orthogonal matching pursuit (OMP) methods have
been used for recovering the sparse CIR. However, these approaches
have the problem of energy leakage resulting from basis mismatch \cite{FDD2D},
and thus require dense grids, leading to high computational complexity.
In the aforementioned studies, on one hand, the accuracy of most algorithms
is limited, since the associated multi-parameter estimation problem
contains many bad local optimums caused by high frequency carrier
terms and frequency band apertures. Though some other works have eliminated
the oscillation, they have not exploited all apertures gains in the
multiband CSI measurements \cite{nsdi,MUSIC_1}, e.g., most works
only exploit the subcarrier apertures gain due to the difficulty of
exploiting the frequency band apertures gain. On the other hand, many
works have not considered the imperfect phase distortion factors caused
by receiver timing offset or phase noise in practical systems \cite{XuHuilin1,ESPRIT1,MUSIC_1,XuHuilin2}.
Though some studies in \cite{nsdi,CS2020,ESPRIT2} have considered
the phase distortion, the calibration of the phase distortions with
extra information is required via a handshaking procedure under the
assumption of channel reciprocity. However, this assumption is restrictive
and the transceiver needs to have the ability of Tx/Rx switching.
Furthermore, during the handshaking procedure, the phase-locked loop (PLL)
must remain in lock to ensure that the phase offset keeps an unchanged
absolute value, which is difficult to achieve in practice.
In this paper, we consider a TOA-based localization system using OFDM
training signals over multiple frequency subbands. Then, a novel two-stage
global estimation (TSGE) scheme is proposed to fully exploit all the
multiband gains, where we consider all the phase distortion factors
and calibrate them implicitly without using extra handshaking procedures.
Specifically, in Stage 1, we build a coarse signal model, in which
the high frequency carriers are all absorbed for eliminating the oscillation
of the likelihood function. Although the coarse estimation algorithm
derived from the coarse signal model can only exploit the subcarrier
apertures gain, it does not get stuck in bad local optimums and thus
can provide a much more stable delay estimation to narrow down the
search range for the global delay estimation in the refined stage.
Then, we provide a sparse representation for multiband CFR, where
we adopt a common support based sparse vector to capture the group
sparsity structure in the multiband channel over the delay domain.
Based on this model, a Turbo Bayesian inference (Turbo-BI) algorithm
is proposed for channel parameter estimation (including the delay
parameter). Compared to the CS-based delay estimation methods in \cite{nsdi,CS2019,CS2020},
our proposed algorithm achieves higher estimation accuracy with lower
computational complexity. This is because we adopt a dynamic grid adjustment
strategy and do not need a very dense grid.
In Stage 2, with the help of prior information passed from Stage 1,
we perform a finer estimation based on a refined signal model. A higher
estimation accuracy than that in Stage 1 can be guaranteed since this
signal model contains the structure of frequency band apertures. However,
the refined signal model leads to a multi-dimensional non-convex likelihood
function that has many bad local optimums, which makes it extremely
difficult to find the global optimum and fully exploit the frequency
band apertures gain. For utilizing this apertures gain and the prior
information properly, we adopt a global search algorithm based on
the particle swarm optimization (PSO) to find a good solution for
the non-convex optimization of the multi-dimensional likelihood function.
In particular, the coarse estimation results from Stage 1 can be utilized
for determining the particle search range, which can reduce the search
complexity significantly. For further reducing the search space and
improving the estimation accuracy, we employ primal-decomposition
theory to decouple the objective function and get a least square (LS)
solution for channel coefficients. Then, the dimension of the search
space can be reduced by eliminating the channel coefficients in the
primal optimization problem. The main contributions are summarized
below.
\begin{itemize}
\item A novel TSGE scheme is proposed for obtaining multiband gains with
acceptable complexity, which includes a coarse estimation stage and
a refined estimation stage based on a two-stage signal model.
\item In Stage 1, we set up a common support based probability model, which
employs the group sparsity structure of the multiband channel. Then,
a Turbo-BI algorithm is proposed for delay estimation.
\item In Stage 2, we propose a PSO-LS algorithm based on PSO and primal-decomposition
theory, which reduces the dimension of the search space and significantly
improves the estimation accuracy.
\end{itemize}
Finally, extensive simulation results are presented to validate the
effectiveness of the proposed scheme and we show that it can achieve
superior delay estimation accuracy as compared to the baseline schemes.
The rest of this paper is organized as follows. In Section \ref{sec:System Model},
we describe the system model and introduce the phase distortion factors.
In Section \ref{sec:Two-Stage-Multiband-Delay}, we formulate the
two-stage signal model and outline the TSGE scheme. Sections \ref{sec:Turbo-BI-Algorithm-in}
and \ref{sec:PSO-LS-algorithm-in} present the Turbo-BI algorithm
and the PSO-LS algorithm in Stage 1 and Stage 2, respectively. In
Section \ref{sec:Simulation-Results}, numerical results are presented
and finally Section \ref{sec:Conclusion} concludes the paper.
\textit{Notations}: Bold upper (lower)-case letters are used to define
matrices (column vectors). In particular, bold letters indexed by
subscript $m$ denote vectors or matrices corresponding to the $m$-th
subband. $\mathbf{I}$ denotes an identity matrix, $\delta\left(\cdot\right)$
denotes the Dirac's delta function, $\mathrm{diag}\left(\cdot\right)$
constructs a diagonal matrix from its vector argument, and $\|\cdot\|$
denotes the Euclidean norm of a complex vector. For a matrix $\mathbf{A}$,
$\mathbf{A}^{T},\mathbf{A}^{H},\mathbf{A}^{-1},\mathrm{tr}(\mathbf{A})$
represent a transpose, complex conjugate transpose, inverse, and trace
of a matrix, respectively. For a scalar $a$, $a^{*}$ denotes the
conjugate of a scalar. The notation $\mathbb{R}^{\textrm{+}}$ represents
the set of strictly positive real numbers and $\mathcal{CN}(\mathbf{x};\mathrm{\boldsymbol{\mu}},\boldsymbol{\Sigma})$
denotes a complex Gaussian normal distribution corresponding to variable
$\mathbf{x}$ with mean $\boldsymbol{\mu}$ and covariance matrix
$\boldsymbol{\Sigma}$.
\section{System Model\label{sec:System Model}}
\begin{figure}[t]
\centering{}\includegraphics[width=10cm]{system_model2}\caption{\label{fig:The-multiband-OFDM}An illustration of the multiband OFDM
system.}
\end{figure}
As shown in Fig. \ref{fig:The-multiband-OFDM}, we consider a single-input
single-output (SISO) multiband system which employs OFDM training
signals over $M$ frequency subbands. The multiband system consists
of $M$ non-overlapping single band OFDM subsystems, where the $m$-th
subband is allocated to the $m$-th operator. Assume that each frequency
subband has $N_{m}$ orthogonal subcarriers with subcarrier spacing
$f_{s,m}$ and the carrier frequency of subband $m$ is denoted as
$f_{c,m}$. Then, the continuous-time CIR $h\left(t\right)$ can be
written as
\begin{equation}
h(t)=\sum_{k=1}^{K}\alpha_{k}\delta\left(t-\tau_{k}\right),
\end{equation}
where $K$ denotes the number of multipath components between the
transmitter and the receiver, $\alpha_{k}\in\mathbb{C}$ and $\tau_{k}\in\mathbb{R^{\textrm{+}}}$
denote the complex path gain and the delay of the $k$-th path, respectively.
The delays are sorted in an increasing order, i.e., $\tau_{k-1}<\tau_{k}$,
$k=2,...,K$, and $\tau_{1}$ is the LoS path which needs to be estimated
for localization. We assume that the complex path gain and delay parameters
are independent of the frequency subbands. Then, via a Fourier transform
of the CIR as in \cite{CS2019,ESPRIT2}, the CFR samples can be expressed
as
\begin{equation}
\tilde{h}_{m,n}=\sum_{k=1}^{K}\alpha_{k}e^{-j2\pi f_{m,n}\tau_{k}},\label{eq:CFR}
\end{equation}
where $f_{m,n}=f_{c,m}+n_{m}f_{s,m}$, $m=1,...,M$, $n_{m}\in\mathcal{N_{\mathit{m}}}\triangleq\left\{ -\frac{N_{m}}{2},...,\frac{N_{m}}{2}-1\right\} $.
With a slight abuse of notation, we use $n$ instead of $n_{m}$ in
the following equations. We assume that $N_{\mathit{m}},\forall m$
is an even number without loss of generality, and denote $N=N_{1}+\ldots+N_{M}$
as the number of CFR samples over all subbands.
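As a concrete illustration, the CFR samples in (\ref{eq:CFR}) can be generated numerically as below. This is only a sketch: the carrier frequencies, subcarrier spacing, and path parameters are illustrative assumptions, not values used in the paper.

```python
import numpy as np

def multiband_cfr(alphas, taus, f_c, f_s, n_sub):
    # Noise-free multiband CFR: h_{m,n} = sum_k alpha_k exp(-j 2 pi f_{m,n} tau_k),
    # with f_{m,n} = f_{c,m} + n f_{s,m} and n = -N_m/2, ..., N_m/2 - 1.
    cfr = []
    for m in range(len(f_c)):
        n = np.arange(-n_sub[m] // 2, n_sub[m] // 2)       # subcarrier indices
        f_mn = f_c[m] + n * f_s[m]                         # absolute frequencies f_{m,n}
        # (N_m x K) steering matrix times the K path gains
        cfr.append(np.exp(-2j * np.pi * np.outer(f_mn, taus)) @ alphas)
    return cfr                                             # list of M subband vectors

# Example: K = 2 paths observed over M = 2 subbands (illustrative values)
alphas = np.array([1.0 + 0.3j, 0.5 - 0.2j])
taus = np.array([30e-9, 75e-9])                            # delays in seconds
cfr = multiband_cfr(alphas, taus,
                    f_c=[2.40e9, 2.46e9], f_s=[312.5e3, 312.5e3],
                    n_sub=[64, 64])
```

Stacking the per-subband vectors recovers the $N$ CFR samples over all subbands.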
Apparently, the CFR exhibits sparsity over delay domain when $K$
is small, which will be exploited in our proposed Turbo-BI algorithm.
Then, during the period of a single OFDM symbol, the discrete-time
received signal model can be written as \cite{CS2019,ESPRIT2}
\begin{equation}
y_{m,n}=\sum_{k=1}^{K}\alpha_{k}e^{-j2\pi\left(f_{c,m}+nf_{s,m}\right)\tau_{k}}e^{-j2\pi nf_{s,m}\delta_{m}}e^{j\varphi_{m}}s_{m,n}+w_{m,n},\label{eq:original_signal}
\end{equation}
where $w_{m,n}$ is the $n$-th element of the additive white Gaussian
noise (AWGN) vector $\boldsymbol{w}_{m}\in\mathbb{C}^{N_{m}\times1}$,
following the distribution $\mathcal{C}\mathcal{N}\left(0,\sigma_{ns}^{2}\mathbf{I}\right)$.
$s_{m,n}$ denotes a known training symbol over the $n$-th subcarrier
of subband $m$ and we assume $s_{m,n}=1,\forall m,n$ for simplicity.
The parameter $\varphi_{m}$ and $\delta_{m}$ represent the phase
distortion factors caused by random phase offset and receiver timing
offset \cite{nonidealfoctor1,nonidealfoctor2,nonidealfoctor3}, respectively.
In practice, the receiver timing offset $\delta_{m}$ is often within
a small range and thus we assume that $\delta_{m},\forall m$ follows
a prior distribution $p\left(\delta_{m}\right)\sim\mathcal{N}\left(0,\sigma_{p}^{2}\right)$
\cite{nonidealfoctor3}, where $\sigma_{p}$ is the timing synchronization
error.
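A hedged numerical sketch of one subband of the received signal model (\ref{eq:original_signal}) is given below: the timing offset $\delta_{m}$ enters as a linear phase across subcarriers and the random phase offset $\varphi_{m}$ as a common rotation, with $s_{m,n}=1$ as assumed in the text. The function name and all numeric values are illustrative assumptions.

```python
import numpy as np

def received_subband(alphas, taus, f_c, f_s, n_m, delta_m, phi_m,
                     sigma_ns, rng):
    n = np.arange(-n_m // 2, n_m // 2)
    y = np.zeros(n_m, dtype=complex)
    for a_k, tau_k in zip(alphas, taus):                    # multipath sum
        y += a_k * np.exp(-2j * np.pi * (f_c + n * f_s) * tau_k)
    y *= np.exp(-2j * np.pi * n * f_s * delta_m)            # receiver timing offset
    y *= np.exp(1j * phi_m)                                 # random phase offset
    w = sigma_ns / np.sqrt(2) * (rng.standard_normal(n_m)
                                 + 1j * rng.standard_normal(n_m))
    return y + w                                            # AWGN added last

rng = np.random.default_rng(0)
y1 = received_subband(np.array([1.0]), np.array([50e-9]),
                      f_c=2.4e9, f_s=312.5e3, n_m=64,
                      delta_m=2e-9, phi_m=0.7, sigma_ns=0.01, rng=rng)
```

With the distortions and noise set to zero, the samples reduce to the pure multipath CFR of (\ref{eq:CFR}).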
However, the global delay estimation problem will be intractable if
we directly use the signal model (\ref{eq:original_signal}) due to
the huge search space of the multi-dimensional parameters and the existence
of many local optimums in the likelihood function, which thus motivates
the set up of the novel two-stage signal model and the associated
two-stage global estimation scheme.
\section{\label{sec:Two-Stage-Multiband-Delay}Two-Stage Global Estimation
Scheme for Multiband Delay Estimation}
In this section, we first explain why we cannot use the original signal
model for delay estimation based on the likelihood function analysis,
which motivates us to introduce the proposed two-stage signal model.
Then, based on the two-stage signal model, we provide the outline
of the proposed TSGE scheme.
\subsection{\label{subsec:Two-Stage-Signal-Model}Two-Stage Signal Model}
It is difficult to directly use the received signal model (\ref{eq:original_signal})
for delay estimation. On one hand, the original model (\ref{eq:original_signal})
has an inherent ambiguity. Specifically, for an arbitrary constant
$c$, if we substitute the two sets of variables $(\left|\alpha_{k}\right|e^{j\angle\alpha_{k}},\varphi_{m})$
and $(\left|\alpha_{k}\right|e^{j(\angle\alpha_{k}+c)},\varphi_{m}-c)$
into equation (\ref{eq:original_signal}), the same observation result
will be obtained. This indicates that the parameters $(\alpha_{k},\varphi_{m})$
are ambiguous, which makes delay estimation difficult.
On the other hand, due to the high frequency carrier $f_{c,m}$, the
original signal model (\ref{eq:original_signal}) leads to a multimodal
non-convex likelihood function that has many sidelobes. Consequently,
it is extremely difficult to find the global optimum. In the worst
case, the point of true values may fall into sidelobes, which will
inevitably result in a large estimation error. To clarify this problem,
we plot a likelihood function curve for the one-dimensional problem
of estimating the LoS path delay only in Fig. \ref{fig:Likehood-original},
where the red circle marks the point of the true values of LoS path
delay and the red star marks the point of global optimum. As can be
seen, the likelihood function fluctuates frequently, which makes it
intractable to find the global optimum. Moreover, the point of true
values does not lie in the mainlobe, where the global optimum is located.
Even if we manage to find the global optimum, an absolute
estimation error of about $27$ ns is still inevitable.
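The first issue, the $(\alpha_{k},\varphi_{m})$ ambiguity, can be verified numerically with the following sketch (all parameter values are illustrative assumptions):

```python
import numpy as np

# Replacing (alpha_k, varphi_m) by (alpha_k e^{jc}, varphi_m - c) in the
# noise-free version of Eq. (3) leaves the observation unchanged, so the
# pair cannot be identified from the data.
f_c, f_s, N = 2.4e9, 312.5e3, 64
n = np.arange(-N // 2, N // 2)
tau, delta = 50e-9, 1e-9

def observation(alpha, phi):
    return (alpha * np.exp(-2j * np.pi * (f_c + n * f_s) * tau)
            * np.exp(-2j * np.pi * n * f_s * delta) * np.exp(1j * phi))

alpha, phi, c = 0.8 + 0.4j, 0.3, 1.234
y_a = observation(alpha, phi)                      # original parameter pair
y_b = observation(alpha * np.exp(1j * c), phi - c) # shifted parameter pair
# y_a and y_b coincide even though the parameters differ
```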
Therefore, we build new signal models that are free of inherent ambiguity
and for which the point of true values falls in the mainlobe region with
much higher probability. Moreover, the signal models should preserve
the structure of frequency band apertures and subcarrier apertures.
Motivated by the above facts, we propose a two-stage signal model,
which is transformed from the original signal model (\ref{eq:original_signal})
by absorbing different frequency/phase terms into the complex gain.
\begin{figure}[t]
\centering{}\includegraphics[width=10cm]{likehood_prime}\caption{\label{fig:Likehood-original}An illustration of the likelihood function
based on signal model (\ref{eq:original_signal}).}
\end{figure}
\subsubsection{Coarse Signal Model}
\begin{equation}
y_{m,n}=\sum_{k=1}^{K}\alpha_{k,m}e^{-j2\pi nf_{s,m}\tau_{k}}e^{-j2\pi nf_{s,m}\delta_{m}}+w_{m,n},\label{eq:coarse_signal}
\end{equation}
where $\alpha_{k,m}=\alpha_{k}e^{j\varphi_{m}}e^{-j2\pi f_{c,m}\tau_{k}},\forall k,m$.
In signal model (\ref{eq:coarse_signal}), we absorb the terms of
random phase offset $e^{j\varphi_{m}}$ and carrier phase $e^{-j2\pi f_{c,m}\tau_{k}}$
into $\alpha_{k}$. Signal model (\ref{eq:coarse_signal}) becomes
unambiguous through this equivalent transformation, and all subbands
share the common sparse delay domain. As shown in Fig. \ref{fig:Likehood-coarse},
we depict the likelihood function based on signal model (\ref{eq:coarse_signal})
with the same parameter values as Fig. \ref{fig:Likehood-original}.
The likelihood function no longer frequently fluctuates and the point
of true values locates at the mainlobe region. In this case, we can
exploit the subcarrier apertures gain and estimate delay parameters
without ambiguity. This helps to achieve a much more stable delay
estimation in the coarse estimation stage, whose primary purpose is
to narrow down the search range to a relatively small region with
high probability. However, the estimation accuracy is
limited, since we have absorbed the carrier phase term and thus cannot
exploit the frequency band apertures gain, which motivates the next
refined estimation signal model (\ref{eq:refined_signal}) in Stage
2.
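The equivalence between the original model (\ref{eq:original_signal}) and the coarse model (\ref{eq:coarse_signal}) can be checked numerically as below; the parameter values are illustrative assumptions.

```python
import numpy as np

# Absorbing e^{j varphi_m} and the carrier phase e^{-j 2 pi f_{c,m} tau_k}
# into alpha_{k,m} = alpha_k e^{j varphi_m} e^{-j 2 pi f_{c,m} tau_k}
# leaves the noise-free observation of one subband unchanged.
f_c, f_s, N = 2.4e9, 312.5e3, 64
n = np.arange(-N // 2, N // 2)
alphas = np.array([1.0 + 0.3j, 0.5 - 0.2j])
taus = np.array([30e-9, 75e-9])
delta, phi = 1e-9, 0.7

# Original model (3): explicit carrier and phase-offset terms
y_orig = np.zeros(N, dtype=complex)
for a_k, t_k in zip(alphas, taus):
    y_orig += a_k * np.exp(-2j * np.pi * (f_c + n * f_s) * t_k)
y_orig *= np.exp(-2j * np.pi * n * f_s * delta) * np.exp(1j * phi)

# Coarse model (4): carrier phase and phase offset absorbed into alpha_{k,m}
y_coarse = np.zeros(N, dtype=complex)
for a_k, t_k in zip(alphas, taus):
    a_km = a_k * np.exp(1j * phi) * np.exp(-2j * np.pi * f_c * t_k)
    y_coarse += a_km * np.exp(-2j * np.pi * n * f_s * (t_k + delta))
```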
\begin{figure}[t]
\centering{}\includegraphics[width=10cm]{likehood_cugu}\caption{\label{fig:Likehood-coarse}An illustration of the likelihood function
based on signal model (\ref{eq:coarse_signal}).}
\end{figure}
\subsubsection{Refined Signal Model}
\begin{equation}
y_{m,n}=\sum_{k=1}^{K}\alpha_{k}^{\prime}e^{-j2\pi f_{c,m}^{\prime}\tau_{k}}e^{-j2\pi nf_{s,m}\tau_{k}}e^{-j2\pi nf_{s,m}\delta_{m}}e^{j\varphi_{m}^{\prime}}+w_{m,n},\label{eq:refined_signal}
\end{equation}
where $f_{c,m}^{\prime}=f_{c,m}-f_{c,1}$, $\alpha_{k}^{\prime}=\alpha_{k}e^{j\varphi_{1}}e^{-j2\pi f_{c,1}\tau_{k}}$,
$\varphi_{m}^{\prime}=\varphi_{m}-\varphi_{1}$, $\forall k,m$. In
the signal model (\ref{eq:refined_signal}), we absorb the random
phase offset, $e^{j\varphi_{1}}$, and carrier phase of the first
frequency subband, $e^{-j2\pi f_{c,1}\tau_{k}}$, into $\alpha_{k}$
and retain the residual terms, e.g., $e^{-j2\pi\left(f_{c,2}-f_{c,1}\right)\tau_{k}}$
and $e^{j\left(\varphi_{2}-\varphi_{1}\right)}$ when $m=2$, in (\ref{eq:refined_signal}).
Compared to (\ref{eq:coarse_signal}), signal model (\ref{eq:refined_signal})
has the extra structure of frequency band apertures, $e^{-j2\pi f_{c,m}^{\prime}\tau_{k}}$,
and residual random phase offset, $e^{j\varphi_{m}^{\prime}}$. Fig.
\ref{fig:Likehood-jinggu} illustrates the likelihood function based
on signal model (\ref{eq:refined_signal}), where the likelihood function
fluctuates less than that in Fig. \ref{fig:Likehood-original} and
the point of true values is now in the mainlobe region. Moreover,
we observe that the mainlobe is sharper than that in Fig. \ref{fig:Likehood-coarse}
due to the existence of frequency band apertures, which leads to
a potential performance improvement. However, the frequency band apertures
also cause numerous bad local optimums in the likelihood function.
How to find the global optimum with low complexity is a challenging
problem. To solve this problem, we propose the PSO-LS algorithm later.
\begin{figure}[t]
\centering{}\includegraphics[width=10cm]{likehood_jinggu}\caption{\label{fig:Likehood-jinggu}An illustration of the likelihood function
based on signal model (\ref{eq:refined_signal}).}
\end{figure}
In summary, both signal models are essential. In Stage 1, based
on signal model (\ref{eq:coarse_signal}), we exploit the subcarrier
apertures and give a stable coarse estimation as the initial result
to Stage 2. Then, in Stage 2, signal model (\ref{eq:refined_signal})
is used for providing a refined estimation by exploiting both the
subcarrier apertures gain and the frequency band apertures gain.
\subsection{Outline of the TSGE Scheme}
Based on the two-stage signal model, the TSGE scheme is depicted as
follows:
\begin{itemize}
\item Stage 1: We set up the coarse signal model (\ref{eq:coarse_signal})
and perform an initial delay estimation using the proposed Turbo-BI
algorithm. By doing this, we exploit the channel sparsity over delay
domain and the subcarrier apertures gain. Then, we provide the estimation
result to Stage 2.
\item Stage 2: Based on the coarse estimation result from Stage 1 and the
refined signal model (\ref{eq:refined_signal}), a more refined delay
estimation is performed. To fully exploit subcarrier and frequency
band apertures gains and overcome the difficulty of finding the global
optimum as shown in Fig. \ref{fig:Likehood-jinggu}, we propose the
PSO-LS algorithm.
\end{itemize}
The overall algorithm is summarized as in Algorithm \ref{alg:TSMBDE-scheme}.
The details of the Turbo-BI and PSO-LS algorithms are presented in
Section \ref{sec:Turbo-BI-Algorithm-in} and \ref{sec:PSO-LS-algorithm-in},
respectively.
\begin{algorithm}[tbh]
{\small{}\caption{\label{alg:TSMBDE-scheme}TSGE scheme}
}{\small\par}
\textbf{Input:} CFR samples $y_{m,n},\forall m,n$.
\textbf{Output:} The delay estimation result from Stage 2.
\begin{algorithmic}[1]
\STATE \textbf{Stage 1:}
\STATE \textbf{\%Coarse estimation}
\STATE Construct the coarse signal model (\ref{eq:coarse_signal}).
\STATE Common support based sparse representation for the coarse
signal model (\ref{eq:coarse_signal}).
\STATE Perform Turbo-BI algorithm.
\STATE Pass the coarse estimation result to Stage 2.
\STATE \textbf{Stage 2:}
\STATE \textbf{\%Refined estimation}
\STATE Construct the refined signal model (\ref{eq:refined_signal}).
\STATE Perform primal-decomposition for problem (\ref{eq:P2}).
\STATE Perform PSO-LS algorithm with the initial particles generated
using the coarse estimation in Stage 1.
\end{algorithmic}
\end{algorithm}
\section{\label{sec:Turbo-BI-Algorithm-in}Turbo-BI Algorithm in Stage 1}
\subsection{Common Support Based Sparse Representation}
We first describe the sparse representation over delay domain for
the coarse signal model (\ref{eq:coarse_signal}), which is a necessary
step before employing the sparse recovery methods, e.g., Turbo-BI
algorithm. One commonly used method is to define a uniform grid $\mathcal{D}=\left\{ \overline{d}_{1},\ldots,\overline{d}_{L}\right\} $
of $L$ ($L\gg K$) delay points over $\left[0,T_{d}\right]$ ($T_{d}$
denotes an upper bound for the maximum delay spread). If all the true
delay values exactly lie in the discrete set $\mathcal{D}$, we can
reformulate the signal model (\ref{eq:coarse_signal}) as:
\begin{equation}
\left[\begin{array}{c}
\mathbf{y}_{1}\\
\vdots\\
\mathbf{y}_{M}
\end{array}\right]=\left[\begin{array}{lll}
\mathbf{S}_{1}\\
& \ddots\\
& & \mathbf{S}_{M}
\end{array}\right]\cdot\left[\begin{array}{ccc}
\mathbf{A}_{1}\\
& \ddots\\
& & \mathbf{A}_{M}
\end{array}\right]\cdot\left[\begin{array}{c}
\mathbf{x}_{1}\\
\vdots\\
\mathbf{x}_{M}
\end{array}\right]+\left[\begin{array}{c}
\boldsymbol{w}_{1}\\
\vdots\\
\boldsymbol{w}_{M}
\end{array}\right],\label{eq:ULAsignal}
\end{equation}
where $\mathbf{y}_{m}=[y_{m,-\frac{N_{m}}{2}},\cdots,y_{m,\frac{N_{m}}{2}-1}]^{T}\in\mathbb{C}^{N_{m}\times1}$,
$\mathbf{A}_{m}=[\mathbf{a}_{m}(\overline{d}_{1}),\mathbf{a}_{m}(\overline{d}_{2}),\cdots,\mathbf{a}_{m}(\overline{d}_{L})]\in\mathbb{C}^{N_{m}\times L}$
denotes the basis matrix, $\mathbf{a}_{m}(\overline{d}_{l})=[e^{-j2\pi(-\frac{N_{m}}{2})f_{s,m}\overline{d}_{l}},\ldots,e^{-j2\pi(\frac{N_{m}}{2}-1)f_{s,m}\overline{d}_{l}}]^{T}\in\mathbb{C}^{N_{m}\times1}$
is a linear steering vector, $\mathbf{S}_{m}=\mathrm{diag}(e^{-j2\pi(-\frac{N_{m}}{2})f_{s,m}\delta_{m}},\ldots,$$e^{-j2\pi(\frac{N_{m}}{2}-1)f_{s,m}\delta_{m}})\in\mathbb{C}^{N_{m}\times N_{m}}$,
and $\mathbf{x}_{m}\in\mathbb{C}^{L\times1}$ is a sparse vector whose
non-zero elements correspond to the true delays. For example, if the
$l$-th element of $\mathbf{x}_{m}$ denoted by $x_{m,l}$ is non-zero
and the corresponding true delay is $\tau_{\hat{k}}$, then we have
$\overline{d}_{l}=\tau_{\hat{k}}$ and $x_{m,l}=\alpha_{\hat{k},m}$.
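As an illustrative aside, the steering vector $\mathbf{a}_{m}(d)$, the basis matrix $\mathbf{A}_{m}$, and the timing-offset phase matrix $\mathbf{S}_{m}$ defined above translate directly into a few lines of NumPy; the following sketch uses our own variable names and is not part of the proposed scheme:

```python
import numpy as np

def steering_vector(N_m, f_s, d):
    """a_m(d): phase ramp over subcarrier indices n = -N_m/2, ..., N_m/2 - 1."""
    n = np.arange(-N_m // 2, N_m // 2)
    return np.exp(-2j * np.pi * n * f_s * d)

def basis_matrix(N_m, f_s, grid):
    """A_m: stack the steering vectors of all L grid delays (N_m x L)."""
    return np.column_stack([steering_vector(N_m, f_s, d) for d in grid])

def timing_offset_matrix(N_m, f_s, delta_m):
    """S_m: diagonal phase matrix induced by the receiver timing offset delta_m."""
    return np.diag(steering_vector(N_m, f_s, delta_m))
```

With these pieces, one noiseless subband observation is simply `timing_offset_matrix(...) @ basis_matrix(...) @ x_m`.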
However, the true delays generally do not lie exactly on the predefined
discrete grid $\mathcal{D}$ in practice, which leads to energy
leakage \cite{FDD2D}. To handle this issue, the authors in \cite{nsdi,CS2019,CS2020}
employ a dense grid ($L\gg N$) to make equation (\ref{eq:ULAsignal})
hold approximately, at the cost of high computational complexity.
To overcome both the energy leakage caused by the delay mismatch and the
high computational complexity of fixed dense grids, we adopt a dynamic
grid adjustment strategy. Specifically, we introduce an off-grid vector
$\Delta\boldsymbol{\tau}=\left[\Delta\tau_{1},\cdots,\Delta\tau_{L}\right]$,
which satisfies $\Delta\tau_{l_{k}}=\tau_{k}-\overline{d}_{l_{k}},k=1,\cdots,K$
and $\Delta\tau_{l}=0,\forall l\notin\left\{ l_{1},\cdots,l_{K}\right\} $.
Note that $l_{k}\triangleq\underset{l}{\mathrm{argmin}}\left|\tau_{k}-\overline{d}_{l}\right|$
denotes the index of the grid point nearest to $\tau_{k}$. Then, $\mathbf{A}_{m}$
can be rewritten as
\begin{equation}
\mathbf{A}_{m}\left(\Delta\mathbf{\boldsymbol{\tau}}\right)=\left[\mathbf{a}_{m}\left(\overline{d}_{1}+\Delta\tau_{1}\right),\mathbf{a}_{m}\left(\overline{d}_{2}+\Delta\tau_{2}\right),\cdots,\mathbf{a}_{m}\left(\overline{d}_{L}+\Delta\tau_{L}\right)\right].
\end{equation}
Now, the mismatch can be compensated by the off-grid vector $\Delta\boldsymbol{\tau}$,
and the signal model (\ref{eq:ULAsignal}) holds even if the true
delays do not lie on the uniform grid $\mathcal{D}$.
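The off-grid compensation amounts to evaluating the steering vectors at the shifted delays $\overline{d}_{l}+\Delta\tau_{l}$, which can be sketched in vectorized form (illustrative code, not the authors' implementation):

```python
import numpy as np

def offgrid_basis(N_m, f_s, grid, delta_tau):
    """A_m(dtau): steering vectors at the adjusted delays d_l + dtau_l (N_m x L)."""
    n = np.arange(-N_m // 2, N_m // 2)[:, None]              # (N_m, 1)
    d = (np.asarray(grid) + np.asarray(delta_tau))[None, :]  # (1, L)
    return np.exp(-2j * np.pi * n * f_s * d)
```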
After the sparse representation, we propose a support-based probability
model to capture the structured sparsity of the multiband channels.
Since all sparse vectors $\mathbf{x}_{m}$'s share a common support
corresponding to the true delays, the sparse vector $\mathbf{x}=\left[\mathbf{x}_{1};\ldots;\mathbf{x}_{M}\right]$
obeys a group sparsity. We use $\mathbf{s}=\left[s_{1},\ldots,s_{L}\right]^{T}\in\left\{ 0,1\right\} ^{L}$
to denote the common support vector of $\mathbf{x}_{m},\forall m$,
where $s_{l}=1$ indicates that $x_{m,l},\forall m$ are active (non-zero),
while $s_{l}=0$ indicates that $x_{m,l},\forall m$ are inactive
(zero). Then, we further model the channel prior information using
a Bernoulli-Gaussian (BG) model \cite{BG,BG2}. Given the channel
support vector, $\mathbf{s}$, the conditional prior distribution
of the elements of $\mathbf{x}_{m},\forall m$ is independent and
can be written as
\begin{equation}
p\left(x_{m,l}\mid s_{l}\right)=\left(1-s_{l}\right)\delta\left(x_{m,l}\right)+s_{l}\mathcal{CN}\left(x_{m,l};0,\sigma_{m,l}^{2}\right).
\end{equation}
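To make the group-sparsity structure of this prior concrete, the following sketch draws one channel realization from it; for simplicity we use a single common variance `sigma2` in place of the per-element $\sigma_{m,l}^{2}$ (an illustrative simplification):

```python
import numpy as np

def sample_bg_channel(L, M, p_s, sigma2, rng):
    """Draw s_l ~ Bernoulli(p_s) once, then x_{m,l} | s_l = s_l * CN(0, sigma2)
    for every subband m, so all x_m share the same support (group sparsity)."""
    s = (rng.random(L) < p_s).astype(int)
    x = np.zeros((M, L), dtype=complex)
    for m in range(M):
        g = np.sqrt(sigma2 / 2) * (rng.standard_normal(L) + 1j * rng.standard_normal(L))
        x[m] = s * g
    return s, x
```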
\subsection{Turbo-BI Algorithm}
To achieve the goal of delay estimation, we need to estimate the sparse
vector $\mathbf{x}$, support vector $\mathbf{s}$, and the uncertain
parameters $\mathbf{\boldsymbol{\xi}}\triangleq[\boldsymbol{\delta}^{T},(\Delta\boldsymbol{\mathbf{\tau}})^{T}]^{T}$,
where $\boldsymbol{\delta}=\left[\delta_{1},\ldots,\delta_{M}\right]^{T}$,
given the observations $\mathbf{y}=\left[\mathbf{y}_{1};\ldots;\mathbf{y}_{M}\right]\in\mathbb{C}^{N\times1}$.
In particular, for given $\boldsymbol{\xi}$, we are interested in
computing the conditional marginal posteriors $p(\mathbf{x}|\mathbf{y},\mathbf{\boldsymbol{\xi}})$.
For the uncertain parameters $\boldsymbol{\xi}$, we adopt the maximum
a posteriori (MAP) estimation method as
\begin{equation}
\boldsymbol{\xi}^{*}=\underset{\boldsymbol{\xi}}{\mathrm{argmax}}\thinspace p(\boldsymbol{\xi}|\mathbf{y})\propto\underset{\boldsymbol{\xi}}{\mathrm{argmax}}\thinspace\ln p(\mathbf{y},\boldsymbol{\xi})=\underset{\boldsymbol{\xi}}{\mathrm{argmax}}\thinspace\ln\underset{\mathbf{s}}{\sum}\int p(\mathbf{y},\boldsymbol{\xi},\mathbf{x},\mathbf{s})d\mathbf{x}.\label{eq:MstepMAP}
\end{equation}
This is a high-dimensional non-convex objective function, and we cannot
obtain a closed-form expression due to the multi-dimensional integration
over $\mathbf{x}$ and the summation over $\mathbf{s}$, which makes it
difficult to directly maximize $\ln p(\mathbf{y},\boldsymbol{\xi})$. To handle
this issue, we adopt the majorization-minimization (MM) method to
construct a surrogate function and then use the alternating optimization
(AO) method to find a stationary point of (\ref{eq:MstepMAP}). Inspired
by the expectation-maximization (EM) method \cite{2003EM}, we propose
a Turbo-BI algorithm which performs iterations between the following
two steps until convergence.
\begin{itemize}
\item Turbo-BI-E Step: For given $\mathbf{\boldsymbol{\xi}}$, approximately
calculate the posterior $p(\mathbf{x}|\mathbf{y},\mathbf{\boldsymbol{\xi}})$
by combining the message passing and linear minimum mean square error
(LMMSE) approaches via a turbo framework, as will be elaborated in
Subsection \ref{subsec:Turbo-BI-E-Step};
\item Turbo-BI-M Step: Given $p(\mathbf{x}|\mathbf{y},\mathbf{\boldsymbol{\xi}})$,
construct a surrogate function for $\ln p(\mathbf{y},\boldsymbol{\xi})$
based on the MM method, partition $\boldsymbol{\xi}$ into $B$ blocks
$\mathbf{\boldsymbol{\xi}}=(\mathbf{\boldsymbol{\xi}}_{1},...,\boldsymbol{\xi}_{B})$,
then alternately maximize the surrogate function with respect to
$\boldsymbol{\xi}_{j},j=1,\ldots,B$, as will be
elaborated in Subsection \ref{subsec:Turbo-BI-M-Step}.
\end{itemize}
\subsubsection{Turbo-BI-E Step\label{subsec:Turbo-BI-E-Step}}
\begin{figure}[t]
\centering{}\includegraphics[width=10cm]{turbo_AB}\caption{\label{fig:Turbo}Modules of the Turbo-BI-E step and message flow
between different modules.}
\end{figure}
The Turbo-BI-E step contains two modules, as illustrated in Fig. \ref{fig:Turbo}.
Module A is an LMMSE estimator based on the observations $\mathbf{y}_{m},\forall m$
and Module B is a sparsity combiner that utilizes the sparsity information
of $\mathbf{x}_{m},\forall m$ to further refine the estimation results.
The extrinsic estimation of one module will be treated as a prior
mean for the other module in the next iteration. Through the iterations
between these two modules, the channel prior information and the
observation information are jointly exploited. Specifically, in Module
A, we assume that $\mathbf{x}_{m},\forall m$ follows a Gaussian distribution
$\mathcal{CN}\left(\mathbf{x}_{m};\mathbf{x}_{A,m}^{pri},v_{A,m}^{pri}\mathbf{I}\right)$,
where $\mathbf{x}_{A,m}^{pri}$ and $v_{A,m}^{pri}$ are the extrinsic
message output from Module B. We define $\boldsymbol{\Phi}_{m}=\mathbf{S}_{m}\mathbf{A}_{m}\left(\Delta\mathbf{\boldsymbol{\tau}}\right)\in\mathbb{C}^{N_{m}\times L}$
as the measurement matrix and we can obtain the conditional distribution
$p\left(\mathbf{y}_{m}|\mathbf{x}_{m}\right)=\mathcal{CN}\left(\boldsymbol{\Phi}_{m}\mathbf{x}_{m},\sigma_{ns}^{2}\mathbf{I}\right)$.
Then, the posterior distribution of $\mathbf{x}_{m}$ is given by
$p\left(\mathbf{x}_{m}\mid\mathbf{y}_{m}\right)=\mathcal{C}\mathcal{N}\left(\mathbf{x}_{A,m}^{post},\mathbf{V}_{A,m}^{post}\right)$,
where
\begin{equation}
\mathbf{V}_{A,m}^{post}=\left(\frac{\boldsymbol{\Phi}_{\mathit{m}}^{\mathit{H}}\boldsymbol{\Phi}_{m}}{\sigma_{ns}^{2}}+\frac{1}{v_{A,m}^{pri}}\mathbf{I}\right)^{-1},\label{eq:VApost_inv}
\end{equation}
\begin{equation}
\mathbf{x}_{A,m}^{post}=\mathbf{V}_{A,m}^{post}\left(\frac{\mathbf{x}_{A,m}^{pri}}{v_{A,m}^{pri}}+\frac{\boldsymbol{\Phi}_{m}^{H}\mathbf{y}_{m}}{\sigma_{ns}^{2}}\right).\label{eq:XAPOST}
\end{equation}
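Equations (\ref{eq:VApost_inv}) and (\ref{eq:XAPOST}) transcribe directly into code. The following NumPy sketch forms the inverse explicitly, which is affordable here since $L$ is small; it is an illustration rather than the authors' implementation:

```python
import numpy as np

def module_a_posterior(Phi, y, x_pri, v_pri, sigma2_ns):
    """LMMSE posterior of x_m given y_m and the Gaussian prior CN(x_pri, v_pri I):
    V = (Phi^H Phi / sigma^2 + I / v_pri)^{-1}, x = V (x_pri/v_pri + Phi^H y / sigma^2)."""
    L = Phi.shape[1]
    V_post = np.linalg.inv(Phi.conj().T @ Phi / sigma2_ns + np.eye(L) / v_pri)
    x_post = V_post @ (x_pri / v_pri + Phi.conj().T @ y / sigma2_ns)
    return x_post, V_post
```

As a sanity check, in the near-noiseless, uninformative-prior regime the posterior mean approaches the least-squares solution.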
Note that in \cite{Turbo_BI}, the singular value decomposition (SVD)
of $\boldsymbol{\Phi}_{m}$ is utilized to reduce the computational
complexity of computing $\mathbf{V}_{A,m}^{post}$, which involves a
matrix inversion. In our case, however, since $L\ll N_{m}$, the
complexity of the matrix inversion, $\mathcal{O}\left(L^{3}\right)$,
is relatively low and thus acceptable. Finally, we calculate the
extrinsic messages:
\begin{equation}
v_{A,m,i}^{ext}=\left(\frac{1}{v_{A,m,i}^{post}}-\frac{1}{v_{A,m}^{pri}}\right)^{-1},\label{eq:vAext}
\end{equation}
\begin{equation}
x_{A,m,i}^{ext}=v_{A,m,i}^{ext}\left(\frac{x_{A,m,i}^{post}}{v_{A,m,i}^{post}}-\frac{x_{A,m,i}^{pri}}{v_{A,m}^{pri}}\right),\label{eq:xAext}
\end{equation}
where $v_{A,m,i}^{post}$ is the $i$-th diagonal element of $\mathbf{V}_{A,m}^{post}$.
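The extrinsic computation in (\ref{eq:vAext}) and (\ref{eq:xAext}) is the usual Gaussian "division" that removes the prior contribution from the posterior; a scalar sketch (illustrative only):

```python
def extrinsic_message(x_post, v_post, x_pri, v_pri):
    """Extrinsic mean/variance: subtract the prior precision and the
    precision-weighted prior mean from the posterior quantities."""
    v_ext = 1.0 / (1.0 / v_post - 1.0 / v_pri)
    x_ext = v_ext * (x_post / v_post - x_pri / v_pri)
    return x_ext, v_ext
```

A quick self-check: combining $\mathcal{CN}(x^{ext},v^{ext})$ back with the prior $\mathcal{CN}(x^{pri},v^{pri})$ recovers the posterior.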
In Module B, we model $\mathbf{x}_{B,m}^{pri}$ as
an AWGN observation of $\mathbf{x}_{m}$ \cite{modelB1,modelB2}:
\begin{equation}
\mathbf{x}_{B,m}^{pri}=\mathbf{x}_{m}+\mathbf{z}_{m},\label{eq:ModuleB-1}
\end{equation}
where $\mathbf{z}_{m}\sim\mathcal{C}\mathcal{N}\left(0,v_{B,m}^{pri}\mathbf{I}\right)$
is independent of $\mathbf{x}_{m}$, $\mathbf{x}_{B,m}^{pri}=\mathbf{x}_{A,m}^{ext}$
and $v_{B,m}^{pri}=\frac{1}{L}\sum_{i=1}^{L}v_{A,m,i}^{ext}$ are
the extrinsic message from Module A. Based on (\ref{eq:ModuleB-1}),
we combine the sparsity prior information of $\mathbf{x}_{m}$ and
the extrinsic messages from Module A, aiming at calculating the posterior
distributions $p\left(x_{m,l}\mid\mathbf{x}_{B}^{pri}\right)$ by
performing the sum-product message passing (SPMP) \cite{SPMP} over
the factor graph, where $\mathbf{x}_{B}^{pri}=[(\mathbf{x}_{B,1}^{pri})^{T},\ldots,(\mathbf{x}_{B,M}^{pri})^{T}]^{T}$.
Particularly, the factor graph of the joint distribution $p\left(\mathbf{x}_{B}^{pri},\mathbf{x},\mathbf{s}\right)$
is shown in Fig. \ref{fig:Factor-graph-of}, where the function expression
of each factor node is listed in Table \ref{tab:Factors,-distributions-and}.
At subband $m$, the factor graph is denoted by $\mathcal{G}_{m}$.
As can be seen, the factor graphs $\mathcal{G}_{m}$'s share
the common support vector $\mathbf{s}$.
\begin{figure}[t]
\centering{}\includegraphics[width=10cm]{factorgraph}\caption{\label{fig:Factor-graph-of}Factor graph of the Turbo-BI algorithm.}
\end{figure}
\begin{table*}[t]
\begin{centering}
\caption{\label{tab:Factors,-distributions-and}Factors, distributions and
functional forms in Fig. \ref{fig:Factor-graph-of}.}
\par\end{centering}
\centering{
\begin{tabular}{|c|c|c|}
\hline
{\small{}Factor} & {\small{}Distribution} & {\small{}Functional form}\tabularnewline
\hline
\hline
$g_{m,l}\left(x_{B,m,l}^{pri},x_{m,l}\right)$ & $p\left(x_{B,m,l}^{pri}\mid x_{m,l}\right)$ & $\mathcal{CN}\left(x_{m,l};x_{B,m,l}^{pri},v_{B,m}^{pri}\right)$\tabularnewline
\hline
$f_{m,l}\left(s_{l},x_{m,l}\right)$ & $p\left(x_{m,l}\mid s_{l}\right)$ & $\left(1-s_{l}\right)\delta\left(x_{m,l}\right)+s_{l}\mathcal{CN}\left(x_{m,l};0,\sigma_{m,l}^{2}\right)$\tabularnewline
\hline
$d_{l}\left(s_{l}\right)$ & $p\left(s_{l}\right)$ & $p_{s}$\tabularnewline
\hline
\end{tabular}
\end{table*}
We now outline the message passing scheme on graph $\mathcal{G}$.
The details are elaborated in Appendix \ref{subsec:A.-Message-Passing}.
According to the sum-product rule, the messages over the path
$x_{m,l}\rightarrow f_{m,l}\rightarrow s_{l}$ are given by (\ref{eq:MP1})
and (\ref{eq:MP2}). Then the messages are passed back over the path
$s_{l}\rightarrow f_{m,l}\rightarrow x_{m,l}$ using (\ref{eq:MP3})
and (\ref{eq:MP4}). After calculating the updated messages $\left\{ v_{f_{m,l}\rightarrow x_{m,l}}\right\} $,
the approximate posterior distributions are given by
\begin{equation}
p\left(x_{m,l}\mid\mathbf{x}_{B}^{pri}\right)\wasypropto v_{f_{m,l}\rightarrow x_{m,l}}\left(x_{m,l}\right)v_{g_{m,l}\rightarrow x_{m,l}}\left(x_{m,l}\right),\label{eq:MP5}
\end{equation}
where $v_{g_{m,l}\rightarrow x_{m,l}}\left(x_{m,l}\right)=\mathcal{C}\mathcal{N}\left(x_{m,l};x_{B,m,l}^{pri},v_{B,m}^{pri}\right)$.
Then the posterior mean and variance are given by
\begin{equation}
x_{B,m,l}^{post}=\mathbb{E}\left(x_{m,l}\mid\mathbf{x}_{B}^{pri}\right)=\int x_{m,l}\thinspace p\left(x_{m,l}\mid\mathbf{x}_{B}^{pri}\right)dx_{m,l},\label{eq:MP6}
\end{equation}
\begin{equation}
v_{B,m}^{post}=\frac{1}{L}\sum_{l=1}^{L}\mathrm{Var}\left(x_{m,l}\mid\mathbf{x}_{B}^{pri}\right)=\frac{1}{L}\sum_{l=1}^{L}\int\left|x_{m,l}-\mathbb{E}\left(x_{m,l}\mid\mathbf{x}_{B}^{pri}\right)\right|^{2}p\left(x_{m,l}\mid\mathbf{x}_{B}^{pri}\right)dx_{m,l}.\label{eq:MP7}
\end{equation}
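The message passing that produces the activity probability of each $s_{l}$ is deferred to the appendix; as a sketch of the final combining step (\ref{eq:MP5})--(\ref{eq:MP7}), the following scalar Bernoulli-Gaussian "denoiser" computes the posterior moments given an AWGN observation, with `pi_act` standing in for the support probability delivered by the messages (an assumption we make for illustration):

```python
import numpy as np

def cn_pdf(r, var):
    """Density of a circular complex Gaussian CN(r; 0, var)."""
    return np.exp(-np.abs(r) ** 2 / var) / (np.pi * var)

def bg_posterior_moments(r, v, pi_act, sigma2):
    """Posterior mean/variance of a BG scalar x from r = x + z, z ~ CN(0, v),
    where pi_act is the probability that x is active."""
    num = pi_act * cn_pdf(r, sigma2 + v)
    lam = num / (num + (1.0 - pi_act) * cn_pdf(r, v))  # posterior activity prob.
    m = sigma2 / (sigma2 + v) * r                      # conditional mean if active
    V = sigma2 * v / (sigma2 + v)                      # conditional var if active
    mean = lam * m
    var = lam * (np.abs(m) ** 2 + V) - np.abs(mean) ** 2
    return mean, var
```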
Finally, based on the derivation in \cite{XApri}, the extrinsic update
for Module A can be calculated as
\begin{equation}
\mathbf{x}_{A,m}^{pri}=\mathbf{x}_{B,m}^{ext}=v_{A,m}^{pri}\left(\frac{\mathbf{x}_{B,m}^{post}}{v_{B,m}^{post}}-\frac{\mathbf{x}_{B,m}^{pri}}{v_{B,m}^{pri}}\right),\label{eq:xBext}
\end{equation}
\begin{equation}
v_{A,m}^{pri}=v_{B,m}^{ext}=\left(\frac{1}{v_{B,m}^{post}}-\frac{1}{v_{B,m}^{pri}}\right)^{-1}.\label{eq:vBext}
\end{equation}
\subsubsection{Turbo-BI-M Step\label{subsec:Turbo-BI-M-Step}}
In the M-step, we construct a surrogate function at a fixed point $\dot{\boldsymbol{\xi}}$
for the objective function (\ref{eq:MstepMAP}) based on the MM method
as
\begin{equation}
u(\mathbf{\boldsymbol{\xi}};\dot{\mathbf{\boldsymbol{\xi}}})=\int p(\mathbf{x}\mid\mathbf{y},\dot{\mathbf{\xi}})\ln\frac{p(\mathbf{x},\mathbf{y},\mathbf{\boldsymbol{\xi}})}{p(\mathbf{x}\mid\mathbf{y},\dot{\mathbf{\boldsymbol{\xi}}})}d\mathbf{x},\label{eq:surrogatefunc}
\end{equation}
which satisfies the basic properties: $u(\boldsymbol{\xi};\dot{\boldsymbol{\xi}})\leq\ln p(\mathbf{y},\boldsymbol{\xi}),\forall\boldsymbol{\xi}$;
$u(\dot{\boldsymbol{\xi}};\dot{\boldsymbol{\xi}})=\ln p(\mathbf{y},\dot{\boldsymbol{\xi}})$;
and $\left.\frac{\partial u(\boldsymbol{\xi};\dot{\boldsymbol{\xi}})}{\partial\boldsymbol{\xi}}\right|_{\boldsymbol{\xi}=\dot{\boldsymbol{\xi}}}=\left.\frac{\partial\ln p(\mathbf{y},\boldsymbol{\xi})}{\partial\boldsymbol{\xi}}\right|_{\boldsymbol{\xi}=\dot{\boldsymbol{\xi}}}$.
Then, we partition $\mathbf{\boldsymbol{\xi}}$ into $B=2$ blocks
with $\boldsymbol{\xi}_{1}=\boldsymbol{\delta}$, $\boldsymbol{\xi}_{2}=\Delta\boldsymbol{\tau}$
based on their distinct physical meanings, and alternately update
$\boldsymbol{\xi}_{1}$ and $\boldsymbol{\xi}_{2}$ as
\begin{eqnarray}
\mathbf{\boldsymbol{\delta}}^{(i+1)} & = & \underset{\mathbf{\boldsymbol{\delta}}}{\mathrm{argmax}}\thinspace u\left(\mathbf{\boldsymbol{\delta}},\Delta\boldsymbol{\tau}^{(i)};\mathbf{\boldsymbol{\delta}}^{(i)},\Delta\boldsymbol{\tau}^{(i)}\right),\label{eq:Mstep_delta}\\
\Delta\mathbf{\boldsymbol{\tau}}^{(i+1)} & = & \underset{\boldsymbol{\Delta}\mathbf{\tau}}{\mathrm{argmax}}\thinspace u\left(\boldsymbol{\delta}^{(i+1)},\Delta\boldsymbol{\tau};\boldsymbol{\delta}^{(i)},\Delta\boldsymbol{\tau}^{(i)}\right).\label{Mstep_tau}
\end{eqnarray}
Since the optimization problems (\ref{eq:Mstep_delta}) and (\ref{Mstep_tau})
are non-convex and it is hard to find their optimal solutions, we
apply a one-step gradient update for $\mathbf{\boldsymbol{\delta}}$
and $\Delta\boldsymbol{\mathbf{\tau}}$ as follows:
\begin{eqnarray}
\boldsymbol{\delta}^{(i+1)} & = & \boldsymbol{\delta}^{(i)}+\gamma_{\boldsymbol{\delta}}\cdot\boldsymbol{\zeta}_{\boldsymbol{\delta}}^{(i)},\label{eq:updaterule_delta}\\
\Delta\boldsymbol{\tau}^{(i+1)} & = & \Delta\boldsymbol{\tau}^{(i)}+\gamma_{\Delta\boldsymbol{\tau}}\cdot\boldsymbol{\zeta}_{\Delta\boldsymbol{\tau}}^{(i)},\label{eq:updaterule_tau}
\end{eqnarray}
where $\gamma_{\boldsymbol{\delta}}$ and $\gamma_{\Delta\boldsymbol{\tau}}$
are the step sizes determined by the Armijo rule \cite{Bertsekas_book95_NProgramming},
$\boldsymbol{\zeta}_{\boldsymbol{\delta}}^{(i)}$ and $\boldsymbol{\zeta}_{\Delta\boldsymbol{\tau}}^{(i)}$
are the gradients of the objective function in (\ref{eq:Mstep_delta})
and (\ref{Mstep_tau}) with respect to $\boldsymbol{\delta}$ and
$\Delta\boldsymbol{\tau}$, respectively. The detailed derivations
for $\boldsymbol{\zeta}_{\boldsymbol{\delta}}^{(i)}$ and $\boldsymbol{\zeta}_{\Delta\boldsymbol{\tau}}^{(i)}$
are presented in Appendix \ref{subsec:B.-Gradient-derivation}. Moreover,
the convergence of this inexact MM algorithm to a stationary point
can be guaranteed \cite[Theorem 1]{robust_recovery}.
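The one-step gradient update with Armijo backtracking described above can be sketched as follows (maximization form; the surrogate and its gradient are abstracted as callables, and the constants are illustrative defaults, not the authors' settings):

```python
import numpy as np

def armijo_ascent_step(f, x, g, gamma0=1.0, shrink=0.5, c=1e-4, max_backtracks=30):
    """Shrink the step size gamma until the objective increases by at least
    c * gamma * ||g||^2, then take one gradient-ascent step."""
    fx, gamma = f(x), gamma0
    for _ in range(max_backtracks):
        if f(x + gamma * g) >= fx + c * gamma * np.dot(g, g):
            break
        gamma *= shrink
    return x + gamma * g
```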
\subsection{Summary of the Turbo-BI Algorithm and Complexity Analysis}
The Turbo-BI algorithm is summarized in Algorithm \ref{alg:Turbo-BI-algorithm}.
Finally, we analyze the computational complexity of the proposed Turbo-BI
algorithm. It is observed that the computational complexity of the
Turbo-BI-E step is dominated by the matrix multiplication $\boldsymbol{\Phi}_{\mathit{m}}^{\mathit{H}}\boldsymbol{\Phi}_{m}$
in (\ref{eq:VApost_inv}), which is $\mathcal{O}\left(N_{m}L^{2}\right)$.
In the Turbo-BI-M step, the computational complexity of choosing a
suitable step size mainly depends on evaluating the cost function. We
denote the number of cost-function evaluations for $\gamma_{\boldsymbol{\delta}}$
and $\gamma_{\Delta\boldsymbol{\tau}}$ in each backtracking line
search as $R_{b,1}$ and $R_{b,2}$, respectively. Then, the complexities
of choosing $\gamma_{\boldsymbol{\delta}}$ and $\gamma_{\Delta\boldsymbol{\tau}}$
are $\mathcal{O}\left(NL^{2}R_{b,1}\right)$ and $\mathcal{O}\left(NL^{2}R_{b,2}\right)$,
respectively. Besides, the complexities of calculating $\boldsymbol{\zeta}_{\boldsymbol{\delta}}$
and $\boldsymbol{\zeta}_{\Delta\boldsymbol{\tau}}$ are $\mathcal{O}\left(NL^{2}\right)$
and $\mathcal{O}\left(NL\right)$, based on matrix multiplication.
Hence, the overall computational complexity of the Turbo-BI
algorithm is $\mathcal{O}\left(NL^{2}I_{in}+NL^{2}(R_{b,1}+R_{b,2})\right)$
per iteration, where $I_{in}$ denotes the number of Turbo iterations
for convergence.
\begin{algorithm}[t]
{\small{}\caption{\label{alg:Turbo-BI-algorithm}Turbo-BI algorithm}
}{\small\par}
\textbf{Input:} $\boldsymbol{y}$, $\boldsymbol{\Phi}_{m},\forall m$,
maximum iteration number $I_{EM}$, threshold $\epsilon$.
\textbf{Output:} $\boldsymbol{\delta}^{\ast}$, $\Delta\boldsymbol{\tau}^{\ast}$.
\begin{algorithmic}[1]
\FOR{$i=1,\cdots,I_{EM}$}
\STATE \textbf{Turbo-BI-E Step:}
\WHILE{not converge}
\STATE \textbf{\%Module A: LMMSE Estimator}
\STATE Initialize $\mathbf{x}_{A,m}^{pri}=\boldsymbol{0}$ and $v_{A,m}^{pri}$.
\STATE Update $\mathbf{V}_{A,m}^{post}$ and $\mathbf{x}_{A,m}^{post}$,
using (\ref{eq:VApost_inv}) and (\ref{eq:XAPOST}).
\STATE Update the extrinsic information $v_{B,m}^{pri}=\frac{1}{L}\sum_{i=1}^{L}v_{A,m,i}^{ext}$
and $\mathbf{x}_{B,m}^{pri}=\mathbf{x}_{A,m}^{ext}$, using (\ref{eq:vAext})
and (\ref{eq:xAext}).
\STATE \textbf{\% Module B: Sparsity Combiner}
\STATE Perform message passing over the factor graph $\mathcal{G}$
using (\ref{eq:MP1}) - (\ref{eq:MP4}).
\STATE Calculate the approximate posterior distributions $p\left(x_{m,l}\mid\mathbf{x}_{B}^{pri}\right)$
using (\ref{eq:MP5}).
\STATE Update $x_{B,m,l}^{post}$ and $\ensuremath{v_{B,m}^{post}}$
using (\ref{eq:MP6}) and (\ref{eq:MP7}).
\STATE Update the extrinsic information $\mathbf{x}_{A,m}^{pri}=\mathbf{x}_{B,m}^{ext}$
and $v_{A,m}^{pri}=v_{B,m}^{ext}$, using (\ref{eq:xBext}) and (\ref{eq:vBext}).
\ENDWHILE
\STATE \textbf{Turbo-BI-M Step:}
\STATE Construct the surrogate function in (\ref{eq:surrogatefunc})
using $\mathbf{x}_{A,m}^{post}$ and $\mathbf{V}_{A,m}^{post}$, which
is from the Turbo-BI-E step.
\STATE Update $\boldsymbol{\delta}$ and $\Delta\boldsymbol{\tau}$,
using (\ref{eq:updaterule_delta}) and (\ref{eq:updaterule_tau}).
\IF{$\left\Vert \boldsymbol{\delta}^{(i+1)}-\boldsymbol{\delta}^{(i)}\right\Vert \leq\epsilon$
and $\left\Vert \Delta\boldsymbol{\tau}^{(i+1)}-\Delta\boldsymbol{\tau}^{(i)}\right\Vert \leq\epsilon$}
\STATE \textbf{\textcolor{black}{break}}
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
\section{\label{sec:PSO-LS-algorithm-in}PSO-LS Algorithm in Stage 2}
In Stage 2, we aim to fully exploit the frequency band apertures gain
and perform a refined delay estimation based on the estimation result
from Stage 1. First, we reformulate the refined estimation signal
model (\ref{eq:refined_signal}) as a linear form:
\begin{equation}
\mathbf{y}=\mathbf{H\left(\boldsymbol{\theta}\right)x}+\boldsymbol{w},
\end{equation}
where $\mathbf{H\left(\boldsymbol{\theta}\right)}=\left[\begin{array}{ccc}
\mathbf{h}_{11} & \cdots & \mathbf{h}_{1K}\\
\vdots & \ddots & \vdots\\
\mathbf{h}_{M1} & \cdots & \mathbf{h}_{MK}
\end{array}\right]\in\mathbb{C}^{N\times K}$, $h_{mk}(n)=e^{-j2\pi\left(f_{c,m}^{\prime}+nf_{s,m}\right)\tau_{k}}e^{-j2\pi nf_{s,m}\delta_{m}}e^{j\varphi_{m}^{\prime}}$
denotes the $n$-th element of the column vector $\mathbf{h}_{mk}$,
$\mathbf{\boldsymbol{\theta}}=[\tau_{1},...,\tau_{K},$$\delta_{1},...,\delta_{M},\varphi_{2}^{\prime},...,\varphi_{M}^{\prime}]^{T}\in\mathbb{R}^{(K+2M-1)\times1}$
denotes the vector consisting of unknown parameters, $\mathbf{x}=\left[\alpha_{1}^{\prime},\ldots,\alpha_{K}^{\prime}\right]^{T}\in\mathbb{C}^{K\times1}$,
and $\boldsymbol{w}\sim\mathcal{C}\mathcal{N}(0,\sigma_{ns}^{2}\mathbf{I})\in\mathbb{C}^{N\times1}$.
We adopt the MAP method for estimation, which takes the prior information
of $\delta_{m}$ into consideration. Then, the optimization problem
can be formulated as
\begin{eqnarray}
\mathcal{P}_{1}: & \underset{\boldsymbol{\theta},\mathbf{x}}{\max} & \ln\thinspace p\left(\mathbf{y}|\mathbf{x}\right)+\sum_{m=1}^{M}\textrm{ln}\thinspace p\left(\delta_{m}\right)\nonumber \\
& \text{s.t. } & 0\leq\varphi_{m}^{\prime}\leq2\pi,\forall m\in\left\{ 2,\cdots,M\right\} ,\label{eq:P1}
\end{eqnarray}
where $p\left(\mathbf{y}|\mathbf{x}\right)\wasypropto e^{-\frac{\|\mathbf{y}-\mathbf{H\left(\theta\right)x}\|^{2}}{\sigma_{ns}^{2}}}$
is the likelihood function. After an equivalent transformation, $\mathcal{P}_{1}$
can be reformulated as
\begin{eqnarray}
\mathcal{P}_{2}: & \underset{\boldsymbol{\theta},\mathbf{x}}{\min} & \frac{\|\mathbf{y}-\mathbf{H\left(\boldsymbol{\theta}\right)x}\|^{2}}{\sigma_{ns}^{2}}+\sum_{m=1}^{M}\frac{\delta_{m}^{2}}{2\sigma_{p}^{2}}\nonumber \\
& \text{\textrm{s.t.}} & 0\leq\varphi_{m}^{\prime}\leq2\pi,\forall m\in\left\{ 2,\cdots,M\right\} .\label{eq:P2}
\end{eqnarray}
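For reference, the $\mathcal{P}_{2}$ objective, which also serves as the PSO fitness later in this section, is straightforward to evaluate; a NumPy sketch (illustrative only):

```python
import numpy as np

def fitness_p2(y, H, x, delta, sigma2_ns, sigma2_p):
    """P2 objective: noise-normalized residual plus the Gaussian prior
    penalty on the timing offsets delta (smaller fitness is better)."""
    resid = np.linalg.norm(y - H @ x) ** 2 / sigma2_ns
    prior = np.sum(np.asarray(delta) ** 2) / (2.0 * sigma2_p)
    return resid + prior
```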
The non-convex problem $\mathcal{P}_{2}$ has a multimodal and multi-dimensional
objective function, which is extremely challenging to solve due to
the existence of numerous local optima. In this case, conventional
algorithms, such as gradient descent and exhaustive search algorithms,
have high computational complexity or are easily trapped in local
optima. To overcome these drawbacks, we adopt the PSO method \cite{PSO1,PSO2,PSOCL3,PSOHeterogeneous4}
to find a good solution for the non-convex optimization problem $\mathcal{P}_{2}$.
In the PSO method, we employ a number of particles, each representing
a potential solution, and iteratively search for the optimal solution,
where the search space is bounded by the constraints of the target
optimization problem \cite{2001Swarm}.
low computational complexity and a strong optimization ability for
complex multimodal optimization problems \cite{PSOCL3}. Generally,
PSO starts with a random initialization of the particles' locations
in a large search space. In our problem $\mathcal{P}_{2}$, however,
we can narrow down the search space based on the coarse estimation
results from Stage 1.
Specifically, we set the search space as
\begin{equation}
\mathcal{S}=[\boldsymbol{\beta}-\mathbf{e},\boldsymbol{\beta}+\mathbf{e}]\in\mathbb{R}^{D\times1},\label{eq:searchspace}
\end{equation}
where $\boldsymbol{\beta}$ consists of the coarse estimation
results, $\mathbf{e}$ is the search range obtained either by
evaluating the mean squared error (MSE) of the coarse estimation results
based on offline training or by evaluating the Cram\'er-Rao bound (CRB)
based on the coarse signal model (\ref{eq:coarse_signal}), and
$D$ denotes the dimension of the particles (search space). Since
there is no estimation for $\varphi_{m}^{\prime}$ in Stage 1, the
search space for $\varphi_{m}^{\prime}$ is set to be $\left[0,2\pi\right]$.
Note that the search space $\mathcal{S}$ is a real set with dimension $D=3K+2M-1$,
since the $K$-dimensional complex vector $\mathbf{x}$ can be seen
as a $2K$-dimensional real vector in a real-domain search space. Then,
for the $i$-th iteration, each particle searches for a better location
by updating its velocity and position based on the following two equations:
\begin{equation}
V_{q,d}^{(i+1)}=\omega V_{q,d}^{(i)}+c_{1}r_{1,q,d}\left(pbest_{q,d}^{(i)}-X_{q,d}^{(i)}\right)+c_{2}r_{2,q,d}\left(gbest_{d}^{(i)}-X_{q,d}^{(i)}\right),\label{eq:velocityupdate}
\end{equation}
\begin{equation}
X_{q,d}^{(i+1)}=X_{q,d}^{(i)}+V_{q,d}^{(i+1)},\label{eq:positionupdate}
\end{equation}
where $c_{1}$ and $c_{2}$ are acceleration coefficients, and $\omega$
is the inertia factor, introduced to balance the global and
local search abilities \cite{PSO_first}. Note that $X_{q,d}^{(i)}$
and $V_{q,d}^{(i)}$ are the position and velocity of the $d$-th
element of the $q$-th particle, respectively, where $q\in\left\{ 1,2,...,Q_{p}\right\} ,d\in\left\{ 1,2,...,D\right\} $
with $Q_{p}$ denoting the number of particles. $r_{1,q,d}$ and $r_{2,q,d}$
are two independent random numbers uniformly distributed within $[0,1]$.
Besides, $gbest_{d}^{(i)}$ denotes the best position at the $d$-th
dimension found by the whole swarm so far, and $pbest_{q,d}^{(i)}$ denotes
the best position at the $d$-th dimension that the $q$-th particle has
come across up to the $i$-th iteration. The goodness of a particle position is measured
by the objective function value in $\mathcal{P}_{2}$ (also called
fitness in PSO algorithms). In our problem, a smaller fitness indicates
a better position.
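One PSO iteration implementing (\ref{eq:velocityupdate}) and (\ref{eq:positionupdate}) can be sketched in vectorized form as follows; the clipping of positions back into the box search space is one simple way to enforce the bounds (an implementation choice of this sketch, not prescribed above):

```python
import numpy as np

def pso_step(X, V, pbest, gbest, w, c1, c2, lo, hi, rng):
    """Update velocities and positions of all Q_p particles at once.
    X, V, pbest: (Q_p, D); gbest: (D,); lo/hi bound the search space."""
    Q, D = X.shape
    r1, r2 = rng.random((Q, D)), rng.random((Q, D))
    V_new = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    X_new = np.clip(X + V_new, lo, hi)
    return X_new, V_new
```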
To obtain a better solution with the PSO algorithm, we aim to reduce
the dimension of the search space of problem $\mathcal{P}_{2}$, and
thus propose the PSO-LS algorithm. We first employ primal-decomposition
theory to decouple problem $\mathcal{P}_{2}$ as:
\begin{eqnarray}
\mathcal{P}_{3}: & \underset{\boldsymbol{\theta}}{\min} & \frac{\|\mathbf{y}-\mathbf{H\left(\boldsymbol{\theta}\right)\mathit{g}^{*}(\boldsymbol{\theta})}\|^{2}}{\sigma_{ns}^{2}}+\sum_{m=1}^{M}\frac{\delta_{m}^{2}}{2\sigma_{p}^{2}}\nonumber \\
& \text{s.t. } & 0\leq\varphi_{m}^{\prime}\leq2\pi,\forall m\in\left\{ 2,\cdots,M\right\} ,\label{eq:LSPSO}
\end{eqnarray}
where $g^{*}(\boldsymbol{\theta})=\underset{\mathbf{x}}{\mathrm{argmin}}\|\mathbf{y}-\mathbf{H\left(\boldsymbol{\theta}\right)x}\|^{2}$.
Then we can obtain the least-squares (LS) solution of $g^{*}(\boldsymbol{\theta})$
as
\begin{equation}
g^{*}(\boldsymbol{\theta})=\left(\mathbf{H\left(\boldsymbol{\theta}\right)}^{H}\mathbf{H\left(\boldsymbol{\theta}\right)}\right)^{-1}\mathbf{H\left(\boldsymbol{\theta}\right)}^{H}\mathbf{y}.
\end{equation}
Using this method, the dimension of the search space is reduced from $\mathbb{R}^{3K+2M-1}$
to $\mathbb{R}^{K+2M-1}$. Then, we analyze the computational complexity
of the PSO-LS algorithm. The complexity mainly depends on the calculations
of the objective function in (\ref{eq:LSPSO}), which is $\mathcal{O}\left(NK^{2}Q_{p}\right)$
per iteration. Finally, the proposed PSO-LS algorithm is presented
in Algorithm \ref{alg:PSO-LSalgorithm}.
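In practice, the inner LS step can be evaluated with a numerically stable solver instead of forming the normal equations $(\mathbf{H}^{H}\mathbf{H})^{-1}\mathbf{H}^{H}\mathbf{y}$ explicitly; a minimal sketch (an implementation choice of ours, equivalent for full column-rank $\mathbf{H}$):

```python
import numpy as np

def ls_amplitudes(H, y):
    """g*(theta): least-squares amplitudes x minimizing ||y - H x||^2
    for the H(theta) of the current particle."""
    return np.linalg.lstsq(H, y, rcond=None)[0]
```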
\begin{algorithm}[t]
{\small{}\caption{\label{alg:PSO-LSalgorithm}PSO-LS algorithm}
}{\small\par}
\textbf{Input:} $\mathbf{y}$, search space $\mathcal{S}$, $Q_{p}$,
maximum iteration number $I_{PSO}$, threshold $\epsilon$.
\textbf{Output:} $\mathbf{gbest}$.
\begin{algorithmic}[1]
\STATE Initialize $X_{q,d}^{(1)}$ within the search space $\mathcal{S}$
in (\ref{eq:searchspace}), $V_{q,d}^{(1)}$, $gbest_{d}^{(1)}$,
$pbest_{q,d}^{(1)}$, $\forall q,d$.
\FOR{ $i=1,\cdots,I_{PSO}$}
\FOR{$q=1,\cdots,Q_{p}$}
\STATE Update the velocity of particle $q$ using (\ref{eq:velocityupdate}).
\STATE Update the position of particle $q$ using (\ref{eq:positionupdate}).
\STATE Calculate the fitness of particle $q$ based on the objective
in (\ref{eq:LSPSO}).
\STATE Update $pbest_{q,d}^{(i+1)}$ and $gbest_{d}^{(i+1)}$, $\forall q,d$.
\ENDFOR
\IF{$\left\Vert \mathbf{gbest}^{(i+1)}-\mathbf{gbest}^{(i)}\right\Vert \leq\epsilon$}
\STATE \textbf{\textcolor{black}{break}}
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
\section{\label{sec:Simulation-Results}Simulation Results}
In this section, we provide numerical results to evaluate the performance
of the proposed schemes and draw useful insights. In the default setup,
we consider that CSI samples are collected using one OFDM training
symbol with subcarrier spacing $f_{s,1}=f_{s,2}=60$ kHz and subband
bandwidth $B_{1}=B_{2}=40$ MHz at $M=2$ subbands, with central frequencies
$f_{c,1}=1.80$ GHz and $f_{c,2}=2.02$ GHz. $\varphi_{m},\forall m$
and $\delta_{m},\forall m$ are generated following a uniform distribution
within $[0,2\pi]$ and a Gaussian distribution $\mathcal{N}\left(0,\sigma_{p}^{2}\right)$,
respectively. We use the Quadriga platform to generate multiband CSI
samples in an indoor factory (InF) scenario, which is described in
3GPP R16 \cite{3gpp_Rel16}. Other system parameters are set as follows
unless otherwise specified: signal-to-noise ratio
(SNR) is 7 dB, $I_{EM}=100$, $I_{PSO}=500$,
$Q_{p}=100$, $\epsilon=10^{-5}$, $c_{1}=2.5$, $c_{2}=0.5$, and
$\omega=0.99-\frac{0.79}{I_{PSO}}t$ for the $t$-th PSO iteration.
To assess the performance of the schemes, we adopt the empirical
cumulative distribution function (CDF) of the LoS path delay estimation
errors using 200 Monte Carlo trials.
For comparison, we consider the following four benchmark schemes:
\begin{itemize}
\item \textbf{Turbo-BI algorithm:} We adopt the coarse estimation results
of the Turbo-BI algorithm in Stage 1 as one of the benchmarks.
\item \textbf{Multiband weighted delay estimation (MBWDE) algorithm \cite{ESPRIT2}:}
It adopts a weighted subspace fitting algorithm for delay estimation,
which is able to exploit the frequency band apertures gain.
\item \textbf{Two-stage gradient descent (TSGD) scheme:} This scheme has
the same coarse estimation implementation as the TSGE scheme in Stage
1, and then the gradient descent method is employed to perform the refined
estimation in Stage 2, with the initial point generated from the coarse
estimation results of Stage 1.
\item \textbf{Ideal gradient descent (IGD) scheme:} This scheme adopts the gradient
descent method based on problem $\mathcal{P}_{2}$, where we set the
initial point to be the true values. In fact, this scheme is infeasible
because we cannot know the true values of the estimated parameters
in a practical environment. In the simulations, however, we can regard
this scheme as an ideal benchmark to show the effectiveness of the
TSGE scheme.
\end{itemize}
\subsection{Impact of the factor $\boldsymbol{\delta}$}
We first study the impact of the receiver timing offset $\boldsymbol{\delta}$.
Since the InF scenario does not consider the effect of $\boldsymbol{\delta}$,
we construct a two-path channel model with Rayleigh distributed magnitudes.
The delays are set to follow a uniform distribution within $[20,200]$
ns. Fig. \ref{fig:CDF_delta} depicts the CDF of LoS path delay estimation
errors for different standard deviation $\sigma_{p}$ achieved by
TSGE. As can be seen, the factor $\boldsymbol{\delta}$ significantly
degrades the estimation performance. Besides, when $\boldsymbol{\delta}$
is considered, the estimation performance mainly depends on the prior
standard deviation $\sigma_{p}$, and the delay estimation errors
increase with $\sigma_{p}$. This is reasonable since a larger $\sigma_{p}$
means that we have less prior information about $\boldsymbol{\delta}$. Therefore,
in the following simulations, we assume that $\sigma_{p}=0$ ns in
order to avoid its effect on the estimation performance.
\begin{figure}[t]
\centering{}\includegraphics[width=10cm]{delta_pso2}\caption{\label{fig:CDF_delta}The CDF of LoS path delay estimation errors
for different $\sigma_{p}$. }
\end{figure}
\subsection{Performance of TSGE Scheme}
In Fig. \ref{fig:Performance compare}, we compare the delay estimation
performance of the TSGE scheme with the benchmarks. First, we observe
that the CS-based schemes (i.e., TSGE, Turbo-BI, and TSGD) and the
IGD scheme achieve better performance than the subspace-based algorithm,
i.e., MBWDE. This is mainly because the received CSI samples are
collected using only a single OFDM training symbol, which limits
the ability of subspace-based algorithms to suppress noise
interference. In contrast, CS-based schemes strongly reduce
the effects of noise through sparse signal reconstruction and
thus achieve better performance. Second, it can be seen that
the proposed TSGE scheme significantly outperforms Turbo-BI and
TSGD. This is because the TSGE scheme exploits the extra frequency band
aperture gain compared to the Turbo-BI algorithm. Moreover, note
that the likelihood function based on the refined signal model (\ref{eq:refined_signal})
has numerous local optima, which requires strong global search
ability from the algorithms in Stage 2. Compared to the gradient descent
algorithm in the TSGD scheme, the PSO-LS algorithm in the TSGE scheme
has stronger global search ability and thus achieves higher estimation
accuracy than TSGD. Finally, it is observed that the CDF curve of
TSGE approaches that of IGD, which indicates a negligible performance
loss.
\begin{figure}[t]
\centering{}\includegraphics[width=10cm]{resultcdf2}\caption{\label{fig:Performance compare}The CDF of LoS path delay estimation
errors for different schemes.}
\end{figure}
In Table \ref{tab:CPU-time-and}, we investigate the computing cost
and estimation accuracy of the considered algorithms, i.e., the
PSO-LS algorithm with different numbers of particles and iterations,
the primal PSO algorithm based on problem $\mathcal{P}_{2}$, and
the MBWDE algorithm. The computing cost is characterized by the
CPU time spent running the algorithms on an Intel Xeon 6248R CPU.
For fairness, we set the same initial estimation values for all algorithms.
It can be readily seen that PSO-LS achieves higher
estimation accuracy with less computing cost than primal PSO, which
validates the effectiveness of the proposed PSO-LS algorithm. In
addition, the proposed PSO-LS algorithm achieves better
estimation accuracy than MBWDE with nearly equal computing cost. Furthermore,
we can see that the PSO-LS algorithm achieves a trade-off
between computing cost and estimation performance, which implies
that it can be adapted to various scenarios. In particular, when
sufficient computing resources are available, we can use a large number
of particles and iterations to pursue the highest estimation accuracy.
Conversely, when computing resources are limited, we can use a small
number of particles and iterations to obtain a relatively accurate estimate.
\begin{table*}[t]
\begin{centering}
\caption{\label{tab:CPU-time-and}CPU time and RMSE comparison.}
\par\end{centering}
\centering{
\begin{tabular}{|c|>{\centering}m{3.3cm}|>{\centering}m{3.3cm}|>{\centering}m{3.4cm}|c|}
\hline
& \multicolumn{1}{>{\centering}m{3.3cm}|}{{\small{}PSO-LS}{\small\par}
{\footnotesize{}(}\textcolor{black}{\footnotesize{}$I_{PSO}=500$}{\footnotesize{},$Q_{p}=100$)}} & {\small{}PSO-LS}{\small\par}
{\footnotesize{}(}\textcolor{black}{\footnotesize{}$I_{PSO}=100$}{\footnotesize{},
$Q_{p}=20$)} & {\small{}primal PSO}{\small\par}
{\footnotesize{}(}\textcolor{black}{\footnotesize{}$I_{PSO}=500$}{\footnotesize{},
$Q_{p}=100$)} & {\small{}MBWDE}\tabularnewline
\hline
{\small{}CPU time (s)} & {\small{}32.8} & {\small{}1.3} & {\small{}20.3} & {\small{}0.8}\tabularnewline
\hline
{\small{}RMSE (ns)} & {\small{}0.2876} & {\small{}0.5425} & {\small{}0.5517} & {\small{}1.671}\tabularnewline
\hline
\end{tabular}
\end{table*}
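The particle/iteration trade-off in the table can be reproduced qualitatively with a generic particle swarm loop. The sketch below is not the paper's PSO-LS (there is no least-squares step, and the multimodal objective, inertia weight, and acceleration coefficients are all assumed); it only illustrates how a swarm escapes the local optima of an oscillatory delay likelihood:

```python
import numpy as np

def objective(tau):
    # toy surrogate for an oscillatory multiband likelihood: many local
    # optima, global minimum at tau = 50 (ns)
    return -np.cos(0.8 * (tau - 50.0)) + 0.001 * (tau - 50.0) ** 2

def pso(n_particles=30, n_iter=200, lo=20.0, hi=200.0,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = np.linspace(lo, hi, n_particles)        # deterministic initial spread
    v = np.zeros(n_particles)
    pbest, pbest_f = x.copy(), objective(x)     # personal bests
    g = pbest[np.argmin(pbest_f)]               # global best position
    for _ in range(n_iter):
        r1 = rng.random(n_particles)
        r2 = rng.random(n_particles)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = objective(x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)]
    return g

tau_hat = pso()
```

Increasing `n_particles` and `n_iter` improves the chance of locating the global basin at the cost of proportionally more objective evaluations, mirroring the CPU-time/RMSE trade-off reported in the table.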
\subsection{Impact of Frequency Band Spacing}
Fig. \ref{fig:Impact-frequency} illustrates the root mean square
error (RMSE) of the LoS delay estimation versus the frequency band
spacing for different schemes, when $\textrm{SNR}=10$ dB and the bandwidths
are $B_{1}=B_{2}=60$
MHz. We define $\mathrm{RMSE}(\hat{\tau})\triangleq\sqrt{\mathbb{E}\left\{ (\hat{\tau}-\tau)^{2}\right\} }$,
where $\hat{\tau}$ is the estimated LoS delay and $\tau$ is the
true LoS delay. In particular, we fix $f_{c,1}$ and change $f_{c,2}$
to obtain different frequency band spacings. It can be seen that the RMSE
decreases as the frequency band spacing increases for MBWDE, TSGE,
and IGD. This is because these schemes are able to exploit the frequency
band aperture gain, which enlarges as the frequency band spacing increases.
Furthermore, the TSGD scheme performs better than the Turbo-BI
algorithm when the frequency band spacing is narrow, but the performance
gap shrinks as the spacing increases. When the frequency
band spacing is 260 MHz, the performance of TSGD is even worse than
that of Turbo-BI. This is reasonable, since the oscillation of the likelihood
function becomes more severe as the frequency band spacing increases,
which creates more spurious local optima and makes it harder
to fully exploit the frequency band aperture gain. TSGD
has limited global search ability, which results in a high probability
of getting stuck in local optima. Hence, TSGD can only exploit part
of the frequency band aperture gain and yields poor delay estimation
performance at large frequency band spacings. Besides, TSGE shows more
stable performance than TSGD due to its strong global optimization ability.
Finally, we observe that the performance of Turbo-BI is insensitive to the
frequency band spacing, since it only exploits the subcarrier aperture
gain in Stage 1.
\begin{figure}[t]
\centering{}\includegraphics[width=10cm]{result_freband2}\caption{\label{fig:Impact-frequency}RMSE of LoS delay estimation versus frequency
band spacing.}
\end{figure}
\subsection{Impact of SNR}
In Fig. \ref{fig:Impact-snr}, we show the impact of SNR on the delay
estimation performance. As can be seen, the RMSE of all schemes decreases
as the SNR increases, owing to the reduced noise interference in the
delay estimation. Besides, the performance gap of TSGE and IGD
over Turbo-BI widens as the SNR increases.
\begin{figure}[t]
\centering{}\includegraphics[width=10cm]{SNR4_30pso_124}\caption{\label{fig:Impact-snr}RMSE of LoS delay estimation versus SNR.}
\end{figure}
\subsection{Impact of the Bandwidth}
In Fig. \ref{fig:Impact-Bandwidth}, we investigate the RMSE of the
LoS delay estimation versus the bandwidth with $\textrm{SNR}=10$
dB. It is observed that the delay estimation accuracy improves with
the bandwidth, mainly because a larger bandwidth provides more
subcarrier aperture gain. Furthermore, we observe that the performance
gain of TSGE over MBWDE increases with the bandwidth.
\begin{figure}[t]
\centering{}\includegraphics[width=10cm]{bandwidth10dBN120\lyxdot 93}\caption{\label{fig:Impact-Bandwidth}RMSE of LoS delay estimation versus the
bandwidth.}
\end{figure}
\section{\label{sec:Conclusion}Conclusion}
In this paper, we studied a delay estimation problem in a multiband
OFDM system with phase distortion factors taken into account. We proposed
a novel two-stage global estimation scheme that fully exploits the
multiband gains to improve the delay estimation performance. In particular,
in Stage 1, we performed a common-support-based sparse representation
of the coarse estimation signal model and obtained a coarse delay
estimate using the Turbo-BI algorithm. Then, aided by the
coarse estimation results, we performed a more refined delay
estimation in Stage 2 by employing the proposed PSO-LS algorithm based on the
refined signal model. Finally, simulation results showed
that the proposed TSGE scheme achieves superior performance over the baseline
algorithms in the InF scenario. Future work may consider a multiband
delay estimation problem for localization in a multiple-input multiple-output
(MIMO) system.
\section{Introduction}
Consider a random variable $X$
and a quantum state $\st$
correlated to $X$.
The task of randomness extraction
from source $X$ against side information $\st$
is to distill an almost random key $S$ from $X$.
Here the randomness of $S$
is measured by the trace distance
between the composite systems $(S,\st)$ and $(U,\st)$,
where $U$ is a random variable
uniformly distributed and independent of $\st$.
A major application of randomness extraction is
privacy amplification \cite{bbcm95,rk05},
whose task is to transform
a partially secure key into a highly secure key
in the presence of an adversary with side information.
It has been shown that
a two-universal hash function \cite{cw79}
can be used to provide randomness extraction
against quantum side information,
in which the extractable key length
is lower-bounded by a quantum generalization
of the conditional min-entropy \cite{rr05}.
The extractable key length can also be given by
a quantum generalization of the conditional collision entropy
(conditional {R\'{e}nyi} entropy of order 2) \cite{rr05,rk05}.
It should be stated that
the collision entropy is lower-bounded by the min-entropy
and so gives a better extractable key length
than the min-entropy,
while the min-entropy has several useful properties
such as the monotonicity under quantum operations.
A way to obtain a tighter bound on the length
of an extractable almost random key
is to generalize entropies
by smoothing.
In fact,
the existing extractable key lengths
have been described by smooth entropies,
most of which are defined as the maximization
of entropies
with respect to quantum states
within a small ball (see e.g.
\cite{ha12,rr05,rk05,tcr09,tssr11}).
More precisely,
let $\hil_{A}$ and $\hil_{B}$ be finite-dimensional Hilbert spaces,
and $\cls({\cal{H}})$ and $\cls_{\le}({\cal{H}})$
denote the sets of normalized and sub-normalized quantum states
on a Hilbert space ${\cal{H}}$, respectively;
then, for example,
the smooth min-entropy $H_{\mathrm{min}}^{\epsilon}(A|B)_{\rho}$
of system $A$ conditioned on system $B$
of a state $\rho\in\cls_{\le}(\hil_{A}\otimes\hil_{B})$
is defined by
\begin{align*}
H_{\mathrm{min}}^{\epsilon}(A|B)_{\rho}
&=\max_{\rho'\in B^{\epsilon}(\rho)}
H_{\mathrm{min}}(A|B)_{\rho'},\\
H_{\mathrm{min}}(A|B)_{\rho}
&=\max_{\sg_{B}\in\cls(\hil_{B})}\sup\{\lambda|2^{-\lambda}\mathbb{I}_{A}\otimes\sg_{B}\ge\rho\},
\end{align*}
where $\mathbb{I}_{A}$ denotes the identity operator on $\hil_{A}$,
and $B^{\epsilon}(\rho)
=\big\{\rho'\in\cls_{\le}({\cal{H}})\big|C(\rho,\rho')\le\epsilon\big\}$
for $\epsilon>0$ and $\rho\in\cls({\cal{H}})$
with $C(\rho,\rho')=\sqrt{1-F^{2}(\rho,\rho')}$
and $F(\rho,\rho')=\mathrm{Tr}\big|\sqrt{\rho}\sqrt{\rho'}\big|$.
The hypothesis relative entropy~\cite{th13,wr12}
is not based on this smoothing,
but is conditioned in the same way as above,
i.e. an operator of the form $\mathbb{I}_{A}\otimes\sg_{B}$
is introduced and the entropy is maximized
with respect to $\sg_{B}$.
This paper introduces a quantum generalization $R_{\epsilon}$
of the collision entropy
smoothed and conditioned differently from the existing ones
(Definition~\ref{grigr}).
Based on the fact that the collision entropy
is not subadditive,
its optimization $\ibr{\ren}_{\ep}$
with respect to additional side information,
which automatically satisfies the strong subadditivity
and so the data processing inequality (Proposition~\ref{dpineq}),
is also introduced (Definition~\ref{grigr}).
These generalized collision entropies $R_{\epsilon}$ and $\ibr{\ren}_{\ep}$
are shown to give a lower bound on the maximal key length
in randomness extraction against quantum side information
(Theorem~\ref{rethm}).
Moreover, a lower bound on $\ibr{\ren}_{\ep}$ for general states
is derived (Theorem~\ref{gqrlwb})
and used to show the asymptotic optimality of $\ibr{\ren}_{\ep}$ (Corollary~\ref{aopt}).
The general lower bound on $\ibr{\ren}_{\ep}$
is expressed as the difference between two unconditional entropies
and its evaluation reduces to the eigenvalue problem of two states,
the entire state and the marginal state of side information.
\section{Preliminaries}
Let ${\cal{H}}$ be a Hilbert space.
For an Hermitian operator $X$ on ${\cal{H}}$
with spectral decomposition
$X=\sum_{i}\lambda_{i}E_{i}$,
let $\{X\ge0\}$ denote the projection on ${\cal{H}}$
given by
\begin{align}
\nonumber
\{X\ge0\}=\sum_{i:\lambda_{i}\ge0}E_{i}.
\end{align}
The projections $\{X>0\}$, $\{X\le0\}$ and $\{X<0\}$
are defined analogously.
Let $A$ and $B$ be positive operators on ${\cal{H}}$.
The trace distance $d_{1}({A},{B})$
and the relative entropy $\re{A}{B}$
between $A$ and $B$
are defined as
\begin{align}
\nonumber
d_{1}({A},{B})
&=\frac{1}{2}\mathrm{Tr}[(A-B)(\{A-B>0\}
-\{A-B<0\})],\\
\nonumber
\re{A}{B}
&=\mathrm{Tr}[A(\log_{2}A-\log_{2}B)],
\end{align}
respectively,
where $\log_{2}$ denotes the logarithm to base $2$.
The von Neumann entropy of $A$ is defined as
\begin{align}
\nonumber
S(A)=-\mathrm{Tr}A\log_{2}A.
\end{align}
For an operator $X>0$,
let $\nom{X}$ denote the normalization of $X$;
that is, $\nom{X}=X/\mathrm{Tr}[X]$.
It then follows from
$S(\nom{A})\le\log_{2}\mathrm{rank}\nom{A}=\log_{2}\mathrm{rank}A$
and
$\re{\nom{A}}{\nom{B}}\ge0$
that
\begin{align}
\label{vne_ineq}
S(A)
&\le\mathrm{Tr}[A]\sbl{\log_{2}\mathrm{rank}A
-\log_{2}\mathrm{Tr}[A]},
\\
\label{qre_ineq}
\re{A}{B}
&\ge
\mathrm{Tr}[A]\sbl{\log_{2}\mathrm{Tr}[A]-\log_{2}\mathrm{Tr}[B]}.
\end{align}
More generally,
it can be shown that
inequality (\ref{vne_ineq}) holds for $A\ge0$
and
inequality (\ref{qre_ineq}) holds for $A,B\ge0$
such that $\mathrm{supp}A\subset\mathrm{supp}B$,
by using the convention $0\log_{2}0=0$,
which can be justified by
taking the limit,
$\lim_{\epsilon\rightarrow+0}\epsilon\log_{2}\epsilon=0$.
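Inequalities (\ref{vne_ineq}) and (\ref{qre_ineq}) can be checked numerically for unnormalized positive operators; in the sketch below the two diagonal matrices are arbitrary test values (both bounds turn out to be nearly tight for this choice):

```python
import numpy as np

def log2m(M):
    # operator logarithm base 2 via eigendecomposition (M > 0 assumed)
    w, V = np.linalg.eigh(M)
    return (V * np.log2(w)) @ V.conj().T

A = np.diag([0.3, 0.2])          # positive, Tr[A] = 0.5 < 1
B = np.diag([0.5, 0.4])          # positive, Tr[B] = 0.9

S_A = -np.trace(A @ log2m(A)).real                  # von Neumann entropy S(A)
D_AB = np.trace(A @ (log2m(A) - log2m(B))).real     # relative entropy D(A||B)

# (1): S(A) <= Tr[A] (log2 rank A - log2 Tr[A])
bound_S = np.trace(A).real * (np.log2(2) - np.log2(np.trace(A).real))
# (2): D(A||B) >= Tr[A] (log2 Tr[A] - log2 Tr[B])
bound_D = np.trace(A).real * (np.log2(np.trace(A).real) - np.log2(np.trace(B).real))
```

Note that for unnormalized operators $D(A\|B)$ can be negative, as it is here; the point of (\ref{qre_ineq}) is that it is bounded below by the trace terms.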
Let $f$ be an operator convex function
on an interval $J\subset\Re$.
Let $\{X_{i}\}_{i}$ be a set of
operators on ${\cal{H}}$ with their spectrum in $J$,
and
$\{C_{i}\}_{i}$ be a set of operators on ${\cal{H}}$
such that $\sum_{i}C_{i}^{\dag}C_{i}=\mathbb{I}$,
where
$\mathbb{I}$ is the identity operator on ${\cal{H}}$.
Then Jensen's operator inequality
for $f$, $\{X_{i}\}_{i}$ and $\{C_{i}\}_{i}$
is given by
\begin{align}
\label{opjensen}
f\Big(\sum_{i}C_{i}^{\dag}X_{i}C_{i}\Big)
\le
\sum_{i}C_{i}^{\dag}f(X_{i})C_{i}
\end{align}
(see e.g. \cite{bh96,hp03}).
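Since $-\log_{2}$ is operator convex on $(0,\infty)$, inequality (\ref{opjensen}) can be verified numerically. In the sketch below the operators $X_{i}$ and the family $\{C_{i}\}$ are random test data, constructed so that $C_{1}^{\dag}C_{1}+C_{2}^{\dag}C_{2}=\mathbb{I}$ holds exactly:

```python
import numpy as np

rng = np.random.default_rng(1)

def neg_log2(M):
    # operator function -log2 applied via eigendecomposition (M > 0)
    w, V = np.linalg.eigh(M)
    return (V * (-np.log2(w))) @ V.T

def rand_pd(n):
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)      # well-conditioned positive definite

n, t = 3, 0.3
X1, X2 = rand_pd(n), rand_pd(n)
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]     # orthogonal matrix
C1 = np.sqrt(t) * np.eye(n)
C2 = np.sqrt(1 - t) * Q                              # C1'C1 + C2'C2 = I

lhs = neg_log2(C1.T @ X1 @ C1 + C2.T @ X2 @ C2)
rhs = C1.T @ neg_log2(X1) @ C1 + C2.T @ neg_log2(X2) @ C2
gap = np.linalg.eigvalsh(rhs - lhs)    # Jensen: rhs - lhs >= 0
```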
Let ${\cal{X}}$ and ${\cal{S}}$ be finite sets
and ${\cal{G}}$ be a family of functions
from ${\cal{X}}$ to ${\cal{S}}$.
Let $G$ be a random variable
uniformly distributed over ${\cal{G}}$.
Then ${\cal{G}}$ is called two-universal,
and
$G$ is called a two-universal hash function \cite{cw79},
if
\begin{align}
\label{propuhf}
\mathrm{Pr}[G(x_{0})=G(x_{1})]\le\frac{1}{|{\cal{S}}|}
\end{align}
for every distinct $x_{0},x_{1}\in{\cal{X}}$.
For example,
the family of all functions from ${\cal{X}}$ to ${\cal{S}}$
is two-universal.
A more useful two-universal family is
that of all linear functions
from $\{0,1\}^{n}$ to $\{0,1\}^{m}$.
More efficient families,
which can be described using $O(n+m)$ bits
and have a polynomial-time evaluating algorithms,
are discussed in \cite{cw79,wc81}.
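For the linear family, property (\ref{propuhf}) in fact holds with equality: a uniformly random binary matrix maps the nonzero difference of two distinct inputs to zero with probability exactly $2^{-m}$. A short exhaustive check for $n=3$, $m=2$ (the two input values are chosen arbitrarily):

```python
import itertools

def hash_apply(A, x):
    # g_A(x) = A x over GF(2); A is a tuple of m row bitmasks, x an n-bit int
    return tuple(bin(row & x).count("1") % 2 for row in A)

n, m = 3, 2
rows = range(2 ** n)            # each row of A is an n-bit mask
x0, x1 = 0b011, 0b101           # any two distinct inputs
collisions = 0
total = 0
for A in itertools.product(rows, repeat=m):   # all 2^{nm} linear maps
    total += 1
    if hash_apply(A, x0) == hash_apply(A, x1):
        collisions += 1
# collisions / total equals 1/|S| = 2^{-m} exactly for this family
```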
Let $X$ be a random variable
on a finite set ${\cal{X}}$
and $\rst_{B}$ be a quantum state on a Hilbert space $\hil_{\opq}$.
In considering the composite system $(X,\rst_{B})$,
it may help to introduce
a Hilbert space $\hil_{X}$ with an orthonormal basis $\{|x\rangle\}_{x\in{\cal{X}}}$
and define
the classical-quantum state $\rho_{X\opq}$
by
\begin{align}
\nonumber
\rho_{X\opq}
=\sum_{x}
\pp_{\xx}|x\rangle\langle x|\otimes\st_{\xx},
\end{align}
where $\pp_{\xx}=\mathrm{Pr}[X=x]$ for $x\in{\cal{X}}$,
and $\st_{\xx}$
denotes the quantum state $\rst_{B}$
conditioned on $X=x$.
Let $\rho_{X\opq}$ be a classical-quantum state as above.
Then the distance $\sd(X|B)_{\rho}$ from uniform
of system $X$ given system $B$
can be defined as
\begin{align}
\nonumber
\sd(X|B)_{\rho}
=d_{1}(\rho_{\opxB},\sg_{X\opq})
\quad\text{with}\quad
\sg_{X\opq}=\frac{1}{d_{X}}\mathbb{I}_{X}\otimes\rho_{B},
\end{align}
where $d_{X}=|{\cal{X}}|$ is the dimension of $X$
and $\mathbb{I}_{X}$ is the identity operator on $X$.
Instead of the trace distance,
another distance measure
may be used to define the distance from uniform.
For example,
the relative entropy
can be used
to define the distance from uniform of the form
\[
\sr(X|B)_{\rho}
=\re{\rho_{\opxB}}{\sg_{X\opq}}.
\]
Here, quantum Pinsker's inequality
$\big(d_{1}(\rho,\sigma)\big)^{2}\le\re{\rho}{\sigma}$
(see~\cite{op93})
gives
\begin{align}
\nonumber
\big(\sd(X|B)_{\rho}\big)^{2}\le\sr(X|B)_{\rho},
\end{align}
which ensures that
an upper bound on $\sr(X|B)_{\rho}$
also gives
an upper bound on $\sd(X|B)_{\rho}$.
Therefore,
in this paper, we will use $\sr(X|B)_{\rho}$,
instead of $\sd(X|B)_{\rho}$,
as the measure of the distance from uniform.
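Both distance measures are straightforward to evaluate for a small classical-quantum state. The sketch below builds $\rho_{XB}$ as a block-diagonal matrix (the distribution and the conditional states are arbitrary test data) and checks Pinsker's inequality for the distance from uniform:

```python
import numpy as np

def log2m(M):
    w, V = np.linalg.eigh(M)
    w = np.where(w > 1e-12, w, 1.0)   # 0 log 0 = 0 convention on the kernel
    return (V * np.log2(w)) @ V.conj().T

p = [0.8, 0.2]                                     # distribution of X
rho_x = [np.array([[1.0, 0.0], [0.0, 0.0]]),       # conditional states on B
         np.array([[0.5, 0.5], [0.5, 0.5]])]
rho_XB = np.zeros((4, 4))
for x in range(2):                                 # block-diagonal cq state
    rho_XB[2 * x:2 * x + 2, 2 * x:2 * x + 2] = p[x] * rho_x[x]
rho_B = p[0] * rho_x[0] + p[1] * rho_x[1]
sigma_XB = np.kron(np.eye(2) / 2, rho_B)           # uniform X decoupled from B

d1 = 0.5 * np.abs(np.linalg.eigvalsh(rho_XB - sigma_XB)).sum()   # Delta(X|B)
Delta_D = np.trace(rho_XB @ (log2m(rho_XB) - log2m(sigma_XB))).real  # D-distance
```

Here the support of $\rho_{XB}$ is contained in that of $\sigma_{XB}$ (since $\rho_{B}$ is full rank), so the relative-entropy distance is finite and bounds the squared trace-distance version, as Pinsker's inequality asserts.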
\section{Randomness extraction}
First, we introduce a quantum generalization of
the (smoothed) conditional collision entropy
and its optimization
with respect to additional side information.
\begin{definition}
\label{grigr}
Let $\hil_{A}$ and $\hil_{B}$ be Hilbert spaces,
and $\st_{AB}$ be a quantum state on $\hil_{A}\otimes\hil_{B}$.
For $\epsilon\ge0$,
the information spectrum collision entropy\footnote{%
This name of $R_{\epsilon}$ follows that of
the information spectrum relative entropy
$D^{\epsilon}_{s}(\rho||\sigma)=\sup\{R|\mathrm{Tr}[\rho\{\rho\le2^{R}\sigma\}]\le\epsilon\}$,
which can be considered as an entropic version
of the quantum information spectrum, $\underline{D}$ and $\overline{D}$
(see~\cite{th13}).} %
$\ren_{\ep}(A|B)_{\st}$ of system $A$ conditioned on system $B$
of a state $\rho$
is given by
\begin{align}
\nonumber
\ren_{\ep}(A|B)_{\st}=\sup_{\lambda}
\big\{
\lambda{\big|}
\mathrm{Tr}\bbl{\mbl{\rxro_{B}
-\ext{-\lambda}\st_{B}^{2}\le0}\st_{B}}\ge1-\epsilon
\big\},
\end{align}
where we have introduced
\begin{align}
\nonumber
\rxro_{B}
=\mathrm{Tr}_{A}\bbl{\st_{AB}^{2}}.
\end{align}
Moreover, for $\epsilon\ge0$,
the information spectrum collision entropy $\igr(A|B)_{\st}$
of system $A$ conditioned on system $B$ of a state $\rho$
with optimal side information is given by
\begin{align*}
\igr(A|B)_{\st}=\sup_{\hil_{C},\st_{ABC}:\mathrm{Tr}_{C}[\st_{ABC}]=\st_{AB}}
R_{\epsilon}(A|BC)_{\rho},
\end{align*}
where the supremum ranges over
all Hilbert spaces $\hil_{C}$ and quantum states $\st_{ABC}$
on $\hil_{A}\otimes\hil_{B}\otimes\hil_{C}$
such that $\mathrm{Tr}_{C}[\st_{ABC}]=\st_{AB}$.
\end{definition}
In contrast to the conditional von Neumann entropy
$\vn(A|B)_{\st}=S(\st_{AB})-S(\st_{B})$,
$\ren_{\ep}(A|B)_{\st}$ can increase when additional side information is provided;
that is,
\begin{align*}
R_{\epsilon}(A|BC)_{\rho}>\ren_{\ep}(A|B)_{\st}
\end{align*}
is possible
(such side information for the classical collision entropy
is called spoiling knowledge~\cite{bbcm95}).
On the other hand,
$\igr(A|B)_{\st}$ is optimized with respect to additional side information,
and so satisfies the following data processing inequality.
\begin{proposition}
\label{dpineq}
Let $\hil_{A}$, $\hil_{B}$ and $\hil_{B'}$ be Hilbert spaces,
and $\st_{AB}$ be a quantum state on $\hil_{A}\otimes\hil_{B}$.
Let ${\cal{F}}$ be a trace preserving completely positive map
from system $B$ to system $B'$.
Then
\begin{align*}
\igr(A|B)_{\st}\le\ibr{\ren}_{\ep}(A|B')_{{\cal{F}}(\rho)}.
\end{align*}
\end{proposition}
\begin{proof}
The definitions of $R_{\epsilon}$ and $\ibr{\ren}_{\ep}$ at once give that
$\ren_{\ep}(A|B)_{\st}$ is invariant under the adjoint action of an isometry on system $B$
and $\ibr{\ren}_{\ep}$ is strongly subadditive.
Moreover,
the Stinespring dilation theorem (see~\cite{st55})
ensures that
there exist a Hilbert space $\hil_{E}$ and
an isometry $U:\hil_{B}\rightarrow\hil_{B'}\otimes\hil_{E}$ such that
\begin{align*}
{\cal{F}}(\st_{B})=\mathrm{Tr}_{E}\bbl{U\st_{B}U^{\dag}}
\end{align*}
for any $\st_{B}\in\cls(\hil_{B})$.
Therefore,
if we suppose that $\igr(A|B)_{\st}=R_{\epsilon}(A|BC)_{\rho}$
for $\st_{ABC}\in\cls(\hil_{A}\otimes\hil_{B}\otimes\hil_{C})$
such that $\mathrm{Tr}_{C}[\st_{ABC}]=\st_{AB}$,
then
\begin{align*}
\igr(A|B)_{\st}&=R_{\epsilon}(A|BC)_{\rho}
=R_{\epsilon}(A|B'EC)_{U\rho U^{\dag}}\\
&\le\ibr{\ren}_{\ep}(A|B')_{{\cal{F}}(\rho)}.
\end{align*}
This completes the proof.
\end{proof}
We are now ready to state a main theorem.
Note that the monotonicity of the relative entropy makes it possible
to replace $\ren_{\ep}(A|B)_{\st}$ in this theorem by $\igr(A|B)_{\st}$.
\begin{theorem}
\label{rethm}
Let ${\cal{X}}$ and ${\cal{S}}$ be finite sets,
and $X$ be a random variable on ${\cal{X}}$.
Let $\hil_{B}$ be a Hilbert space,
and $\rho_{X\opq}$ be a classical-quantum state on $\hil_{X}\otimes\hil_{B}$.
Let $G$ be a random variable,
independent of $\rho_{X\opq}$,
uniformly distributed over
a two-universal family of hash functions from ${\cal{X}}$ to ${\cal{S}}$.
Then
\begin{align}
\label{result}
\sr(S|\rvgB)_{\rho_{S\rvgB}}
\le
\epsilon\log_{2}\sbl{d|{\cal{S}}|}
+\eta_{0}(\epsilon)
+\frac{\delta+\epsilon+\epsilon^{1/2}}{\ln2}
\end{align}
for $\epsilon\ge0$,
where $\eta_{0}$ is a function on $[0,\infty)$
given by
\begin{align}
\label{defet}
\eta_{0}(\epsilon)=
\left\{
\begin{array}{cl}
-\epsilon\log_{2}\epsilon \quad& \mathrm{for}\ 0\le\epsilon\le1/2,\\
1/2 & \mathrm{for}\ \epsilon>1/2,
\end{array}
\right.
\end{align}
and we have introduced
\begin{align}
\nonumber
S=G(X),
\quad
d=\mathrm{rank}\hspace{1pt}\st_{B}
\quad
{\text{and}}
\quad
\delta=|{\cal{S}}|\ext{-\ren_{\ep}(X|B)_{\st}}.
\end{align}
\end{theorem}
In this theorem,
$\rho_{S\rvgB}$ is given by
\begin{align*}
\rho_{S\rvgB}=\sum_{s\in{\cal{S}},g\in{\cal{G}}}
p_{s|g}\brk{s}\otimes\frac{1}{|{\cal{G}}|}\brk{g}\otimes\rho_{s|g}
\end{align*}
with
\begin{align*}
p_{s|g}=\sum_{x\in g^{-1}(s)}\pp_{\xx}
\quad
{\text{and}}
\quad
\rho_{s|g}=\frac{1}{p_{s|g}}
\sum_{x\in g^{-1}(s)}\pp_{\xx}\st_{\xx},
\end{align*}
and $\sr(S|\rvgB)_{\rho_{S\rvgB}}$ can be written as
\begin{align*}
\sr(S|\rvgB)_{\rho_{S\rvgB}}=\re{\rho_{S\rvgB}}{\sigma_{S\rvgB}}
\end{align*}
with
\begin{align*}
\sigma_{S\rvgB}
=\frac{1}{|{\cal{S}}|}\mathbb{I}_{S}\otimes\frac{1}{|{\cal{G}}|}\mathbb{I}_{\rvg}\otimes\st_{B}.
\end{align*}
\begin{proof}
Direct calculation shows that
\begin{align}
\label{fsteq}
\begin{split}
&\sr(S|\rvgB)_{\rho_{S\rvgB}}\\
&=\sum_{g,s}
\pp_{\g}\mathrm{Tr}\bbl{\st_{\s\g}\log_{2}\st_{\s\g}}
-\mathrm{Tr}\bbl{\st_{B}\log_{2}\st_{B}}
+\log_{2}|{\cal{S}}|
\end{split}
\end{align}
with $\pp_{\g}=1/|{\cal{G}}|$,
where we have defined
\begin{align}
\label{defasg}
\st_{\s\g}=\sum_{x\in g^{-1}(s)}\pp_{\xx}\st_{\xx}.
\end{align}
Since the real function $-\log_{2}$
is operator convex on $(0,\infty)$
(see e.g. \cite{bh96}),
we can estimate
the first term of the right-hand side of (\ref{fsteq})
by applying Jensen's operator inequality as follows.
Let $f=-\log_{2}$
and introduce the operators $\opx_{\s\g}$ and $\opc_{\s\g}$
by writing
\begin{align}
\nonumber
\opx_{\s\g}=\st_{\s\g}+\gamma\mathbb{I}
\quad
{\text{and}}
\quad
\opc_{\s\g}=(\pp_{\g}\st_{\s\g})^{1/2}
P\hat{\st}_{B}^{-1/2}
\end{align}
for $\gamma>0$,
where we have defined
\begin{align}
\nonumber
\hat{\st}_{B}=P\st_{B}P
\quad
{\text{and}}
\quad
P=\mbl{\rxro_{B}-\ext{-r}\st_{B}^{2}\le0}
\end{align}
for $r<\ren_{\ep}(X|B)_{\st}$.
It readily follows that
$\opx_{\s\g}>0$
and
$\sum_{s,g}\opc_{\s\g}^{\dag}\opc_{\s\g}=\prj_{\st}$,
where $\prj_{\st}$ denotes the projection
onto the range of $\hat{\st}_{B}$.
Furthermore, let us define the operators
$\opx_{+}$ and $\opc_{+}$ by
\begin{align}
\nonumber
\opx_{+}=\mathbb{I}
\quad
{\text{and}}
\quad
\opc_{+}=\mathbb{I}-\prj_{\st},
\end{align}
so that
\begin{align}
\nonumber
f(\opx_{+})=0
\quad
{\text{and}}
\quad
\sum_{s,g}\opc_{\s\g}^{\dag}\opc_{\s\g}+\opc_{+}^{\dag}\opc_{+}=\mathbb{I}.
\end{align}
Then
by Jensen's operator inequality (\ref{opjensen}),
\begin{align}
\nonumber
f\Big(\sum_{s,g}\opc_{\s\g}^{\dag}\opx_{\s\g}\opc_{\s\g}
+\opc_{+}^{\dag}\opx_{+}\opc_{+}\Big)
\le
\sum_{s,g}\opc_{\s\g}^{\dag}f(\opx_{\s\g})\opc_{\s\g}.
\end{align}
Here, $\sum_{s,g}\opc_{\s\g}^{\dag}\opx_{\s\g}\opc_{\s\g}$
is an operator on the range ${\cal{R}}(\hat{\st}_{B})$ of $\hat{\st}_{B}$,
while $\opc_{+}^{\dag}\opx_{+}\opc_{+}$
is an operator on its orthogonal complement ${\cal{R}}(\hat{\st}_{B})^{\perp}$.
It thus follows that
\begin{align}
\nonumber
\hat{\st}_{B}^{1/2}f\Big(
\sum_{s,g}\opc_{\s\g}^{\dag}\opx_{\s\g}\opc_{\s\g}
\Big)\hat{\st}_{B}^{1/2}
\le
\sum_{s,g}\hat{\st}_{B}^{1/2}\opc_{\s\g}^{\dag}f(\opx_{\s\g})\opc_{\s\g}\hat{\st}_{B}^{1/2},
\end{align}
which, in the limit $\gamma\rightarrow+0$, leads to
\begin{align}
\label{qj_applied}
\begin{split}
&\sum_{s,g}
\pp_{\g}P\st_{\s\g}^{1/2}\big(\log_{2}\st_{\s\g}\big)\st_{\s\g}^{1/2}P\\
&\le
\hat{\st}_{B}^{1/2}
\bigg(
\log_{2}\sum_{s,g}
\pp_{\g}
\hat{\st}_{B}^{-1/2}
P\st_{\s\g}^{2}P
\hat{\st}_{B}^{-1/2}
\bigg)
\hat{\st}_{B}^{1/2}.
\end{split}
\end{align}
To estimate the right-hand side of (\ref{qj_applied}),
let us estimate
the sum $\sum_{s,g}\pp_{\g}P\st_{\s\g}^{2}P$.
Substitution of (\ref{defasg}) into this sum
gives
\begin{align}
\nonumber
\sum_{s,g}
\pp_{\g}P\st_{\s\g}^{2}P
=\sum_{g,x,\xx'}
\pp_{\g}\pp_{\xx}\prxdp(g(x){=}g(\xx'))
P\st_{\xx}\st_{\xxd}P.
\end{align}
Here,
we divide the sum of the right-hand side into two parts
so that one part consists of the terms with $x=\xx'$
and the other part consists of the remaining terms.
It follows from the definition of $P$ that
the former part can be bounded as
\begin{align}
\nonumber
\sum_{g,x,\xx':x=\xx'}
\pp_{\g}\pp_{\xx}\prxdp(g(x){=}g(\xx'))
P\st_{\xx}\st_{\xxd}P
&=P\rxro_{B}P\\
\nonumber
&\le\ext{-r}P\st_{B}^{2}P.
\end{align}
By using (\ref{propuhf}),
the latter part can also be bounded as
\begin{align}
\nonumber
\begin{split}
&\sum_{g,x,\xx':x\neq\xx'}
\pp_{\g}\pp_{\xx}\prxdp(g(x){=}g(\xx'))
P\st_{\xx}\st_{\xxd}P\\
&=
\sum_{x,\xx':x\neq\xx'}
\pp_{\xx}\pp_{\xxd}
P\st_{\xx}\st_{\xxd}P
\sum_{g}
\prgp(g(x){=}g(\xx'))\\
&\le
\frac{1}{|{\cal{S}}|}
\sum_{x,\xx':x\neq\xx'}
\pp_{\xx}\pp_{\xxd}
P\st_{\xx}\st_{\xxd}P\\
&\le
\frac{1}{|{\cal{S}}|}P\st_{B}^{2}P.
\end{split}
\end{align}
The above two inequalities at once give
\begin{align}
\label{bndsum}
\sum_{s,g}
\pp_{\g}P\st_{\s\g}^{2}P
\le
\frac{1}{|{\cal{S}}|}
(1+\dl_{\len})
P\st_{B}^{2}P
\end{align}
with $\dl_{\len}=|{\cal{S}}|\ext{-r}$.
Note here that
$\log_{2}\st_{\s\g}\le0$ and $\st_{\s\g}\ge\st_{\s\g}^{1/2}P\st_{\s\g}^{1/2}$,
and so
\begin{align}
\nonumber
\sum_{g,s}
\pp_{\g}\mathrm{Tr}\bbl{\st_{\s\g}\log_{2}\st_{\s\g}}
\le
\sum_{s,g}
\pp_{\g}
\mathrm{Tr}\bbl{\st_{\s\g}^{1/2}P\st_{\s\g}^{1/2}\log_{2}\st_{\s\g}}.
\end{align}
Therefore,
by taking the trace of both sides of (\ref{qj_applied})
and then using (\ref{bndsum}) and $\mathrm{Tr}[\hat{\st}_{B}]\le1$,
we obtain
\begin{align}
\nonumber
\sr(S|\rvgB)_{\rho_{S\rvgB}}
\le
(1-\mathrm{Tr}[\hat{\st}_{B}])\log_{2}|{\cal{S}}|
+\log_{2}(1+\dl_{\len})+\Delta,
\end{align}
where we have introduced
\begin{align}
\nonumber
\Delta=
\mathrm{Tr}\bbl{\hat{\st}_{B}\log_{2}\sbl{\hat{\st}_{B}^{-1/2}
P\st_{B}^{2}P\hat{\st}_{B}^{-1/2}}}
-\mathrm{Tr}[\st_{B}\log_{2}\st_{B}].
\end{align}
Furthermore,
by using $\mathrm{Tr}[\hat{\st}_{B}]\ge1-\epsilon$ and
$\log_{2}(1+x)\le x/\ln2$ for $x\ge0$,
this inequality can be simplified to
\begin{align}
\label{semi-fin}
\sr(S|\rvgB)_{\rho_{S\rvgB}}
\le
\epsilon\log_{2}|{\cal{S}}|
+\frac{\dl_{\len}}{\ln2}+\Delta.
\end{align}
It remains to estimate $\Delta$.
Let us write $\Delta$
in the form $\Delta=\rs_{1}+\rs_{2}$,
where
\begin{align}
\nonumber
\rs_{1}&=\mathrm{Tr}\bbl{\hat{\st}_{B}\log_{2}\sbl{\hat{\st}_{B}^{-1/2}
P\st_{B}^{2}P\hat{\st}_{B}^{-1/2}}}
-\mathrm{Tr}[\hat{\st}_{B}\log_{2}\hat{\st}_{B}],\\
\nonumber
\rs_{2}&=\mathrm{Tr}[\hat{\st}_{B}\log_{2}\hat{\st}_{B}]-\mathrm{Tr}[\st_{B}\log_{2}\st_{B}].
\end{align}
First, we estimate the first part $\rs_{1}$.
Let $S=\hat{\st}_{B}$ and
$T=\hat{\st}_{B}^{-1/2}P\st_{B}^{2}P\hat{\st}_{B}^{-1/2}$.
Since
\begin{align}
\nonumber
T-S
=\hat{\st}_{B}^{-1/2}P\st_{B}(\mathbb{I}-P)\st_{B}P\hat{\st}_{B}^{-1/2}
\ge0,
\end{align}
and hence $\mathrm{supp}S\subset\mathrm{supp}T$,
inequality (\ref{qre_ineq})
can be applied to yield
\begin{align}
\nonumber
\rs_{1}=-\re{S}{T}
\le
\mathrm{Tr}[S]\log_{2}\frac{\mathrm{Tr}[T]}{\mathrm{Tr}[S]}.
\end{align}
It is now convenient to define $\phi=\mathrm{Tr}[T-S]$,
which can be written as
\begin{align}
\nonumber
\phi
&=\mathrm{Tr}\bbl{\hat{\st}_{B}^{-1/2}P\st_{B}(\mathbb{I}-P)\st_{B}P\hat{\st}_{B}^{-1/2}}\\
\nonumber
&=\mathrm{Tr}\bbl{(\mathbb{I}-P)\st_{B}^{1/2}\st_{B}^{1/2}P\hat{\st}_{B}^{-1}P\st_{B}}.
\end{align}
Hence by
Schwarz's inequality,
\begin{align}
\nonumber
\phi
\le
\big(
\mathrm{Tr}\bbl{(\mathbb{I}-P)\st_{B}(\mathbb{I}-P)}
\mathrm{Tr}\bbl{\st_{B}P\hat{\st}_{B}^{-1}P\st_{B}}
\big)^{1/2}.
\end{align}
By use of $\mathrm{Tr}[(\mathbb{I}-P)\st_{B}]\le\epsilon$ and
$\mathrm{Tr}\bbl{\st_{B}P\hat{\st}_{B}^{-1}P\st_{B}}
=\mathrm{Tr}[T]
=\mathrm{Tr}[S]+\phi$,
this inequality can be simplified to
$\phi\le\epsilon^{1/2}\sbl{\mathrm{Tr}[S]+\phi}^{1/2}$,
which, together with $\mathrm{Tr}[S]=\mathrm{Tr}[\hat{\st}_{B}]\le1$,
gives
\begin{align}
\nonumber
\phi\le
\frac{\epsilon+\sbl{\epsilon^{2}+4\epsilon\mathrm{Tr}[S]}^{1/2}}{2}
\le
\frac{\epsilon+\sbl{\epsilon+2\epsilon^{1/2}}}{2}
=\epsilon+\epsilon^{1/2}.
\end{align}
Therefore
\begin{align}
\label{estdo}
\rs_{1}\le
\mathrm{Tr}[S]\log_{2}\frac{\mathrm{Tr}[S]+\phi}
{\mathrm{Tr}[S]}
\le\frac{\epsilon+\epsilon^{1/2}}{\ln2}.
\end{align}
Next,
we estimate the second part $\rs_{2}$.
Let $\kappa_{P}(\st_{B})=P\st_{B}P+(\mathbb{I}-P)\st_{B}(\mathbb{I}-P)$.
Since $\st_{B}$ and $\kappa_{P}(\st_{B})$ are density operators,
$\re{\st_{B}}{\kappa_{P}(\st_{B})}\ge0$,
and hence
\begin{align}
\nonumber
-S(\st_{B})
+S(\hat{\st}_{B})
+S((\mathbb{I}-P)\st_{B}(\mathbb{I}-P))
\ge0.
\end{align}
From this and (\ref{vne_ineq}),
\begin{align}
\label{estdt}
\begin{split}
\rs_{2}
&=-S(\hat{\st}_{B})+S(\st_{B})
\le
S((\mathbb{I}-P)\st_{B}(\mathbb{I}-P))\\
&\le
\epsilon\log_{2}d
+\eta_{0}(\epsilon),
\end{split}
\end{align}
where $d=\mathrm{rank}\st_{B}$ and
$\eta_{0}$ is a monotone increasing function on $[0,\infty)$
defined by (\ref{defet}).\footnote{%
This inequality is a special case of Fannes inequality~\cite{F73}.}
Now,
the required inequality (\ref{result})
follows from
(\ref{semi-fin}), (\ref{estdo}) and (\ref{estdt})
with taking the limit $r\rightarrow\ren_{\ep}(X|B)_{\st}-0$.
This completes the proof.
\end{proof}
Let $l^{\epsilon}_{D}(X|B)$ denote the maximal length
of randomness of distance $\epsilon$ from uniform
(with respect to the relative entropy $D$),
extractable from system $X$ given system $B$.
It then follows from this theorem that
\begin{align}
\label{grupb}
l^{\epsilon'}_{D}(X|B)
\ge\ibr{\ren}_{\ep}(X|B)+\log_{2}\delta
\end{align}
with
\begin{align}
\label{eppdef}
\epsilon'=\epsilon\log_{2}\sbl{d|{\cal{X}}|}
+\eta_{0}(\epsilon)
+\frac{\delta+\epsilon+\epsilon^{1/2}}{\ln2}
\end{align}
(where we have used $|{\cal{S}}|\le|{\cal{X}}|$).
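The pair (\ref{grupb})--(\ref{eppdef}) amounts to simple bookkeeping: given the entropy of the source, one chooses the key length (equivalently $\delta$) and reads off the resulting security parameter $\epsilon'$. A sketch in which all the numbers are arbitrary example values:

```python
import math

def eta0(eps):
    # the function eta_0 defined in the theorem
    if eps == 0:
        return 0.0
    return -eps * math.log2(eps) if eps <= 0.5 else 0.5

def eps_prime(eps, delta, d, x_size):
    # right-hand side of the bound, with |S| replaced by |X|
    return (eps * math.log2(d * x_size) + eta0(eps)
            + (delta + eps + math.sqrt(eps)) / math.log(2))

# example: choose the key length so that delta = |S| * 2^{-R_eps} = 2^{-40},
# with smoothing eps = 1e-6, rank d = 2^10 and |X| = 2^30 (all assumed)
eps, delta, d, x_size = 1e-6, 2.0 ** -40, 2 ** 10, 2 ** 30
sec = eps_prime(eps, delta, d, x_size)
```

Shortening the key (decreasing $\delta$) and decreasing $\epsilon$ both drive $\epsilon'$ down, with the $\epsilon^{1/2}/\ln2$ term dominating for small $\epsilon$.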
Let ${\cal{X}}$ be a finite set.
For any set of quantum states $\{\st_{\xx}\}_{x\in{\cal{X}}}$ on $\hil_{B}$,
one can construct a set of pure states
$\{\st^{*}_{x}\}_{x\in{\cal{X}}}$ on $\hil_{\opbs}$
such that there exists a trace preserving completely positive map
{\cal{F}}:\opb^{*}\rightarrow B
satisfying ${\cal{F}}(\st^{*}_{x})=\st_{\xx}$ for all $x\in{\cal{X}}$~\cite{cjw04}.
Therefore, when $d=\mathrm{rank}\st_{B}$ is large, e.g. infinite,
we may replace $R(X|B)_{{\cal{F}}(\rho)}$
by $R(X|\opb^{*})_{\rho}$
with $\rho_{X\opb^{*}}=\sum_{x}\pp_{\xx}\brk{x}\otimes\st^{*}_{x}$,
where $R(X|\opb^{*})_{\rho}\le R(X|B)_{{\cal{F}}(\rho)}$
and $\mathrm{rank}\st_{\opbs}\le|{\cal{X}}|$.
(Here, the latter inequality is an advantage of this reduction,
but the former is a disadvantage.)
In randomness extraction against
classical side information~$Y$,
the distance from uniform can be upper-bounded as
$\sr(S|\rvgB)_{\rho_{S\rvgB}}
\le\epsilon\log_{2}|{\cal{S}}|+\delta/\ln2$
if $G$ and $Y$ are independent~\cite{bbcm95},
and as
\[
\sr(S|\rvgB)_{\rho_{S\rvgB}}
\le\epsilon\log_{2}|{\cal{S}}|+\frac{\delta+\epsilon}{\ln2}
\]
if $Y$ may depend on $G$
\cite{yw07}.
(It should be stated that,
in quantum key distribution,
the adversary's measurement can wait
until the choice of hash functions is announced,
and so the adversary's information $Y$
may depend on the choice $G$).
Here, we note that for a purely classical state $\st_{XY}$,
$\ren_{\ep}(X|Y)_{\st}$ becomes
\begin{align}
\nonumber
R_{\ep}(\rvx|\rvy)=\sup_{\lambda}\mbl{\lambda|
\mathrm{Pr}[Y\in\{y|R(X|Y=y)\ge\lambda\}]
\ge1-\epsilon},
\end{align}
where
$R(X|Y=y)
=-\log_{2}\sum_{x}
\mathrm{Pr}[X=x|Y=y]^{2}$.
Since
$R_{\ep}(\rvx|\rvy)$ coincides with the (smoothed) conditional collision entropy
given by \cite{bbcm95,yw07},
$\ren_{\ep}(A|B)_{\st}$ can be considered as its quantum generalization.
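As a concrete numerical illustration of the classical quantities above, the following pure-Python sketch (function names are ours, not from the cited works) evaluates $R(X|Y=y)$ and the smoothed $R_{\epsilon}(X|Y)$ for finite distributions; the supremum over $\lambda$ is taken over the finitely many candidate values $R(X|Y=y)$, assuming $0\le\epsilon<1$.

```python
import math

def collision_entropy_given_y(p_x_given_y):
    # R(X|Y=y) = -log2 sum_x Pr[X=x|Y=y]^2
    return -math.log2(sum(p * p for p in p_x_given_y))

def smoothed_collision_entropy(p_y, conditionals, eps):
    # R_eps(X|Y) = sup{ lam : Pr[Y in {y : R(X|Y=y) >= lam}] >= 1 - eps },
    # scanning the finitely many candidate values of lam (assumes 0 <= eps < 1)
    rs = [(collision_entropy_given_y(c), p) for c, p in zip(conditionals, p_y)]
    best = -math.inf
    for lam, _ in rs:
        mass = sum(p for r, p in rs if r >= lam)
        if mass >= 1 - eps:
            best = max(best, lam)
    return best
```

For example, with $Y$ uniform on two values, $X$ uniform on two symbols under $y_0$ and deterministic under $y_1$, smoothing with $\epsilon=1/2$ discards the bad branch and raises the entropy from $0$ to $1$ bit.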
Hence it may be of interest to compare the results of these works.
It can be seen that
the upper bound given in this work (see (\ref{result}))
is larger than
that given in \cite{yw07} (see above) by
$\epsilon\log_{2}(d/\epsilon)+\epsilon^{1/2}/\ln2$,
which is $O(\epsilon^{1/2})$ as $\epsilon\rightarrow+0$.
\section{Asymptotic optimality}
We first introduce two information spectrum entropies
which asymptotically approach the von Neumann entropy.
\begin{definition}
Let $\st_{A}$ be a quantum state on a Hilbert space $\hil_{A}$.
Then, for $\epsilon\ge0$,
the information spectrum sup-entropy $\overline{\vn}_{\epsilon}(A)_{\rho}$
and inf-entropy $\underline{\vn}_{\epsilon}(A)_{\rho}$ of system $A$ of a state $\rho$
are given by
\begin{alignat*}{2}
\overline{\vn}_{\epsilon}(A)_{\rho}&=&
\inf_{\lambda}&\big\{\lambda{\big|}
\mathrm{Tr}\bbl{\mbl{\rho\ge2^{-\lambda}}\rho}\ge1-\epsilon\big\},\\
\underline{\vn}_{\epsilon}(A)_{\rho}&=& \,
\sup_{\lambda}&\big\{\lambda{\big|}
\mathrm{Tr}\bbl{\mbl{\rho\le2^{-\lambda}}\rho}\ge1-\epsilon\big\},
\end{alignat*}
respectively.
\end{definition}
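Since both quantities depend only on the eigenvalues of $\rho$, they can be evaluated classically; the following sketch (our function names; a brute-force scan over the candidate thresholds $\lambda=-\log_{2}p$) is an illustration, not an optimized routine.

```python
import math

def sup_entropy(eigs, eps):
    # inf{ lam : Tr[{rho >= 2^-lam} rho] >= 1 - eps }, over candidate lam values
    lams = sorted(-math.log2(p) for p in eigs if p > 0)
    for lam in lams:  # ascending lam = descending eigenvalue threshold
        mass = sum(p for p in eigs if p >= 2 ** (-lam) - 1e-12)
        if mass >= 1 - eps:
            return lam
    return lams[-1]

def inf_entropy(eigs, eps):
    # sup{ lam : Tr[{rho <= 2^-lam} rho] >= 1 - eps }, over candidate lam values
    lams = sorted((-math.log2(p) for p in eigs if p > 0), reverse=True)
    for lam in lams:
        mass = sum(p for p in eigs if p <= 2 ** (-lam) + 1e-12)
        if mass >= 1 - eps:
            return lam
    return lams[-1]
```

For the maximally mixed state both quantities equal $\log_{2}\dd_{A}$, while for a non-flat spectrum the sup-entropy upper-bounds and the inf-entropy lower-bounds the von Neumann entropy.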
\begin{proposition}
\label{isvnasy}
Let $\st_{A}$ be a quantum state on a Hilbert space $\hil_{A}$
of finite dimension $\dd_{A}$.
Then, for $\gamma>0$,
\begin{align*}
\overline{\vn}_{\overline{\ep}}(A^{n})_{\rho^{\otimes n}}
&\le n\big(S(A)_{\rho}+\gamma\big),\\
\underline{\vn}_{\underline{\ep}}(A^{n})_{\rho^{\otimes n}}
&\ge n\big(S(A)_{\rho}-\gamma\big),
\end{align*}
with
\begin{align}
\label{defexp}
\overline{\ep}=(1+n)^{\dd_{A}}2^{-n\uexp{\st_{A},\gamma}}
\quad\text{and}\quad
\underline{\ep}=(1+n)^{\dd_{A}}2^{-n\lexp{\st_{A},\gamma}},
\end{align}
where we have introduced
\begin{align*}
\uexp{\rho,\gamma}
&=\inf_{\sigma\in\cls({\cal{H}}):\rho\sigma=\sigma\rho,
S(\sigma)+\re{\sigma}{\rho}>S(\rho)+\gamma}
\re{\sigma}{\rho},\\
\lexp{\rho,\gamma}
&=\inf_{\sigma\in\cls({\cal{H}}):\rho\sigma=\sigma\rho,
S(\sigma)+\re{\sigma}{\rho}<S(\rho)-\gamma}
\re{\sigma}{\rho},
\end{align*}
for $\rho\in\cls({\cal{H}})$ and $\gamma>0$.
\end{proposition}
\begin{proof}
Note that $\overline{\vn}_{\epsilon}(A)_{\rho}$ and $\underline{\vn}_{\epsilon}(A)_{\rho}$
can be described by the probability distribution
induced by the eigenvalues of $\st_{A}$.
Hence, the proposition is a direct consequence of
Sanov's theorem (see e.g.~\cite{ct06}),
which gives that, in our notation,
\begin{align*}
\mathrm{Tr}\bbl{\rho^{\otimes n}\{2^{-n\mu}
<\rho^{\otimes n}<2^{-n\nu}\}}
\le(1+n)^{d}2^{-n D(\rho;\mu,\nu)}
\end{align*}
with
\begin{align*}
D(\rho;\mu,\nu)
=\inf_{\sigma\in\cls({\cal{H}}):\rho\sigma=\sigma\rho,
\nu<S(\sigma)+\re{\sigma}{\rho}<\mu}
\re{\sigma}{\rho}
\end{align*}
for $\rho\in\cls({\cal{H}})$,
where ${\cal{H}}$ is a Hilbert space of finite dimension~$d$.
\end{proof}
Next, we give a general lower bound on $\ibr{\ren}_{\ep}$
in terms of the two information spectrum entropies introduced above,
and then show the asymptotic optimality of $\ibr{\ren}_{\ep}$.
Since each information spectrum entropy is determined
by the eigenvalues of a quantum state, the evaluation of the lower bound
reduces to the eigenvalue problem of
two quantum states $\st_{AB}$ and $\st_{A}$.
\begin{theorem}
\label{gqrlwb}
Let $\hil_{A}$ and $\hil_{B}$ be Hilbert spaces,
and $\st_{AB}$ be a quantum state on $\hil_{A}\otimes\hil_{B}$.
Then, for $\lep,\uep>0$,
\begin{align*}
\igr(A|B)_{\st}\ge\underline{\vn}_{\lep}(AB)_{\rho}
-\overline{\vn}_{\uep}(B)_{\rho}
+\log_{2}\big(1-\lep^{1/2}\big)
\end{align*}
with
\begin{align*}
\nonumber
\epsilon=\lep^{1/2}+\lep+\uep.
\end{align*}
\end{theorem}
\begin{proof}
Let $\hil_{C}$ be a two-dimensional Hilbert space
with an orthonormal basis $\{|0\rangle,|1\rangle\}$.
For $\lep>0$,
define a quantum state $\st_{ABC}$
on $\hil_{A}\otimes\hil_{B}\otimes\hil_{C}$ by
\begin{align*}
\st_{ABC}=\bar{\st}_{AB}\otimes\brk{1}+(\st_{AB}-\bar{\st}_{AB})\otimes\brk{0},
\end{align*}
where we have introduced
\begin{align*}
\bar{\st}_{AB}=\st_{AB}\big\{\st_{AB}\le\mu\big\}
\end{align*}
with $\mu=2^{-\underline{\vn}_{\lep}(AB)_{\rho}}$.
It is clear from this definition that
\begin{align*}
\mathrm{Tr}_{C}[\st_{ABC}]=\st_{AB}
\quad\text{and}\quad
\bar{\st}_{AB}^{2}\le\mu\bar{\st}_{AB}.
\end{align*}
Also,
it follows from the definition of $\underline{\vn}_{\epsilon}$ that
\begin{align}
\label{trrb}
\mathrm{Tr}[\bar{\st}_{AB}]\ge1-\lep.
\end{align}
Moreover,
for $\uep>0$,
define $\hat{\st}_{B}$ and $\check{\st}_{B}$ by
\begin{align*}
\hat{\st}_{B}&=\st_{B}\big\{\st_{B}\ge\lambda\big\},\\
\check{\st}_{B}&=\big\{\st_{B}\ge\lambda\big\}\bar{\st}_{B}\big\{\st_{B}\ge\lambda\big\},
\end{align*}
with $\lambda=2^{-\overline{\vn}_{\uep}(B)_{\rho}}$ and $\bar{\st}_{B}=\mathrm{Tr}_{A}[\bar{\st}_{AB}]$.
It then follows from $\bar{\st}_{B}\le\st_{B}$ and (\ref{trrb}) that
\begin{align*}
\check{\st}_{B}\le\hat{\st}_{B}
\quad\text{and}\quad
\mathrm{Tr}\big[\hat{\st}_{B}-\check{\st}_{B}\big]\le\lep,
\end{align*}
and so for $\check{c}=1-\lep^{1/2}$,
\begin{align*}
\mathrm{Tr}\big[\hat{\st}_{B}\big\{\check{\st}_{B}<\check{c}\lambda\big\}\big]
&\le\mathrm{Tr}\big[\hat{\st}_{B}\big\{\check{\st}_{B}<\check{c}\hat{\st}_{B}\big\}\big]\\
&=\mathrm{Tr}\big[\hat{\st}_{B}\big\{\lep^{1/2}\hat{\st}_{B}<\hat{\st}_{B}-\check{\st}_{B}\big\}\big]\\
&<\mathrm{Tr}\big[\lep^{-1/2}\big(\hat{\st}_{B}-\check{\st}_{B}\big)\big]
\le\lep^{1/2}.
\end{align*}
Hence,
\begin{align*}
\mathrm{Tr}\big[\hat{\st}_{B}\big\{\check{\st}_{B}\ge\check{c}\lambda\big\}\big]
>\mathrm{Tr}\big[\hat{\st}_{B}\big]-\lep^{1/2}.
\end{align*}
Here,
on noting that
$\check{\st}_{B}$ commutes with
$\big\{\st_{B}\ge\lambda\big\}$,
let us introduce the projection $\prj$
on $\hil_{B}\otimes\hil_{C}$ defined by
\begin{align*}
\prj=
\big\{\st_{B}\ge\lambda\big\}\big\{\check{\st}_{B}\ge\check{c}\lambda\big\}
\otimes\brk{1}.
\end{align*}
It can be seen from this definition that
\begin{align*}
\mathrm{Tr}\bbl{\stbc\prj}
&>\mathrm{Tr}\big[\hat{\st}_{B}\big]-\lep^{1/2}-\mathrm{Tr}\big[\st_{B}-\bar{\st}_{B}\big]\\
&\ge1-\uep-\lep^{1/2}-\lep.
\end{align*}
Moreover, since
\begin{align*}
\prj\rxro_{BC}\prj
=\prj\mathrm{Tr}_{A}\big[\bar{\st}_{AB}^{2}\otimes\brk{1}\big]\prj
\le\mu\prj\stbc\prj
\end{align*}
and
\begin{align*}
\prj\stbc^{2}\prj\ge\prj\stbc\prj\stbc\prj
=\prj(\check{\st}_{B}^{2}\otimes\brk{1})\prj
\ge\check{c}\lambda\prj\stbc\prj,
\end{align*}
it follows that
\begin{align*}
\prj(\rxro_{BC}-2^{-r}\stbc^{2})\prj\le0
\end{align*}
for
$r\le\underline{\vn}_{\lep}(AB)_{\rho}
-\overline{\vn}_{\uep}(B)_{\rho}+\log_{2}\check{c}$.
Hence, $\prj\le\{\rxro_{BC}-2^{-r}\stbc\le0\}$
and so
\begin{align*}
\nonumber
\igr(A|B)_{\st}
&\ge R_{\epsilon}(A|BC)_{\rho}\\
&\ge\underline{\vn}_{\lep}(AB)_{\rho}
-\overline{\vn}_{\uep}(B)_{\rho}
+\log_{2}\big(1-\lep^{1/2}\big)
\end{align*}
for
\begin{align}
\nonumber
\epsilon=\lep^{1/2}+\lep+\uep.
\end{align}
This completes the proof.
\end{proof}
\begin{corollary}
\label{aopt}
Let $\hil_{A}$ and $\hil_{B}$ be Hilbert spaces
of finite dimensions $\dd_{A}$ and $\dd_{B}$, respectively,
and $\st_{AB}$ be a quantum state on $\hil_{A}\otimes\hil_{B}$.
Then,
\begin{align*}
\lim_{n\rightarrow\infty}
\frac{1}{n}
\igr(A|B)_{\st}\ge\vn(A|B)_{\st}
\end{align*}
for $\epsilon$ converging to $0$ as $n\rightarrow\infty$.
\end{corollary}
\begin{proof}
It follows from Theorem~\ref{gqrlwb} and Proposition~\ref{isvnasy} that
\begin{align*}
\igr(A^{\nn}|B^{\nn})_{\st^{\ot\nn}}\ge n\big(\vn(A|B)_{\st}-2\gamma\big)
+\log_{2}\big(1-\lep^{1/2}\big)
\end{align*}
with
\begin{align*}
\nonumber
\epsilon=\lep^{1/2}+\lep+\uep,
\end{align*}
where $\lep$ and $\uep$ are given by~(\ref{defexp}).
Now, suppose that
$\rho$, $\sigma$ and $\gamma$ satisfy the condition
in the definition of $\overline{D}$ or $\underline{D}$.
Then
\begin{align*}
\gamma&<|S(\rho)-S(\sigma)|\pm\re{\sigma}{\rho}\\
&\le d_{1}({\rho},{\sigma})\dim{\cal{H}}
-d_{1}({\rho},{\sigma})\log_{2}d_{1}({\rho},{\sigma})
\pm\re{\sigma}{\rho}\\
&\le{\re{\sigma}{\rho}}^{\frac{1-\delta}{2}}
\end{align*}
with $\delta>0$,
for sufficiently small $\re{\sigma}{\rho}$,
where the second inequality follows from
Fannes inequality~\cite{F73}
and the third one from
quantum Pinsker's inequality and
$\lim_{x\rightarrow+0}x^{\delta}\log_{2} x=0$ for $\delta>0$.
Therefore,
we can take
\begin{align*}
\gamma=n^{-\nex_{\gm}},\
\lep=(1+n)^{\dd_{A}\dd_{B}}2^{-n^{\nex_{\ep}}},\
\uep=(1+n)^{\dd_{B}}2^{-n^{\nex_{\ep}}}
\end{align*}
for $\nex_{\gm}$ and $\nex_{\ep}$ such that
\begin{align*}
\nex_{\gm},\nex_{\ep}>0
\quad\text{and}\quad
2\nex_{\gm}+\nex_{\ep}<1,
\end{align*}
from which the corollary follows.
\end{proof}
For a classical-quantum state $\rho_{X\opq}$ on $\hil_{X}\otimes\hil_{B}$,
it follows from inequality (\ref{grupb}) that
\begin{align*}
\ibr{\ren}_{\ep}(X|B)\le l^{\epsilon'}_{D}(X|B)-\log_{2}\delta
\le H_{\mathrm{min}}^{\sqrt{\epsilon'}}(X|B)_{\rho}-\log_{2}\delta
\end{align*}
(see e.g. \cite{th13} for the second inequality),
and so the asymptotic equipartition property
of the min-entropy (see e.g.~\cite{tcr09})
and the above corollary yield that
\begin{align*}
\lim_{n\rightarrow\infty}
\frac{1}{n}\ibr{\ren}_{\ep}(X^{n}|B^{n})_{\rho^{\otimes n}}
&=S(X|B)_{\rho}
\end{align*}
for $\epsilon$ converging to $0$ as $n\rightarrow\infty$.
\section{Concluding remarks}
It follows from Theorem~\ref{rethm} and
the monotonicity of the relative entropy that
$\ibr{\ren}_{\ep}$ gives the length of extractable randomness
with distance $\epsilon'$ from uniform,
where $\epsilon'$ is given by (\ref{eppdef}).
For fixed $\epsilon$,
the maximal length $l^{\epsilon}_{d_{1}}$ of
extractable randomness
has an asymptotic expansion
with an optimal second-order term,
\begin{align*}
&\frac{1}{n}l^{\epsilon}_{d_{1}}(X^n|B^n)\\
&=S(X|B)_{\rho}
+\sqrt{\frac{V(X|B)_{\rho}}{n}}\Phi^{-1}(\epsilon^2)
+O\left(\frac{\log n}{n}\right),
\end{align*}
where $V(A|B)_{\rho}=\mathrm{Tr}[\st_{AB}(\log\st_{AB}-\log \mathbb{I}_{A}\otimes\st_{B})^{2}]$
and $\Phi$ denotes the cumulative distribution function
of the standard normal distribution (see~\cite{th13}).
Hence, it is of interest to examine
how close $\frac{1}{n}\ibr{\ren}_{\ep}$ is to the above optimum,
in particular when the classical large deviation theory
giving an optimal second-order asymptotics~\cite{H08}
is applied instead of Sanov's theorem.
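Assuming the stated expansion, its leading terms can be evaluated numerically with the standard-library inverse normal CDF (the $O(\log n/n)$ remainder is ignored; the function name is illustrative):

```python
from math import sqrt
from statistics import NormalDist

def second_order_length(S, V, n, eps):
    # S(X|B) + sqrt(V(X|B)/n) * Phi^{-1}(eps^2), dropping the O(log n / n) term
    return S + sqrt(V / n) * NormalDist().inv_cdf(eps ** 2)
```

Note that $\Phi^{-1}(\epsilon^{2})=0$ at $\epsilon^{2}=1/2$, so the rate crosses the first-order term $S(X|B)_{\rho}$ there, and the second-order correction is negative for smaller $\epsilon$.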
Moreover, it should be stated that
in many applications such as those in cryptography,
$\epsilon$ should converge to $0$ faster than any polynomial, i.e.\ $\epsilon<n^{-c}$
for every $c>0$ and sufficiently large $n>n_{c}$.
However,
the optimality of the above expansion,
which is shown by use of the Berry-Esseen theorem (see e.g.~\cite{f71}),
is lost for such $\epsilon$;
in fact, the lower bound on $l^{\epsilon}_{d_{1}}$ diverges to $-\infty$
for $\epsilon$ converging faster than $1/\sqrt{n}$.
Therefore, it is also of interest to examine the possibility
of further improvement in the second-order asymptotics
for $\epsilon$ converging sufficiently fast.
\section*{Acknowledgement}
The author is grateful to reviewers for their helpful comments.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\section{Introduction}
\input{src/1_introduction}
\input{src/2_related}
\input{src/3_approach}
\input{src/4_evaluation}
\input{src/5_conclusion}
\input{src/6_acks}
\bibliographystyle{ACM-Reference-Format}
\balance
\section{Related Work}
\subsection{Layout Generation}
One of the earliest approaches, by Li et al.~\cite{li2019layoutgan}, utilizes GANs to re-arrange graphical layout objects using wire-frame renderings.
Another GAN approach by the authors of \cite{zheng2019content} introduces contextual awareness using keywords focusing mainly in the domain of magazine layouts.
While feasible, pairing keywords with layouts can be a time-consuming process, which limits transferability to other datasets.
A framework called LayoutVAE \cite{jyothi2019layoutvae} uses variational auto-encoders (VAE) \cite{kingma2013auto} to generate stochastic scene layouts with a two-stage process: categorical count and bounding box prediction.
Patil et al. in \cite{patil2019read} adapt a combined recursive network and VAE framework by \cite{li2019grains} for document layouts using only several hundred training examples.
The authors in \cite{lee2020neural} use a GNN framework based on \cite{johnson2018image} to generate complete graphs based on partial graphs using size and relative layout location.
These approaches treat bounding box placement as a single point estimate or uni-modal distribution and thus remain vulnerable to degenerate samples, as reported in \cite{zheng2019content,lee2020neural,patil2019read,li2019layoutgan}.
In contrast, our approach frames layouts as a sequence of multiple choices enabling predictions in different distributions.
\subsection{Multi Choice Learning}
To address the problem of using single point estimates in problems with different outputs, multi-choice learning has recently emerged as a viable tool.
The term multi-choice learning can be traced to \cite{guzman2012multiple,guzman2014multi}, which introduced a structured multi-hypothesis SVM algorithm using winner-takes-all (WTA) loss.
In a two stage training regime, a set of predictors generate a set of hypotheses and only the predictor which produced the minimum loss is updated during training.
Using this technique, \cite{kirillov2015inferring} showed that increasing diversity can be achieved by penalizing similar outputs.
WTA was developed for neural network ensembles in \cite{lee2016stochastic}, which encourages each network to become an ``expert''.
This boosted output diversity and was applied for a range of tasks: image classification, semantic segmentation, image captioning and image synthesis \cite{chen2017photographic}.
A variant called Relaxed WTA (RWTA) appeared in \cite{rupprecht2017learning}, which interestingly showed that multi-hypothesis prediction (MHP) results in a Voronoi tessellation of the output space.
RWTA applies a small update to all non-minimum predictors during training.
This relaxation helps move outputs away from an initial single Voronoi cell and therefore increases diversity.
MHP motivated further work with RNN multi-choice prediction in \cite{bhattacharyya2018accurate}.
A multi label approach was developed in \cite{firman2018diversenet} that uses a control vector to induce multiple outputs.
These methods by \cite{rupprecht2017learning,firman2018diversenet} demonstrate that multi-choice learning can be harnessed by a single neural network architecture, instead of an ensemble seen in \cite{lee2016stochastic}.
This simplified training into a single regime and reduced training times.
\subsection{Multimodal Learning}
An alternative approach to multi-output problems is to represent the different peaks of $y$ for the same input $x$ as separate Gaussian distributions.
This is advantageous over multi-choice frameworks because it provides an estimation of uncertainty through the variance.
Mixture density networks (MDN) introduced in \cite{bishop1994mixture} generates mixture Gaussian distributions from the outputs of neural networks.
Interestingly, MDNs use a mixture coefficient layer that forms a probability distribution over the mixture components.
These were applied in real-valued domains such as generating speech and handwriting \cite{graves2013generating}.
MDNs at higher dimensions are difficult to optimize with gradient descent due to numerical instability and mode collapse, and they require special initialization schemes \cite{rupprecht2017learning,cui2019multimodal,makansi2019overcoming,zhou2020movement,prokudin2018deep}.
A recent sample and fit framework by \cite{makansi2019overcoming} showed that MDNs can be stabilized for future prediction tasks such as pedestrian and car movement.
This framework combined concepts from multi-choice learning by developing Evolving WTA Loss (EWTA) for its sampling phase.
In a successive round of updates, the EWTA updates the top $k$ hypotheses and decreases $k$ at each step until $k=1$.
This reduces the number of hypotheses stuck in unacceptable regions between ground truths, as seen in RWTA.
Another variant, Entropy WTA was introduced by \cite{zhou2020movement} to prevent modal collapse for MDNs in robotic primitive movement tasks.
Modal collapse occurs when a sub-set of mixtures are not trained due to lack of data.
During generation, these under-trained mixtures are selected and result in degenerate samples.
Despite these improvements to MDNs, we still experience numerical instabilities for layouts, caused by division by very small variances resulting in large gradients.
Our approach combines several ideas from multi-choice learning and mixture density networks.
Multi-hypothesis prediction is more stable and easier to train than MDNs; however, it is vulnerable to collapse when the number of hypotheses exceeds the number of labels.
This is due to the difficulty in determining which predictor has not been trained for which input.
As a result, we use the idea of mixture coefficient layers from MDNs
to help resolve this problem by diminishing the likelihood of choosing untrained predictors and boosting that of trained ones.
This new framework will be discussed at further depth in Section~\ref{sec::mcf}.
\section{LayoutMCL}
Section~\ref{sec::arch} begins with a conceptual discussion that explains the intuition behind our overall layout architecture followed by a technical overview.
Section~\ref{sec::mcf} delves into the mechanics of our multi choice learning framework.
\subsection{Architecture} \label{sec::arch}
Our architecture mimics how humans would design graphical layouts for a presentation or poster.
If you were to observe the creative process in a time-lapse, you would see humans place a series of objects onto an empty canvas in sequential order.
Each decision is a mental interaction between the existing objects and the designer's personal preference.
This is also how we construct speech and writing.
We already have a vague mental image of what we want to say or design beforehand, which explains why some approaches apply GANs \cite{li2019layoutgan}.
However, what matters most is that the human output process is structured sequentially rather than produced in a single step.
Based on these intuitions, we design LayoutMCL to generate layouts using an auto-regressive architecture, shown in Figure~\ref{fig::mclarch}.
The input to LayoutMCL is the existing layout which is appended with a new object at the following step.
More formally, a layout is comprised of a set of objects, $L_\pi = \{o_0, ..., o_n\}$, where $\pi$ is a specific object ordering.
Our experiments find that \textit{human-reading} order of the layout objects generates the best quality samples, particularly in multi-column documents.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{img/mclarch.png}
\caption{Auto-regressive architecture of LayoutMCL depicting a single step in object prediction from a partial layout to a new object.
New objects are appended to the existing layout and generation continues until $o^s=1$.}
\Description{LayoutMCL}
\label{fig::mclarch}
\end{figure}
Each step consists of three object predictions created by separate prediction modules.
Separate modules allows the outputs to take on different distributions types.
The object bounding box, $o^b$, has coordinates $(x,y,w,h)$ normalized to $[0,1]$.
The object category, $o^c$, varies by dataset (e.g. toolbar, title, icon).
The stop label, $o^s$ is 0 for all objects except for the final object where $o^s=1$.
An object is therefore defined as $o_{n} = \{o_n^b, o_n^c, o_n^s\}$.
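The object and layout structure just described can be sketched as a plain data model (illustrative only; class and field names are ours, not from the paper's implementation):

```python
from dataclasses import dataclass, field

@dataclass
class LayoutObject:
    bbox: tuple        # o^b: (x, y, w, h), each normalized to [0, 1]
    category: str      # o^c: dataset-dependent, e.g. "toolbar", "title", "icon"
    stop: int = 0      # o^s: 1 only for the final object in the sequence

@dataclass
class Layout:
    objects: list = field(default_factory=list)

    def append(self, obj):
        # auto-regressive generation appends one object per step
        self.objects.append(obj)

    def finished(self):
        return bool(self.objects) and self.objects[-1].stop == 1
```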
\subsubsection{CRN Encoder} \label{sec::encoder}
The encoder processes an existing layout using both visual and object categorical information in the same manner that a human would.
This is achieved by using a joint convolutional neural network (CNN) and bidirectional recurrent network (RNN) which creates a shared representation.
The RNN encodes the continuous geometric dimensions and categorical properties of $L_\pi$, which also avoids the problem of occlusion.
Bidirectionality is utilized because an object's location in $\pi$ is not solely based on prior objects.
This module aggregates the resulting vector to create $X_{layout}$.
The CNN is leveraged to distinguish visual patterns including alignment and spacing between locally spaced objects.
This module encodes a rendered masked layout to build a \textit{spatial representation}, $X_{spatial}$.
$X_{spatial}$ and $X_{layout}$ are combined into a latent vector, $X_{shared}$, shared between three prediction modules.
\subsubsection{Object Prediction Modules\label{sec::mhd}}
The bounding box module uses a multi-choice framework, discussed in Section~\ref{sec::mcf}, to learn the diverse range of human preferences in object placement and size with multiple predictors.
Each predictor can be thought of as learning a separate preference.
The module consists of a mixture coefficient layer and $M \times C$ distinct 2-layer feed-forward networks, where $C$ is the number of categories and $M$ is the number of hypotheses/predictors.
Each predictor is a 2-layer feed-forward network with an intermediate ReLU layer and a final sigmoid layer that predicts $o_{n+1}^b$.
The categorical module, $F_c$, is a 2-layer feed-forward network with an intermediate ReLU activation and a final log softmax layer that predicts $o_{n+1}^c$.
$F_c$ is trained by minimizing the negative log likelihood loss, $l_c$, between the output and category labels.
Likewise, the stop module, $F_s$, uses the same 2-layer feed-forward structure and predicts $o_{n+1}^s$.
$F_s$ is trained by minimizing the binary cross entropy loss, $l_s$.
In summary, the total loss minimized during training is defined in Equation~\ref{eq:lossfn}.
\begin{equation}\label{eq:lossfn}
\mathcal{L}_{total} = l_c\lambda_c + l_s\lambda_s + l_b\lambda_b
\end{equation}
where $l_b$ is the loss for the bounding box module defined in Equation~\ref{eq::metaloss3}.
This will be discussed in the next section.
$\lambda$ are hyper-parameters used to re-weight the losses during training.
\subsection{Multi Choice Framework for Layouts\label{sec::mcf}}
Here we describe a simple multi-choice framework that combines recent ideas from \cite{lee2016stochastic,rupprecht2017learning, makansi2019overcoming,zhou2020movement,firman2018diversenet}.
To begin, we discuss the two most prevalent problems found in past literature using multiple choice frameworks.
\begin{enumerate}
\item \textbf{Modal Collapse}. A subset of predictors are not \textit{paired} with ground truths and generate degenerate hypotheses during test time. Pairing refers to being trained with reference to a ground truth.
\item \textbf{Averaging Problem}. All predictors are paired, however some or all are paired with multiple ground truths. This results in some or all hypotheses lying in unacceptable central regions between ground truths.
\end{enumerate}
\begin{figure*}[h]
\centering
\includegraphics[width=\linewidth]{img/wtaours.png}
\caption{The three stages of our multi-choice framework depicting ground truths in red and hypotheses in blue. The top row shows movement of hypotheses over time. Bottom row shows change of density by predictor. (a) 10 predictors are randomly initialized with 3 ground truths. (b) Ground truth are paired with closest hypothesis (gold in bottom row) (c) Density of paired predictors are boosted and unpaired predictors pushed to zero.}
\Description{Density and vanilla Loss}
\label{fig::wtaours}
\end{figure*}
Balancing these two problems is not an easy task.
Early approaches \cite{lee2016stochastic,guzman2012multiple} focused on solving the averaging problem caused by single point estimates.
This was solved by using vanilla WTA loss; however, poorly initialized and unpaired predictors in regions far from ground truth labels are ignored, leading to modal collapse.
More recent approaches \cite{makansi2019overcoming,rupprecht2017learning} attempt to resolve modal collapse by moving all predictions towards ground truth labels with variants of WTA loss.
A limitation of Relaxed WTA \cite{rupprecht2017learning} is the concentration of hypotheses in central regions between ground truth labels as seen in Figure~\ref{fig::wtaother}a.
Evolving WTA \cite{makansi2019overcoming} showed significant improvement in reducing the number of hypotheses in central regions, however some still remain as shown in Figure~\ref{fig::wtaother}b.
We take a different approach to this problem by combining a mixture coefficient layer with vanilla WTA loss.
The mixture coefficient layer predicts a probability density function over the set of predictors.
This idea circumvents modal collapse by boosting the coefficient of good predictors and reduces the coefficient of poor predictors.
This is useful in circumstances where the number of predictors exceeds the number of ground truth labels.
In these situations, the likelihood of unpaired predictors is pushed towards zero, and they are unlikely to surface during test time.
In the opposite circumstance, all predictors are still paired with a ground truth.
Like \cite{rupprecht2017learning}, we use multi-hypothesis prediction generated by multiple prediction layers, each initialized separately.
However, our framework is designed within an auto-regressive architecture and is used for predicting objects with different categories that exhibit different output patterns.
For example, a title is generally smaller than an image.
Each category, therefore, uses a separate sub-group of predictors to generate hypotheses.
In the below sections we describe our framework in detail.
\subsubsection{Multi-Hypothesis Prediction}
Given a category, $c \in C$, and $x \in X_{shared}$, $f_\theta^c(x)$ is defined as a set of $M$ predictors parameterized by $\theta$:
\begin{equation}\label{eq:fc}
f_\theta^c(x) = (f_{1}^c(x),...,f_{M}^c(x)).
\end{equation}
These predictors generate $M$ hypotheses, where $\hat{y_i}$ refers to the $i^{th}$ hypothesis and $y$ is the corresponding ground truth label.
For the purpose of layouts, we use the L1 loss for the $i^{th}$ predictor:
\begin{equation}\label{eq:fc2}
L_1(\hat{y_i}, y) = \left\lVert \hat{y_i} - y \right\rVert_{1}.
\end{equation}
A framework using a single hypothesis would simply take the mean over all ground truth labels leading to the averaging problem as discussed earlier.
Instead we first look at vanilla WTA loss \cite{makansi2019overcoming,rupprecht2017learning} defined in Equation~\ref{eq:metaloss1}.
\begin{align*}
\mathcal{L}(f_\theta^c(x), y) &= \sum_{i=1}^M w_i L_1(\hat{y_i}, y) \numberthis \label{eq:metaloss1} \\
w_j &= \delta(j = \argminA_i\left\lVert \hat{y_i} - y \right\rVert_{1}) \numberthis \label{eq:metaloss2},
\end{align*}
where $\delta$ is the Kronecker delta returning 1 when true and 0 otherwise.
The Kronecker delta is used to select the best hypothesis $\hat{y_i} \in \hat{Y}$ that minimizes $L_1(\hat{y_i}, y)$.
In some sense, we are simply \textbf{pairing} predictors with ground truth labels.
Variations of WTA loss alter the count of best hypotheses or the value of $\delta$ \cite{makansi2019overcoming,rupprecht2017learning}; however, we take a different approach by introducing a mixture coefficient into this loss function.
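The vanilla WTA selection above can be sketched in a few lines of pure Python (names are ours; the actual module operates on network outputs and back-propagates only through the winner):

```python
def l1(a, b):
    # element-wise L1 distance between two bounding-box vectors
    return sum(abs(x - y) for x, y in zip(a, b))

def vanilla_wta_loss(hypotheses, y):
    # winner-takes-all: only the hypothesis closest to the ground truth
    # incurs loss; all other predictors receive zero gradient
    losses = [l1(h, y) for h in hypotheses]
    winner = min(range(len(losses)), key=losses.__getitem__)
    return losses[winner], winner
```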
\subsubsection{Mixture Coefficient Layer}
The use of a mixture coefficient layer means that we treat each predictor similar to a mixture in MDNs \cite{bishop1994mixture}.
This layer acts as a discrete probability function over predictors based on the shared representation and object category.
The mixture coefficient layer, $F_d: \mathcal{X} \to \mathcal{Y}$ is defined as:
\begin{equation}\label{eq:mixturelayer}
\phi = F_d(x, c)
\end{equation}
where the category is introduced as a concatenated one-hot vector and $\phi$ is an $M$-sized vector that sums to one.
\begin{equation}\label{eq:mixtureequal}
1 = \sum_{i=1}^M{\phi_i}.
\end{equation}
where $\phi_i$ is the coefficient of the $i^{th}$ predictor.
This vector is normalized with a softmax operation.
When combined with Equation~\ref{eq:metaloss1}, minimizing the loss should increase the coefficients of paired predictors and reduce those of unpaired predictors.
To achieve this, we minimize the negative log likelihood of $\phi$ as shown in Equation~\ref{eq::metaloss3}.
\begin{equation}\label{eq::metaloss3}
\mathcal{L}(f_\theta^c(x), y) = \sum_{i=1}^M -\log(\phi_i)w_i L_1(\hat{y_i}, y).
\end{equation}
During generation, predictors are sampled from a multinomial distribution parameterized by $\phi$.
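The combined loss of Equation~\ref{eq::metaloss3} and the generation-time sampling can be sketched as follows (pure Python with illustrative names; in the real model the hypotheses come from the neural predictors):

```python
import math

def mixture_wta_loss(hypotheses, phi, y):
    # only the winning predictor contributes, weighted by -log(phi_i),
    # so minimizing the loss also boosts the winner's mixture coefficient
    losses = [sum(abs(a - b) for a, b in zip(h, y)) for h in hypotheses]
    i = min(range(len(losses)), key=losses.__getitem__)
    return -math.log(phi[i]) * losses[i]

def sample_predictor(phi, u):
    # invert the CDF of the multinomial over predictors for a uniform draw u
    acc = 0.0
    for i, p in enumerate(phi):
        acc += p
        if u < acc:
            return i
    return len(phi) - 1
```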
\subsubsection{Stages}
The multi-choice framework can be demonstrated through a toy example shown in Figure~\ref{fig::wtaours}.
The top row of scatter plots show the movement of hypotheses (blue) over time in relation to the ground truths (red).
The bottom row contains a bar chart showing the change of $\phi$ for each predictor.
There are three key stages that the multi-predictors and the mixture coefficient layers go through:
\begin{enumerate}
\item \textbf{Initial Stage} (Figure~\ref{fig::wtaours}a). 10 predictors and mixture coefficient layer with output vector, $\phi$, are randomly initialized along with 3 ground truths.
The vector, $\phi$, shows that the densities of all predictors are almost equal.
\item \textbf{Pairing Stage} (Figure~\ref{fig::wtaours}b).
Each ground truth is paired with the closest hypothesis. The chosen hypothesis moves closer to the ground truth until their loss is negligible.
The vector, $\phi$, shows that the densities of all predictors are almost equal, including the chosen ones (shown with gold bars).
\item \textbf{Boosting Stage} (Figure~\ref{fig::wtaours}c).
Minimizing distance no longer decreases loss, therefore the network begins to boost $\phi_i$ for chosen predictors (gold) and decrease $\phi_i$ for unpaired predictors.
The chance of sampling unpaired predictors falls below 1\%, thus modal collapse becomes unlikely.
\end{enumerate}
Despite appearances, it is worth noting that each stage does not require any initiation and begins automatically.
\subsubsection{Comparison to WTA Variants}
Using the exact same initialization scheme from the previous toy example, Figure~\ref{fig::wtaother} shows the final equilibrium of the 10 predictors using Relaxed and Evolving WTA.
Figure~\ref{fig::wtaother}a shows that using Relaxed WTA results in large concentrations of hypotheses in central regions between ground truths.
Figure~\ref{fig::wtaother}b shows that using Evolving WTA impressively moves hypotheses away from central regions towards ground truths.
On average we find that only 2-3 hypotheses remain stuck between regions of ground truths as seen in the figure.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{img/wtaother.png}
\caption{The final equilibrium for Relaxed and Evolving WTA.
Hypotheses are shown in blue and ground truths as red circles, as seen in \cite{makansi2019overcoming}.}
\Description{Other WTA Losses}
\label{fig::wtaother}
\end{figure}
In comparison to Evolving WTA, the advantage of using a mixture coefficient layer for layouts is that the likelihood of sampling an unpaired hypothesis is very low.
In the previous example, the probability of selecting an unpaired hypothesis is less than 1\%.
In contrast, using Evolving WTA would raise the probability of selecting a poor hypothesis significantly to 20-30\%.
The disadvantage of using the mixture coefficient layer, however, is that the paired hypotheses are not equally likely.
In Figure~\ref{fig::wtaours}c, we can see that predictor 5's density (49\%) is significantly higher than that of predictors 2 and 3.
Since $\phi_i$ of unpaired hypotheses are pushed towards zero, we can rectify this problem during test time by recalculating $\phi_i$ for paired predictors as $\hat{\phi_i}=1/P$ where P is the number of paired predictors.
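This test-time correction can be sketched as follows; the thresholding rule used to decide which predictors count as paired is an assumption for illustration (the text only states that unpaired coefficients are pushed towards zero):

```python
def renormalize(phi, threshold=0.01):
    """Test-time correction: give each of the P paired predictors weight
    1/P and zero out unpaired ones.  A predictor is treated as paired when
    its mixture coefficient is at least `threshold` (hypothetical rule)."""
    paired = [i for i, w in enumerate(phi) if w >= threshold]
    P = len(paired)
    return [1.0 / P if i in paired else 0.0 for i in range(len(phi))]
```

At sampling time one would then draw the predictor index from this corrected distribution instead of the raw $\phi$.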
\subsubsection{Number of Predictors}
A common question that naturally arises is \textit{"How many predictors should you use?"}.
The correct number depends on the dataset complexity and number of ground truths that arise during training for each input.
This is clearly difficult to calculate in multi-dimensional datasets due to the exponential number of input-output combinations, and has been a common criticism of multi-choice learning.
The use of a mixture coefficient layer allows us to offer some simple guidance based on our observations.
Our hypothetical architecture should initialize enough predictors so that the number of hypotheses sparsely cover all regions of the output space.
In the worst-case scenario, where ground truths form concentrated clumps in one dimension and are widely spaced apart in another, coverage should be increased such that each ground truth has its own paired predictor.
Over time, the mixture coefficient of non-paired predictors will diminish to zero, thus these should not surface.
Hence, the brief answer to the question is \textit{the more the better}, but this obviously has to be balanced with computation power and training speed.
Based on our tests, a general rule of thumb is to start with 10 predictors and move up in multiples of 2.
\section{Evaluation}
\label{cha:evaluation}
In this section, we conduct experiments to demonstrate the strengths of LayoutMCL in comparison to existing layout generative models.
These experiments include quantitative benchmarks and a diversity and consistency test. We also
demonstrate LayoutMCL providing multiple recommendations in a hypothetical graphical editor.
\subsection{Experiment Setup}
\label{sec:setup}
This section provides implementation details of LayoutMCL and benchmark models.
We describe the datasets and propose metrics that measure the similarity between generated and real layouts.
\paragraph{\textbf{Implementation}}
Our LayoutMCL is implemented in PyTorch and uses a bounding box module with 10 predictors per step.
The encoder consists of 2 stacked bi-directional GRU layers with 128 hidden units and 5 CNN layers.
The network was trained with a learning rate of 0.001 and a batch size of 512, using the Adam optimizer.
The hyper-parameters in Equation~\ref{eq:lossfn} are $\lambda_c=1$, $\lambda_s=1$ and $\lambda_b=40$.
\paragraph{\textbf{Benchmark Models}}
Layout generative models are designed with different capabilities.
The closest two models to our approach are LayoutVAE \cite{jyothi2019layoutvae} and Neural Design Network \cite{lee2020neural}.
These models allow specification of object categories and do not require the specification of object counts and keyword pairings for all datasets.
\cite{patil2019read,li2019layoutgan,zheng2019content} do not satisfy all of these properties; therefore they are difficult to compare fairly.
\begin{enumerate}
\item \textbf{LayoutVAE} \cite{jyothi2019layoutvae}. This model combines two conditional VAEs that operate in 2 successive stages: object count and bounding box.
In the first stage, the counts for each object are predicted using a Poisson distribution.
In the second stage, each object's bounding boxes are generated.
\item \textbf{Neural Design Network} \cite{lee2020neural}.
NDN frames layouts as graphs where objects are the nodes and their respective location and sizes form relational edges. This model combines the use of graph convolutional networks \cite{johnson2018image} and VAEs to generate design layouts.
NDN-all is a format where \textbf{all} prespecified existing location and size relationships between objects are given prior to generation.
Since NDN-all performs best in \cite{lee2020neural}, this model is used for benchmarks in this paper.
\end{enumerate}
\paragraph{\textbf{Datasets}} Here we describe the three publicly available multimedia layout datasets from different domains.
\begin{enumerate}
\item \textbf{Mobile App} \cite{deka2017rico}. Following \cite{lee2020neural}, we only include portrait layouts with 13 categories (toolbars, images, text, icons, buttons, inputs, list items, advertisements, page indicator, web views, background images, drawers, modals) that have less than or equal to 10 objects. This leaves 21k examples.
\item \textbf{Magazine} \cite{zheng2019content}. This dataset contains 4k layouts with 6 categories (text, images, headlines, over-image text, over-image headline, backgrounds).
\item \textbf{Document} \cite{zhong2019publaynet}. Layouts are filtered for single and double column portrait documents with less than or equal to 10 objects per layout. The dataset uses all 5 categories (text, title, figure, table, list) which leaves 230k examples.
\end{enumerate}
\paragraph{\textbf{Metrics}}
The proposed metrics are designed to capture similarity of generated layouts to a held-out test dataset.
\begin{enumerate}
\item \textbf{Alignment} \cite{lee2020neural}. This captures a common characteristic found in layouts: objects tend to be left, centre or right aligned with other objects.
This generally improves the readability and aesthetics for the reader, particularly in magazines and documents.
This is defined in Equation~\ref{eq::alignment} as:
\begin{equation} \label{eq::alignment}
\frac{1}{N_d}\sum_d\sum_i\min_{j,i\neq j}\left\{\min\left(l(o^d_i,o^d_j),m(o^d_i,o^d_j),r(o^d_i,o^d_j)\right)\right\},
\end{equation}
where $N_d$ is the number of generated layouts, $o^d_i$ is the $i^{th}$ object of the $d^{th}$ layout, and $l$, $m$ and $r$ measure the left, centre and right alignment distances between a pair of objects.
Alignment is calculated using the normalized bounding box values.
An alignment score closer to the test dataset is better.
\item \textbf{Fréchet Inception Distance (FID)} \cite{heusel2017gans}.
FID was originally used to measure realism in GAN samples.
This has been adapted to measure the similarity between generated and real layouts.
For each dataset, an encoder backbone (Section~\ref{sec::encoder}) is trained to discriminate between real and fake layouts.
Fake layouts are created by taking existing layouts and adding perturbations drawn from the uniform distribution, $\mathcal{X} \sim U(-0.25,0.25)$, to each bounding box dimension.
The second-last output layer has 512 units and the FID distance between two samples is calculated with:
\begin{equation} \label{eq::fid}
d^2 = \|\mu_1 - \mu_2\|^2 + \mathrm{Tr}\left(C_1 + C_2 - 2(C_1 C_2)^{1/2}\right)
\end{equation}
where $\mu_i$ is the output vector's mean, $C_i$ is the covariance matrix and $\mathrm{Tr}$ is the trace operation.
The FID model's accuracy at detecting fake layouts from the entire held-out test dataset are: Mobile Apps (95.5\%), Magazines (91.2\%) and Documents (89.3\%).
We interpret a smaller FID score as better as it indicates higher similarity.
\item \textbf{Fake Positive (\%)}. Using the same FID discriminator above, we compute Fake Positive as the percentage of samples the model assigns as fake over the total number of generated samples.
This is defined as:
\begin{equation} \label{eq::fidfake}
FP = \frac{1}{N}\sum_{i=1}^{N}\delta\left(j = \argmaxA(\gamma_i)\right)
\end{equation}
where $\gamma$ is the 2D vector from the final discriminator layer and $j$ is the index corresponding to the fake category.
A model with a lower Fake Positive (\%) is better as it generates realistic layouts that "trick" the discriminator at higher rates.
\end{enumerate}
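To make the FID computation in Equation~\ref{eq::fid} concrete, here is a sketch of the Fréchet distance in the one-dimensional case, where the covariance matrices reduce to scalar variances; the actual metric operates on 512-dimensional feature vectors and requires a matrix square root:

```python
import math

def frechet_distance_1d(mu1, var1, mu2, var2):
    """Scalar specialization of the FID formula:
    d^2 = (mu1 - mu2)^2 + var1 + var2 - 2*sqrt(var1 * var2)."""
    return (mu1 - mu2) ** 2 + var1 + var2 - 2.0 * math.sqrt(var1 * var2)
```

Identical Gaussians give distance zero, and the distance grows with both the mean gap and the variance mismatch.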
\begin{figure*}[h]
\centering
\includegraphics[width=\linewidth]{img/diversityfigure.png}
\caption{Each model generates multiple outputs using the same input to demonstrate diversity and consistency.
For NDN and LayoutVAE, the layouts shown are the best 5 of 100 samples.
In contrast, LayoutMCL samples are the first 5 from 5.}
\Description{Comparison of stability and diversity of outputs}
\label{fig::diversitycomparison}
\end{figure*}
\subsection{Quantitative Metrics for Layouts}
We evaluate each model's ability to generate realistic layouts using metrics proposed in Section~\ref{sec:setup}.
In this experiment, LayoutVAE and LayoutMCL generate layouts by predicting categories and bounding boxes, while NDN only predicts bounding boxes.
NDN is provided all object categories and all graph constraints prior to testing.
To help the models generate samples within similar neighborhoods, we provide a single starting object for each layout.
Each model generates 100 samples for each dataset, and the results are quantitatively compared in Table~\ref{tab:table1}.
Alignment and Fake Positive are also calculated for a held-out test dataset and shown in the right-most column.
\begin{table}[h]
\caption{Layout Generation Performance}
\label{tab:table1}
\begin{tabular}{lcccc}
\toprule
&LayoutVAE&NDN&Ours&Test\\
\midrule
\textbf{Mobile App} \\
Alignment&0.01829&0.01767&\textbf{0.01477}&0.00659 \\
FID&182.66&55.10&\textbf{8.15}&-\\
Fake Positive (\%) &93&54&\textbf{12}&5\\
\midrule
\textbf{Magazine} \\
Alignment&0.04563&\textbf{0.02355}&0.02912&0.00926\\
FID&1885.44&336.87&\textbf{26.18}&-\\
Fake Positive (\%) &100&99&\textbf{48}&9\\
\midrule
\textbf{Document} \\
Alignment&0.01434&0.01373&\textbf{0.00881}&0.00102\\
FID&54.33&76.46&\textbf{9.13}&-\\
Fake Positive (\%) &75&94&\textbf{23}&11\\
\bottomrule
\end{tabular}
\end{table}
In summary, LayoutMCL outperforms both models across proposed metrics and datasets, with the exception of alignment on the Magazine dataset.
For the mobile app and document dataset, LayoutMCL achieves the closest Alignment score to the held-out group and exceeds the next closest model by 16-35\%.
LayoutMCL achieves the best FID score which outperforms the next closest benchmark model by 83-98\% across all datasets.
Note that the FID score varies depending on the base model, checkpoints and composition of the held-out dataset.
The Mobile App FID scores are similar to those previously reported \cite{lee2020neural}; however, the Magazine FID scores are relatively higher here in comparison.
Unfortunately, without open access to prior FID models, this discrepancy cannot be investigated.
\subsection{Diversity and Consistency Test}
The key feature of LayoutMCL is the diversity and consistency of its outputs.
Our experience with existing layout models is that they can generate visually appealing layouts, but only with low probability or consistency.
Consistency and diversity of results is critical in fully automated contexts such as decoy multimedia deployment in cybersecurity deception.
We create an experiment where every model is given the same input (a partial layout) from which it repeatedly generates complete layouts.
Since all models achieved relatively low FID scores on documents, we use this dataset as a benchmark.
Five samples for every model using 3 partial layouts are shown in Figure~\ref{fig::diversitycomparison}.
For NDN and LayoutVAE, the layouts shown here were selected as the best 5 from 100 generated.
In contrast, LayoutMCL samples were the first 5 from 5 generated.
This demonstrates the significant advantage in reliability and consistency between the generative models.
This can be attributed to the fact that NDN and LayoutVAE use a single distribution for bounding box prediction.
These results show that quantitative measures do not translate directly into \textbf{consistent} visually appealing layouts.
Figure~\ref{fig::diversitycomparison} also demonstrates the difference in sample diversity.
Every sample from LayoutMCL is a plausible document layout, each with a different composition of object types and sizes.
LayoutVAE fails to generate double columns as seen in the middle column and does not show diversity of categorical composition.
Each group of samples is highly similar in its object type and count.
NDN also fails to produce diverse results as all layouts form similar patterns with the same input.
\subsection{Recommendations with Hard and Soft Constraints}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{img/multichoiceusecase.png}
\caption{This visualisation demonstrates multi-choice layout recommendations with hard and soft constraints. This approach provides users with a range of potential options from which they can select from and create a final layout. Note that pictures in (c) are not generated by LayoutMCL. }
\Description{The potential use case of LayoutMCL}
\label{fig::usecase}
\end{figure}
In this section, we demonstrate the flexibility and benefits of our multi-choice framework through a simple use-case that combines both hard and soft constraints.
Hard constraints are preset object categories and locations that remain fixed during multi-hypothesis prediction.
This is useful where the designer wishes to keep elements of continuity between screens, such as a corporate logo or search bar.
Soft constraints are a sequence of object properties for which a designer wishes to obtain recommendations in a final layout.
This could include the object category or size but not location.
Within the framework, hard constraints are set as the prior sequence of objects and soft constraints are force-fed into the multi-choice predictors at each step.
An example use-case combining hard and soft constraints within the framework is shown in Figure~\ref{fig::usecase}.
Here the user is provided multiple layout recommendations based on the constraints shown on the left.
These soft and hard constraints can also be created in the context of layout rearrangement, where the user already has a layout but would like more options.
Our multi-choice framework is beneficial for users because they are able to decide from a wide range of visually appealing layouts using \textbf{visual cues}, as is evident from the figure.
In comparison to prior methods, we find that this approach is user-friendly and intuitive in a real world application.
\section{Conclusion\label{cha:conclusion}}
In this paper we presented LayoutMCL, a layout generative model that contributes towards research in automated visual design.
We demonstrate that our auto-regressive architecture advances the existing state of the art in layout generation by producing significantly more consistent, realistic and diverse samples across several metrics.
These advancements are a result of combining several ideas from multi-choice learning and mixture density networks.
The core techniques demonstrated, multi-hypothesis prediction and the winner-takes-all loss, allow multiple predictors to robustly learn diverse user preferences.
As a result, we demonstrate that reversing this process allows us to generate diverse layouts consistently.
For future work, we aim to study whether this framework can be applied in other potential domains such as procedural content generation for games, reinforcement learning or AR/VR design recommendations.
\section{Introduction}
Let $M$ be an $n\times n$ random matrix with i.i.d.\ $\Ber(p)$
entries (meaning that each entry $M_{ij}$ satisfies $\Pr(M_{ij}=1)=p$
and $\Pr(M_{ij}=0)=1-p$). It is a famous theorem of Koml\'os\ \cite{Kom67,Kom68}
that for $p=1/2$ a random Bernoulli matrix is \emph{asymptotically
almost surely} nonsingular: that is, $\lim_{n\to\infty}\Pr(M\text{ is singular})=0$.
Koml\'os' theorem can be generalised to sparse random Bernoulli matrices
as follows.
\begin{thm}
\label{thm:ber}Fix $\varepsilon>0$, and let $p=p(n)$
be any function of $n$ satisfying $\min(p,1-p)\geq (1+\varepsilon)\log n/n$.
Then for a random $n\times n$ random matrix $M$ with i.i.d.\ $\Ber(p)$
entries, we have
\[\lim_{n\to\infty}\Pr(M\text{ is singular})=0.\]
\end{thm}
\cref{thm:ber} is best-possible, in the sense that if $\min(p,1-p) \le(1-\varepsilon)\log n/n$,
then we actually have $\lim_{n\to\infty}\Pr(M\text{ is singular})=1$
(because, for instance, $M$ is likely to have two identical columns). That
is to say, $\log n/n$ is a \emph{sharp threshold} for singularity.
It is not clear when \cref{thm:ber} first appeared in print, but strengthenings
and variations on \cref{thm:ber} have been proved by several different
authors (see for example \cite{AE14,BR18,CV08,CV10}).
Next, let $Q$ be an $n\times n$ random matrix with independent rows,
where each row is sampled uniformly from the subset of vectors in
$\{ 0,1\} ^{n}$ having exactly $d$ ones ($Q$ is said
to be a random \emph{combinatorial} matrix). The study of such matrices was initiated by Nguyen\ \cite{Ngu13}, who proved that if $d=n/2$ then $Q$ is asymptotically almost surely
nonsingular (where $n\to\infty$ along the even integers). Strengthenings of Nguyen's theorem have been proved by several authors; see for example \cite{AP20,FJLS,Jai19,JSS20,Tra20}. Recently, Aigner-Horev and Person\ \cite{AP20} conjectured an analogue of \cref{thm:ber} for sparse random combinatorial matrices, which we prove in this note.
\begin{thm}
\label{thm:comb}Fix $\varepsilon>0$, and let $d=d(n)$
be any function of $n$ satisfying $\min(d,n-d)\geq (1+\varepsilon)\log n$.
Then for an $n\times n$ random zero-one matrix $Q$ with independent rows,
where each row is chosen uniformly among the vectors with $d$ ones, we have
\[
\lim_{n\to\infty}\Pr(Q\text{ is singular})=0.
\]
\end{thm}
Just like \cref{thm:ber}, \cref{thm:comb} is best-possible in the sense that if $\min(d,n-d) \le(1-\varepsilon)\log n$,
then $\lim_{n\to\infty}\Pr(Q\text{ is singular})=1$. \cref{thm:comb} improves on a result of Aigner-Horev and Person: they proved the same fact under the assumption that $\lim_{n\to \infty} d/(n^{1/2}\log^{3/2}n)=\infty$ (assuming that $d\le n/2$).
The structure of this note is as follows. First, in \cref{sec:general} we prove a simple and general lemma (\cref{lem:general}) which applies to any random matrix with i.i.d.\ rows. This lemma distills the essence of (a special case of) an argument due to Rudelson and Vershynin~\cite{RV08}. Essentially, it shows that in order to prove \cref{thm:ber} and \cref{thm:comb}, one just needs to prove some relatively crude estimates about the typical structure of the vectors in the left and right kernels of our random matrices.
Then, in \cref{sec:ber} and \cref{sec:comb} we show how to use \cref{lem:general} to give simple proofs of \cref{thm:ber} and \cref{thm:comb}. Of course, \cref{thm:ber} is not new, but its proof is extremely simple and it serves as a warm-up for \cref{thm:comb}. It turns out that in order to analyse the typical structure of the vectors in the left and right kernel, we can work over $\ZZ_q$ for some small integer $q$ (in fact, we can mostly work over $\ZZ_2$). This idea is not new (see for example, \cite{AP20,CMMM19,Fer20,FJ19,FJLS,Hua18,Mes20,NW18a,NW18b}), but the details here are much simpler.
We remark that with a bit more work, the methods in our proofs can also likely be used to prove the conclusions of \cref{thm:ber} and \cref{thm:comb} under the weaker (and strictly best-possible) assumptions that $\lim_{n\to \infty}(\min(pn,n-pn)-\log n)=\infty$ and $\lim_{n\to \infty}(\min(d,n-d)-\log n)=\infty$. However, in this note we wish to emphasise the simple ideas in our proofs and do not pursue this direction.
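As an illustrative aside (not used in any proof), the threshold behaviour in \cref{thm:ber} can be observed empirically for small $n$ by computing exact integer determinants of random Bernoulli matrices; the parameters below are chosen only for demonstration:

```python
import random

def det_bareiss(M):
    """Exact determinant of an integer matrix via fraction-free
    (Bareiss) elimination; all intermediate divisions are exact."""
    A = [row[:] for row in M]
    n = len(A)
    sign, prev = 1, 1
    for k in range(n - 1):
        if A[k][k] == 0:
            # Pivot: swap in a lower row with a nonzero entry in column k.
            for i in range(k + 1, n):
                if A[i][k] != 0:
                    A[k], A[i] = A[i], A[k]
                    sign = -sign
                    break
            else:
                return 0  # whole column is zero, so the matrix is singular
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
            A[i][k] = 0
        prev = A[k][k]
    return sign * A[-1][-1]

def singular_fraction(n, p, trials, seed=0):
    """Estimate Pr(M is singular) for an n x n matrix of i.i.d. Ber(p) entries."""
    rng = random.Random(seed)
    singular = sum(
        det_bareiss([[1 if rng.random() < p else 0 for _ in range(n)]
                     for _ in range(n)]) == 0
        for _ in range(trials)
    )
    return singular / trials
```

For fixed small $n$, the estimated singularity probability is large when $p$ is well below $\log n/n$ and decays as $p$ grows, in line with the sharp threshold.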
\textit{Notation.} All logarithms are to base $e$. We use common asymptotic notation, as follows. For real-valued functions $f(n)$ and $g(n)$, we write $f=O(g)$ to mean that there is some constant $C>0$ such that $\vert f\vert \leq Cg$. If $g$ is nonnegative, we write $f=\Omega(g)$ to mean that there is $c>0$ such that $f \geq cg$ for sufficiently large $n$. We write $f=o(g)$ to mean that $f(n)/g(n)\to 0$ as $n\to\infty$.
\section{A general lemma}
\label{sec:general}
In this section we prove a (very simple) lemma which will give us
a proof scheme for both \cref{thm:ber} and \cref{thm:comb}. For a
vector $x$, let $\supp(x)$ (the \emph{support} of $x$)
be the set of indices $i$ such that $x_{i}\ne0$.
\begin{lem}
\label{lem:general}Let $\FF$ be a field, and let $A\in\FF^{n\times n}$
be a random matrix with i.i.d.\ rows $R_{1},\dots,R_{n}$. Let $\mathcal{P}\subseteq\FF^{n}$
be any property of vectors in $\FF^{n}$. Then for any $t\in\RR$,
the probability that $A$ is singular is upper-bounded by
\begin{align}
& \Pr(x^{T}A=0\text{ for some nonzero }x\in\FF^{n}\text{ with }|\supp(x)|<t)\label{eq:small-supp}\\
& \qquad+\frac{n}{t}\Pr(\text{there is nonzero }x\notin\mathcal{P}\text{ such that }x\cdot R_{i}=0\text{ for all }i=1,\dots,n-1)\label{eq:P}\\
& \qquad+\frac{n}{t}\max_{x\in\mathcal{P}}\Pr(x\cdot R_{n}=0)\label{eq:LO}
\end{align}
\end{lem}
\begin{proof}
Note that $A$ is singular if and only if there is a nonzero $x\in\FF^{n}$
satisfying $x^{T}A=0$. Let $\mathcal{E}_{i}$ be the event that $R_{i}\in\spn\{ R_{1},\dots,R_{i-1},R_{i+1},\dots,R_{n}\} $,
and let $X$ be the number of $i$ for which $\mathcal{E}_{i}$ holds.
Then by Markov's inequality and the assumption that the rows $R_{1},\dots,R_{n}$ are i.i.d., we have
\[
\Pr\left(x^{T}A=0\text{ for some }x\text{ with }|\supp\left(x\right)|\ge t\right)\le\Pr(X\ge t)\le\frac{\E X}{t}=\frac{n}{t}\Pr(\mathcal{E}_{n}).
\]
It now suffices to show that $\frac{n}{t}\Pr(\mathcal{E}_{n})$ is upper-bounded by the sum of the terms \cref{eq:P} and \cref{eq:LO}. Note that after sampling $R_1,\dots, R_{n-1}$, we can always choose a nonzero vector $x\in \FF^n$ with $x\cdot R_{i}=0$ for $i=1,\dots,n-1$. If the event $\mathcal{E}_{n}$ occurs, we must have $x\cdot R_n=0$. Distinguishing the cases $x\not\in\mathcal{P}$ and $x\in\mathcal{P}$ now gives the desired bound.
\end{proof}
\section{Singularity of sparse Bernoulli matrices: a simple
proof}
\label{sec:ber}
Let us fix $0<\eps<1$. We will take $t=cn$ for some small constant $c$ (depending on $\varepsilon$), and let $\mathcal{P}$
be the property $\{ x\in\QQ^{n}:|\supp (x)|\geq t\} $.
All we need to do is to show that the three terms \cref{eq:small-supp},
\cref{eq:P} and \cref{eq:LO} in \cref{lem:general} are each of the form $o(1)$. The following
lemma is the main part of the proof.
\begin{lem}
\label{lem:ber-normal}Let $R_{1},\dots,R_{n-1}$ be the first $n-1$
rows of a random $\Ber(p)$ matrix, with $\min(p,1-p)\geq (1+\varepsilon)\log n/n$.
There is $c>0$ (depending only on $\varepsilon$) such that with probability
$1-o(1)$, no nonzero vector $x\in\QQ^{n}$ with $|\supp(x)|<cn$
satisfies $R_{i}\cdot x=0$ for all $i=1,\dots,n-1$.
\end{lem}
\begin{proof}
If such a vector $x$ were to exist, we would be able to multiply by an integer and then
divide by a power of two to obtain a vector $v\in\ZZ^{n}$ with
at least one odd entry also satisfying $|\supp(v)|<cn$
and $R_{i}\cdot v=0$ for $i=1,\dots,n-1$. Interpreting $v$ as a vector in $\ZZ_{2}^n$, we would have $R_{i}\cdot v\equiv 0 \pmod{2}$ for $i=1,\dots,n-1$ and furthermore $v\in \ZZ_{2}^n$ would be a nonzero vector consisting of less than $cn$ ones. We show that such a vector $v$ is unlikely to exist (working over $\ZZ_{2}$ discretises the problem, so that we may use a union bound).
Let $p^*=\min(p,1-p)\geq (1+\varepsilon)\log n/n$. Consider any $v\in\{ 0,1\} ^{n}$ with $|\supp(v)|=s$.
Then $R_{i}\cdot v$ for $i=1,\dots,n-1$ are i.i.d.\ $\Bin(s,p)$ random variables. Let $P_{s,p}$ be the probability that a $\Bin(s,p)$ random variable is even, so
\begin{align*}
P_{s,p} & =\frac{1}{2}\left(\sum_{i=0}^{s}\binom{s}{i}p^{i}(1-p)^{s-i}+\sum_{i=0}^{s}\binom{s}{i}(-1)^{i}p^{i}(1-p)^{s-i}\right)\\
& =\frac12+\frac{(1-2p)^{s}}2\leq \begin{cases}
e^{-\left(1+o(1)\right)sp^*} & \text{if }sp^*=o(1),\\
e^{-\Omega(1)} & \text{if }sp^*=\Omega(1).
\end{cases}
\end{align*}
Taking $r=\delta/p^*$ for sufficiently small $\delta$ (relative to
$\varepsilon$), the probability that there exists nonzero $v\in\ZZ_{2}^n$
with $|\supp(v)|<cn$ and $R_{i}\cdot v\equiv 0 \pmod{2}$
for all $i=1,\dots,n-1$ is at most (recalling that $p^*\geq (1+\varepsilon)\log n/n$)
\begin{align*}
\sum_{s=1}^{cn}\binom{n}{s}P_{s,p}^{n-1} & \le\sum_{s=1}^{r}e^{s\log n-(1-\varepsilon/3)snp^*}+\sum_{s=r+1}^{cn}e^{s(\log(n/s)+1)-\Omega(n)}\\
& \le\sum_{s=1}^{\infty}n^{-s\varepsilon/3}+\sum_{s=1}^{cn}e^{n\left((s/n)(\log(n/s)+1)-\Omega(1)\right)}=o(1),
\end{align*}
provided $c$ is sufficiently small (relative to $\delta$).
\end{proof}
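The closed form $P_{s,p}=\frac12+\frac{(1-2p)^{s}}2$ used above is easy to sanity-check numerically (a quick verification, not part of the argument):

```python
from math import comb

def prob_binomial_even(s, p):
    """Pr(Bin(s, p) is even), summing the binomial pmf over even outcomes."""
    return sum(comb(s, i) * p**i * (1 - p)**(s - i) for i in range(0, s + 1, 2))

def closed_form(s, p):
    """The identity P_{s,p} = 1/2 + (1 - 2p)^s / 2 derived in the proof."""
    return 0.5 + 0.5 * (1.0 - 2.0 * p) ** s
```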
Taking $c$ as in \cref{lem:ber-normal}, we immediately see that the
term \cref{eq:P} is of the form $o\left(1\right)$. Observing that
the rows and columns of $M$ have the same distribution, and that
the event $x^{T}M=0$ is simply the event that $x\cdot C_{i}=0$ for
each column $C_{i}$ of $M$, it also follows from \cref{lem:ber-normal}
that the term \cref{eq:small-supp} is of the form $o\left(1\right)$.
Finally, the following straightforward generalisation of the well-known Erd\H os--Littlewood--Offord
theorem shows that the
term \cref{eq:LO} is of the form $o\left(1\right)$, which completes the proof of \cref{thm:ber}. This lemma, the only nontrivial ingredient in the proof, is a special case of \cite[Lemma~8.2]{CV08}, but it can also be quite straightforwardly deduced from the Erd\H os--Littlewood--Offord theorem itself.
\begin{lem}
\label{lem:L-O}Consider a (non-random) vector $x=(x_{1},\dots,x_{n})\in\RR^{n}$,
and let $\xi_{1},\dots,\xi_{n}$ be i.i.d.\ $\Ber(p)$
random variables, and let $p^*=\min(p,1-p)$. Then
\[
\max_{a\in \RR}\Pr(x_{1}\xi_{1}+\dots+x_{n}\xi_{n}=a)=O\left(\frac{1}{\sqrt{|\supp(x)|p^*}}\right).
\]
\end{lem}
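For intuition, in the special case $x=(1,\dots,1)$ the sum $x_{1}\xi_{1}+\dots+x_{n}\xi_{n}$ is $\Bin(n,p)$-distributed, so the left-hand side of the lemma is the largest atom of that distribution; the sketch below (with an assumed constant $2$ standing in for the implicit $O(\cdot)$ constant) illustrates the bound numerically:

```python
from math import comb, sqrt

def largest_atom(n, p):
    """max_a Pr(Bin(n, p) = a): the lemma's left-hand side for x = (1,...,1)."""
    return max(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1))
```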
\section{Singularity of sparse combinatorial matrices}
\label{sec:comb}
Let us again fix $0<\eps<1$. The proof of \cref{thm:comb} proceeds in almost exactly the same way
as the proof of \cref{thm:ber}, but there are three significant complications.
First, since the entries are no longer independent, the calculations
become somewhat more technical. Second, the rows and columns of $Q$ have different distributions,
so we need two versions of \cref{lem:ber-normal}: one for vectors in the left kernel and one for vectors in the right kernel. Third, the fact that
each row has exactly $d$ ones means that we are not quite as free
to do computations over $\ZZ_{2}$ (for example, if $d$ is even and
$v$ is the all-ones vector then we always have $Qv=0$ over $\ZZ_{2}$). For certain parts of the argument we will instead work over $\ZZ_{d-1}$.
Before we start the proof, the following lemma will allow us to restrict our attention to the case where $d\le n/2$, which will be convenient.
\begin{lem}
Let $Q\in \RR^{n\times n}$ be a matrix whose every row has sum $d$, for some $d\notin\{0,n\}$. Let $J$ be the $n\times n$ all-ones matrix. Then $Q$ is singular if and only if $J-Q$ is singular.
\end{lem}
\begin{proof}
Note that the all-ones vector $\one$ is in the column space of $Q$ (since the sum of all columns of $Q$ equals $d\one$). Hence every column of $J-Q$ is in the column space of $Q$. Therefore, if $Q$ is singular, then $J-Q$ is singular as well. The opposite implication can be proved the same way.
\end{proof}
In the rest of the section we prove \cref{thm:comb} under the assumption that $(1+\eps)\log n\le d\le n/2$ (note that if $Q$ is a uniformly random zero-one matrix with every row having exactly $d$ ones, then $J-Q$ is a uniformly random zero-one matrix with every row having exactly $n-d$ ones).
The first ingredient we will need is an analogue of \cref{lem:L-O} for ``combinatorial''
random vectors. In addition to the notion of the support
of a vector, we define a \emph{fibre} of a vector to be a set of all
indices whose entries are equal to a particular value.
\begin{lem}
\label{lem:comb-LO}Let $0\le d\le n/2$, and consider a (non-random) vector $x\in\RR^{n}$ whose largest fibre
has size $n-s$, and let $\gamma\in\{ 0,1\} ^{n}$ be a random
zero-one vector with exactly $d$ ones. Then
\[
\max_{a\in \RR}\Pr(x\cdot\gamma=a)=O\left(\sqrt{n/(sd)}\right).
\]
\end{lem}
We deduce \cref{lem:comb-LO} from
the $p=1/2$ case of \cref{lem:L-O} (that is, from the Erd\H os--Littlewood--Offord
theorem~\cite{Erd45}).
\begin{proof}
The case $p=1/2$ is treated in \cite[Proposition~4.10]{LLTTY17}; this proof proceeds along similar lines. Let $p=d/n\leq 1/2$. We realise the distribution
of $\gamma$ as follows. First choose $d=pn$ random disjoint pairs
$\left(i_{1},j_{1}\right),\dots,\left(i_{pn},j_{pn}\right)\in\left\{ 1,\dots,n\right\} ^{2}$ (each having distinct entries),
and then determine the 1-entries in $\gamma$ by randomly choosing one
element from each pair.
We first claim that with probability $1-e^{-\Omega\left(sp\right)}$,
at least $\Omega\left(sp\right)$ of our pairs $\left(i,j\right)$
have $x_{i}\ne x_{j}$ (we say such a pair is \emph{good}). To see
this, let $I$ be a union of fibres of $x$, chosen such that $\left|I\right|\ge n/3$
and $n-\left|I\right|\geq s/3$ (if $s\le 2n/3$ we can
simply take $I$ to be the largest fibre of $x$, and otherwise we
can greedily add fibres to $I$ until $\left|I\right|\ge n/3$). To prove our claim, we will prove that in fact with the desired probability there are $\Omega(sp)$ different $\ell$ for which $i_\ell\notin I$ and $j_\ell\in I$.
Let $f=\ceil{pn/6}$ and let $S$ be the set of $\ell\le f$ for which
$i_{\ell}\notin I$. So, $\left|S\right|$ has a hypergeometric distribution
with mean $(n-|I|)f/n=\Omega\left(sp\right)$, and by a Chernoff bound (see for
example \cite[Theorem~2.10]{JLR00}), we have $\left|S\right|=\Omega\left(sp\right)$ with
probability $1-e^{-\Omega\left(sp\right)}$. Condition on such an outcome
of $i_{1},\dots,i_{f}$. Next, let $T$ be the set of $\ell\in S$ for which $j_{\ell}\in I$.
Then, conditionally, $\left|T\right|$ has a hypergeometric distribution
with mean at least $(|I|-f)|S|/n=\Omega\left(sp\right)$, so again using a Chernoff
bound we have $\left|T\right|=\Omega\left(sp\right)$ with probability
$1-e^{-\Omega\left(sp\right)}$, as claimed.
Now, condition on an outcome of our random pairs such that at least
$\Omega(sp)$ of them are good. Let $\xi_{\ell}$ be the indicator random variable for the event that
$i_{\ell}$ is chosen from the pair $\left(i_{\ell},j_{\ell}\right)$,
so $\xi_{1},\dots,\xi_{pn}$ are i.i.d.\ $\Ber\left(1/2\right)$
random variables, and $x\cdot\gamma=a$ if and only if
\[
(x_{i_{1}}-x_{j_{1}})\xi_{1}+\dots+(x_{i_{pn}}-x_{j_{pn}})\xi_{pn}=a-x_{j_{1}}-\dots-x_{j_{pn}}.
\]
Under our conditioning, $\Omega\left(sp\right)$ of the $x_{i_{\ell}}-x_{j_{\ell}}$
are nonzero, so by \cref{lem:L-O} with $p=1/2$, conditionally we have $\Pr\left(x\cdot\gamma=a\right)\le O\left(1/\sqrt{sp}\right)$.
We deduce that unconditionally
\[
\Pr(x\cdot\gamma=a)\le e^{-\Omega(sp)}+O(1/\sqrt{sp})=O(1/\sqrt{sp})=O(\sqrt{n/(sd)}),
\]
as desired.
\end{proof}
The proof of \cref{thm:comb} then reduces to the following two lemmas. Indeed, for a constant $c>0$ (depending on $\varepsilon$) satisfying the statements in \cref{lem:comb-normal-hard,lem:comb-normal}, we can take $t=cn/\log d$, and
\[\mathcal P=\{x\in \QQ^n:x\text{ has largest fibre of size at most }(1-c/\log d)n\}.\]
We can then apply \cref{lem:general}. By \cref{lem:comb-normal-hard}, the term \cref{eq:small-supp} is bounded by $o(1)$, by \cref{lem:comb-normal} the term \cref{eq:P} is bounded by $(n/t)\cdot n^{-\Omega(1)}=(\log d/c)\cdot n^{-\Omega(1)}=o(1)$, and by \cref{lem:comb-LO} the term \cref{eq:LO} is bounded by $(n/t)\cdot O\left(\sqrt{n\log d/(cnd)}\right)= O(\log^{3/2}d/\sqrt d)=o(1)$.
\begin{lem}
\label{lem:comb-normal-hard}Let $Q$ be a random combinatorial matrix (with $d$ ones in each row),
with $(1+\varepsilon)\log n\le d\le n/2$. There
is $c>0$ (depending only on $\varepsilon$) such that with probability
$1-o(1)$, there is no nonzero vector $x\in\QQ^{n}$ with
$|\supp(x)|<cn/\log d$ and $x^{T}Q=0$.
\end{lem}
\begin{lem}
\label{lem:comb-normal}Let $R_{1},\dots,R_{n-1}$ be the first $n-1$ rows of a random combinatorial matrix (with $d$ ones in each row), with $(1+\varepsilon)\log n\le d\le n/2$. There is $c>0$ (depending only on $\varepsilon$)
such that with probability $1-n^{-\Omega(1)}$, every nonzero $x\in \QQ^n$ satisfying $R_{i}\cdot x=0$
for all $i=1,\dots,n-1$ has largest fibre of size at most $(1-c/\log d)n$.
\end{lem}
\begin{proof}[Proof of \cref{lem:comb-normal-hard}]
As in \cref{lem:ber-normal}, it suffices to work over $\ZZ_{2}$.
Let $C_{1},\dots,C_{n}$ be the columns of $Q$, consider any $v\in\ZZ_{2}^{n}$
with $|\supp(v)|=s$, and let $\mathcal{E}_v$
be the event that $C_{i}\cdot v\equiv 0\pmod{2}$ for $i=1,\dots,n$. Note that $\mathcal{E}_v$
only depends on the submatrix $Q_v$ of $Q$ containing only those
rows $j$ with $v_{j}=1$ (and $\mathcal{E}_v$ is precisely the event that every column of $Q_v$ has an even sum).
Let $p=d/n\le 1/2$, let $M_v$ be a random $s\times n$ matrix with i.i.d.\ $\Ber(p)$
entries, and let $\mathcal{E}_v'$
be the event that every column in $M_v$ has an even sum. Note that $M_v$
is very similar to $Q_v$, so the probability of $\mathcal E_v$ is very similar to the probability of $\mathcal E_v'$. Indeed, writing $R_{1},\dots,R_{s}$ and $R_{1}',\dots,R_{s}'$
for the rows of $Q_v$ and $M_v$ respectively, and writing $s_j=|\supp(R_j')|$, for each $j$ we have $s_j\sim \Bin(n,p)$, so an elementary computation using Stirling's formula shows that $\Pr(s_j=d)=\Omega(1/\sqrt{d})=e^{-O(\log d)}$. Hence
\[
\Pr(\mathcal{E}_v)=\Pr(\mathcal{E}_v'\,|\,s_j=d\text{ for all }j)\le\Pr(\mathcal{E}_v')/\Pr(s_j=d\text{ for all }j)=e^{O(s\log d )}\Pr(\mathcal{E}_v')=e^{O(s\log (pn))}\Pr(\mathcal{E}_v').
\]
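For the reader's convenience, the conditioning step above is just the elementary bound $\Pr(A\mid B)\le\Pr(A)/\Pr(B)$ combined with the independence of the rows of $M_v$; spelled out,

```latex
\Pr(\mathcal{E}_v)=\Pr\bigl(\mathcal{E}_v'\,\big|\,s_j=d\text{ for all }j\bigr)
\le\frac{\Pr(\mathcal{E}_v')}{\Pr(s_j=d\text{ for all }j)}
=\frac{\Pr(\mathcal{E}_v')}{\prod_{j=1}^{s}\Pr(s_j=d)}
=e^{O(s\log d)}\Pr(\mathcal{E}_v'),
```

using $\Pr(s_j=d)=e^{-O(\log d)}$ for each of the $s$ independent rows of $M_v$.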
Recalling the quantity $P_{s,p}$ from the proof of \cref{lem:ber-normal},
we have
\[
\Pr(\mathcal{E}_v')=P_{s,p}^{n}=\begin{cases}
e^{-\left(1+o(1)\right)spn} & \text{if }sp=o(1),\\
e^{-\Omega(n)} & \text{if }sp=\Omega(1),
\end{cases}
\]
so if $s\le cn/\log d=cn/\log(pn)$ for small $c>0$, then we also have
\[
\Pr(\mathcal{E}_v)\leq \begin{cases}
e^{-\left(1+o(1)\right)spn} & \text{if }sp=o(1),\\
e^{-\Omega(n)} & \text{if }sp=\Omega(1).
\end{cases}
\]
Let $P_s=\Pr(\mathcal{E}_v)$ (which only depends on $s$). We can now conclude the proof in exactly the same way as in \cref{lem:ber-normal}. Taking $r=\delta/p$ for sufficiently small $\delta$ (relative to
$\varepsilon$), the probability that there exists nonzero $v\in\ZZ_{2}^n$
with $|\supp(v)|<cn/\log d$ and $C_{i}\cdot v\equiv 0\pmod{2}$
for all $i=1,\dots,n$ is at most
\begin{align*}
\sum_{s=1}^{cn/\log d}\binom{n}{s}P_{s} & \le\sum_{s=1}^{r}e^{s\log n-(1-\varepsilon/3)snp}+\sum_{s=r+1}^{cn/\log d}e^{s(\log(n/s)+1)-\Omega(n)}\\
& \le\sum_{s=1}^{\infty}n^{-s\varepsilon/3}+\sum_{s=1}^{cn/\log d}e^{n\left((s/n)(\log(n/s)+1)-\Omega(1)\right)}=o(1),
\end{align*}
provided $c$ is sufficiently small (relative to $\delta$).
\end{proof}
We will deduce \cref{lem:comb-normal} from the following lemma.
\begin{lem}
\label{lem:hypergeometric}Suppose $p\le1/2$ and $pn\to\infty$, and let $\gamma\in\{ 0,1\} ^{n}$ be
a random vector with exactly $pn$ ones. Let $q\geq 2$ be an integer and consider a (non-random) vector $v\in\ZZ_{q}^{n}$
whose largest fibre has size $n-s$. Then for any $a\in \ZZ_q$ we have $\Pr(v\cdot\gamma\equiv a \pmod{q})\le P_{p,n,s}$
for some $P_{p,n,s}$ (only depending on $p$, $n$ and $s$) satisfying
\[
P_{p,n,s}=\begin{cases}
e^{-\Omega(1)} & \text{when }sp=\Omega(1),\\
e^{-\left(1-o(1)\right)sp} & \text{when }sp=o(1)
\end{cases}.
\]
\end{lem}
\begin{proof}
As in the proof of \cref{lem:comb-LO}, we realise the distribution of $\gamma$ by first choosing $pn$ random disjoint pairs $(i_{1},j_{1}),\dots,(i_{pn},j_{pn})\in\{ 1,\dots,n\} ^{2}$,
and then randomly choosing one element from each pair to comprise the 1-entries of $\gamma$.
Let $\mathcal{E}$ be the event that $v_{i}\ne v_{j}$ for at least
one of our random pairs $(i,j)$. Then $\Pr(v\cdot\gamma\equiv a \pmod {q}\,|\,\mathcal{E})\le1/2$.
So, it actually suffices to prove that
\[
\Pr(\mathcal{E})\ge\begin{cases}
\Omega(1) & \text{when }sp=\Omega(1),\\
\left(2-o(1)\right)sp & \text{when }sp=o(1).
\end{cases}
\]
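To see that this suffices, one combines the conditional bound $\Pr(v\cdot\gamma\equiv a\pmod q\mid\mathcal E)\le 1/2$ with the trivial bound $1$ on $\overline{\mathcal{E}}$; a sketch of the bookkeeping:

```latex
\Pr(v\cdot\gamma\equiv a\pmod{q})\le\Pr(\overline{\mathcal{E}})+\tfrac{1}{2}\Pr(\mathcal{E})
=1-\tfrac{1}{2}\Pr(\mathcal{E})\le\begin{cases}
e^{-\Omega(1)} & \text{when }sp=\Omega(1),\\
1-\left(1-o(1)\right)sp\le e^{-\left(1-o(1)\right)sp} & \text{when }sp=o(1),
\end{cases}
```

which matches the claimed form of $P_{p,n,s}$ (the last step uses $1-x\le e^{-x}$).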
If $s\ge n/3$ (this can only occur if $sp=\Omega(1)$),
then we can choose $J\su \{1,\dots,n\}$ to be a union of fibres of the vector $v\in \ZZ_q^n$ such that $n/3\le|J|\le2n/3$.
In this case,
\[
\Pr(\mathcal{E})\ge\Pr(i_{1}\in J,\,j_{1}\notin J)=\Omega(1),
\]
as desired. So, we assume $s<n/3$, and let $I\su \{1,\dots,n\}$ be the set of indices in the largest fibre of $v$ (so $|I|=n-s$).
Note that $\mathcal{E}$ occurs whenever there is a pair $\{ i_{k},j_{k}\} $
with exactly one element in $I$.
Let $\mathcal{F}$ be the event that $i_{k}\in I$ for all $k=1,\dots,pn$. We have
\[
\Pr(\mathcal{E}\,|\,\mathcal{F})\ge1-(1-s/n)^{pn}=\begin{cases}
\Omega(1) & \text{when }sp=\Omega(1),\\
\left(1-o(1)\right)sp & \text{when }sp=o(1),
\end{cases}
\]
and
\[
\Pr(\mathcal{E}\,|\,\overline{\mathcal{F}})\ge(n-s-pn)/(n-pn)=\begin{cases}
\Omega(1) & \text{when }sp=\Omega(1),\\
1-o(1) & \text{when }sp=o(1).
\end{cases}
\]
This already implies that if $sp=\Omega(1)$, then $\Pr(\mathcal{E})=\Omega(1)$
as desired. If $sp=o(1)$ then $\Pr(\mathcal{F})\le(1-s/n)^{pn}=1-\left(1+o(1)\right)sp$,
so
\[
\Pr(\mathcal{E})=\Pr(\mathcal{F})\Pr(\mathcal{E}\,|\,\mathcal{F})+\Pr(\overline{\mathcal{F}})\Pr(\mathcal{E}\,|\,\mathcal{\overline{\mathcal{F}}})\geq \left(2-o(1)\right)sp,
\]
as desired.
\end{proof}
\begin{proof}[Proof of \cref{lem:comb-normal}]
Let $q=d-1$. It suffices to prove that with probability $1-o(1)$ there is no nonconstant ``bad'' vector
$v\in\ZZ_{q}^n$ whose largest fibre has size at least $(1-c/\log q)n$
and which satisfies $R_{i}\cdot v\equiv 0\pmod{q}$ for all $i=1,\dots,n-1$. (Note that by the choice of $q$, if $v\in \ZZ_{q}^n$ is constant and nonzero, then it is impossible to have $v\cdot R_1=0$).
Let $p=d/n$, consider any $v\in\ZZ_{q}^n$ whose largest fibre has size $n-s$, and
consider any $i\in \{1,\dots,n-1\}$. Then $R_{i}\cdot v$ is of the form in \cref{lem:hypergeometric},
so taking $r=\delta/p$ for sufficiently small $\delta$ (relative to $\varepsilon$), the probability
that such a bad vector exists is at most
\begin{align*}
\sum_{s=1}^{c'n/\log q}\binom{n}{s}q^{s+1}P_{p,n,s}^{n-1} & \le\sum_{s=1}^{r}e^{s\log n+(s+1)2\sqrt{pn}-(1-\varepsilon/3)spn}+\sum_{s=r+1}^{c'n/\log q}e^{s(\log(n/s)+1)+cn+2\sqrt{pn}-\Omega(n)}\\
& \le\sum_{s=1}^{\infty}n^{-s\varepsilon/3}+\sum_{s=1}^{c'n/\log q}e^{n\left((s/n)(\log(n/s)+1)-\Omega(1)\right)}=n^{-\Omega(1)},
\end{align*}
provided $c'>0$ is sufficiently small (relative to $\delta$) and $n$ is sufficiently large.
\end{proof}
\section{Introduction and main results}
We study initial boundary value problem of the form:
\begin{subequations}\begin{eqnarray}
\label{ME1}
v_t&=&v^k(\Delta v-f(x,t))\ \text{in}\ Q_T=\Omega\times(0,\,T),\\
\label{ME2}
\nabla v\cdot\mathbf{n}&=&0\ \text{in}\ \partial\Omega\times(0,\,T),\\
\label{ME3}
v(x,\,0)&=&v_0(x)\ge 0,
\end{eqnarray}\end{subequations}
where $k>1$, $\Omega\subset\mathbb{R}^N$ is a bounded convex domain with Lipschitz boundary,
\begin{equation}
\label{f}
f\in L^s(0,+\infty;L^r(\Omega))\quad\text{and}\quad\int\limits_{\Omega} f(x,t)\,dx=0\quad\forall\, t\ge 0
\end{equation}
for some $s\in[1,\,+\infty]$ and $r\in(N/2,\,+\infty]$.
Note that \rf{ME1}--\rf{ME3} and \rf{f} together imply
\begin{equation}
\label{CM}
\displaystyle\int\limits_{\Omega}\tfrac{dx}{v^{k-1}}=\int\limits_{\Omega}\tfrac{dx}{v_0^{k-1}}=M\quad\text{for all}\ t>0.
\end{equation}
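Indeed, \rf{CM} follows from a one-line computation using \rf{ME1}, the Neumann condition \rf{ME2} and the zero-mean assumption in \rf{f}:

```latex
\frac{d}{dt}\int\limits_{\Omega}\frac{dx}{v^{k-1}}
=(1-k)\int\limits_{\Omega}\frac{v_t}{v^{k}}\,dx
=(1-k)\int\limits_{\Omega}(\Delta v-f)\,dx
=(1-k)\Bigl(\int\limits_{\partial\Omega}\nabla v\cdot\mathbf{n}\,dS-\int\limits_{\Omega}f\,dx\Bigr)=0.
```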
The motivation to study \rf{ME1}--\rf{ME3} with source term \rf{f} comes from the following relation of this problem to the so-called very fast and singular diffusion equations of porous medium type (PME) (see~\cite{DP97}--\cite{GM13} and references therein). By denoting
\begin{equation}
\label{u_v}
u=\tfrac{1}{v^{k-1}}\quad\text{and}\quad m=\tfrac{k}{k-1}
\end{equation}
problem \rf{ME1}--\rf{ME3} transforms into the PME-type initial boundary value problem
\begin{subequations}\begin{eqnarray}
\label{PME1}
u_t&=&\mathrm{div}\bigl(\tfrac{\nabla u}{u^m}\bigr)+ \tfrac{1}{m-1} f(x,t)\ \text{in}\ Q_T=\Omega\times(0,\,T),\\
\label{PME2}
\nabla u\cdot\mathbf{n}&=&0\ \text{in}\ \partial\Omega\times(0,\,T),\\
\label{PME3}
u(x,\,0)&=&\tfrac{1}{v^{k-1}_0(x)}.
\end{eqnarray}\end{subequations}
The latter problem has the source function $\frac{1}{m-1} f(x,t)$ and obeys the conservation law
\begin{equation}
\label{PCM}
\displaystyle\int\limits_{\Omega}u\,dx=M\quad\text{for all}\ t\ge 0.
\end{equation}
Exploring the relation between problems \rf{ME1}--\rf{ME3} and \rf{PME1}--\rf{PME3} one finds that exponent
$k=2$ in the former model corresponds to $m=2$ in the latter, so that \rf{PME1} represents the singular diffusion PME. Accordingly, for $k\in(2,+\infty)$ one has $m\in(1,\,2)$, so that \rf{PME1} represents very fast diffusion PMEs. The limit $k\rightarrow+\infty$ formally corresponds to the exponent $\frac{1}{u}$ in \rf{PME1} with $m=1$. On the other hand, for $k\in(1,2)$ one has $m\in(2,\,+\infty)$, with the exponent in \rf{PME1} being more degenerate than in the singular diffusion PME.
In dimension $N=1$, regularity and asymptotic convergence to a unique positive steady state of solutions to \rf{PME1}--\rf{PME3} were shown in~\cite{FKT18,KT20}. To the best of our knowledge, asymptotic properties and regularity of solutions to problem \rf{PME1}--\rf{PME3} equipped with a source term \rf{f} in dimensions higher than one in the given range $m\in(1,\,+\infty)$ were only investigated for the case $m=2$ under the additional condition $f(\cdot,t)\rightarrow 0$ as $t\rightarrow\infty$ (see~\cite{SSZ16} and references therein). In this study, we extend the results of~\cite{SSZ16,FKT18,KT20} in two ways. Firstly, we present new results on regularity and asymptotic decay properties of solutions to problem \rf{PME1}--\rf{PME3} with general exponent $m\in(1,\,\infty)$ in dimensions $N=2$ and $N=3$. Secondly, we consider general source terms $f(x,t)$ as in \rf{f} that, in particular, include nonzero stationary ones, $f(x,\,t)=f(x)$.
We also consider the unique positive solution $v_\infty\in W^{2,\,r}(\Omega)$ of the stationary problem for \rf{ME1}--\rf{ME3}:
\begin{subequations}\begin{eqnarray}
\label{SME1}
\Delta v_\infty&=&f_\infty(x)\ \text{in}\ \Omega,\\
\label{SME2}
\nabla v_\infty\cdot\mathbf{n}&=&0\ \text{in}\ \partial\Omega,\\
\label{SME3}
\int\limits_{\Omega}\tfrac{dx}{v_\infty^{k-1}}&=&M,
\end{eqnarray}\end{subequations}
where we assume that there exists a limiting source function $f_\infty(x)$ such that $f(\cdot,\,t)\rightarrow f_\infty(\cdot)$ as $t\rightarrow\infty$, as stated in Theorems 1.1--1.2. Existence and uniqueness of such $v_\infty>0$ for $f_\infty\in L^r(\Omega)$
with $r>N/2$ is shown in Appendix A.2.
The main results of our article are stated in the following two theorems.
\begin{theorem}[time-homogeneous source]
Let $f(x,t)=f_\infty(x)=f(x)$ in \rf{f} with $r\in(N/2,\,+\infty]$.
Assume that $v_0(x)\ge 0$ and in the case $N=2$,
\begin{equation}
\label{ic_as2}
k\ge\tfrac{2r-1}{r-1}\quad\text{and}\quad\int\limits_{\Omega}\tfrac{1}{v_0^{k+\varepsilon}}<\infty\quad\text{for a sufficiently small}\quad\varepsilon>0,
\end{equation}
while in the case $N=3$,
\begin{equation}
\label{ic_as3}
k>\tfrac{5r-3}{2r-3}\quad\text{and}\quad\int\limits_{\Omega}\tfrac{1}{v_0^\frac{3k}{2}}<\infty.
\end{equation}
Then for any non-negative solution $v(x,\,t)$ to problem \rf{ME1}--\rf{ME3},
satisfying constraint \rf{CM} for all $t>0$, the following asymptotic decay to the unique positive solution $v_\infty$ to the
stationary problem \rf{SME1}--\rf{SME3} holds:
\begin{equation}
\label{H1_decay}
\|v(\cdot,t)-v_\infty\|_{H^1(\Omega)}\le C_P\|\nabla(v_0-v_\infty)\|_2\exp(-\bar{C}t)\quad\text{for all }\,t>0,
\end{equation}
in particular, $v(\cdot,t)\rightarrow v_\infty$ in $H^1(\Omega)$ as $t\rightarrow+\infty$. The positive constants $C_P$ and $\bar{C}$ in \rf{H1_decay} depend only on $k,\,r,\, |\Omega|,\,M,\,\|f\|_r$ and $v_0$.
\end{theorem}
\begin{theorem}[time-inhomogeneous source]
Let $r\in(N/2,\,+\infty]$ in \rf{f} and
\begin{equation}
\label{f_tau}
\int_0^\infty\|f_\tau\|_2\,d\tau<\infty
\end{equation}
hold. Let
\begin{equation}
\label{f_conv}
\|f(\cdot,\,t)-f_\infty\|_r\rightarrow 0 \text{ as } t\rightarrow+\infty,\quad\|f(\cdot,\,t)-f_0\|_r\rightarrow 0 \text{ as } t\rightarrow 0.
\end{equation}
In case $N=2$, let $s=\infty$ for $k\in[\tfrac{2r-1}{r-1},\,+\infty)$ and any $s\in[1,\,\frac{r}{r-(k-1)(r-1)})$
for $k\in(\frac{r}{r-1},\,\tfrac{2r-1}{r-1})$ in \rf{f}.
In case $N=3$, let $s=\infty$ for $k\in(\tfrac{5r-3}{2r-3},\,+\infty)$ and any $s\in[1,\tfrac{r(k+2)}{r(5-2k)+3(k-1)}]$ for $k\in[\frac{r}{r-3/2},\,\tfrac{5r-3}{2r-3}]$ in \rf{f}.
Additionally, assume that $v_0(x)\ge 0$, $E(v_0)<0$ in \rf{en_def}, and \rf{ic_as2} or \rf{ic_as3}
hold for $N=2$ or $N=3$, respectively.
Then any non-negative solution $v(x,\,t)$ to problem \rf{ME1}--\rf{ME3},
satisfying constraint \rf{CM} for all $t>0$, converges to the unique positive solution $v_\infty$ to the
stationary problem \rf{SME1}--\rf{SME3} in $H^1(\Omega)$, i.e.
\begin{equation}
\label{non_stat_conv}
\|v(\cdot,t)-v_\infty\|_{H^1(\Omega)}\rightarrow 0 \text{ as } t\rightarrow+\infty.
\end{equation}
\end{theorem}
Theorems 1.1--1.2 indicate, for $N=2$ and $N=3$, critical values $k_{cr}(r)$ of the exponent $k$ in \rf{ME1}--\rf{ME3} (see also \rf{k_cr} and \rf{k_cr1} below) for the asymptotic $H^1$-convergence of dynamical solutions to a unique positive steady state to hold in general. Additionally, using transformation \rf{u_v}, the statements of Theorems 1.1--1.2 can be rewritten in terms of solutions to the PME problem \rf{PME1}--\rf{PME3}. In particular, \rf{H1_decay} and \rf{non_stat_conv} imply the asymptotic decays
\begin{equation*}
\|u^\frac{1}{1-k}(\cdot,t)-u_\infty^\frac{1}{1-k}\|_{H^1(\Omega)}\le C_P\|\nabla\bigl(u_0^\frac{1}{1-k}-u_\infty^\frac{1}{1-k}\bigr)\|_2\exp(-\bar{C}t)\quad\text{for all }\,t>0
\end{equation*}
and
\begin{equation*}
\|u^\frac{1}{1-k}(\cdot,t)-u_\infty^\frac{1}{1-k}\|_{H^1(\Omega)}\rightarrow 0 \text{ as } t\rightarrow+\infty,
\end{equation*}
respectively, where $u_\infty$ is the solution of the stationary problem for \rf{PME1}--\rf{PME3}:
\begin{eqnarray*}
\mathrm{div}\bigl(\tfrac{\nabla u_\infty}{u_\infty^m}\bigr)&=&-\tfrac{1}{m-1}f_\infty(x)\ \text{in}\ \Omega,\\
\nabla u_\infty\cdot\mathbf{n}&=&0\ \text{in}\ \partial\Omega,\\
\int\limits_{\Omega}u_\infty\,dx&=&M.
\end{eqnarray*}
Let us also comment on two important applications of Theorems 1.1--1.2. For $k=2$ problem \rf{ME1}--\rf{ME3} corresponds to the singular heat equation \rf{PME1} with $m=2$, which in the case $N=2$ is further related to the 3D viscous sheet model analyzed in~\cite{KT20}--\cite{KLN11}, the model that provided the primary motivation for this study. The source term considered in~\cite{FKT18,KT20} is homogeneous in time, $f(x)\in L^\infty(\Omega)$. Theorem 1.1 for $N=2,\,k=2$ and $r=\infty$, combined with positivity and smoothness of dynamical solutions for all $t>0$ provided $v_0(x)>0$ (see Appendix A.1), implies the asymptotic exponential decay formula \rf{H1_decay}. This result shows exponential decay of the solutions of the 3D viscous sheet model to a selected steady state, which depends only on the initial height and velocity distributions in the sheet. We also note that for $N=2$ and $r=+\infty$ the critical value of the exponent
\rf{k_cr} is $k_{cr}(+\infty)=2$. For $r=+\infty$ and $k\in(1,\,2)$ convergence to a nonhomogeneous positive steady state cannot in general be expected, but Theorem 1.2 still identifies $s_{cr}=\tfrac{r(k+2)}{r(5-2k)+3(k-1)}=\tfrac{k+2}{5-2k}>1$ such that decay \rf{non_stat_conv} to the constant state (see \rf{const_ss} in Appendix A.2) holds for any time exponent $s\in[1,\,s_{cr})$.
The second application comes from models closely related to the 3D Penrose--Fife phase-field model~\cite{PF90}, corresponding to $N=3$ and $k=2$ in problem \rf{ME1}--\rf{ME3} (see~\cite{SSZ16} and references therein). We note that the minimal regularity for this model suggested in~\cite{SSZ16}, i.e. $s=2$ and $r=3$, is exactly reflected in the conditions of Theorem 1.2. Interestingly, here we land on another borderline case, $k=\frac{r}{r-3/2}=2$ and $s=s_{cr}=\tfrac{r(k+2)}{r(5-2k)+3(k-1)}=2$. Additionally, for $N=3$ and $r=+\infty$ one has the critical value of the exponent \rf{k_cr1} $k_{cr}(+\infty)=2.5$, and Theorem 1.2 still identifies $s_{cr}(k)>1$ for any $k\in(1,\,2.5]$.
The organization of the article is as follows. Proofs of Theorems 1.1--1.2 are provided in section~3 for $N=2$ and in section~4 for $N=3$. In section~2 we collect some basic estimates used frequently in the proofs. In the discussion section we state interesting open questions and give an outlook.
\section{Basic estimates}
By multiplying \rf{ME1} with $\Delta v-f(x,t)$ and integrating by parts one shows the energy equality
\begin{equation}
\label{EE}
\displaystyle\tfrac{d}{dt}E(v(t))+\int\limits_{\Omega} v^k(\Delta v-f)^2\,dx=\int\limits_{\Omega} vf_t\,dx
\end{equation}
with
\begin{equation}
\label{en_def}
E(v)=\int\limits_{\Omega}[\tfrac{1}{2}|\nabla v|^2+fv]\,dx,
\end{equation}
that consequently implies
\begin{equation}
\label{grad_v}
\|\nabla v(t)\|_2\le C \text{ for all } t>0
\end{equation}
provided \rf{f_tau} holds, with constant $C$ depending on the value of the integral in \rf{f_tau} as well as on $E(v_0),\,\|f_0\|_2,\,M,\,|\Omega|$. The proof of \rf{grad_v} is given in Appendix~B.
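For completeness, the integration by parts behind the energy equality \rf{EE} can be sketched as follows (using only \rf{ME1}, \rf{ME2} and the definition \rf{en_def}):

```latex
\int\limits_{\Omega}v_t(\Delta v-f)\,dx
=-\tfrac{1}{2}\tfrac{d}{dt}\int\limits_{\Omega}|\nabla v|^2\,dx
-\tfrac{d}{dt}\int\limits_{\Omega}vf\,dx+\int\limits_{\Omega}vf_t\,dx
=-\tfrac{d}{dt}E(v)+\int\limits_{\Omega}vf_t\,dx,
```

while by \rf{ME1} the same integral equals $\int_{\Omega}v^k(\Delta v-f)^2\,dx$; equating the two expressions gives \rf{EE}.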
Let us denote the difference between dynamical and stationary solutions by $w=v(x,t)-v_\infty(x)$. Then one
can rewrite \rf{ME1} as follows
\begin{equation*}
v_t=v^k(\Delta v-f(x,t))=v^k\Delta w+v^k(f_\infty(x)-f(x,t)).
\end{equation*}
Multiplying the last equation by $- \Delta w$ and integrating by parts results in
\begin{equation}
\label{w_es}
\tfrac{1}{2}\tfrac{d}{dt}\int\limits_{\Omega}|\nabla w|^2\,dx+\int\limits_{\Omega} v^k|\Delta w|^2\,dx=
\int\limits_{\Omega} v^k(f(x,t)-f_\infty(x))\Delta w\,dx.
\end{equation}
Additionally, we will use the following estimate
\begin{equation}
\label{diss_est}
\int\limits_{\Omega}|\nabla w|^2\,dx\le C_{S,l}^2\Bigl(\int\limits_{\Omega} v^k|\Delta w|^2\,dx\Bigr)\Bigl(\int\limits_{\Omega} v^{\frac{kl}{2-l}}\,dx\Bigr)^{\frac{l-2}{l}},
\end{equation}
which holds for any $l>2$ in the case $N=2$ and for $l\in(2,\,6]$ in the case $N=3$, with $C_{S,l}$ being the constant in the Poincar\'e--Sobolev embedding
\begin{equation*}
\|w-\bar{w}\|_l\le C_{S,l}\|\nabla w\|_2,\quad\text{where}\quad\bar{w}=\tfrac{1}{|\Omega|}\int\limits_{\Omega}w\,dx.
\end{equation*}
Combining the latter estimate with the following calculation, taking into account $\nabla w\cdot\mathbf{n}=0$ on $\partial\Omega$,
\begin{eqnarray*}
\int\limits_{\Omega}|\nabla w|^2\,dx&=&-\int\limits_{\Omega} (w - \bar{w})\Delta w\,dx \le
\Bigl ( \int\limits_{\Omega} v^{k}|\Delta w|^2\,dx \Bigr)^{\frac{1}{2}}
\|w - \bar{w} \|_l \|v^{-\frac{k}{2}}\|_{\frac{2l}{l-2}}\nonumber\\
&\le&C_{S,l} \Bigl ( \int\limits_{\Omega} v^{k}|\Delta w|^2\,dx \Bigr)^{\frac{1}{2}} \|\nabla w\|_2
\Bigl(\int\limits_{\Omega} v^{\frac{kl}{2-l}}\,dx \Bigr)^{\frac{l-2}{2l}},
\end{eqnarray*}
which implies \rf{diss_est}.
Finally, multiplying \rf{ME1} by $v^{-p-1}$ and integrating by parts one obtains
\begin{equation}
\label{Entrops}
\tfrac{d}{dt}\int\limits_{\Omega}\tfrac{dx}{v^p}+\tfrac{4(p+1-k)p}{(p-k)^2}
\int\limits_{\Omega} \bigl|\nabla\bigl(\tfrac{1}{v^{(p-k)/2}}\bigr)\bigr|^2\,dx= p\int\limits_{\Omega}\tfrac{f\,dx}{v^{p+1-k}}
\end{equation}
holding for any $p>k-1>0$ and $p \neq k$.
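The coefficients in \rf{Entrops} can be traced as follows: testing \rf{ME1} with $v^{-p-1}$ and integrating by parts using \rf{ME2} (a sketch),

```latex
\int\limits_{\Omega}\tfrac{v_t}{v^{p+1}}\,dx=-\tfrac{1}{p}\tfrac{d}{dt}\int\limits_{\Omega}\tfrac{dx}{v^{p}},\qquad
\int\limits_{\Omega}\tfrac{\Delta v}{v^{p+1-k}}\,dx=(p+1-k)\int\limits_{\Omega}\tfrac{|\nabla v|^2}{v^{p+2-k}}\,dx
=\tfrac{4(p+1-k)}{(p-k)^2}\int\limits_{\Omega}\bigl|\nabla\bigl(\tfrac{1}{v^{(p-k)/2}}\bigr)\bigr|^2\,dx,
```

so that equating $\int_\Omega v^{-p-1}v_t\,dx$ with $\int_\Omega v^{k-p-1}(\Delta v-f)\,dx$ and multiplying by $-p$ yields \rf{Entrops}.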
\section{Two-dimensional case}
\subsection{Time-homogeneous case}
In this section, we collect and prove results on exponential asymptotic decay to the steady state $v_\infty(x)$ of solutions
to problem \rf{ME1}--\rf{ME3} considered with source function \rf{f} being independent of time,
i.e. $f(x,t)=f_\infty(x)=f(x)$. It turns out that given $f\in L^r(\Omega)$ with $r\in(1,\,+\infty]$ there exists a critical value of the exponent
\begin{equation}
\label{k_cr}
k_{cr}(r)=\tfrac{2r-1}{r-1}\ge 2,
\end{equation}
such that for $k\ge k_{cr}(r)$ we can show an exponential asymptotic decay estimate \rf{H1_decay}
for any given positive $|\Omega|,M,\|f\|_r$ and initial condition $v_0(x) \ge0$, provided \rf{ic_as2} holds.
Showing unconditional asymptotic decay in the case $k<k_{cr}(r)$ remains an open problem. Note that for $f\in L^\infty(\Omega)$ one has $r=+\infty$ and $k_{cr}(+\infty)=2$.
Below, in subsections 3.1.1 and 3.1.2, we derive uniform-in-time bounds on $\|\frac{1}{v}\|_\frac{kl}{l-2}$ for certain $l>2$.
Combining them with \rf{diss_est} and $f_\infty(x)=f(x)$ will then yield the estimate
\begin{equation}
\label{Msc_es}
\tfrac{1}{2}\tfrac{d}{dt}\int\limits_{\Omega}|\nabla w|^2\,dx+\bar{C}\int\limits_{\Omega}|\nabla w|^2\,dx\le0\quad\text{for}\quad t>0
\end{equation}
and some constant $\bar{C}>0$.
Applying a comparison principle to the last inequality will show
\begin{equation*}
\int\limits_{\Omega}|\nabla w|^2\,dx\le\int\limits_{\Omega}|\nabla w_0|^2\,dx\exp(-2\bar{C}t),
\end{equation*}
whence substituting $w=v(x,t)-v_\infty(x)$ gives
\begin{equation*}
\|\nabla(v(\cdot,t)-v_\infty)\|_2\le\|\nabla(v_0-v_\infty)\|_2\exp(-\bar{C}t).
\end{equation*}
The last estimate, combined with the Poincar\'e inequality, the conservation laws \rf{CM} and \rf{SME3}, non-negativity of $v(x,t)$ and positivity of $v_\infty(x)\in C^\lambda(\bar{\Omega})$ (see Appendix A.2), will finally imply \rf{H1_decay} with $C_P$ being the Poincar\'e constant.
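In more detail, the comparison step from \rf{Msc_es} to the exponential bound amounts to the integrating-factor computation

```latex
\tfrac{d}{dt}\Bigl(e^{2\bar{C}t}\int\limits_{\Omega}|\nabla w|^2\,dx\Bigr)
=e^{2\bar{C}t}\Bigl(\tfrac{d}{dt}\int\limits_{\Omega}|\nabla w|^2\,dx+2\bar{C}\int\limits_{\Omega}|\nabla w|^2\,dx\Bigr)\le0,
```

so $e^{2\bar{C}t}\int_\Omega|\nabla w|^2\,dx$ is nonincreasing in time.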
\subsubsection{Case $k>k_{cr}(r)$}
First, let us estimate the last term in \rf{Entrops} using H\"{o}lder inequality as
\begin{equation}
\label{RHS}
\int\limits_{\Omega}\tfrac{f\,dx}{v^{p+1-k}}\le\|f\|_r\Bigl(\int\limits_{\Omega}\tfrac{dx}{v^\frac{(p+1-k)r}{r-1}}\Bigr)^{\frac{r-1}{r}}.
\end{equation}
Conservation law \rf{CM} implies the existence of $x_0(t)\in\Omega$ such that $v(x_0(t),\,t)=(|\Omega|/M)^{1/(k-1)}$. Combining this fact with the Sobolev--Poincar\'e inequality one estimates
\begin{equation}
\label{MTest}
\int\limits_{\Omega}\Bigl|\nabla\Bigl(\tfrac{1}{v^{(p-k)/2}}\Bigr)\Bigr|^2\,dx\ge\tfrac{1}{C_{S,q}^2}\Bigl(\int\limits_{\Omega}\Bigl|\tfrac{1}{v^{(p-k)/2}}-\Bigl(\tfrac{M}{|\Omega|}\Bigr)^\frac{p-k}{2k-2}\Bigr|^q\,dx\Bigr)^{2/q}
\end{equation}
for any $q\in(1,\,\infty)$, where $C_{S,q}$ denotes the Sobolev constant, which depends on $|\Omega|$ and the choice of $q$.
Using inequalities $\|v-w\|_q\ge|\|v\|_q-\|w\|_q|$ and
\begin{equation*}
(a-b)^2\ge\tfrac{a^2}{2}-b^2,
\end{equation*}
estimates \rf{Entrops}, \rf{RHS}--\rf{MTest} together imply
\begin{equation}
\label{Entrops1}
\tfrac{d}{dt}\int\limits_{\Omega}\tfrac{dx}{v^p}+\tfrac{2(p+1-k)p}{C_{S,q}^2(p-k)^2}\Bigl(\int\limits_{\Omega}\tfrac{1}{v^\frac{q(p-k)}{2}}\,dx\Bigr)^\frac{2}{q}
\le\tfrac{4(p+1-k)p}{C_{S,q}^2(p-k)^2}\Bigl(\tfrac{M}{|\Omega|}\Bigr)^{\frac{p-k}{k-1}}|\Omega|^{\frac{2}{q}}
+p\|f\|_r\Bigl(\int\limits_{\Omega}\tfrac{dx}{v^\frac{(p+1-k)r}{r-1}}\Bigr)^{\frac{r-1}{r}}.
\end{equation}
Let us set
\begin{equation}
\label{pdef}
p=\tfrac{(k-1)(2r-1)}{r},\quad q=\tfrac{2p}{p-k}>1,
\end{equation}
and denote
\begin{equation*}
y(t)=\int\limits_{\Omega}\tfrac{dx}{v^p}.
\end{equation*}
Due to definition \rf{pdef} and constraint $k>k_{cr}(r)$, one has $p>k$ and \rf{Entrops1} becomes
\begin{eqnarray}
\label{Entrops2}
&&\hspace{-0.cm}y'(t)+\tfrac{2(k-1)^2(2r-1)(r-1)}{C_{S,\frac{2(k-1)(2r-1)}{(k-2)r-k+1}}^2((k-2)r-k+1)^2}y(t)^\frac{(k-2)r-k+1}{(k-1)(2r-1)}\le\tfrac{(k-1)(2r-1)}{r}\|f\|_rM^{\frac{r-1}{r}}\nonumber\\
&&\hspace{-0.cm}+\tfrac{4(k-1)^2(2r-1)(r-1)}{C_{S,\frac{2(k-1)(2r-1)}{(k-2)r-k+1}}^2((k-2)r-k+1)^2}
\bigl(\tfrac{M}{|\Omega|}\bigr)^\frac{(k-2)r-k+1}{(k-1)r}|\Omega|^{\frac{(k-2)r-k+1}{(k-1)(2r-1)}},
\end{eqnarray}
where in the first term on the right-hand side we used \rf{CM}. Hence, the right-hand side turns out to be a constant depending on the given values $k,r,|\Omega|,M,\|f\|_r$. Consequently, the last inequality implies the uniform-in-time bound
\begin{equation}
\label{p_bound}
y(t)=\int\limits_{\Omega}\tfrac{dx}{v^\frac{(k-1)(2r-1)}{r}}\le C=\max\{y(0),\,C_0\}
\end{equation}
with constant
\begin{equation*}
C_0=\Bigl[2\bigl(\tfrac{M}{|\Omega|}\bigr)^\frac{(k-2)r-k+1}{(k-1)r}|\Omega|^\frac{(k-2)r-k+1}{(k-1)(2r-1)}+\|f\|_rM^\frac{r-1}{r}\tfrac{C_{S,\frac{2(k-1)(2r-1)}{(k-2)r-k+1}}^2((k-2)r-k+1)^2}{2(k-1)r(r-1)}\Bigr]^\frac{(k-1)(2r-1)}{(k-2)r-k+1}.
\end{equation*}
Next, choosing
\begin{equation}
\label{l}
l=\tfrac{2(k-1)(2r-1)}{(k-2)r-k+1}\in(2,\infty),\quad\text{i.e. so that}\quad\tfrac{(k-1)(2r-1)}{r}=\tfrac{kl}{l-2},
\end{equation}
in estimates \rf{w_es}--\rf{diss_est} and using \rf{p_bound} yields \rf{Msc_es} with
\begin{equation*}
\bar{C}=C_{S,\frac{2(k-1)(2r-1)}{(k-2)r-k+1}}^{-2}C^{-\frac{kr}{(k-1)(2r-1)}},
\end{equation*}
and, consequently, decay estimate \rf{H1_decay} holds.
We note that the derived exponential asymptotic rate $\bar{C}$ degenerates in the limit $k\rightarrow k^+_{cr}(r)$, namely,
\begin{equation}
\label{as_deg}
C_{S,\frac{2(k-1)(2r-1)}{(k-2)r-k+1}}^{-2}C^{-\frac{kr}{(k-1)(2r-1)}}\rightarrow 0\quad\text{as}\quad k\rightarrow k^+_{cr}(r).
\end{equation}
To justify this we observe that
\begin{equation*}
(k-2)r-k+1\rightarrow 0^+ \text{ and } C^{\frac{kr}{(k-1)(2r-1)}}\sim2^\frac{kr}{(k-2)r-k+1}\rightarrow +\infty \text{ as } k\rightarrow k^+_{cr}(r),
\end{equation*}
and by Theorem 3.2 of~\cite{MTSO17} (see also Theorem 8.5(ii) of~\cite{LL01})
\begin{equation*}
C_{S,\frac{2(k-1)(2r-1)}{(k-2)r-k+1}}\sim\tfrac{C(|\Omega|,r)}{\sqrt{(k-2)r-k+1}}\rightarrow +\infty \text{ as } k\rightarrow k^+_{cr}(r).
\end{equation*}
\subsubsection{Case $k=k_{cr}(r)$}
For any parameter $\beta\in(0,1)$ one can estimate the last term in \rf{Entrops} using H\"{o}lder inequality and \rf{CM} as
\begin{equation}
\label{RHS1}
\int\limits_{\Omega}\tfrac{f\,dx}{v^{p+1-k}}=\int\limits_{\Omega}\tfrac{f\,dx}{v^{p+1-k-\beta}\cdot v^\beta}\le\|f\|_rM^{\frac{\beta}{k-1}}\Bigl(\int\limits_{\Omega}\tfrac{dx}{v^{(p+1-k-\beta)k'}}\Bigr)^{\frac{1}{k'}},
\end{equation}
where
\begin{equation*}
k'=\tfrac{(k-1)r}{(r-1)(k-1)-r\beta}\ \underbrace{=}_{k=k_{cr}(r)}\ \tfrac{k-1}{1-\beta}>1.
\end{equation*}
Let us choose $\beta$ so that $(p+1-k-\beta)k'=p$, i.e.
\begin{equation}
\label{beta}
\beta=\tfrac{(k-1)^2-p(k-2)}{p-k+1}.
\end{equation}
Condition $\beta\in(0,1)$ implies the following constraint on $p$
\begin{equation}
\label{p_constr}
p\in \bigl(k,\tfrac{(k-1)^2}{k-2}\bigr)
\end{equation}
with the latter interval having non-zero length $1/(k-2)$.
Next, using \rf{RHS1}, \rf{beta}, again Sobolev-Poincare embedding \rf{MTest} with
\begin{equation*}
q=\tfrac{2p}{p-k}>1
\end{equation*}
and by denoting
\begin{equation*}
y(t)=\int\limits_{\Omega}\tfrac{dx}{v^p},
\end{equation*}
one obtains an analogue of inequality \rf{Entrops1}:
\begin{equation}
\label{Entrops3}
y'(t)+\tfrac{2(p+1-k)p}{C_{S,\frac{2p}{p-k}}^2(p-k)^2}y(t)^\frac{p-k}{p}\le\tfrac{4(p+1-k)p}{C_{S,\frac{2p}{p-k}}^2(p-k)^2}
\bigl(\tfrac{M}{|\Omega|}\bigr)^{\frac{p-k}{k-1}}|\Omega|^{\frac{p-k}{p}}
+p\|f\|_rM^\frac{1+(p-k)(2-k)}{(p-k+1)(k-1)}y(t)^{\frac{p-k}{p-k+1}}.
\end{equation}
We rewrite \rf{Entrops3} in the following form
\begin{equation}
\label{Entrops4}
y'(t)\le C_1 z^\frac{p}{p-k+1}(t) - C_2 z(t)+C_3,
\end{equation}
where we introduced the following notation:
\begin{eqnarray*}
z(t)&=&y(t)^\frac{p-k}{p},\ C_1=p\|f\|_rM^\frac{1+(p-k)(2-k)}{(p-k+1)(k-1)},\\[1.5ex] C_2&=&\tfrac{2(p+1-k)p}{C_{S,\frac{2p}{p-k}}^2(p-k)^2},\quad
C_3=2C_2\bigl(\tfrac{M}{|\Omega|}\bigr)^{\frac{p-k}{k-1}}|\Omega|^{\frac{p-k}{p}}.
\end{eqnarray*}
One observes that constants $C_1,C_2,C_3$ are positive if $p$ belongs to the range \rf{p_constr}.
The polynomial at the right-hand side of \rf{Entrops4} has two roots $\lambda_\pm$:
\begin{equation}
\label{lambda}
0<\lambda_- <\bigl(\tfrac{C_2(p-k+1)}{pC_1}\bigr)^\frac{p-k+1}{k-1}<\lambda_+
\end{equation}
provided
\begin{equation*}
C_3 < \tfrac{k-1}{p}\bigl(\tfrac{p-k+1}{pC_1}\bigr)^\frac{p-k+1}{k-1}C_2^\frac{p}{k-1}.
\end{equation*}
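For the reader's convenience, the origin of \rf{lambda} and of the smallness condition is the following: with $\varphi(z)=C_1z^{\frac{p}{p-k+1}}-C_2z+C_3$ denoting the right-hand side of \rf{Entrops4}, an elementary computation gives

```latex
\varphi'(z_*)=0\quad\Longleftrightarrow\quad z_*=\Bigl(\tfrac{C_2(p-k+1)}{pC_1}\Bigr)^{\frac{p-k+1}{k-1}},\qquad
\varphi(z_*)=C_3-\tfrac{k-1}{p}\,C_2\,z_*,
```

so $\varphi$ is convex with a unique minimum at $z_*$ and has two positive roots $\lambda_\pm$ straddling $z_*$ precisely when $\varphi(z_*)<0$, i.e. when $C_3<\tfrac{k-1}{p}C_2z_*$, which is the displayed condition.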
The latter condition can be rewritten as
\begin{equation}
\label{cond}
\tfrac{2p}{k-1}\bigl(\tfrac{M}{|\Omega|}\bigr)^{\frac{p-k}{k-1}}|\Omega|^{\frac{p-k}{p}}\bigl(\tfrac{pC_1}{p-k+1}\bigr)^\frac{p-k+1}{k-1}<C_2^\frac{p-k+1}{k-1}.
\end{equation}
Observe that
\begin{equation*}
\mathop {\lim} \limits_{p\rightarrow k^+} \tfrac{2p}{k-1}\bigl(\tfrac{M}{|\Omega|}\bigr)^{\frac{p-k}{k-1}}|\Omega|^{\frac{p-k}{p}}\bigl(\tfrac{pC_1}{p-k+1}\bigr)^\frac{p-k+1}{k-1}
=\tfrac{2k}{k-1}(k^2\|f\|_rM^\frac{1}{k-1})^\frac{1}{k-1} < +\infty,
\end{equation*}
while
\begin{equation*}
\mathop {\lim} \limits_{p\rightarrow k^+} C_2^\frac{p-k+1}{k-1} = +\infty,
\end{equation*}
where we use asymptotics
\begin{equation*}
C_{S,\frac{2p}{p-k}}\sim\tfrac{C(|\Omega|,k)}{\sqrt{p-k}}\rightarrow\infty \quad\text{as}\quad p\rightarrow k^+,
\end{equation*}
see Theorem 3.2 of~\cite{MTSO17} (also Theorem 8.5(ii) of~\cite{LL01}).
Therefore, for $p$ sufficiently close to $k$ from above (depending on the given values $M,\,|\Omega|,\,\|f\|_r$ and $k=k_{cr}(r)$), inequality \rf{cond} and, consequently, \rf{lambda} hold, with $\lambda_+\rightarrow+\infty$ as $p\rightarrow k^+$.
Now, to obtain a uniform bound on $y(t)$ for all $t>0$, we choose $p$ sufficiently close to $k$ from above, such that \rf{cond} holds as well as
\begin{equation}
y(0)\le\lambda_+^\frac{p}{p-k}.
\end{equation}
Such a choice of $p$ is possible because $y(0)$ stays bounded for $p$ close to $k$, provided \rf{ic_as2} holds.
Then from \rf{Entrops4} and \rf{lambda} it follows that
\begin{equation}
\label{p_bound1}
y(t)=\int\limits_{\Omega}\tfrac{dx}{v^p}\le C=\max\Bigl\{y(0),\lambda_+^\frac{p}{p-k}\Bigr\}\quad\forall\, t>0.
\end{equation}
Finally, fixing $p>k$ as above and choosing
\begin{equation*}
l=\tfrac{2p}{p-k}\in(2,+\infty),\quad\text{i.e. so that}\quad p=\tfrac{kl}{l-2},
\end{equation*}
in estimates \rf{w_es}--\rf{diss_est} together with \rf{p_bound1} gives \rf{Msc_es} with
\begin{equation*}
\bar{C}=C_{S,\frac{2p}{p-k}}^{-2}C^{-\frac{k}{p}},
\end{equation*}
and, consequently, decay estimate \rf{H1_decay} holds.
We note that the derived exponential asymptotic rate $\bar{C}$ degenerates in the limit $p\rightarrow k^+$, namely
\begin{equation*}
C_{S,\frac{2p}{p-k}}^{-2}C^{-\frac{k}{p}}\rightarrow 0\quad\text{as}\quad p\rightarrow k^+.
\end{equation*}
This follows from unboundedness of $\lambda_+$ in \rf{lambda} and $C_{S,\frac{2p}{p-k}}$ as $p\rightarrow k^+$.
\subsection{Time-inhomogeneous case}
In this section, we collect and prove results on asymptotic decay to the steady state $v_\infty(x)$ of solutions to problem \rf{ME1}--\rf{ME3} considered with a time-dependent source function $f(x,\,t)$ in \rf{f}. We separate the cases $k\ge k_{cr}(r)$ and $k\in(\frac{r}{r-1},\,k_{cr}(r))$ with $k_{cr}(r)$ defined in \rf{k_cr}. It turns out that in the first case one can take $s=\infty$ in \rf{f} (similar to the time-homogeneous case $f(x,t)=f(x)$), while in the case $k\in(\frac{r}{r-1},\,k_{cr}(r))$, for a given space integrability $r\in(1,\infty]$, we can identify a maximal possible $s_{cr}=s_{cr}(k,r)\in[1,\infty)$ such that asymptotic decay to the steady state $v_\infty(x)$ is shown for any time integrability $s\in[1,\,s_{cr}(k,r))$ in \rf{f}. For $k\le\frac{r}{r-1}$ the possibility of asymptotic decay to $v_\infty(x)$ remains an open problem.
We start with the following estimation of the right-hand side of \rf{w_es}
\begin{equation*}
\int\limits_{\Omega} v^k(f-f_\infty)\Delta w\,dx\le\tfrac{1}{2}\int\limits_{\Omega} v^k(f-f_\infty)^2\,dx+\tfrac{1}{2}\int\limits_{\Omega}v^k|\Delta w|^2\,dx.
\end{equation*}
Combined with \rf{w_es} this implies
\begin{equation}
\label{w_es1}
\tfrac{d}{dt}\int\limits_{\Omega}|\nabla w|^2\,dx+\int\limits_{\Omega} v^k|\Delta w|^2\,dx\le
\int\limits_{\Omega} v^k(f-f_\infty)^2\,dx.
\end{equation}
For any real $a>1$ one can estimate the right-hand side of \rf{w_es1} as
\begin{equation*}
\int\limits_{\Omega} v^k(f-f_\infty)^2\,dx\le\|v\|_{ak}^k\|f-f_\infty\|_{\frac{2a}{a-1}}^2 .
\end{equation*}
Using the Poincar\'e--Sobolev embedding and \rf{CM} one shows
\begin{equation*}
\|v\|_{ak}^k\le \bigl(C_{S,ak}\|\nabla v\|_2+\bigl(\tfrac{|\Omega|}{M}\bigr)^\frac{1}{k-1}|\Omega|^\frac{1}{ak}\bigr)^k.
\end{equation*}
Choosing $a=1+2/\varepsilon$ with $\varepsilon>0$ small enough one derives from the last two inequalities
\begin{equation}
\label{f_diff}
\int\limits_{\Omega}v^k(f-f_\infty)^2\,dx\le\tfrac{\bar{C}}{\varepsilon^{k/2}}\|f-f_\infty\|_{2+\varepsilon}^2,
\end{equation}
where we use \rf{grad_v} and asymptotics
\begin{equation*}
C_{S,\,ak}\sim\tfrac{C(|\Omega|)}{\sqrt{\varepsilon}}\quad\text{with}\quad a=1+2/\varepsilon,\ \text{ as } \varepsilon\rightarrow 0.
\end{equation*}
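For the reader's convenience, we record the elementary computation behind this choice of $a$: with $a=1+2/\varepsilon$ one has
\begin{equation*}
\tfrac{2a}{a-1}=\tfrac{2(1+2/\varepsilon)}{2/\varepsilon}=2+\varepsilon,
\end{equation*}
so that the conjugate exponent in the previous H\"{o}lder estimate indeed produces the $L^{2+\varepsilon}$-norm of $f-f_\infty$ appearing in \rf{f_diff}.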
Finally, combining \rf{f_diff} with \rf{w_es1} gives
\begin{equation}
\label{w_es2}
\tfrac{d}{dt}\int\limits_{\Omega}|\nabla w|^2\,dx+\int\limits_{\Omega} v^k|\Delta w|^2\,dx\le
\tfrac{\bar{C}}{\varepsilon^{k/2}}\|f-f_\infty\|_{2+\varepsilon}^2.
\end{equation}
Below, in subsections 3.2.1 and 3.2.2, we derive uniform in time bounds on $\|\frac{1}{v}\|_\frac{kl}{l-2}$ for certain $l>2$.
Combining them with \rf{diss_est} and \rf{w_es2} will yield asymptotic decay estimate \rf{H1_decay} for any $k>1$ and $r>2$. We note that this proof can be also generalized for the case $r\in(1,\,2]$ if one treats \rf{w_es} similarly as in section 4.2 for $N=3$.
\subsubsection{Case $k\ge k_{cr}(r)$}
Here, w.l.o.g., we assume that $s=\infty$ and $r\in(2,\,+\infty]$ in \rf{f}. In the first part of the proof, we follow exactly the lines of subsection 3.1.1 (in the case $k>k_{cr}(r)$) or subsection 3.1.2 (in the case $k=k_{cr}(r)$) establishing uniform in time bounds \rf{p_bound} or \rf{p_bound1}, respectively, i.e.
\begin{equation*}
\int\limits_{\Omega}\tfrac{dx}{v^p}\le C\quad\forall t>0\quad\text{and fixed}\quad p\in(k,\,+\infty),\ C>0.
\end{equation*}
We only note that, when deriving bounds \rf{p_bound} or \rf{p_bound1}, $\|f\|_r$ should now be replaced by
$\|f\|_{L^\infty(0,+\infty;L^r(\Omega))}$.
Next, the latter bound combined with \rf{w_es2} and \rf{diss_est} taken with $l=\frac{2p}{p-k}$ implies
\begin{equation*}
\tfrac{d}{dt}\int\limits_{\Omega}|\nabla w|^2\,dx+C_{S,\frac{2p}{p-k}}^{-2}C^{-\frac{k}{p}}\int\limits_{\Omega}|\nabla w|^2\,dx\le
\tfrac{\bar{C}}{\varepsilon^{k/2}}\|f-f_\infty\|_{2+\varepsilon}^2.
\end{equation*}
Applying the comparison principle to the last inequality one deduces that
\begin{eqnarray}
\int\limits_{\Omega}|\nabla w|^2\,dx&\le& \exp\Bigl\{-C_{S,\frac{2p}{p-k}}^{-2}C^{-\frac{k}{p}}\,t\Bigr\}\Big(\int\limits_{\Omega}|\nabla w_0|^2\,dx+\nonumber\\
&+&\tfrac{\bar{C}}{\varepsilon^{k/2}}\int_0^t\exp\Bigl\{C_{S,\frac{2p}{p-k}}^{-2}C^{-\frac{k}{p}}\,\tau\Bigr\}\|f(\cdot,\tau)-f_\infty(.)\|_{2+\varepsilon}^2\,d\tau\Big).
\end{eqnarray}
The last estimate would imply asymptotic decay
\begin{equation}
\label{as_decay2}
\int\limits_{\Omega}|\nabla w|^2\,dx\rightarrow 0\quad\text{as}\quad t\rightarrow+\infty
\end{equation}
provided
\begin{equation*}
\mathop {\lim} \limits_{t\rightarrow+\infty} \exp\Bigl\{-C_{S,\frac{2p}{p-k}}^{-2}C^{-\frac{k}{p}}\,t\Bigr\}\int_0^t\exp\Bigl\{C_{S,\frac{2p}{p-k}}^{-2}C^{-\frac{k}{p}}\,
\tau\Bigr\}\|f(\cdot,\tau)-f_\infty(.)\|_{2+\varepsilon}^2\,d\tau=0,
\end{equation*}
Using L'Hospital's rule one observes that the last limit holds if and only if
\begin{equation*}
\lim_{t\rightarrow\infty}\|f(\cdot,t)-f_\infty(.)\|_{2+\varepsilon}^2=0,
\end{equation*}
which is true by \rf{f_conv}. Consequently, \rf{as_decay2} holds and, after substituting $w=v(x,t)-v_\infty(x)$, one has
\begin{equation*}
\|\nabla(v(\cdot,t)-v_\infty(.))\|_2\rightarrow 0 \text{ and } \|v(\cdot,t)-v_\infty(.)\|_{H^1(\Omega)}\rightarrow 0 \text{ as } t\rightarrow+\infty.
\end{equation*}
We note that the decay rate here is determined by the slower of the two rates: the given convergence rate of $\|f(\cdot,\tau)-f_\infty(.)\|_{2+\varepsilon}$ to zero and the exponential rate $C_{S,\frac{2p}{p-k}}^{-2}C^{-\frac{k}{p}}$.
\subsubsection{Case $\tfrac{r}{r-1}<k< k_{cr}(r)$}
Here, we assume $f\in L^s(0,+\infty;L^r(\Omega))$ with $s\in[1,\infty)$ and $r\in(2,\,+\infty]$.
First, for any parameter $\beta\in[0,k-1)$ one estimates the last term in \rf{Entrops}, using H\"{o}lder inequality and \rf{CM}, as
\begin{equation}
\label{RHS2}
\int\limits_{\Omega}\tfrac{f\,dx}{v^{p+1-k}}=\int\limits_{\Omega}\tfrac{f\,dx}{v^{p+1-k-\beta}\cdot v^\beta}\le\|f\|_rM^{\tfrac{\beta}{k-1}}\Bigl(\int\limits_{\Omega}\tfrac{dx}{v^{(p+1-k-\beta)k'}}\Bigr)^{\frac{1}{k'}},
\end{equation}
where
\begin{equation}
\label{k'}
k'=\tfrac{(k-1)r}{(r-1)(k-1)-r\beta}>1.
\end{equation}
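Let us briefly indicate how \rf{k'} arises: estimate \rf{RHS2} uses the three-factor H\"{o}lder inequality with exponents $r$, $\frac{k-1}{\beta}$ and $k'$, together with $\|v^{-\beta}\|_{\frac{k-1}{\beta}}=M^{\frac{\beta}{k-1}}$, which follows from \rf{CM}. The exponents then satisfy
\begin{equation*}
\tfrac{1}{r}+\tfrac{\beta}{k-1}+\tfrac{1}{k'}=1,\quad\text{i.e.}\quad \tfrac{1}{k'}=\tfrac{(r-1)(k-1)-r\beta}{r(k-1)},
\end{equation*}
which is \rf{k'}.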
Let us choose $\beta$ so that $(p+1-k-\beta)k'=p$, i.e.
\begin{equation}
\label{beta1}
\beta=\tfrac{(k-1)^2r-p(k-1)}{r(p-k+1)}.
\end{equation}
Condition $\beta\in[0,k-1)$ implies the following constraints on $p$:
\begin{equation}
\label{p_constr1}
p\in\bigl(\tfrac{2(k-1)r}{r+1},\,(k-1)r\bigr]
\end{equation}
with the latter interval having non-zero length for $r>1$. Additionally, we demand $p+1-k-\beta>0$,
which, in view of \rf{beta1}, implies
\begin{equation*}
p>\tfrac{(k-1)(2r-1)}{r}.
\end{equation*}
We note that for $1<k<k_{cr}(r)$ and $r>1$ one has
\begin{equation*}
\tfrac{2(k-1)r}{r+1}<\tfrac{(k-1)(2r-1)}{r}<k,
\end{equation*}
and, therefore, we update the constraints on $p$ in \rf{p_constr1} to
\begin{equation}
\label{p_constr2}
p\in\bigl(k,\,(k-1)r\bigr]
\end{equation}
with the latter interval having non-zero length, because $\frac{r}{r-1}<k$.
We conclude that for any $p$ taken from \rf{p_constr2} by \rf{Entrops} and \rf{RHS2}--\rf{beta1}
\begin{equation*}
\tfrac{d}{dt}\int\limits_{\Omega}\tfrac{dx}{v^p}\le p\|f\|_rM^\frac{(k-1)r-p}{r(p-k+1)}\Bigl(\int\limits_{\Omega}\tfrac{dx}{v^p}\Bigr)^\frac{pr-(k-1)(2r-1)}{r(p-k+1)}
\end{equation*}
holds. Applying Gr\"{o}nwall's inequality to this estimate results in the following bound
\begin{equation}
\label{p_bound2}
\int\limits_{\Omega}\tfrac{dx}{v^p}\le \Bigl[\Bigl(\int\limits_{\Omega}\tfrac{dx}{v_0^p}\Bigr)^\frac{(k-1)(r-1)}{r(p-k+1)}
+\tfrac{p(k-1)(r-1)}{r(p-k+1)}\|f\|_{L^s(L^r)}M^\frac{(k-1)r-p}{r(p-k+1)}\, t^\frac{s-1}{s} \Bigr]^\frac{r(p-k+1)}{(k-1)(r-1)}
\end{equation}
holding for any $p$ from interval \rf{p_constr2}.
Finally, choosing
\begin{equation*}
l=\tfrac{2p}{p-k}\in(2,\infty), \text{ i.e. so that}\quad p=\tfrac{kl}{l-2},
\end{equation*}
in estimate \rf{diss_est} together with \rf{p_bound2} implies
\begin{equation*}
g(t)\int\limits_{\Omega}|\nabla w|^2\,dx\le\int\limits_{\Omega} v^k|\Delta w|^2\,dx
\end{equation*}
with
\begin{equation}
\label{g}
g(t)= C_{S,\,\frac{2p}{p-k}}^{-2} \Bigl[\Bigl(\int\limits_{\Omega}\tfrac{dx}{v_0^p}\Bigr)^\frac{(k-1)(r-1)}{r(p-k+1)}
+\tfrac{p(k-1)(r-1)}{r(p-k+1)}\|f\|_{L^s(L^r)}M^\frac{(k-1)r-p}{r(p-k+1)}t^\frac{s-1}{s}\Bigr]^\frac{-kr(p-k+1)}{p(k-1)(r-1)}.
\end{equation}
Combining that with \rf{w_es2} (while noting that $f_\infty=0$) gives
\begin{equation*}
\tfrac{d}{dt}\int\limits_{\Omega}|\nabla w|^2\,dx+g(t)\int\limits_{\Omega}|\nabla w|^2\,dx\le
\tfrac{\bar{C}}{\varepsilon^{k/2}}\|f\|_{2+\varepsilon}^2.
\end{equation*}
Applying the comparison principle to the last inequality one deduces that
\begin{multline*}
\int\limits_{\Omega}|\nabla w|^2\,dx\le \exp\Bigl\{-\int_0^tg(\tau)\,d\tau\Bigr\}\Big(\int\limits_{\Omega}|\nabla w_0|^2\,dx
+ \\
\tfrac{\bar{C}}{\varepsilon^{k/2}}\int_0^t\exp\Bigl\{\int_0^\tau g(\tilde{\tau})\,d\tilde{\tau}\Bigr\}\|f(\cdot,\tau)\|_{2+\varepsilon}^2\,d\tau\Big).
\end{multline*}
The last estimate would imply asymptotic decay
\begin{equation}
\label{as_decay3}
\int\limits_{\Omega}|\nabla w|^2\,dx\rightarrow 0\quad\text{as}\quad t\rightarrow+\infty
\end{equation}
provided
\begin{equation}
\label{g_decay_0}
\int_0^tg(\tau)\,d\tau\rightarrow\infty\quad\text{as}\quad t\rightarrow\infty,
\end{equation}
as well as,
\begin{equation*}
\mathop {\lim} \limits_{t\rightarrow\infty}\exp\Bigl\{-\int_0^tg(\tau)\,d\tau\Bigr\}\int_0^t\exp\Bigl\{\int_0^\tau g(\tilde{\tau})\,d\tilde{\tau}\Bigr\}\|f(\cdot,\tau)\|_{2+\varepsilon}^2\,d\tau=0
\end{equation*}
hold together.
Using L'Hospital's rule and \rf{g}, one observes that the last limit holds if and only if
\begin{equation}
\label{f_constr_0}
\mathop {\lim} \limits_{t\rightarrow\infty} \|f(\cdot,t)\|_{2+\varepsilon}^2t^\frac{(s-1)kr(p-k+1)}{sp(k-1)(r-1)}=0.
\end{equation}
Similarly, \rf{g_decay_0} holds provided
\begin{equation*}
\tfrac{(s-1)kr(p-k+1)}{sp(k-1)(r-1)}\le 1.
\end{equation*}
The last condition together with the constraint \rf{p_constr2} on $p$ implies necessarily
that
\begin{equation*}
s\in[1,\,s_{cr}(k,r))\quad\text{with}\quad s_{cr}(k,r)=\tfrac{r}{r-(k-1)(r-1)}.
\end{equation*}
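Let us indicate how the value of $s_{cr}(k,r)$ arises. Since $\frac{p-k+1}{p}$ is increasing in $p$, the left-hand side of the last condition is minimized by letting $p\rightarrow k^+$ in \rf{p_constr2}, which leads to
\begin{equation*}
\tfrac{s-1}{s}\cdot\tfrac{r}{(k-1)(r-1)}<1,\quad\text{i.e.}\quad s<\tfrac{r}{r-(k-1)(r-1)}=s_{cr}(k,r),
\end{equation*}
the strict inequality reflecting the openness of the interval \rf{p_constr2} at $p=k$.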
Note that $s_{cr}>1$ for $k<k_{cr}(r)$ and $r>1$.
Taking additionally \rf{f_constr_0} into account, we conclude that for any
\begin{equation}
\label{f_cd}
f\in L^s(0,+\infty;L^r(\Omega))
\end{equation}
with $s\in[1,\,s_{cr}(k,r))$ and $r>2$ asymptotic decay \rf{as_decay3} holds.
Substituting $w=v(x,t)-v_\infty(x)$ then yields
\begin{equation*}
\|\nabla(v(\cdot,t)-v_\infty(.))\|_2\rightarrow 0 \text{ and } \|v(\cdot,t)-v_\infty(.) \|_{H^1(\Omega)}\rightarrow 0 \text{ as } t\rightarrow+\infty.
\end{equation*}
\section{Three-dimensional case}
\subsection{Time-homogeneous case}
In this section, we consider a time-independent source function in \rf{f},
i.e. $f(x,t)=f_\infty(x)=f(x)$. Given $f\in L^r(\Omega)$ with $r\in(3/2,\infty]$, there exists
a critical value of exponent
\begin{equation}
\label{k_cr1}
k_{cr}(r)=\tfrac{5r-3}{2r-3}\ge2.5
\end{equation}
such that for $k>k_{cr}(r)$ we show an exponential asymptotic decay estimate \rf{H1_decay}
for any given positive $|\Omega|,M,\|f\|_r$ and initial condition $v_0(x)\ge0$ provided \rf{ic_as3} holds.
Within the case $k>k_{cr}(r)$ we provide separate proofs for the two sub-cases $k\in[\frac{4r-2}{r-2},+\infty)$ and $k\in(k_{cr}(r),\,\frac{4r-2}{r-2})$. Showing an unconditional asymptotic decay in the case $k\le k_{cr}(r)$ remains an open problem.
\subsubsection{Case $k\ge\frac{4r-2}{r-2}>k_{cr}(r)$}
The proof in this case coincides with the one given in subsection 3.1.1 for $N=2$.
We note only that the constraint on power $l$ in estimates \rf{diss_est} and \rf{l} here changes to
\begin{equation*}
l=\tfrac{2(k-1)(2r-1)}{(k-2)r-k+1}\in(2,6],
\end{equation*}
which is satisfied for $k\ge\frac{4r-2}{r-2}$.
Proceeding as in subsection 3.1.1 one obtains decay estimate \rf{H1_decay} with
\begin{equation*}
\bar{C}=C_{S,\frac{2(k-1)(2r-1)}{(k-2)r-k+1}}^{-2}C^{-\frac{kr}{(k-1)(2r-1)}},
\end{equation*}
where $C$ is given by \rf{p_bound}.
Note that one has
\begin{equation*}
\tfrac{kr}{(k-1)(2r-1)}\rightarrow\tfrac{2}{3} \text{ as } k \rightarrow \bigl( \tfrac{4r-2}{r-2}\bigr)^+
\end{equation*}
and, therefore, in this limit, asymptotic decay rate $\bar{C}$ tends to a finite number depending on $r$ only.
\subsubsection{Case $\frac{4r-2}{r-2}>k>k_{cr}(r)$}
The proof in this case proceeds similarly to the one presented in subsection 3.1.2.
We first choose $p=\frac{3k}{2}$ in estimate \rf{RHS1}, where as before
\begin{equation*}
k'=\tfrac{(k-1)r}{(r-1)(k-1)-r\beta}
\end{equation*}
and $\beta$ is fixed by condition $(p+1-k-\beta)k'=p=3k/2$. This implies
\begin{equation}
\label{beta2}
\beta=\tfrac{(2r(k-1)-3k)(k-1)}{r(k+2)}\in(0,k-1)\quad\text{and}\quad k'=\tfrac{(k+2)r}{r(4-k)+2(k-1)}>3,
\end{equation}
where the lower bound $k'>3$ follows from the condition $k>k_{cr}(r)$.
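Indeed, since the denominator in the expression for $k'$ in \rf{beta2} is positive in our range of parameters, the inequality $k'>3$ is equivalent to
\begin{equation*}
(k+2)r>3r(4-k)+6(k-1)\quad\Longleftrightarrow\quad k(2r-3)>5r-3\quad\Longleftrightarrow\quad k>\tfrac{5r-3}{2r-3}=k_{cr}(r).
\end{equation*}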
Using \rf{RHS1}, \rf{beta2} and Sobolev-Poincare embedding \rf{MTest} taken with $q=6$, and by denoting
\begin{equation*}
y(t)=\int\limits_{\Omega}\tfrac{dx}{v^\frac{3k}{2}},
\end{equation*}
one obtains an analogue of inequality \rf{Entrops3}:
\begin{equation}
\label{Entrops5}
y'(t)+\tfrac{6(k+2)}{C_{S,6}^2 \, k}y^\frac{1}{3}(t) \le \tfrac{12(k+2)}{C_{S,6}^2 \, k}\bigl(\tfrac{M}{|\Omega|}\bigr)^\frac{k}{2(k-1)}|\Omega|^\frac{1}{3}
+\tfrac{3k}{2}\|f\|_rM^\frac{k(2r-3)-2r}{r(k+2)}y^\frac{1}{k'}(t).
\end{equation}
Next, using \rf{beta2} and Young's inequality
\begin{equation*}
ab\le\tfrac{a^s}{s}+\tfrac{b^{s'}}{s'}\quad\text{with}\quad s=\tfrac{k'}{3}\quad\text{and}\quad s'=\tfrac{k'}{k'-3},
\end{equation*}
one estimates
\begin{equation*}
\tfrac{3k}{2}\|f\|_rM^\frac{k(2r-3)-2r}{r(k+2)} y^\frac{1}{k'}(t) \le \tfrac{18(k+2)}{k'\,C_{S,6}^2\,k}y^\frac{1}{3}(t)
+\tfrac{k'-3}{k'}\Bigl(\tfrac{3k}{2}\bigl[\tfrac{C_{S,6}^2k}{6(k+2)}\bigr]^\frac{3}{k'}\|f\|_rM^\frac{k(2r-3)-2r}{r(k+2)}\Bigr)^\frac{k'}{k'-3}.
\end{equation*}
This together with \rf{Entrops5} implies
\begin{multline} \label{Entrops6}
y'(t)+\tfrac{2(k+2)(k'-3)}{C_{S,6}^2 \, kk'} y^\frac{1}{3}(t)\le \\
\tfrac{12(k+2)}{C_{S,6}^2k} \bigl(\tfrac{M}{|\Omega|}\bigr)^\frac{k}{2(k-1)}|\Omega|^\frac{1}{3}
+\tfrac{k'-3}{k'}\Bigl(\tfrac{3k}{2}\bigl[\tfrac{C_{S,6}^2k}{6(k+2)}\bigr]^\frac{3}{k'}\|f\|_rM^\frac{k(2r-3)-2r}{r(k+2)}\Bigr)^\frac{k'}{k'-3}.
\end{multline}
Observe that the right-hand side of \rf{Entrops6} is a constant depending only on the values of $k,\,r,\,|\Omega|,\,M,\,\|f\|_r$.
Therefore, from \rf{Entrops6} one concludes
\begin{equation}
\label{p_bound3}
y(t)=\int\limits_{\Omega}\tfrac{dx}{v^\frac{3k}{2}}\le C=\max\{y(0),C_1\}\quad\forall\, t>0
\end{equation}
with constant
\begin{equation*}
C_1=\Bigl[\tfrac{6k'}{k'-3}\bigl(\tfrac{M}{|\Omega|}\bigr)^\frac{k}{2(k-1)}|\Omega|^\frac{1}{3}
+\tfrac{C_{S,6}^2k}{2(k+2)}\Bigl(\tfrac{3k}{2}\bigl[\tfrac{C_{S,6}^2k}{6(k+2)}\bigr]^\frac{3}{k'}\|f\|_rM^\frac{k(2r-3)-2r}{r(k+2)}\Bigr)^\frac{3}{k'-3}\Bigr]^3 .
\end{equation*}
Finally, choosing $l=6$ in estimate \rf{diss_est} together with \rf{p_bound3} implies \rf{Msc_es} with
\begin{equation}
\label{C}
\bar{C}=C_{S,6}^{-2}C^{-\frac{2}{3}},
\end{equation}
and, consequently, decay estimate \rf{H1_decay}.
We note that the derived exponential asymptotic rate $\bar{C}$ degenerates in the limit $k\rightarrow k_{cr}(r)^+$,
since
\begin{equation*}
k'\rightarrow 3^+\quad\text{and, consequently,}\quad C\rightarrow+\infty \text{ as } k\rightarrow k_{cr}(r)^+.
\end{equation*}
\begin{remark}
In the critical case $k=k_{cr}(r)$ one can still show an asymptotic decay result, provided values of $r,\,M$ and $\|f\|_r$
satisfy condition \rf{f_constr} below. In this case, we set $\beta=1,\,k'=3$ in \rf{RHS1} and \rf{beta2}. Then \rf{Entrops5} becomes
\begin{equation}
\label{Entrops7}
y'(t)\le\tfrac{12(k_{cr}+2)}{C_{S,6}^2k_{cr}} \bigl(\tfrac{M}{|\Omega|}\bigr)^\frac{k_{cr}}{2(k_{cr}-1)}|\Omega|^\frac{1}{3}
-\Bigl(\tfrac{6(k_{cr}+2)}{C_{S,6}^2k_{cr}}-\tfrac{3k_{cr}}{2}\|f\|_r M^\frac{1}{k_{cr}-1}\Bigr)y^\frac{1}{3}(t) .
\end{equation}
If
\begin{equation}
\label{f_constr}
\|f\|_rM^\frac{2r-3}{3r}<\tfrac{4(k_{cr}+2)}{C_{S,6}^2k_{cr}^2}=\tfrac{36}{C_{S,6}^2}\tfrac{(r-1)(2r-3)}{(5r-3)^2},
\end{equation}
one obtains from \rf{Entrops7} that
\begin{equation*}
y(t)=\int\limits_{\Omega}\tfrac{dx}{v^\frac{3k_{cr}}{2}}\le C=\max\{y(0),C_2\}\quad\forall\, t>0
\end{equation*}
with constant
\begin{equation*}
C_2=\Bigl[\tfrac{8(k_{cr}+2)k_{cr}\bigl(\tfrac{M}{|\Omega|}\bigr)^\frac{k_{cr}}{2(k_{cr}-1)}|\Omega|^\frac{1}{3}}
{4(k_{cr}+2)-C_{S,6}^2k_{cr}^2\|f\|_r M^\frac{1}{k_{cr}-1}}\Bigr]^3 .
\end{equation*}
Consequently, choosing $l=6$ in estimate \rf{diss_est} yields decay \rf{H1_decay} with $\bar{C}$ given by \rf{C}.
\end{remark}
\subsection{Time-inhomogeneous case}
In this section, we consider a time-dependent source function $f(x,\,t)$ in \rf{f}
and separate the cases $k\in(k_{cr}(r),\,+\infty)$ and $k\in[\frac{r}{r-3/2},\,k_{cr}(r)]$ with $k_{cr}(r)$ given in \rf{k_cr1}. It turns out that in the first case one can take $s=\infty$ in \rf{f} (similar to the time-homogeneous case $f(x,t)=f(x)$), while in the case $k\in[\frac{r}{r-3/2},\,k_{cr}(r)]$, for a given space integrability exponent $r\in(3/2,\,+\infty]$, we can identify a maximal possible $s_{cr}=s_{cr}(k,r)\in[1,\infty)$ such that asymptotic decay to the steady state $v_\infty(x)$ is shown for any time integrability exponent $s\in[1,\,s_{cr}(k,r)]$ in \rf{f}. For $k<\frac{r}{r-3/2}$, the possibility of asymptotic decay to $v_\infty(x)$ remains an open problem.
Our starting point is given by estimate \rf{w_es}. Due to the lack of a continuous embedding $H^1(\Omega)\hookrightarrow L^p(\Omega)$ for $p>6$ and $N=3$, we estimate the right-hand side of \rf{w_es} in a different way than in subsection 3.2. For $f$ in \rf{f} we introduce Bogovskii's vector function $\mathbf{B}f(x,\,t)\in\mathbb{R}^3$ having properties
\begin{equation}
\label{B_def}
\mathrm{div}\,\mathbf{B}f(x,\,t)=f(x,t)\quad\text{for all}\quad x\in\Omega\quad\text{and}\quad\mathbf{B}f(x,t)=\mathbf{0}\quad\text{on}\quad\partial\Omega,
\end{equation}
for all $t>0$, and such that
\begin{equation}
\label{bog_bound}
\|\mathbf{B}f(\cdot,\,t)\|_{W_0^{1,\,r}}\le C_r\|f(\cdot,\,t)\|_r\quad\text{for all}\quad t>0
\end{equation}
with constant $C_r>0$ depending only on $r$. Existence of $\mathbf{B}f$ having these properties was shown in~\cite{Bo79}.
We note, in particular, that due to the continuous embedding $W^{1,r}(\Omega)\hookrightarrow L^\frac{Nr}{N-r}(\Omega)$ for $r<N$, condition \rf{f_conv} implies for $N\ge2$
\begin{equation}
\label{Bog_conv}
\|\mathbf{B}f(\cdot,\,t)-\mathbf{B}f_\infty(.)\|_2\rightarrow 0\quad\text{as}\quad t\rightarrow+\infty,
\end{equation}
while \rf{f_tau} implies
\begin{equation}
\label{Bf_tau}
\|\mathbf{B}(f_t)(.,t) \|_2\rightarrow 0\quad\text{as}\quad t\rightarrow+\infty.
\end{equation}
Next, let us rewrite formula \rf{w_es} in the form
\begin{equation*}
\tfrac{1}{2}\tfrac{d}{dt}\int\limits_{\Omega}|\nabla w|^2\,dx+\tfrac{d}{dt}\int\limits_{\Omega}v(f-f_\infty)\,dx+
\int\limits_{\Omega}v^k(\Delta w-f+f_\infty)^2\,dx=\int\limits_{\Omega}vf_t\,dx.
\end{equation*}
Using integration by parts and \rf{B_def} the last identity transforms as
\begin{multline*}
\tfrac{1}{2}\tfrac{d}{dt}\int\limits_{\Omega}|\nabla w-\mathbf{B}(f-f_\infty)|^2\,dx-
\tfrac{1}{2}\tfrac{d}{dt}\int\limits_{\Omega}|\mathbf{B}(f-f_\infty)|^2\,dx + \\
\int\limits_{\Omega}v^k(\nabla\cdot[\nabla w-\mathbf{B}(f-f_\infty)])^2\,dx =\int\limits_{\Omega}wf_t\,dx
=-\int\limits_{\Omega}\nabla w\cdot\mathbf{B}(f_t)\,dx= \\
-\int\limits_{\Omega}(\nabla w-\mathbf{B}(f-f_\infty))\cdot\mathbf{B}(f_t)\,dx-\int\limits_{\Omega}\mathbf{B}(f-f_\infty)\cdot\mathbf{B}(f_t)\,dx.
\end{multline*}
This yields
\begin{equation*}
\tfrac{1}{2}\tfrac{d}{dt}\int\limits_{\Omega}z^2\,dx+\int\limits_{\Omega}v^k(\nabla\cdot z)^2\,dx=-\int\limits_{\Omega}z\cdot\mathbf{B}(f_t)\,dx
\end{equation*}
with $z:=\nabla w-\mathbf{B}(f-f_\infty) $. Additionally, arguing as in the proof of estimate \rf{diss_est}, one shows
\begin{equation*}
\int\limits_{\Omega}z^2\,dx\le C_{S,l}^2\Bigl(\int\limits_{\Omega} v^k|\nabla z|^2\,dx\Bigr)\Bigl(\|\tfrac{1}{v}\|_\frac{kl}{l-2}\Bigr)^k.
\end{equation*}
Finally, combining the last two inequalities and using Cauchy-Schwarz inequality yields
\begin{equation}
\label{zest}
\tfrac{d}{dt}\int\limits_{\Omega}z^2\,dx+C_{S,l}^{-2}\Bigl(\|\tfrac{1}{v}\|_\frac{kl}{l-2}\Bigr)^{-k}\int\limits_{\Omega}z^2\,dx\le C_{S,l}^{2}\Bigl(\|\tfrac{1}{v}\|_\frac{kl}{l-2}\Bigr)^{k}\|\mathbf{B}(f_t)\|_2^2.
\end{equation}
In subsections 4.2.1 and 4.2.2 below, we derive uniform in time bounds on $\|\frac{1}{v}\|_\frac{kl}{l-2}$ for certain $l>2$. These bounds combined with \rf{zest} will then imply the asymptotic decay \rf{non_stat_conv}.
\subsubsection{Case $k>k_{cr}(r)$}
Here, w.l.o.g., we assume that $s=\infty$ and $r\in(3/2,\,+\infty]$ in \rf{f}. In the first part of the proof,
we follow exactly the lines of subsection 4.1.1 (in the case $k\in[\frac{4r-2}{r-2},\,+\infty)$) or subsection 4.1.2 (in the case $k\in(k_{cr}(r),\,\frac{4r-2}{r-2})$) establishing uniform in time bounds \rf{p_bound} or \rf{p_bound3}, respectively, i.e.
\begin{equation*}
\int\limits_{\Omega}\tfrac{dx}{v^p}\le C\quad\forall\, t>0 \text{ and fixed } p\in[\tfrac{3k}{2},\,+\infty),\ C>0.
\end{equation*}
We only note that, when deriving \rf{p_bound} or \rf{p_bound3}, $\|f\|_r$ should now be replaced by $\|f\|_{L^\infty(0,+\infty;L^r(\Omega))}$.
Next, the latter bound combined with \rf{zest} taken with $l=\frac{2p}{p-k}$ implies
\begin{equation*}
\label{z_est}
\tfrac{d}{dt}\int\limits_{\Omega}z^2\,dx+C_{S,\frac{2p}{p-k}}^{-2}C^{-\frac{k}{p}}\int\limits_{\Omega}z^2\,dx\le C_{S,\frac{2p}{p-k}}^2C^\frac{k}{p}\|\mathbf{B}(f_t)\|_2^2.
\end{equation*}
Applying the comparison principle to the last inequality one deduces that
\begin{eqnarray*}
\int\limits_{\Omega}|z|^2\,dx&\le& \exp\Bigl\{-C_{S,\frac{2p}{p-k}}^{-2}C^{-\frac{k}{p}}\,t\Bigr\}\Big(\int\limits_{\Omega}|z_0|^2\,dx+\nonumber\\
&+&C_{S,\frac{2p}{p-k}}^2C^\frac{k}{p}\int_0^t\exp\Bigl\{C_{S,\frac{2p}{p-k}}^{-2}C^{-\frac{k}{p}}\,\tau\Bigr\}\|\mathbf{B}(f_t)\|_2^2\,d\tau\Big).
\end{eqnarray*}
The last estimate would imply asymptotic decay
\begin{equation}
\label{as_decayz}
\int\limits_{\Omega}|z|^2\,dx\rightarrow 0\quad\text{as}\quad t\rightarrow+\infty
\end{equation}
provided
\begin{equation*}
\mathop {\lim} \limits_{t\rightarrow\infty}\exp\Bigl\{-C_{S,\frac{2p}{p-k}}^{-2}C^{-\frac{k}{p}}\,t\Bigr\}\int_0^t\exp\Bigl\{C_{S,\frac{2p}{p-k}}^{-2}C^{-\frac{k}{p}}\,\tau\Bigr\}\|\mathbf{B}(f_t)\|_2^2\,d\tau=0.
\end{equation*}
Using L'Hospital's rule one observes that this holds because of \rf{Bf_tau}. Hence, \rf{as_decayz} holds and, after substituting $z=\nabla w-\mathbf{B}(f-f_\infty)$ and $w=v(x,t)-v_\infty(x)$, one obtains
\begin{equation*}
\|\nabla(v(\cdot,t)-v_\infty(.) )\|_2\rightarrow 0 \text{ and } \|v(\cdot,t)-v_\infty(.) \|_{H^1(\Omega)}\rightarrow 0 \text{ as } t\rightarrow+\infty,
\end{equation*}
where we also use \rf{Bog_conv}.
\subsubsection{Case $\frac{r}{r-3/2}\le k\le k_{cr}(r)$}
Here, we assume $f\in L^s(0,+\infty;L^r(\Omega))$ with $s\in[1,\infty)$ and $r\in(3/2,\,+\infty)$ and argue similarly to subsection 3.2.2. First, we set $p=\frac{3k}{2}$ and consider estimate \rf{RHS2} with $k'$ and $\beta$ given by \rf{k'} and \rf{beta1}, respectively. Note that both conditions $\beta\in[0,\,k-1)$ and $p+1-k-\beta>0$ hold for our range of $k$. Then from \rf{Entrops} and \rf{RHS2}--\rf{beta1} considered with $p=3k/2$ one gets
\begin{equation*}
\tfrac{d}{dt}\int\limits_{\Omega}\tfrac{dx}{v^\frac{3k}{2}}\le \tfrac{3k}{2}\|f\|_r M^\frac{2(k-1)r-3k}{r(k+2)}\Bigl(\int\limits_{\Omega}\tfrac{dx}{v^p}\Bigr)^\frac{3kr-2(k-1)(2r-1)}{r(k+2)} .
\end{equation*}
Next, applying Gr\"{o}nwall's inequality to this estimate results in the bound
\begin{equation}
\label{p_bound4}
\int\limits_{\Omega}\tfrac{dx}{v^\frac{3k}{2}}\le \Bigl[\Bigl(\int\limits_{\Omega}\tfrac{dx}{v_0^\frac{3k}{2}}\Bigr)^\frac{2(k-1)(r-1)}{r(k+2)}
+\tfrac{3k(k-1)(r-1)}{r(k+2)}\|f\|_{L^s(L^r)}M^\frac{2(k-1)r-3k}{r(k+2)}\, t^\frac{s-1}{s} \Bigr]^ \frac{r(k+2)}{2(k-1)(r-1)}.
\end{equation}
Choosing
\begin{equation*}
l=6, \text{ i.e. so that } p=\tfrac{kl}{l-2},
\end{equation*}
in estimate \rf{zest} together with \rf{p_bound4} implies
\begin{equation*}
\tfrac{d}{dt}\int\limits_{\Omega}z^2\,dx+g(t)\int\limits_{\Omega}z^2\,dx\le\tfrac{\|\mathbf{B}(f_t)\|_2^2}{g(t)}
\end{equation*}
with
\begin{equation}
\label{g1}
g(t)= C_{S,\,6}^{-2} \Bigl[\Bigl(\int\limits_{\Omega}\tfrac{dx}{v_0^p}\Bigr)^\frac{2(k-1)(r-1)}{r(k+2)}
\!\!\!\! +\tfrac{3k(k-1)(r-1)}{r(k+2)}\|f\|_{L^s(L^r)}M^\frac{2(k-1)r-3k}{r(k+2)}t^\frac{s-1}{s}\Bigr]^ \frac{-r(k+2)}{3(k-1)(r-1)}
\! \!.
\end{equation}
Applying the comparison principle to the last inequality one deduces that
\begin{multline*}
\int\limits_{\Omega}|z|^2\,dx\le \exp\Bigl\{-\int_0^tg(\tau)\,d\tau\Bigr\} \times \\
\Bigl(\int\limits_{\Omega}|z_0|^2\,dx
+\int_0^t\exp\Bigl\{\int_0^\tau g(\tilde{\tau})\,d\tilde{\tau}\Bigr\}\tfrac{\|\mathbf{B}(f_\tau)\|_2^2}{g(\tau)}\,d\tau\Bigr).
\end{multline*}
The last estimate would imply asymptotic decay
\begin{equation}
\label{as_decay5}
\int\limits_{\Omega}|z|^2\,dx\rightarrow 0\quad\text{as}\quad t\rightarrow+\infty
\end{equation}
provided
\begin{equation}
\label{g_decay}
\int_0^tg(\tau)\,d\tau\rightarrow\infty\quad\text{as}\quad t\rightarrow\infty,
\end{equation}
as well as,
\begin{equation*}
\mathop {\lim} \limits_{t\rightarrow\infty}\exp\Bigl\{-\int_0^tg(\tau)\,d\tau\Bigr\}\int_0^t\exp\Bigl\{\int_0^\tau g(\tilde{\tau})\,d\tilde{\tau}\Bigr\}\tfrac{\|\mathbf{B}(f_t)\|_2^2}{g(t)}\,d\tau=0
\end{equation*}
hold together.
Using L'Hospital's rule and \rf{g1} one observes that the last limit holds if and only if
\begin{equation}
\label{f_constr1}
\mathop {\lim} \limits_{t\rightarrow\infty} \|\mathbf{B}(f_t)\|_2\cdot t^\frac{(s-1)r(k+2)}{3s(k-1)(r-1)}=0.
\end{equation}
Similarly, \rf{g_decay} holds provided
\begin{equation*}
\tfrac{(s-1)r(k+2)}{3s(k-1)(r-1)}\le 1.
\end{equation*}
The last condition implies, necessarily,
that
\begin{equation*}
s\in[1,\,s_{cr}(k,r)]\quad\text{with}\quad s_{cr}(k,r)=\tfrac{r(k+2)}{r(5-2k)+3(k-1)}.
\end{equation*}
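Indeed, rewriting the last condition as $(s-1)r(k+2)\le 3s(k-1)(r-1)$ gives
\begin{equation*}
s\bigl[r(k+2)-3(k-1)(r-1)\bigr]\le r(k+2)\quad\text{with}\quad r(k+2)-3(k-1)(r-1)=r(5-2k)+3(k-1),
\end{equation*}
which, for $k<k_{cr}(r)$ so that the bracket is positive, is equivalent to $s\le s_{cr}(k,r)$.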
Note that $s_{cr}(k,r)>1$ for $k<k_{cr}(r)$ and $r>1$.
Taking additionally \rf{f_constr1} into account and using \rf{bog_bound} and \rf{f_tau}, we conclude that for any
\begin{equation*}
f\in L^s(0,+\infty;L^r(\Omega))
\end{equation*}
with $s\in[1,\,s_{cr}(k,r)]$ decay \rf{as_decay5} holds.
Substituting $z=\nabla w-\mathbf{B}(f-f_\infty)$ and $w=v(x,t)-v_\infty(x)$, one obtains
\begin{equation*}
\|\nabla(v(\cdot,t)-v_\infty(.))\|_2\rightarrow 0 \text{ and } \|v(\cdot,t)-v_\infty(.)\|_{H^1(\Omega)}\rightarrow 0 \text{ as } t\rightarrow+\infty,
\end{equation*}
where we also use \rf{Bog_conv}.
\section{Discussion}
In this study, we identified critical mobilities $k=k_{cr}(r)$, \rf{k_cr} and \rf{k_cr1}, for problem \rf{ME1}--\rf{ME3} considered with source term \rf{f} in dimensions $N=2$ and $N=3$. In dimension $N=2$, Theorems 1.1--1.2 imply asymptotic decay in the $H^1$-norm of dynamical solutions to the unique positive steady state $v_\infty$ for $k\ge k_{cr}(r)$ and provide explicit decay rates. In dimension $N=3$, the same is shown for $k>k_{cr}(r)$, and the possibility of asymptotic decay for $k=k_{cr}(r)$ remains an open question. Interestingly, the condition for existence of a unique positive steady state $v_\infty$ also has the form $k\ge k_{cr}(r)$ for $r\in(N/2,\,+\infty]$ (see Appendix A.2). This suggests that for $k<k_{cr}(r)$ the possibility of decay to a nonhomogeneous steady state may be conditional on certain relations between the given $M$, $|\Omega|$ and $\|f\|_r$.
Besides that, the possibility of decay to the constant profile $v_\infty$ corresponding to $f_\infty=0$ (see \rf{const_ss} in Appendix A.2) for $k\in(1,\,\frac{r}{r-1}]$ for $N=2$ and $k\in(1,\,\frac{r}{r-3/2})$ for $N=3$ remains unknown.
Other open questions related to the assumptions of Theorems 1.1--1.2 include the question of positivity of solutions for all $t>0$ for positive initial profiles $v_0>0$ in the case $r\in(N/2,\,+\infty)$, as well as existence and regularity properties of solutions starting from nonnegative initial data $v_0\ge 0$. Possible existence and decay of sign-changing solutions to \rf{ME1}--\rf{ME3} considered with \rf{f} in certain ranges of $k>1$ and $p=1$ also remained outside the scope of this study.
Finally, investigation of the limiting cases $k\rightarrow 1^+$ or $k\rightarrow+\infty$ in \rf{ME1}--\rf{ME3} corresponding to $m\rightarrow+\infty$ and $m\rightarrow 1$ in the PME problem \rf{PME1}--\rf{PME3} might be of interest. In both of the limits
many of our estimates degenerate.
\begin{appendix}
\section{Positivity of solutions}
In this section, we discuss positivity properties of solutions $v(x,t)$ to problem \rf{ME1}--\rf{ME3} and
of the stationary one $v_\infty(x)$ to \rf{SME1}--\rf{SME3}.
\subsection{Positivity of dynamic solutions}
Using the a priori estimate \rf{Entrops}, we can show that solutions $v(x,t)$ to \rf{ME1}--\rf{ME3} stay positive for all $t>0$ and $x\in\bar{\Omega}$ provided that initially $v_0(x)>0$. This result is shown here for all parameter values $k\in(1,\,+\infty)$ and in any dimension $N\ge 1$, but only in the case $r=+\infty$ considered in \rf{f}. Positivity of solutions in the case $r\in(N/2,\,+\infty)$ then remains an open problem.
The proof in the case $r=+\infty$ proceeds as follows. Let us estimate the term on the right-hand side of \rf{Entrops} as
\begin{equation*}
\int\limits_{\Omega}\tfrac{f\,dx}{v^{p+1-k}}\le\|f\|_\infty\Bigl(\int\limits_{\Omega}\tfrac{dx}{v^p}\Bigr)^\frac{p+1-k}{p}|\Omega|^\frac{k-1}{p},
\end{equation*}
and denote
\begin{equation*}
y(t)=\int\limits_{\Omega}\tfrac{dx}{v^p}.
\end{equation*}
Then \rf{Entrops} implies
\begin{equation*}
y'(t)\le p\|f\|_\infty|\Omega|^\frac{k-1}{p}y^\frac{p+1-k}{p}(t).
\end{equation*}
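We note that the Gr\"{o}nwall step here amounts to a standard separation argument: dividing the previous differential inequality by $y^{\frac{p+1-k}{p}}(t)$ gives
\begin{equation*}
\tfrac{d}{dt}\,y^{\frac{k-1}{p}}(t)\le(k-1)\|f(\cdot,t)\|_\infty|\Omega|^\frac{k-1}{p},
\end{equation*}
and integration in time combined with H\"{o}lder's inequality, $\int_0^t\|f(\cdot,\tau)\|_\infty\,d\tau\le\|f\|_{L^s(0,+\infty;L^\infty(\Omega))}\,t^\frac{s-1}{s}$, yields the bound stated next.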
Applying Gr\"{o}nwall's inequality to this estimate results in the following bound
\begin{equation*}
y(t)\le\Bigl(y(0)^\frac{k-1}{p}+(k-1)\|f\|_{L^s(0,+\infty;L^\infty(\Omega))}|\Omega|^\frac{k-1}{p}t^\frac{s-1}{s}\Bigr)^\frac{p}{k-1},
\end{equation*}
i.e.
\begin{equation*}
\|v(\cdot,\,t)^{-1}\|_p\le\Bigl(\|v_0^{-1}\|_p^{k-1}+(k-1)\|f\|_{L^s(0,+\infty;L^\infty(\Omega))}|\Omega|^\frac{k-1}{p}t^\frac{s-1}{s}\Bigr)^\frac{1}{k-1}.
\end{equation*}
Letting $p\rightarrow\infty$ in this inequality gives
\begin{equation*}
\|v(\cdot,\,t)^{-1}\|_\infty\le\Bigl(\|v_0^{-1}\|_\infty^{k-1}+(k-1)\|f\|_{L^s(0,+\infty;L^\infty(\Omega))}t^\frac{s-1}{s}\Bigr)^\frac{1}{k-1}.
\end{equation*}
Hence, $v(x,t)>0$ for all $t>0$ and $x\in\bar{\Omega}$ provided $v_0>0$. We note that the obtained lower bound on
$v(x,t)$ is not uniform in time, but it still implies smoothness and uniqueness of solutions $v(\cdot,t)>0$ for all $t>0$.
\subsection{Existence of positive steady states $v_\infty(x)$}
Let us recall the definitions \rf{k_cr} and \rf{k_cr1} for the critical value of exponent $k$ in dimensions $N=2$ and $N=3$, respectively. First, we show that for any parameter $k\in [k_{cr}(r),\,+\infty)$ there exists a unique positive stationary solution $v_\infty$ to problem \rf{SME1}--\rf{SME3}, provided the limit right-hand side $f_\infty\in L^r(\Omega)$. This follows from conservation \rf{SME3} combined with Sobolev continuous embedding (see, e.\,g.~\cite{AF03})
\begin{equation}
\label{embed}
W^{2,r}(\Omega)\hookrightarrow C^\lambda(\bar{\Omega})\quad\text{for}\quad0<\lambda \le 2-\tfrac{N}{r} \text{ and } r > \tfrac{N}{2}.
\end{equation}
Let $v_\infty$ be a solution to \rf{SME1}--\rf{SME2}. The embedding implies that
\begin{equation*}
|v_\infty(x)|\le L|x-x_0|^\lambda
\end{equation*}
holds for any $x\in\bar{\Omega}$ and some constant $L>0$ whenever $v_\infty(x_0)=0$.
Let us further consider nonnegative solutions $v_\infty$ to \rf{SME1}--\rf{SME2}.
For them \rf{SME3} implies that
\begin{equation}
\label{hol}
M=\int\limits_{\Omega}\tfrac{dx}{v_\infty^{k-1}}\ge \frac{1}{L^{k-1}}\int\limits_{\Omega}\tfrac{dx}{|x-x_0|^{\lambda(k-1)}}.
\end{equation}
The integral on the right-hand side of \rf{hol} diverges provided $\lambda(k-1)\ge N$, i.e., using \rf{embed}, when
\begin{equation*}
k > \tfrac{N+ 2}{2} \text{ and } r\ge \tfrac{N(k-1)}{2(k-1)-N}
\end{equation*}
holds. We note that the latter condition coincides with $k\in [k_{cr}(r),\,+\infty)$ for $N=2,\,3$. The divergence contradicts \rf{SME3}; hence $v_\infty$ cannot vanish in $\bar{\Omega}$ and, consequently, the condition $f_\infty\in L^r(\Omega)$ implies existence of a unique positive solution $v_\infty$ to \rf{SME1}--\rf{SME3}.
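For instance, for $N=3$ and $k>\frac{5}{2}$ the condition $r\ge \frac{3(k-1)}{2(k-1)-3}$ rearranges as
\begin{equation*}
r(2k-5)\ge 3(k-1)\quad\Longleftrightarrow\quad k(2r-3)\ge 5r-3\quad\Longleftrightarrow\quad k\ge\tfrac{5r-3}{2r-3}=k_{cr}(r),
\end{equation*}
in agreement with \rf{k_cr1}.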
In the case $k\in(1,\,k_{cr}(r))$, one has $f_\infty = 0$ in Theorem 1.2, and there exists a unique positive solution $v_\infty$ to \rf{SME1}--\rf{SME3}:
\begin{equation}
\label{const_ss}
v_\infty=\bigl(\tfrac{|\Omega|}{M}\bigr)^\frac{1}{k-1}.
\end{equation}
We conclude that the assumptions of Theorems 1.1--1.2 imply, in particular, existence of a unique positive $v_\infty$.
\section{Boundedness of $\|\nabla v\|_2$}
Here, we prove formula \rf{grad_v} provided \rf{f_tau}--\rf{f_conv} hold. From energy inequality \rf{EE} one obtains
\begin{eqnarray*}
\tfrac{1}{2}\|\nabla v\|_2^2&\le&E(v_0)-\int\limits_{\Omega}fv\,dx+ \iint \limits_{Q_t} {vf_t\,dx\,d\tau}\nonumber\\
&\le&E(v_0) + \|v\|_2\|f\|_2+\int_0^t\|v\|_2\|f_\tau\|_2\,d\tau.
\end{eqnarray*}
Combining this estimate with the Poincar\'{e} inequality
\begin{equation*}
\|v\|_2-\Bigl(\tfrac{|\Omega|}{M}\Bigr)^\frac{1}{k-1}|\Omega|^\frac{1}{2}\le\|v-\Bigl(\tfrac{|\Omega|}{M}\Bigr)^\frac{1}{k-1}\|_2\le C_P\|\nabla v\|_2,
\end{equation*}
gives
\begin{eqnarray}
\label{Msc1}
\tfrac{1}{2}\|\nabla v\|_2^2&\le&E(v_0)+\|f\|_2\Bigl(C_P\|\nabla v\|_2+\Bigl(\tfrac{|\Omega|}{M}\Bigr)^\frac{1}{k-1}|\Omega|^\frac{1}{2}\Bigr)\nonumber\\
&+&\int_0^t\Bigl(C_P\|\nabla v\|_2+ \bigl(\tfrac{|\Omega|}{M}\bigr)^\frac{1}{k-1}|\Omega|^\frac{1}{2}\Bigr)\|f_\tau\|_2\,d\tau.
\end{eqnarray}
Additionally, one has
\begin{equation*}
\tfrac{d}{dt}\|f\|_2=\tfrac{\langle f,\,f_t\rangle}{\|f\|_2}\le\|f_t\|_2,
\end{equation*}
i.e.
\begin{equation*}
\|f\|_2\le\|f_0\|_2+\int_0^t\|f_\tau\|_2\,d\tau.
\end{equation*}
Combining the last estimate with \rf{Msc1}, while denoting
\begin{equation*}
m(t)=\int_0^t\|f_\tau\|_2\,d\tau\quad\text{and}\quad m_\infty=\int_0^\infty\|f_\tau\|_2\,d\tau<\infty,
\end{equation*}
yields
\begin{multline}\label{Msc2}
(\|\nabla v\|_2-C_P\|f_0\|_2-C_Pm_\infty)^2 \le 2E(v_0)+C_P^2(\|f_0\|_2+m_\infty)^2+ \\
2C_P\int_0^t\|\nabla v\|_2m'(\tau)\,d\tau + 2(\|f_0\|_2+2m_\infty)\Bigl(\tfrac{|\Omega|}{M}\Bigr)^\frac{1}{k-1}|\Omega|^\frac{1}{2} \\
\le a_0+2\int_0^t\Bigl|\|\nabla v\|_2-C_P\|f_0\|_2-C_Pm_\infty\Bigr|m'(\tau)\,d\tau,
\end{multline}
where we denoted
\begin{multline*}
a_0=2E(v_0)+C_P^2(\|f_0\|_2+m_\infty)^2+2m_\infty C_P(\|f_0\|_2+ \\
2m_\infty)+2(\|f_0\|_2+2m_\infty)\Bigl(\tfrac{|\Omega|}{M}\Bigr)^\frac{1}{k-1}|\Omega|^\frac{1}{2}.
\end{multline*}
The last inequality, after an application of the Bihari--LaSalle lemma to\\ $(\|\nabla v\|_2-C_P\|f_0\|_2-C_Pm_\infty)^2$, while
using $m'(t)\ge0$ and $m(0)=0$, implies
\begin{equation*}
|\|\nabla v\|_2-C_P\|f_0\|_2-C_Pm_\infty|\le\sqrt{2a_0}+m(t)\le\sqrt{2a_0}+m_\infty,
\end{equation*}
whence we obtain the sought estimate \rf{grad_v} holding for all $t>0$ with
\begin{equation*}
C=\sqrt{2a_0}+(C_P+1)m_\infty+C_P\|f_0\|_2.
\end{equation*}
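For the reader's convenience, the comparison estimate behind the Bihari--LaSalle step used above is the following standard Gronwall-type implication (for continuous $y\ge 0$, integrable $g\ge 0$, and a constant $a\ge 0$):

```latex
\begin{equation*}
y^2(t)\;\le\; a+2\int_0^t y(\tau)\,g(\tau)\,d\tau
\quad\Longrightarrow\quad
y(t)\;\le\;\sqrt{a}+\int_0^t g(\tau)\,d\tau .
\end{equation*}
```

Applied with $y=\bigl|\|\nabla v\|_2-C_P\|f_0\|_2-C_Pm_\infty\bigr|$, $a=a_0$, and $g=m'$, it yields the displayed bound (stated above with the slightly weaker constant $\sqrt{2a_0}$).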
\end{appendix}
\section{Introduction}
Wireless Sensor Networks (WSNs) are self-organized distributed wireless networks composed of a large number of energy-limited nodes. Topology construction is one of the primary challenges in WSNs: it ensures network connectivity and coverage, increases the efficiency of media access control and routing protocols, extends network lifetime, and enhances the robustness of the network \cite{abbasi2013overview,Uster2011,Akbari2013,Xu2011,Li2006}. The main aim of topology construction is to connect the network nodes so as to obtain a desired topological property. A dense network topology leads to high energy consumption due to overlapped sensing areas and topology maintenance costs, while a very sparse network topology jeopardizes network connectivity \cite{Qureshi2013}.
The development of complex networks provides new ideas for topology construction in WSNs. The study of complex networks is an emerging subject focusing on networks with non-trivial topological features \cite{Lu2009,Cui2010}. WSNs share many characteristics with typical complex network models: both contain a large number of nodes, exhibit non-trivial topological features, and connect nodes through multi-hop paths \cite{Wang2007}. More importantly, typical complex network models, such as the small-world \cite{Watts1998,Newman1999} and scale-free \cite{Barabasi1999} models, show characteristics which are beneficial in WSNs. Small-world networks present a small average path length between pairs of nodes, which helps to save energy in topology construction and routing in WSNs \cite{Guidoni2010}. Scale-free networks have power-law degree distributions and show excellent robustness against random node damage \cite{Albert2000,Gao2010}; a random attack does not significantly affect the performance of a scale-free network \cite{Cui2010,Xia2008}. It is therefore worthwhile to consider complex network topologies when optimizing the topology of WSNs \cite{MIshizuka2004}. However, complex networks are relational graphs whose nodes make direct contact according to their logical relationships, while WSNs are spatial graphs in which the existence of links depends on node positions and radio range \cite{Helmy2003}. Thus, complex network theory cannot be applied to WSNs directly. Some efforts have been made to turn wireless networks into heterogeneous networks with small-world \cite{Guidoni2010,Chitradurga2004,Cavalcanti2004,Verma2011,Ye2008} or scale-free features \cite{luo2011energy,Wang2007,Xuyuan2009,Hailin2009}.
In this paper, we propose a local-area and energy-efficient (LAEE) evolution model to build a WSN with a scale-free topology. In this model, topology construction is divided into two phases. In the first phase, nodes are distributed randomly in a fixed region, and each node collects the information of the other nodes in its radio range through HELLO messages. In the second phase, topology evolution starts from the sink, grows with a preferential attachment rule, and stops once all nodes have been added to the network. The following conditions are considered in the design of the evolution model: (i) Links between nodes depend on their positions and transmission range (radio range); nodes beyond transmission range cannot make direct contact. (ii) Nodes can only obtain local information, as WSNs are distributed networks. (iii) The remaining energy of each node is taken into account: nodes with more remaining energy have a higher probability of being connected. (iv) In order to avoid excessive energy consumption, an upper bound on the degree of each node is imposed.
The remainder of this paper is organized as follows: Section 2 reviews background and related work on scale-free networks and scale-free based wireless networks. In Section 3, we propose the LAEE evolution model and derive its theoretical degree distribution. Section 4 shows simulation results based on the LAEE evolution model and examines the tolerance of LAEE to random failures. Finally, we conclude in Section 5.
\section{Background and Related Work}
\subsection{Traditional Topology Constructions in WSNs}
The unit disk graph (UDG) is the underlying topology model for WSNs; it contains all links within transmission range (radio range). Assume that all nodes are randomly distributed in a region of area $ S $. A given node falls inside a fixed disk of area $ \pi r^2 $ with independent probability $ \varphi = \pi r^2 / S $, where $ r $ is the transmission range. The probability that such a disk contains $ k $ nodes is therefore binomial, $ p(k) = \binom{n}{k} \varphi ^k (1- \varphi)^{n-k} $, where $ n $ is the total number of nodes in the network. As $ n $ grows, this probability approaches the Poisson distribution $ p(k) = (n \varphi)^k e^{-n\varphi}/k! $. The average number of neighbors of a node is then close to $ (n-1) \varphi $. However, the UDG model has a high concentration of connections, which promotes excess energy consumption in periodic topology maintenance and in the route selection process. It is therefore an inefficient way to construct a topology.
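As a quick numerical sanity check (an illustrative sketch of ours, not from the paper; parameter values are arbitrary), one can scatter nodes uniformly and count UDG neighbors. On a torus, boundary effects vanish and the empirical mean degree should sit near $(n-1)\varphi$:

```python
import math
import random

def udg_avg_degree(n=400, L=1000.0, r=100.0, seed=1):
    """Scatter n nodes uniformly in an L x L square treated as a
    torus (wrap-around distance avoids boundary effects) and return
    the empirical average number of UDG neighbors (nodes within r)."""
    rng = random.Random(seed)
    pts = [(rng.uniform(0, L), rng.uniform(0, L)) for _ in range(n)]

    def dist2(p, q):
        dx = abs(p[0] - q[0]); dx = min(dx, L - dx)
        dy = abs(p[1] - q[1]); dy = min(dy, L - dy)
        return dx * dx + dy * dy

    links = sum(1 for i in range(n) for j in range(i + 1, n)
                if dist2(pts[i], pts[j]) <= r * r)
    return 2.0 * links / n            # average degree = 2*|E|/n

phi = math.pi * 100.0 ** 2 / 1000.0 ** 2       # pi r^2 / S
print(udg_avg_degree(), (400 - 1) * phi)       # empirical vs expected
```

For $r < L/2$ every pair of nodes lies within range with probability exactly $\pi r^2/L^2$ on the torus, so the two printed numbers agree up to sampling noise.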
Almost all other topology construction methods in WSNs build a reduced topology from the UDG \cite{Wightman2011,SJardosh2008}. Based on the topology production mechanism, they can be categorized into \textit{Flat Networks} and \textit{Hierarchical Networks} with clustering \cite{SJardosh2008}.
In \textit{Flat Networks}, all nodes are considered to perform the same role in topology and functionality. Typical examples include directed relative neighborhood graph (DRNG) \cite{NLi2004}, k-nearest neighbor (KNN) \cite{DMBlough2006}, TopDisc \cite{BDeb2002}, Euclidean minimum spanning tree (EMST) \cite{PKAgarwal1991}, local Euclidean minimum spanning tree (LEMST) \cite{NLi2003}, Delaunay triangulation graph (DTG) \cite{MLi2003}, and the cone-based topology control algorithm (CBTC) \cite{LLi2005}.
In KNN, a node sorts all other nodes in its transmission range by Euclidean distance (or another distance metric) and then links to the $ k $ nearest nodes as neighbors in the final topology; it is scalable and very easy to implement. In DRNG, a link connects nodes $ u $ and $ v $ if and only if there does not exist a third node $ w $ that is closer to both $ u $ and $ v $. TopDisc discovers the topology by sending query messages and describing node states with a three- or four-color system; it is a greedy approximation method based on the minimum dominating set. In EMST or LEMST, each node builds its global or local minimum spanning tree based on Euclidean distance and keeps only the tree nodes one hop away as its neighbors. In DTG, a triangle formed by three nodes $ u $, $ v $, $ w $ belongs to the topology if no other node lies inside its circumcircle. CBTC uses an angle $ \alpha $ as its key parameter: node $ u $ must reach some node in every cone of angle $ \alpha $ around it.
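To make the KNN rule concrete, here is a minimal Python sketch (ours, illustrative; not code from the cited papers):

```python
import math

def knn_topology(points, k):
    """For each node, link to its k nearest nodes by Euclidean
    distance.  Returns a dict: node index -> list of neighbor
    indices, nearest first.  The relation is directed (u may pick
    v while v does not pick u); undirected variants keep either
    the union or the intersection of the two choices."""
    n = len(points)
    nbrs = {}
    for i in range(n):
        others = sorted(
            (j for j in range(n) if j != i),
            key=lambda j: math.dist(points[i], points[j]))
        nbrs[i] = others[:k]
    return nbrs

pts = [(0, 0), (1, 0), (2, 0), (10, 0)]
print(knn_topology(pts, 2))
```

In this toy layout node 3 picks node 2 as a neighbor while node 2 does not pick node 3, illustrating the asymmetry mentioned above.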
Nodes in \textit{Hierarchical Networks} with clustering are heterogeneous in functionality, acting as cluster heads or cluster members. LEACH is a typical \textit{Hierarchical Network} topology model \cite{WBHeinzelman2002} in which the network is clustered and periodically updated. The cluster heads communicate directly with the sink on behalf of their cluster members. A node elects itself as a cluster head with a probability related to factors such as its remaining energy and whether it has served as a cluster head in the last $ r $ rounds.
The WSN topology can be represented as a graph $ G(V, E) $, where $ V $ and $ E $ are the sets of sensor nodes and topological links, respectively. We refer to the number of a node's links (equivalently, the number of its neighbors) as its degree. All the topology construction models above yield highly concentrated degree distributions, which means they tend to produce homogeneous graphs \cite{CMa2011,CTong2012}.
\subsection{Scale-free Evolution Models}
Barab\'{a}si and Albert proposed an evolution model, called the BA model, to generate scale-free networks. The model has the following two features. (i) Growth: the network starts with a small number $ m_0 $ of nodes, and at each time step a new node with $ m\ (m \leqslant m_0) $ edges is added. (ii) Preferential attachment: the new node connects to an existing node $ i $ with probability $ \Pi_i = k_i / \sum_j k_j $, where $ k_i $ is the degree (i.e., the number of topological links) of node $ i $. In the BA model, the degree distribution follows the power law $ P(k) \sim k^{- \gamma} $ with scaling exponent $ \gamma = 3 $. The BA model cannot be used directly to generate a WSN because it requires the total degree $ \sum_j k_j $ of the whole network, which is unobtainable in many real networks. Owing to the limitations of transmission range, energy, and processing capacity, nodes in WSNs can only collect information from $n$-hop neighbors and cannot obtain global information.
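For reference, the BA growth rule just described can be sketched in a few lines of Python (illustrative; the stub-list trick for degree-proportional sampling is a standard implementation device, not taken from the cited paper):

```python
import random

def ba_network(n, m, m0=None, seed=0):
    """Grow a BA-style scale-free graph: start from a small clique
    of m0 nodes, then attach each new node to m distinct existing
    nodes chosen with probability proportional to their degree.
    Degree-proportional choice is implemented by sampling from a
    list that contains each node once per incident edge endpoint."""
    m0 = m0 or m + 1
    rng = random.Random(seed)
    edges = [(i, j) for i in range(m0) for j in range(i + 1, m0)]
    stubs = [v for e in edges for v in e]   # node appears deg(v) times
    for new in range(m0, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(stubs))
        for t in targets:
            edges.append((new, t))
            stubs.extend((new, t))
    return edges

E = ba_network(200, 3)
deg = {}
for u, v in E:
    deg[u] = deg.get(u, 0) + 1
    deg[v] = deg.get(v, 0) + 1
print(min(deg.values()), max(deg.values()))
```

Every node enters with exactly $m$ links, so the minimum degree is $m$, while early nodes accumulate many more links, producing the heavy tail.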
Li and Chen propose a local-world evolution model \cite{XiangLi2003}. In this model, preferential attachment does not act on the global network but on a local world of each new node: $ M $ nodes are randomly selected from the existing network as the \textit{local world} of the new node. The preferential attachment probability for the new node at time step $ t $ is
\begin{eqnarray}
\label{formula1}
\Pi_i = \Pi'(i \in \text{local-world}) \frac{k_i}{\sum\limits_{j \in \text{local-world}} k_j}
\end{eqnarray}
\noindent where $ \Pi ^{'} (i \in \text{local-world}) = M / (m_0 + t) $. As these $ M $ nodes are selected randomly, the spatial relationships between nodes are not taken into account. Therefore, the local-world evolution model still cannot describe the topology evolution mechanism in wireless networks.
\subsection{WSNs Topology Constructions with Scale-free Theory}
Several methods have been proposed to build WSNs with the scale-free property \cite{CTong2008,Xuyuan2009,Wang2007,Hailin2009}. These methods take complex network characteristics such as growth and preferential attachment into account, and some of them consider the local-area feature of WSNs.
Zhang provides a model of WSNs based on scale-free network theory \cite{Xuyuan2009}. In this model, each node has a degree saturation value, $ k_{max} $, to balance energy consumption. A newly generated node has a certain probability $ P_e $ of being damaged while it is being added to the network. The probability that the new node connects to existing node $ i $ is as follows:
\begin{eqnarray}
\label{formula2}
\Pi_i = P( d_{iv} \leqslant r)(1 - P_e) \frac{k_i}{\sum\limits_{\text{total-network}}{k_j} - q k_{max}}
\end{eqnarray}
\noindent where $ d_{iv} $ is the distance between the new node and the existing node, $ r $ is the transmission range, and $ q $ is the number of nodes that have already reached the degree saturation value $ k_{max} $. In Eq. \eqref{formula2}, $ P(d_{iv} \leqslant r) $ is the ratio of $ \pi r^2 $ to $ S $, where $ S $ is the entire WSN coverage region.
One of the main problems of Zhang's model is that the sum of $ \Pi_i $ over all nodes is much smaller than 1. Moreover, the scaling exponent of the degree distribution is $ \gamma = 1 + 2S / \pi r^2 $, which is much greater than 3 and therefore not realistic for real networks.
Wang et al. propose an arbitrary-weight-based scale-free topology control algorithm (AWSF) \cite{Wang2007}. Each node in the network is assigned a random real weight drawn from a power-law distribution $ \rho (x) = A x^{-\theta} $, where $ \theta > 1 $ and $ A $ is a normalization constant fixed by $ \int_{min}^{max} \rho (x)\, d x = 1 $. The balance of energy consumption is not considered in this model: a node with low energy may be assigned a large weight $ w $ and hence acquire a large degree, which exacerbates the imbalance of energy consumption.
Zhu proposes an energy-aware evolution model (EAEM) of WSNs \cite{Hailin2009}, in which energy is taken into account. The algorithm assumes that the probability $ \Pi_i $ that a new node connects to existing node $ i $ depends on the degree $ k_i $ and the remaining energy of that node. A function $ f(E) $ describes the relationship between a node's remaining energy and its ability to attract links. $ f(E) $ must be an increasing function: the more energy a node has, the more likely it is to be connected to the new node. The form of $ \Pi_i $ is therefore
\begin{eqnarray}
\label{formula3}
\Pi_i = \frac{f(E_i)k_i}{\sum\limits_{\text{local-area}} f(E_j)k_j}
\end{eqnarray}
\noindent where the \textit{local-area} in the EAEM is the set of nodes located in the new node's transmission range. The sum of $ \Pi_i $ over all nodes is less than 1, and the scaling exponent of the degree distribution is $\gamma = 1$, which is not realistic for real networks.
\section{Local-area and Energy-efficient Evolution Model}
In this section, we propose our scale-free topology construction model for WSNs.
Usually, nodes are distributed in a given region with static positions, and connections between them are then built to generate a network. Based on this fact, the process of topology construction is divided into two phases. In the first phase, nodes are distributed randomly. We define \textit{scattered nodes} as the nodes that have not yet been added to the network topology during the evolution, as shown in Fig. \ref{Fig1}. An arbitrary node, marked as node $ v $, obtains the information of all other nodes in its transmission range through HELLO messages and takes these nodes as its \textit{potential neighbor nodes}. In the second phase, topology evolution starts from the sink, grows with the preferential attachment rule, and stops once all nodes have been added to the network.
The LAEE evolution model proceeds as follows:
\begin{quote}
Step I.
\begin{quote}
Nodes are distributed randomly in region $ S $. Each node gets its potential neighbor nodes' information in its transmission range through HELLO message. All these nodes are scattered and topology has not been formed at this moment.
\end{quote}
Step II.
\begin{quote}
II.1\ \ Topology evolution starts from sink with $ m_0 $ nodes (the sink and its $ m_0 - 1 $ potential neighbor nodes) and $e_0$ random links between them.
II.2\ \ At every time step, add one scattered node to the network. To do so, find the in-network node with the most scattered potential neighbors and mark it as node $ a $. Choose a scattered node randomly among node $a$'s potential neighbors as the new node, denoted node $ b $. With this strategy, the network expands outward and fills the region $ S $ as fast as possible.
II.3\ \ Among node $b$'s potential neighbors that are already in the topology, connect node $ b $ to $ m $ nodes chosen according to the preferential attachment probability (if fewer than $ m $ such nodes exist, connect node $ b $ to all of them):
\begin{eqnarray}
\label{formulaP}
\Pi_i = \Pi'_i (i \in \text{local-area}) \frac{f(E_i)k_i}{\sum\limits_{\text{local-area}}{f(E_j)k_j}-q k_{max}}
\end{eqnarray}
\noindent where \textit{local-area} is the set of node $b$'s potential neighbor nodes in its transmission range, $ k_{max} $ is the upper bound on node degree, $ q $ is the number of nodes that have already reached degree $ k_{max} $, and $ f(E) $ is the function introduced in the EAEM model. Once a node reaches degree $ k_{max} $, no more links can be added to it.
II.4\ \ Repeat II.2 and II.3 until all nodes are added to the topology.
\end{quote}
\end{quote}
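The two-phase procedure above can be sketched in Python as follows (an illustrative simplification, not the authors' simulator; the choice $f(E)=E$, the unit weight given to degree-zero nodes, and the parameter values are our assumptions):

```python
import math
import random

def laee(points, energy, r, m, k_max, sink=0, seed=0):
    """Simplified LAEE sketch.  Phase 1: each node records its
    potential neighbors (nodes within range r).  Phase 2: grow the
    topology from the sink; each newly added node links to at most
    m in-network potential neighbors, chosen with probability
    proportional to f(E_i)*k_i under the degree cap k_max.  Here
    f(E) = E, and degree-zero nodes get weight E_i*1 so that the
    sink can receive its first link (our convention)."""
    rng = random.Random(seed)
    n = len(points)
    pot = {i: [j for j in range(n)
               if j != i and math.dist(points[i], points[j]) <= r]
           for i in range(n)}
    in_net = {sink}
    deg = {sink: 0}
    edges = []
    while len(in_net) < n:
        # node 'a': in-network node with most scattered potential
        # neighbors; node 'b': a random scattered neighbor of 'a'
        a = max(in_net,
                key=lambda u: sum(1 for v in pot[u] if v not in in_net))
        cand = [v for v in pot[a] if v not in in_net]
        if not cand:      # remaining scattered nodes are unreachable
            break
        b = rng.choice(cand)
        in_net.add(b)
        deg[b] = 0
        targets = [u for u in pot[b] if u in in_net and deg[u] < k_max]
        for _ in range(min(m, len(targets))):
            w = [energy[u] * max(deg[u], 1) for u in targets]
            pick = rng.choices(targets, weights=w, k=1)[0]
            targets.remove(pick)       # sample without replacement
            edges.append((b, pick))
            deg[b] += 1
            deg[pick] += 1
    return edges

pts = [(0, 0), (50, 0), (100, 0), (150, 0)]
print(laee(pts, [1.0, 1.0, 1.0, 1.0], r=120.0, m=2, k_max=30))
```

In the toy run above the four collinear nodes are absorbed one by one starting from the sink at the origin, and every added node attaches to at most $m=2$ in-network neighbors.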
\begin{figure}
\includegraphics[width=0.6\textwidth]{Fig1.eps}
\caption{At every time step, add a scattered node into the network with $ m (m<m_0) $ links}\label{Fig1}
\end{figure}
In Eq. \eqref{formulaP}, $ \Pi'_i(i \in \text{local-area}) $ is the probability that existing node $i$ lies in the new node's local area at time step $ t $, i.e.,
\begin{eqnarray}
\label{formula5}
\Pi'_i (i \in \text{local-area}) = {n \varphi}/{(m_0 + t)}
\end{eqnarray}
\noindent where $ n $ is the number of nodes in the network and $ \varphi = \pi r^2 / S $ is the probability that two nodes are positioned within each other's transmission range. We assume that only a few nodes reach the upper bound $ k_{max} $, so the term $ qk_{max} $ can be neglected. Therefore, in the local area we have
\begin{eqnarray}
\label{formula6}
\sum\limits_{\text{local-area}}{f(E_j)k_j} - q k_{max} & \approx & \sum\limits_{\text{local-area}}{f(E_j)k_j}
= \overline{M}\; \overline{E}\, \langle k\rangle
\end{eqnarray}
\noindent where $ \overline{M} $ is the expected number of nodes in the new node's local area, equal to $ n\varphi $ for large $n$ (cf. the UDG model above), $ \overline{E} $ is the expected value of $ f(E) $, and $\langle k\rangle = 2(mt + e_0)/(m_0 + t) $ is the average degree of the network at time step $ t $, with $ m_0 $ and $ e_0 $ the numbers of nodes and links at the beginning, respectively. Then the rate of change of $ k_i $ is
\begin{eqnarray}
\label{formula7}
\frac{\partial k_i}{\partial t} & = & m \Pi_i \nonumber\\
& = & m \Pi'_i(i \in \text{local-area}) \frac{f(E_i)k_i}{\sum\limits_{\text{local-area}}{f(E_j)k_j} - q k_{max}} \nonumber\\
& = & m \frac{n \varphi}{m_0 + t}\frac{f(E_i)k_i}{n \varphi\, \overline{E}\, \frac{2(mt+e_0)}{m_0+t}}
=\frac{m f(E_i) k_i}{2 \overline{E} (mt + e_0)}
\end{eqnarray}
For large $ t $, such that $ mt \gg e_0 $, the term $ e_0 $ can be neglected and we get
\begin{eqnarray}
\label{formula8}
\frac{\partial k_i}{\partial t}
\approx
\frac{f(E_i)k_i}{2 \overline E t}
\end{eqnarray}
As $ f(E) $ is an increasing function, we take the simplest choice $ f(E_i)=E $, where $ E $ denotes the energy of node $ i $. Then
\begin{eqnarray}
\label{formula9}
\frac{\partial k_i}{\partial t} = \frac{E k_i}{2 \overline E t}
\end{eqnarray}
Using the initial condition $ k_i(t_i)=m $, where $ t_i $ is the time step at which node $ i $ joins the network, we get
\begin{eqnarray}
\label{formula10}
k_i = m {(\frac{t}{t_i})} ^ \beta
\end{eqnarray}
\noindent where $ \beta = E / {2 \overline E} $. The probability that node \textit{i}'s degree is smaller than $ k $ is
\begin{eqnarray}
\label{formula11}
p(k_i(t) < k) & = & 1 - p(t_i < t(\frac{m}{k}) ^ {1/ \beta}) = 1 - \frac{t(\frac{m}{k})^{1/ \beta}}{m_0 + t}
\end{eqnarray}
Then we can obtain the probability density of the degree of a node with energy $ E $ as
\begin{eqnarray}
\label{formula12}
P(k_E)
& = & \frac{\partial p(k_i(t) < k)}{\partial k}
= {\frac{1}{\beta}} m^{1 / \beta} {\frac{t}{m_0 + t}}k^{-(1+1 / \beta)} \nonumber\\
& \approx & {\frac{1}{\beta}} m^{1 / \beta} k^{-(1+1 / \beta)}
\end{eqnarray}
In the above equation, $ \beta \in (E_{min}/2 \overline{E},\, E_{max}/2 \overline{E}) $, where $ E_{min} $ and $ E_{max} $ are the bounds on the energy $ E $. Therefore, the distribution $ P(k_E) $ has a power-law form with degree exponent $ \gamma = 1 + 1/ \beta $.
To obtain the overall degree distribution, we average over the distribution of the remaining energy $ E $:
\begin{eqnarray}
\label{formula13}
P(k)
& = & \int_{E_{min}}^{E_{max}} \rho P(k_E) d E = \int_{E_{min}}^{E_{max}} \rho {\frac{1}{\beta}} m^{1/ \beta}k^{-(1+1/ \beta)} d E
\end{eqnarray}
\noindent where $ \rho $ is the density of $ E $ on $[E_{min},\,E_{max}]$, normalized so that $ \int_{E_{min}}^{E_{max}} \rho\, d E = 1 $.
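As a consistency check of the exponent range (our sketch, not part of the paper), the energy-averaged distribution above can be integrated numerically for uniform $E$ and its local log-log slope compared with the component exponents $1+1/\beta$ at the energy bounds:

```python
import math

def P_mix(k, m=3, Emin=0.5, Emax=1.0, steps=2000):
    """Numerically average (1/beta) * m**(1/beta) * k**-(1+1/beta)
    over E uniform on [Emin, Emax], with beta = E / (2*Ebar) and
    rho = 1/(Emax-Emin); midpoint rule."""
    Ebar = 0.5 * (Emin + Emax)
    rho = 1.0 / (Emax - Emin)
    h = (Emax - Emin) / steps
    total = 0.0
    for i in range(steps):
        E = Emin + (i + 0.5) * h
        beta = E / (2.0 * Ebar)
        total += rho * (1.0 / beta) * m ** (1.0 / beta) \
                 * k ** -(1.0 + 1.0 / beta) * h
    return total

# local log-log slope of the mixture between k = 20 and k = 40
slope = (math.log(P_mix(40)) - math.log(P_mix(20))) \
        / (math.log(40) - math.log(20))
print(slope)
```

With $E\in[0.5,\,1]$ one has $\beta\in[1/3,\,2/3]$, so the component exponents span $[2.5,\,4]$ and the slope of the mixture must lie strictly between $-4$ and $-2.5$, consistent with the stated power-law form.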
\section{Numerical Results}
Table \ref{table1} presents the parameters used in our simulation. We distribute $ n=1000 $ nodes randomly in the square region $ S $ and deploy the sink at the corner (0, 0). We select $m_0=10$ nodes and $e_0=10$ links in the sink's transmission range as the initial state of the evolution model. Node energies are uniformly distributed, so $ \rho $ is the constant determined by $ \int_{E_{min}}^{E_{max}}\rho\, dE = 1$. Different values of $ m $ are considered in the simulation.
The simulated and theoretical degree distributions of LAEE are presented in Fig. \ref{Fig2}. The theoretical degree distribution of the LAEE model is close to that of the BA model, and the simulated degree distribution is close to the theoretical one for $ k \geqslant m $. Note that in the BA model every degree is at least $ m $ (each node has at least $ m $ links), whereas in the LAEE simulations there are nodes with degree less than $ m $. This is because in our model a node may have fewer than $ m $ potential neighbors; if this happens, the node's degree may stay low. In other words, it is a consequence of the spatial structure of WSNs. Fortunately, only a small proportion of nodes have degree less than $ m $, and the power-law degree distribution holds for most nodes.
\begin{table}[!htb]
\begin{flushleft}
\caption{\label{table1}Parameters for simulation}
\small
\begin{tabular}{ccc}
\toprule
Parameter & Value & Definition \\
\midrule
$ n $ & 1000 &Number of nodes \\
$ r $ & 100$ m $ &Transmission range \\
$ S $ & 1000$\times$1000$m^2$ &Entire coverage region \\
$ m_0 $ & 10 &Number of nodes in the initial state \\
$ e_0 $ & 10 &Number of links in the initial state \\
$ m $ & 3, 5, and 8 &Links added in every time step \\
$ E_{min} $ & 0.5 $J$ &Lower bound of energy $E$ \\
$ E_{max} $ & 1 $J$ &Upper bound of energy $E$ \\
$ k_{max} $ & 30 &Upper bound of degree \\
$ \rho $ & $\int_{E_{min}}^{E_{max}}\rho dE=1$ &Uniform distribution of energy $E$ \\
\bottomrule
\end{tabular}
\end{flushleft}
\end{table}
\begin{figure}[!htbp]
\subfigure[$ m = 3 $]{
\label{Fig2a}
\includegraphics[width=0.48\linewidth]{Fig2a.eps}}
\hspace{0.01in}
\subfigure[$ m = 5 $]{
\label{Fig2b}
\includegraphics[width=0.48\linewidth]{Fig2b.eps}}
\hspace{0.01in}\\
\subfigure[$ m = 8 $]{
\label{Fig2c}
\includegraphics[width=0.48\linewidth]{Fig2c.eps}}
\caption{LAEE degree distribution with $ m = $ 3, 5, 8}
\label{Fig2}
\end{figure}
\begin{table}[!htb]
\begin{flushleft}
\caption{\label{table2}Degrees in topology}
\small
\begin{tabular}{cccc}
\toprule
Model & Avg. degree & Min. degree & Max. degree \\
\midrule
UDG & 29.21 & 6 & 53 \\
LAEE ($m=3$) & 5.22 & 1 & 25 \\
LAEE ($m=5$) & 7.88 & 1 & 29 \\
LAEE ($m=8$) & 11.34 & 2 & 29 \\
KNN ($k=6$) & 6 & 6 & 6 \\
DTG & 5.85 & 3 & 11 \\
LEACH+KNN ($k=6$)& 5.98 & 4 & 6 \\
LEACH+DTG & 5.64 & 3 & 10 \\
\bottomrule
\end{tabular}
\end{flushleft}
\end{table}
\begin{figure*}[htbp]
\includegraphics[width=1\textwidth]{Fig3.eps}
\caption{Giant component size of WSNs under random nodes failure}\label{Fig3}
\end{figure*}
Fault tolerance is a key issue in WSNs. Many real applications do not require all nodes to be connected, so it is appropriate to relax the connectivity requirement \cite{XTa2009}. When a fraction of the nodes fail, the remaining network may become disconnected and the whole network may no longer serve its application. It is therefore natural to use the giant component, i.e., the largest connected component \cite{GGanesan2013,HekmatMNA2006}, to measure the fault tolerance of WSNs under node failures. Two types of data flows exist in WSNs: flows between arbitrary pairs of nodes, and flows between the sink and the other nodes. Correspondingly, two kinds of giant component are considered: the usual one, containing the largest number of nodes, and the one containing the sink. Sometimes these two coincide, sometimes they do not.
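The giant-component measurement used in this comparison can be sketched as follows (our illustrative snippet; union-find is one standard way to compute connected components):

```python
import random

def giant_component(nodes, edges):
    """Size of the largest connected component among the surviving
    nodes (union-find with path halving)."""
    parent = {v: v for v in nodes}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for u, v in edges:
        if u in parent and v in parent:   # skip edges to failed nodes
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
    sizes = {}
    for v in parent:
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) if sizes else 0

def remove_random(nodes, frac, seed=0):
    """Surviving node set after removing a fraction of the nodes."""
    rng = random.Random(seed)
    keep = list(nodes)
    rng.shuffle(keep)
    return set(keep[int(len(keep) * frac):])

# toy example: a 6-cycle; removing one node leaves a path of 5 nodes
nodes = set(range(6))
edges = [(i, (i + 1) % 6) for i in range(6)]
alive = remove_random(nodes, 0.25, seed=1)   # int(6*0.25) = 1 removed
print(giant_component(alive, edges))
```

The sink-rooted variant mentioned above differs only in reporting the size of the component that contains the sink instead of the maximum over all components.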
We examine how LAEE improves the fault tolerance of WSNs. Nodes are removed at random to simulate energy depletion or random failures. The typical WSN construction models UDG, KNN, DTG, LEACH+KNN (LEACH for cluster-head election, KNN for topology construction in each cluster), and LEACH+DTG (DTG for topology construction in each cluster) are used for comparison. The degree parameters are shown in Table \ref{table2}. Their average degrees are close to that of LAEE with $ m = 3 $; similar average degrees mean that these topologies contain similar numbers of links. However, due to the scale-free feature, the degree distribution of LAEE is much wider than that of the other construction models. As Fig. \ref{Fig3} shows, as nodes are removed randomly and gradually, the sizes of the giant components decrease. UDG provides the upper bound on the giant component. LAEE presents a larger giant component than KNN, DTG, LEACH+KNN, and LEACH+DTG, even though it has the lowest average degree. We therefore conclude that LAEE, with its scale-free degree distribution, has better tolerance against energy depletion and random failures in WSNs.
\section{Conclusions}
Topology control is one of the primary challenges in making WSNs resource efficient. In this paper, we have proposed a topology evolution model based on local information and energy efficiency. The process of topology evolution is divided into two phases. In the first phase, nodes are distributed randomly in a fixed region. In the second phase, topology evolution starts from the sink, grows with the preferential attachment rule, and stops once all nodes have been added to the network. The theoretical degree distribution of the LAEE evolution model approaches that of the BA model. Simulation results show that for $ k \geqslant m $ the degree distribution follows the power law. The LAEE model has better tolerance against energy depletion or random failure than other, non-scale-free WSN topologies with similar average degrees.
\bibliographystyle{spmpsci}
\section{Introduction}
Nowadays the simplest supersymmetric (SUSY) extension of the Standard
Model (SM), the so--called Minimal Supersymmetric Standard Model
(MSSM), is possibly the best motivated model beyond the SM. Indeed the
quadratic divergences that lead to the hierarchy problem in the SM
are naturally canceled in supersymmetric theories. By making
supersymmetry local (supergravity), a partial unification of gauge
interactions with gravity can be achieved. The remarkable coincidence
of gauge coupling constants at the high energy scale $M_X\sim
10^{16}{\rm GeV}$ obtained in the framework of the MSSM allows one to
embed the simplest SUSY extension of the SM into Grand Unified and
superstring theories.
The stabilization of the mass hierarchy in the MSSM does not provide
any explanation for the origin of the electroweak scale, and therefore
a minimal SUSY model should not know about the electroweak scale
before symmetry breaking. However, the MSSM superpotential contains
one bilinear term $\mu (\hat{H}_1 \epsilon \hat{H}_2)$ which is
present before supersymmetry is broken; from dimensional
considerations one would naturally expect the parameter $\mu$ to be
either zero or the Planck scale, but in order to provide the correct
pattern of electroweak symmetry breaking, $\mu$ is required to be of
the order of the electroweak scale. Thus minimal SUSY has its own
``hierarchy'' problem, known as the $\mbox{$\mu$-problem}$.
The most elegant solution of the $\mu$--problem naturally appears in
the framework of superstring--inspired $E_6$ models where the bilinear
terms are forbidden by gauge symmetry. In general these models contain
a few pairs of the Higgs doublets and a few singlet fields
$S_i$. Assuming that only one pair of Higgs doublets and one singlet
survive to low energies, the superpotential of the Higgs sector takes
the form $\lambda \hat{S}(\hat{H}_1 \epsilon \hat{H}_2)$. The
considered model includes only one additional singlet field and almost
the same number of parameters as the MSSM. For this reason it can be
regarded as the simplest extension of the MSSM. As a result of
spontaneous symmetry breakdown at the electroweak scale the superfield
$\hat{S}$ gets a non-zero vacuum expectation value ($\langle S \rangle
\equiv s/\sqrt{2}$) and an effective $\mu$-term ($\mu=\lambda
s/\sqrt{2}$) of the required size is automatically generated.
The model discussed above possesses a $SU(2)\times [U(1)]^2$ global
symmetry. When broken by the vacuum, this extended global symmetry leads
to the appearance of a massless CP-odd spinless particle in the Higgs
boson spectrum which is a Peccei-Quinn (PQ) axion. The usual way to
avoid this axion is to introduce a term cubic in the new singlet
superfield $\hat{S}$ in the superpotential that explicitly breaks the
additional $U(1)$ global symmetry. The superpotential of the Higgs
sector of the obtained model, which is the so--called
Next--to--Minimal Supersymmetric Standard Model (NMSSM), is given by
\cite{1}
\begin{equation}
W_{H}=\lambda \hat{S}(\hat{H}_1 \epsilon \hat{H}_2)+\frac{1}{3}\kappa\hat{S}^3\,.
\label{4}
\end{equation}
In this paper we study the Higgs sector of the NMSSM. In Section 2 we
discuss the MSSM limit of the NMSSM. In Section 3 we investigate the
spectrum and couplings of the Higgs bosons in the exact PQ--symmetry
limit of the NMSSM where $\kappa=0$. The scenario with a slightly
broken PQ--symmetry, where $\kappa$ is small, is considered in Section
4. The results are summarized in Section 5.
\section{The MSSM limit of the NMSSM}
The Higgs sector of the NMSSM includes two Higgs doublets $H_{1,2}$ and one singlet field $S$.
The potential energy of the Higgs field interaction can be written as a sum
\begin{equation}
\begin{array}{rcl}
V&=&V_F+V_D+V_{soft}+\Delta V\,,\\[3mm]
V_F&=&\lambda^2|S|^2(|H_1|^2+|H_2|^2)+\lambda^2|(H_1\epsilon
H_2)|^2+\lambda\kappa\left[S^{*2}(H_1\epsilon
H_2)+h.c.\right]+\kappa^2|S|^4\, ,\\[3mm]
V_D&=&\displaystyle\frac{g^2}{8}\left(H_1^+\sigma_a H_1+H_2^+\sigma_a
H_2\right)^2+\frac{{g'}^2}{8}\left(|H_1|^2-|H_2|^2\right)^2\, ,\\[3mm]
V_{soft}&=&m_1^2|H_1|^2+m_2^2|H_2|^2+m_S^2|S|^2
+\left[\lambda A_{\lambda}S(H_1\epsilon
H_2)+\displaystyle\frac{\kappa}{3}A_{\kappa}S^3+h.c.\right]\, ,\\[2mm]
\end{array}
\label{5}
\end{equation}
where $H_1^T=(H_1^0,\,H_1^{-})$, $H_2^T=(H_2^{+},\,H_2^{0})$ and
$(H_1\epsilon H_2)=H_2^{+}H_1^{-}-H_2^{0}H_1^{0}$. At the tree level
the Higgs potential (\ref{5}) is described by the sum of the first
three terms. $V_F$ and $V_D$ are the $F$ and $D$ terms. Their
structure is fixed by the NMSSM superpotential (\ref{4}) and the
electroweak gauge interactions in the common manner. The last term in
Eq.(\ref{5}), $\Delta V$, corresponds to the contribution of loop
corrections.
The parameters $g$ and $g'$ are $SU(2)$ and $U(1)$ gauge couplings
respectively ($g_1=\sqrt{5/3}g'$), which are known precisely. The
couplings $g,\,g',\,\lambda$ and $\kappa$ do not violate
supersymmetry. The soft supersymmetry breaking terms are collected in
$V_{soft}$. At the tree level the set of soft SUSY breaking parameters
involves soft masses $m_1^2,\, m_2^2,\, m_S^2$ and trilinear couplings
$A_{\kappa},\, A_{\lambda}$. The inclusion of loop corrections draws
into the analysis many other soft SUSY breaking terms that define the
masses of different superparticles. All parameters listed above are
here assumed to be real. In general, complex values of $\lambda$,
$\kappa$ and soft couplings induce CP--violation, and we here restrict
our consideration to the part of the NMSSM parameter space where CP is
conserved.
At the physical minimum of the potential (\ref{5}) the neutral
components of the Higgs doublets $H_1$ and $H_2$ develop vacuum
expectation values $v_1$ and $v_2$ breaking the electroweak symmetry
down to $U(1)$. Instead of $v_1$ and $v_2$ it is more convenient to
use $\tan\beta=\displaystyle\frac{v_2}{v_1}$ and $v=\sqrt{v_1^2+v_2^2}$, where
$\mbox{$v$=246\,GeV}$ in the physical vacuum.
To start with let us specify the transition from the NMSSM to the
minimal SUSY model which has been studied thoroughly. Because the
strength of the interaction of the extra singlet fields with other
bosons and fermions is determined by the size of the coupling
$\lambda$ in the superpotential (\ref{4}) the MSSM sum rules for the
Higgs masses and couplings are reproduced when $\lambda$ tends to
zero. This implies that the vacuum expectation value of the singlet
field should grow with decreasing $\lambda$ as $M_Z/\lambda$ to
ensure the correct breakdown of the electroweak symmetry. The
increase of $s$ can be achieved either by decreasing $\kappa$ or by
raising $m_S^2$ and $A_{\kappa}$. Since there is no natural reason why
$m_S^2$ and $A_{\kappa}$ should be very large while all other soft
SUSY breaking terms are left in the TeV range, the values of $\lambda$
and $\kappa$ are obliged to go to zero simultaneously so that their
ratio remains unchanged.
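This scaling can be made explicit. A short sketch, assuming the standard NMSSM identification of the effective $\mu$--term with the singlet vacuum expectation value, $\mu=\lambda s$:

```latex
\begin{equation*}
\mu=\lambda s\sim M_Z
\quad\Longrightarrow\quad
s\sim\frac{M_Z}{\lambda}\, ,
\end{equation*}
```

so that taking $\kappa,\lambda\to 0$ at fixed $\kappa/\lambda$ keeps the singlet mass scale $\displaystyle\frac{\kappa}{\lambda}\,\mu$ finite while $s$ grows without bound.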
Since, in the MSSM limit of the NMSSM, the mixing between singlet states
and the neutral components of the Higgs doublets vanishes, the structures of
the Higgs mass matrices are simplified. This allows one to obtain
simple approximate solutions for the Higgs masses. The NMSSM Higgs
sector involves two charged states with masses
\begin{equation}
m_{H^{\pm}}^2\approx m_A^2+M_W^2\,,
\label{25}
\end{equation}
where $M_W=\displaystyle\frac{g}{2}v$ is the mass of the charged W--boson while
\begin{equation}
m_A^2=\displaystyle\frac{4\mu^2}{\sin^22\beta}\left(x-\frac{\kappa}{2\lambda}\sin2\beta\right)\,,\qquad
x=\frac{1}{2\mu}\left(A_{\lambda}+2\displaystyle\frac{\kappa}{\lambda}\mu\right)\sin 2\beta\,.
\label{251}
\end{equation}
There are also five neutral fields in the Higgs spectrum. If CP is conserved then
two of them are CP--odd with masses
\begin{equation}
m_{A_1}^2\approx\displaystyle-3\frac{\kappa}{\lambda}A_{\kappa}\mu\,, \qquad\qquad
m_{A_2}^2\approx m_A^2\,,
\label{252}
\end{equation}
whereas three others are CP--even with masses
\begin{equation}
\begin{array}{rcl}
m_{h_1}^2 &\approx& \displaystyle 4\frac{\kappa^2}{\lambda^2}\mu^2+\frac{\kappa}{\lambda}A_{\kappa}\mu
+\frac{\lambda^2v^2}{2}x\sin^22\beta-\frac{2\lambda^2v^2\mu^2(1-x)^2}{M_Z^2\cos^22\beta}\, ,\\[3mm]
m_{h_2, h_3}^2 &\approx& \displaystyle\frac{1}{2}\left[m_A^2+M_Z^2\mp \sqrt{(m_A^2+M_Z^2)^2-4m_A^2M_Z^2\cos^2 2\beta}\right]\, ,
\end{array}
\label{26}
\end{equation}
where $M_Z=\displaystyle\frac{\bar{g}}{2}v$ is the Z--boson mass and
$\bar{g}=\sqrt{g^2+g'^2}$. In Eqs.(\ref{25})--(\ref{26}) we ignore
the contribution of loop corrections to the Higgs masses. The terms
of the order of $O(\lambda^2 v^2)$ and $O(\lambda\kappa v^2)$ are also
omitted. The only exception is the mass of the CP--even singlet
field $h_1$, for which we retain the last two terms proportional to
$\lambda^2 v^2$, since they become significant when $\kappa$ becomes
very small compared to $\lambda$.
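As a numerical cross-check of Eq.(\ref{26}) (an illustrative sketch, not part of the original analysis; the function name and parameter values are ours), one can verify that in the decoupling limit $m_A^2\gg M_Z^2$ the SM-like mass $m_{h_2}$ approaches its tree-level bound $M_Z|\cos 2\beta|$ from below, while $m_{h_3}$ tracks $m_A$:

```python
import math

M_Z = 91.19  # Z-boson mass in GeV

def cp_even_masses(m_A, tan_beta):
    """Tree-level CP-even masses m_{h2}, m_{h3} of Eq.(26), in GeV."""
    cos2b = (1 - tan_beta**2) / (1 + tan_beta**2)
    s = m_A**2 + M_Z**2
    d = math.sqrt(s**2 - 4 * m_A**2 * M_Z**2 * cos2b**2)
    return math.sqrt((s - d) / 2), math.sqrt((s + d) / 2)

m_h2, m_h3 = cp_even_masses(m_A=1000.0, tan_beta=3.0)
bound = M_Z * 0.8  # M_Z |cos 2beta| ~ 73 GeV at tan(beta) = 3
# m_h2 lands within ~1% of its upper bound; m_h3 is close to m_A.
```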
For appreciable values of\, $\kappa/\lambda$\, the Higgs spectrum
presented above depends on four variables: $m_A$, $\tan\beta$,
$\displaystyle\frac{\kappa}{\lambda}\,\mu$ and $A_{\kappa}$. The masses of
MSSM--like Higgs bosons ($m_{H^{\pm}}, m_{A_2}, m_{h_2}$ and
$m_{h_3}$) are defined by $m_A$ and $\tan\beta$. As in the minimal
SUSY model they grow if $m_A$ is increased. At large values of $m_A$
($m_A^2\gg M_Z^2$) the masses of the charged, one CP-odd and one CP-even
states are almost degenerate, while the SM-like Higgs boson mass
attains its theoretical upper bound $M_Z |\cos 2\beta|$. Loop
corrections from the top quark and its superpartners raise this upper
limit up to $130-135\,\mbox{GeV}$. The experimental constraints on the
SUSY parameters obtained in the MSSM remain valid in the NMSSM
with small $\kappa$ and $\lambda$. For example, non-observation of any
neutral Higgs particle at the LEPII restricts $\tan\beta$ and $m_A$
from below.
The combination of the NMSSM parameters
$\displaystyle\frac{\kappa}{\lambda}\,\mu$ sets the mass scale of singlet fields
($m_{h_1}$ and $m_{A_1}$). Decreasing $\kappa$ reduces their masses so
that for $\displaystyle \frac{\kappa}{\lambda} \ll 1$ they can be the lightest
particles in the Higgs boson spectrum. The parameter $A_{\kappa}$
enters the masses of the extra scalar and pseudoscalar with opposite
signs, and is therefore responsible for their splitting. Too large a
value of $|A_{\kappa}|$ pulls the mass-squared of either the singlet
scalar or the singlet pseudoscalar below zero, destabilizing the vacuum. An
even stronger lower constraint on $A_{\kappa}$ is found from the
requirement that the vacuum be the global minimum. Together these
requirements constrain $A_{\kappa}$ and consequently the ratio\,
$(m_{A_1}^2/m_{h_1}^2)$ from below and above
\begin{equation}
-3\left(\displaystyle\frac{\kappa}{\lambda}\mu\right)^2
\leq \displaystyle A_{\kappa}\cdot\left(\frac{\kappa}{\lambda}\mu\right)\leq 0\, , \qquad\qquad
0\leq \displaystyle \frac{m_{A_1}^2}{m_{h_1}^2}\leq 9\, .
\label{27}
\end{equation}
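The bounds of Eq.(\ref{27}) on the mass ratio can be traced directly to Eqs.(\ref{252}) and (\ref{26}). A sketch, keeping only the leading terms of $m_{h_1}^2$ and writing $a=\displaystyle\frac{\kappa}{\lambda}\,\mu$ and $y=A_{\kappa}\,a$:

```latex
\begin{equation*}
m_{A_1}^2=-3y\, ,\qquad
m_{h_1}^2\approx 4a^2+y\, ,\qquad
\frac{m_{A_1}^2}{m_{h_1}^2}\approx\frac{-3y}{4a^2+y}\, .
\end{equation*}
```

On the allowed window $-3a^2\le y\le 0$ the ratio runs monotonically from $0$ (at $y=0$) up to $9a^2/(4a^2-3a^2)=9$ (at $y=-3a^2$), reproducing the limits quoted above.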
The main features of the NMSSM Higgs spectrum discussed above are
retained when the couplings $\lambda$ and $\kappa$ increase. In this
case Eqs.(\ref{25})--(\ref{26}) provide some insight into the mass
hierarchy of the NMSSM Higgs sector and qualitatively describe the
dependence of the Higgs masses with respect to the variations of
$m_A,\, \kappa/\lambda,\, A_{\kappa},\, \mu$ and $\tan \beta$.
\section{NMSSM with $\kappa=0$}
The analysis of the MSSM limit of the NMSSM reveals one of the main
impediments in the study of the NMSSM Higgs sector --- the large
number of independent parameters. Indeed even in the limit
$\kappa,\,\lambda\to 0$, when the number of variables parameterizing
the spectrum at the tree level reduces drastically, the masses of the
extra Higgs states take arbitrary values. Therefore it seems very
attractive to take a step back to the simplest extension of the MSSM
when $\kappa=0$. Since the self interaction of the singlet fields no
longer appears in the superpotential nor in the Higgs effective
potential, there are only 4 parameters defining the masses and
couplings of the Higgs bosons at the tree-level: $\lambda, \mu,
\tan\beta$ and $m_A$ (or $x$). For $\tan\beta\le 2.5$ and small values
of $\lambda$ ($\lesssim 0.1$) the predominant part of the NMSSM
parameter space is excluded by unsuccessful Higgs searches; although
the lightest Higgs boson may partially decouple and not be seen, the
second lightest scalar would be SM-like and visible. Furthermore,
non-observation of charginos at LEPII restricts the effective
$\mu$-term from below: $|\mu|\ge 90-100\,\mbox{GeV}$. Combining these
limits one gets a useful lower bound on $m_A$ at the tree level:
\begin{equation}
m_A^2\gtrsim 9M_Z^2 x\,.
\label{28}
\end{equation}
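Eq.(\ref{28}) can be checked numerically (an illustrative sketch, not from the paper; we take the boundary values $\mu=100\,\mbox{GeV}$ and $\tan\beta=2.5$ in the tree-level relation $m_A^2=\frac{4\mu^2}{\sin^2 2\beta}\,x$, i.e. Eq.(\ref{251}) with $\kappa=0$):

```python
M_Z = 91.19  # GeV

def mA2_over_x(mu, tan_beta):
    """Tree-level m_A^2 / x = 4 mu^2 / sin^2(2 beta) for kappa = 0."""
    sin2b = 2 * tan_beta / (1 + tan_beta**2)
    return 4 * mu**2 / sin2b**2

# At the experimental boundary mu = 100 GeV, tan(beta) = 2.5:
ratio = mA2_over_x(100.0, 2.5) / M_Z**2  # ~10, i.e. m_A^2 >~ 9 M_Z^2 x
# Larger tan(beta) shrinks sin(2 beta), which only strengthens the bound.
```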
When $\lambda\to 0$ the Higgs boson masses are closely approximated by
Eq.(\ref{25}) and Eqs.(\ref{252})--(\ref{26}) where $\kappa$ and
$A_{\kappa}$ must be taken to be zero. In the considered limit the
mass of the lightest CP--odd Higgs boson, which is predominantly a
singlet pseudoscalar, vanishes. This is a manifestation of the
enlarged $SU(2)\times [U(1)]^2$ global symmetry of the Lagrangian; the
extra $U(1)$ (Peccei--Quinn) symmetry is spontaneously broken giving
rise to a massless Goldstone boson (axion) \cite{10}. The
Peccei--Quinn (PQ) symmetry and its axion allow one to avoid the strong CP
problem by eliminating the $\theta$--term in QCD \cite{11}.
The lightest CP--even Higgs boson is also mainly a singlet field. As
evident from Eq.(\ref{26}), at large values of $\tan\beta$ or $\mu$
the mass--squared of the lightest Higgs scalar becomes negative if the
auxiliary variable $x$ differs too much from unity. Therefore vacuum
stability localizes the auxiliary variable $x$ to a rather narrow range
\begin{equation}
1-\frac{M_Z|\cos 2\beta|}{m_A^0} < x < 1+\frac{M_Z|\cos 2\beta|}{m_A^0}\, ,
\label{29}
\end{equation}
where $m_A^0=2\mu/\sin 2\beta$. For example, at $\mu=100\,\mbox{GeV}$
and $\tan\beta=3$ the squared mass of the lightest Higgs scalar
remains positive only if $x$ lies between $0.78$ and $1.22$. From the
definition of $m_A$, Eq.(\ref{251}), we see that the tight bounds on
the auxiliary variable $x$ constrain $m_A$ to the vicinity of
$\mu\, \tan\beta$, which is much larger than the Z-boson mass. As a
result, a mass splitting occurs, where the heaviest CP-odd, CP-even
Higgs and charged Higgs bosons have a mass rather close to $\mu \tan
\beta$, while the SM--like Higgs $h_2$ has a mass of the order of
$M_Z$.
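The quoted window follows directly from Eq.(\ref{29}); a quick numerical sketch (function name ours) reproduces the numbers:

```python
M_Z = 91.19  # GeV

def x_window(mu, tan_beta):
    """Allowed range of x from Eq.(29): 1 -+ M_Z |cos 2b| / m_A^0."""
    cos2b = (1 - tan_beta**2) / (1 + tan_beta**2)
    sin2b = 2 * tan_beta / (1 + tan_beta**2)
    m_A0 = 2 * mu / sin2b
    delta = M_Z * abs(cos2b) / m_A0
    return 1 - delta, 1 + delta

lo, hi = x_window(mu=100.0, tan_beta=3.0)  # ~ (0.78, 1.22), as in the text
```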
Increasing the value of $\lambda$ increases the lightest Higgs scalar
mass and mixings between the MSSM--like Higgs bosons and singlet
states. As before the masses of the heaviest states are almost
degenerate and close to $m_A\approx\mu\, \tan\beta$. At the tree level
the masses of the lightest and second lightest Higgs scalars vary
within the limits \cite{4}:
\begin{equation}
\begin{array}{rcccl}
0 & \le & m_{h_1}^2 & \le & \displaystyle \frac{\lambda^2}{2}v^2x\sin^22\beta\, ,\\[1mm]
\displaystyle M_Z^2\cos^22\beta+\frac{\lambda^2}{2}v^2\sin^22\beta & \le & m_{h_2}^2 & \le &
\displaystyle M_Z^2\cos^22\beta+\frac{\lambda^2}{2}v^2(1+x)\sin^22\beta\, .
\end{array}
\label{32}
\end{equation}
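The bounds of Eq.(\ref{32}) are easy to evaluate numerically. An illustrative sketch (parameter values ours), taking $\lambda=0.7$, roughly its perturbative ceiling, with $\tan\beta=3$ and $x=1$:

```python
import math

M_Z, v = 91.19, 246.0  # GeV

def tree_bounds(lam, tan_beta, x=1.0):
    """Upper limits of Eq.(32) on m_{h1} and m_{h2}, in GeV."""
    cos2b = (1 - tan_beta**2) / (1 + tan_beta**2)
    sin2b = 2 * tan_beta / (1 + tan_beta**2)
    mh1_max = math.sqrt(lam**2 / 2 * v**2 * x * sin2b**2)
    mh2_max = math.sqrt(M_Z**2 * cos2b**2
                        + lam**2 / 2 * v**2 * (1 + x) * sin2b**2)
    return mh1_max, mh2_max

mh1_max, mh2_max = tree_bounds(lam=0.7, tan_beta=3.0)  # ~73 GeV, ~126 GeV
```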
In Eq.(\ref{32}) the value of $\lambda$ must be smaller than about
$0.7$ to prevent the appearance of a Landau pole below the GUT scale
$M_X$. Although the masses of the lightest Higgs scalars rise with
growing $\lambda$ the mass hierarchy in the Higgs spectrum is
preserved, i.e. $m_{h_1}, m_{h_2}\ll m_{A}$. The couplings of the
lightest CP--even Higgs states to the Z pair ($g_{ZZi}$) and to the
axion and Z--boson ($g_{ZA_1i}$) obey the sum rules \cite{4}
\begin{equation}
R_{ZZ1}^2+R_{ZZ2}^2\approx 1\, ,
\label{33}
\end{equation}
\begin{equation}
R_{ZA_11}^2+R_{ZA_12}^2\approx\displaystyle\frac{1}{4}\left(\frac{\lambda v}{m_A^0}\right)^4\cos^22\beta\,,
\label{330}
\end{equation}
where $R_{ZZi}$ and $R_{ZA_1i}$ are normalized couplings defined by
$g_{ZZi}=\displaystyle\frac{\bar{g}}{2}M_Z\times R_{ZZi}$ and
$g_{ZA_ji}=\displaystyle\frac{\bar{g}}{2}\times R_{ZA_ji}$.
Searches for massless pseudoscalars and light scalar particles at
accelerators as well as the study of their manifestations in
astrophysics and cosmology rule out almost the entire parameter space
of the NMSSM with $\kappa=0$. A particularly stringent constraint
emerges from stellar evolution \cite{5}--\cite{6}. Since axions
interact with electrons, nucleons and photons with a strength inversely
proportional to the axion decay constant $f_a\sim s$, they are
produced during the cooling of stars. To avoid modifying
stellar evolution beyond observational limits one must impose a
lower limit on $f_a$, which requires $s > 10^9\,\mbox{GeV}$ \cite{7}.
For large values of $\tan\beta$ the restrictions on $s$ are even
stronger. The axion is accompanied by the lightest scalar Higgs boson
(saxion) which has a mass less than $10\,\mbox{keV}$ for $\tan\beta >
10$ and $\mu< 200\,\mbox{GeV}$, see Eq.(\ref{32}). This light scalar
can be also produced during the cooling of globular--cluster stars
significantly affecting their evolution if the scalar--electron
coupling $g_{Xe}$ is above $1.3\cdot 10^{-14}$ \cite{6}. Since
$g_{h_1e}\sim m_e/s$ this translates into a lower bound on the scale
of PQ--symmetry breaking $f_a\sim s > 10^{11}\,\mbox{GeV}$. Cold dark
matter is composed of both axions and saxions while the supersymmetric
partner to the axion, the axino (the lightest neutralino), is so light
that it does not contribute \cite{L.Hall}.
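The quoted saxion mass scale can be reproduced from the upper bound of Eq.(\ref{32}). An order-of-magnitude sketch, assuming $\lambda\simeq\mu/s$ with $s\simeq 10^{9}\,\mbox{GeV}$ (both assumptions ours, consistent with the constraints discussed below):

```python
import math

v = 246.0  # GeV

def saxion_mass_bound(mu, s, tan_beta, x=1.0):
    """Upper bound on m_{h1} from Eq.(32), in keV, with lambda ~ mu/s."""
    lam = mu / s
    sin2b = 2 * tan_beta / (1 + tan_beta**2)
    m_gev = math.sqrt(lam**2 / 2 * v**2 * x * sin2b**2)
    return m_gev * 1e6  # GeV -> keV

m_keV = saxion_mass_bound(mu=200.0, s=1e9, tan_beta=10.0)  # ~7 keV < 10 keV
```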
The constraints on the vacuum expectation value of the singlet field
restrict $\lambda$ to be less than $10^{-6}\,(10^{-9})$ for
$s>10^9\,\mbox{GeV} (10^{11}\,\mbox{GeV})$. The smallness of $\lambda$
could be caused by certain discrete and gauge symmetries which forbid
the operator $\lambda S(H_1H_2)$ at the renormalizable level but which
permit a similar non--renormalizable operator involving additional
singlets, resulting in an effective $\lambda$ proportional to the
ratio of the vacuum expectation values of the new singlet fields and
the Planck scale \cite{King}. It has also been shown that the
interactions of $S$ with other extra singlet fields result in the
stabilization of the Higgs scalar potential which otherwise has a
direction unbounded from below when $\kappa=0$ and
$m_S^2<0$. Moreover, this axion could play the role of an inflaton
field \cite{King1}.
For tiny values of $\lambda$, the decoupling of new singlet states
prevents their observation at future colliders. Thus if the NMSSM
with unbroken global PQ--symmetry is realized in nature, only
MSSM--like Higgs bosons will be discovered in the near
future. However, the strong correlation between the masses of the
heaviest Higgs bosons and $\mu \tan \beta$ revealed in the tree level
analysis (see Eq.(\ref{29})) does not take place in the MSSM but must
be observed here (see also \cite{131}). The inclusion of loop
corrections does not change the qualitative pattern of the Higgs
spectrum and does not enlarge the allowed range of $x$ (or
$m_A$). Loop corrections only slightly shift the admissible range of
the variable $x$ which shrinks with increasing $\mu$ and $\tan\,
\beta$ in compliance with Eq.(\ref{29}). In order to demonstrate the
correlation between $m_A$ and $\mu \tan\beta$ we examined $10^6$
different scenarios, with $m_A$ and $\tan \beta$ chosen randomly
between $0$ and $6$~TeV and $3$ and $30$ respectively. We calculated the
one-loop mass spectrum and, for every scenario with a stable vacuum,
plotted a single point on the $m_A$--$\tan\beta$ plane of
Fig.1. We discarded scenarios with unstable
vacua. It is immediately evident that the physically acceptable
scenarios all lie within a small area around $m_A \approx \mu \tan
\beta$ \cite{7}. Since the positivity or negativity of $m_{h_1}^2$ is
independent of $s$ (or $\lambda$), the NMSSM with $\kappa=0$ is ruled
out for all large values of the singlet expectation value if after
measuring $\mu$ and $\tan\beta$ at future accelerators, the heavy
pseudoscalar mass is not found to lie close to $\mu \tan \beta$.
Alternatively, if the mass prediction were found to hold, it would
provide indirect evidence for the PQ--symmetric NMSSM as a solution
to the strong CP problem and for the axion and saxion as a source of
dark matter.
\begin{figure}[h]
\label{fig:matbscan}
\begin{center}
{\hspace*{-0mm}\includegraphics[scale=0.6]{matbscan.eps}}\\
\caption{\it The distribution of scenarios with physically acceptable
vacua, with $M_A$ chosen randomly between $0$ and $6$~TeV, $\tan\beta$
chosen randomly between $3$ and $30$ and $\mu=200\,\mbox{GeV}$. The
blow-up allows individual scenario points to be seen.}
\end{center}
\end{figure}
\section{Slight breaking of the PQ--symmetry}
If one wants to avoid the introduction of a new intermediate scale
that arises in the NMSSM with $\kappa=0$ when the astrophysical limits
on the couplings of the lightest scalar and pseudoscalar particles are
applied, one must break the Peccei--Quinn symmetry. For a discussion
of the possible origins of this symmetry breaking in the
NMSSM see Refs.\cite{131}-\cite{13}. Here we assume that the
violation of the Peccei--Quinn symmetry is caused by a non--zero value
of $\kappa$. Moreover we restrict our consideration to small values
of $\kappa$ when the PQ--symmetry is only slightly broken. To be
precise we consider values of $\kappa$ that do not greatly change the
vacuum energy density:
\begin{equation} \kappa \lesssim \lambda^2\,.
\label{34}
\end{equation}
If $\kappa \gg \lambda^2$ then the terms $\kappa^2|S|^4$ and
$\displaystyle\frac{\kappa}{3}A_{\kappa}S^3$ in the Higgs effective potential
(\ref{5}) become much larger than $|\mu|^4\sim M_Z^4$ increasing the
absolute value of the vacuum energy density significantly.
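The estimate behind Eq.(\ref{34}) can be written out in one line (a sketch, using $s\sim\mu/\lambda$ for the singlet vacuum expectation value):

```latex
\begin{equation*}
\kappa^2|S|^4\sim\kappa^2\left(\frac{\mu}{\lambda}\right)^{4}
=\left(\frac{\kappa}{\lambda^{2}}\right)^{2}\mu^{4}
\lesssim\mu^{4}
\quad\Longleftrightarrow\quad
\kappa\lesssim\lambda^{2}\, ,
\end{equation*}
```

so the quartic singlet term stays of order $\mu^4\sim M_Z^4$ precisely when $\kappa\lesssim\lambda^2$.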
For small values of $\lambda$ the approximate formulae
Eqs.(\ref{252})--(\ref{26}) obtained in section~2 remain valid.
However, breaking the PQ--symmetry gives the lightest CP--odd Higgs an
appreciable mass that allows it to escape the strong astrophysical
constraints previously outlined. One must ensure that the value of
$\kappa$ is large enough for the lightest scalar and pseudoscalar to
escape the exclusion limits of LEP, but it is still physically
reasonable to only slightly break the PQ--symmetry, as defined by
Eq.(\ref{34}).
Indeed, for appreciable values of $\kappa$ and $\lambda$ this slight
breaking of the Peccei--Quinn symmetry may arise naturally from their
renormalization group (RG) flow from $M_X$ to $M_Z$ \cite{14}. While
the values of the Yukawa couplings at the Grand Unification scale
grow, the region where the solutions of the RG equations are
concentrated at the electroweak scale shrinks and they are focused
near the quasi fixed point \cite{15}:
\begin{equation}
h_t(M_t)\approx1.103,\qquad \lambda(M_t)\approx 0.514,\qquad \kappa(M_t)\approx 0.359\, .
\label{35}
\end{equation}
This point appears as a result of the intersection of the Hill-type
effective surface with the invariant line that connects the stable
fixed point in the strong Yukawa coupling limit with the infrared
fixed point of the NMSSM renormalization group equations. The
requirement of perturbativity up to the Grand Unification scale
provides stringent restrictions on the values of $\lambda(M_t)$ and
$\kappa(M_t)$
\begin{equation}
\lambda^2(M_t)+\kappa^2(M_t)< 0.5\, .
\label{36}
\end{equation}
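One can check directly that the quasi fixed point values of Eq.(\ref{35}) respect this perturbativity bound:

```python
lam, kappa = 0.514, 0.359  # quasi fixed point couplings at M_t, Eq.(35)
combo = lam**2 + kappa**2  # ~0.393, safely below the bound of 0.5
```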
In order to obtain a realistic spectrum, one must of course include
the leading one--loop corrections from the top and stop loops. We
performed this exercise numerically \cite{14} and present in Fig.2
the mass spectrum as a function of $m_A$ for the parameters
$\lambda=0.3$, $\kappa=0.1$, $\mu=157$\,GeV, $\tan \beta=3$ and
$A_{\kappa}=-60\,\mbox{GeV}$.
\begin{figure}[h]
\label{fig:masses}
\begin{center}
{\hspace*{-0mm}\includegraphics[totalheight=70mm,keepaspectratio=true]{masses.eps}}
\end{center}
\caption{\it The dependence of the Higgs boson masses on $m_A$ for
$\lambda=0.3$, $\kappa=0.1$, $\mu=157$\,GeV, $\tan \beta=3$ and
$A_{\kappa}=-60\,\mbox{GeV}$. Solid, dashed and dashed--dotted curves
correspond to the one--loop masses of the CP-even, CP-odd and charged
Higgs bosons respectively. All masses are given in GeV.}
\end{figure}
One sees that most of the structure outlined above is retained. The
heaviest scalar and pseudoscalar are approximately degenerate with the
charged Higgs boson and track $m_A$. The second lightest scalar is of
the order of the $Z$-boson mass plus radiative corrections, mimicking
the lightest scalar of the MSSM. The breaking of the PQ--symmetry has
raised the masses of the lightest scalar and pseudoscalar to values
which agree very well with the approximate expression of
Eqs.(\ref{252})-(\ref{26})\footnote{The agreement with the tree-level
expressions is good because the singlet nature of the new fields
suppresses loop corrections.}. Also notice that vacuum stability
prevents having very high or very low values of $m_A$ (or $x$) but now
the allowed range has increased significantly, permitting $m_A$ to
substantially deviate from $\mu \tan \beta$.
One might expect that such a light Higgs scalar should already be
ruled out by LEP, but this is not the case \cite{16}. The reduced
coupling to the $Z$-boson allows for a singlet-like scalar
substantially below the current SM LEP bounds. Indeed, LEP limits have
been included in Fig.2 as a shaded area: for this parameter choice,
values of $m_A$ in the shaded region either provide a scalar Higgs
boson which would have been seen at LEP or have an unstable
vacuum. There is a substantial range in $m_A$, once more around the
value $m_A \approx \mu \tan \beta$, where the Higgs scalar remains
undetected. In this way, the mass hierarchy between the lighter Higgs
bosons, around the electroweak scale, and the heavier Higgs bosons, at
around $\mu \tan \beta$, is maintained. Since the coupling of the
lightest scalar to the $Z$-boson must necessarily be suppressed in
this region to avoid detection at LEP, the sum rule of Eq.(\ref{33})
tells us that the couplings of the second lightest scalar will be
similar to those of the lightest scalar in the MSSM. It is interesting
to note that this light scalar would be very difficult to see at the
LHC since it will principally decay hadronically, presenting a signal
which is swamped by huge QCD backgrounds. According to Eq.(\ref{330}),
the coupling of the pseudoscalar to the $Z$-boson is always rather
small. Nevertheless, if these light Higgs bosons could be seen, one
would have a definitive signature of the NMSSM, even without observing
the heavier states.
\section{Conclusions}
We have studied the Higgs sector of the NMSSM with exact and slightly
broken Peccei--Quinn symmetry. In the PQ--symmetric NMSSM
astrophysical observations exclude any choice of the parameters unless
one allows $s$ to be enormously large
$(>10^9-10^{11}\,\mbox{GeV})$. These huge vacuum expectation values of
the singlet field can be consistent with the electroweak symmetry
breaking only if the coupling $\lambda$ is extremely small
$10^{-6}-10^{-9}$. Such tiny values of $\lambda$ may arise from
non--renormalizable operators. In this limit the main contribution to
the cold dark matter density comes from axion and saxion contributions
while that of the lightest supersymmetric particle, the axino, is
negligible.
If the PQ--symmetry is exact or only slightly broken, vacuum stability
and LEP exclusion require parameters which cause a splitting in the
NMSSM Higgs spectrum. The heaviest scalar, heaviest pseudoscalar and
charged Higgs bosons are approximately degenerate with masses around
$m_A\approx\mu\tan\beta$. The other three neutral states are
considerably lighter. The masses of the lightest scalar and
pseudoscalar, which are predominantly singlet fields, are governed by
the combination of parameters $\displaystyle\frac{\kappa}{\lambda}\,\mu$. The
SM--like Higgs boson mass remains at the electroweak scale.
In the limit of vanishing $\lambda$ and $\kappa$ the extra CP--even
and CP--odd singlet states decouple from the rest of the spectrum and
become invisible. However in the case of exact PQ--symmetry with $\kappa=0$ (or
very slightly broken PQ--symmetry with $\kappa\ll \lambda^2$) the stability of
the physical vacuum constrains the masses of the heavy Higgs bosons to
the vicinity of $m_A\approx \mu\, \tan\beta$. The strong correlation
between $m_A$ and $\mu \tan\beta$ coming from the dark sector of the
NMSSM gives a unique ``smoking gun'' for distinguishing this model
from the MSSM even if no extra Higgs states are discovered.
For appreciable values of $\lambda$ and $\kappa$ the slight breaking
of the PQ--symmetry can be caused by the NMSSM renormalization group
flow. Increasing $\lambda$ increases the mixing between the light
CP--even Higgs bosons, while increasing $\kappa$ increases the masses
of the predominantly singlet states. For small values of $\kappa$, one
can have a light scalar Higgs boson which would not have been seen at
LEP. Although the range of $m_A$ allowed by vacuum stability increases
significantly, one is still required to have $m_A \approx \mu \tan
\beta$ in order to avoid the LEP constraints, leading to a mass
splitting between the light and heavy Higgs bosons. Observing two
light scalars and one pseudoscalar Higgs but no charged Higgs boson at
future colliders would yield another opportunity to differentiate the
NMSSM with a slightly broken PQ--symmetry from the MSSM even if the
heavy Higgs states are inaccessible.
\section*{Acknowledgements}
\noindent
RN is grateful to E.Boos, I.Ginzburg, S.King, M.Krawczyk and
C.Panagiotakopoulos for fruitful discussions and helpful remarks. The
work of RN was partly supported by the Russian Foundation for Basic
Research (projects 00-15-96562 and 02-02-17379) and by a Grant of
President of Russia for young scientists (MK--3702.2004.2).
\section{Introduction}
State-of-the-art convolutional networks have been quite successful at various computer vision tasks. However, the improved performance has come at the cost of reduced human understanding of the model. It is crucial to bring transparency to these networks to help understand their decisions. Recently, there has been a lot of work on designing models that explain the behavior of a pre-trained convolutional network. These approaches generate saliency maps for explaining the model's behavior or explain the prediction based on training instances. Approaches such as GradCAM \cite{gradCAM} and its variants use gradient information to generate saliency maps, as illustrated in Figure \ref{fig:comparison}. Other approaches like Excitation Backpropagation \cite{DBLP:journals/corr/Zhang0BSS16} use top-down neural attention to generate attention maps. These explanations provide a high-level insight into the working of the model. However, we observe that invariably almost the entire object is highlighted in all the saliency maps, and the explanations are almost identical for the predicted class and any other class for a given image. This reduces the interpretability of their explanations. While the explanations can detect the foreground object, they are unable to accurately highlight the regions in the object that contributed towards the prediction.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth, height=\linewidth, keepaspectratio]{images/comp/draft_f.png}
\caption{[Best viewed in color] Explanations generated by different approaches for the predicted class and another class for an image of wolf.}
\label{fig:comparison}
\end{figure}
Another style of reasoning for a prediction is by dissecting an image and referring to the presence of smaller concepts. For example, if the image is that of a lion, the smaller concepts could be legs, ears, face, skin texture, etc. The aggregation of these smaller concepts is used to explain the final output. The fundamental objective of our framework: Model-Agnostic Concept Extractor (MACE) is to mimic this style of reasoning; which is to explain the model's behavior in terms of smaller concepts in the image. Our approach learns multiple concepts for a class without any supervision or additional information. These concepts are visualized through multiple high quality localized saliency maps, rather than a single region. Figure \ref{fig:outputIllustration} shows the visualizations of some of the concepts learned by our approach.
\begin{figure}[H]
\centering
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=0.24\linewidth]{images/stability/0_5_0_m_.png}%
\hfill
\includegraphics[width=0.24\linewidth]{images/stability/0_5_1_m_.png}%
\hfill
\includegraphics[width=0.24\linewidth]{images/stability/0_5_4_m_.png}%
\hfill
\includegraphics[width=0.24\linewidth]{images/stability/0_5_6_m_.png}
\caption{Muzzle in class Fox.}
\label{fig:muzzle}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=0.24\linewidth]{images/stability/1_9_0_m_.png}%
\hfill
\includegraphics[width=0.24\linewidth]{images/stability/1_9_2_m_.png}%
\hfill
\includegraphics[width=0.24\linewidth]{images/stability/1_9_4_m_.png}%
\hfill
\includegraphics[width=0.24\linewidth]{images/stability/1_9_8_m_.png}
\caption{Face in class German Shepherd.}
\label{fig:face}
\end{subfigure}
\\
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=0.24\linewidth]{images/stability/2_8_5_m_.png}%
\hfill
\includegraphics[width=0.24\linewidth]{images/stability/2_8_7_m_.png}%
\hfill
\includegraphics[width=0.24\linewidth]{images/stability/2_8_8_m_.png}%
\hfill
\includegraphics[width=0.24\linewidth]{images/stability/2_8_9_m_.png}
\caption{Legs in class Horse.}
\label{fig:legs}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=0.24\linewidth]{images/stability/3_2_0_m_.png}%
\hfill
\includegraphics[width=0.24\linewidth]{images/stability/3_2_1_m_.png}%
\hfill
\includegraphics[width=0.24\linewidth]{images/stability/3_2_6_m_.png}%
\hfill
\includegraphics[width=0.24\linewidth]{images/stability/3_2_9_m_.png}
\caption{Body texture in class Leopard.}
\label{fig:body}
\end{subfigure}
\caption{[Best viewed in color] Visualization of concepts across multiple images of the same class.}
\label{fig:outputIllustration}
\end{figure}
Further, the proposed framework also estimates the relevance of each concept with respect to the output of the model. This is particularly helpful when the explanation for any output of the model is desired. In Figure \ref{fig:comparison} the first row corresponds to the explanations from different approaches for the predicted class and the second row presents the explanations for the class with second-highest prediction probability. It can be noticed that there is no significant difference in the explanations generated by current approaches for the two outputs of the model. However, our approach can generate class-specific concepts as well as their relevance for a more comprehensive explanation. For an incorrect class, most of the concepts have negative relevance thereby suppressing the prediction probability. As illustrated in Figure \ref{fig:comparison} while the predicted class is the wolf, the second-highest probability is for class fox. The MACE framework extracts only a single concept (that of legs) with positive relevance for the class fox.
We also ensure that the MACE framework generates explanations that satisfy a few desirable properties. \textit{Stability}: The learned concepts should be consistent i.e. the localized saliency maps generated for a particular concept should be similar across all the images of the same class, thus achieving higher interpretability. This can be observed in Figure \ref{fig:outputIllustration} where the visualizations for a single concept obtained from different images appear very similar. \textit{Faithfulness}: The explanations generated by the framework should accurately represent the working of the pre-trained model. This will improve the trustworthiness of the explanations. Finally, \textit{Robustness}: The explanations should be robust to perturbations such as noise, translation, and rotation in the input. Explanations of perturbed inputs should not change significantly if there is no significant change in the corresponding outputs.
\section{Related Work}
The last few years have seen significant work on developing post-hoc explanation models \cite{gradCAM, grad_cam_plus_plus, cam, lime, anchors, shap}. Some of the popular approaches construct saliency maps that highlight important regions in the image. The Class Activation Map (CAM) \cite{cam} is one of the earliest approaches to construct a saliency map specific to a classification model. Gradient-weighted Class Activation Mapping (Grad-CAM) \cite{gradCAM} is a generalization of CAM that uses the gradients flowing into the final convolutional layer to localize salient image regions. GradCAM is among the most popular approaches for generating explanations for an image classification network. However, it has multiple drawbacks, including poor localization in the presence of multiple instances of the same object in an image and poor distinction between the class-specific saliency maps generated for different classes, given an input \cite{similar_saliency_ieee}. GradCAM++ \cite{grad_cam_plus_plus}, a variant of GradCAM, overcomes the limitation of generating accurate saliency maps for multiple instances of an object in an image. However, GradCAM++ requires the estimation of higher-order derivatives, the cost of which grows with increasing network complexity.
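The Grad-CAM construction referred to above amounts to a few lines of array code. An illustrative NumPy sketch (not the authors' implementation; in practice the feature maps and their gradients come from a forward and backward pass through the pre-trained network):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM saliency map.

    feature_maps: (K, H, W) activations of the last conv layer.
    gradients:    (K, H, W) gradients of the class score w.r.t. them.
    """
    # Channel weights: global-average-pool the gradients.
    weights = gradients.mean(axis=(1, 2))              # (K,)
    # Weighted sum of feature maps, then ReLU to keep positive evidence.
    cam = np.tensordot(weights, feature_maps, axes=1)  # (H, W)
    cam = np.maximum(cam, 0.0)
    # Normalize to [0, 1] for visualization.
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy example with random activations and gradients:
rng = np.random.default_rng(0)
cam = grad_cam(rng.standard_normal((8, 7, 7)), rng.standard_normal((8, 7, 7)))
```

The map is then upsampled to the input resolution and overlaid on the image.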
Vis-CNN \cite{vis_cnn} also uses gradients to explain the contribution of each input pixel towards the output. However, judging from most of the results presented in the paper, as well as from our own experiments, this pixel-wise quantification behaves more like a foreground detector. Full Grad \cite{full_gradient} combines gradients flowing back to the intermediate layers with gradients flowing back to the input to generate a saliency map attributing the importance of each pixel towards a prediction. On the other hand, Wang et al. \cite{similar_saliency_ieee} suggest a possible compromise in the faithfulness of explanations when gradients are used to generate them. Other variants of CAM, such as Ablation-CAM \cite{ablation_cam} and Score-CAM \cite{score_cam}, avoid the need for gradients when computing the saliency maps. Our approach also estimates saliency maps for explaining the pre-trained model without using gradient information. Further, we automatically synthesize multiple maps, each characterizing a different concept present in the image.
Fong et al. \cite{perturbation_deletion} propose a perturbation-based approach for estimating the salient regions in an image, which was later extended to also account for the case where a region is preserved rather than deleted \cite{perturbation_preservation}. These complementary approaches successively modify the input image to locate the region integral to the final prediction. The feasibility of this approach is challenged by the high-dimensional nature of images and the enormous number of possible perturbations. Excitation backpropagation \cite{DBLP:journals/corr/Zhang0BSS16} proposes a probabilistic attention mechanism to explain the working of black boxes in a fine-grained manner. However, as recent works \cite{attention_mohankumar_2020,attention_ieee_xu,grimsley2020attention} note, attention need not be human interpretable.
Another category of explainable models is the ante-hoc models, which are explainable by design \cite{DLFCBP,this_looks_like_that,hierarchical_prototypes,concept_whitening}. Because explainability is a design consideration, the explanations are faithful to the underlying model. However, the need to retrain the model from scratch to incorporate explainability is an obstacle to explaining a pre-trained model that has already been deployed.
Our proposed approach is a post-hoc explainability technique like \cite{cam,gradCAM,grad_cam_plus_plus} that leverages some aspects of ante-hoc explainability techniques like \cite{this_looks_like_that, DLFCBP} to generate fine-grained and faithful explanations. Like GradCAM, our approach generates human-interpretable explanations in the form of a saliency map. However, unlike GradCAM, we do not utilize the gradient backflow information for generating the saliency maps. Our work is closely related to the exciting paradigm of `this looks like that' proposed by Chen et al. \cite{this_looks_like_that}. Specifically, the MACE framework generates multiple saliency maps containing different parts/concepts for explaining the output of a single image. However, unlike \cite{this_looks_like_that}, our approach does not require modifying and retraining the black-box architecture, nor is there a need to slide across the activation maps or tune additional parameters to visualize a concept.
\section{Methodology}
The MACE framework is envisioned as a modular network that can be attached to any convolutional layer of a pre-trained network to probe its functionality. However, for describing the framework, we assume that the MACE network is inserted between the final convolutional layer and the first dense layer of a pre-trained network. This lateral connection taps into the spatial distribution of concepts (as gathered from the preceding convolutional layers) and the discriminative features for classification (from the subsequent dense layers). Given a query image and a class label, the MACE framework extracts the class-specific low-level concepts along with their relevance towards the output of the model.
The process of learning and extracting the concepts and their relevance is divided across four modules: the map generator, the embedding generator, the relevance estimator, and the output generator. The MACE framework is illustrated in Figure \ref{fig:architecture}. The map generator learns an attended spatial map representing the spread of a concept using the output of the last convolutional layer of the pre-trained model. The embedding generator transforms the manifestation of a concept into a latent representation that is invariant to spatial information. The relevance estimator determines the importance of a class-specific concept towards the output of the model. Finally, the output generator completes the loop by linking the extracted concepts back to the first dense layer of the pre-trained model.
Let $f$ be a pre-trained model for a $K$-way image classification task, trained using a dataset $\mathcal{D}$. The output of the last convolutional layer in $f$ for a given input is denoted as $\textbf{x}$. Thus $\textbf{x}\in \mathbb{R}^{H\times W\times D}$, where $H$ and $W$ refer to the size of the feature map and $D$ refers to the number of filters in the last convolution operation. Let $\textbf{z}\in \mathbb{R}^L$ represent the output of the dense layer following the convolutional layer. The MACE framework is laterally connected into the transformation of $\textbf{x}$ to $\textbf{z}$.
For the sake of simplicity, we assume that every class can be explained by the same number of concepts. (In practice, as discussed in Section S2 of the supplementary material, pruning may result in a varying number of concepts per class.) A concept has two aspects. The first is its actual manifestation in an image, termed the concept map; for example, the region corresponding to the ears or legs. The second is a latent representation, termed the concept embedding, that is invariant to the manifestation: irrespective of the location and orientation of the concept ear in an image, the encoding of the concept remains the same. $\textbf{c}_{jk} \in \mathbb{R}^{H\times W}$ denotes the $j^{th}$ concept map for the $k^{th}$ class and $\textbf{e}_{jk} \in \mathbb{R}^Q $ denotes the corresponding concept embedding.
\begin{figure}
\centering
\includegraphics[width=\textwidth,height=\textheight,keepaspectratio]{archi.png}
\caption{[Best viewed in color] Architecture of the MACE Framework.}
\label{fig:architecture}
\end{figure}
\subsection{Map Generator}
The map generator takes as input the convolutional feature map $\textbf{x}$ and estimates a concept map $\textbf{c}_{jk}$. The concept map encodes two properties: the activation pattern of a concept and the salient region in the image that expresses this activation pattern. The activation pattern is used by the embedding generator to extract a distinct and invariant encoding for the concept. The salient region in the image is used for visualizing the concept post-training, facilitating the human interpretability of the concept.
Due to the nature of convolution operations, the activations of a concept are distributed across the different channels of the feature map $\textbf{x}$. The MACE framework combines the channels of the feature map into a single activation pattern for a concept, computed as a weighted average of the channels. Specifically, we employ 1D convolutions to obtain the concept map. Let $\theta_{jk}^M$ denote the weights of the 1D convolution filter for generating the $j^{th}$ concept map of the $k^{th}$ class; then $ \textbf{c}_{jk} = ReLU(\textbf{x} * \theta_{jk}^M) $. The $ReLU$ operation is performed because the locations in the activation map corresponding to large positive values may denote the presence of the concept \cite{gradCAM,grad_cam_plus_plus}. Assuming $C$ concepts per class, we have to learn $K\times C$ 1D convolution filters. The concept maps $\textbf{c}_{jk}$ have the same size as the input feature map $\textbf{x}$.
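A minimal NumPy sketch of this step (an illustration, not the training code; the filter weights below are random stand-ins for the learned $\theta_{jk}^M$): the 1D convolution over channels reduces to a weighted sum along the depth axis, followed by ReLU.

```python
import numpy as np

def concept_map(x, theta):
    """Collapse a feature map x of shape (H, W, D) into one concept map
    of shape (H, W) via a 1x1 convolution over channels (a weighted
    average with weights theta of shape (D,)), followed by ReLU.
    One such filter is learned per (concept, class) pair."""
    return np.maximum(0.0, x @ theta)  # (H, W, D) @ (D,) -> (H, W)

# toy example: a 4x4 feature map with 3 channels and random weights
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4, 3))
theta = rng.standard_normal(3)
c = concept_map(x, theta)
assert c.shape == (4, 4) and (c >= 0).all()
```

The non-negativity after ReLU is what lets the map be thresholded later into a binary saliency mask.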
Each of these concept maps represents an activation pattern. The region in the concept map exhibiting the highest activation is the one most likely to have expressed the concept. The region in the original image contributing to this activation can be visualized by resizing the concept map to the original image dimensions; we use this resized concept map to visualize the concept. The MACE framework does not impose any constraints on the size of a concept, allowing it to learn concepts ranging from small ones such as ears to large ones such as the body or the background.
\subsection{Embedding Generator}
The next module, the embedding generator, takes as input the concept maps stacked class-wise. The map generator uses only the convolutional activation maps to generate the concept map and thus retains spatial information about the location of the concept in the image. Consider two images: in the first, a lion appears in the right half of the image, while in the second, a differently oriented lion appears in the left half. The concept maps of these two images will have high activations in different regions even though the underlying concept is the same. Thus, we need to abstract the concept away from its actual manifestation. We refer to this encoding as the concept embedding. We would like the embeddings of a concept, extracted from different images of the same class exhibiting that concept, to form a tight cluster.
The abstraction of the concept through the embedding also enables a comparison of different concepts. We model the embedding generator as a multi-layer dense network; a sufficiently large network enables the extraction of different concepts. However, the parameters of the network are shared across the concepts of a class to reduce the overall learning complexity. Formally, let the embedding generator network for class $k$ be parameterized by $\theta^E_k$. A stack of the concept maps $\{\textbf{c}_{jk}\}_{j=1}^C$ extracted from an input image is fed to the embedding generator, resulting in the set of embeddings $\{\textbf{e}_{jk}\}_{j=1}^C$. Restricting the embeddings to a positive subspace (as was the case with the concept map generation) is not necessary; we use tanh activations, enabling the use of the full space to learn well-separated embeddings.
Triplet loss is used to learn the invariant concept embeddings; the embeddings are normalized before applying the loss. We use the training images belonging to class $k$ from a mini-batch to learn the $C$ concept embeddings for that class. The embeddings generated for concept $j$ across the training images of a particular class are chosen as anchor positives. The embeddings of the other concepts (except $j$) for a specific training image are the anchor negatives. Similar to \cite{schroff2015facenet}, we use all anchor-positive pairs and select semi-hard negatives for the anchor-negative pairs. The margin $\alpha$ is set to 1 to make the embeddings orthogonal to each other. Let $\textbf{e}_{jk}^a(i)$ represent an anchor for the $i^{th}$ image in the mini-batch, with anchor positives $\textbf{e}_{jk}^p(i)$ and anchor negatives $\textbf{e}_{jk}^n(i)$. Then the triplet loss for class $k$ over a mini-batch of $B$ images is defined as
\begin{equation}
\mathcal{L}_k^E = \sum_{i=1}^B\sum_{j = 1}^{C}\bigg[ \| \textbf{e}_{jk}^{a}(i) - \textbf{e}_{jk}^{p}(i)\|_{2}^{2} - \|\textbf{e}_{jk}^{a}(i) - \textbf{e}_{jk}^{n}(i)\|_{2}^{2} + \alpha\bigg]_{+}
\label{eq:tripletloss}
\end{equation}
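The per-triple term of the loss above can be sketched as follows (a NumPy illustration with margin $\alpha=1$; the semi-hard negative mining of \cite{schroff2015facenet} and the mini-batch summation are omitted for brevity):

```python
import numpy as np

def triplet_term(anchor, positive, negative, alpha=1.0):
    """One hinge term of the triplet loss: embeddings are L2-normalized,
    then the squared anchor-positive distance minus the squared
    anchor-negative distance plus the margin is clipped at zero."""
    a, p, n = (v / np.linalg.norm(v) for v in (anchor, positive, negative))
    d_ap = np.sum((a - p) ** 2)
    d_an = np.sum((a - n) ** 2)
    return max(0.0, d_ap - d_an + alpha)

# an identical anchor/positive pair with an orthogonal negative incurs
# zero loss: d_ap = 0, d_an = 2, and 0 - 2 + 1 < 0
a = np.array([1.0, 0.0])
p = np.array([2.0, 0.0])   # same direction as the anchor
n = np.array([0.0, 3.0])   # orthogonal to the anchor
assert triplet_term(a, p, n) == 0.0
```

With $\alpha=1$ and unit-norm embeddings, the hinge is inactive exactly when the negative is at least orthogonal to the anchor, which is what drives the concepts apart.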
\subsection{Relevance Estimator}
A key aspect of the MACE framework is the estimation of the relevance of a concept towards the model prediction. This score, termed the concept relevance, quantifies the contribution of a concept towards the output of the model for a given class. The concept relevance enables a comparison of the different concepts for a given class and provides better insight into the relationship between the concepts and the model's output.
Given a training image, the concept embeddings of class $k$ are concatenated and passed through a sigmoid activated dense layer (parameterized by $\theta^R_k$) with a single output to learn the probability of classification for class $k$ as estimated by the pre-trained model. $\theta^R_k$ can be divided into chunks representing the connection weights for the individual concepts, i.e. $\theta^R_k=[\theta^R_{1k}, \ldots, \theta^R_{Ck}]$. Then, the concept relevance for concept $j$ with respect to the output for class $k$ is denoted as $r_{jk}={\theta^R_{jk}}^T\textbf{e}_{jk}$. Note that the concept relevance $r_{jk} \in (-\infty, \infty)$. As the argument to the sigmoid activation of the dense layer is $\sum_{j=1}^Cr_{jk}$, the impact of the concept relevance scores on the prediction probabilities can be easily understood. Positive (negative) concept relevance increases (decreases) the prediction probability, thus emphasizing (reducing) the importance of the concept for a particular prediction.
The parameters of the dense layer, $\theta^R_k$, are learned by minimizing the cross-entropy between the output of the sigmoid and the pre-trained model's prediction probability for class $k$. Specifically, for a mini-batch of training instances, the loss for learning the parameters is defined as
\begin{equation}
\mathcal{L}_k^R=-\sum_{i=1}^B f_k(i) \log\left(\sigma\left(\sum_{j=1}^Cr_{jk}(i)\right)\right)
\end{equation}
where $f_k(i)$ and $r_{jk}(i)$ are, respectively, the pre-trained model's output for class $k$ and the concept relevance of concept $j$ with respect to class $k$ for the training instance $i$.
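The relevance computation can be sketched as follows (a NumPy illustration with toy values; the per-concept weight chunks $\theta^R_{jk}$ and embeddings $\textbf{e}_{jk}$ are stand-ins, not learned parameters):

```python
import numpy as np

def class_probability(embeddings, theta_R):
    """Sigmoid-activated dense layer over the concatenated concept
    embeddings of one class. theta_R is split into per-concept chunks,
    so each concept's relevance is r_jk = theta_R[j] . e_jk and the
    predicted probability is sigmoid(sum_j r_jk)."""
    relevances = np.array([theta_R[j] @ embeddings[j]
                           for j in range(len(embeddings))])
    prob = 1.0 / (1.0 + np.exp(-relevances.sum()))
    return relevances, prob

# toy setup: 2 concepts with 3-dimensional embeddings
e = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
w = [np.array([2.0, 0.0, 0.0]), np.array([0.0, -1.0, 0.0])]
r, p = class_probability(e, w)
assert np.allclose(r, [2.0, -1.0])               # one positive, one negative
assert np.isclose(p, 1.0 / (1.0 + np.exp(-1.0)))  # sigmoid of the sum
```

The additive structure inside the sigmoid is what makes each $r_{jk}$ individually interpretable: raising one relevance, all else fixed, raises the predicted probability.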
\subsection{Output Generator}
To increase the faithfulness of the explanations, we loop the concept embeddings back into the pre-trained model. The concatenated concept embeddings are passed through a dense layer (parameterized by $\theta^O$) to approximate the output, $\textbf{z}$, of the first dense layer of the pre-trained model. An $L_{2}$ loss between the approximation $\tilde{\textbf{z}}$ and $\textbf{z}$, defined as $\mathcal{L}^D = \| \textbf{z} - \tilde{\textbf{z}}\|^{2}_{2}$, is used to learn the approximation. If the approximation is accurate, passing $\tilde{\textbf{z}}$ through the dense layers of the pre-trained model should reproduce the output $f(\textbf{z})$ (with a slight abuse of notation, $\textbf{x}$ is replaced by $\textbf{z}$). This requirement is enforced by minimizing the divergence between the pre-trained model's outputs for $\tilde{\textbf{z}}$ and $\textbf{z}$, defined as $\mathcal{L}^O = KL( f(\tilde{\textbf{z}}) \| f(\textbf{z}) )$. The $L_{2}$ loss is required to faithfully mimic the behavior of the pre-trained model; leaving out the divergence loss can still yield a good approximation $\tilde{\textbf{z}}$, but $f(\tilde{\textbf{z}})$ may then be inconsistent with $f(\textbf{z})$.
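A minimal sketch of the two losses (NumPy; `f_z` and `f_z_tilde` stand in for the model's softmax outputs on $\textbf{z}$ and $\tilde{\textbf{z}}$, which are toy values here):

```python
import numpy as np

def output_losses(z, z_tilde, f_z, f_z_tilde):
    """L^D: squared-L2 reconstruction loss between z and its
    approximation.  L^O: KL divergence KL(f(z_tilde) || f(z)) between
    the model's output distributions for the two vectors."""
    l2 = np.sum((z - z_tilde) ** 2)
    kl = np.sum(f_z_tilde * np.log(f_z_tilde / f_z))
    return l2, kl

z = np.array([1.0, 2.0])
z_tilde = np.array([1.0, 1.0])
f_z = np.array([0.5, 0.5])      # identical output distributions ...
l2, kl = output_losses(z, z_tilde, f_z, f_z)
assert l2 == 1.0                # ... so only the reconstruction loss fires
assert np.isclose(kl, 0.0)
```

The toy case illustrates the point made above: $\tilde{\textbf{z}}$ can differ from $\textbf{z}$ (non-zero $\mathcal{L}^D$) while the downstream predictions agree ($\mathcal{L}^O = 0$), so both terms are needed.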
Overall, the MACE framework minimizes $\mathcal{L}=\sum_{k=1}^K(\mathcal{L}^E_k + \mathcal{L}^R_k) + \mathcal{L}^D + \mathcal{L}^O$ with respect to the parameters $\theta^M, \theta^E, \theta^R$, and $\theta^O$. The Adam optimizer \cite{adam} is used to minimize the loss function.
\section{Experiments}
We validate the MACE framework on the VGG \cite{simonyan2014deep} and ResNet \cite{DBLP:journals/corr/HeZRS15} architectures trained on the ImageNet dataset \cite{DBLP:journals/corr/RussakovskyDSKSMHKKBBF14}. The networks are fine-tuned on the AWA2 dataset \cite{AWA2,awa2_paper_xian_2018_zero} and the Places365 dataset \cite{places}. We discuss the results for the AWA2 dataset using the VGG model in the paper and present the results for the Places365 dataset and the ResNet architecture in the supplementary material (Sections S8 and S7, respectively). The AWA2 dataset consists of 37322 images of 50 animal classes. We select a subset of 10 classes with approximately 400 images per class for our experiments. The VGG network fine-tuned on this dataset is our pre-trained model. For every class, we aim to learn 10 concepts ($C=10$) with a concept embedding dimension of 32 ($Q=32$). Examples of the concepts extracted for various classes are presented in Figure \ref{fig:outputIllustration}. The architectural details of the embedding generator, along with the choices of hyper-parameters, are presented in Section S1 of the supplementary material. Additional examples of the visualization of the concepts for the AWA2 dataset are presented in Sections S3 and S4 of the supplementary material. While we describe the results of three salient experiments here, investigations into the stability and robustness of the explanations as well as ablation studies are discussed in Sections S5, S6, and S9 of the supplementary material, respectively.
\subsection{Faithfulness of the Explanations}
Faithfulness, the ability to accurately explain the working of a pre-trained model, is one of the most important traits of a post-hoc explanation model. As suggested in prior literature, we measure faithfulness using the drop in the classification probability upon masking. We generate a binary mask by thresholding each of the concept maps. As the concepts are trained to be different from each other, masking out just one concept may not yield a significant drop in the probability of the predicted class, as the model can still predict with the help of the other concepts. For example, as seen in Figure \ref{fig:ear_masked}, masking the ear concept does not alter the classification probabilities significantly, as the model compensates for it through the other concepts. Thus, we take the union of all the binary masks. The region covered by the resulting mask is removed from the image and the drop in the probability of the predicted class is measured. We also compare against the drop in the prediction probability when the mask is synthesized by other saliency-map generating approaches as well as by a random process (activation maps combined randomly). The results of this experiment are presented in Figure \ref{fig:faithfulness}. The highest drop in probability is observed for the mask synthesized by the MACE framework at every threshold.
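The masking procedure can be sketched as follows (a NumPy illustration; the `predict` stand-in is a toy model, not the actual VGG classifier, and zeroing out is one possible way of "removing" the masked region):

```python
import numpy as np

def union_mask(concept_maps, threshold):
    """Binary mask that is 1 wherever any (resized) concept map exceeds
    the threshold, i.e. the union of the per-concept binary masks."""
    masks = [m > threshold for m in concept_maps]
    return np.any(masks, axis=0)

def probability_drop(image, mask, predict):
    """Drop in the predicted-class probability after removing (here:
    zeroing out) the masked region from the image."""
    masked = image * (~mask)
    return predict(image) - predict(masked)

# toy "model": probability proportional to the mean image intensity
predict = lambda img: float(img.mean())
img = np.ones((4, 4))
maps = [np.eye(4), np.zeros((4, 4))]     # one diagonal concept, one empty
m = union_mask(maps, 0.5)
assert m.sum() == 4                      # only the diagonal survives
assert np.isclose(probability_drop(img, m, predict), 4 / 16)
```

A larger probability drop for a mask of the same size indicates that the explanation has captured regions the model actually relies on.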
\begin{figure}[htb]
\centering
\begin{subfigure}{0.495\textwidth}
\includegraphics[width=\linewidth]{faith.png}
\caption{}
\label{fig:ear_masked}
\end{subfigure}\hfil
\begin{subfigure}{0.495\textwidth}
\includegraphics[width=\linewidth]{faithfulness.png}
\caption{}
\label{fig:faithfulness}
\end{subfigure}\hfil
\caption{[Best viewed in color] Faithfulness of the explanations extracted by MACE. (a) No change in the prediction distribution after masking a concept, due to compensation by the model using other concepts; (b) Drop in the classification probability across different threshold values for generating the binary mask.}
\label{fig:faithfullness_full}
\end{figure}
\subsection{Explaining all the outputs for a test image}
An image classification network outputs a probability distribution over the class labels. An important aspect of any explanation model is to explain all the non-zero probabilities in the output of the classifier. In our approach, this is done by examining the concepts generated for every class and the corresponding concept relevances. We specifically look at the concepts that have positive concept relevance for a class. Figure \ref{fig:wrongexp} shows the results for a few images. For example, the first two rows present the explanations generated by different models for why the classifier assigns non-zero probabilities to the German Shepherd and Wolf classes when the correctly predicted class is Fox. The original image in the first column of Figure \ref{fig:wrongexp} is a fox image; the visualizations of the explanations given by MACE (second column) for the German Shepherd and Wolf classes are meaningful and highlight smaller regions of the fox such as the ears and legs, while the other approaches give the same explanation for both classes. We can also observe from the figure that MACE consistently gives different meaningful explanations for different classes, while the other approaches give the same explanation or none at all.
\begin{figure}[htb]
\centering
\begin{subfigure}{0.70\textwidth}
\centering
\includegraphics[width=\linewidth]{wrong_class_imgs.JPG}
\caption{}
\label{fig:wrongexp}
\end{subfigure}\hfil
\begin{subfigure}{0.29\textwidth}
\centering
\includegraphics[width=0.95\linewidth, keepaspectratio]{pvp.png}
\caption{}
\label{fig:pvp}
\includegraphics[width=0.95\linewidth, keepaspectratio]{bargraph.png}
\caption{}
\label{fig:bargraph}
\end{subfigure}\hfil
\caption{[Best viewed in color] (a) Explanations of different approaches for an incorrect class. Each row corresponds to the explanation of some output probability for a wrong class, which is written on the left-most side of the row. The image shown for the MACE approach is the visualization of the concept with the highest concept relevance for that class, with the concept relevance written below the visualization; (b) Percentage of concepts with positive concept relevance less than the class average, computed using only the images for which the class attains a given prediction rank (see text for more details); (c) Average relevances of all the concepts for the true class and for all classes except the true class.}
\label{fig:otherclass_explanations}
\end{figure}
We further analyze the behavior of the concept relevance scores for different outputs of the pre-trained model. We first compute the average concept relevance for a concept embedding $e_{jk}$ using all the images belonging to class $k$. Over the test set, we compute the percentage of concepts with positive concept relevance less than the average score, using only the images for which class $k$ is the second-highest ranked class according to the pre-trained model's predictions, and similarly for the lower ranks. We then compute the average percentage across all classes for a particular rank. The result is summarized in Figure \ref{fig:pvp}. We observe that the percentage of concepts with a positive relevance score below the average increases as the prediction rank worsens. This is intuitive, as the concept relevance for a concept of an incorrect class should decrease as we go down the class ranking.
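Our reading of this per-image statistic can be sketched as follows (a hypothetical NumPy illustration; the relevance and class-average vectors are toy values):

```python
import numpy as np

def pct_below_average(relevances, avg_relevance):
    """Among the concepts with positive relevance on one test image,
    the percentage whose score falls below that concept's class-wide
    average relevance."""
    positive = relevances > 0
    if not positive.any():
        return 0.0
    below = (relevances < avg_relevance) & positive
    return 100.0 * below.sum() / positive.sum()

r = np.array([0.5, 2.0, -1.0])    # relevances on one test image
avg = np.array([1.0, 1.0, 1.0])   # per-concept class averages
assert pct_below_average(r, avg) == 50.0  # one of two positive scores is below
```

Averaging this quantity over the images at a fixed prediction rank yields the per-rank percentages plotted in Figure \ref{fig:pvp}.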
Figure \ref{fig:bargraph} shows the average concept relevances for all the concepts, where the average is split between the relevances of concepts belonging to the true class of a test image (AVG True) and the relevances of concepts belonging to any class other than the true class (AVG Others). It can be observed that AVG Others (green bars) is significantly lower than AVG True (blue bars) for all the concepts. This further strengthens the credibility of the concept relevances, as the relevance of a concept should be higher when the concept belongs to the true class of the image than when it belongs to any other class.
\subsection{Human Evaluation of Explanations}
We evaluate the quality of the explanations generated for the VGG16 network fine-tuned on 10 classes of the AWA2 dataset. We present to every user the saliency maps generated by MACE, GradCAM \cite{gradCAM}, GradCAM++ \cite{grad_cam_plus_plus}, Excitation Backpropagation \cite{DBLP:journals/corr/Zhang0BSS16}, and VisCNN \cite{vis_cnn} for a random set of 10 images. In addition, we randomly select 2 images from this set for repetition, creating a final questionnaire of 12 examples and the corresponding explanations. The subjects were asked to choose the approach whose explanation helped them better understand the classifier's prediction. Responses in which the answers to the repeated questions mismatched were removed to maintain consistency. We received 41 consistent responses, resulting in a total of 410 votes. The results are summarized in Table \ref{HumanExptable}. We observe that the explanations from MACE are the most preferred among the participants. Further, approaches like MACE and Excitation Backpropagation are preferred over gradient-based approaches like VisCNN, GradCAM, and GradCAM++.
\begin{table}
\centering
\caption{Preferences of the Human Subjects for Explaining the Classifier's Predictions.}
\begin{tabular}{||c | c||}
\hline
Approach & Vote Percentage \\ [0.5ex]
\hline
\hline
MACE (ours) & \textbf{48.29\%} \\
\hline
Excitation Backpropagation \cite{DBLP:journals/corr/Zhang0BSS16} & 40.73\% \\
\hline
GradCAM++ \cite{grad_cam_plus_plus} & 9.51\% \\
\hline
GradCAM \cite{gradCAM} & 1.21\% \\
\hline
VisCNN \cite{vis_cnn} & 0.24\% \\
\hline
\end{tabular}
\label{HumanExptable}
\end{table}
\section{Summary}
In this work, we define a new form of reasoning for explaining the outputs of an image classification network using multiple concepts/parts of an object. We present the MACE framework for extracting and visualizing these multiple concepts. We also propose a mechanism for estimating the relevance of a concept towards the output of the model. We perform extensive experiments with the MACE framework using the VGG16 and ResNet50 architectures for animal and place classification tasks. Our results confirm the faithfulness of the explanations as well as their human interpretability.
\newpage
\bibliographystyle{plainnat}
\section{Experimental Details}
Our implementation can be accessed at \url{https://github.com/mace19/MACE}. In this section, we discuss the experimental details for the various architectures and datasets. For each dataset, we select 10 classes, and for every class, we aim to learn 10 concepts with a concept embedding dimension of 32.
\subsection{VGG16 on AWA2}
We train our MACE framework on the VGG16 \cite{simonyan2014deep} architecture using a subset of the AWA2 dataset \cite{AWA2,awa2_paper_xian_2018_zero} containing 10 classes with approximately 400 images per class. The learning rate is set to $10^{-4}$ and the model is trained for 64 epochs using the Adam optimizer \cite{adam}. The accuracy of the model was 92.7\%.
\subsection{VGG16 on Places365}
We train our MACE framework on the VGG16 \cite{simonyan2014deep} architecture using a subset of the Places365 dataset \cite{places} containing 10 classes with approximately 5000 images per class. The learning rate is set to $5\times10^{-4}$ and the model is trained for 32 epochs using the Adam optimizer \cite{adam}. The accuracy of the model was 93.1\%.
\subsection{ResNet-50 on AWA2}
We train our MACE framework on the ResNet-50 \cite{DBLP:journals/corr/HeZRS15} architecture using a subset of the AWA2 dataset \cite{AWA2,awa2_paper_xian_2018_zero} containing 10 classes with approximately 400 images per class. The learning rate is set to $10^{-3}$ and the model is trained for 128 epochs using the Adam optimizer \cite{adam}. The accuracy of the model was 96.5\%.
\section{Pruning}
The proposed methodology sometimes generates concepts that are not meaningful. We prune these concepts so that only the meaningful ones remain. We use the following strategy for pruning the concepts:
\begin{itemize}
\item We fetch the top $T$ images in terms of concept relevance; if more than $S$ of those images do not belong to the same class as the concept, we prune the concept. In our experiments, we set $T = 10$ and $S = 5$.
\item If a large part of the test dataset has positive concept relevance for a concept, we prune the concept. In our experiments with the AWA2 dataset, we pruned a concept if it had positive concept relevance for more than 50\% of the test images.
\item If a concept masks in almost the entire image, we prune the concept. In our case, we pruned a concept if, on average, it masked in 95\% of the image.
\item If a concept has negative concept relevance for most of the images belonging to the same class as the concept, we prune the concept. In our experiments, we pruned a concept if fewer than 5\% of the images belonging to its class had positive concept relevance.
\end{itemize}
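The four rules can be sketched as a single predicate (a plain-Python illustration; the argument names are our own, and the per-concept statistics are assumed to be precomputed over the test set):

```python
def should_prune(top_class_matches, pct_positive_all, avg_mask_coverage,
                 pct_positive_own_class, T=10, S=5):
    """Apply the four pruning rules with the thresholds stated above.
    top_class_matches: how many of the top-T images by relevance belong
    to the concept's own class; pct_positive_all / pct_positive_own_class
    are percentages; avg_mask_coverage is a fraction of the image."""
    if top_class_matches < T - S:      # rule 1: >S of top-T images off-class
        return True
    if pct_positive_all > 50.0:        # rule 2: positive on most of test set
        return True
    if avg_mask_coverage > 0.95:       # rule 3: mask covers almost everything
        return True
    if pct_positive_own_class < 5.0:   # rule 4: negative even on its own class
        return True
    return False

assert should_prune(10, 10.0, 0.5, 80.0) is False   # passes every rule
assert should_prune(10, 60.0, 0.5, 80.0) is True    # fails rule 2
```

A concept surviving all four checks is kept, and the MACE model is then fine-tuned as described next.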
After pruning the concepts, we fine-tune the MACE model and use the fine-tuned model for all the experiments. Visualizations of some of the pruned concepts are shown in Figure~\ref{fig:pruned_concepts_visualization}.
\begin{figure}[H]
\centering
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/0_9/0_9_0_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/0_9/0_9_1_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/0_9/0_9_2_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/0_9/0_9_3_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/0_9/0_9_4_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/0_9/0_9_5_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/0_9/0_9_6_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/0_9/0_9_7_m_.png}
\end{subfigure}
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/2_6/2_6_0_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/2_6/2_6_1_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/2_6/2_6_3_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/2_6/2_6_5_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/2_6/2_6_2_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/2_6/2_6_6_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/2_6/2_6_7_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/2_6/2_6_8_m_.png}
\end{subfigure}
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/4_0/4_0_1_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/4_0/4_0_3_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/4_0/4_0_2_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/4_0/4_0_4_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/4_0/4_0_5_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/4_0/4_0_6_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/4_0/4_0_7_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/4_0/4_0_8_m_.png}
\end{subfigure}
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/4_1/4_1_0_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/4_1/4_1_1_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/4_1/4_1_3_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/4_1/4_1_4_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/4_1/4_1_5_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/4_1/4_1_6_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/4_1/4_1_7_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/4_1/4_1_9_m_.png}
\end{subfigure}
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/4_2/4_2_0_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/4_2/4_2_1_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/4_2/4_2_2_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/4_2/4_2_3_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/4_2/4_2_4_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/4_2/4_2_5_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/4_2/4_2_6_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/pruned_concepts/4_2/4_2_7_m_.png}
\end{subfigure}
\caption{[Best viewed in color] Some of the pruned concepts. Each row contains prototypical images of a pruned concept.}
\label{fig:pruned_concepts_visualization}
\end{figure}
\newpage
\section{Concept visualization and relevance for true class prediction}
Figure \ref{fig:first_5} shows the visualizations of the learned concepts and their corresponding relevance values for a few images. We observe that for most images the background concept has very high relevance, suggesting that it makes an important contribution to the model's prediction. We also observed some repetitive patterns in the visualizations of the learned concepts; such repetitions have been removed from this figure.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{images/true_classes/true_class.png}
\caption{[Best viewed in color] Concepts with relevances corresponding to the predicted class}
\label{fig:first_5}
\end{figure}
\newpage
\section{Concept visualization and relevance for other class prediction}
One of the major contributions of our method is that it can provide rich explanations for why a non-zero prediction probability is assigned to classes other than the predicted class. Figures~\ref{fig:fox_lion_face}, \ref{fig:fox_dog_ear}, \ref{fig:horse_dog_color}, \ref{fig:zebra_tiger}, \ref{fig:leopard_wolf_body}, \ref{fig:zebra_background}, \ref{fig:tiger_background} and \ref{fig:tiger_wolf_face} show a few such explanations.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{other_classes/1.JPG}
\caption{[Best viewed in color] In this concept, the model is looking at the face region of the fox image. The visualizations of this concept for the lion class also show the face region.}
\label{fig:fox_lion_face}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{other_classes/2.JPG}
\caption{[Best viewed in color] In this concept, the model is looking at the ear region of the fox image. The visualizations of this concept for the dog class show the ear and eye region. Since the eyes are not clearly visible in the fox image, the model looks at just the ears in this concept.}
\label{fig:fox_dog_ear}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{other_classes/3.JPG}
\caption{[Best viewed in color] The bulges on the back and front of the horse make the region highlighted in the explanation for the german shepherd class look like ears (based on color). Since the concept itself is also of the ear and eye region, it contributes towards the german shepherd probability in this image.}
\label{fig:horse_dog_color}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{other_classes/4.JPG}
\caption{[Best viewed in color] In the concept in the first row, the model is looking at the background; the background of the zebra image and that of the images for the actual concept are quite similar. In the concept in the second row, the model is looking at the stripes region of the zebra image, and the concept is also the stripes on the tiger.}
\label{fig:zebra_tiger}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{other_classes/5.JPG}
\caption{[Best viewed in color] The model is looking at the body of the leopard in the explanation for the wolf class. The concept itself also represents the body of the wolf.}
\label{fig:leopard_wolf_body}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{other_classes/6.JPG}
\caption{[Best viewed in color] The model is looking at the background. The background of the zebra image and that of the images for the actual concept are quite similar.}
\label{fig:zebra_background}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{other_classes/7.JPG}
\caption{[Best viewed in color] The model is looking at the background. The background of the tiger image and that of the images for the actual concept are quite similar.}
\label{fig:tiger_background}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{other_classes/8.JPG}
\caption{[Best viewed in color] The model is looking at the face of the tiger in the explanation for the wolf class. The concept itself also represents the face region of the wolf.}
\label{fig:tiger_wolf_face}
\end{figure}
\section{Stability}
Stability of an interpretation refers to the visual comparison of the learned concepts across images of the same class with varying orientation, size, or position. The explanations of such images should be similar as well, i.e., the learned concepts should be consistent: given the saliency maps of a few such images, we should be able to understand the underlying concept easily. Figures \ref{fig:outputIllustration1} and \ref{fig:outputIllustration2} show that our learned concepts have very high stability. We also demonstrate that the concept embeddings of a particular concept for images with varying orientation, size, or position are very close to each other. We take a small set of 10 images from the fox class and randomly choose 5 concepts of this class. For each concept we calculate the pairwise Euclidean distances between the concept embeddings of the images. The results are shown in Figure \ref{fig:eucl_dis} in the form of a distance matrix with a block-diagonal structure. Let $c_i$ denote the concept and $f_j$ denote the image; the axes of the matrix are then ordered as $[(c_1,f_1),\ldots,(c_1,f_{10}),(c_2,f_1),\ldots,(c_2,f_{10}),\ldots,(c_5,f_1),\ldots,(c_5,f_{10})]$. As can be seen, the distances within the diagonal blocks are smaller than the off-diagonal distances. This shows that the embeddings of similar (dissimilar) concepts are closer (farther).
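The pairwise-distance computation behind Figure \ref{fig:eucl_dis} can be sketched as follows. This is a minimal NumPy illustration with synthetic embeddings: the 5-concepts-by-10-images ordering mirrors the description above, but the embedding dimension, noise level, and all names are hypothetical.

```python
import numpy as np

def pairwise_distance_matrix(embeddings):
    """Pairwise Euclidean distances between rows of `embeddings`.

    embeddings: (n, d) array, one concept embedding per row.
    Returns an (n, n) symmetric distance matrix with zeros on the diagonal.
    """
    sq = np.sum(embeddings ** 2, axis=1)
    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    d2 = sq[:, None] + sq[None, :] - 2.0 * embeddings @ embeddings.T
    return np.sqrt(np.maximum(d2, 0.0))

# Synthetic stand-in: 5 concepts x 10 images -> 50 embeddings, ordered as
# [(c1,f1)..(c1,f10), (c2,f1)..(c2,f10), ...], matching the figure's axes.
rng = np.random.default_rng(0)
centers = rng.normal(size=(5, 1, 8))                      # one center per concept
emb = (centers + 0.05 * rng.normal(size=(5, 10, 8))).reshape(50, 8)
D = pairwise_distance_matrix(emb)
```

With embeddings clustered per concept, the 10x10 diagonal blocks of `D` come out much smaller than the off-diagonal blocks, which is exactly the pattern reported for the real concept embeddings.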
\begin{figure}[H]
\centering
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/stab/0_0_0_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/0_0_1_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/0_0_2_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/0_0_3_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/0_0_5_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/0_0_6_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/0_0_8_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/0_0_9_m_.png}
\caption{Fox - Ear}
\label{fig:ear_fox}
\end{subfigure}
\\
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/stab/0_7_0_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/0_7_1_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/0_7_4_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/0_7_5_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/0_7_6_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/0_7_7_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/0_7_8_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/0_7_9_m_.png}
\caption{Fox - Background}
\label{fig:background_fox}
\end{subfigure}
\\
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/stab/1_7_0_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/1_7_1_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/1_7_2_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/1_7_3_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/1_7_4_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/1_7_5_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/1_7_6_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/1_7_8_m_.png}
\caption{German Shepherd - Eyes}
\label{fig:eyes_german_shepherd}
\end{subfigure}
\\
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/stab/1_9_0_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/1_9_1_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/1_9_2_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/1_9_3_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/1_9_4_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/1_9_5_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/1_9_8_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/1_9_9_m_.png}
\caption{German Shepherd - Face}
\label{fig:face_german_shepherd}
\end{subfigure}
\\
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/stab/2_3_0_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/2_3_1_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/2_3_2_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/2_3_3_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/2_3_4_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/2_3_5_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/2_3_6_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/2_3_8_m_.png}
\caption{Horse - Crest}
\label{fig:crest_horse}
\end{subfigure}
\\
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/stab/2_8_2_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/2_8_3_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/2_8_4_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/2_8_5_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/2_8_6_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/2_8_7_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/2_8_8_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/2_8_9_m_.png}
\caption{Horse - Legs}
\label{fig:legs_horse}
\end{subfigure}
\\
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/stab/3_2_0_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/3_2_1_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/3_2_3_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/3_2_4_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/3_2_5_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/3_2_6_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/3_2_8_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/3_2_9_m_.png}
\caption{Leopard - Texture}
\label{fig:texture_leopard}
\end{subfigure}
\\
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/stab/3_6_0_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/3_6_1_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/3_6_2_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/3_6_5_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/3_6_6_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/3_6_7_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/3_6_8_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/3_6_9_m_.png}
\caption{Leopard - Body}
\label{fig:body_leopard}
\end{subfigure}
\\
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/stab/4_4_0_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/4_4_1_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/4_4_2_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/4_4_3_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/4_4_4_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/4_4_5_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/4_4_7_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/4_4_8_m_.png}
\caption{Lion - Body and Background}
\label{fig:body_background_lion}
\end{subfigure}
\\
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/stab/4_8_2_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/4_8_3_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/4_8_4_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/4_8_5_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/4_8_6_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/4_8_7_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/4_8_8_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/4_8_9_m_.png}
\caption{Lion - Face}
\label{fig:face_lion}
\end{subfigure}
\caption{[Best viewed in color] Visualization of concepts across multiple images of the same class}
\label{fig:outputIllustration1}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/stab/7_3_0_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/7_3_1_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/7_3_2_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/7_3_3_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/7_3_4_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/7_3_5_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/7_3_6_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/7_3_7_m_.png}
\caption{Tiger - Body and Background }
\label{fig:body_background_tiger}
\end{subfigure}
\\
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/stab/7_6_0_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/7_6_1_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/7_6_2_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/7_6_3_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/7_6_4_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/7_6_5_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/7_6_7_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/7_6_9_m_.png}
\caption{Tiger - Face \& Upper Body}
\label{fig:face_upper_body_tiger}
\end{subfigure}
\\
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/stab/8_7_0_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/8_7_1_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/8_7_2_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/8_7_3_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/8_7_4_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/8_7_5_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/8_7_6_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/8_7_7_m_.png}
\caption{Wolf - Legs }
\label{fig:legs_wolf}
\end{subfigure}
\\
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/stab/8_9_1_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/8_9_3_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/8_9_4_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/8_9_5_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/8_9_6_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/8_9_7_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/8_9_8_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/8_9_9_m_.png}
\caption{Wolf - Muzzle}
\label{fig:muzzle_wolf}
\end{subfigure}
\\
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/stab/9_2_2_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/9_2_3_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/9_2_4_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/9_2_5_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/9_2_6_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/9_2_7_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/9_2_8_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/9_2_9_m_.png}
\caption{Zebra - Background}
\label{fig:background_zebra}
\end{subfigure}
\\
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/stab/9_7_0_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/9_7_1_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/9_7_2_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/9_7_3_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/9_7_4_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/9_7_5_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/9_7_6_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/stab/9_7_7_m_.png}
\caption{Zebra - Stripes}
\label{fig:stripes_zebra}
\end{subfigure}
\caption{[Best viewed in color] Visualization of concepts across multiple images of the same class}
\label{fig:outputIllustration2}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.45\textwidth]{images/draft4.png}
\caption{[Best viewed in color] Pairwise Euclidean distance between concept embeddings}
\label{fig:eucl_dis}
\end{figure}
\section{Comparison of robustness}
Robustness is an important desideratum of an explanation. The explanation generated by any interpretability method should be robust to local perturbations of the input image. Figure \ref{fig:robustness_exp} shows that this is not the case for popular interpretability methods; even adding minimal noise to the input introduces visible changes in the explanations. We formally quantify this evaluation criterion as the intersection over union (IoU) between the explanations of an image and its perturbed variant:
\begin{equation}
R(x_{i}) = IoU( f_{expl}(x_{i}) , f_{expl}(\tilde{x}_{i}) )
\end{equation}
Here $x_i$ is an input image and $\tilde{x}_i$ is the perturbed image. $f_{expl}$ refers to the explanation function whose output is the saliency map of the interpretation method. The saliency maps are thresholded to create binary maps before calculating the IoU. We perform robustness experiments with various perturbation types, including variations in brightness, contrast, random noise, and rotation. For each type we define a range of values describing the intensity of the perturbation, e.g., the standard deviation of the noise distribution added to an image, the delta by which brightness/contrast is increased, or the angle of rotation. We use a wide range of threshold values in $[0.3, 0.7]$ to generate the binary maps and compare the robustness of three approaches: MACE, GradCAM \cite{gradCAM}, and GradCAM++ \cite{grad_cam_plus_plus}.
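The robustness score defined above can be computed in a few lines of NumPy. This is a minimal sketch: the thresholding and IoU follow the definition of $R(x_i)$, while the example saliency maps and the convention for two empty maps are assumptions for illustration.

```python
import numpy as np

def saliency_iou(map_a, map_b, threshold=0.5):
    """IoU between two saliency maps after binarising at `threshold`.

    Maps are assumed normalised to [0, 1]. This mirrors
    R(x_i) = IoU(f_expl(x_i), f_expl(x~_i)).
    """
    a = map_a >= threshold
    b = map_b >= threshold
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both binary maps empty: treat as perfectly consistent
    return np.logical_and(a, b).sum() / union

# Toy example: a random saliency map and a slightly perturbed copy.
rng = np.random.default_rng(1)
sal = rng.random((7, 7))
perturbed = np.clip(sal + 0.05 * rng.normal(size=(7, 7)), 0.0, 1.0)
score = saliency_iou(sal, perturbed, threshold=0.5)
```

A robust method keeps `score` close to 1 across perturbation intensities and thresholds; sweeping `threshold` over $[0.3, 0.7]$ reproduces the comparison in Figure \ref{fig:robustness_exp}.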
\begin{figure}[H]
\centering
\begin{subfigure}{\textwidth}
\includegraphics[width=0.24\linewidth]{images/rob/2_4.png}%
\includegraphics[width=0.24\linewidth]{images/rob/3_4.png}%
\includegraphics[width=0.24\linewidth]{images/rob/4_4.png}%
\includegraphics[width=0.24\linewidth]{images/rob/5_4.png}
\caption{Threshold = 0.4}
\end{subfigure}
\\
\begin{subfigure}{\textwidth}
\includegraphics[width=0.24\linewidth]{images/rob/2_5.png}%
\includegraphics[width=0.24\linewidth]{images/rob/3_5.png}%
\includegraphics[width=0.24\linewidth]{images/rob/4_5.png}%
\includegraphics[width=0.24\linewidth]{images/rob/5_5.png}
\caption{Threshold = 0.5}
\end{subfigure}
\\
\begin{subfigure}{\textwidth}
\includegraphics[width=0.24\linewidth]{images/rob/2_6.png}%
\includegraphics[width=0.24\linewidth]{images/rob/3_6.png}%
\includegraphics[width=0.24\linewidth]{images/rob/4_6.png}%
\includegraphics[width=0.24\linewidth]{images/rob/5_6.png}
\caption{Threshold = 0.6}
\end{subfigure}
\includegraphics[width=0.5\textwidth]{images/rob/leg.JPG}
\caption{[Best viewed in color] Intersection Over Union for thresholded saliency maps of original and perturbed image }
\label{fig:robustness_exp}
\end{figure}
\section{Places Experiments}
The visualizations for the VGG model, trained on Places365, are shown in Figure \ref{fig:places_365_concepts}. We observed that most of the concepts were pruned, leaving just 2--3 useful concepts per class. One reason could be that the classes in this dataset are quite distinct from each other and do not require a large set of concepts to make the prediction.
\begin{figure}[H]
\centering
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/places/0_0/0_0_0_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/0_0/0_0_1_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/0_0/0_0_2_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/0_0/0_0_3_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/0_0/0_0_4_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/0_0/0_0_5_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/0_0/0_0_6_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/0_0/0_0_7_m_.png}
\caption{Airfield - Airplane concept}
\label{fig:airfield_airplane}
\end{subfigure}
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/places/0_2/0_2_2_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/0_2/0_2_3_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/0_2/0_2_4_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/0_2/0_2_5_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/0_2/0_2_6_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/0_2/0_2_7_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/0_2/0_2_8_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/0_2/0_2_9_m_.png}
\caption{Airfield - Background}
\label{fig:airfield_background}
\end{subfigure}
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/places/1_1/1_1_0_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/1_1/1_1_1_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/1_1/1_1_2_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/1_1/1_1_3_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/1_1/1_1_4_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/1_1/1_1_5_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/1_1/1_1_6_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/1_1/1_1_7_m_.png}
\caption{Airplane Cabin - Seats in the airplane cabin}
\label{fig:airplane_cabin_seats}
\end{subfigure}
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/places/2_0/2_0_0_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/2_0/2_0_1_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/2_0/2_0_2_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/2_0/2_0_3_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/2_0/2_0_4_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/2_0/2_0_5_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/2_0/2_0_7_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/2_0/2_0_6_m_.png}
\caption{Airplane Cabin - Lights on top in the airplane cabin}
\label{fig:airplane_cabin_top_lights}
\end{subfigure}
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/places/4_6/4_6_0_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/4_6/4_6_1_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/4_6/4_6_2_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/4_6/4_6_4_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/4_6/4_6_5_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/4_6/4_6_6_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/4_6/4_6_7_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/4_6/4_6_8_m_.png}
\caption{Alley - Region with light coming from the end of the alley}
\label{fig:alley_light_region}
\end{subfigure}
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/places/4_2/4_2_0_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/4_2/4_2_2_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/4_2/4_2_3_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/4_2/4_2_4_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/4_2/4_2_5_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/4_2/4_2_7_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/4_2/4_2_8_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/4_2/4_2_9_m_.png}
\caption{Alley - Walls of houses}
\label{fig:alley_walls}
\end{subfigure}
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/places/5_3/5_3_1_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/5_3/5_3_2_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/5_3/5_3_3_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/5_3/5_3_4_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/5_3/5_3_5_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/5_3/5_3_7_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/5_3/5_3_8_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/5_3/5_3_9_m_.png}
\caption{Amphitheater - The curvature and steps in the amphitheater}
\label{fig:amphitheater_curvature_steps}
\end{subfigure}
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/places/6_0/6_0_2_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/6_0/6_0_3_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/6_0/6_0_4_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/6_0/6_0_5_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/6_0/6_0_6_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/6_0/6_0_7_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/6_0/6_0_8_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/places/6_0/6_0_9_m_.png}
\caption{Amusement Arcade - Games region in the amusement arcade}
\label{fig:amusement_arcade_games}
\end{subfigure}
\caption{[Best viewed in color] Concepts for Places365 Dataset.}
\label{fig:places_365_concepts}
\end{figure}
\section{ResNet Experiments}
The visualizations for the ResNet model, trained on AWA2, are shown in Figure \ref{fig:resnet_outputs}. We observed more head and body concepts for the ResNet model than for the VGG model. A possible reason is that ResNet is a much deeper network than VGG, so the activation maps of its last convolution layer predominantly contain high-level body concepts.
\begin{figure}[H]
\centering
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/resnet/fox_head/0_3_0_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/fox_head/0_3_1_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/fox_head/0_3_3_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/fox_head/0_3_4_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/fox_head/0_3_5_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/fox_head/0_3_6_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/fox_head/0_3_7_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/fox_head/0_3_8_m_.png}%
\caption{Fox - Head}
\label{fig:fox_head}
\end{subfigure}
\\
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/resnet/zebra_legs/9_0_0_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/zebra_legs/9_0_2_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/zebra_legs/9_0_3_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/zebra_legs/9_0_6_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/zebra_legs/9_0_7_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/zebra_legs/9_0_8_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/zebra_legs/9_3_7_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/zebra_legs/9_3_2_m_.png}%
\caption{Zebra-Legs}
\label{fig:zebra_legs}
\end{subfigure}
\\
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/resnet/lion_face/4_5_2_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/lion_face/4_5_3_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/lion_face/4_5_4_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/lion_face/4_5_8_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/lion_face/4_5_9_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/lion_face/4_6_0_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/lion_face/4_6_1_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/lion_face/4_6_3_m_.png}%
\caption{Lion-Head}
\label{fig:lion_head}
\end{subfigure}
\\
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/resnet/horse_body/2_9_0_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/horse_body/2_9_3_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/horse_body/2_9_4_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/horse_body/2_9_5_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/horse_body/2_9_6_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/horse_body/2_9_7_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/horse_body/2_9_8_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/horse_body/2_9_9_m_.png}%
\caption{Horse-Body}
\label{fig:horse_body}
\end{subfigure}
\\
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/resnet/wolf_head/8_3_0_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/wolf_head/8_3_3_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/wolf_head/8_3_4_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/wolf_head/8_3_5_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/wolf_head/8_3_6_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/wolf_head/8_3_7_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/wolf_head/8_3_8_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/wolf_head/8_3_9_m_.png}%
\caption{Wolf-Head}
\label{fig:wolf_head}
\end{subfigure}
\\
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/resnet/tiger_body/7_1_2_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/tiger_body/7_1_3_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/tiger_body/7_1_4_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/tiger_body/7_1_6_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/tiger_body/7_1_7_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/tiger_body/7_1_8_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/tiger_body/7_1_9_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/tiger_body/7_1_5_m_.png}%
\caption{Tiger-Body}
\label{fig:tiger_body}
\end{subfigure}
\\
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/resnet/fox_back/0_6_0_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/fox_back/0_6_1_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/fox_back/0_6_2_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/fox_back/0_6_6_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/fox_back/0_6_4_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/fox_back/0_6_7_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/fox_back/0_6_8_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/fox_back/0_6_9_m_.png}%
\caption{Fox-Back}
\label{fig:fox_back}
\end{subfigure}
\\
\begin{subfigure}{\textwidth}
\includegraphics[width=0.12\linewidth]{images/resnet/cat_face/5_0_0_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/cat_face/5_0_1_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/cat_face/5_0_2_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/cat_face/5_0_3_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/cat_face/5_0_4_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/cat_face/5_0_5_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/cat_face/5_0_7_m_.png}%
\hfill
\includegraphics[width=0.12\linewidth]{images/resnet/cat_face/5_0_9_m_.png}%
\caption{Cat-Face}
\label{fig:cat_face}
\end{subfigure}
\caption{ResNet Outputs}
\label{fig:resnet_outputs}
\end{figure}
\newpage
\section{Ablation Study}
In order to justify the need for both the $\mathcal{L}^O$ and $\mathcal{L}^D$ losses, we compared the faithfulness of the approach with and without these losses. According to our hypothesis, we require the $\mathcal{L}^O$ loss to help us recreate the output of the first dense layer of a classifier, so as to keep our concept embeddings faithful to the classifier, and the $\mathcal{L}^D$ loss to avoid inconsistency with the final output of the classifier.
We train two models, one without the $\mathcal{L}^O$ loss and the other without the $\mathcal{L}^D$ loss. We train both models for 50 epochs, setting the learning rate to $10^{-4}$.
In Figure \ref{fig:ablation}, we see a significant difference in the drop in true class probability when either of the two losses is removed, showing that both losses are necessary for the faithfulness of our MACE unit.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth,height=\textheight,keepaspectratio]{abalation.png}
\caption{[Best viewed in color] Effect of $\mathcal{L}^O$ and $\mathcal{L}^D$ losses on model faithfulness}
\label{fig:ablation}
\end{figure}
\newpage
\bibliographystyle{plainnat}
\section{Decomposition of a manifold}
\subsection{Quantitative Huygens principle}
We introduce the set of points, chosen to lie on the nodal set
\[
\gener{0}=\{ \dian{C}_i^0\}_{i=1}^N \subset \ensuremath{{\bf M}^n}
\]
and lying at distance
\[
d(\dian{C}_i^0,\dian{C}_j^0)\sim \gedi{0}\sim\frac{1}{\sqrt{\ensuremath{\lambda}}}
\]
This is possible since in every geodesic ball of radius
\(O(\ensuremath{\lambda}^{-\frac12})\) there is always a zero.
Furthermore we define geodesic balls of radii \(r_i\),
\(\mpalla{n}{\poir{0}{i}}\), bounded by geodesic
spheres \( \sphere{n-1}{\poir{0}{i}} \).
These balls are taken to overlap in \((n+1)\)-tuples: we denote by
\(\epik{0;i_1\dots i_k}\) the overlap region of the balls:
\[
\epik{0;i_1\dots i_k}=\bigcap_{j=1}^k \mpalla{n}{\poir{0}{i_j}}
\]
for \( k=2,\dots,n+1 \). These domains are bounded by
spherical regions denoted as \(\edra{0;i_1\dots i_k;\ell}\),
and have interior curvature data
\[
\ensuremath{{\mathrm Ric}}^{0;i_1\dots i_k}
\]
and face second fundamental form
\[
\ensuremath{\cancel{h}}^{0;i_1\dots i_k;\ell},\,\,\,\, \ensuremath{\cancel{k}}^{0;i_1\dots i_k;\ell},
\]
We call these sets {\it geodesic pixels}.
We produce generations of such pixels after the introduction of new
centers and arrive at the collection of pixels after generation \(j\):
\[
\epik{j;i_1\dots i_k}
\]
Its boundary consists of the elementary wave fronts
\(\edra{j;i_1,\dots,i_k;\ell}\) and curvature data:
\[
\ensuremath{{\mathrm Ric}}^{j;i_1\dots i_k},\,\,\, \ensuremath{\cancel{h}}^{j;i_1\dots i_k}, \,\,\,\,
\ensuremath{\cancel{k}}^{j;i_1\dots i_K},
\]
It is written in the form
\[
\partial\epik{j;i_1\dots i_k}=
\bigcup_{j',k',\ell'} \edra{j';i_1\dots i_{k'};\ell'}
\]
The faces \( \edra{j';i_1\dots i_{k'};\ell'} \) are called
{\it elementary wave fronts} (EWF). Each pixel defines homothetic EWF
spanned by the tubular neighbourhoods of the elementary wave fronts:
\[
\shell{\sjes{j}{k}{\ell}}{}=
\ensuremath{I_{r,\varepsilon}}\times \edra{\ell;i_1\dots i_k},\quad \ensuremath{I_{r,\varepsilon}}=
((1-\varepsilon)r,(1+\varepsilon)r)\
\]
We introduce the localized tension \({\mathbb T}(h;\vartheta)\) of an EWF
defined for a smooth test function
\(\vartheta\), \(\ensuremath{\mbox{supp}}{\vartheta}\subset\edra{\ell;i_1\dots i_k}\), by:
\[
{\mathbb T}^j(h;\vartheta)=\ensuremath{\int_{{\bf F}}} \vartheta|\klgs^j h|^2
\]
where \(\ensuremath{\cancel{h}}\) is the mean curvature of the (EWF).
We introduce two numbers
\[
r(\ensuremath{ {\bf P} })=\sup_{\zeta\in C_0^\infty(\ensuremath{ {\bf P} })}
\left(\frac{{\mathbb E}^{(1)}(\ensuremath{{\mathrm R}},\ensuremath{{\mathrm B}};\zeta)}{{\mathbb E}^0(\ensuremath{{\mathrm R}};\zeta)}\right)
\]
and
\[
t(\ensuremath{ {\bf F} })=\sup_{\theta\in C_0^\infty(\ensuremath{ {\bf F} })}
\left(\frac{{\mathbb T}^1(h;\theta)}{{\mathbb T}^0(h;\theta)}\right)
\]
Let
\(\eta_\ell,\mu\) be positive constants. We say that a pixel \(\ensuremath{ {\bf P} }\)
with boundary consisting of EWF
\[
\partial\ensuremath{ {\bf P} }=\bigcup_{\ell=1}^{n+1}\ensuremath{ {\bf F} }_\ell
\]
satisfies an \((\eta_\ell,\mu) \) condition if
\[
t(\ensuremath{ {\bf F} }_\ell)\leq \eta_\ell,\qquad r(\ensuremath{ {\bf P} })\leq \mu
\]
Let \((\eta_{j';i_1\dots i_{k'};\ell},\mu_j,\ensuremath{\epsilon}_{\ell})\) be
positive constants. We assume that the geodesic pixels
\(\brick{\bjes{j}{\ell}}\) are selected so that their EWF
\(\face{\fjes{j'}{k'}{\ell'}} \)
satisfy an \((\eta_{j';i_1\dots i_{k'}},\ensuremath{\epsilon}_{\ell})\)
estimate, while \((\mu_{j},\ensuremath{\epsilon}_{\ell})\) are the
parameters in the curvature estimate. This set of pixels has
compact closure in a neighbourhood of the zero section of \(TM\). In the
sequel we will establish the equations defining this set as
consequences of the above integral inequalities.
\paragraph{The structure of the metric in geodesic polar coordinates.}
Every geodesic pixel is centered at some point \(\dian{C}^j_i\) obtained in the
\(j\)-th generation of center selection. We consider geodesic coordinates
from the neighboring pixels. Therefore let
\( \mpalla{n}{\poir{j}{i}}\) be a geodesic ball centered at the point
\( \dian{C}^j_i\) and introduce polar coordinates through the Gauss lemma.
The metric is written then as:
\[
g=dr^2+\ensuremath{\cancel{\gamma}}(r)
\]
where \( \ensuremath{\cancel{\gamma}}(r)\) is a riemannian metric on the geodesic sphere
\( \sphere{n-1}{\poir{j}{i}} =\partial \mpalla{n}{\poir{j}{i}} \)
with second fundamental form and mean curvature respectively \( \ensuremath{\cancel{k}},\ensuremath{\cancel{h}} \).
Accordingly we have the first and second variation equations
for the metric \(\gamma\). We denote the radial derivative by \(\paraf{}\)
and the angular ones by \(;j\); in this section the symbol
\(\klgs\) denotes collectively the angular derivatives:
\begin{subequations}\label{variations}
\begin{align}
\paraf{\gamma} &= 2k, \label{first}\\
\paraf{k_{ij}} &= -k_{im}k_j^m-\ensuremath{{\mathrm R}}_{0i0j}\label{second}
\end{align}
\end{subequations}
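As an elementary sanity check of \eqref{variations} (not part of the argument), the first variation equation can be verified numerically for geodesic circles on the unit 2-sphere, where \(\gamma(r)=\sin^2 r\), \(k=\sin r\cos r\) and the mean curvature is \(h=\cot r\). The functions below encode this concrete example only, not objects constructed in the paper.

```python
import math

def gamma(r):
    # Induced metric coefficient on the geodesic circle of radius r
    # in the unit 2-sphere: gamma(r) = sin(r)^2.
    return math.sin(r) ** 2

def k(r):
    # Second fundamental form coefficient: k = (1/2) d(gamma)/dr = sin r cos r.
    return math.sin(r) * math.cos(r)

def first_variation_residual(r, eps=1e-6):
    # Central finite difference of d(gamma)/dr, compared with 2k.
    dgamma = (gamma(r + eps) - gamma(r - eps)) / (2 * eps)
    return abs(dgamma - 2 * k(r))

def h(r):
    # Mean curvature of the geodesic circle: h = k / gamma = cot r.
    return k(r) / gamma(r)
```

The same data feed the contracted Riccati equation used further below.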
Then setting:
\[
\cancel{\ensuremath{\upsilon}}=\frac12\log\left(\mbox{det}(\gamma)\right)
\]
\[
\sigma=R_{i0j0}k^{ij}+k_m^ik_j^mk_i^j,
\]
\[
\kappa=\kappa_0 =|k|
\]
we infer that
\begin{equation}\label{morefvsv}
\begin{split}
\paraf{\cancel{\ensuremath{\upsilon}}}=h \\
\kappa\paraf{\kappa}=\sigma
\end{split}
\end{equation}
Based on Newton's identities for symmetric polynomials we get the
following inequality for \(\sigma\):
\[
(3-C_{n,3})h\kappa^3 -h^3 -|\ensuremath{{\mathrm Rm}}|\kappa
\leq \sigma \leq \frac{C_{n,3}+3}{3}h\kappa^2-\frac12h^3+|\ensuremath{{\mathrm Rm}}|\kappa
\]
or
\[
-(C_{n,3}+2n)\kappa^3 -|\ensuremath{{\mathrm Rm}}|\kappa
\leq \sigma \leq \frac{C_{n,3}+3n}{3}\kappa^3+|\ensuremath{{\mathrm Rm}}|\kappa
\]
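The sharp constants \(C_{n,3}\) come from Newton's identities; a cruder consequence, \(|\sigma|\leq |k|^3+|\ensuremath{{\mathrm Rm}}||k|\) in Frobenius norms, follows from \(\sum_i|\lambda_i|^3\leq(\sum_i\lambda_i^2)^{3/2}\) for symmetric \(k\) together with Cauchy--Schwarz. This crude outer bound (only; not the sharp constants) can be checked on random symmetric matrices standing in for \(R_{i0j0}\) and \(k\):

```python
import random

def frob(M):
    # Frobenius norm of a square matrix given as a list of rows.
    return sum(x * x for row in M for x in row) ** 0.5

def sym(n, rng):
    # Random symmetric n x n matrix with entries in [-1, 1].
    A = [[rng.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
    return [[(A[i][j] + A[j][i]) / 2 for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][m] * B[m][j] for m in range(n)) for j in range(n)]
            for i in range(n)]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

def sigma(R, k):
    # sigma = R_{i0j0} k^{ij} + tr(k^3), with R standing in for R_{i0j0}.
    n = len(k)
    k2 = matmul(k, k)
    return sum(R[i][j] * k[i][j] for i in range(n) for j in range(n)) \
        + trace(matmul(k2, k))

def crude_bound_holds(n=3, trials=200, seed=0):
    # Check |sigma| <= |k|^3 + |R||k| on random symmetric samples.
    rng = random.Random(seed)
    for _ in range(trials):
        R, k = sym(n, rng), sym(n, rng)
        if abs(sigma(R, k)) > frob(k) ** 3 + frob(R) * frob(k) + 1e-12:
            return False
    return True
```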
\paragraph{The structure equations}
The Gauss equations that relate the curvature of
\(\gamma, \perio{R}\) to the ambient curvature
\begin{subequations}\label{gauss}
\begin{align}
\perio{R}_{imjn}+(k_{ij}k_{mn}-k_{in}k_{jm}) &= \ensuremath{{\mathrm R}}_{imjn},\label{gauss1} \\
\perio{R}_{ij}+k_{ij}h-k_{im}k^m_j & = \ensuremath{{\mathrm R}}_{ij}, \label{gauss2}\\
\perio{R}+h^2-k^2 &= \ensuremath{{\mathrm R}}-\ensuremath{{\mathrm R}}_{00},\label{gauss3}
\end{align}
\end{subequations}
The Codazzi equations
\begin{eqnarray*}
\klgs_ik_{jm}-\klgs_jk_{im}= R_{m0ij},
\end{eqnarray*}
constitute a Hodge system:
\begin{equation}\label{hodge}
\begin{split}
\ensuremath{\cancel{\mbox{curl}}}(k)_{ijm}=R_{m0ij}\\
\ensuremath{\cancel{\mbox{div}}}(k)_i-\klgs_i h=R_{0i}
\end{split}
\end{equation}
where
\[
\ensuremath{\cancel{\mbox{curl}}}(U)_{ijk}=\klgs_kU_{ij}-\klgs_jU_{ik},\,\,\,\,\,\ensuremath{\cancel{\mbox{div}}}(U)_i=\klgs_jU^j_i
\]
\subsection{The second fundamental form and the mean curvature of the fronts}
\subsubsection{Harnack on the slice}
The Gauss--Codazzi equations for the spherical front give that
\begin{equation}
\begin{split}
|\klgs h|^2 & = \ensuremath{\cancel{\mbox{div}}}(k)_ih_i- R_{0i}h_i,\\
|\ensuremath{\cancel{\mbox{div}}}(k)|^2 & =\ensuremath{\cancel{\mbox{div}}}(k)_ih_i+R_{0i}h_i
\end{split}
\end{equation}
Elaborating the preceding identities with Young's inequality for suitable
\(p\), we obtain an \(h\)-growth inequality in the spherical front domain
\( \ensuremath{ {\bf F} } =\face{\fjes{j}{k}{\ell}} \) and cut-off \(\vartheta\):
\[
\ensuremath{\int_{{\bf F}}} |\ensuremath{\cancel{\mbox{div}}}(\vartheta k)|^2 \leq \frac{\eta\ensuremath{\epsilon}^p}{p}\ensuremath{\int_{{\bf F}}}
\vartheta^2|h|^{2p}+
\frac{(p-1)
\ogkos{ \mathscr{F}_{k,\ell}}^{\frac{p+1}{2p}}}{p\ensuremath{\epsilon}^{\frac{p}{p-1}}}
\ensuremath{\int_{{\bf F}}} |\ensuremath{\cancel{\mbox{div}}}(\vartheta k)|^2
\]
or, choosing \(\ensuremath{\epsilon}=\left(\frac{2(p-1)}{p}\right)^{1-\frac1p}
\ogkos{\mathscr{F}_{k,\ell}}^{\frac12-\frac{1}{p^2}}\), we get that
\[
\ensuremath{\int_{{\bf F}}} |\ensuremath{\cancel{\mbox{div}}}(\vartheta k)|^2\leq \eta\left(\frac2p\right)^p(p-1)^{p-1}
\ogkos{\mathscr{F}_{k,\ell}}^{\frac{p^2-1}{2p}}
\ensuremath{\int_{{\bf F}}} \vartheta^2 h^{2p}
\]
We recall the Sobolev inequality from \cite{lgmt}
for the case of EWF, \(\ensuremath{ {\bf F} }\subset \ensuremath{ {\bf W} }, r=\ensuremath{\frac{n-1}{n-p-1}}\):
\[
\left( \ensuremath{\int_{{\bf F}}} |U|^r \right)^{\frac1r} \leq C \ensuremath{\int_{{\bf F}}} |\klgs U|+|h||U|
\]
Starting from
\[
\left(\ensuremath{\int_{{\bf F}}} |U|^{\ell r} \right)^{\frac1r} \leq \ell
C\ensuremath{\int_{{\bf F}}} |U|^{\ell-1} |\klgs U|+|h||U|^\ell
\]
and applying H\"older's inequality
\begin{equation}\label{sobsli}
\begin{split}
\ensuremath{\int_{{\bf F}}} |U|^{\ell-1}|\klgs U|& \leq
\left(\ogkos{\ensuremath{ {\bf F} }}\right)^{1-\frac1p}\left(\ensuremath{\int_{{\bf F}}}
|U|^{p(\ell-1)}\right)^{\frac1p}
\left(\ensuremath{\int_{{\bf F}}} |\klgs U|^{\frac{p}{p-1}}\right)^{1-\frac1p} \\
\ensuremath{\int_{{\bf F}}}|h||U|^\ell & \leq \left(\ensuremath{\int_{{\bf F}}}
|U|^{q\ell}\right)^{1/q}\left(\ensuremath{\int_{{\bf F}}}|h|^{\frac{q}{q-1}}\right)^{1-\frac1q}
\end{split}
\end{equation}
where \(q<2\). The inequalities on the slice
\[
\ensuremath{\int_{{\bf F}}} |\klgs U|^2\leq C\ensuremath{\int_{{\bf F}}} |\ensuremath{\cancel{\mbox{div}}}(U)|^2+|\ensuremath{\cancel{\mbox{curl}}}(U)|^2+|\ensuremath{{\mathrm Rm}}||U|^2
\]
read for the localized second fundamental form, \(U=\zeta |k|\):
\[
\ensuremath{\int_{{\bf F}}} \zeta^2|\klgs k|^2\leq C\ensuremath{\int_{{\bf F}}} |\zeta|^2|\ensuremath{{\mathrm Ric}}|+\zeta|\klgs h|^2
+\left(|\klgs \zeta|^2+|\ensuremath{{\mathrm Rm}}|\zeta^2\right)|k|^2
\]
The last term on the right-hand side leads, after the application of
Young's inequality combined with Sobolev's inequality, to
\[
\ensuremath{\int_{{\bf F}}} \zeta^2|\klgs k|^2\leq C\ensuremath{\int_{{\bf F}}} |\zeta|^2|\ensuremath{{\mathrm Ric}}|+\zeta|\klgs h|^2
+\ensuremath{\int_{{\bf F}}}\left(|\klgs \zeta|^2+|\ensuremath{{\mathrm Rm}}|\right)^{\frac{n-1}{2}}
\]
We consider now the slice regions \(\ensuremath{ {\bf F} }_j\) determined
by the tension energy through the sequence of constants \(\{\eta_j\}\),
in which \(t(\ensuremath{ {\bf F} }_j)=\eta_j\). The Harnack inequality is proved through Moser iteration
on the domain with smooth boundary \( \ensuremath{ {\bf W} } \subset \ensuremath{ {\bf F} }\)
obtained by smoothing out the boundary of \(\ensuremath{ {\bf F} }_j\).
Therefore we exhaust the domain through the harmonic approximation of
the face defining function \(F,\harm{F}_0\):
\[
\ensuremath{ {\bf W} }_j(\eta)=\{\dian{x}\in{\bf F}:
(\theta-\theta^{j+1})\eta \leq
|\harm{F}_0(\dian{x})|\leq (1-\theta+\theta^j)\eta\}
\]
and then Harnack inequality takes the form:
\[
\eaf{{\bf W}_\infty}|h|\leq \ensuremath{{\mathit D}}(\eta,\eta_\ell)
\mkf{{\bf W}_\infty}|h|
\]
where the quantity \(\ensuremath{{\mathit D}}(\eta,\eta_\ell)>0\) is calculated in the appendix.
\paragraph{The growth of the tension integral}
The preceding estimates necessitate the derivation of
radial growth estimates for the tension integral
\[
T={\mathbb T}^1(h;\vartheta)
\]
We start by differentiating and proceed with the application of the
structural equations. We obtain the differential inequality
\begin{eqnarray*}
\paraf{T}\leq 2\left(\ensuremath{\int_{{\bf F}}} \vartheta^2|h||\klgs h|^2 +\ensuremath{\int_{{\bf F}}}
|\klgs h|^2\vartheta\left|\paraf{\vartheta}\right|+
\ensuremath{\int_{{\bf F}}} \vartheta^2|h^i|\left|\paraf{h_i}\right|\right)
\end{eqnarray*}
We obtain that
\[
\left|\paraf{h_{i}}\right|\leq 2|k||\klgs k|+|\klgs\ensuremath{{\mathrm R}}_{00}|
\]
Therefore we have that:
\[
\paraf{T}\leq \left(\sup_{\edra{}}|h|\right)T+
\left(\sup_{\edra{}}|k|\right)
\left(\ensuremath{\int_{{\bf F}}} |\klgs h|^2\right)^{1/2}\left(\ensuremath{\int_{{\bf F}}} |\klgs k|^2\right)^{1/2}
\]
The last term is majorised by
\[
\left(\ensuremath{\int_{{\bf F}}} |\klgs h|^2\right)^{1/2}\left(\ensuremath{\int_{{\bf F}}} |\klgs k|^2\right)^{1/2}\leq
CT+\ensuremath{\int_{{\bf F}}} \left(\zeta^2|\ensuremath{{\mathrm Ric}}|+|\ensuremath{{\mathrm Rm}}||\klgs \zeta|^2\right)
\]
Furthermore we have that:
\[
\sup_{\edra{}}|k|\leq c_1T^{\frac12}+c_2
\]
We select cutoffs satisfying for some \(\eta>0\)
\[
\paraf{|\klgs\vartheta|}\leq \eta|\klgs \vartheta|
\]
We arrive at the inequality
\[
\paraf{T}\leq c_1T^{3/2}+c_2, \quad c_i=c_i(\edra{},\ensuremath{\epsilon}),\,
i=1,2
\]
We conclude through the use of Young's inequality:
\[
\paraf{T}\leq c\left(T^2+1\right)
\]
and hence for \(r\leq \frac{9\pi}{20C}\):
\[
\left|T(r(1+\ensuremath{\epsilon}))-T(r(1-\ensuremath{\epsilon}))\right|\leq
\tan(Cr)\leq Cr
\]
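The trigonometric bound above reflects the ODE comparison \(T'=C(1+T^2)\), whose solutions satisfy \(\arctan T(b)-\arctan T(a)=C(b-a)\), i.e.\ \(\frac{T(b)-T(a)}{1+T(a)T(b)}=\tan(C(b-a))\). A small numerical illustration of this equality case (all constants chosen arbitrarily):

```python
import math

def integrate_riccati(C, T0, a, b, steps=4000):
    # RK4 integration of T' = C * (1 + T^2) on [a, b].
    h = (b - a) / steps
    T = T0
    for _ in range(steps):
        f = lambda y: C * (1 + y * y)
        k1 = f(T)
        k2 = f(T + h * k1 / 2)
        k3 = f(T + h * k2 / 2)
        k4 = f(T + h * k3)
        T += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return T

def tangent_identity_residual(C=0.5, T0=0.2, a=0.0, b=0.4):
    # (T(b) - T(a)) / (1 + T(a) T(b)) should equal tan(C (b - a)).
    Tb = integrate_riccati(C, T0, a, b)
    lhs = (Tb - T0) / (1 + T0 * Tb)
    return abs(lhs - math.tan(C * (b - a)))
```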
\subsubsection{Radial Harnack estimates}
In this section we derive the radial variation of the curvature quantities
in the radial interval \(\ensuremath{I_{r,\varepsilon}}=((1-\varepsilon) r,(1+\varepsilon)r)\)
relative to a given value of these quantities at \(r\).
We write \eqref{second} in contracted form:
\begin{eqnarray}
\paraf{h}=-k^2-\ensuremath{{\mathrm R}}_{00}
\end{eqnarray}
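As a sanity check (illustration only), the contracted equation holds explicitly for geodesic circles on the unit 2-sphere, where \(h=\cot r\), \(|k|^2=\cot^2 r\) and \(\ensuremath{{\mathrm R}}_{00}=1\), so that \(\paraf{h}=-\csc^2 r=-|k|^2-\ensuremath{{\mathrm R}}_{00}\):

```python
import math

def h(r):
    # Mean curvature of the geodesic circle of radius r on the unit 2-sphere.
    return math.cos(r) / math.sin(r)

def riccati_residual(r, eps=1e-6):
    # Central difference of h minus the right-hand side -|k|^2 - R_00,
    # with |k|^2 = h^2 (one principal curvature) and R_00 = 1 here.
    dh = (h(r + eps) - h(r - eps)) / (2 * eps)
    return abs(dh - (-h(r) ** 2 - 1.0))
```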
First we have that
\[
\paraf{h} \leq -\frac{1}{n-1} h^2-\ensuremath{{\mathrm R}}_{00}\Leftrightarrow h^2\leq
-(n-1)\paraf{h}-(n-1)\ensuremath{{\mathrm R}}_{00}
\]
We derive the differential inequality for \(k\):
\[
\paraf{|k|}\leq |k|^2+|\ensuremath{{\mathrm Rm}}|\leq (|k|^2+1)(|\ensuremath{{\mathrm Rm}}|+1)
\]
that is written after integration and elementary trigonometry as:
\[
\frac{||k(r(1+\ensuremath{\varepsilon}))|-|k(r(1-\ensuremath{\varepsilon}))||}{1+|k|(r(1+\ensuremath{\varepsilon}))|k|(r(1-\ensuremath{\varepsilon}))}
\leq \tan(Cr)
\]
for
\[
\max_{\ensuremath{I_{r,\varepsilon}}}\left(C(\ensuremath{\varepsilon})\ensuremath{\int_{{\bf K}}} |\ensuremath{{\mathrm Rm}}|\right),
\max_{\ensuremath{I_{r,\varepsilon}}}\left(|k(r(1+\ensuremath{\varepsilon}))|\cdot|k(r(1-\ensuremath{\varepsilon}))|\right)\leq C
\]
For \(r\leq\frac{9\pi}{20C}\):
\[
||k(r(1+\ensuremath{\varepsilon}))|-|k(r(1-\ensuremath{\varepsilon}))||\leq C'r
\]
The estimates for \(|\kl k|,|\kl^2k|\) follow from the elementary
differential inequalities:
\[
\paraf{|\kl k|}\leq |k||\kl k|+(|\kl \ensuremath{{\mathrm Rm}}|+|\ensuremath{{\mathrm Rm}}||k|)
\]
\[
\paraf{|\kl^2 k|}\leq C\left(|k||\kl^2 k|+|\kl k|^2+|\ensuremath{{\mathrm Rm}}||\kl k|+
|\kl \ensuremath{{\mathrm Rm}}||k|+|\kl^2\ensuremath{{\mathrm Rm}}|\right)
\]
This leads to the estimates:
\[
|\kl k|\leq \left[Cr^2\left(\ensuremath{\int_{{\bf K}}}|\ensuremath{{\mathrm Rm}}|\right)+
\ensuremath{\int_{{\bf K}}}|\kl \ensuremath{{\mathrm Rm}}|\right]e^{\frac{C^2r^2}{2}}
\]
Similarly
\[
|\kl^2 k|\leq \left[Cr^2\left(\ensuremath{\int_{{\bf K}}}|\ensuremath{{\mathrm Rm}}|\right)+\ensuremath{\int_{{\bf K}}}|\kl \ensuremath{{\mathrm Rm}}|\right]e^{\frac{C^2r^2}{2}}
\]
Let \(\chi\) be a cutoff with \(\ensuremath{\mbox{supp}}{(\chi)}\subset \ensuremath{I_{r,\varepsilon}}\) such that
\[
\varepsilon |\chi'| +\varepsilon^2|\chi''|\leq C
\]
as well as
\[
\mu_\varepsilon(r)=\ensuremath{\int_{\mathcal{I}_{r,\varepsilon}}} \chi h(r',\dian{\xi})dr'
\]
\begin{mcres}
The following estimates hold for \(r \leq \frac{1}{\sqrt{C_0}} \) and
are relative to a fixed value of \(h(r)\):
\begin{eqnarray*}
|\mu_\varepsilon| \leq (2\varepsilon r)^{1/2}
\left(|h|^{1/2} + (2\varepsilon r|C_0^0|)^{1/2}\right)\\
|\klgs \mu_\varepsilon|\leq 2\ensuremath{\varepsilon} r
\left(|\klgs h|+ 2(1+\ensuremath{\varepsilon})r c_1(|\ensuremath{{\mathrm Rm}}|,|\kl \ensuremath{{\mathrm Rm}}|)
\right)\\
|\klgs^2\mu_\varepsilon|\leq (2\varepsilon r)
\left(|\klgs^2h|+2r(1+\ensuremath{\varepsilon})c_2(|\ensuremath{{\mathrm Rm}}|,|\kl \ensuremath{{\mathrm Rm}}|,|\kl^2\ensuremath{{\mathrm Rm}}|)\right)
\end{eqnarray*}
\end{mcres}
\paragraph{The estimate of \(\mu_\varepsilon\)} We start by applying
the Cauchy--Schwarz inequality:
\[
|\mu_\varepsilon|\leq (2\varepsilon r)^{1/2}
\left(\ensuremath{\int_{\mathcal{I}_{r,\varepsilon}}} h^2\right)^{1/2}\leq ((n-1)\ensuremath{\varepsilon} r)^{1/2}
\left(|h(r(1-\ensuremath{\varepsilon}))-h(r(1+\ensuremath{\varepsilon}))|^{1/2}+(2\varepsilon C_0r)^{1/2}\right)
\leq (n-1)^{\frac12}r\ensuremath{\varepsilon}(|k|+C_0)
\]
\paragraph{Estimate of \(|\klgs \mu_\varepsilon|, |\klgs^2\mu_\ensuremath{\varepsilon}|\)}
We commence with
the integration by parts for \(U:\ensuremath{I_{r,\varepsilon}}\rightarrow \xw{}\):
\[
\left|\ensuremath{\int_{\mathcal{I}_{r,\varepsilon}}} U \right|=\left|\ensuremath{\int_{\mathcal{I}_{r,\varepsilon}}} \paraf{r}U\right|\leq
\ensuremath{\int_{\mathcal{I}_{r,\varepsilon}}} \paraf{r}|U|\leq
2\varepsilon r|U|- \ensuremath{\int_{\mathcal{I}_{r,\varepsilon}}} r\paraf{|U|}
\]
and apply this to
\[
\paraf{|\klgs h|}\leq C_1\left(|k||\klgs k|+|\klgs\ensuremath{{\mathrm R}}_{00}|\right)
\]
\[
\paraf{|\klgs^2h|}\leq C_2\left(|k||\klgs^2k|+|\klgs k|^2+
|\klgs^2\ensuremath{{\mathrm R}}_{00}|\right)
\]
and obtain the desired estimates.
\paragraph{The radial-slice estimates.}
We start by recalling the differentiation identity:
\[
\paraf{}\left(\ensuremath{\int_{{\bf F}}} U\right) =\ensuremath{\int_{{\bf F}}} \paraf{U}+hU
\]
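On the unit 2-sphere, with \(\ensuremath{ {\bf F} }\) a geodesic circle, \(d\sigma=\sin r\,d\theta\) and \(h=\cot r\), the differentiation identity can be confirmed numerically for a concrete test function; the choice \(v=r^2+\cos\theta\) below is illustrative only.

```python
import math

def v(r, t):
    # Illustrative test function on the (r, theta) coordinates.
    return r * r + math.cos(t)

def surface_integral(r, n=400):
    # Integrate v over the geodesic circle; d(sigma) = sin(r) d(theta).
    return sum(v(r, 2 * math.pi * j / n) for j in range(n)) \
        * (2 * math.pi / n) * math.sin(r)

def identity_residual(r=0.9, eps=1e-5, n=400):
    # d/dr of the surface integral versus the integral of (dv/dr + h v).
    lhs = (surface_integral(r + eps) - surface_integral(r - eps)) / (2 * eps)
    h = math.cos(r) / math.sin(r)  # mean curvature of the geodesic circle
    dv = lambda t: 2 * r           # radial derivative of v
    rhs = sum(dv(2 * math.pi * j / n) + h * v(r, 2 * math.pi * j / n)
              for j in range(n)) * (2 * math.pi / n) * math.sin(r)
    return abs(lhs - rhs)
```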
We introduce a cutoff function \(\harm{\chi}\) supported in the
shell \( \ensuremath{\mbox{supp}}{\harm{\chi}}\subset \ensuremath{ {\bf K} }\):
\begin{eqnarray*}
\ensuremath{ {\bf K} }= \ensuremath{I_{r,\varepsilon}} \times \ensuremath{ {\bf F} },\\
\ensuremath{I_{r,\varepsilon}}=\left((1-\varepsilon)r,(1+\varepsilon)r\right)
\end{eqnarray*}
Therefore, applying the Sobolev inequality on the slice, H\"older's
inequality and standard elliptic estimates, we derive the differential
inequalities for the quantities
\[
\paraf{U_j}\leq a U_j+b_j, \qquad \paraf{H_j}\leq a\sqrt{U_j}H_j+c_j
\]
where
\begin{subequations}
\begin{align}
U_j&= \ensuremath{\int_{{\bf F}}} |\klgs_jk|^2,\qquad H_j=\ensuremath{\int_{{\bf F}}} |\klgs_jh|^2\\
a &= \ensuremath{{\mathit D}}(\eta)\left[\ensuremath{\int_{{\bf F}}} |k|^2+h^2\right]^{1/2}, b_0 = \ensuremath{\int_{{\bf F}}} |\ensuremath{{\mathrm Rm}}|,\\
b_1 &= \ensuremath{\int_{{\bf F}}} |\kl\ensuremath{{\mathrm Rm}}|^2+U_0\ensuremath{\int_{{\bf F}}}|\ensuremath{{\mathrm Rm}}|^2,\\
b_2 &= U_1\left[U_1+U_0\ensuremath{\int_{{\bf F}}}|\ensuremath{{\mathrm Rm}}|^2\right]+
\left(\ensuremath{\int_{{\bf F}}}|\kl \ensuremath{{\mathrm Rm}}|^2\right)U_0+\ensuremath{\int_{{\bf F}}}|\kl^2\ensuremath{{\mathrm Rm}}|^2
\end{align}
\end{subequations}
The differential inequalities for \(U_j,H_j\) are of the form
\[
\paraf{U}\leq aU+b
\]
and we obtain for \(|\varrho|\leq \ensuremath{\varepsilon}\):
\[
U(r(1+\varrho))\leq e^{r\int_0^{\varrho}a(r(1+s))\,ds}\left(U(r)+
r\int_0^{\varrho} b(r(1+s))\,e^{-r\int_0^{s}a(r(1+s'))\,ds'}\,ds\right)
\]
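For constant coefficients \(a,b\) the Gr\"onwall bound is attained by the exact solution \(U(t)=e^{at}U(0)+\frac{b}{a}(e^{at}-1)\) of \(U'=aU+b\), which gives a quick consistency check (all constants below are arbitrary):

```python
import math

def gronwall_envelope(a, b, U0, t):
    # e^{a t} ( U0 + \int_0^t b e^{-a s} ds ), the bound from the text
    # specialized to constant coefficients a, b.
    return math.exp(a * t) * (U0 + (b / a) * (1 - math.exp(-a * t)))

def exact_solution(a, b, U0, t):
    # Exact solution of U' = a U + b with U(0) = U0.
    return math.exp(a * t) * U0 + (b / a) * (math.exp(a * t) - 1)

def envelope_residual(a=0.7, b=1.3, U0=2.0, t=0.5):
    # In the equality case the envelope coincides with the solution.
    return abs(gronwall_envelope(a, b, U0, t) - exact_solution(a, b, U0, t))
```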
\begin{hmcres}
The following hold true in \(\ensuremath{I_{r,\varepsilon}}\) for \(i,j=1,2\) and constants
\(c_{ij}=c_{ij}(r;\eta_1,\dots,\eta_j;\ensuremath{\epsilon}_j)>0\)
\[
U_{j,\pm}(r(1\pm \ensuremath{\epsilon}))\leq c_{1j} U_{j,\pm}(r),\qquad
H_{j,\pm}(r(1\pm \ensuremath{\epsilon}))\leq c_{2j} H_{j,\pm}(r)
\]
\end{hmcres}
\subsection{Selection of the pixels through the front estimates}
\subsubsection{The basic ansatz}
We introduce the notation for a smooth domain \( \ensuremath{ {\bf W} }\) in a riemannian manifold
equipped with riemannian volume \(dv\):
\[
{\cal D}^j(U; \ensuremath{ {\bf W} })=\int_ \ensuremath{ {\bf W} }|\kl^jU|^2dv
\]
We will derive the equation satisfied by \(u_\ensuremath{\lambda}\) near an (EWF), i.e.\ in a
spherical shell of the form
\[
\ensuremath{ {\bf K} }= \ensuremath{I_{r,\varepsilon}} \times \ensuremath{ {\bf F} }= ((1-\varepsilon)r,(1+\varepsilon)r)\times
\ensuremath{ {\bf F} }
\]
for some \(\ensuremath{ {\bf F} }=\face{\fjes{j}{k}{\ell}}\)
with coordinates \( (r,\dian{\theta}) \) and volume
\( dr d\sigma = \sqrt{\gamma}drd\theta\), while \(h_\ell\) stands for the
mean curvature of the shell.
We select the front so that its mean
curvature is controlled by the eigenfunction growth near it.
The eigenfunction equation is written near (EWF) in the form:
\[
\parad{^2u_\ensuremath{\lambda}}{r^2}+h\paraf{u_\ensuremath{\lambda}}+\lap{\gamma}u_\ensuremath{\lambda}=-\ensuremath{\lambda} u_\ensuremath{\lambda}
\]
We make the ansatz, for a parameter \(\beta_1\) to be determined and
smooth functions \(A,\phi\):
\[
u_\ensuremath{\lambda}(r,\theta)=
A\left(r,\dian{\theta}\right)
\sin\left(\beta_1\phi(r,\dian{\theta})\right)
\]
and arrange the (EWF) so that near the front \(A\) is smooth and
positive, which implies that \(\phi\) inherits the unique continuation
property from \(u_\ensuremath{\lambda}\). Note also that we should inspect level sets
of the form \(\frac{k\pi}{\beta_1}\) for suitable \(k\)'s.
The equation then splits as:
\begin{equation}\label{ansatz}
\begin{split}
\left[\parat{A}+h\paraf{A}
-\beta_1^2\left(\left(\paraf{\phi}\right)^2+
|\klgs\phi|^2\right)A+
\left(\ensuremath{\cancel{\Delta}} A+\ensuremath{\lambda} A\right)\right]\sin(\beta_1\phi)+\\
+\beta_1\left[\left(2\paraf{A}+hA\right)
\paraf{\phi}
+A\parat{\phi}+\left(2\klgs A\cdot\klgs \phi+
A\ensuremath{\cancel{\Delta}}\phi\right)\right]\cos(\beta_1\phi)=0
\end{split}
\end{equation}
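Keeping only the radial terms (i.e.\ discarding the angular operators \(\klgs\) and \(\ensuremath{\cancel{\Delta}}\)), the splitting \eqref{ansatz} can be verified numerically for arbitrary smooth profiles; the profiles \(A\), \(\phi\), \(h\) below are illustrative choices only, not the objects constructed in the paper.

```python
import math

# Illustrative smooth amplitude, phase, and mean-curvature profiles,
# with their exact derivatives; evaluated away from r = 0.
A   = lambda r: 1.0 + 0.3 * r * r
dA  = lambda r: 0.6 * r
d2A = lambda r: 0.6
phi   = lambda r: r + 0.1 * r ** 3
dphi  = lambda r: 1 + 0.3 * r * r
d2phi = lambda r: 0.6 * r
h   = lambda r: 1.5 + 0.2 * r
beta, lam = 3.0, 5.0  # beta_1 and lambda, chosen arbitrarily

def u(r):
    return A(r) * math.sin(beta * phi(r))

def radial_operator(r, eps=1e-4):
    # u'' + h u' + lam u via central finite differences.
    d1 = (u(r + eps) - u(r - eps)) / (2 * eps)
    d2 = (u(r + eps) - 2 * u(r) + u(r - eps)) / (eps * eps)
    return d2 + h(r) * d1 + lam * u(r)

def split_form(r):
    # The sin/cos split of the radial part of the equation.
    s, c = math.sin(beta * phi(r)), math.cos(beta * phi(r))
    sin_part = d2A(r) + h(r) * dA(r) - beta ** 2 * dphi(r) ** 2 * A(r) \
        + lam * A(r)
    cos_part = (2 * dA(r) + h(r) * A(r)) * dphi(r) + A(r) * d2phi(r)
    return sin_part * s + beta * cos_part * c
```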
\subsubsection{Determination of local phase and local amplitude}
We require that for a function \(\alpha\) and a parameter \(\beta_2\)
to be determined there holds in the interval
\((r_0(1-\ensuremath{\varepsilon}),r_0(1+\ensuremath{\varepsilon}))\):
\[
\paraf{A}+\frac{h}{2}A=\frac{\beta_2}{2}\paraf{e^{\beta_2\alpha}}
e^{-\frac{\mu_\epsilon}{2}}
\]
with the conditions
\[
A(0,\theta)=1,\quad \alpha(0,\theta)=-\frac{2\log\beta_2}{\beta_2}
\]
We find
\[
A(\sqrt{\ensuremath{\lambda}}r,\theta)=e^\Lambda,\,\,
\Lambda=-\frac12\mu_\varepsilon+\beta_2\alpha
\]
Therefore we obtain the pair of first order equations:
\[
\left(\paraf{\phi}\right)^2+|\klgs\phi|^2=\frac{\ensuremath{\lambda}}{\beta_1^2}-
\frac{1}{\beta_1^2}s_1, \quad
s_1=\beta_2\left(\ensuremath{\cancel{\Delta}} \alpha +\parat{\alpha}\right)+
\beta_2^2\left[|\klgs\alpha|^2+\left(\paraf{\alpha}\right)^2\right]-
\beta_2\klgs\mu\cdot\klgs\alpha+\ensuremath{{\bf K}}
\]
where
\[
\ensuremath{{\bf K}}=\frac14\left(2|k|^2+2R_{00}-h^2\right)-\frac12\ensuremath{\cancel{\Delta}}\mu+
\frac14|\klgs\mu|^2
\]
The other term gives
\[
\beta_2
\left(\paraf{\phi}\paraf{\alpha}+2\klgs\phi\cdot\klgs\alpha\right)=-s_2, \quad
s_2=\parat{\phi}+\ensuremath{\cancel{\Delta}}\phi-\klgs\mu\cdot \klgs \phi
\]
We choose \(\beta_1=\sqrt{\ensuremath{\lambda}}=\frac{1}{\beta_2}\).
The solution of these equations is achieved through the method of
characteristics. We set
\begin{subequations}\label{auxifu}
\begin{align}
{\bf A}&=|\klgs\alpha|^2+\left(\paraf{\alpha}\right)^2\label{auxifu1} \\
{\bf P}& =|\klgs\phi|^2+\left(\paraf{\phi}\right)^2 \label{auxifu2}\\
\ensuremath{\Lambda} &=\frac2\ensuremath{\lambda}\alpha +\mu \label{auxifu3}
\end{align}
\end{subequations}
Then we will consider this system in the form
\begin{subequations}\label{systemap}
\begin{align}
{\bf P}&= 1+
\frac{1}{\ensuremath{\lambda}\sqrt{\ensuremath{\lambda}}}\left[
\ensuremath{\cancel{\Delta}} \alpha +\parat{\alpha}+
\frac{1}{\ensuremath{\lambda}}{\bf A}-
\frac{1}{\sqrt{\ensuremath{\lambda}}}\klgs\mu\cdot\klgs\alpha+\ensuremath{{\bf K}}\right]
\label{systemap1}\\
\paraf{\phi}\paraf{\alpha}+2\klgs\phi\cdot\klgs\alpha &=
-\ensuremath{\lambda}\left(\parat{\phi}+\ensuremath{\cancel{\Delta}}\phi-\klgs\mu\cdot \klgs \phi\right)
\label{systemap2}
\end{align}
\end{subequations}
We derive estimates for \(\alpha,\phi\) in the form
\begin{subequations}\label{systemap}
\begin{align}
-\ensuremath{\cancel{\Delta}} \alpha -\parat{\alpha}-
\frac{1}{\sqrt{\ensuremath{\lambda}}}\klgs\mu\cdot\klgs\alpha
& =\ensuremath{\lambda}\sqrt{\ensuremath{\lambda}}\left[1-{\bf P}\right]+\frac{1}{\ensuremath{\lambda}}{\bf A}
+\ensuremath{{\bf K}},
\label{systemap1}\\
-\parat{\phi}-\ensuremath{\cancel{\Delta}}\phi+\klgs\Lambda\cdot \klgs \phi &=\frac1\ensuremath{\lambda}
\paraf{\phi}\paraf{\alpha}
\label{systemap2}
\end{align}
\end{subequations}
\subsubsection{Equations for the higher derivatives of the phase and
amplitude functions}
Furthermore differentiating the equations \eqref{systemap} we obtain the necessary
equations for the higher derivatives of the phase function,
\[
\Psi_{0,m}=\frac{d^m\phi}{dr^m},\,\, a_{0,m}=\frac{d^m\alpha}{dr^m}\,\,
\ensuremath{\Lambda}_{0,m}=\frac{d^m\ensuremath{\Lambda}}{dr^m}, \,\,
{\bf A}_m=\frac{d^m{\bf A}}{dr^m},\,\, {\bf P}_{0,m}=\frac{d^m{\bf P}}{dr^m},
\qquad m=1,2,3
\]
and the angular derivatives
\[
\Psi_{\gamma,m}=|\klgs^m\phi|^2,\,\, a_{\gamma,m}=|\klgs^m\alpha|^2
\,\,\ensuremath{\Lambda}_{\gamma,m}=|\klgs^m \ensuremath{\Lambda}|^2, \,\,
{\bf A}_{\gamma,m}=|\klgs^m{\bf A}|^2,\,\, {\bf P}_{\gamma,m}=
|\klgs^m{\bf P}|^2, \qquad m=1,2,3
\]
\paragraph{Radial derivatives of the phase function}
Specifically we obtain, applying the commutation rules, the following
equations:
\begin{subequations}\label{systemap2}
\begin{align}
\ensuremath{\cancel{\Delta}} \Psi_{0,1}+ \parat{\Psi_{0,1}}- \klgs\ensuremath{\Lambda} \cdot
\klgs\Psi_{0,1} &={\bf S}_{0,1} \label{systemap21}\\
\ensuremath{\cancel{\Delta}} \Psi_{0,2}+ \parat{\Psi_{0,2}}- \klgs\ensuremath{\Lambda} \cdot \klgs
\Psi_{0,2} &={\bf S}_{0,2} \label{systemap23}\\
\ensuremath{\cancel{\Delta}} \Psi_{0,3}+\parat{\Psi_{0,3}}- \klgs\ensuremath{\Lambda} \cdot \klgs \Psi_{0,3} &=
{\bf S}_{0,3} \label{systemap24}
\end{align}
\end{subequations}
where the ``source terms'' are the following:
\begin{subequations}\label{sources}
\begin{align}
{\bf S}_{0,1} &=R^0_{\,0}\Psi_1+\ensuremath{{\mathrm R}}^l_{\,0}\klgs_l\phi+\klgs\ensuremath{\Lambda}^1\cdot\klgs\phi
\label{source1}\\
{\bf S}_{0,2} &=2\ensuremath{{\mathrm R}}^l_{\,j}\klgs_j\Psi_{1}+\paraf{\ensuremath{{\mathrm R}}^l_{\,0}}\klgs_l\Psi_1
+\klgs\ensuremath{\Lambda}^2\cdot\klgs\phi
\label{source2}\\
{\bf S}_{0,3} &=-\left(\ensuremath{{\mathrm R}}^l_{\,0}\klgs_l\psi_2+ 2\ensuremath{{\mathrm R}}^l_{\,0j0;j}
\klgs_l\Psi_1+2\ensuremath{{\mathrm R}}^l_{\,0j0}\klgs_l\Psi_1\klgs^j\Psi_1
+\klgs\ensuremath{\Lambda}^3\cdot\klgs\phi\right)
\label{source3}
\end{align}
\end{subequations}
\paragraph{Higher angular derivatives of the phase}
We obtain, applying the commutation rules, the following
equations:
\begin{subequations}\label{systemap4}
\begin{align}
\ensuremath{\cancel{\Delta}} \Psi_{\gamma,1}+\parat{\Psi_{\gamma,1}}- \klgs\ensuremath{\Lambda} \cdot
\klgs \Psi_{\gamma,1}&=
{\bf S}_{2,1} \label{systemap41}\\
\ensuremath{\cancel{\Delta}} \Psi_{\gamma,2}+ \parat{\Psi_{\gamma,2}}- \klgs\ensuremath{\Lambda} \cdot \klgs \Psi_{\gamma,2} &=
{\bf S}_{2,2} \label{systemap42}
\end{align}
\end{subequations}
where the ``source terms'' are the following:
\begin{subequations}\label{sources}
\begin{align}
{\bf S}_{2,1} &= -\left(2\ensuremath{{\mathrm R}}^s_{\,00j}\klgs_j\phi+\ensuremath{{\mathrm R}}^l_{\,j}\klgs_j\phi\right)
\klgs_l\phi-(\tau^i_j\klgs_j\phi\klgs_i\phi+
\tau^i\klgs_i\klgs_j\phi\klgs_j\phi)
\label{ssource1}\\
{\bf S}_{2,2} &= -\ensuremath{{\mathrm Rm}}*\klgs^2\phi*\klgs^2\phi-
\klgs\ensuremath{{\mathrm Rm}}*\klgs\phi*\klgs^2\phi-\ensuremath{{\mathrm Rm}}*\klgs\Psi_{0,1}*\klgs\phi+
\ensuremath{\cancel{\Delta}}\tau^i\klgs_i\phi \label{ssource2}
\end{align}
\end{subequations}
\paragraph{Radial derivatives of the amplitude}
Set first that:
\[
{\bf V}=\frac1\ensuremath{\lambda}{\bf A}+\ensuremath{\lambda}^{3/2}\left(1-{\bf P}\right)+\ensuremath{{\bf K}}
\]
Specifically we obtain, applying the commutation rules, the following
equations:
\begin{subequations}\label{systemap2}
\begin{align}
-\ensuremath{\cancel{\Delta}} a_{0,1}- \parat{a_{0,1}}+\frac{1}{\sqrt{\ensuremath{\lambda}}}\klgs \mu\cdot
\klgs a_{0,1} &={\bf S}_{3,1} +\paraf{{\bf V}}\label{systemap21}\\
-\ensuremath{\cancel{\Delta}} a_{0,2}- \parat{a_{0,2}}+\frac{1}{\sqrt{\ensuremath{\lambda}}}\klgs\mu \cdot \klgs
a_{0,2} &=
{\bf S}_{3,2}+\parat{{\bf V}} \label{systemap23}\\
-\ensuremath{\cancel{\Delta}} a_{0,3}-\parat{a_{0,3}}+\frac{1}{\sqrt{\ensuremath{\lambda}}}\klgs\mu \cdot
\klgs a_{0,3} &= {\bf S}_{3,3}+\frac{d^3{\bf V}}{dr^3} \label{systemap24}
\end{align}
\end{subequations}
where the ``source terms'' are the following:
\begin{subequations}\label{sources}
\begin{align}
{\bf S}_{3,1} &=
R^0_{\,0}a_{0,1}+\ensuremath{{\mathrm R}}^l_{\,0}\klgs_l\alpha+\klgs\mu\cdot\klgs\alpha
\label{source1}\\
{\bf S}_{3,2} &=2\ensuremath{{\mathrm R}}^l_{\,j}\klgs_ja_{0,1}+\paraf{\ensuremath{{\mathrm R}}^l_{\,0}}\klgs_la_{0,1}
+\klgs\mu^{''}\cdot\klgs\alpha
\label{source2}\\
{\bf S}_{3,3} &=-\left(\ensuremath{{\mathrm R}}^l_{\,0}\klgs_la_{0,2}+ 2\ensuremath{{\mathrm R}}^l_{\,0j0;j}
\klgs_la_{0,1}+2\ensuremath{{\mathrm R}}^l_{\,0j0}\klgs_la_{0,1}\klgs^j\Psi_{0,1}
+\klgs\mu^{'''}\cdot\klgs\alpha\right)
\label{source3}
\end{align}
\end{subequations}
\paragraph{Higher angular derivatives of the amplitude}
We obtain, applying the commutation rules, the following
equations:
\begin{subequations}\label{systemap4}
\begin{align}
\ensuremath{\cancel{\Delta}} a_{\gamma,1}+\parat{a_{\gamma,1}}- \klgs\mu \cdot \klgs a_{\gamma,1} &=
{\bf S}_{4,1} \label{systemap41}\\
\ensuremath{\cancel{\Delta}} a_{\gamma,2}+ \parat{a_{\gamma,2}}- \klgs\mu \cdot \klgs a_{\gamma,2} &=
{\bf S}_{4,2} \label{systemap42}
\end{align}
\end{subequations}
where the ``source terms'' are the following:
\begin{subequations}\label{sources}
\begin{align}
{\bf S}_{4,1} &= -\left(2\ensuremath{{\mathrm R}}^s_{\,00j}\klgs_j\alpha+\ensuremath{{\mathrm R}}^l_{\,j}\klgs_j\alpha
\right)\klgs_l\alpha-(\tau^i_j\klgs_j\alpha\klgs_i\alpha+
\tau^i\klgs_i\klgs_j\alpha\klgs_j\alpha)
\label{ssource1}\\
{\bf S}_{4,2} &= -\ensuremath{{\mathrm Rm}}*\klgs^2\alpha*\klgs^2\alpha-
\klgs\ensuremath{{\mathrm Rm}}*\klgs\alpha*\klgs^2\alpha-\ensuremath{{\mathrm Rm}}*\klgs a_{0,1}*\klgs\alpha+
\ensuremath{\cancel{\Delta}}\tau^i\klgs_i\alpha \label{ssource2}
\end{align}
\end{subequations}
\subsubsection{Estimates}
We recall the variation identities:
\begin{subequations}\label{raddif}
\begin{align}
\ensuremath{\int_{{\bf F}}} v\paraf{v}& =\frac12\paraf{}\left(\ensuremath{\int_{{\bf F}}} v^2 \right)-\frac12\ensuremath{\int_{{\bf F}}} hv^2
\label{raddif1}\\
\ensuremath{\int_{{\bf F}}} v\parat{v}& = \frac12\parat{}\left(\ensuremath{\int_{{\bf F}}} v^2\right)
-\ensuremath{\int_{{\bf F}}}\left(\paraf{v}\right)^2-\paraf{}\left(\ensuremath{\int_{{\bf F}}} hv^2\right)
\label{raddif2} \\
\paraf{h} & =-|k|^2-R_{00}
\end{align}
\end{subequations}
Equation \eqref{systemap1} yields the following pointwise estimate, provided \(\alpha\geq 0\):
\begin{eqnarray}\label{pointamp}
-\alpha\ensuremath{\cancel{\Delta}} \alpha-\alpha\parat{\alpha}
-\frac{1}{\ensuremath{\lambda}}{\bf A}
\alpha +\frac{1}{\sqrt{\ensuremath{\lambda}}}(\klgs \mu\cdot \klgs \alpha)\alpha
\leq \ensuremath{\lambda}^{\frac32}\alpha + \ensuremath{{\bf K}}\alpha
\end{eqnarray}
We apply Hardy's inequalities, for the harmonic approximation of \(\mu\),
to the terms in the last parentheses:
\[
\ensuremath{\int_{{\bf F}}} \left(\ensuremath{\cancel{\Delta}}\mu+\frac12|\klgs \mu|^2\right)\theta^2\alpha
\leq \eaf{\ensuremath{ {\bf F} }}|\mu|\ensuremath{\int_{{\bf F}}} \left|\frac{\ensuremath{\cancel{\Delta}} \mu}{\mu}\right|\alpha+
\frac12(\eaf{\ensuremath{ {\bf F} }}|\mu|)^2\ensuremath{\int_{{\bf F}}}\left|\frac{\kl\mu}{\mu}\right|^2
\alpha \leq \ensuremath{\epsilon}\left(\koh{3}+\frac12\koh{2}\eaf{\ensuremath{ {\bf F} }}|\mu|
\right)\ensuremath{\int_{{\bf F}}}|\klgs\alpha|^2
\]
provided that
\[
\eaf{\ensuremath{ {\bf F} }}|\mu|\leq \ensuremath{\epsilon} \alpha
\]
After elementary manipulations we conclude that:
\begin{eqnarray}\label{intampest1}
\ensuremath{\int_{{\bf F}}} \vartheta^2|\klgs \alpha|^2
-\parat{}\left(\ensuremath{\int_{{\bf F}}} \vartheta^2 \alpha^2\right)-\paraf{}\left(
\ensuremath{\int_{{\bf F}}} \vartheta^2h\alpha^2\right)\leq c\ensuremath{\lambda}^{3/2}\ensuremath{\int_{{\bf F}}}
\vartheta^2\alpha^2
\end{eqnarray}
Similarly, starting from
\begin{equation}\label{pointphest}
-\phi\ensuremath{\cancel{\Delta}} \phi-\phi\parat{\phi}=-
\klgs\mu\cdot\klgs\phi-\frac1\ensuremath{\lambda}\left(
\phi\paraf{\alpha}\paraf{\phi}+2\phi\klgs\alpha\cdot\klgs\phi
\right)
\end{equation}
we get that
\begin{eqnarray}\label{intphasest1}
\ensuremath{\int_{{\bf F}}} \vartheta^2|\klgs \phi|^2
-\parat{}\left(\ensuremath{\int_{{\bf F}}} \vartheta^2 \phi^2\right)-\paraf{}\left(
\ensuremath{\int_{{\bf F}}} \vartheta^2h\phi^2\right)\leq c\ensuremath{\lambda}^{3/2}\ensuremath{\int_{{\bf F}}}
\vartheta^2\phi^2
\end{eqnarray}
We introduce arbitrary test functions \(\vartheta\) in the angular
variables, supported in \(\ensuremath{ {\bf F} }\).
Furthermore, multiplying \eqref{systemap} by
\(\theta\) and using the harmonic approximation \(\harm{\Lambda}\) of
\(\Lambda\) and its initial form \(\harm{\Lambda}_0\):
\[
|\ensuremath{\cancel{\Delta}}\ensuremath{\Lambda}|+|\klgs\ensuremath{\Lambda}|^2\leq |\ensuremath{\cancel{\Delta}}\harm{\ensuremath{\Lambda}}|+2|\klgs\harm{\ensuremath{\Lambda}}|^2+
\ensuremath{\epsilon}
\]
Then for \(\eta=\eaf{\ensuremath{ {\bf F} }}|\ensuremath{\Lambda}_0|\) we apply GHI and get that
\begin{eqnarray}
\ensuremath{\int_{{\bf F}}}\left(\left|\ensuremath{\cancel{\Delta}} \harm{\Lambda}_0\right|+
2\left|\kl \harm{\Lambda}_0\right|^2+\ensuremath{\epsilon}\right)
\vartheta^2 \leq
\ensuremath{\int_{{\bf F}}}\left(\koh{3}\eta+2\koh{2}\eta^2+\ensuremath{\epsilon}\right)|\klgs\vartheta|^2
\end{eqnarray}
and conclude that for suitable \(\eta>0\)
\[
\ensuremath{\int_{{\bf F}}} \vartheta^2|\klgs \phi|^2\leq \ensuremath{\int_{{\bf F}}}\left(|\ensuremath{{\bf K}}|+\ensuremath{\lambda}\right)\vartheta^2
\]
if we select \(\beta^2_1=C\). We also get that:
\[
\ensuremath{\int_{{\bf F}}} \vartheta^2\left(\paraf{\phi}\right)^2\leq C\ensuremath{\int_{{\bf F}}}
\left(|\ensuremath{{\bf K}}|+\ensuremath{\lambda}\right)\vartheta^2
\]
We sketch the derivation of estimates from the preceding
integral identities. We multiply the
equation by \(\vartheta^2\) and obtain that
\begin{equation}\label{intphasest2}
2\ensuremath{\int_{{\bf F}}} \vartheta^2|\klgs \phi|^2 -
\ensuremath{\lambda}\parat{}\left(\ensuremath{\int_{{\bf F}}} \vartheta^2\phi^2\right)+
2\paraf{}\left(\ensuremath{\int_{{\bf F}}} h\vartheta^2\phi^2\right)+
2\ensuremath{\int_{{\bf F}}}\vartheta^2\left(\paraf{\phi}\right)^2
\leq
\ensuremath{\int_{{\bf F}}} \vartheta^2\left(2\ensuremath{\Lambda}_1\phi^2 + 2\sigma |\klgs\phi|^2\right)
\end{equation}
We select
\[
\ensuremath{\epsilon}_1=\frac14,\qquad \ensuremath{\epsilon}_2=\frac14\eaf{\ensuremath{ {\bf F} }}|\alpha|
\]
and obtain that:
\begin{equation}
2\ensuremath{\int_{{\bf F}}}\vartheta^2|\klgs \phi|^2-\parat{}\left(\ensuremath{\int_{{\bf F}}}\vartheta^2\phi^2\right)+
2\ensuremath{\int_{{\bf F}}}\vartheta^2\left(\paraf{\phi}\right)^2+
2 \paraf{}\ensuremath{\int_{{\bf F}}} h\vartheta^2\phi^2\leq 2\ensuremath{\int_{{\bf F}}}\ensuremath{\Lambda}\vartheta^2\phi^2
\end{equation}
which simplifies to the following inequality:
\begin{eqnarray}\label{phasin}
2\ensuremath{\int_{{\bf F}}}\vartheta^2|\klgs \phi|^2-\ensuremath{\lambda}\parat{}\left(\ensuremath{\int_{{\bf F}}}\vartheta^2\phi^2\right)
+\ensuremath{\lambda} \paraf{}\ensuremath{\int_{{\bf F}}} h\vartheta^2\phi^2 \leq \ensuremath{\int_{{\bf F}}} \ensuremath{\Lambda}\vartheta^2\phi^2
\end{eqnarray}
These are the basic equations that will be used in order to derive the
necessary estimates.
\begin{ewfest}
Let \(\vartheta\) be a cut off along the shell
\( \ensuremath{ {\bf F} }\), then the following holds true:
\[
c_2(r,\ensuremath{ {\bf F} }){\cal D}^1(\vartheta\phi;\ensuremath{ {\bf F} })
\leq {\cal D}^0(\vartheta\phi;\ensuremath{ {\bf F} })\leq
C_2(r,\ensuremath{ {\bf F} }){\cal D}^1(\vartheta\phi;\ensuremath{ {\bf F} })
\]
\end{ewfest}
\subsubsection{The shell estimates} We observe that equations \eqref{systemap2} and
\eqref{systemap4} are of the same form and hence can be cast in
the integral form:
\begin{equation}\label{basic_sh}
\ensuremath{\int_{{\bf F}}} \vartheta^2|\klgs v|^2-\parat{}\left(\ensuremath{\int_{{\bf F}}}\vartheta^2v^2\right)
+\paraf{}\left(\ensuremath{\int_{{\bf F}}} h\vartheta^2v^2\right)\leq \ensuremath{\int_{{\bf F}}} \ensuremath{{\bf L}}\vartheta^2v^2
\end{equation}
where \(\ensuremath{{\bf L}}\) is an expression comprising \(\mu,\ensuremath{{\mathrm R}},\klgs\ensuremath{{\mathrm R}}\) etc.
We introduce the quantities
\begin{equation}\begin{split}
\Pi_0(r) &= \ensuremath{\int_{{\bf F}}} \vartheta^2v^2 \quad\geq 0 \\
\Pi_1(r) &= \ensuremath{\int_{{\bf F}}} \left(\vartheta\paraf{v}\right)^2\geq 0
\end{split}\end{equation}
and we integrate \eqref{basic_sh} in the interval
\( \ensuremath{I_{r,\varepsilon}}=((1-\ensuremath{\epsilon})r,(1+\ensuremath{\epsilon})r)\)
choosing
\[
\ensuremath{\mbox{supp}}{(\vartheta)}\cap \{ r/ (r,\theta)\in \ensuremath{ {\bf K} } \} \subset \ensuremath{I_{r,\varepsilon}}
\]
and obtain:
\begin{equation}
\begin{split}
\ensuremath{\int_{\mathcal{I}_{r,\varepsilon}}}\left[ \left(\paraf{\Pi_0(r')}\right)^2
-\paraf{\Pi_0}\ypf{1}\right]dr'+\ensuremath{\int_{\mathcal{I}_{r,\varepsilon}}}\dirfin{v}{\vartheta}
\Pi_0(r')dr' = \\
=-\ensuremath{\int_{\mathcal{I}_{r,\varepsilon}}} \Pi_0(r')\ypf{2}(r')dr'
\end{split}
\end{equation}
where
\begin{subequations}\label{grp}
\begin{align}
\dirfin{v}{\vartheta} &= \ensuremath{\int_{{\bf F}}} \vartheta^2|\klgs v|^2,\label{}\\
\ypf{1}&= \ensuremath{\int_{{\bf F}}} h\vartheta^2v^2,\label{}\\
\ypf{2} &= \ensuremath{\int_{{\bf F}}} \left(\ensuremath{{\bf L}}\vartheta^2
+\vartheta\parat{\vartheta}+\left(\paraf{\vartheta}\right)^2+
|\klgs \vartheta|^2-\vartheta\ensuremath{\cancel{\Delta}}\vartheta\right)v^2
\end{align}
\end{subequations}
We notice first that
\begin{subequations}\label{grp2}
\begin{align}
\ensuremath{\int_{\mathcal{I}_{r,\varepsilon}}}\ypf{1}\paraf{\Pi_0}\leq
\frac12\ensuremath{\int_{\mathcal{I}_{r,\varepsilon}}}\left(\paraf{\Pi_0}\right)^2+\frac12 \ensuremath{\int_{\mathcal{I}_{r,\varepsilon}}}\ypf{1}^2 \label{}\\
\ypf{1}\leq C(\ensuremath{ {\bf K} })\left(\ensuremath{\int_{{\bf F}}} (h\vartheta)^2 \right)^{1/2}\left[
\left(\dirfin{v}{\vartheta}\right)^{1/2}+\left(\ensuremath{\int_{{\bf F}}}
|h|U^2\right)^{1/2}\right]
\end{align}
\end{subequations}
In conclusion we obtain
\[
\ensuremath{\int_{\mathcal{I}_{r,\varepsilon}}}\left(\paraf{\Pi_0}\right)^2\leq C(\ensuremath{ {\bf P} })
\ensuremath{\int_{\mathcal{I}_{r,\varepsilon}}}\Pi_0^2
\]
\noindent We will derive an upper bound for \(\psi^{2k}\), where \(\psi=\log\Pi_0\);
then for \(r\in[\delta^{\frac1k},1]\):
\[
|\psi|\leq \frac1\delta\left(|\Pi_0|^\delta+\frac{1}{|\Pi_0|^\delta}\right)
\]
Specifically we have by Hardy's inequality
\[
\ensuremath{\int_{\mathcal{I}_{r,\varepsilon}}} \psi^{2kp}\leq C_p(2kr(1+\ensuremath{\epsilon}))^p\ensuremath{\int_{\mathcal{I}_{r,\varepsilon}}} |\psi|^{(2k-1)p}|\psi'|^p
\leq \frac{C_p}{\delta^{\frac1\delta}}(2kr(1+\ensuremath{\epsilon}))^p\ensuremath{\int_{\mathcal{I}_{r,\varepsilon}}}
|\psi|^{(2k-1-\frac1\delta)p}|\Pi_0'|^p
\]
for \(0<\delta<1\) to be chosen shortly. After H\"older's inequality we
obtain:
\[
\ensuremath{\int_{\mathcal{I}_{r,\varepsilon}}} \psi^{2kp}\leq
\frac{C_pC(\ensuremath{ {\bf P} })}{\delta^{\frac1\delta}}
(2kr(1+\ensuremath{\epsilon}))^p\left(\ensuremath{\int_{\mathcal{I}_{r,\varepsilon}}}
|\psi|^{(2k-1-\frac1\delta)\frac{2p}{2-p}}\right)
\left(\ensuremath{\int_{\mathcal{I}_{r,\varepsilon}}}|\Pi_0|^2\right)^{\frac{p}{2}}
\]
Then selecting \(\delta=\frac{1}{k+\ensuremath{\epsilon}'-1}, \ensuremath{\epsilon}'>0\) we get:
\[
\ensuremath{\int_{\mathcal{I}_{r,\varepsilon}}} \psi^{2kp} \leq \alpha_k
\left(\ensuremath{\int_{\mathcal{I}_{r,\varepsilon}}}|\Pi_0|^2\right)^{\frac{\ensuremath{\epsilon}'p}{k}}, \qquad
\alpha_k=\left(2kr(1+\ensuremath{\epsilon})\right)^{2+\frac{pk}{\ensuremath{\epsilon}'}}
\left[C_pC(\ensuremath{ {\bf P} })(k+\ensuremath{\epsilon}'-1)^{k+\ensuremath{\epsilon}'-1}\right]^{\frac{k}{\ensuremath{\epsilon}'}}
\]
Then we have that
\[
\ensuremath{\int_{\mathcal{I}_{r,\varepsilon}}} \psi^{2kp}\leq
\alpha_k^{\frac{2k-\ensuremath{\epsilon}'p}{k-\ensuremath{\epsilon}'p}}\left(1-\frac{\ensuremath{\epsilon}'p}{k}\right)
\left(\frac{2\ensuremath{\epsilon}'p}{k}\right)^{\frac{k}{k-\ensuremath{\epsilon}'p}}
\]
Summing up we get that:
\[
\sum_{k=0}^\infty
\ensuremath{\int_{\mathcal{I}_{r,\varepsilon}}} \frac{|\psi|^{2k}}{(2k)!}\leq \left(\sum_{k=0}^\infty
\frac{\alpha_k^{\frac{2k-\ensuremath{\epsilon}'p}{k-\ensuremath{\epsilon}'p}}}{(2k!)}
\left(1-\frac{\ensuremath{\epsilon}'p}{k}\right)
\left(\frac{2\ensuremath{\epsilon}'p}{k}\right)^{\frac{k}{k-\ensuremath{\epsilon}'p}}\right)^{\frac{1}{p}}
\]
This is majorised, by the \(\Gamma\)-function duplication formula, by
\[
\sum_{k=0}^\infty \left[C_pC(\ensuremath{ {\bf F} })(1+\ensuremath{\varepsilon})r\right]^{k}=c(\ensuremath{ {\bf F} },r)
\]
Therefore, using the elementary bound (valid for \(x\geq 0\))
\[
\sum_{j=0}^\infty\frac{x^{2j}}{(2j)!}\geq \frac12e^x
\]
we conclude that
\[
\ensuremath{\int_{\mathcal{I}_{r,\varepsilon}}} \Pi_0^2 \leq c(\ensuremath{ {\bf F} },r)
\]
Following the iteration described in the appendix we obtain the
Harnack inequalities, culminating in the following
\begin{harnshell}
The following estimates hold true:
\begin{eqnarray*}
\ensuremath{\int_{{\bf F}}}\phi^2\theta^2 & \leq & C_{00}({\ensuremath{ {\bf K} }})\ensuremath{\epsilon}\\
\ensuremath{\int_{{\bf F}}}|\klgs \phi|^2\theta^2 & \leq & C_{01}({\ensuremath{ {\bf K} }})\ensuremath{\epsilon}\\
\ensuremath{\int_{{\bf F}}}|\klgs^2\phi|^2\theta^2 & \leq & C_{02}({\ensuremath{ {\bf K} }})\ensuremath{\epsilon}
\end{eqnarray*}
provided that
\[
\ensuremath{\int_{{\bf F}}}\phi^2,\ensuremath{\int_{{\bf F}}}|\kl\phi|^2,\ensuremath{\int_{{\bf F}}}|\kl^2\phi|^2\geq \ensuremath{\epsilon}
\]
\end{harnshell}
\subsection{The lower bound}
The lower bound is obtained by an inductive argument based on the
reduction to the boundary of a pixel. The one dimensional case
indicates the method. Assume that we have a function \(\phi\)
defined in the interval \([0,\mu],\, \phi(0)=\phi(\mu)=0\) satisfying
\[
c_1\int_0^\mu \phi(x)^2 w(x)dx\leq \mu^2\int_0^\mu \phi'(x)^2w(x)dx\leq
c_2\int_0^\mu \phi(x)^2 w(x)dx
\]
for a positive weight \(w\) and \(0<c_1<c_2\). Then the min-max principle
allows us to assert that the number of roots of \(\phi\) in \([0,\mu]\) is
at least \(c_1\). Then we
will derive the inequality through the min-max principle, standard
eigenvalue and isoperimetric inequalities.
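To illustrate this one dimensional model (a sketch under the simplifying assumption \(w\equiv 1\)): expanding \(\phi\) in the eigenbasis of the interval,
\[
\phi(x)=\sum_{k\geq 1} b_k\sin\frac{k\pi x}{\mu},\qquad
\mu^2\int_0^\mu \phi'(x)^2dx=\frac{\mu}{2}\sum_{k\geq1}(k\pi)^2b_k^2,\qquad
\int_0^\mu \phi(x)^2dx=\frac{\mu}{2}\sum_{k\geq1}b_k^2,
\]
the two-sided bound forces the coefficients \(b_k\) to concentrate on the modes with \(c_1\leq (k\pi)^2\leq c_2\), and such modes have of the order of \(\sqrt{c_1}\) sign changes in \([0,\mu]\).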
The construction is based on the estimates of the preceding section
that lead to Harnack inequalities for the restriction of the
eigenfunction on the boundary of a geodesic pixel.
We consider first a domain \( \ensuremath{ {\bf W} }_\ensuremath{\epsilon}\subset \ensuremath{ ({\bf M}^n,g) }, n\geq 2\) as
is described in the appendix on the harmonic approximation.
We drop the \(\ensuremath{\epsilon}\) subscript.
\begin{lelo_in}
Let \( \ensuremath{ {\bf W} } \) be a domain with smooth boundary \(\partial \ensuremath{ {\bf W} }\) equipped
with a smooth metric \(\gamma\), induced from the metric \({\bf g}\).
Let \( \momwb{\psi}\) be a smooth nonnegative function satisfying the estimate for
\(\gamps{j}=|\klgs^j\psi|^2\):
\begin{eqnarray}\label{sunorekt}
c_{j0}\tau {\cal D}^0(\gamps{j};\partial \ensuremath{ {\bf W} })\leq
{\cal D}^1(\gamps{j};\partial \ensuremath{ {\bf W} })\leq c_{j1} \tau
{\cal D}^0(\gamps{j};\partial \ensuremath{ {\bf W} }),\qquad j=0,1,2
\end{eqnarray}
Let the zero set of \(\psi,\,\, \nod{}{\psi}\) be \((n-2)\)-rectifiable.
Moreover let \(\momb{\phi}\) be such that for \(\tau>0\):
\begin{eqnarray}\label{eswtekt}
c_{30} \tau{\cal D}^0(\phi; \ensuremath{ {\bf W} })\leq
{\cal D}^1(\phi; \ensuremath{ {\bf W} })\leq c_{31}\tau{\cal D}^0(\phi; \ensuremath{ {\bf W} })
\end{eqnarray}
and
\[
{\cal D}^0(\phi-\psi;\partial \ensuremath{ {\bf W} })\leq \ensuremath{\epsilon}
{\cal D}^0(\psi;\partial \ensuremath{ {\bf W} })
\]
Then for \(C_4=C_4\left(\tau, c_{10},c_{11},c_{20},c_{21},c_{30},c_{31}\right)\):
\[
\sharp\{ \partial \ensuremath{ {\bf W} } \setminus \nod{}{\psi}\} < C_4
\]
and
\[
\Ha{n-1}(\nod{}{\phi})\geq C_0 \tau^{-\frac{n-1}{2}}
\]
\(C_0\) is a numerical constant.
\end{lelo_in}
\noindent In the appendix we prove that if a smooth function satisfies
estimates \eqref{sunorekt} then in every connected component of
\[
\ensuremath{ {\bf W} }_\ensuremath{\epsilon}=\{ x\in \ensuremath{ {\bf W} }/\phi(x)>\ensuremath{\epsilon}\}
\]
the following inequalities hold:
\begin{eqnarray}\label{harnekt}
\eaf{ \ensuremath{ {\bf W} }_\ensuremath{\epsilon}} \phi \leq C_0(\ensuremath{ {\bf F} }) \ensuremath{\epsilon},\qquad
\eaf{ \ensuremath{ {\bf W} }_\ensuremath{\epsilon}}|\klgs \phi|\leq C_1(\ensuremath{ {\bf F} })\ensuremath{\epsilon},\qquad
\eaf{ \ensuremath{ {\bf W} }_\ensuremath{\epsilon}}|\klgs^2\phi|\leq C_2(\ensuremath{ {\bf F} }) \ensuremath{\epsilon}
\end{eqnarray}
for constants \(C_0,C_1,C_2\) that are explicitly calculated there.
Moreover the harmonic approximation method implies that the function
\(\kappa=\phi-\harm{\phi}\) satisfies the estimate
\[
\olww{} |\kl(\zeta \kappa)|^2 \leq C\tau\olww{}\left(\zeta\phi\right)^2
\]
\paragraph{The tubular neighbourhood of a nodal set}
The initial form of the harmonic function \(\harm{\phi}\),
denoted by \( \harm{\phi}_0\), is of degree \(m \aplt \sqrt{\tau} \).
Let
\[
\ensuremath{{\bf T}}_\ensuremath{\epsilon}(\nod{}{\harm{\phi}_0})=\{ x\in \ensuremath{ {\bf W} }/|\harm{\phi}_0(x)|\leq\ensuremath{\epsilon}\}
\]
and using Hardy's inequality \eqref{gh2} we obtain
\[
\oltube{\ensuremath{\epsilon}}{\harm{\phi}_0} \phi^2 \leq
C_H\ensuremath{\epsilon}^{\frac2m}\oltube{\ensuremath{\epsilon}}{\harm{\phi}_0}|\kl \phi|^2
\leq c_{21}\ensuremath{\epsilon}^{\frac2m}\tau \oltube{\ensuremath{\epsilon}}{\harm{\phi}_0}(\zeta\phi)^2 \leq
C\tau\ensuremath{\epsilon}^{\frac2m}\ogkos{{\ensuremath{{\bf T}}_\ensuremath{\epsilon}(\harm{\phi}_0)}}
\mkf{\ensuremath{{\bf T}}_\ensuremath{\epsilon}(\harm{\phi}_0)}|\phi|
\]
The usual Moser iteration gives us that
\[
\eaf{\ensuremath{{\bf T}}_\ensuremath{\epsilon}}|\kappa|\leq
C\tau\ensuremath{\epsilon}^{\frac1m}\left(\frac{1}{\ogkos{\ensuremath{{\bf T}}_\ensuremath{\epsilon}(\harm{\phi_0})}}
\oltube{\ensuremath{\epsilon}}{\harm{\phi}_0}
|\kappa|^2\right)^{1/2}
\]
Then according to the usual Harnack estimates we have that if
\(\phi_\ensuremath{\epsilon}>0\) near
\[
\ensuremath{{\bf T}}_\ensuremath{\epsilon}(\harm{\phi}_0)=\{ x\in \ensuremath{ {\bf W} }:|\harm{\phi}_0(x)|\leq \ensuremath{\epsilon}\}
\bigcap \{x\in \ensuremath{ {\bf W} }: \phi(x)>\eta\ensuremath{\epsilon}\}
\]
we have that:
\[
\eaf{\ensuremath{{\bf T}}}|\kappa_\ensuremath{\epsilon}|\leq C \tau\ensuremath{\epsilon}^{\frac1m}\eta\ensuremath{\epsilon}
\]
We conclude then that \( \phi\sim \harm{\phi}\) near \(\nod{}{\harm{\phi}}\).
The harmonic approximation applied on the slice
allows us to construct the tubular neighbourhoods of nodal sets by the
\L ojasiewicz inequality for the approximating function. Specifically, we have
that for suitable choice of \(\ensuremath{\epsilon}\) and the tube near the singularities of
multiplicity \(m\) of the nodal set:
\[
|\phi|\leq |\kappa|+|\harm{\phi}|\leq C(\tau\eta\ensuremath{\epsilon}^{\frac1m}+1)\ensuremath{\epsilon}
\]
and selecting \(\tau\ensuremath{\epsilon}^{\frac1m}=1,\eta=2\)
\[
\nod{}{\phi}\subset \ensuremath{{\bf T}}_{3\tau^{-m}}(\harm{\phi})
\]
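To spell out the parameter choice in the last step (normalizing the constant \(C\) to \(1\)): the relation \(\tau\ensuremath{\epsilon}^{\frac1m}=1\) means \(\ensuremath{\epsilon}=\tau^{-m}\), and then with \(\eta=2\)
\[
|\phi|\leq (\tau\eta\ensuremath{\epsilon}^{\frac1m}+1)\ensuremath{\epsilon}=(2+1)\tau^{-m}=3\tau^{-m},
\]
which is the radius of the tube in the containment above.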
\paragraph{The inductive argument} For \(n=2\) we reduce to a disc and
then we derive estimates for the zero sets using the functions \(\harm{u}\)
in order to produce test functions for the application of the min-max
theorem, as it was used in the Courant nodal domain theorem. We recall here
the following lemma from \cite{hs}
\begin{noco}
There exists \(\eta_0=\eta_0(n)\in(0,\frac12]\) such that, for
\(\eta\in(0,\eta_0)\), if \(w_1,w_2\in C^2(B_2(0)), |w_j|_{C^2}\leq 1\) and
if \(|w_1-w_2|_{C^1}\leq \frac{\eta^5}{2}\) then
\[
{\cal H}^{n-1}(B_{2-\eta}(0)\bigcap \{w_1=0,|\kl w_1|\geq \eta\})
\leq (1+c\eta){\cal H}^{n-1}(B_2(0)\bigcap \{w_2=0,|\kl w_2|\geq
\frac{\eta}{2}\})
\]
\end{noco}
\paragraph{Estimates on nodal domains and eigenvalues}
We recall the definition of higher order eigenvalues as
\[
\ensuremath{\lambda}_k=\max_{S_{k-1}\subset H^1(\partial \ensuremath{ {\bf W} })}\min_{v\in S_{k-1}^\perp}
\left(\frac{{\cal D}^1(v;\partial \ensuremath{ {\bf W} })}{{\cal D}^0(v;\partial \ensuremath{ {\bf W} })}\right)
\]
The min-max principle for the eigenvalue, \(\ensuremath{\lambda}_k\) suggests also that
\[
\ensuremath{\lambda}_k=\min_{S_k\subset H^1(\partial \ensuremath{ {\bf W} })}
\max_{v\in S_k}
\left(\frac{{\cal D}^1(v;\partial \ensuremath{ {\bf W} })}{{\cal D}^0(v;\partial \ensuremath{ {\bf W} })}\right)
\]
and therefore:
\[
\ensuremath{\lambda}_k\leq \max_{v\in S_k}
\left(\frac{{\cal D}^1(v;\partial \ensuremath{ {\bf W} })}{{\cal D}^0(v;\partial \ensuremath{ {\bf W} })}\right)
\]
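As an elementary illustration of these variational characterizations (not part of the argument, and assuming \({\cal D}^1,{\cal D}^0\) denote the Dirichlet energy and the squared \(L^2\)-norm respectively): on the unit circle \(\partial \ensuremath{ {\bf W} }=S^1\) the trigonometric monomials give
\[
\frac{{\cal D}^1(\cos j\theta;S^1)}{{\cal D}^0(\cos j\theta;S^1)}=
\frac{\int_0^{2\pi}j^2\sin^2 j\theta\, d\theta}{\int_0^{2\pi}\cos^2 j\theta\, d\theta}=j^2,
\]
and the spaces \(S_k=\mathrm{span}\{1,\cos\theta,\sin\theta,\dots\}\) recover the spectrum \(0,1,1,4,4,\dots\) from either formula.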
\paragraph{Upper bounds on eigenvalues.} Let us set
\[
\sharp\{ \partial \ensuremath{ {\bf F} } \setminus \nod{}{\psi}\}=k
\]
and having selected \(\ensuremath{ {\bf F} } \) to contain a geodesic disc of radius
\(\frac{1}{\sqrt{\tau}}\), we have \(k>1\).
We select \( S_k= \{\zeta_1,\dots,\zeta_k\} \), where the
\(\zeta_j: \ensuremath{ {\bf W} }\rightarrow \xw{}, \zeta_j>0, \) are defined as follows.
Let \(\{\cour{i}\}_{i=1}^k\) be the nodal domains of \(\psi\).
Set then the tubular neighbourhood
\[
\cour{i,\ensuremath{\epsilon}}=\{ x\in\cour{i}/d(x,\nod{}{\psi})>\ensuremath{\epsilon}\}
\]
and
\[
\nod{j,\ensuremath{\epsilon}}{\psi}=\partial \cour{j,\ensuremath{\epsilon}}
\]
For this we approximated \(\psi\) harmonically and replaced it near its
nodes by \(\widehat{\psi}_0\), so that the Hardt-Simon estimate holds.
We set as
\[
\zeta_j|_{\cour{j,\ensuremath{\epsilon}}}=\tau_j
\]
and
\[
\zeta_j|_{\cour{i}}=0, j\neq i, \quad \mbox{or} \quad
\zeta_j|_{\ensuremath{{\bf T}}_{\theta\ensuremath{\epsilon}}\nod{}{\widehat{\psi}_0}\cap \cour{j}}=0
\]
and
\[
|\klgs \zeta_j|\leq \eta_j
\]
for \(\tau_j,\eta_j\) to be selected. After Sard's lemma
\(\nod{j,\ensuremath{\epsilon}}{\psi}\) is smooth for suitable \(\ensuremath{\epsilon}>0\),
and hence we assume that \(\partial\cour{i}\)
is also smooth.
We approximate harmonically \(\psi\) and construct
a smooth partition of unity, each member supported in a connected component.
The Harnack inequalities in the appendix suggest that
\[
||\harm{\psi}-\psi||_{2,\ensuremath{ {\bf F} }} \leq C(\ensuremath{ {\bf F} }) \ensuremath{\epsilon}
\]
and
\[
\eaf{\partial \ensuremath{ {\bf W} }}|\psi|\leq \eta
\]
We compute that
\[
\ensuremath{\lambda}_k \leq
\frac{\sum_{j=1}^k\tilde{b}_j\eta_j^2}{\sum_{j=1}^k(\mu_j^2\tilde{b}_j+b_j)\tau_j^2}
\]
where
\[
a_j =\ogkos{\nod{j,\ensuremath{\epsilon}}{\psi}},\quad b_j=\ogkos{\cour{j,\ensuremath{\epsilon}}},
\quad \tilde{b}_j=
\ogkos{\cour{j,\ensuremath{\epsilon}}\setminus \cour{j,\mu\ensuremath{\epsilon}}}
\]
Sard's lemma again allows us to choose \(\ensuremath{\epsilon},\mu\) so that
\[
\tilde{b}_j\sim \mu b_j
\]
Furthermore the isoperimetric inequality yields:
\[
\tilde{b}_j^{\frac{n-2}{n-1}}\leq
C\left(a_j(1+\ensuremath{\epsilon}_j+\eaf{\partial \ensuremath{ {\bf W} }}|\klgs\psi|)+b_j\eaf{\partial \ensuremath{ {\bf W} }}|h|\right)
\]
Moreover for suitable \(\ensuremath{\epsilon}>0\):
\[
\sum_{j=1}^k b_j = \ogkos{ \ensuremath{ {\bf W} }}-\ogkos{\ensuremath{{\bf T}}_\ensuremath{\epsilon}(\psi)}\geq
(1-\ensuremath{\epsilon})\ogkos{ \ensuremath{ {\bf W} }}-\left(\ogkos{\nod{}{\psi}}\right)^{\frac{n-1}{n-2}}
\]
Let \(b_1,b_k\) be such that:
\[
b_1\geq \frac1k\ogkos{ \ensuremath{ {\bf W} }}
\]
and similarly
\[
b_k\leq \frac1k\ogkos{ \ensuremath{ {\bf W} }}
\]
We conclude
\[
\ensuremath{\lambda}_k\leq
\frac{k\left(\Ha{n-2}(\nod{}{\psi})\right)^{\frac{n-1}{n-2}}}{\ensuremath{\epsilon}\ogkos{ \ensuremath{ {\bf W} }}}
\]
\paragraph{The first eigenvalue of a nodal domain.} The first eigenvalue of the Laplacian in
\(\cour{i,\ensuremath{\epsilon}}\) satisfies :
\[
\ensuremath{\lambda}_1(\cour{i,\ensuremath{\epsilon}})\leq
\frac{\int_{\cour{i,\ensuremath{\epsilon}}}|\klgs \psi|^2}{\int_{\cour{i,\ensuremath{\epsilon}}}\psi^2}
\]
Therefore
\[
\ensuremath{\lambda}_1(\cour{i,\ensuremath{\epsilon}})\leq c_{01}\tau
\]
We need now a lower bound for the first eigenvalue of \(\cour{i,\ensuremath{\epsilon}}\):
we select \(\ensuremath{\epsilon}\) so that we avoid a dumbbell-shaped nodal domain.
The Cheeger estimate of the first eigenvalue, combined with the isoperimetric
inequality on the spherical piece, gives that
\[
2c_{01}\tau\geq \ensuremath{\lambda}_1(\cour{i,\ensuremath{\epsilon}}) \geq
\frac{c}{\ogkos{\cour{i,\ensuremath{\epsilon}}}^{\frac{2}{n-1}}}-\mkf{\cour{i,\ensuremath{\epsilon}}}|h|
\]
or
\[
\ogkos{\cour{i,\ensuremath{\epsilon}}}\geq \left(c_{01}\tau+\mkf{\cour{i,\ensuremath{\epsilon}}}|h|\right)^{-\frac{n-1}{2}}
\]
Since each nodal domain has volume at least
\(\left(c_{01}\tau\right)^{-\frac{n-1}{2}}\), while at least one nodal domain
has volume at most \(\frac1k \ogkos{\ensuremath{ {\bf F} }}\), we obtain
\[
( c_{01}\tau)^{-\frac{n-1}{2}}\leq \frac1k \ogkos{\ensuremath{ {\bf F} }}
\]
hence we have that
\[
k\leq \ogkos{\ensuremath{ {\bf F} }}(c_{01}\tau)^{\frac{n-1}{2}}
\]
\paragraph{Lower bounds on eigenvalues}
The max-min part gives that
\[
\ensuremath{\lambda}_k=\max_{S_{k-1}} \min_{\zeta\in S_{k-1}^\perp}\frac{\int_\ensuremath{ {\bf F} } |\kl
\zeta|^2}{\int_\ensuremath{ {\bf F} } \zeta^2}
\]
Therefore in order to construct a test space \(S_{k-1}^\perp\)
we have to merge some of the nodal domains of \(\nod{}{\psi}\).
We start by introducing, for \(l=1,\dots,k\) and for \(\psi\geq 0\), the quantities:
\[
D_l^2=\frac{1}{\ensuremath{\lambda} N}\int_{\cour{l}}|\kl \psi|^2,\quad
W_l^2=\frac{1}{N}\int_{\cour{l}}\psi^2,\quad
P_l=\frac{1}{P}\int_{\cour{l}}\psi,
\]
where
\[
N=\int_\ensuremath{ {\bf F} }\psi^2 ,\quad P=\int_\ensuremath{ {\bf F} } \psi
\]
We deduce, after dyadic considerations on the \(\{\cour{\ell}\}_{\ell=1,\dots,k}\),
that we can select two domains, say \(\ell=1,2\), and find some \(j_1\geq 1\) with:
\[
\frac{1}{2^{j_1}}\leq |D_1^2-D_2^2|\leq \frac{1}{2^{j_1}}+\frac{1}{2^{j_1+1}}
\]
Similarly for \(j_2,j_3\geq 1\):
\[
\frac{1}{2^{j_2}}\leq |W_1^2-W_2^2|\leq \frac{1}{2^{j_2}}+\frac{1}{2^{j_2+1}}
\]
and
\[
\frac{1}{2^{j_3}}\leq |P_1^2-P_2^2|\leq \frac{1}{2^{j_3}}+\frac{1}{2^{j_3+1}}
\]
As \(k\) increases we select the smallest triple \((j_1,j_2,j_3)\) in
this order. We consider the space of functions of the following form, for
\(\eta\) a parameter that we select appropriately and that compensates the
growth of the function in the two nodal domains:
\[
\Psi=a_1\left(\frac{1}{P_1}\psi\chi_{\cour{1}}-\frac{\eta}{P_2}\psi\chi_{\cour{2}}
\right)+\sum_{j=3,\dots,k} a_j \psi\chi_{\cour{j}}
\]
We compute that
\[
\Psi^2=a_1^2\left(\frac{1}{P_1}\psi\chi_{\cour{1}}-
\frac{\eta}{P_2}\psi\chi_{\cour{2}}\right)^2
+\sum_{j=3,\dots,k} a_j^2 \psi^2\chi_{\cour{j}}
\]
and
\[
|\kl\Psi|^2=a_1^2\left(\frac{1}{P_1}\chi_{\cour{1}}\klgs\psi-
\frac{\eta}{P_2}\chi_{\cour{2}}\klgs\psi\right)^2
+\sum_{j=3,\dots,k} a_j^2 |\klgs\psi|^2\chi_{\cour{j}}
\]
Furthermore we have
\[
\int_\ensuremath{ {\bf F} }|\kl\Psi|^2=a_1^2\int_\ensuremath{ {\bf F} }
\left(\frac{1}{P_1}\chi_{\cour{1}}\klgs\psi-\frac{\eta}{P_2}
\chi_{\cour{2}}\klgs\psi\right)^2
+\sum_{j=3,\dots,k} a_j^2 D_j^2
\]
Using Lagrange's identity we write the first term as:
\[
\int_\ensuremath{ {\bf F} }
\left(\frac{1}{P_1}\chi_{\cour{1}}\klgs\psi-\frac{\eta}{P_2}\chi_{\cour{2}}\klgs\psi
\right)^2
=\left(\frac{1}{P_1P_2}\right)^2
\left[\left(D_1^4+\eta D_2^4\right)\left(P_1^4+P_2^4\right)-
\left(P_1^2D_1^2-P_2^2\eta D_2^2\right)^2\right]^{\frac12}
\]
After Young's inequality, with
\(D_{12}^2=D_1^2+D_2^2,\ P_{12}=P_1+P_2\), and setting
\[
a=\frac{P_1}{P_{12}},\qquad b=\frac{P_2}{P_{12}},
\qquad
\upsilon=\frac{\eta D_2}{D_{12}}
\]
we deduce that the last term is minimized by the following expression:
\[
4\ensuremath{\lambda}\left[(4ab-a^2b^2-1)\upsilon^2+
2a^2(1-2ab)\upsilon+\frac{1}{16}-a^4\right]^{\frac12}
\]
After a tedious but otherwise elementary calculation, selecting
\(a\) close to \( 1 \) and \(\eta\) sufficiently
large, we arrange that the Rayleigh quotient is bounded by \(c\ensuremath{\lambda}\),
for a suitable constant \(c>0\).
\begin{itemize}
\item
\[
\frac34\leq D_1\leq 1, \qquad 0\leq D_2<\frac14
\]
which implies that
\[
\frac12\leq D_1-D_2\leq 1
\]
\item
\[
\frac12\leq D_1\leq \frac34, \qquad \frac14\leq D_2<\frac12
\]
which implies
\[
0\leq D_1-D_2\leq \frac12
\]
\end{itemize}
Therefore we have to estimate a function of the form:
\[
Q(a_1,\dots, a_{k-1})=
\frac{\sum_{j=1}^{k-1}D_{j-1}a_j^2}{\sum_{j=1}^{k-1}W_ja_j^2}\geq
\frac{\ensuremath{\lambda}}{k}
\]
We pick \(\ensuremath{\epsilon}\sim \frac{1}{\sqrt{\tau}}\) and compute that:
\[
k^2\Ha{n-2}(\nod{}{\psi}\cap \ensuremath{ {\bf F} }) \geq \sqrt{\tau}\ogkos{\ensuremath{ {\bf F} }}
\]
Therefore, since \(k^2\leq c\tau^{\frac{n-2}{2}}
\Ha{n-2}(\nod{}{\psi}\cap\ensuremath{ {\bf F} })\) and \(\ogkos{\ensuremath{ {\bf F} }}\geq \tau^{\frac{n-1}{2}}\),
we conclude that
\[
\Ha{n-2}(\nod{}{\psi})\geq C \tau^{-\frac{n-1}{2}}
\]
\paragraph{Modification of the pixel for the singularities of \(\phi\)}
Hopf's strong maximum principle for \(\harm{\phi}\)
guarantees that \(\nod{}{\harm{\phi}}\) meets the boundary of the pixel
transversely. The comparison of nodal sets reduces the problem to the
estimation of the nodal set of \(\harm{\phi}\).
Therefore using the geodesic spheres starting near
\(\nod{}{\psi}\) we obtain by the coarea formula, away from
singularities of \(\harm{\phi}\):
\[
\Ha{n-1}(\nod{}{\phi})= \sum_{j=0}^m\int_{\ensuremath{\epsilon}_{j+1}}^{\ensuremath{\epsilon}_j} dr
\int_{\nod{}{\psi}\subset S_r}d\Ha{n-2}(\nod{}{\psi})
\geq c\tau^{-\frac{n-1}{2}}
\]
\paragraph{The eigenfunctions} The eigenfunctions fulfill the hypotheses
of the preceding theorem and therefore
\begin{lo_in}
Let \( u : \ensuremath{ ({\bf M}^n,g) } \rightarrow \xw{}, \lap{g}u=-\ensuremath{\lambda} u, \ensuremath{ {\bf P} } \subset \ensuremath{ ({\bf M}^n,g) }\) be a pixel.
Then
\[
\Ha{n-1}(\nod{}{u_\ensuremath{\lambda}} \cap \ensuremath{ {\bf P} } ) \geq C(\ensuremath{ {\bf P} })\sqrt{\ensuremath{\lambda}}
\]
\end{lo_in}
\noindent We are placed in the pixel \(\ensuremath{ {\bf P} }\) with boundary spanned by the fronts
\( \ensuremath{ {\bf F} }_\ell, \ell=1,\dots,m_{i_1\dots i_k}\):
\[
\partial\ensuremath{ {\bf P} }=\bigcup_{k=1}^{n+1}
\bigcup_{\, \mbox{all} \: i_1,\dots, i_k}
\bigcup_{\ell=1}^{m_{i_1\dots i_k}} \ensuremath{ {\bf F} }_\ell
\]
The boundary of the pixel is not smooth, but we apply the smoothing method
described in the appendix.
Near each front we have the representation of the eigenfunction in the form:
\[
u(r,\dian{\theta})=e^{-\frac{1}{\sqrt{\ensuremath{\lambda}}}\mu(r,\dian{\theta})+
\beta_2\alpha(\dian{\theta})}
\sin(\beta_1\phi(r,\dian{\theta}))
\]
In the preceding paragraph we obtained the estimates that \(\phi\) satisfies
with the constant \(c_{11}\sim \ensuremath{\lambda}\). Therefore we have that inside a pixel
of diameter \(\frac{1}{\sqrt{\ensuremath{\lambda}}}\):
\[
\Ha{n-1}(\nod{}{u_\ensuremath{\lambda}} \cap \ensuremath{ {\bf P} })\geq C(\ensuremath{ {\bf P} })\ensuremath{\lambda}^{-\frac{n-1}{2}}
\]
and since the manifold is split into \(\ensuremath{\lambda}^{\frac{n}{2}}\) pixels of this
size we have the required estimate.
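The bookkeeping of this last step reads:
\[
\Ha{n-1}(\nod{}{u_\ensuremath{\lambda}})\geq \ensuremath{\lambda}^{\frac{n}{2}}\cdot C(\ensuremath{ {\bf P} })\ensuremath{\lambda}^{-\frac{n-1}{2}}
=C(\ensuremath{ {\bf P} })\sqrt{\ensuremath{\lambda}},
\]
in agreement with the lower bound stated in the lemma.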
\subsection{The upper bound}
In \cite{do} it is
proved that the Hausdorff measure of the nodal set
contained in a pixel \(\ensuremath{ {\bf P} }\), with its boundary smoothed out,
is majorised by:
\begin{equation}\label{locdong}
\Ha{n-1}(\nod{}{u_\ensuremath{\lambda}} \cap \ensuremath{ {\bf P} }) \leq\frac12\ensuremath{\int_{{\bf P}} } |\kl \log q|+\sqrt{n\ensuremath{\lambda}} \mbox{vol}(\ensuremath{ {\bf P} })+
\mbox{vol}(\partial\ensuremath{ {\bf P} })
\end{equation}
where
\[
q(u)= |\kl u|^2+\frac{\ensuremath{\lambda}}{n}u^2
\]
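A one dimensional sanity check of \eqref{locdong} (an illustration only): for \(n=1\) and \(u=\sin(\sqrt{\ensuremath{\lambda}}x)\) on \(\ensuremath{ {\bf P} }=[0,L]\) we have
\[
q=\ensuremath{\lambda}\cos^2(\sqrt{\ensuremath{\lambda}}x)+\ensuremath{\lambda}\sin^2(\sqrt{\ensuremath{\lambda}}x)=\ensuremath{\lambda},\qquad
\kl\log q=0,
\]
so the bound reduces to \(\Ha{0}(\nod{}{u}\cap \ensuremath{ {\bf P} })\leq \sqrt{\ensuremath{\lambda}}L+2\), consistent with the zeros of \(u\) being spaced \(\pi/\sqrt{\ensuremath{\lambda}}\) apart.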
We split the set \(\ensuremath{ {\bf P} }\) into three parts:
\begin{itemize}
\item The part that is free of nodes:
\[
\ensuremath{ {\bf P} }_\eta=\{ x\in \ensuremath{ {\bf P} }/ |u(x)|\geq
\eta\}=\tilde{\ensuremath{ {\bf P} }}\setminus\ensuremath{{\bf T}}_\eta(\nod{}{u})
\]
\item The tubular neighbourhood of the nodal set
\(\ensuremath{{\bf T}}_\eta(\nod{}{u}\cap \ensuremath{ {\bf P} })\), which is split further as
\[
\ensuremath{{\bf T}}_\eta(\nod{}{u}\cap \ensuremath{ {\bf P} })=
\mathscr{R}_\eta(\nod{}{u}\cap \ensuremath{ {\bf P} })\cup
\mathscr{S}_\eta(\nod{}{u}\cap \ensuremath{ {\bf P} })
\]
\subitem The regular part of the nodal set
\[
\mathscr{R}_\eta(\nod{}{u}\cap \ensuremath{ {\bf P} })=
\{ x\in \ensuremath{ {\bf P} }/ |u(x)|<\eta, |\kl u(x)|\geq \eta\}
\]
\subitem and the neighbourhood of the singular set:
\[
\ensuremath{{\bf Sing}}_\eta(\nod{}{u}\cap \ensuremath{ {\bf P} })=
\{x\in \ensuremath{ {\bf P} }/ |u(x)|<\eta,|\kl u(x)|< \eta \}
\]
\end{itemize}
The problems in \eqref{locdong} arise in the singular part of the nodal
set. We will estimate the behaviour of \(|\kl \log q|\) near
\(\ensuremath{{\bf Sing}}_\eta(\ensuremath{ {\bf P} })\). We will use induction with respect to the
multiplicity of the nodal set, introducing new pixels of
multiplicity bounded from below, and approximate harmonically the
eigenfunction there:
\[
\ensuremath{{\bf T}}_\eta(\nod{}{u}\cap\ensuremath{ {\bf P} })=
\bigcup_{\ell=0}^{m(\lambda)} \bigcup_{m=1}^{c_\ell} \ensuremath{{\bf Sing}}_{\ell,m}
\]
where \(\ensuremath{{\bf Sing}}_{\ell,m}\) is a connected component of multiplicity
\(\ell\):
\[
\ensuremath{{\bf Sing}}_{\eta;\ell,m}=\{ x\in\ensuremath{ {\bf P} }/ u(x)^2+\cdots+|\kl^{\ell-1}u(x)|\leq \eta,
|\kl^\ell u(x)|\geq \eta\}
\]
Notice that
\[
\ensuremath{{\bf Sing}}_{0,m}=\ensuremath{ {\bf P} }_\eta,\qquad \ensuremath{{\bf Sing}}_{1,m}=\ensuremath{{\bf Reg}}_\eta(\nod{}{u}\cap\ensuremath{ {\bf P} })
\]
The harmonic approximation of \(u_\ensuremath{\lambda}\) in \(\ensuremath{{\bf Sing}}_{\ell,m}\)
is denoted \(\harm{u}_{\ell,m}\). We introduce the localization functions
\(\zeta_{\ell,m}\) with supports
\(\ensuremath{\mbox{supp}}(\zeta_{\ell,m})\subset \ensuremath{{\bf Sing}}_{\ell,m} \).
We split the integral as
\[
\ensuremath{\int_{{\bf P}} } |\kl \log q|\leq
\sum_{m=0}^{c_0}\int_{\ensuremath{ {\bf P} }_\eta}\zeta_{0,m}|\kl\log q|+
\sum_{\ell=0,m=1}^{\lambda,c_\ell}\olwbn{\ell,m} \zeta_{\ell,m}|\kl \log q|
\]
Therefore we set
\begin{eqnarray*}
I_{\ell,j}[\zeta_{\ell,j}]= \olwbn{\ell,j} \zeta_{\ell,j}|\kl \log q|
\leq C\olwbn{\ell,j}\zeta_{\ell,j} \ensuremath{{\bf U}}
\end{eqnarray*}
where we set
\[
\ensuremath{{\bf U}}=\frac{|\kl|\kl u|^2|+c|u||\kl u|}{|\kl u|^2+cu^2},\qquad
\ensuremath{{\bf IU}}_{\ell,j}=\olwbn{\ell,j} \zeta_{\ell,j}\ensuremath{{\bf U}},
\]
We will estimate the terms appearing in the right hand side of the inequality.
If \(\ell=0\) we have, for \(\ensuremath{{\mathit D}}(\eta,\ensuremath{\lambda})=\eta^{nt}\ensuremath{\lambda}^{qn+3}\) where
\(t\) is a parameter to be selected, that:
\begin{eqnarray*}
\begin{split}
\eta \leq |u(x)| & \leq \ensuremath{{\mathit D}}(\eta,\ensuremath{\lambda})\eta\\
|\kl u(x)|& \leq \ensuremath{{\mathit D}}(\eta,\ensuremath{\lambda})\sqrt{\ensuremath{\lambda}}\eta\\
|\kl^2u(x)| & \leq C\ensuremath{{\mathit D}}(\eta,\ensuremath{\lambda})\ensuremath{\lambda}\eta
\end{split}
\end{eqnarray*}
Therefore we have that
\[
\ensuremath{{\bf IU}}_{0,j}\leq \ensuremath{\lambda}^{-\frac{n}{2}+qn+\frac32}\eta^{ns}
\]
and we select as
\[
\eta=\ensuremath{\lambda}^{-ms}, \qquad m=\frac{q}{s}+\frac{1}{ns}
\]
If \(\ell=1\) then we split the tube around the regular part \(\mathscr{R}\) of
\(\nod{}{u}\) in the layers
\[
\mathscr{R}=\bigcup_{j=1}^\infty\mathscr{R}_j,\qquad
\mathscr{R}_j= \{ x\in \ensuremath{ {\bf P} }/\theta^j\eta\leq |u(x)|\leq\theta^{j-1}\eta,
|\kl u|\geq \eta\}
\]
We have that in \(\mathscr{R}_j\)
\begin{eqnarray*}
\begin{split}
|u(x)| & \leq \ensuremath{{\mathit D}}(\eta,\ensuremath{\lambda})\theta^j\eta\\
\eta \leq |\kl u(x)|& \leq \ensuremath{{\mathit D}}(\eta,\ensuremath{\lambda})\sqrt{\ensuremath{\lambda}}\eta\\
|\kl^2u(x)| & \leq C\ensuremath{{\mathit D}}(\eta,\ensuremath{\lambda})\ensuremath{\lambda}\eta
\end{split}
\end{eqnarray*}
In this case we get
\[
\ensuremath{{\bf IU}}_{1,j}\leq \ensuremath{\lambda}^{-\frac{n}{2}+qn+\frac32}\eta^{ns}
\]
and we select \(\eta\) as before.
If \(\ell\geq 2\): For this we need sharper estimates
and we appeal to the harmonic approximation in order to use \L ojasiewicz
inequalities. We exhaust \(\ensuremath{{\bf Sing}}_{\ell,m}\):
\[
\ensuremath{{\bf Sing}}_{\ell,m}=\bigcup_{j=0}^\infty \ensuremath{{\bf Q}}^j_\ell(\theta,\eta),\qquad
\ensuremath{{\bf Q}}^j_\ell(\theta,\eta)=
\ensuremath{{\bf T}}^j_{\ell,m}\setminus\ensuremath{{\bf T}}^{j+1}_{\ell,m}
\]
for
\[
\ensuremath{{\bf T}}^j_{\ell,m}=\ensuremath{{\bf Sing}}_{\ell,m}\bigcap \{ x\in\ensuremath{ {\bf P} }/
\theta^{j+1}\eta\leq |u|\leq \theta^j\eta,\,\,
\theta^{j+1}\eta\leq |\kl u|\leq \theta^j\eta \}
\]
We introduce in each \(\ensuremath{{\bf Q}}^j_{\ell,m}(\theta,\eta)\) then
\[
\kappa_{\ell,m;\pm}=u\pm \harm{u}_{\ell,m}
\]
and drop the indices
\[
\kl |\kl u|^2= \kl (\kl \kappa_+\cdot \kl\kappa_-)+\kl|\kl \harm{u}|^2=
(\kl^2\kappa_+)\cdot\kl\kappa_-+(\kl^2\kappa_-)\cdot\kl\kappa_++
\kl|\kl \harm{u}|^2
\]
The harmonic approximation method combined with Harnack inequality
suggests that in \(\ensuremath{{\bf Q}}^j_{\ell,m}(\theta,\eta),\ell>1\) we have
Bernstein's inequalities
\[
|\kl u|\leq \ensuremath{{\mathit D}}(\eta,\ensuremath{\lambda})\sqrt{\ensuremath{\lambda}}\eta\theta^j
\qquad
|\kl^2u|\leq\ensuremath{{\mathit D}}(\eta,\ensuremath{\lambda})\ensuremath{\lambda}\eta\theta^j
\]
and also
\[
|\kl\kappa_-|\leq C_1\ensuremath{\lambda}\theta^{2(j+1)}\eta
\]
Set
\[
\beta=\ensuremath{{\mathit D}}(\eta,\ensuremath{\lambda})\eta\theta^j
\]
Furthermore since
\((a+b)^2\geq (1-\varepsilon)\left(a^2 -\frac{1}{\varepsilon}b^2\right) \)
then :
\[
|u|^2\geq (1-\varepsilon)\left(\harm{u}^2 -\frac1\varepsilon |\kappa_-|^2
\right)\geq\left(1 -\frac{\sqrt{\ensuremath{\lambda}}\beta}{\varepsilon}\right)\harm{u}^2
\]
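This elementary inequality is exact: expanding \((a+b)^2\) and applying Young's inequality \(2ab\geq-(\varepsilon a^2+b^2/\varepsilon)\) gives precisely the lower bound \((1-\varepsilon)(a^2-b^2/\varepsilon)\). A minimal numerical sanity check, illustrative only:

```python
import random

def lower_bound(a, b, eps):
    # Young: 2ab >= -(eps*a**2 + b**2/eps), hence
    # (a+b)**2 >= (1-eps)*a**2 + (1-1/eps)*b**2 = (1-eps)*(a**2 - b**2/eps)
    return (1 - eps) * (a**2 - b**2 / eps)

random.seed(0)
for _ in range(10_000):
    a, b = random.uniform(-5, 5), random.uniform(-5, 5)
    eps = random.uniform(1e-3, 1 - 1e-3)
    assert (a + b) ** 2 >= lower_bound(a, b, eps) - 1e-12
```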
and
\[
|\kl u|^2 \geq (1-\varepsilon)\left(1-\frac{\sqrt{\ensuremath{\lambda}}\beta}{\varepsilon}
\right)
|\kl\harm{u}|^2
\geq(1-\varepsilon)\left(1-\frac{\sqrt{\ensuremath{\lambda}}\beta}{\varepsilon}\right)
|\harm{u}|^{2(1-\nu(\ell))}
\]
In \(\ensuremath{{\bf Q}}^j_{\ell,m}\) we have:
\[
u^2\geq \left(1-\frac{\sqrt{\ensuremath{\lambda}}\beta}{\varepsilon}\right)\theta^{2(j+1)}\eta^2,
\qquad
|\kl u|^2\geq (1-\varepsilon^2)\left(1-\frac{\sqrt{\ensuremath{\lambda}}\beta}{\varepsilon}\right)
\theta^{2(1-\nu(\ell))(j+1)}\eta^{2(1-\nu(\ell))}
\]
The integrand is estimated according to the estimates derived above:
\[
\zeta_{\ell,m,j}\ensuremath{{\bf U}}\leq
\zeta_{\ell,m,j}\ensuremath{{\mathit D}}(\eta,\ensuremath{\lambda})^2\ogkos{\ensuremath{{\bf Q}}^j_{\ell,m}}
\ensuremath{\lambda}\eta^{2\nu(\ell)}\theta^{2j\nu(\ell)}
\]
Due to the multiplicity bound we have that
\[
\ogkos{\ensuremath{{\bf Q}}^j_{\ell,m}}\leq C\ensuremath{\lambda}^{-\frac{n-1}2}\sqrt{\ensuremath{\lambda}}\eta\theta^j, \qquad
\ensuremath{{\mathit D}}(\eta,\ensuremath{\lambda})\leq C(\eta^s\ensuremath{\lambda}^q)^n
\]
We set \(\eta=\ensuremath{\lambda}^{-m}\) and compute
\[
m=\frac{4qn+1}{2(2sn+2\nu+1)}+j
\]
Summing up we get
\[
\sum_{j=0}^\infty\ensuremath{{\bf IU}}_{\ell,j}\leq C\sum_{j=0}^\infty
\olwbn{\ell,j}\zeta_{\ell,j}
\leq \frac{C\sqrt{\ensuremath{\lambda}}}{1-\theta^{2\nu}}\ensuremath{\lambda}^{-j}
\]
and therefore we have that
\[
\sum_{j=0}^{C\sqrt{\ensuremath{\lambda}}}\ensuremath{{\bf IU}}_{j}\leq
\frac{C\sqrt{\ensuremath{\lambda}}}{1-\theta^{2\nu}}
\]
We will use repeatedly the following method that we call {\it harmonic
approximation method}. The domains encountered here \( \ensuremath{ {\bf W} }\)
have boundaries with {\it normal crossings} singularities: the
singular set \(\mathscr{S}(\partial \ensuremath{ {\bf W} })\) is given by the transversal
intersection of hypersurfaces: geodesic spheres with local equations
\(s_1,\dots,s_\ell, \ell=n,n+1\). The piece of the hypersurface
\(\mathscr{H}_\ensuremath{\epsilon}= \{ \dian{x} \in \ensuremath{ {\bf W} }/
(s_1\cdots s_\ell+\ensuremath{\epsilon})(\dian{x})=0\}\) near \(\mathscr{S}\)
is for suitable \(\ensuremath{\epsilon}\) a smooth hypersurface close to
\(\mathscr{S}(\partial \ensuremath{ {\bf W} })\). We will consider the domain
\(\tilde{ \ensuremath{ {\bf W} }}\) obtained by replacing the singular part
\(\mathscr{S}( \ensuremath{ {\bf W} })\) by \(\mathscr{H}_\ensuremath{\epsilon}\), replacing the defining
function, through cut-offs, by the function given there.
Let \(\harm{F}:\tilde{ \ensuremath{ {\bf W} }} \rightarrow \xw{}\) be the solution of the
boundary value problem:
\[
\lap{} \harm{F} =0,\qquad \harm{F}|_{\partial\tilde{ \ensuremath{ {\bf W} }}}=F
\]
\paragraph{Harmonic polynomials} We will also approximate the
harmonic function defined in the pixel \(\harm{F}\) by a sequence
\(\{F_n\}_{n\in{\bf N}}\) of functions such that
\begin{equation}\label{iter}
\begin{split}
\Delta_0 \harm{F}_0 = 0 ,\qquad
\Delta_0 \harm{F}_n = -\sum_{i,j}{\cal R}^{ij}
\frac{\partial^2 F_{n-1}}{\partial x_i\partial x_j}-g^{ij}\partial_i
\psi\partial_j F_{n-1}
\end{split}
\end{equation}
where
\[
g_{ij}=\delta_{ij}+{\cal R}_{ij} ,\,\, \varrho=\diam{ \ensuremath{ {\bf W} }},\,
\psi=\frac12\log(g),\, g=\mbox{det}(g_{ij})
\]
and for \(j=0,1,2\):
\[
||\kl^j{\cal R}||\leq C\mu\varrho^{2-j},
\]
Integration by parts after multiplication by \(\zeta^2F_n\) and
incorporation of the preceding estimates along with Young's inequality
leads to:
\[
\olww{} \zeta^2|\kl F_n|^2\leq C\mu\varrho^2 \olww{} \zeta^2|\kl F_{n-1}|^2+
C_2\olww{}\left(|\kl\zeta|^2+\zeta^2\right)F_n^2
\]
and
\[
\mbox{supp}(|\kl \zeta|)\subset \ensuremath{ {\bf A} }( \ensuremath{ {\bf W} })=
\{x\in \ensuremath{ {\bf W} }/d(x,\partial \ensuremath{ {\bf W} })<\ensuremath{\epsilon}\}
\]
and
\[
|\kl^j \zeta|\leq \frac{C_j}{\ensuremath{\epsilon}^j}
\]
We select \(C\mu \rho^2<1\); then
\[
\olww{} \zeta^2|\kl F_n|^2\leq C \olww{}|\kl(\zeta F_0)|^2
\]
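The uniform bound obtained by iterating this estimate is the standard contraction mechanism: if \(x_n\le q\,x_{n-1}+b\) with \(q<1\), then \(x_n\le q^nx_0+b/(1-q)\) stays bounded independently of \(n\). A toy sketch with placeholder constants (here \(q\) plays the role of the factor \(C\mu\rho^2\) of the text):

```python
def iterate(x0, q, b, n):
    """Iterate x_n = q*x_{n-1} + b; with q < 1 the limit is b/(1-q)."""
    x = x0
    for _ in range(n):
        x = q * x + b
    return x

# The Dirichlet energies of F_n stay uniformly bounded by a
# multiple of the initial data when the contraction factor is < 1.
q, b = 0.5, 1.0
x = iterate(10.0, q, b, 100)
assert abs(x - b / (1 - q)) < 1e-9   # limit is b/(1-q) = 2
```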
Similarly we have the inequalities:
\[
\olww{}\zeta^2 |\kl^2F_n|^2\leq C\left(\rho^4\olww{}\zeta^2
|\kl^2 F_{n-1}|^2+\rho^2
\olww{}\left(|\lap{} \zeta|+|\kl\zeta|^2\right)|\kl F_{n-1}|^2\right)
\]
and
\[
\olww{}\zeta^2 |\kl^3F_n|^2\leq C^2\left(
\rho^4\olww{}\zeta^2|\kl^3 F_{n-1}|^2+
\rho^2\olww{}\zeta^2|\kl^2F_{n-1}|^2+
\rho^4\olww{}\left(|\lap \zeta|+|\kl \zeta|^2\right)|\kl^2 F_{n-1}|^2\right)
\]
Therefore we have that after iteration:
\[
\olww{} \zeta^2|\kl^2F_n|^2\leq C\olww{}\zeta^2|\kl^2 F_0|^2
\]
as well as
\[
\olww{} \zeta^2|\kl^3F_n|^2\leq C\olww{}\zeta^2|\kl^3 F_0|^2
\]
The Nash-Moser iteration that we describe in the sequel allows us to bound
the sequence in \(C_0^2( \ensuremath{ {\bf W} })\). The Rellich lemma allows us to extract a
subsequence that converges in \(H^1( \ensuremath{ {\bf W} })\), and therefore we can
approximate the initial function by a harmonic polynomial to any accuracy
we desire.
\subsubsection{ The brick localization details}
Let \(\ensuremath{ {\bf P} }=\brick{\bjes{\ell}{k}}\) be a brick of size determined by the
parameters \((r(\ensuremath{ {\bf P} }),\mu(\ensuremath{ {\bf F} }_l))\). We use coordinates
\( \dian{x}=\dian{\xi}+\dian{c}\) for \(\dian{c}\) denoting the centre
of the brick. We construct cut-offs as follows.
Let \(\psi\in C^\infty_0(\overline{\bf R}_+)\) be the following function:
\[
\psi(t)= \left\{ \begin{array}{lll}
1 & \mbox{if} & 0 \leq t\leq 1\\
0 & \mbox{if} & t \geq \frac32
\end{array}
\right.
\]
Then the following function localizes in the brick:
\[
\ell_0(\dian{x})=\prod_{i=1}^n
\psi\left(\frac{\sqrt{4\xi_i^2+\ensuremath{\epsilon}_i^2}}{\sqrt{5}\ensuremath{\epsilon}_i}\right)
\]
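For concreteness, the brick localizer \(\ell_0\) can be sketched numerically; the smooth transition profile chosen for \(\psi\) below is one standard choice and is not specified by the text:

```python
import math

def psi(t):
    """Smooth cutoff: 1 on [0,1], 0 on [3/2,inf), smooth in between."""
    if t <= 1.0:
        return 1.0
    if t >= 1.5:
        return 0.0
    s = (t - 1.0) / 0.5                 # standard smooth step on (1, 3/2)
    f0 = math.exp(-1.0 / s)
    f1 = math.exp(-1.0 / (1.0 - s))
    return f1 / (f0 + f1)

def ell0(xi, eps):
    """Brick localizer: product of psi(sqrt(4*xi_i^2+eps_i^2)/(sqrt(5)*eps_i))."""
    return math.prod(
        psi(math.sqrt(4 * x * x + e * e) / (math.sqrt(5) * e))
        for x, e in zip(xi, eps)
    )

eps = [1.0, 1.0, 1.0]
assert ell0([0.0, 0.0, 0.0], eps) == 1.0   # equal to 1 at the centre
assert ell0([2.0, 0.0, 0.0], eps) == 0.0   # vanishes well outside the brick
```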
Also we will use the function that localizes in the neighbourhood of the
zeroes of the function
\[
\harm{h}:\ensuremath{ {\bf P} }\rightarrow \xw{},
\]
\[
\nod{}{\harm{h}}=\{\dian{x}\in \ensuremath{ {\bf P} }:\harm{h}(\dian{x})=0\}
\]
then the function
\[
\ell_{\harm{h}}(\dian{x})=\ell_0(\dian{x})\psi\left(\frac{\harm{h}(\dian{x})}{\ensuremath{\epsilon}}\right)
\]
localizes in the set:
\[
\nod{\ensuremath{\epsilon}}{\harm{h}}=\{\dian{x}\in\ensuremath{ {\bf P} }/|\harm{h}|\leq \ensuremath{\epsilon}\}
\]
We can prove inductively that:
\[
|\kl^j\ell_0|\leq C_j
\]
and
\[
|\kl^j \ell_{\harm{h}}(\dian{x})|\leq C_j
\sum |\kl^{i_1}\harm{h}|^{p_1}\dots|\kl^{i_m}\harm{h}|^{p_m}
|\kl^{j_1}\psi|^{q_1}\dots|\kl^{j_l}\psi|^{q_l}
\]
where we sum over all indices with \( i_1p_1+\cdots+i_mp_m+j_1q_1+\cdots+j_lq_l=\ell \),
and hence, again for indices \( {i_1p_1+\cdots+i_mp_m=\ell,\ell=0,\dots j}\), that:
\[
|\kl^j\ell_{\harm{h}}(\dian{x})|\leq C_j
\sum
\left|\frac{\kl^{i_1}\harm{h}}{\harm{h}}\right|^{p_1}
\dots\left|\frac{\kl^{i_m}\harm{h}}{\harm{h}}\right|^{p_m}
\]
These will be used successively in the sequel.
\subsubsection{\L ojasiewicz, Hardy for functions of the form
\( \harm{h}\circ N_{\dian{0}}^{-1}\)}
Let \(\harm{h}_0\) be a polynomial function in rectangular coordinates in
\(\mpalla{n}{\dian{0},R}\); then we have the following immediate result:
\begin{noc}
The function \(\harm{h}=\harm{h}_0\circ N^{-1}\) is a function that satisfies
the following
\begin{itemize}
\item The multiplicity strata \(\Sigma_m(\harm{h})\)
of the variety \(\nod{}{\harm{h}}\)
are mapped to \(\Phi(\Sigma_m(\harm{h}_0))=\Sigma_m(\harm{h})\).
\item The \L ojasiewicz inequalities hold true
\[
|\kl\harm{h}|\geq c_1|\harm{h}|^{1-\ell_1}, \,\,\,\, |\harm{h}(x)|\geq
c_2 d(x,\nod{}{\harm{h}})^{\ell_2}
\]
\end{itemize}
\end{noc}
The first conclusion comes from the chain rule in many variables:
\begin{eqnarray*}
D^\alpha\harm{h}_0= \sum C_{\alpha,\beta,\gamma(j)}
(D^{\beta}\harm{h})(\Phi(x)) (D^{\gamma_1}\Phi)^{e_1}\cdots
(D^{\gamma_l}\Phi)^{e_\ell}
\end{eqnarray*}
where the sum extends over all multiindices \(\alpha,\beta,\gamma(j)\in {\bf N}^n,
j=1,\dots,\ell,e_1,\dots,e_\ell\in {\bf N}\) such that:
\[
|\beta|=1, e_1\gamma_1+\cdots+e_\ell\gamma_\ell=|\alpha|-|\beta|
\]
Exchanging the role of \(\harm{h},\harm{h}_0,\Phi,N\) we get the defining equations of
the equimultiple locus.
For the second we compute:
\begin{eqnarray*}
|\kl f|=|D\Phi_{\dian{x}}(\kl \harm{h}_0)(\Phi(\dian{x}))|\geq C
|\kl \harm{h}_0(\Phi(\dian{x}))|
\geq C'|\harm{h}_0(\Phi(\dian{x}))|^{1-\ell_1}=C'|f(\dian{x})|^{1-\ell_1}
\end{eqnarray*}
where we have chosen \(R,r\) so that
\[
|D\Phi(\dian{x})|=|A+B(\dian{x})|\geq |A|-|B(\dian{x})|\geq \frac{|A|}{2}
\]
where \(|B|\leq \frac{|A|}{2}\). Similarly,
\[
|\harm{h}(\dian{x})|=|\harm{h}_0(\Phi(\dian{x}))|\geq c_2 d(\Phi(x),
\nod{}{\harm{h}_0})^{\ell_2}
\]
now due to the first inequality and the definition of the exponential map
we conclude that:
\[
d(\Phi(\dian{x}),\nod{}{\harm{h}_0})\geq
c'd(\dian{x},\nod{}{\harm{h}}))
\]
A consequence of this is that (GHI)'s hold for such functions.
\subsubsection{Hardy's inequalities}
Let \(P:\xw{n}\rightarrow\xw{} \) be a homogeneous polynomial of degree \(m\)
and \(\nod{}{P}\) its set of zeroes
\[
\nod{}{P}=\{ \dian{x}\in\xw{n}/P(\dian{x})=0\}
\]
Moreover let \(f\in C_0^\infty(\xw{n}\setminus \nod{}{P})\);
then there exist constants \cite{p}
\( 0<\koh{j}=\frac{4}{(n-2)^2}+O(\ensuremath{\epsilon}),\, n>2\), such that
\begin{subequations}\label{gh}
\begin{align}
\int_{\xw{n}} |P|^{-\frac2m} f^2 & \leq \koh{1}\int_{\xw{n}}|\kl f|^2
\label{gh1}\\
\int_{\xw{n}} \left|\frac{\kl P}{P}\right|^2f^2 & \leq\koh{2}\int_{\xw{n}}|\kl f|^2,
\label{gh2}\\
\int_{\xw{n}} \left|\frac{\Delta P}{P}\right|f^2 & \leq\koh{3}\int_{\xw{n}}|\kl f|^2,
\label{gh3}
\end{align}
\end{subequations}
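As a numerical illustration of \eqref{gh1} in the simplest case \(P(\dian{x})=|\dian{x}|^2\), \(m=2\), \(n=3\) (so the constant is \(4/(n-2)^2=4\)), one can check the classical Hardy inequality on a radial test function; the test function and grid below are arbitrary choices for illustration:

```python
import numpy as np

def trapz(y, x):
    """Composite trapezoidal rule (avoids numpy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Radial test function f(r) = r*exp(-r): smooth, vanishes at 0 and infinity.
# Classical Hardy inequality in R^3:
#   int f^2/|x|^2 <= (4/(n-2)^2) * int |grad f|^2, constant 4 for n = 3.
r = np.linspace(1e-6, 40.0, 200_000)
f = r * np.exp(-r)
df = np.exp(-r) * (1.0 - r)               # f'(r); |grad f| = |f'| for radial f

lhs = trapz(4 * np.pi * f**2, r)          # int f^2/|x|^2 dx = 4*pi*int f^2 dr
rhs = trapz(4 * np.pi * (df * r)**2, r)   # int |grad f|^2 dx = 4*pi*int f'^2 r^2 dr
assert lhs <= 4.0 * rhs                   # both sides equal pi analytically here
```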
From the euclidean Hardy's inequalities we obtain the riemannian
versions by modifying suitably the constants by \(1+\ensuremath{\epsilon}\).
\subsubsection{Integration formulas}
Let \(T\) be a tensor field of type \((p+1,0)\); then we introduce:
\[
A(T)_{i_1\dots i_pk}=T_{i_1\dots i_p;k}-T_{i_1\dots k;i_p}
\]
\[
D(T)_{i_1\dots i_{p-1}}=g^{\ell j}T_{i_1\dots i_{p-1}\ell;j}
\]
The general integration by parts formula reads as
\begin{int_by_parts}
Let \( T\) be a \((p,0)\) tensor field on the riemannian
manifold \(\ensuremath{ ({\bf M}^n,g) }\) and \( \phi=|T|^{k-1}\chi,\,\,\chi,\) a smooth
cut-off function supported in the domain \(\ensuremath{ {\bf K} }\). Then we have that
\[
\ensuremath{\int_{{\bf K}}} \phi^2|\kl T|^2 \leq\ensuremath{\int_{{\bf K}}} \frac12|A(T)|^2 + |D(T)|^2+
\ensuremath{\int_{{\bf K}}} \phi^2\sum_{i=1}^5(\ensuremath{{\mathrm Rm}}*T*T)_i+\ensuremath{\int_{{\bf K}}} |\kl \chi|^2 |T|^2
\]
\end{int_by_parts}
\noindent We set \(T_{{\bf i}j}=T_{i_1\dots i_{p-1}j}\)
and hence we have that
\[
\phi^2|A(T)|^2=2\phi^2\left(|\kl T|^2 - T_{{\bf i}k;j}T^{{\bf i}j;k}\right)
\]
The last term gives that
\[
\phi^2T_{{\bf i}k;j}T^{{\bf i}j;k}=
\left(\phi^2T_{{\bf i}k}T^{{\bf i}j;k}\right)_{;j}-
\phi^2 T_{{\bf i}k}T^{{\bf i}j;kj}-2\phi T_{{\bf i}k}\phi_j T^{{\bf i}j;k}
=\mbox{(BT)+(I)+(II)}\]
Furthermore we have that
\[
(\mbox{I})=\phi^2T_{{\bf i}k}T^{{\bf i}j;kj}=
\phi^2T_{{\bf i}k}\left(D(T)_{{\bf i}}\right)_{;k}
+\phi^2\ensuremath{{\mathrm Ric}}^k_{\,\,s}T^{{\bf i}s}T_{{\bf i}k}+
\phi^2\sum_{\ell=1}^p\ensuremath{{\mathrm Rm}}_s^{\,i_\ell kj}T^{{\bf i}_\ell}T_{{\bf i}k}
\]
for \({\bf i}_\ell=i_1\cdots i_{\ell-1}s\,i_{\ell+1}\cdots i_p\).
The first term then is written as
\[
\phi^2T_{{\bf i}k}\left(D(T)_{{\bf i}}\right)_{;k}=(BT)-
\phi^2 |D(T)|^2 - 2\phi T_{{\bf i}j}\phi_j (D(T))^{{\bf i}}
\]
The term (II) is written using that \(\phi=\chi |T|^{k-1}\):
\[
(\mbox{II})=2\phi T_{{\bf i}k}\phi_jA(T)^{{\bf i}jk}+\frac12\kl \log\chi\cdot
\kl |T|^2+(k-1)\phi^2|\kl |T||^2
\]
In summary we have that:
\[
\ensuremath{\int_{{\bf K}}} \phi^2 |\kl T|^2 \leq \ensuremath{\int_{{\bf K}}} \phi^2 \left(|D(T)|^2+\frac12|A(T)|^2\right)+
C\ensuremath{\int_{{\bf K}}} \left(|\ensuremath{{\mathrm Rm}}|+|\ensuremath{{\mathrm Ric}}|+|\kl \log\chi|^2\right)\phi^2|T|^2
\]
\subsubsection{Functions}
For a function \(\moms{f},\,\ensuremath{\mbox{supp}}{\phi}\subset \ensuremath{ {\bf K} }\) we have that:
\begin{subequations}\label{fint}
\begin{align}
\ensuremath{\int_{{\bf K}}} \phi^2|\kl^2 f|^2 &\leq
C\ensuremath{\int_{{\bf K}}} \phi^2\left(|\lap{}f|^2 +|\ensuremath{{\mathrm Ric}}||\kl f|^2\right)\label{fint1}\\
\ensuremath{\int_{{\bf K}}} \phi^2|\kl^3f|^2& \leq
C\ensuremath{\int_{{\bf K}}} \phi^2\left( |\kl(\lap{}f)|^2+|\ensuremath{{\mathrm Rm}}| |\kl f|^2\right)\label{fint2}
\end{align}
\end{subequations}
\subsubsection{Curvature}
The differential Bianchi identities are written as:
\[
\ensuremath{\cancel{\mbox{curl}}}(\ensuremath{{\mathrm Rm}})_{ijkl;m}=\ensuremath{{\mathrm Rm}}_{ijml;k}
\]
also:
\[
\ensuremath{\cancel{\mbox{div}}}(\ensuremath{{\mathrm Rm}})_{ijkl;l}=A(\ensuremath{{\mathrm Ric}})_{jki}+A(\ensuremath{{\mathrm Ric}})_{ikj}
\]
We recall here the Bach tensor:
\[
\mbox{B}_{ijk}=A(\ensuremath{{\mathrm Ric}})_{ijk}+\frac{1}{(n-1)(n-2)}\left[
g_{ij}\kl_k\ensuremath{{\mathrm R}}-g_{ik}\kl_j\ensuremath{{\mathrm R}}\right]
\]
and the contracted identities are written as
\[
\ensuremath{\cancel{\mbox{div}}}(\ensuremath{{\mathrm Ric}})_{i}=\klgs_i \ensuremath{{\mathrm R}}
\]
Therefore we have that
\[
\ensuremath{\int_{{\bf K}}} \phi^2|\kl \ensuremath{{\mathrm Rm}}|^2 = \ensuremath{\int_{{\bf K}}}\phi^2\left(4|B|^2+\frac{2}{(n-1)(n-2)^2}|\kl \ensuremath{{\mathrm R}}|^2+\sum_{i=1}^5 (\ensuremath{{\mathrm Rm}}*\ensuremath{{\mathrm Rm}})_i\right)
\]
\paragraph{The iterative method}
Let \( \momw{g,\chi}\) be smooth functions and \(\moms{\harm{h}}\) a polynomial
weight function of degree \(m\) then set
\[
\ensuremath{ {\bf W} }_j=\{ x\in \ensuremath{ {\bf W} }/ \quad
\theta(1-\theta^j)\frac{\eta}{2}\leq |\harm{h}(x)| \leq
(1-\theta+\theta^j)\eta\}
\]
and \( \ensuremath{\mbox{supp}}(\chi_j)\subset \ensuremath{ {\bf W} }_j\). This is achieved for instance by
\[
\chi_j(x)=\ell\left(\frac{\harm{h}(x)}{(1-\theta+\theta^j)\eta}\right)
\ell\left(\frac{\theta(1-\theta^j)\eta}{\harm{h}(x)}\right)
\]
We suppose that the smooth function \(g\) satisfies the inequality, for
positive constants \(\gamma>1, e=2,4\) and any smooth cut-off \(\chi\):
\[
\olww{} \chi^2|\kl g|^2 \leq \gamma\olww{} \chi^2 |g|^e
\]
Then the Sobolev inequality gives, for
\(s=\frac{np}{n-p},1< p<2, k,\ell,d>1\):
\begin{eqnarray}\label{init}
\left(\olww{j} \left(\chi^d |\harm{h}|^{-\ell} g^ k\right)^s\right)^{p/s}
\leq C_0k \olww{j-1}
\left(|\harm{h}|^{-\ell}\chi^d g^{k-1}\right)^p|\kl g|^p +
\ell\olww{j-1}\left|\frac{\kl \harm{h}}{\harm{h}}\right|^p
\left(\chi^d |\harm{h}|^{-\ell}g^k\right)^p+\\
+p\olww{j-1}\chi^{(d-1)p}|\kl \chi|^pg^{kp}|\harm{h}|^{-p\ell}
\end{eqnarray}
The first term then gives for \(r=\frac{2p}{2-p}\)
\begin{equation}
\begin{split}
\olww{j}
\left(|\harm{h}|^{-\ell}\chi^d g^{k-1}\right)^p|\kl g|^p& \leq
\left(\olww{j-1} \chi^{dp}
\left(|\harm{h}|^{-\ell}g^{k-1}\right)^r\right)^{\frac{p}{r}}
\left(\olww{j-1} \chi^{dp} |\kl g|^2\right)^{p/2}\leq \\
& \leq C_1((1-\theta+\theta^j)\eta)^{p\ell}\gamma^{\frac{p}{2}}
\ogkos{ \ensuremath{ {\bf W} }_j}^{\frac{(k-1)r-e}{(k-1)r}}
\left(\olww{j-1} \chi^{\frac{2rdp}{e}}|\harm{h}|^{-r\ell} g^{(k-1)r}
\right)^{\frac{p(k-1+\frac{e}{2})}{r(k-1)}}
\end{split}
\end{equation}
since we have
\begin{eqnarray*}
\left(\olww{j} \chi^{2dp} |\kl g|^2\right)^{p/2} \leq
\gamma^{\frac{p}{2}}\left((1-\theta+\theta^j)\eta\right)^{\frac{pe\ell}{k-1}}
\ogkos{ \ensuremath{ {\bf W} }_j}^{\frac{(k-1)r-e}{(k-1)r}}
\left(\olww{j-1} \chi^{\frac{2rdp}{e}}|\harm{h}|^{-r\ell} g^{(k-1)r}
\right)^{\frac{pe}{2r(k-1)}}
\end{eqnarray*}
The middle term after application of \eqref{gh2} gives that:
\begin{eqnarray*}
\begin{split}
\olww{j}\left|\frac{\kl \harm{h}}{\harm{h}}\right|^p
\left(\chi^d |\harm{h}|^{-\ell}g^k\right)^p& \leq
\left(\olww{j-1}
\chi^{dp}\left(h^{-\ell}g^{(k-1)}\right)^r\right)^{\frac{p}{r}}
\left(\olww{j-1}\left|\frac{\kl \harm{h}}{\harm{h}}\right|^2\chi^{dp}g^2
\right)\\
& \leq C_1 ((1-\theta+\theta^j)\eta)^{p\ell}\gamma^{\frac{p}{2}}
\ogkos{ \ensuremath{ {\bf W} }_j}^{\frac{(k-1)r-e}{(k-1)r}}
\left(\olww{j-1} \chi^{\frac{2rdp}{e}}|\harm{h}|^{-r\ell} g^{(k-1)r}
\right)^{\frac{p(k-1+\frac{e}{2})}{r(k-1)}}
\end{split}
\end{eqnarray*}
In an analogous way the last term leads to:
\[
\olww{j} |\kl \chi|^p(\harm{h}^{-\ell}g)^k\leq
C_1((1-\theta+\theta^j)\eta)^{p\ell}\gamma^{\frac{p}{2}}
\ogkos{ \ensuremath{ {\bf W} }_j}^{\frac{(k-1)r-e}{(k-1)r}}
\left(\olww{j-1} \chi^{\frac{2rdp}{e}}|\harm{h}|^{-r\ell} g^{(k-1)r}
\right)^{\frac{p(k-1+\frac{e}{2})}{r(k-1)}}
\]
In summary we arrive at
\[
\left(\olww{j}
\left(\chi^d|\harm{h}|^{-\ell}g^k\right)^s\right)^{\frac{1}{ks}}\leq
C_3k^{\frac1k}(\eta(1-\theta+\theta^j))^{\frac{p\ell}{k}}\gamma^{\frac{p}{2k}}
\ogkos{ \ensuremath{ {\bf W} }_j}^{\frac{(k-1)r-e}{(k-1)kr}}
\left(\olww{j-1} \chi^{\frac{2}{e}dr}|\harm{h}|^{-r\ell} g^{(k-1)r}
\right)^{\frac{(k-1+\frac{e}{2})}{k}\cdot\frac{1}{r(k-1)}}
\]
Notice that
\[
r=qs, \quad q(n,p)=\frac{1-\frac{p}{n}}{1-\frac{p}{2}}, \qquad
1-\frac{2}{n}\leq q(n,p)\leq 2-\frac{2}{n}
\]
We conclude with the basic inequality that we will iterate
\[
\left(\frac{1}{v_j}\olww{j} G^{ks}\right)^{\frac{1}{ks}}\leq C_j
\left(\frac{1}{v_{j-1}}\olww{j-1} G^{(k-1)r}\right)^{\frac{1}{(k-1)r}}
\]
where
\[
r=\frac{s}{a}, \qquad a=\frac{n}{n-1}, \qquad v_j=\ogkos{ \ensuremath{ {\bf W} }_j}
\]
\[
C_j=C_3\left[k\eta^{\frac{pks+1}{ks}\ell}
\left(\theta(1-\theta^{j-1})\right)^{-\frac{k-1+\frac{e}{2}}{k}\cdot\frac{\ell}{k-1}}
\gamma^{\frac{p}{2k}}
v_j^{\frac{(k-1)r-e}{(k-1)r}}\beta_j\right]^{\frac1k},
\]
\[
\beta_j=\frac{v_j^{\frac{1}{r(k_j-1)}}}{v_{j+1}^{\frac{1}{k_{j+1}s}}}
\]
and we replace \(\beta_j\) by the upper bound
\[
\beta_j\leq \frac{ \eta^{\frac{1}{a(n-2)}} }{\theta^{\frac1a}}=\sigma
\]
In order to bring this to the standard iteration form we do the following:
\[
k_j=\frac{a^{j+1}}{s}+1
\]
Finally we arrive at the basic iteration inequality:
\[
I_{j+1}=\left(\frac{1}{v_{j+1}}\olww{j+1}G^{a^{j+1}}\right)^{\frac{1}{a^{j+1}}}\leq
C_j
\left(\frac{1}{v_j}\olww{j}G^{a^j}\right)^{\frac{1}{a^j}},
\]
that is, the iteration inequality
\begin{eqnarray}\label{anadr}
I_{j+1} \leq C_j I_j
\end{eqnarray}
The iteration leads to the inequality
\[
\eaf{ \ensuremath{ {\bf W} }_\infty}|G|\leq D \left(\frac{1}{\ogkos{ \ensuremath{ {\bf W} }_0}}
\olww{0}G^p\right)^{\frac1p},\qquad
D=\lim_{j\rightarrow\infty}\left(C_1\dots C_j\right)
\]
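The finiteness of \(D\) is the heart of the Moser scheme: the factors \(C_j\) enter with exponents of size \(1/k_j\sim a^{-j}\), which are geometrically summable, so the infinite product converges. A toy computation with placeholder constants (the true \(C_j\) also carry the \(\eta\)-, \(\theta\)- and volume-dependent factors above):

```python
import math

def moser_constant(C3, a, n_terms=200):
    """Product of factors C3**(1/a**j): converges since sum_j a**-j is geometric."""
    log_D = sum(math.log(C3) / a**j for j in range(1, n_terms + 1))
    return math.exp(log_D)

a = 1.5          # a = n/(n-1) for n = 3
C3 = 10.0        # placeholder for the structural constant
D = moser_constant(C3, a)
# closed form: exp(log(C3) * sum_{j>=1} a**-j) = C3**(1/(a-1))
assert abs(D - C3 ** (1 / (a - 1))) < 1e-6
```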
We select the local density parameter as
\(\theta\sim\frac{1}{\gamma^t},t>0\).
The constant is estimated through elementary inequalities of the form:
\[
\frac{1}{\gamma^t a^2-1}\leq
-\sum_{j=1}^\infty \frac{1}{a^{2j}}\log\left(1-\frac{1}{\gamma^{tj}}\right)
\leq \frac{\gamma^t}{\gamma^t-1}\frac{1}{\gamma^ta^2-1}
\]
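These bounds can be checked numerically; note the sum is taken over \(j\geq 1\), the \(j=0\) term being singular. Since \(x\leq-\log(1-x)\leq\frac{x}{1-x}\) for \(0<x<1\), applying this with \(x=\gamma^{-tj}\leq\gamma^{-t}\) yields the two sides. A short sketch with sample values of \(a\) and \(\gamma^t\):

```python
import math

def S(a, g, n_terms=200):
    """S = -sum_{j>=1} a**(-2j) * log(1 - g**(-j)), with g = gamma**t > 1."""
    return -sum(math.log(1.0 - g**(-j)) / a**(2 * j)
                for j in range(1, n_terms + 1))

a, g = 1.5, 2.0
lower = 1.0 / (g * a**2 - 1.0)            # from -log(1-x) >= x
upper = (g / (g - 1.0)) * lower           # from -log(1-x) <= x*g/(g-1)
s = S(a, g)
assert lower <= s <= upper
```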
We arrive finally at
\[
c=\eta^{\ell pn+\frac{a+1}{a(n-2)}}\gamma^{\frac{pn(t+1)}{2t}+\frac{t-1}{n-2}}
\]
We will denote the constant in the form:
\[
\ensuremath{{\mathit D}}(\eta,\gamma)=\left(\eta^s\gamma^q\right)^n ,
\qquad s=\ell p+\frac{a+1}{na(n-2)},
\qquad
q=\frac{p(t+1)}{2t}+\frac{t}{n(n-2)}
\]
\paragraph{Back to harmonic approximation} The harmonic
approximation method suggests that
\[
\eaf{ \ensuremath{ {\bf W} }_\infty}|\kappa|\leq \ensuremath{{\mathit D}}(\eta,\ensuremath{\lambda})
\left(\frac{1}{\ogkos{ \ensuremath{ {\bf W} }_0}}\olww{0}u^2\right)^{1/2}
\]
Similarly we have the higher order inequalities
\[
\eaf{ \ensuremath{ {\bf W} }_\infty}|\kl\kappa|\leq
\ensuremath{{\mathit D}}(\eta,\ensuremath{\lambda}^2)
\left(1+\ensuremath{{\mathit D}}(\eta,\mu)\ensuremath{\lambda}\frac{1}{\ogkos{ \ensuremath{ {\bf W} }_0}}\olww{0}|\ensuremath{{\mathrm Ric}}|\right)^{1/2}
\left(\frac{1}{\ogkos{ \ensuremath{ {\bf W} }_0}}\olww{0}u^2\right)^{1/2}
\]
Let \(\harm{u}>0\) be a harmonic function. Then set for \(j=0,1\)
\[
h_j=\left|\frac{\kl^{j+1}\harm{u}}{|\kl^j\harm{u}|}\right|^2,\qquad
H_0=\left|\frac{\kl|\kl \harm{u}|^2}{|\kl \harm{u}|^2}\right|
\]
and compute
\[
\lap{g}h_0\geq -(|\ensuremath{{\mathrm Ric}}|+2)h_0^2+\frac23h_1h_0\geq -(|\ensuremath{{\mathrm Ric}}|+2)h_0^2
\]
or that
\[
\olww{} \zeta^2|\kl h_0|^2\leq \olww{} (|\ensuremath{{\mathrm Ric}}|+2)\zeta^2h_0^3
\]
We note that, after application of Hardy's inequality, the integral is
majorised as
\[
\olww{}\phi^2h_0^3\leq (1+\ensuremath{\epsilon}^2)\koh{2}\olww{}\phi^2|\kl h_0|^2+
\frac{1}{\ensuremath{\epsilon}^2}\olww{} |\kl \phi|^2
\]
We compute that
\[
|\kl h_0|^2\leq (1+\ensuremath{\epsilon})^2(H_0h_0^2+\frac{1}{\ensuremath{\epsilon}^2}h_0)
\]
then we find
\[
\olww{}\phi^2|\kl h_0|^2\leq (1+\ensuremath{\epsilon})^2\olww{}
\phi^2(H_0h_0^2+\frac{1}{\ensuremath{\epsilon}^2}h_0) \leq
(1+\ensuremath{\epsilon}^2)\koh{2}\olww{}|\kl\phi|^2\left(\frac{1}{\ensuremath{\epsilon}^2}+h_0^2\right)+
\phi^2|\kl h_0|^2
\]
If \(n>2,\ensuremath{\epsilon}'>0\) then select
\[
\frac{n^2-2n-\ensuremath{\epsilon}'^2}{n^2-2n+8}=\ensuremath{\epsilon}^2<\frac{n^2-2n}{n^2-2n+8}
\]
and conclude that
\[
\olww{}\phi^2|\kl h_0|^2\leq c_n(\ensuremath{\epsilon}')
\olww{}|\kl\phi|^2\left(\frac{1}{\ensuremath{\epsilon}^2}+h_0^2\right), \quad
c_n(\ensuremath{\epsilon}')=\frac{8-\ensuremath{\epsilon}'^2}{n^2-2n-\ensuremath{\epsilon}'^2}
\]
and hence
\[
\olww{} \phi^2h_0^3 \leq c_n(\ensuremath{\epsilon}')\olww{} \phi_1^2(1+\ensuremath{\epsilon}'h_0^2),\quad
\phi_1=|\kl\phi|
\]
Repeating the procedure we get
\[
\olww{}\phi_1^2h_0^2\leq c_n(\ensuremath{\epsilon}')\olww{}
(1+\ensuremath{\epsilon}^2h_0)|\phi_2|^2,\qquad \phi_2=|\kl\phi_1|
\]
and
\[
\olww{}\phi_2^2h_0\leq c_n(\ensuremath{\epsilon}')\olww{}
(1+\ensuremath{\epsilon}^2)|\phi_3|^2,\qquad \phi_3=|\kl\phi_2|
\]
We apply this formula for \(\phi=(|\ensuremath{{\mathrm Ric}}|+2)^{1/2}\zeta\) to obtain that
\[
\olww{}\zeta^2|\kl h_0|^2\leq \olww{}
\sum_{i=1,j\leq i}^3c_n(\ensuremath{\epsilon}')^i|\kl^{i-j}\ensuremath{{\mathrm Ric}}||\kl^j\zeta|^2
\]
and we obtain
\[
\eaf{ \ensuremath{ {\bf W} }}h_0\leq \ensuremath{{\mathit D}}(t,\gamma^{-3})
\olwwr{ \ensuremath{ {\bf W} }}|\ensuremath{{\mathrm R}}|, \qquad t=\max(t_1( \ensuremath{ {\bf W} }),t_2( \ensuremath{ {\bf W} }),t_3( \ensuremath{ {\bf W} }))
\]
where \(|\kl^j\zeta|\leq c\gamma^{-j}\).
\paragraph{Growth of a function near its nodal set }
We assume that
\[
\olww{}\zeta^2|\kl u|^2\leq \tau\olww{}\zeta^2u^2
\]
Let
\[
\ensuremath{ {\bf W} }_j=\{x\in \ensuremath{ {\bf P} }/ (1-\theta^j)\theta\eta\leq
|\harm{u}(x)|\leq (1-\theta+\theta^j)\eta \}\bigcap
B_{C,\frac{1}{\sqrt{\tau}}}
\]
and
\[
\ensuremath{ {\bf A} }( \ensuremath{ {\bf W} }_j)=
\ensuremath{ {\bf W} }_j\setminus
\{x\in \ensuremath{ {\bf P} }/ (1+\theta^{j+1})\theta^2\leq |\harm{u}(x)|\leq
\theta(1-\theta^j)\eta\}\bigcap B_{C,\frac{1}{\sqrt{\tau}}}
\]
The cut-off function \(\zeta\) satisfies the following estimate:
\[
|\kl^\ell\zeta|\leq \frac{c_\ell}{\theta^\ell}
\]
We apply Hardy's inequality
\[
\olww{j}\zeta^2u^2=\olww{j}\harm{u}^{\frac2m-\frac2m}(\zeta u)^2\leq
\koh{1} \left(\eaf{ \ensuremath{ {\bf W} }_j}|\harm{u}|\right)^{\frac2m}\olwwr{j}|\kl(\zeta u)|^2 \leq
\]
\[
\leq
C \left(\eaf{ \ensuremath{ {\bf W} }_j}|\harm{u}|\right)^{\frac2m}(1+\ensuremath{\epsilon})\left[
\frac1\ensuremath{\epsilon}\olwwr{ \ensuremath{ {\bf W} }_j}|\kl\zeta|^2u^2+
\tau\olwwr{ \ensuremath{ {\bf W} }_j}(u\zeta)^2\right]
\]
Therefore, close to \(\ensuremath{{\bf T}}_\eta\left(\nod{}{\harm{u}}\right)\), for
\(\eta=(2\koh{1}\tau(1+\ensuremath{\epsilon}))^{-\frac{m}{2}}\), we have that
\[
\olww{j}\zeta^2u^2\leq
\frac{1}{\tau^{\frac{m}{2}}\ensuremath{\epsilon}}\olwwr{ \ensuremath{ {\bf W} }_j}|\kl \zeta|^2u^2\leq
\frac{c_1}{\tau^{\frac{m}{2}}\ensuremath{\epsilon}\theta^2}\olwwr{ \ensuremath{ {\bf W} }_j}u^2
\]
We have that:
\[
\olwwr{ \ensuremath{ {\bf W} }_j}u^2\leq 2\olwwr{ \ensuremath{ {\bf W} }_j}\left(\harm{u}^2+\kappa^2\right)
\]
and the second integral is estimated again as
\[
\olwwr{ \ensuremath{ {\bf W} }_j}\kappa^2\leq
\koh{1}(\theta\eta)^{\frac2m}\tau\olwwr{ \ensuremath{ {\bf W} }_j} u^2
\]
and hence
\[
\olwwr{ \ensuremath{ {\bf W} }_j}u^2\leq \frac{4}{2-\theta^{\frac2m}}
\olwwr{ \ensuremath{ {\bf W} }_j}\harm{u}^2
\]
The coarea formula, combined with the \L ojasiewicz inequality, then gives
\[
\olwwr{ \ensuremath{ {\bf W} }_j}\zeta^2\harm{u}^2=
\int_{(\theta-\theta^j)\eta}^{(\theta-\theta^j)(1+\xi)\eta}
d\mu\int_{\{ \harm{u}=\mu\}} \frac{\harm{u}^2d\sigma_\mu}{|\kl \harm{u}|}
+\int_{(1-\theta+\theta^j)\xi\eta}^{(1-\theta+\theta^j)\eta}
d\mu\int_{\{ \harm{u}=\mu\}} \frac{\harm{u}^2d\sigma_\mu}{|\kl \harm{u}|}
\leq
\]
\[
\int_{(\theta-\theta^j)\eta}^{(\theta-\theta^j)(1+\xi)\eta}
\mu^{\nu+1}\alpha(\mu)d\mu
+\int_{(1-\theta+\theta^j)\xi\eta}^{(1-\theta+\theta^j)\eta}
\mu^{\nu+1}\alpha(\mu)d\mu
\]
Inside a ball of radius \(\tau\) applying Crofton formula if the
multiplicity of \(\harm{u}\) is \(m\)
\[
\alpha(\mu)=\int_{\{ \harm{u}=\mu\}}d\sigma_\mu\leq c\tau^{-\frac{n-1}{2}}m
\]
Therefore we find that for \(\sigma_j=\theta-\theta^j\):
\[
\olwwr{ \ensuremath{ {\bf W} }_j}\zeta^2\harm{u}^2\leq
\tau^{-\frac{n-1}{2}}\eta^{\nu+2}
\left(\sigma_j^{\nu+1}\xi^{\nu+2}+
\frac{(1-\sigma_j)^{\nu+2}(1-3\sigma_j)}{1-2\sigma_j} \right)
\]
Finally we have that
\[
\olww{j}\zeta^2u^2\leq C\tau^{-\frac{n-1}{2}}\eta^{\nu+2}
\]
Moreover we recall the following identity from \cite{p}:
\[
\eta^3\,\eta\frac{d}{d\eta}\left(\eta^{-3}\olww{}\zeta^2\harm{u}^2\right)=
-\olww{}\frac{\kl Q\cdot\kl\harm{u}}{Q^2}\zeta^2\harm{u}^3
\]
for \(Q=|\kl \harm{u}|^2\geq c|\harm{u}|^{2(1-\nu)}\). This leads to
\[
\eta^3\,\eta\frac{d}{d\eta}\left(\eta^{-3}\olww{}\zeta^2\harm{u}^2\right)\leq
\olww{}\left|\frac{\kl Q}{Q}\right|\zeta^2\harm{u}^{\nu+2}\leq
\olww{}\left|\frac{\kl Q}{Q}\right|^2\zeta^2\harm{u}^{\nu+2}
\]
We apply Hardy's inequality and get
\[
\eta I'(\eta)\leq
C\left(1+\tau^{-\frac{n+1}{2}}\eta^{\nu}\right)\tau\eta^\nu I(\eta)
\]
for
\[
I(\eta)=\eta^{-3}\olww{}\zeta^2\harm{u}^2
\]
Finally we get that for \(\eta>\eta_0\):
\[
I(\eta)\leq C
e^{c_\nu\left(1+\tau^{-\frac{n+1}{2}}\eta^{\nu+1}\right)\tau\eta^\nu}
I(\eta_0)
\]
and
\[
\olww{}\zeta^2\harm{u}^2 \leq C
e^{c_\nu\left(1+\tau^{-\frac{n+1}{2}}\eta^{\nu+1}\right)\tau\eta^\nu}
\left(\frac{\eta}{\eta_0}\right)^3 \olww{0}\zeta^2\harm{u}^2
\]
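The passage from the differential inequality \(\eta I'(\eta)\leq c(\eta)I(\eta)\) to the exponential bound is a multiplicative Gronwall argument: divide by \(I\) and integrate \(\log I\). A toy check with a simplified coefficient \(K\eta^\nu\) (placeholder values, equality case of the ODE):

```python
import math

def integrate(K, nu, eta0, eta1, I0, steps=200_000):
    """Solve eta*I'(eta) = K*eta**nu * I(eta) by log-space Euler steps."""
    h = (eta1 - eta0) / steps
    logI, eta = math.log(I0), eta0
    for _ in range(steps):
        logI += K * eta**(nu - 1) * h    # d(log I)/d(eta) = K*eta**(nu-1)
        eta += h
    return math.exp(logI)

K, nu, eta0, eta1 = 0.7, 2.0, 0.1, 1.0
I_num = integrate(K, nu, eta0, eta1, 1.0)
I_exact = math.exp(K * (eta1**nu - eta0**nu) / nu)   # closed-form solution
assert abs(I_num - I_exact) < 1e-3
```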
\paragraph{Morrey estimates}
Let \(\ensuremath{\epsilon}<1,\,\, 0<\gamma<1\,\, \mbox{or}\,\, \gamma<0,\, p<2\):
\[
u_\ensuremath{\epsilon}=\sqrt{u^2+\ensuremath{\epsilon}^2},\qquad
\psi_\ensuremath{\epsilon}=\log u_\ensuremath{\epsilon}, \qquad w=u_\ensuremath{\epsilon}^\gamma
\]
and for \(\zeta,\ensuremath{\mbox{supp}}{(\zeta)}\subset \ensuremath{ {\bf W} }\):
\begin{eqnarray}\label{assum}
\olww{} \zeta^2|\kl u_\ensuremath{\epsilon}|^2\leq \tau\olww{} \zeta^2 u_\ensuremath{\epsilon}^2
\end{eqnarray}
Then for \(q=\frac2\gamma\):
\begin{subequations}\label{moiq}
\begin{align}
\olww{} |\kl w|^p\zeta^p &\leq C_1(\tau)\olww{} |\kl \zeta|^2w^q\label{moiq1}\\
\olww{} |\kl \psi_\ensuremath{\epsilon}|^2\zeta^2 &\leq C_2(\tau)\olww{} |\kl
\zeta|^2+\zeta^2\label{moiq2}
\end{align}
\end{subequations}
The inequality \eqref{moiq1} follows after replacing \(\zeta\) by
\(\zeta u_\ensuremath{\epsilon}^{\gamma-1}\) in \eqref{assum}: since
\[
|\kl w|=|\gamma| w^{1-\frac1\gamma}|\kl u_\ensuremath{\epsilon}|
\]
we get that
\[
\olww{} \zeta^2|\kl w|^2\leq \frac{C_0}{\gamma^2}\olww{} \zeta^2w^2
\]
\noindent The inequality \eqref{moiq2} requires the additional assumption
for \(\tau>0\):
\[
\olww{}\zeta^2|\kl^2u|^2\leq \tau^2 \olww{} \zeta^2 u^2
\]
We start selecting values \(v_1,\dots, v_m>0\) and assume that
\[
u=v_j+h_j
\]
making the following choice:
\[
|h_j|\leq \ensuremath{\epsilon} |v_j|
\]
then
\[
(v_j+h_j)^2 \geq (1-\ensuremath{\epsilon})\left(v_j^2-\frac{h_j^2}{\ensuremath{\epsilon}}\right)\geq
\left(1-\ensuremath{\epsilon}\right)^2v_j^2
\]
We approximate harmonically \(h_j\) in suitable bricks selected
so that we use the initial form of \(\harm{h}_j\). Hence we have that
for \(\psi=\log(u_\ensuremath{\epsilon}),\tilde{\psi}=\log(h)\)
\[
\ensuremath{\int_{{\bf P}} }{} |\kl\psi|^2\zeta^2\leq c\ensuremath{\int_{{\bf P}} }{}|\kl \tilde{\psi}|^2\zeta^2
\]
Therefore let \(\harm{h}\) be the harmonic approximation of \(h\) in
\( \ensuremath{ {\bf W} }\) and we set:
\[
h=\harm{h}+\kappa
\]
The standard harmonic approximation method estimates combined
with partial integration leads us to
\[
\olww{}\zeta^2 |\kl^2\kappa|^2\leq \ensuremath{{\mathit D}}(\eta,\tau^2)\olww{} \zeta^2u^2
\]
The estimate of the preceding paragraph gives
\[
|\kl\kappa|\leq \eaf{ \ensuremath{ {\bf W} }_0}|\kl\kappa|\leq c
\left(\olww{} \zeta^2u^2\right)^{1/2},\qquad
c=\frac{\ensuremath{{\mathit D}}(\eta,\tau)+\ensuremath{{\mathit D}}(\eta,\mu)\tau ||\ensuremath{{\mathrm Ric}}||_{1}}{\ogkos{ \ensuremath{ {\bf W} }_0}}
\]
We compute for \(\ensuremath{\epsilon}<1\):
\[
u_\ensuremath{\epsilon}^2=\harm{u}^2+2\kappa\harm{u}+\kappa^2+\ensuremath{\epsilon}^2\geq
(1-\ensuremath{\epsilon}^2)\harm{u}^2+(1-\frac{1}{\ensuremath{\epsilon}^2})\kappa^2+\ensuremath{\epsilon}^2
\]
Then we select \(\ensuremath{\epsilon}\) such that
\[
\kappa^2(1-\frac{1}{\ensuremath{\epsilon}^2})+\ensuremath{\epsilon}^2>\frac12\ensuremath{\epsilon}^2
\]
and therefore we get that
\[
|\kappa|\leq \frac{\ensuremath{\epsilon}^2}{\sqrt{2(1-\ensuremath{\epsilon}^2)}}
\]
Let then \(m\) denote the highest multiplicity of
\(\harm{u}\). We apply the preceding estimates to conclude that for
\(\ensuremath{\epsilon}'=\frac{\ensuremath{\epsilon}^2}{2(1-\ensuremath{\epsilon}^2)}\):
\[
\olww{} \zeta^2\frac{|\kl u_\ensuremath{\epsilon}|^2}{u_\ensuremath{\epsilon}^2}\leq
\frac{1}{2(1-\ensuremath{\epsilon}'^2)}\olww{}
\left|\frac{\kl\harm{u}_{\ensuremath{\epsilon}'}^2}{\harm{u}_{\ensuremath{\epsilon}'}^2}\right|\zeta^2+
c \olww{} \zeta^2u^2\olww{}\frac{\zeta^2}{\harm{u}_{\ensuremath{\epsilon}'}^2}
\]
Now we use the estimate of the preceding paragraph
\[
||\zeta u||_{2, \ensuremath{ {\bf W} }}\leq C\mu\tau^{-\frac{n-1}{2}}\eta^{\nu+2}
\]
and obtain that
\[
\olww{}\zeta^2|\kl \psi_\ensuremath{\epsilon}|^2\leq C_\ensuremath{\epsilon}\olww{}|\kl\zeta|^2+
c\tau^{-\frac{n-1}{2}}\eta^{\nu+2}\olww{}\frac{\zeta^2}{u_{\ensuremath{\epsilon}'}^2}
\]
We set \(e_k(m)=1-\frac{k}{m}\) and then
\[
\olww{} \frac{\zeta^2}{\harm{u}^2}=
\olww{} \frac{(\harm{u}^{-e_1}\zeta)^2}{\harm{u}^{\frac2m}}
\leq
\koh{1}\left[e_1^2\olww{} \left|\frac{\kl \harm{u}}{\harm{u}}\right|^2
\left(\harm{u}^{-e_1}\zeta\right)^2 +\olww{}\harm{u}^{-2e_1}|\kl\zeta|^2\right]
\]
The constant in Hardy's inequality is
\(\koh{1,2}\sim\frac{4}{(d-1)^2}, d\geq 3\):
\[
\olww{}|\kl\harm{\psi}|^2\harm{u}^{-2e_1}\zeta^2\leq(1+\ensuremath{\epsilon})\koh{1}
\left[\frac1\ensuremath{\epsilon}\olww{}\harm{u}^{-2e_1}|\kl\zeta|^2+e_1^2\olww{}
|\kl\harm{\psi}|^2\harm{u}^{-2e_1}\zeta^2\right]
\]
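The one dimensional model behind this constant, \(\int_0^\infty g^2/r^2\,dr\leq 4\int_0^\infty g'^2\,dr\), can be checked numerically. The following Python sketch compares the two sides for the sample function \(g(r)=re^{-r}\); the test function, the truncation and the discretisation are our own illustrative choices.

```python
import math

# Numeric check of the 1-D Hardy inequality  int g^2/r^2 dr <= 4 int (g')^2 dr
# for the sample function g(r) = r*exp(-r) (our own choice, for illustration).
N, R = 200000, 40.0          # grid resolution and truncation of (0, infinity)
h = R / N
lhs = rhs = 0.0
for i in range(1, N):
    r = i * h
    g = r * math.exp(-r)
    dg = (1.0 - r) * math.exp(-r)   # exact derivative of g
    lhs += (g / r) ** 2 * h         # left Riemann sum of g^2/r^2
    rhs += dg ** 2 * h              # left Riemann sum of (g')^2

print(lhs, 4 * rhs)   # approximately 0.5 and 1.0
```

For this \(g\) both integrals are explicit, \(\int_0^\infty e^{-2r}dr=\tfrac12\) and \(\int_0^\infty (1-r)^2e^{-2r}dr=\tfrac14\), so the inequality holds with a factor \(2\) to spare.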
We then select \(\ensuremath{\epsilon}\) so that:
\[
\ensuremath{\epsilon}\leq \frac{(d-3)m+2}{(d+1)m-2}
\]
Hence we have
\begin{eqnarray*}
\olww{}\zeta^2|\kl \harm{\psi}|^2\leq \frac{(1+\ensuremath{\epsilon})\koh{1}}{\ensuremath{\epsilon}}
\olww{}\harm{u}^{-2e}|\kl\zeta|^2
\end{eqnarray*}
and we conclude that:
\begin{eqnarray}\label{mobast}
\olww{}\zeta^2|\kl\psi|^2\leq \frac{(1+\ensuremath{\epsilon})\koh{1}}{\ensuremath{\epsilon}}\olww{}
\harm{u}^{-2e}|\kl\zeta|^2
\end{eqnarray}
Near the zeros of \(u,\harm{u}\), i.e. after selecting \(\eta\) suitably small,
we retrieve the same inequality.
Further treatment is required when we introduce in \eqref{mobast}
\[
\zeta = \psi^\ell\vartheta
\]
and we get:
\[
\olww{}\vartheta^2
\psi^{2\ell}|\kl\psi|^2\leq C\ell^2
\olww{} \harm{u}^{\frac2m}\left(
\vartheta^2\psi^{2(\ell-2)}|\kl \psi|^2+\psi^{2(\ell-1)}|\kl \vartheta|^2\right)
\]
through the elementary inequality for \(x\in(\delta^{\frac1k},1]\):
\[
|\log x|\leq
\frac2\delta\left(|x|^{\frac{\delta}{2}}+|x|^{-\frac{\delta}{2}}\right)\leq
\frac4\delta\,\delta^{-\frac{\delta}{2k}}
\]
We obtain for \(\varrho<1,\delta=(e\varrho\ell)^{-1}\):
\begin{eqnarray}\label{finid}
\olww{}\vartheta^2\psi^{2\ell}|\kl \psi|^2\leq
2C\ell^{2(e\varrho\ell+1)}\olww{} \psi^{2\ell(1-e\varrho)}
\left[\vartheta^2|\kl\psi|^2+\psi^{2e\varrho\ell}|\kl\vartheta|^2\right]
\end{eqnarray}
\paragraph{The iteration for the lower bound.} We follow the method of
\cite{lspde}. We select as \(\zeta\)
\[
v^{2(\ell-1)}\vartheta^{2(a\ell-b)}, \qquad v=\psi_\ensuremath{\epsilon}-\sigma,\qquad a\geq 1+b
\]
obtaining that:
\[
\olww{} \vartheta^{2(a\ell-b)}v^{2(\ell-1)}|\kl v|^2\leq
\ell^{2(e\varrho\ell+1)}\olww{} \left[v^{2\ell-4}|\kl v|^2
\vartheta^{2(a\ell-b)}+v^{2(\ell-1)}\vartheta^{2(a\ell-b-1)}\right]
\]
We get that
\[
\olww{} |\kl (|v|^\ell\vartheta^{a\ell-b})|^2\leq \ell^{2\ell(e\varrho+3)}
\ogkos{ \ensuremath{ {\bf W} }}+\ell^{2\ell(e\varrho+1)}\olww{} v^{2\ell}\vartheta^{2(a\ell-b-1)}
\]
Therefore we have that for \(a=kb, \qquad b=\frac{m}{k-m},\qquad
m=\frac{\ell}{\ell-2}\)
\[
\left(\frac{1}{\ogkos{ \ensuremath{ {\bf W} }}}\olww{}(|v|\vartheta)^{2\ell k}\right)^{\frac1k}
\leq \ell^{2\ell(e\varrho+3)}+\ell^{2\ell(e\varrho+1)}
\left(\frac{1}{\ogkos{ \ensuremath{ {\bf W} }}}\olww{}(|v|\vartheta^a)^{2m\ell}\right)^{\frac1m}
\]
Hence we have that
\[
\left(\frac{1}{\ogkos{ \ensuremath{ {\bf W} }}}\olww{}(|v|\vartheta)^{2\ell k}\right)^{\frac{1}{\ell k}}
\leq \ell^{2(e\varrho+1)}+\ell^{2(e\varrho+1)}
\left(\frac{1}{\ogkos{ \ensuremath{ {\bf W} }}}\olww{}(|v|\vartheta)^{2m\ell}\right)^{\frac1m}
\]
Selecting a sequence \(\{\ell_j\}\):
\[
\ell_j=k^j \qquad
I_j =\left(\frac{1}{v_j}\olww{}(|v|\vartheta)^{2k^j}\right)^{1/k^j}
\]
Hence we obtain
\[
I_{j+1}\leq k^{j(e\varrho+3)}+k^{j(e\varrho+1)} I_j
\]
or that
\[
I_{j+1}\leq k^{(e\varrho+3)j}(1+I_j)
\]
Examining separately the case that for some \(j_0\)
\[
I_{j_0}\leq 1
\]
and the complementary case, we arrive at the conclusion
\[
I_j\leq (2k^{j_0})^{j-j_0}
\]
Therefore following the reasoning in \cite{lspde} we conclude that
\[
\olww{\infty}e^{c_1|\psi_\ensuremath{\epsilon}-\sigma|}\leq C\ogkos{ \ensuremath{ {\bf W} }}
\]
which implies that
\[
\left(\olww{\infty} (u^2+\ensuremath{\epsilon}^2)^{\frac{c_1}{2}}\right) \left(\olww{}
(u^2+\ensuremath{\epsilon}^2)^{-\frac{c_1}{2}}\right)\leq C^2 \ogkos{ \ensuremath{ {\bf W} }}^2
\]
and for small \(p>0\):
\[
\mkf{ \ensuremath{ {\bf W} }_\infty} u_\ensuremath{\epsilon}\geq
C\left(\frac{1}{v_\infty}\olww{\infty}u_\ensuremath{\epsilon}^p\right)^{1/p}
\]
We follow again \cite{lspde} appealing to \eqref{moiq} and get the bound:
\[
\mkf{ \ensuremath{ {\bf W} }_\infty} |u|\geq
C\left(\frac{1}{v_\infty}\olww{\infty}u^p\right)^{1/p}
\]
for any \(p\leq \frac{n}{n-2}\).
\subsection{The two dimensional case}
We will derive a version of Hardy's inequality for the
two dimensional situation that is not covered in the general case.
Therefore we start with the radial blow-up of the plane, covered by the
two regions
\[
C_1=\xw{2}\setminus
\{ |x_1|\geq \ensuremath{\epsilon} |x_2|\} ,\qquad C_2=\xw{2}\setminus \{|x_2|\geq
\ensuremath{\epsilon} |x_1|\}
\]
We set in \(C_1\)
\[
x_1=\frac{r\xi}{\sqrt{1+\xi^2}},\quad x_2=\frac{r}{\sqrt{1+\xi^2}}
\]
and interchange the roles of \(x_1,x_2\) in \(C_2\). We obtain for
\[
P(\dian{x})= r^mR_j(\xi),\qquad j=1,2
\]
the elementary identity:
\[
|\kl P|^2=r^{2(m-1)}\left[\frac{m^2R_j^2}{(1+\xi^2)^m}
+(1+\xi^2)^2\left[\frac{d}{d\xi}
\left((1+\xi^2)^{-\frac{m}{2}} R_j\right)\right]^2\right]
\]
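The identity can be verified by finite differences; the Python sketch below does so for the homogeneous example \(P=x_1x_2\), i.e. \(m=2\) and \(R(\xi)=\xi\) in the representation \(P=r^m(1+\xi^2)^{-m/2}R(\xi)\). The example and the step sizes are our own illustrative choices.

```python
import math

# Finite-difference check of the gradient identity in the C_1 chart for the
# homogeneous example P(x1, x2) = x1*x2, i.e. m = 2 and R(xi) = xi in the
# representation P = r^m (1+xi^2)^(-m/2) R(xi).  The identity to check is
# |grad P|^2 = r^(2(m-1)) [ m^2 R^2/(1+xi^2)^m
#                           + (1+xi^2)^2 (d/dxi[(1+xi^2)^(-m/2) R])^2 ].

def P(x1, x2):
    return x1 * x2

def grad_sq(x1, x2, h=1e-6):            # |grad P|^2 by central differences
    px = (P(x1 + h, x2) - P(x1 - h, x2)) / (2 * h)
    py = (P(x1, x2 + h) - P(x1, x2 - h)) / (2 * h)
    return px * px + py * py

def identity_rhs(r, xi, m=2, h=1e-6):   # right hand side of the identity
    def S(t):                           # (1+t^2)^(-m/2) R(t) with R(t) = t
        return (1 + t * t) ** (-m / 2) * t
    dS = (S(xi + h) - S(xi - h)) / (2 * h)
    R = xi
    return r ** (2 * (m - 1)) * (m * m * R * R / (1 + xi * xi) ** m
                                 + (1 + xi * xi) ** 2 * dS * dS)

r, xi = 1.7, 0.6
x1 = r * xi / math.sqrt(1 + xi * xi)
x2 = r / math.sqrt(1 + xi * xi)
print(grad_sq(x1, x2), identity_rhs(r, xi))   # both approximately r^2 = 2.89
```

For this example \(\nabla P=(x_2,x_1)\), so \(|\nabla P|^2=r^2\), which both sides reproduce.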
Therefore we have that:
\[
\frac{P^{2(1-\frac1m)}}{|\kl P|^2}=
\frac{R^2}{m^2+\left((1+\xi^2)R'-m\xi R\right)^2}\leq \frac{R^2}{m^2}
\]
Similarly:
\[
\left|\frac{\kl P}{P}\right|^2=\frac{m^2}{r^2}+
\frac{1}{r^2}\left[(1+\xi^2)\frac{R'}{R}-m\xi\right]^2
\]
We compute then that for \(\delta<1\) and
\(f\in C_0^\infty(\xw{2}\setminus \left(\{ P=0\}\bigcup \{\log
|P|=\delta\}\right)) \) then
\[
\int_{\xw{2}} \frac{1}{P^{\frac2m}|\log\frac{|P|}{\delta}|}f^2\leq
\frac{C_1(P)}{|\log(\delta)|}\int_{\xw{2}} |\kl f|^2
\]
and
\[
\int_{\xw{2}} \frac{|\kl P|^2}{P^2|\log\frac{|P|}{\delta}|}f^2\leq
\frac{C_2(P)}{|\log(\delta)|} \int_{\xw{2}} |\kl f|^2
\]
Then we start localizing in areas of constant sign for \(R\):
\[
I_{j,\ensuremath{\epsilon}}=
\int_{\xw{2}}\frac{1}{|\log\frac{|P|}{\delta}|}\left|\frac{\kl P}{P}\right|^2
\chi_{j,\ensuremath{\epsilon}}^2f^2\leq
C_\ensuremath{\epsilon}\int_{\xw{2}}\frac{m^2}{r|\log \frac{r}{\delta}|}\chi_{j,\ensuremath{\epsilon}}^2f^2+
\frac{r}{|\log\frac{r}{\delta}|}
\left((1+\xi^2)\frac{R'}{R}-m\xi\right)^2\chi_{j,\ensuremath{\epsilon}}^2f^2
\]
We used the inequality:
\[
(a+b)^2\geq (1-\ensuremath{\epsilon})^2\left(a^2-\frac{1}{\ensuremath{\epsilon}^2}b^2\right)
\]
alternatively in the regions \((\log R)^2\geq \eta^2 (\log r)^2\) and
\((\log R)^2\leq 2\eta^2(\log r)^2\).
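The inequality is elementary and holds for all real \(a,b\) and \(\ensuremath{\epsilon}\in(0,1]\); the following Python sketch checks it on random samples (the sampling ranges are our own choice).

```python
import random

# Random-sample check of the elementary inequality
#   (a+b)^2 >= (1-eps)^2 (a^2 - b^2/eps^2),   0 < eps <= 1.
random.seed(0)
ok = True
for _ in range(100000):
    a = random.uniform(-10.0, 10.0)
    b = random.uniform(-10.0, 10.0)
    eps = random.uniform(1e-3, 1.0)
    lhs = (a + b) ** 2
    rhs = (1.0 - eps) ** 2 * (a * a - b * b / eps ** 2)
    ok = ok and lhs >= rhs - 1e-9

print(ok)   # True
```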
This inequality is then majorised, after integration by parts in the radial
variable, through the elementary inequality:
\[
\int_0^\infty \frac{1}{r^2|\log\frac{r}{\delta}|} g^2 dr \leq
\frac{4}{|\log\delta|}\int_0^\infty g'^2\,dr
\]
This is proved easily by splitting the integral, after arranging \(\ensuremath{\epsilon}\)
according to the support of \(g\):
\[
\int_0^\infty \frac{1}{r^2|\log\frac{r}{\delta}|} g^2 dr=
\int_0^{\delta^{1+\ensuremath{\epsilon}}} \frac{1}{r^2|\log\frac{r}{\delta}|} g^2 dr
+\int_{\delta^{-(1+\ensuremath{\epsilon})}}^\infty \frac{1}{r^2|\log\frac{r}{\delta}|} g^2 dr
\]
The first integral is written after integration by parts:
\[
2\int_0^{\delta^{1+\ensuremath{\epsilon}}}
\frac{1}{r\log\frac{\delta}{r}}gg' +\frac{1}{r^2(\log\frac{\delta}{r})^2}g^2\leq
\frac1\varepsilon\int_0^\infty g'^2 + \frac{1+\varepsilon}{|\log\delta|}
\int_0^{\delta^{1+\ensuremath{\epsilon}}} \frac{1}{r^2|\log\frac{\delta}{r}|}g^2
\]
then we choose \(\varepsilon=\frac{|\log\delta|}{2}\) and we get:
\[
\int_0^{\delta^{1+\ensuremath{\epsilon}}}\frac{1}{r^2\log\frac{\delta}{r}}g^2\leq
\frac{C}{\ensuremath{\epsilon}|\log\delta|}\int_0^{\delta^{1+\ensuremath{\epsilon}}}g'^2
\]
Similarly for the other integral.
Setting \(g^2=rf^2\) and splitting the integral in two pieces, then
\[
\int_0^\infty \frac{1}{r^2|\log \frac{r}{\delta}|} rf^2\leq
\frac{2C_\ensuremath{\epsilon}}{|\log\delta|}
\int_0^\infty rf'^2+\frac{2C_\ensuremath{\epsilon}}{|\log\delta|}
\int_0^\infty \frac{1}{r^2|\log(\frac{r}{\delta})|} rf^2\leq
\]
\[
\leq \frac{2C_\ensuremath{\epsilon}}{|\log\delta|}\int_0^\infty f'^2rdr
\]
The two dimensional inequality is obtained by a direct application of the
usual one dimensional inequality in the
\(\xi\)-variable after the formula:
\[
\left(\frac{R'}{R}\right)^2\leq 2\left(C+
\sum_{j=1}^m \frac{m_j^2}{(\xi-\xi_j)^2}\right)
\]
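For a concrete instance of this bound, take \(R(\xi)=(\xi-1)(\xi+2)^2\), with real roots \(\xi_1=1\) of multiplicity \(m_1=1\) and \(\xi_2=-2\) of multiplicity \(m_2=2\), so that \(C=0\); the following Python sketch verifies the inequality on a grid. The sample polynomial is our own illustrative choice.

```python
# Grid check of (R'/R)^2 <= 2*(C + sum_j m_j^2/(xi - xi_j)^2) for the sample
# polynomial R(xi) = (xi-1)(xi+2)^2, whose roots are all real, so C = 0 here.
def log_deriv(xi):                      # R'/R as a sum over the roots
    return 1.0 / (xi - 1.0) + 2.0 / (xi + 2.0)

ok = True
for k in range(1, 400):
    xi = -5.0 + 0.025 * k               # grid on (-5, 5)
    if abs(xi - 1.0) < 1e-9 or abs(xi + 2.0) < 1e-9:
        continue                        # skip the roots themselves
    lhs = log_deriv(xi) ** 2
    rhs = 2.0 * (1.0 / (xi - 1.0) ** 2 + 4.0 / (xi + 2.0) ** 2)
    ok = ok and lhs <= rhs + 1e-9

print(ok)   # True
```

For two distinct real roots the bound is just \((a+b)^2\leq 2(a^2+b^2)\) applied to the partial fractions of \(R'/R\).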
\subsection{Curvature estimates}
Substituting the curvature identities into the integration by parts formulas,
we find that:
\begin{equation}
\begin{split}
\ensuremath{\int_{{\bf K}}}\phi^2|\kl\ensuremath{{\mathrm Rm}}|^2\leq 3\ensuremath{\int_{{\bf K}}} \phi^2(|\ensuremath{{\mathrm R}}|^3+|\ensuremath{{\mathrm Rm}}|^2),\qquad
\ensuremath{\int_{{\bf K}}}\phi^2|\kl\ensuremath{{\mathrm Ric}}|^2\leq 3\ensuremath{\int_{{\bf K}}} \phi^2|\ensuremath{{\mathrm R}}|^3
\end{split}
\end{equation}
The iteration scheme suggests that
\[
\eaf{ \ensuremath{ {\bf W} }^*}|\ensuremath{{\mathrm Rm}}|\leq \ensuremath{{\mathit D}}(\eta,\mu)\mkf{ \ensuremath{ {\bf W} }^*}|\ensuremath{{\mathrm R}}|
\]
\[
\eaf{ \ensuremath{ {\bf W} }^*}|\ensuremath{{\mathrm Ric}}|\leq \ensuremath{{\mathit D}}(\eta,\mu)\mkf{ \ensuremath{ {\bf W} }^*}|\ensuremath{{\mathrm R}}|
\]
\subsection{Local properties of eigenfunctions}
In the case of an eigenfunction we have the following
\begin{subequations}
\begin{align}
\ensuremath{\int_{{\bf K}}} \phi^2|\kl^2 u_\ensuremath{\lambda}|^2 & \leq \ensuremath{\lambda}\left(\ensuremath{\lambda}+
\ensuremath{{\mathit D}}(\eta,\mu)||\ensuremath{{\mathrm Ric}}||_{1, \ensuremath{ {\bf W} }}\right)
\left(\ensuremath{\int_{{\bf K}}} \phi^2u_\ensuremath{\lambda}^2\right)\label{eig1a}\\
\ensuremath{\int_{{\bf K}}} \phi^2 |\kl^3u_\ensuremath{\lambda}|^2 &\leq
\ensuremath{\lambda}^2\left(\ensuremath{\lambda}+\ensuremath{{\mathit D}}(\eta,\mu)||\ensuremath{{\mathrm Ric}}||_{1, \ensuremath{ {\bf W} }}\right)
\ensuremath{\int_{{\bf K}}} \phi^2u_\ensuremath{\lambda}^2\label{eig2a}
\end{align}
\end{subequations}
Performing partial integration to the term:
\[
\ensuremath{\int_{{\bf K}}} \phi^2|\ensuremath{{\mathrm Ric}}||\kl f|^2=-2\ensuremath{\int_{{\bf K}}} |\ensuremath{{\mathrm Ric}}|f\phi\kl\phi\cdot \kl f-
\ensuremath{\int_{{\bf K}}} \phi^2 f\kl|\ensuremath{{\mathrm Ric}}|\cdot\kl f-\ensuremath{\int_{{\bf K}}} |\ensuremath{{\mathrm Ric}}|\phi^2f\lap{g} f
\]
Young's inequality along with harmonic approximation for
\(\sqrt{|\ensuremath{{\mathrm Ric}}|^2+\ensuremath{\epsilon}}\) leads to
\[
\ensuremath{\int_{{\bf K}}} \phi^2 |\ensuremath{{\mathrm Ric}}||\kl u_\ensuremath{\lambda}|^2 \leq C\ensuremath{\lambda} \ensuremath{\int_{{\bf K}}} |\ensuremath{{\mathrm Ric}}|\phi^2u_\ensuremath{\lambda}^2
\]
Arguing similarly for the remaining terms, we finally conclude that:
\begin{subequations}\label{eig}
\begin{align}
\ensuremath{\int_{{\bf K}}} \phi^2|\kl^2 u_\ensuremath{\lambda}|^2 & \leq \ensuremath{\lambda}^2\ensuremath{\int_{{\bf K}}}
\phi^2\left(1+\frac{|\ensuremath{{\mathrm Ric}}|}{\ensuremath{\lambda}}\right)u_\ensuremath{\lambda}^2\label{eig1}\\
\ensuremath{\int_{{\bf K}}} \phi^2 |\kl^3u_\ensuremath{\lambda}|^2 &\leq C\ensuremath{\lambda}^3\ensuremath{\int_{{\bf K}}} \phi^2(1+\frac{|\ensuremath{{\mathrm Rm}}|}{\ensuremath{\lambda}})
u_\ensuremath{\lambda}^2+\ensuremath{\int_{{\bf K}}} \left[\frac{1}{\ensuremath{\epsilon}^2}\left(|\kl\ensuremath{{\mathrm Rm}}|^2+|\kl\ensuremath{{\mathrm Ric}}|^2\right)+|\ensuremath{{\mathrm Rm}}|^2+|\ensuremath{{\mathrm Ric}}|^2\right]
\phi^2|\kl u_\ensuremath{\lambda}|^2\label{eig2}
\end{align}
\end{subequations}
\subsubsection{Harnack inequalities}
\paragraph{The eigenfunction.} We have for \(\gamma=\ensuremath{\lambda},
\tilde{u}_{\ensuremath{\lambda},\ensuremath{\epsilon}}=\sqrt{u_\ensuremath{\lambda}^2+\ensuremath{\epsilon}^2} \)
that
\begin{eqnarray}
\eaf{ \ensuremath{ {\bf W} }} \tilde{u}_{\ensuremath{\lambda},\ensuremath{\epsilon}}\leq \ensuremath{{\mathit D}}(\eta,\ensuremath{\lambda})
\mkf{ \ensuremath{ {\bf W} }}\tilde{u}_{\ensuremath{\lambda},\ensuremath{\epsilon}}
\end{eqnarray}
\paragraph{The gradient.}
Now the gradient \(G_{\ensuremath{\lambda},\ensuremath{\epsilon}}=|\kl u_\ensuremath{\lambda}|^2 +\ensuremath{\epsilon}^2\)
requires that we use \(\gamma_1=\ensuremath{\lambda}+\kappa\), and we get that
\begin{eqnarray}
\eaf{ \ensuremath{ {\bf W} }} G_{\ensuremath{\lambda},\ensuremath{\epsilon}} \leq
\ensuremath{{\mathit D}}(\eta,\ensuremath{\lambda})\left(\ensuremath{\lambda}+\ensuremath{{\mathit D}}(\eta,\mu)||\ensuremath{{\mathrm Ric}}||_{1, \ensuremath{ {\bf W} }}\right)
\mkf{ \ensuremath{ {\bf W} }}G_{\ensuremath{\lambda},\ensuremath{\epsilon}}
\end{eqnarray}
\paragraph{The hessian estimate.}
For the hessian \(H_{\ensuremath{\lambda},\ensuremath{\epsilon}}=|\kl^2 u_\ensuremath{\lambda}|^2+\ensuremath{\epsilon}^2\) we have the estimate
\begin{eqnarray}
\eaf{ \ensuremath{ {\bf W} }}H_{\ensuremath{\lambda},\ensuremath{\epsilon}}
\leq \ensuremath{{\mathit D}}(\eta,\ensuremath{\lambda})\ensuremath{\lambda}^2\left(\ensuremath{\lambda}+\ensuremath{{\mathit D}}(\eta,\mu)||\ensuremath{{\mathrm Ric}}||_{1, \ensuremath{ {\bf W} }}\right)
\mkf{ \ensuremath{ {\bf W} }}H_{\ensuremath{\lambda},\ensuremath{\epsilon}}
\end{eqnarray}
\paragraph{The estimates for the restriction on the spherical front.} The
restriction of the eigenfunction
\[
u(r,\dian{\theta})=e^{\ensuremath{\Lambda}}\sin(\beta_1\phi)
\]
on the spherical front satisfies the following inequalities for \(j=1,2\):
\[
\ensuremath{\int_{{\bf F}}}{} \theta^2| \klgs^ju|^2\leq C\ensuremath{\int_{{\bf F}}}{}\theta^2(\ensuremath{\lambda}+R^2)u^2
\]
where \(R\) is a polynomial depending on
\(\ensuremath{{\mathrm Rm}},\kl\ensuremath{{\mathrm Rm}},\dots,\kl^2\ensuremath{{\mathrm Rm}},\ensuremath{{\mathrm Ric}},\kl\ensuremath{{\mathrm Ric}},\kl^2\ensuremath{{\mathrm Ric}}\). This, combined with the
Michael--Simon Sobolev inequality, provides Harnack inequalities for the
restriction of \(u,|\klgs u|,|\klgs^2u|\) on the spherical front.
\subsubsection{The Bernstein inequalities}
The integration by parts formulas, along with the Harnack
inequalities, suggest the following Bernstein type
inequalities in geodesic pixels:
\begin{berniq}
The following estimates hold in a domain inside a
geodesic pixel \( \ensuremath{ {\bf W} }\subset \ensuremath{ {\bf P} }\)
\begin{align}
|\kl^2u_\ensuremath{\lambda}| & \leq \eaf{ \ensuremath{ {\bf W} }}|\kl^2u_\ensuremath{\lambda}|\leq
C_2( \ensuremath{ {\bf W} }) \ensuremath{\lambda}^{\frac{n}{2}+1}\left(|\kl u_\ensuremath{\lambda}|+\ensuremath{\epsilon}\right) \leq
C_3( \ensuremath{ {\bf W} }) \ensuremath{\lambda}^{\frac{n}{2}+2}\left(|u_\ensuremath{\lambda}|+\ensuremath{\epsilon}\right) \\
|\kl u_\ensuremath{\lambda}| &\leq \eaf{ \ensuremath{ {\bf W} }}|\kl u_\ensuremath{\lambda}|\leq
C_4( \ensuremath{ {\bf W} })\ensuremath{\lambda}^{\frac{n}{2}+1}(|u_\ensuremath{\lambda}|+\ensuremath{\epsilon})
\end{align}
\end{berniq}
\section{Introduction}
\setcounter{footnote}{-1}
\begin{figure}
\centering
\includegraphics[width=8cm,height=6cm]{Speed_acc/speed_acc-eps-converted-to.pdf}
\caption[1]{Speed and accuracy trade-off between different models on PASCAL VOC 2007 \texttt{test} dataset. Here, ``$+$'' refers to the performance tested on a single NVIDIA Titan X (Maxwell) GPU, and ``$\blacksquare$'' refers to the performance tested on an NVIDIA Titan X (Pascal) GPU, which is a more advanced model with higher computational capability\footnotemark.}
\label{fig:acc_speed}
\vspace{-4mm}
\end{figure}
Aggregating multi-scale information is critical for object detection models to exploit context and achieve better performance in challenging conditions, especially for the Single Shot Detector (SSD)~\cite{liu2016ssd}. Different from the conventional two-stage object detectors (\emph{e.g.}\ RCNN~\cite{girshick2016region}, Faster RCNN~\cite{ren2015faster}) and single-stage detectors (\emph{e.g.}\ YOLO~\cite{redmon2016you} and YOLO-v2~\cite{redmon2016yolo9000}) that detect objects based on features extracted from the last layer, SSD detects objects from both shallow and deep layers. Under the SSD framework, detectors placed in shallow layers are responsible for detecting small objects, while detectors placed in deeper layers are responsible for detecting larger objects. Such a design significantly improves the detection accuracy on small objects, since fine features from the shallow layers contain richer details in much higher resolution, which may be missed by coarse features from top layers due to down-sampling. However, it still does not perform well on detecting difficult objects, like bottles. This is because the features from shallow layers have a limited receptive field and all decisions are made based only on the local area. These features cannot perceive global context information from surrounding areas for context reasoning to support making accurate decisions. Integrating information from other scales helps widen the receptive field, and thus can alleviate such ambiguities and reduce information uncertainty in the local area.
\footnotetext{CAD~\cite{zhou2017cad} is tested on a GTX 1070 (Pascal), which has similar performance to the NVIDIA Titan X (Maxwell).}
However, the accuracy improvement brought by introducing multi-scale information does not come for free. Existing multi-scale information fusion techniques always introduce extra components to the existing network, which lead to a significant speed drop. See Figure~\ref{fig:acc_speed}. One of the main extra components consuming a large amount of computational resource is the up-sampling unit, which is mostly conducted by deconvolution layers using computationally expensive kernels, \emph{e.g.} $4\times4$ kernels. Besides, the extra convolution layers, which are used for gathering and fusing information from different scales, also cost considerable computational resources. Both of them are essential: the up-sampling unit helps match feature maps to the same scale, and the extra convolution layers refine the features before sending them to the detector. Studies on how to reduce the computation consumption by building a more efficient information fusion unit without degrading performance are still rare.
In this work, we propose to build a light-weight information fusion architecture that can effectively fuse multi-scale information without consuming much computational resource. We reveal that combining information from both lower level finer features and higher level coarser features can lead to more efficient fusion. Then we propose a novel context reasoning architecture which enjoys a smoother information flow by only considering adjacent scales and controllable complexity with an iterative inference mechanism. The proposed light-weight information weaving architecture is called \emph{WeaveNet}, which iteratively conducts multi-scale reasoning with only information from adjacent scales and progressively fuses the long-range information across multiple scales. It does not require batch normalization. Therefore, a deeper backbone network, such as DPN-131~\cite{chen2017dual}, can be adopted for further improving the accuracy as long as the GPU memory can accommodate 1 image per mini-batch. More importantly, WeaveNet is highly efficient and can gradually improve the performance by simply performing more iterations, as shown in Figure~\ref{fig:acc_speed}. We apply WeaveNet for object detection on PASCAL VOC 2007 and 2012 benchmarks. It achieves 79.5\% mAP on PASCAL VOC 2007 with processing speed as fast as 101 fps.
In the following sections, we will first revisit existing multi-scale methods and highlight the uniqueness of the proposed WeaveNet. Then we will introduce the proposed WeaveNet in detail and evaluate its performance on benchmark datasets.
\section{Related Work}
\FloatBarrier
Recently, the single-stage detector has attracted increasing attention due to its simplicity and high detection speed, compared with two-stage detectors, \emph{e.g.} Faster RCNN~\cite{girshick2016region} and RFCN~\cite{dai2016r}. However, despite their advantages, single-stage detectors usually do not perform well for detecting difficult and small objects.
Different from many two-stage detectors and other single-stage models (\emph{e.g.} YOLO~\cite{redmon2016you}), SSD~\cite{liu2016ssd} detects objects at multiple scales suited to their sizes: small objects are detected at shallow layers with low-level high-resolution features, while large objects are detected in top layers with high-level low-resolution features. Such a design reduces the demand of using a very large input size, \emph{e.g.} $600\times1000$ as commonly used in Faster RCNN~\cite{girshick2016region}, for keeping rich feature details for the top layers and thus significantly reduces the computational cost. However, it brings another problem in detecting difficult and small objects. Since features in lower layers have a much smaller receptive field, the network cannot perceive a broader view to utilize more global context information and suffers from ambiguity and insufficient context exploration.
To improve the accuracy of SSD on detecting difficult and small objects, various strategies~\cite{fu2017dssd,lin2017focal,woo2017stairnet,jeong2017enhancement,ren2017accurate} have been proposed to introduce multi-scale information to the conventional SSD framework. One main stream is to attach a top-down pyramid-like structure to propagate information from top layers to bottom layers to enlarge the receptive field of each shallow layer. For example, the Deconvolutional Single Shot Detector~\cite{fu2017dssd} uses a deconvolutional module to enlarge the scale of top layers and adds them back to the shallow layer features, followed by several extra convolutional layers for fusing the merged information. The very recently proposed StairNet~\cite{woo2017stairnet} and RetinaNet~\cite{lin2017focal} share a similar idea. But they adopt slightly different strategies for choosing a proper adaptive layer before the element-wise sum, and they use more effective blocks to conduct further inference on the merged information. However, in order to enable enough information to flow to the final bounding box regressor and classifier, all of the newly attached layers usually have large input and output channel sizes, \emph{e.g.} $256$, leading to a considerable amount of computational cost.
Another stream of utilizing multi-scale information is to consider both low-level and high-level information. The main idea is: in addition to introducing information from top layers to enlarge the receptive fields, they also pass more detailed local information to top layers for making bounding box localization and category inference more precise. However, most of the existing methods following this stream cost more computational resources, since information from all scales (other than the target scale) is merged together simultaneously. Rainbow SSD (RSSD)~\cite{jeong2017enhancement} proposes to utilize both low- and high-level information by concatenation, which increases the final fused information from hundreds of channels to 2,816 channels, introducing significant computational cost in the following layers. Besides, the Recurrent Rolling Convolution Network~\cite{ren2017accurate} proposes to recurrently forward information from top to bottom and bottom to top. However, the inner state-to-state adaptive layers are quite computationally expensive and also significantly slow down the speed compared with the vanilla SSD.
Different from previous works, we propose a weaving structure for multi-scale information fusion. The proposed WeaveNet is naturally friendly to optimization and does not require the batch normalization layer to ease the training. It is also highly parallel in each iteration and costs much lower computation, which is preferable for real-time application.
\section{WeaveNet}
\subsection{Information Weaving}
\begin{figure*}[t]
\centering
\includegraphics[width=1.0\textwidth]{Main/weaving_info.png}%
\caption{The overall framework of the proposed WeaveNet. The squares on the left stands for the features extracted from the backbone network, \emph{e.g.} Reduced VGG-16~\cite{liu2015parsenet}; the proposed WeaveNet is constructed by weaving multiple blocks together. The final output is propagated to the final classifier/regressor, a simple $3\times3$ convolutional layer.}
\label{fig:weavenet_main}
\vspace{-4mm}
\end{figure*}
The conventional Single Shot Detector uses features extracted at multiple resolutions to detect objects at various scales. However, as discussed above, the receptive field in shallower layers can only cover limited local areas, which makes it hard to conduct complex context reasoning based on global features. Moreover, features from higher layers passing through several down-sampling stages may lose detailed information for precise localization. In this work, we propose to introduce features from both lower and higher layers through a novel information weaving architecture to overcome both drawbacks without sacrificing efficiency.
As shown in Figure \ref{fig:weavenet_main}, the idea of the proposed information weaving structure is to gradually weave the information from adjacent scales for the detector in the current scale. Here, ``gradually'' means that only information from adjacent scales is taken into account, since we believe the current scale should focus more on adjacent scales instead of those far away at the very first iterations. We propose to integrate the long-range information by an iterative inference process, allowing information to propagate from neighbors, to their neighbors, and on to the current scale. By weaving information iteratively, sufficient multi-scale context information can be transferred and integrated into the current scale thoroughly.
\subsection{Network Architecture}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Main/block_detail.png}%
\caption{The inner architecture of each WeaveNet block. It contains two $3\times3$ convolutional layers and each of them is followed by a ReLU activation layer. The transformed features are then sent to the upper and lower scales separately.}
\label{fig:zoom_in}
\end{figure}
The overall architecture of the proposed WeaveNet is shown in Figure~\ref{fig:weavenet_main} and we disclose the inner structure of each block in Figure~\ref{fig:zoom_in}.
In the main architecture, the right part shows our proposed context information weaving structure, while the left part shows the raw features extracted from the backbone CNN. The WeaveNet takes the raw features extracted from the backbone network as input and outputs the refined features. Each of the refined features is then attached to two separate convolution layers for bounding box regression and classification respectively, the same as in the vanilla SSD. Within the proposed WeaveNet, information is gathered and fused iteratively. Each scale extracts useful information from both its higher adjacent layer (with lower resolution) and its lower adjacent layer (with higher resolution). This information is added to the current state for the next decision making after going through a \textit{block} for information fusion and inference. During the iterative refinement, besides propagating information to longer-range scales, the information can also propagate back to its own scale for making more complex reasoning and introducing greater non-linearity. To make information from adjacent scales compatible with the current scale, we use bi-linear interpolation for enlarging the feature maps and $2\times2$ max pooling with stride $= 2$ for reducing the feature maps. Both of them are parameter-free and introduce negligible computational cost.
Details about the inner block architecture are shown in Figure \ref{fig:zoom_in}, which is designed to be as light as possible for reducing computational cost. At the beginning of the block, information from both higher and lower layers is concatenated to form the input to the block, which contains information from all previous iterations including the raw features from the backbone network. Then, all the information is passed through a $3\times3$ convolution layer with a ReLU activation layer for a spatial non-linear fusion. The output is then passed to its neighborhood for the next iteration. We repeatedly stack these simple blocks for modeling and aggregating more complex and richer context information. Based on our experiments, the most significant improvement comes from the 1st iteration and usually the maximum performance is achieved at the 5th iteration.
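To make the information flow concrete, the following toy Python sketch mimics only the weaving topology described above: each scale is a plain 1-D vector, the conv+ReLU block is replaced by a clipped average, bi-linear up-sampling by element repetition and the $2\times2$ max pooling by pairwise maxima. It is a schematic stand-in for illustration, not our actual implementation.

```python
# Toy sketch of the weaving information flow (topology only): each "scale" is a
# 1-D feature vector; the conv+ReLU block is replaced by a simple average with
# a ReLU clip, the bilinear upsample by element repetition and the 2x2 max
# pooling by pairwise maxima.  A stand-in for illustration, not real layers.

def upsample(f):                      # lower resolution -> higher resolution
    return [v for v in f for _ in range(2)]

def downsample(f):                    # higher resolution -> lower resolution
    return [max(f[i], f[i + 1]) for i in range(0, len(f) - 1, 2)]

def block(inputs):                    # stand-in for the conv3x3 + ReLU fusion
    n = len(inputs[0])
    return [max(0.0, sum(f[i] for f in inputs) / len(inputs)) for i in range(n)]

def weave(scales, iterations):
    states = [list(s) for s in scales]
    for _ in range(iterations):
        new_states = []
        for i, s in enumerate(states):
            inputs = [s]              # current scale
            if i > 0:                 # finer neighbour, pooled down
                inputs.append(downsample(states[i - 1]))
            if i + 1 < len(states):   # coarser neighbour, upsampled
                inputs.append(upsample(states[i + 1]))
            new_states.append(block(inputs))
        states = new_states
    return states

scales = [[1.0] * 8, [2.0] * 4, [4.0] * 2]     # three resolutions: 8, 4, 2
out = weave(scales, iterations=2)
print([len(s) for s in out])                    # resolutions preserved: [8, 4, 2]
```

After two iterations, information from the coarsest scale has already reached the finest one through the intermediate neighbor, which is exactly the progressive long-range propagation described above.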
\paragraph{Model details}
We use a reduced VGG16 network as the backbone and attach the proposed WeaveNet for multi-scale information fusion. The overall setting follows the original SSD for fair comparison, where we extract the features from \texttt{conv4\_3}, \texttt{fc6}, and add 6 more layers after the \texttt{fc6}. In our training platform, the existing bi-linear interpolation is implemented by using channel-wise deconvolution, which can only support integer scale factors, as does the pooling layer. Thus, we set the input image size as $320\times320$, so that the sizes of the scales are $\{40, 20, 10, 5, 3, 1\}$ respectively, where the scales are reduced by a factor of 2 in the first 4 scales. In this way, it is easier to attach our proposed WeaveNet. Since the last two scales are small, one $3\times 3$ convolution kernel is able to cover the whole map. Hence, we do not refine them further and keep them the same as in the vanilla SSD.
\subsection{Architecture Simplification}
The Single Shot Detector is known for its fast speed as well as high accuracy. Losing either of these advantages would make the detector less favorable. In this subsection, we elaborate on how to accelerate the WeaveNet by reformulating the network topology: we group fragmented computations together to reduce the data allocation and communication cost and to increase the hardware usage rate.
Suppose $f_{i}^{0}$ is the raw feature extracted from the $i$-th scale of the backbone network. Let $f_{i}^{t} = [\hat{f}_{i-1}^{t-1}; f_{i}^{t-1}; \check{f}_{i+1}^{t-1}]$ be the input and $[\check{f}_{i}^{t},f_{i}^{t}, \hat{f}_{i}^{t}]$ be outputs of the \textit{Block} shown in Figure~\ref{fig:zoom_in} in the $t$-th iteration. Here $\check{f}_{i}^{t}$ will be sent to the lower scale and $\hat{f}_{i}^{t}$ will be sent to the upper scale. Then, the convolution operation in Figure \ref{fig:zoom_in} can be formulated as
\begin{equation}
\label{eq:1}
\begin{aligned}
\check{f}_{i}^{t} & = \sigma( [\hat{f}_{i-1}^{t-1}; f_{i}^{t-1}; \check{f}_{i+1}^{t-1}] * \check{W}^{t} + \check{b}^{t} ), \\
\hat{f}_{i}^{t} & = \sigma( [\hat{f}_{i-1}^{t-1}; f_{i}^{t-1}; \check{f}_{i+1}^{t-1}] * \hat{W}^{t} + \hat{b}^{t} ), \\
\end{aligned}
\end{equation}
where $\check{W}^{t}$, $\check{b}^{t}$ are the convolutional kernel and bias of the lower $3\times3$ layer respectively, $\hat{W}^{t}$, $\hat{b}^{t}$ are the convolutional kernel and bias of the upper $3\times3$ layer, and $\sigma(\cdot)$ is the ReLU activation function shown in Figure \ref{fig:zoom_in}.
We propose to group the computation by letting $W^{t} = [\check{W}^{t}; \hat{W}^{t}]$ and $b^{t} = [\check{b}^{t}; \hat{b}^{t}]$. Then, Eqn.~\eqref{eq:1} simplifies to
\begin{align*}
[\check{f}_{i}^{t},\hat{f}_{i}^{t}] = \sigma([\hat{f}_{i-1}^{t-1}; f_{i}^{t-1}; \check{f}_{i+1}^{t-1}] * W^{t} + b^{t}).
\end{align*}
Further splitting $W^{t}$ into $W^{t}=[W_a^{t}, W_b^{t}]$ gives
\begin{align*}
&[\check{f}_{i}^{t},\hat{f}_{i}^{t}] \\
= & \sigma([\hat{f}_{i-1}^{t-1}; f_{i}^{t-1}; \check{f}_{i+1}^{t-1}] * W^{t} + b^{t}) \\
= &\sigma([\hat{f}_{i-1}^{t-1};\ldots;\hat{f}_{i-1}^{1}; f_{i}^{0};\check{f}_{i+1}^{1};\ldots;\check{f}_{i+1}^{t-1}] * W^{t} + b^{t}) \\
= & \sigma([\hat{f}_{i-1}^{t-1};\ldots;\hat{f}_{i-1}^{1};\check{f}_{i+1}^{1};\ldots;\check{f}_{i+1}^{t-1}] * W_a^{t} + b^{t} + f_{i}^{0} * W_b^{t}).
\end{align*}
The computation in each block thus separates into two parts: one depends on the previous states and the other does not. The latter can therefore be grouped and pre-computed for further acceleration, \emph{i.e.}, $[f_{i}^{0} * W_b^{1}, \ldots, f_{i}^{0} * W_b^{t}] = f_{i}^{0} * W_{b}$.
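The linearity identity behind this split can be checked numerically. The sketch below, again using $1\times1$ convolutions as a stand-in for the $3\times3$ layers, verifies that a convolution over a concatenated input equals the sum of the convolutions of the split parts, so the state-independent term can be computed once and cached; all shapes and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
state = rng.normal(size=(8, 8, 12))   # state-dependent inputs from adjacent scales
f_i0  = rng.normal(size=(8, 8, 16))   # raw backbone feature, fixed over iterations
W     = rng.normal(size=(28, 8))      # full kernel acting on the concatenated input
W_a, W_b = W[:12], W[12:]             # split rows to match the concatenation order

# direct evaluation: convolve (here, matmul) the full concatenated input
direct = np.concatenate([state, f_i0], axis=-1) @ W
# split form: the f_i^0 * W_b term does not depend on the iteration state,
# so it can be computed once and cached across all iterations
cached = f_i0 @ W_b
split  = state @ W_a + cached
assert np.allclose(direct, split)
```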
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Main/simplified_block.pdf}
\caption{Simplified inner architecture of each WeaveNet block. ``$\sim$'' refers to the pre-computed result which does not depend on current input features and ``$\oplus$'' stands for the element-wise sum.}
\label{fig:simplified}
\end{figure}
Figure~\ref{fig:simplified} shows the corresponding simplified architecture, in which fragmented computations are grouped together to reduce unnecessary data allocation and communication costs and to increase the hardware usage rate. More specifically, different from the topology shown in Figure~\ref{fig:zoom_in}, only the outputs from adjacent scales are gathered by concatenation. Within the block, these inputs are all sent to a single transformer; the output is element-wise summed with a pre-computed source and then split and sent to the adjacent scales. Both the inputs and outputs of each block are usually low-dimensional, \eg~$32$ channels. Considering pure computational cost alone, adding one such block can be more than $10\times$ cheaper than adding a single $3\times3$ layer with $256$ input and output channels for fusing multi-scale information.
\section{Experiments}
We implement the proposed WeaveNet based on the vanilla SSD~\cite{liu2016ssd} using the same version of Caffe~\cite{jia2014caffe}. Following the original SSD, we adopt the same optimization strategy and train for the same number of iterations for a fair comparison. In particular, all networks are trained with a batch size of 32; the learning rate starts at $4\times10^{-3}$, is decayed by a factor of $10$ at the $80$K and $100$K iterations down to $4\times10^{-5}$, and training terminates at $120$K iterations. Frames Per Second (fps) is reported on both an Nvidia Titan X (Maxwell) GPU and an Nvidia Titan X (Pascal) GPU with the same NVIDIA libraries, \emph{i.e.} CUDA 8.0 $+$ cuDNN v5.1.
In the experiments, we evaluate the WeaveNet on the widely used PASCAL VOC 2007 and PASCAL VOC 2012 benchmarks~\cite{everingham2010pascal}. In order to provide more insights, we first conduct controlled experiments on the PASCAL VOC 2007 benchmarks to study the properties of WeaveNet. Then, based on the results, we test the proposed WeaveNet with its best settings on both benchmarks and compare it with existing state-of-the-art object detection methods through an in-depth analysis.
\subsection{Importance of Coarse and Fine Features}
We start with an ablation study to investigate the importance of top-down and bottom-up information propagation. Since top-down information propagation has already been observed to be important by many papers~\cite{jeong2017enhancement,lin2016feature,lin2017focal,woo2017stairnet}, in our experiments, we are more concerned about whether further introducing finer features from lower layers would improve the accuracy.
\begin{table*}
\renewcommand{\arraystretch}{1.4}
\centering
\caption{Ablation study on the importance of using the coarse and fine features. The ``Top-down'' experiment studies the importance of introducing high-level coarse features from top layers, and the ``Bottom-up'' studies the importance of introducing low-level fine features from bottom layers. Performance is measured by mean of Average Precision (mAP) on PASCAL VOC 2007 test set.}
\label{table:ablation_study:feature}
\vspace{1mm}
{
\footnotesize
\begin{tabular}{c|c|c|c|c|c|c|c|c}
\hline
Method & Settings & Input Size & Top-down & Bottom-up & mAP (small) & mAP (medium) & mAP (large) & mAP (overall)
\\ \hline
SSD~\cite{liu2016ssd} & N/A & $320\times320$ & & & 43.4\% & 74.1\% & 79.2\% & 77.3\%
\\
WeaveNet & k=16, iter=1 & $320\times320$ & \checkmark & & 47.2\% & 75.4\% & 79.7\% & 78.6\%
\\
WeaveNet & k=16, iter=1 & $320\times320$ & & \checkmark & 43.7\% & 75.0\% & 79.7\% & 78.1\%
\\
WeaveNet & k=16, iter=1 & $320\times320$ & \checkmark & \checkmark & 47.4\% & 76.3\% & 78.6\% & 79.0\%
\\ \hline
\end{tabular}
}
\end{table*}
The ablation study is designed by blocking either the bottom-up information path or the top-down information path individually to study the effect of each component. In these experiments, the number of iterations of WeaveNet is set to $1$. Table~\ref{table:ablation_study:feature} shows the results. As can be seen from the last column, the ``top-down'' information propagation improves the overall mAP by $1.3\%$ as expected, while the ``bottom-up'' information propagation alone improves the overall mAP by $0.8\%$. Utilizing both top-down and bottom-up information further improves the mAP to $79.0\%$, indicating that both top-down and bottom-up features are important.
To further investigate the effect of introducing top and bottom features, we follow \cite{woo2017stairnet} and sort the testing images into different scales by the area of the ground truth bounding box, evaluating mAP w.r.t.\ a specific object size. Specifically, the ground truth bounding boxes of each class are divided into three parts: \ie~$\text{small:}~[0, 25\%)$, $\text{medium:}~[25\%, 75\%)$, $\text{large:}~[75\%, 100\%]$. When evaluating on a specific scale, ground truth bounding boxes on the other scales are ignored. The results are shown in the 6th to 8th columns of Table~\ref{table:ablation_study:feature}. As can be seen, the ``top-down'' information significantly benefits small object detection, and further introducing the fine detailed features from bottom layers helps detect medium objects. Interestingly, once both top and bottom features are considered, the detection accuracy on large objects slightly drops.
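The percentile-based split described above can be sketched as follows; the helper name and toy areas are illustrative, and NumPy's default (linear) percentile interpolation is assumed.

```python
import numpy as np

def size_buckets(areas):
    """Assign each ground-truth box area of one class to small/medium/large
    using the 25th and 75th percentiles, mirroring the evaluation split
    small: [0, 25%), medium: [25%, 75%), large: [75%, 100%]."""
    areas = np.asarray(areas, dtype=float)
    q25, q75 = np.percentile(areas, [25, 75])
    return np.where(areas < q25, "small",
           np.where(areas < q75, "medium", "large"))

labels = size_buckets([10, 20, 30, 40, 50, 60, 70, 80])
```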
\begin{figure}
\centering
\resizebox{0.45\textwidth}{!}{
\includegraphics{voc07/voc4x4.pdf}%
}
\caption{Comparisons on detection quality among four different fusion strategies. (a) Vanilla SSD (w/o fusion); (b) Top-down fusion; (c) Bottom-up fusion; (d) Weaving both top-down and bottom-up information.}
\label{fig:fine_coarse_fea}
\end{figure}
We visualize the testing results in Figure~\ref{fig:fine_coarse_fea} to compare the detection results of each detector. As can be seen from Figure~\ref{fig:fine_coarse_fea}~(d), smaller and more difficult objects are detected correctly by the full model, compared with the other strategies in Figure~\ref{fig:fine_coarse_fea}~(a)--(c).
The above ablation studies verify our conjecture that combining low-level and high-level features can further improve object detection accuracy. Thus, in the following experiments, we use the full version of WeaveNet to further study its properties and compare it with the state of the art.
\subsection{Effectiveness and Efficiency}
The most attractive advantage of a single-stage detector is its speed. Different from two-stage object detectors, which require an object proposal generation stage followed by an object prediction stage, a single-stage detector directly performs the prediction and thus saves a huge amount of computational resources. The attached multi-scale fusion component would be of little use if it lost this speed advantage for only a slight performance improvement. In the experiments here, we first study the performance improvement brought by each individual component and then adjust the number of iterations of WeaveNet to study the speed-accuracy trade-off in comparison with state-of-the-art multi-scale fusion methods.
\begin{table}
\renewcommand{\arraystretch}{1.4}
\centering
\caption{Ablation study of WeaveNet on the PASCAL VOC 2007 test set. Speed (in fps) is tested on a single NVIDIA Titan X (Maxwell) with batch size equal to 1. The performance is measured by mean Average Precision (mAP).}
\label{table:ablation_study:component}
\vspace{1mm}
\resizebox{0.48\textwidth}{!}
{
\begin{tabular}{c|cc|cc|ccc|c}
\hline
Component
& \multicolumn{7}{c|}{WeaveNet} & SSD \\ \hline
bbox refinement
& \checkmark & \checkmark & & & & & & \\
use anchor {[}2,3{]}
& \checkmark& \checkmark & \checkmark & \checkmark & & & & \\ \hline
iter=1, k=64
& \checkmark & & \checkmark & & \checkmark & & & \\
iter=1, k=32
& & & & & & \checkmark & & \\
iter=1, k=16
& & \checkmark & & \checkmark & & & \checkmark & \\ \hline
\begin{tabular}[c]{@{}c@{}}fps\end{tabular}
& 42 & 45 & 42 & 45 & 43 & 44 & 46 & 48 \\ \hline
mAP (\%)
& 79.25 & 78.96 & 79.03 & 78.95 & 79.02 & 78.69 & 78.60 & 77.33 \\
\hline
\end{tabular}
}
\end{table}
\paragraph{Width of WeaveNet} To study the effect of introducing different amounts of information from adjacent scales, we vary the number of channels concatenated from adjacent scales, denoted $k$, from $k=16$ to $k=64$. The larger $k$ is, the richer the information introduced in each iteration. As can be seen from Table~\ref{table:ablation_study:component}, the accuracy consistently improves with $k$. However, the speed slightly decreases because of the higher computational complexity.
\paragraph{Number of anchor boxes} The regressor of a single shot detector predicts offsets w.r.t.\ its default anchors. The original anchor setting uses 3 aspect ratios, \emph{i.e.} $[1/2, 1, 2]$, for the first and the last two scales, and 5 aspect ratios, \emph{i.e.} $[1/3, 1/2, 1, 2, 3]$, for the middle scales. As can be seen from the third column of Table~\ref{table:ablation_study:component}, introducing more anchors by using five aspect ratios for all six scales improves the accuracy by about $0.4\%$, with only a slight computation overhead.
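A hedged sketch of how default boxes for such aspect-ratio sets are typically generated follows the common SSD convention $w = s\sqrt{a}$, $h = s/\sqrt{a}$, which keeps the box area fixed at $s^2$ (the exact SSD recipe also adds one extra box per location, omitted here); the scale value is illustrative.

```python
import math

def anchor_shapes(scale, aspect_ratios):
    """Default-box (w, h) pairs for one feature-map scale.
    Convention: w = s*sqrt(a), h = s/sqrt(a), so every box has
    area s^2 regardless of the aspect ratio a."""
    return [(scale * math.sqrt(a), scale / math.sqrt(a)) for a in aspect_ratios]

# five aspect ratios, as in the "use anchor [2,3]" setting
boxes = anchor_shapes(0.2, [1/3, 1/2, 1, 2, 3])
for w, h in boxes:
    assert abs(w * h - 0.04) < 1e-12   # constant area s^2
```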
\paragraph{Bounding box refinement} We refine the final bounding box locations after the NMS stage. Instead of directly using the NMS output, we further refine the location of each kept bounding box by a weighted sum with its surrounding boxes whose IoU with it is greater than 0.6, where the weight is the score of each box. We deploy the bounding box refinement upon a well-trained model. The 2nd column in Table~\ref{table:ablation_study:component} shows the improvement of the proposed bounding box refinement technique. In our experiments, we find that the proposed refinement consistently gains $0.1\%$--$0.3\%$ mAP compared with the unrefined bounding boxes.
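A minimal sketch of this refinement step is given below; the function names and toy boxes are illustrative, and boxes are assumed to be in $(x_1, y_1, x_2, y_2)$ format.

```python
import numpy as np

def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def refine(kept_box, all_boxes, scores, thr=0.6):
    """Score-weighted average of the boxes overlapping an NMS survivor
    with IoU > thr (the survivor itself is included)."""
    mask = np.array([iou(kept_box, b) > thr for b in all_boxes])
    w = scores[mask]
    return (w[:, None] * all_boxes[mask]).sum(axis=0) / w.sum()

boxes  = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], float)
scores = np.array([0.9, 0.6, 0.8])
refined = refine(boxes[0], boxes, scores)  # far-away third box is ignored
```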
\begin{table}
\renewcommand{\arraystretch}{1.4}
\centering
\caption{Speed-accuracy trade-off of different methods on the PASCAL VOC 2007 test set. Results with two batch sizes, \ie, 1 and 8, are reported. The performance is measured by mean Average Precision (mAP, in \%).}
\label{table:iter_inference}
\vspace{1mm}
\resizebox{0.48\textwidth}{!}
{
\begin{tabular}{ccccccc}
\hline
\multicolumn{1}{c|}{\multirow{2}{*}{Method}} & \multicolumn{1}{c|}{\multirow{2}{*}{Backbone}} & \multicolumn{2}{c|}{fps (Maxwell)} & \multicolumn{2}{c|}{fps (Pascal)} & \multicolumn{1}{c}{\multirow{2}{*}{mAP (\%)}} \\ \cline{3-6}
\multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Size = 1} & \multicolumn{1}{c|}{Size = 8} & \multicolumn{1}{c|}{Size = 1} & \multicolumn{1}{c|}{Size = 8} & \multicolumn{1}{c}{}
\\ \hline
\multicolumn{1}{c|}{SSD~\cite{liu2016ssd}} & \multicolumn{1}{c|}{VGG16} & \multicolumn{1}{c|}{48} & \multicolumn{1}{c|}{67} & \multicolumn{1}{c|}{75} & \multicolumn{1}{c|}{105} & \multicolumn{1}{c}{77.3}
\\
\multicolumn{1}{c|}{DiSSD~\cite{xiang2017context}} & \multicolumn{1}{c|}{VGG16} & \multicolumn{1}{c|}{--} & \multicolumn{1}{c|}{--} & \multicolumn{1}{c|}{41} & \multicolumn{1}{c|}{--} & \multicolumn{1}{c}{78.1}
\\
\multicolumn{1}{c|}{RSSD~\cite{jeong2017enhancement}} & \multicolumn{1}{c|}{VGG16} & \multicolumn{1}{c|}{35} & \multicolumn{1}{c|}{--} & \multicolumn{1}{c|}{--} & \multicolumn{1}{c|}{--} & \multicolumn{1}{c}{78.5}
\\
\multicolumn{1}{c|}{DSSD~\cite{fu2017dssd}} & \multicolumn{1}{c|}{ResNet101} & \multicolumn{1}{c|}{10} & \multicolumn{1}{c|}{--} & \multicolumn{1}{c|}{--} & \multicolumn{1}{c|}{--} & \multicolumn{1}{c}{78.6}
\\
\multicolumn{1}{c|}{StairNet~\cite{woo2017stairnet}} & \multicolumn{1}{c|}{VGG16} & \multicolumn{1}{c|}{--} & \multicolumn{1}{c|}{--} & \multicolumn{1}{c|}{30} & \multicolumn{1}{c|}{--} & \multicolumn{1}{c}{78.8}
\\ \hline
\multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}WeaveNet\\ (k=16, iter=1)\end{tabular}} & \multicolumn{1}{c|}{VGG16} & \multicolumn{1}{c|}{45} & \multicolumn{1}{c|}{63} & \multicolumn{1}{c|}{71} & \multicolumn{1}{c|}{104} & \multicolumn{1}{c}{79.0}
\\
\multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}WeaveNet\\ (k=32, iter=1)\end{tabular}} & \multicolumn{1}{c|}{VGG16} & \multicolumn{1}{c|}{44} & \multicolumn{1}{c|}{61} & \multicolumn{1}{c|}{67} & \multicolumn{1}{c|}{101} & \multicolumn{1}{c}{79.5}
\\
\multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}WeaveNet\\ (k=16, iter=3)\end{tabular}} & \multicolumn{1}{c|}{VGG16} & \multicolumn{1}{c|}{38} & \multicolumn{1}{c|}{56} & \multicolumn{1}{c|}{55} & \multicolumn{1}{c|}{82} & \multicolumn{1}{c}{79.2}
\\
\multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}WeaveNet\\ (k=16, iter=5)\end{tabular}} & \multicolumn{1}{c|}{VGG16} & \multicolumn{1}{c|}{33} & \multicolumn{1}{c|}{50} & \multicolumn{1}{c|}{44} & \multicolumn{1}{c|}{72} & \multicolumn{1}{c}{79.7}
\\ \hline
\end{tabular}
}
\end{table}
\paragraph{Iterative information weaving} One major attractive property of the proposed information weaving structure is that, by gradually weaving information from different scales, the receptive field is enlarged and the learning ability of the whole network is increased. To investigate the effectiveness of this iterative inference procedure, we design another set of experiments varying the number of iterations of the proposed WeaveNet while keeping the other components fixed. The results are summarized in Table~\ref{table:iter_inference}. As can be seen, the accuracy keeps increasing with more iterations, demonstrating that the proposed information weaving architecture can effectively fuse and refine the multi-scale information. The performance boost is also observed consistently for other settings. We also plot the results in Figure~\ref{fig:acc_speed}, where the solid line represents the same WeaveNet using different numbers of iterations. The speed does gradually decrease with more iterations. However, compared with the existing multi-scale fusion techniques in the first block of Table~\ref{table:iter_inference}, WeaveNet still shows significant superiority in both speed and accuracy.
\subsection{Results on PASCAL 2007}
In this subsection, we show the performance comparison between the WeaveNet and state-of-the-art object detection models. All of the models are trained on the union set of PASCAL VOC 2007 trainval and VOC 2012 trainval, and evaluated on PASCAL VOC 2007 \texttt{test} set.
\renewcommand{\arraystretch}{1.5}
\begin{table*}[t]
\centering
\caption{Object detection results on PASCAL VOC 2007 \texttt{test} set. The performance is measured by mean of Average Precision (mAP, in \%). }
\label{table:VOC07}
\resizebox{\textwidth}{!}{
\setlength\tabcolsep{2.5pt}
\begin{tabular}{c|c|c|cccccccccccccccccccc}
\toprule
\textbf{Two-Stage} & Backbone
& \textbf{mAP}
& areo & bike & bird & boat & bottle & bus & car & cat & chair & cow
& table & dog & horse & mbk & prsn & plant & sheep & sofa & train & tv \\
\midrule
Faster~\cite{ren2015faster} & VGG16
& 73.2
& 76.5 & 79.0 & 70.9 & 65.5 & 52.1 & 83.1 & 84.7 & 86.4 & 52.0 & 81.9
& 65.7 & 84.8 & 84.6 & 77.5 & 76.7 & 38.8 & 73.6 & 73.9 & 83.0 & 72.6 \\
ION~\cite{bell2016inside} & VGG16
& 75.6
& 79.2 & 83.1 & 77.6 & 65.6 & 54.9 & 85.4 & 85.1 & 87.0 & 54.4 & 80.6
& 73.8 & 85.3 & 82.2 & 82.2 & 74.4 & 47.1 & 75.8 & 72.7 & 84.2 & 80.4 \\
MR-CNN~\cite{gidaris2015object} & VGG16
& 78.2
& 80.3 & 84.1 & 78.5 & 70.8 & 68.5 & 88.0 & 85.9 & 87.8 & 60.3 & 85.2
& 73.7 & 87.2 & 86.5 & 85.0 & 76.4 & 48.5 & 76.3 & 75.5 & 85.0 & 81.0 \\
Faster~\cite{he2016deep} & ResNet101
& 76.4
& 79.8 & 80.7 & 76.2 & 68.3 & 55.9 & 85.1 & 85.3 & 89.8 & 56.7 & 87.8
& 69.4 & 88.3 & 88.9 & 80.9 & 78.4 & 41.7 & 78.6 & 79.8 & 85.3 & 72.0 \\
R-FCN~\cite{dai2016r} & ResNet101
& 80.5
& 79.9 & 87.2 & 81.5 & 72.0 & 69.8 & 86.8 & 88.5 & 89.8 & 67.0 & 88.1
& 74.5 & 89.8 & 90.6 & 79.9 & 81.2 & 53.7 & 81.8 & 81.5 & 85.9 & 79.9 \\
\bottomrule
\toprule
\textbf{One-Stage} & Backbone
& \textbf{mAP}
& areo & bike & bird & boat & bottle & bus & car & cat & chair & cow
& table & dog & horse & mbk & prsn & plant & sheep & sofa & train & tv \\
\midrule
SSD300*~\cite{liu2016ssd} & VGG16
& 77.5
& 79.5 & 83.9 & 76.0 & 69.6 & 50.5 & 87.0 & 85.7 & 88.1 & 60.3 & 81.5
& 77.0 & 86.1 & 87.5 & 83.9 & 79.4 & 52.3 & 77.9 & 79.5 & 87.6 & 76.8 \\
DSSD 321~\cite{fu2017dssd} & ResNet101
& 78.6
& 81.9 & 84.9 &\bf{80.5}& 68.4 & 53.9 & 85.6 & 86.2 &\bf{88.9}& 61.1 & 83.5
& 78.7 & 86.7 &\bf{88.7}&\bf{86.7 }& 79.7 & 51.7 & 78.0 &\bf{80.9}& 87.2 &\bf{79.4}\\
I-SSD~\cite{ning2017inception} & VGG16
& 78.6
& 82.4 & 84.3 & 78.1 & 70.6 & 52.8 & 85.7 & 86.8 & 88.3 & 62.4 & 82.7
& 78.0 & 86.7 & 88.3 & 86.0 & 79.9 & 53.4 & 78.5 &\bf{80.9}&\bf{88.5}& 77.8 \\
StairNet~\cite{woo2017stairnet} & VGG16
& 78.8
& 81.3 & 85.4 & 77.8 & 72.1 &\bf{59.2}& 86.4 & 86.8 & 87.5 &\bf{62.7}&\bf{85.7}
& 76.0 & 84.1 & 88.4 & 86.1 & 78.8 & 54.8 & 77.4 & 79.0 & 88.3 & 79.2 \\
\bf{WeaveNet} & VGG16
& \bf{79.7}
&\bf{83.0}&\bf{88.1}& 79.6 &\bf{72.7}& 57.7 &\bf{87.1}&\bf{86.9}& 88.7 & 62.5 & 84.6
&\bf{79.0}&\bf{87.1}& 86.9 & 85.1 &\bf{80.4}&\bf{56.2}&\bf{82.4}& 80.2 & 87.9 & 78.8 \\
\bottomrule
\end{tabular}
}
\end{table*}
\renewcommand{\arraystretch}{1.2}
Table~\ref{table:VOC07} shows the results of comparing WeaveNet with the state-of-the-art models. WeaveNet achieves $79.7\%$ mAP, much higher than the $77.5\%$ mAP of the vanilla SSD. Compared with other state-of-the-art multi-scale fusion methods, \emph{e.g.} StairNet, WeaveNet further improves the mAP by $0.9\%$. We note that the WeaveNet in Table~\ref{table:VOC07} uses $k = 16$ and $iter = 5$, which is slower than WeaveNets using fewer iterations. As can be seen in Figure~\ref{fig:acc_speed}, the WeaveNet with $k = 32$ and $iter = 1$ actually achieves a slightly better speed-accuracy trade-off, reaching $79.5\%$ mAP at $67$ fps (batch size = 1) on a TITAN X (Pascal) GPU.
\subsection{Results on PASCAL 2012}
\begin{figure}[t!]
\center
\resizebox{0.5\textwidth}{!}{
\includegraphics{voc12/voc12.pdf}
}
\caption{Detection result comparison between the vanilla SSD (left) and the proposed WeaveNet(right) on VOC 2012 \texttt{test} set.}
\label{fig:VOC12_viz}
\end{figure}
We also evaluate the proposed method on the PASCAL VOC 2012 benchmark, which includes more training samples and a more difficult test set with about twice as many testing images as PASCAL VOC 2007. We did not extensively tune the training parameters: all models are trained for the same number of iterations with exactly the same learning rate and weight decay as used on PASCAL VOC 2007. The training set is the union of PASCAL VOC 2007 trainval + test and PASCAL VOC 2012 trainval, and the final trained model is submitted to the online testing server for evaluation.
\renewcommand{\arraystretch}{1.5}
\begin{table*}[t]
\centering
\caption{Object detection results on PASCAL VOC 2012 \texttt{test} set. The performance is measured by mean of Average Precision (mAP, in \%).
}
\label{table:VOC12}
\resizebox{\textwidth}{!}{
\setlength\tabcolsep{2.5pt}
\begin{tabular}{c|c|c|cccccccccccccccccccc}
\toprule
\textbf{Two-Stage} & Backbone
& \textbf{mAP}
& areo & bike & bird & boat & bottle & bus & car & cat & chair & cow
& table & dog & horse & mbk & prsn & plant & sheep & sofa & train & tv \\
\midrule
HyperNet~\cite{kong2016hypernet} & VGG16
& 71.4
& 84.2 & 78.5 & 73.6 & 55.6 & 53.7 & 78.7 & 79.8 & 87.7 & 49.6 & 74.9
& 52.1 & 86.0 & 81.7 & 83.3 & 81.8 & 48.6 & 73.5 & 59.4 & 79.9 & 65.7 \\
Faster~\cite{ren2015faster} & ResNet101
& 73.8
& 86.5 & 81.6 & 77.2 & 58.0 & 51.0 & 78.6 & 76.6 & 93.2 & 48.6 & 80.4
& 59.0 & 92.1 & 85.3 & 84.8 & 80.7 & 48.1 & 77.3 & 66.5 & 84.7 & 65.6 \\
ION~\cite{bell2016inside} & VGG16
& 76.4
& 87.5 & 84.7 & 76.8 & 63.8 & 58.3 & 82.6 & 79.0 & 90.9 & 57.8 & 82.0
& 64.7 & 88.9 & 86.5 & 84.7 & 82.3 & 51.4 & 78.2 & 69.2 & 85.2 & 73.5 \\
R-FCN~\cite{dai2016r}& ResNet101
& 77.6
& 86.9 & 83.4 & 81.5 & 63.8 & 62.4 & 81.6 & 81.1 & 93.1 & 58.0 & 83.8
& 60.8 & 92.7 & 86.0 & 84.6 & 84.4 & 59.0 & 80.8 & 68.6 & 86.1 & 72.9 \\
\bottomrule
\toprule
\textbf{One-Stage} & Backbone
& \textbf{mAP}
& areo & bike & bird & boat & bottle & bus & car & cat & chair & cow
& table & dog & horse & mbk & prsn & plant & sheep & sofa & train & tv \\
\midrule
SSD300*~\cite{liu2016ssd} & VGG16
& 75.8
& 88.1 & 82.9 & 74.4 & 61.9 & 47.6 & 82.7 & 78.8 & 91.5 & 58.1 & 80.0
& 64.1 & 89.4 & 85.7 & 85.5 & 82.6 & 50.2 & 79.8 & 73.6 & 86.6 & 72.1 \\
DSSD 321~\cite{fu2017dssd} & ResNet-101
& 76.3
& 87.3 & 83.3 & 75.4 &\bf{64.6}& 46.8 & 82.7 & 76.5 &\bf{92.9}& 59.5 & 78.3
& 64.3 &\bf{91.5}& 86.6 & 86.6 & 82.1 &\bf{53.3}& 79.6 &\bf{75.7}& 85.2 &\bf{73.9}\\
StairNet~\cite{woo2017stairnet} & VGG16
& 76.4
& 87.7 & 83.1 & 74.6 & 64.2 &\bf{51.3}& 83.6 & 78.0 & 92.0 & 58.9 &\bf{81.8}
&\bf{66.2}& 89.6 & 86.0 & 84.9 & 82.6 & 50.9 & 80.5 & 71.8 & 86.2 & 73.5 \\
\bf{WeaveNet} & VGG16
&\bf{77.0}
&\bf{88.5}&\bf{83.6}&\bf{76.6}& 63.0 & 51.2 &\bf{83.7}&\bf{79.6}& 92.7 &\bf{60.9}& 81.5
& 65.2 & 90.4 &\bf{87.9}&\bf{87.0}&\bf{82.8}& 50.5 &\bf{80.8}& 73.0 &\bf{86.8}&\bf{73.9}\\
\bottomrule
\end{tabular}
}
\end{table*}
\renewcommand{\arraystretch}{1.2}
The evaluation results are summarized in Table~\ref{table:VOC12}. As can be seen, our proposed method surpasses the competitors with the same backbone network by a large margin. In particular, the reduced-VGG16-based WeaveNet surpasses the vanilla SSD by $1.2\%$ and improves the overall mAP by $0.6\%$ over one of the strongest multi-scale methods, StairNet~\cite{woo2017stairnet}.
We also visualize some detection results on the test set in Figure~\ref{fig:VOC12_viz}. Compared with the vanilla SSD, WeaveNet shows a stronger ability to detect both tiny and difficult objects. For medium-sized objects, WeaveNet provides more accurate bounding boxes that fit the objects tightly, thanks to its unique iterative information weaving.
\section{Conclusion}
In this work, we observe that both fine information from lower layers and coarse information from higher layers are crucial for building a highly efficient object detector. We propose a novel multi-scale fusion architecture, named WeaveNet, which iteratively \textit{weaves} information from adjacent scales. This not only gradually enlarges the detector's receptive field but also smoothly introduces more fine details from lower layers for robust and precise bounding box prediction. It can be easily trained and deployed without batch normalization and consumes very little additional computational cost, which makes it superior to existing multi-scale fusion methods. The experimental results demonstrate the remarkable speed and accuracy advantages of the proposed WeaveNet on the PASCAL VOC 2007 and PASCAL VOC 2012 datasets. In the near future, we would like to further evaluate the proposed WeaveNet on the challenging MS COCO benchmark~\cite{lin2014microsoft}, and release all the trained models and source code on GitHub.
{\small
\bibliographystyle{ieee}
}
\section{Introduction}
The ATLAS and CMS collaborations have probed the
lepton-flavor-violating (LFV) Higgs decay $h \to \mu\tau$ around 125
GeV at the LHC run-I \cite{150803372,160407730,cmstamu} and early
run-II \cite{161201644-5}. From the analysis of a data sample
corresponding to an integrated luminosity of 20.3 fb$^{-1}$ at the
$\sqrt{s}$ = 8 TeV LHC, the ATLAS Collaboration found a mild
deviation of $1\sigma$ significance in the $h \to\mu\tau$ channel
and set an upper limit of $Br(h \to \mu\tau) < 1.43\%$ at 95\%
confidence level with a best fit $Br(h \to \mu \tau )$ = $(0.53 \pm
0.51)\%$ \cite{160407730}. Based
on the data sample corresponding to an integrated luminosity of 19.7
fb$^{-1}$ at the $\sqrt{s}$ = 8 TeV LHC, the CMS collaboration imposed an upper
limit of $Br(h\to \mu\tau ) < 1.51\%$ at 95\% confidence level, while the best fit value is
$Br(h\to \mu\tau)=(0.84^{+0.39}_{-0.37})\%$ with a small excess of
$2.4\sigma$ \cite{cmstamu}. At the $\sqrt{s}$ = 13 TeV LHC run-II
with an integrated luminosity of 2.3 fb$^{-1}$, the CMS
collaboration did not observe the excess and imposed an upper limit
of $Br(h\to \mu\tau ) < 1.2\%$ \cite{161201644-5}. However, the CMS
search at the early LHC run-II cannot definitively exclude the
$h\to \mu\tau$ excess because of the low integrated luminosity.
If the $h\to \mu\tau$ excess is not a statistical
fluctuation, the new physics with the LFV interactions can give a simple
explanation for the excess. On the other hand, the long-standing anomaly
of the muon anomalous magnetic moment (muon g-2) implies that the new
physics is connected to muons. The two excesses can be simultaneously
explained by the LFV Higgs interactions, such as the general
two-Higgs-doublet model (2HDM) with the LFV Higgs interactions.
There have been many studies on the $h\to\mu\tau$ excess in the
framework of 2HDM \cite{2hdmtamu1,2hdmtamu2,xp1601.02616} and some
other new physics models \cite{htumodel}.
In this paper, we discuss the excesses of $h\to \mu\tau$ and
muon g-2 in the exact alignment limit of the general 2HDM where
one of the neutral Higgs mass eigenstates is aligned with the
direction of the scalar field vacuum expectation value (VEV)
\cite{alignment1}. In this interesting scenario, the tree-level
couplings of the SM-like Higgs to the SM particles are identical to
those in the SM, and the tree-level LFV coupling $h\mu\tau$ is
absent. We instead assume the $\mu\tau$ excess observed by CMS to
come from the other neutral Higgses, $H$ and $A$, which are nearly
degenerate with the SM-like Higgs at 125 GeV. In our discussions, we impose the
relevant theoretical constraints from the vacuum stability,
unitarity and perturbativity as well as the experimental
constraints from the precision electroweak data, $B$-meson decays,
$\tau$ decays and Higgs searches.
Our work is organized as follows. In Sec. II we recapitulate the
alignment limit of 2HDM. In Sec. III we perform the numerical
calculations and discuss the muon g-2 anomaly and the $\mu\tau$
excess around 125 GeV after imposing the relevant theoretical
and experimental constraints. Finally, we give our conclusion in Sec.
IV.
\section{two-Higgs-doublet model and the alignment limit}
The alignment limit of 2HDM is defined as the limit in which one
of the two neutral CP-even Higgs mass eigenstates aligns with the
direction of the scalar field VEV \cite{alignment1}. The alignment
limit can be easily realized in the decoupling limit
\cite{1507.00933-12}, namely that all the non-SM-like Higgses
are very heavy. The possibility of alignment without the decoupling
limit was first noted in \cite{1507.00933-12}, ``re-invented'' in
\cite{12074835,12100559,13052424}, and further studied in
\cite{1507.00933-22,1507.00933-23,alignment1,aligndecp,alignment2}. The
alignment limit is basis-independent and is most clearly exhibited in
the Higgs basis. The alignment limit also exists in the
Minimal Supersymmetric Standard Model which is a constrained incarnation of the general 2HDM. There are some
detailed discussions in \cite{13102248,14104969} and a very recent study in \cite{160800638}.
\subsection{Two-Higgs-doublet model in the Higgs basis}
The general Higgs potential is written as \cite{2h-poten}
\begin{eqnarray} \label{V2HDM} \mathrm{V} &=& \mu_{1}
(H_1^{\dagger} H_1) + \mu_{2} (H_2^{\dagger}
H_2) + \left[\mu_{3} (H_1^{\dagger} H_2 + \rm h.c.)\right]\nonumber \\
&&+ \frac{k_1}{2} (H_1^{\dagger} H_1)^2 + \frac{k_2}{2}
(H_2^{\dagger} H_2)^2 + k_3 (H_1^{\dagger} H_1)(H_2^{\dagger} H_2) +
k_4 (H_1^{\dagger}
H_2)(H_2^{\dagger} H_1) \nonumber \\
&&+ \left[\frac{k_5}{2} (H_1^{\dagger} H_2)^2 + \rm h.c.\right]+
\left[k_6 (H_1^{\dagger} H_1)
(H_1^{\dagger} H_2) + \rm h.c.\right] \nonumber \\
&& + \left[k_7 (H_2^{\dagger} H_2) (H_1^{\dagger} H_2) + \rm
h.c.\right].
\end{eqnarray}
All $\mu_i$ and $k_i$ are real in the CP-conserving case. In the
Higgs basis, the $H_1$ field has a VEV $v=$246 GeV, and the VEV of
$H_2$ field is zero. The two complex scalar doublets have the
hypercharge $Y = 1$,
\begin{equation}
H_1=\left(\begin{array}{c} G^+ \\
\frac{1}{\sqrt{2}}\,(v+\rho_1+iG_0)
\end{array}\right)\,, \ \ \
H_2=\left(\begin{array}{c} H^+ \\
\frac{1}{\sqrt{2}}\,(\rho_2+iA_0)
\end{array}\right).
\end{equation}
The Nambu-Goldstone bosons $G^0$ and $G^+$ are eaten by the gauge bosons.
The $H^\pm$ and $A$ are the mass eigenstates of the charged Higgs
boson and CP-odd Higgs boson, and their masses are given by
\begin{equation}\label{mamhp} m_A^2=m^2_{H^{\pm}}+
\frac{1}{2}v^2(k_4-k_5). \end{equation} The physical CP-even Higgs
bosons $h$ and $H$ are the linear combinations of $\rho_1$ and
$\rho_2$,
\begin{equation}\label{mixalign}
\left(\begin{array}{c} \rho_1 \\
\rho_2
\end{array}\right)\, =\ \
\left(\begin{array}{c} ~s_\theta~~~~c_\theta \\
c_\theta~-s_\theta
\end{array}\right)\,
\left(\begin{array}{c} h \\
H
\end{array}\right),\,
\end{equation}
and their masses are given as \begin{equation}
m_{h,H}^2=\frac{1}{2}\left[m^2_{A} + (k_1 + k_5) v^2 \mp
\sqrt{[m^2_{A}+(k_5-k_1)v^2]^2+4k_6^2v^4}\right]. \end{equation} Here
$s_\theta\equiv\sin\theta$, $c_\theta\equiv\cos\theta$, and \begin{equation}
\cos\theta=\frac{-k_6v^2}{\sqrt{(m_H^2-m_h^2)(m_H^2-k_1v^2)}}.
\label{ctheta}\end{equation}
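As a numerical cross-check of the formulas above, the short script below diagonalizes the CP-even mass matrix and compares the result with the quoted closed-form eigenvalues and mixing. The mass matrix in the $(\rho_1,\rho_2)$ Higgs basis, $\mathcal{M}^2 = \bigl(\begin{smallmatrix} k_1 v^2 & k_6 v^2 \\ k_6 v^2 & m_A^2 + k_5 v^2 \end{smallmatrix}\bigr)$, is inferred from these results, and the parameter values are purely illustrative.

```python
import numpy as np

v, mA = 246.0, 300.0            # GeV; illustrative values only
k1, k5, k6 = 0.26, 0.10, 0.05

# CP-even mass matrix in the (rho_1, rho_2) Higgs basis
M2 = np.array([[k1 * v**2,        k6 * v**2],
               [k6 * v**2, mA**2 + k5 * v**2]])

# closed-form eigenvalues and mixing quoted in the text
tr = mA**2 + (k1 + k5) * v**2
df = mA**2 + (k5 - k1) * v**2
rad = np.sqrt(df**2 + 4 * k6**2 * v**4)
mh2, mH2 = 0.5 * (tr - rad), 0.5 * (tr + rad)
cos_th = -k6 * v**2 / np.sqrt((mH2 - mh2) * (mH2 - k1 * v**2))

# numerical diagonalization agrees with the closed form
assert np.allclose(np.linalg.eigvalsh(M2), [mh2, mH2])
```

Setting $k_6 \to 0$ makes the off-diagonal entry vanish, so $\cos\theta \to 0$: the exact alignment limit without decoupling.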
In this paper we take the light CP-even Higgs $h$ as the 125 GeV
Higgs. For $\cos\theta=0$, the mass eigenstates of CP-even Higgs
bosons are obtained from Eq.~(\ref{mixalign}), \begin{equation}
h=\rho_1,~~~~~H=-\rho_2, \end{equation} which is the so-called ``alignment
limit''. Eq.~(\ref{ctheta}) shows that the alignment limit can be
realized in two ways: $k_6=0$ or $m_H^2 \gg v^2$. The latter is
called the decoupling limit. In this paper we focus on the
former, which is the alignment without decoupling limit. In the
alignment limit, the $h$ couplings to gauge bosons are the same as the Higgs couplings in
the SM, and the $H$ has no couplings to gauge bosons.
\subsection{The Higgs couplings}
We can rotate the Higgs basis by a mixing
angle $\beta$,
\begin{equation}
\left(\begin{array}{c} \Phi_1 \\
\Phi_2
\end{array}\right)\, =\ \
\left(\begin{array}{c} ~c_\beta~~~-s_\beta \\
s_\beta~~~~~~c_\beta
\end{array}\right)\,
\left(\begin{array}{c} H_1 \\
H_2
\end{array}\right).\,
\end{equation}
where $s_\beta\equiv\sin\beta$, $c_\beta\equiv\cos\beta$, and
$\tan\beta=v_2 /v_1$ with $v_2$ and $v_1$ being the VEVs of $\Phi_2$
and $\Phi_1$ and $v^2 = v^2_1 + v^2_2 = (246~\rm GeV)^2$.
The general Higgs potential is written as
\cite{2h-poten}
\begin{eqnarray} \label{V2HDMgen} \mathrm{V} &=& m_{11}^2
(\Phi_1^{\dagger} \Phi_1) + m_{22}^2 (\Phi_2^{\dagger}
\Phi_2) - \left[m_{12}^2 (\Phi_1^{\dagger} \Phi_2 + \rm h.c.)\right]\nonumber \\
&&+ \frac{\lambda_1}{2} (\Phi_1^{\dagger} \Phi_1)^2 +
\frac{\lambda_2}{2} (\Phi_2^{\dagger} \Phi_2)^2 + \lambda_3
(\Phi_1^{\dagger} \Phi_1)(\Phi_2^{\dagger} \Phi_2) + \lambda_4
(\Phi_1^{\dagger}
\Phi_2)(\Phi_2^{\dagger} \Phi_1) \nonumber \\
&&+ \left[\frac{\lambda_5}{2} (\Phi_1^{\dagger} \Phi_2)^2 + \rm
h.c.\right]+ \left[\lambda_6 (\Phi_1^{\dagger} \Phi_1)
(\Phi_1^{\dagger} \Phi_2) + \rm h.c.\right] \nonumber \\
&& + \left[\lambda_7 (\Phi_2^{\dagger} \Phi_2) (\Phi_1^{\dagger}
\Phi_2) + \rm h.c.\right].
\end{eqnarray}
The parameters $m_{ij}^2$ and $\lambda_i$ are linear combinations
of the Higgs-basis parameters $\mu_i$ and $k_i$; the
detailed expressions are given in \cite{alignment1,0504050}.
After spontaneous electroweak symmetry breaking, there are five
physical Higgses: two neutral CP-even $h$ and $H$, one neutral
pseudoscalar $A$, and two charged scalar $H^{\pm}$.
The general Yukawa interaction can be given as
\bea
- {\cal L} &=& Y_{u1}\,\overline{Q}_L \, \tilde{{ \Phi}}_1 \,u_R
+\,Y_{d1}\,
\overline{Q}_L\,{\Phi}_1 \, d_R\, + \, Y_{\ell 1}\,\overline{L}_L \, {\Phi}_1\,e_R\nonumber\\
&&+\, Y_{u2}\,\overline{Q}_L \, \tilde{{ \Phi}}_2 \,u_R\,
+\,Y_{d2}\, \overline{Q}_L\,{\Phi}_2 \, d_R\,+\, Y_{\ell 2}
\overline{L}_L\, {\Phi}_2\,e_R \,+\, \mbox{h.c.}\,, \eea where
$Q_L^T=(u_L\,,d_L)$, $L_L^T=(\nu_L\,,l_L)$,
$\widetilde\Phi_{1,2}=i\tau_2 \Phi_{1,2}^*$, and $Y_{u1,2}$,
$Y_{d1,2}$ and $Y_{\ell 1,2}$ are $3 \times 3$ matrices in family
space.
To avoid the tree-level FCNC couplings of the quarks, we take \bea
&&Y_{u1}=c_u~\rho_u,~~Y_{u2}=s_u~ \rho_u, \nonumber\\
&&Y_{d1}=c_d~ \rho_d,~~Y_{d2}=s_d~ \rho_d, \eea where $c_u\equiv
\cos\theta_u$, $s_u\equiv \sin\theta_u$, $c_d\equiv \cos\theta_d$,
$s_d\equiv \sin\theta_d$, and $\rho_u$ ($\rho_d$) is a $3 \times 3$
matrix. With this choice, the interaction corresponds to the aligned
2HDM \cite{a2hm1,a2hm2}.
For the Yukawa coupling matrix of the lepton, we take \bea\label{klm}
&&X_{ii}=\frac{\sqrt{2}m_{\ell_i}}{v}(s_\beta+c_\beta
\kappa_\ell),\nonumber\\
&&X_{\tau\mu}=c_\beta \rho_{\tau\mu},\nonumber\\
&&X_{\mu\tau}=c_\beta \rho_{\mu\tau},\eea where $X=V_L Y_{\ell2}
V_R^{\dagger}$, and $V_L$ $(V_R)$ is the unitary matrix which
transforms the interaction eigenstates to the mass eigenstates of
the left-handed (right-handed) lepton fields. All other off-diagonal
matrix elements of $X$ are zero.
The Yukawa couplings of the neutral Higgs bosons are given as
\bea\label{hffcoupling} &&
y_{hf_if_i}=\frac{m_{f_i}}{v}\left[\sin(\beta-\alpha)+\cos(\beta-\alpha)\kappa_f\right],\nonumber\\
&&y_{Hf_if_i}=\frac{m_{f_i}}{v}\left[\cos(\beta-\alpha)-\sin(\beta-\alpha)\kappa_f\right],\nonumber\\
&&y_{Af_if_i}=-i\frac{m_{f_i}}{v}\kappa_f~{\rm (for~u)},~~~~y_{Af_if_i}=i \frac{m_{f_i}}{v}\kappa_f~{\rm (for~d,~\ell)},\nonumber\\
&&y_{h\tau\mu}=\cos(\beta-\alpha)\frac{\rho_{\tau\mu}}{\sqrt{2}},~~~~~~~~y_{h\mu\tau}=\cos(\beta-\alpha)\frac{\rho_{\mu\tau}}{\sqrt{2}},\nonumber\\
&&y_{H\tau\mu}=-\sin(\beta-\alpha)\frac{\rho_{\tau\mu}}{\sqrt{2}},~~~~y_{H\mu\tau}=-\sin(\beta-\alpha)\frac{\rho_{\mu\tau}}{\sqrt{2}},\nonumber\\
&&y_{A\tau\mu}=i\frac{\rho_{\tau\mu}}{\sqrt{2}},~~~~~~~~~~~~~~~~~~~~y_{A\mu\tau}=i\frac{\rho_{\mu\tau}}{\sqrt{2}}.
\eea where $\kappa_u\equiv-\tan(\beta-\theta_u)$ and
$\kappa_d\equiv-\tan(\beta-\theta_d)$. $\kappa_\ell$ is a free input
parameter used to parameterize the matrix elements of the lepton Yukawa
coupling, as in Eq. (\ref{klm}); with this choice, the lepton Yukawa
couplings take the form given in Eq. (\ref{hffcoupling}).
The neutral Higgs bosons couplings to the gauge bosons normalized to the
SM Higgs boson are given by \begin{equation} y^h_{V}=\sin(\beta-\alpha),~~~
y^H_{V}=\cos(\beta-\alpha),\label{hvvcoupling}\end{equation} where $V$ denotes
$Z$ and $W$.
In the exact alignment limit, namely $\cos(\beta-\alpha)=0$, Eqs.
(\ref{hffcoupling}) and (\ref{hvvcoupling}) show that the 125
GeV Higgs ($h$) has the same couplings to the fermions and gauge
bosons as in the SM, and its tree-level LFV couplings are absent. The
heavy CP-even Higgs ($H$) has no coupling to the gauge bosons, while
$A$ and $H$ retain tree-level LFV couplings.
\section{Numerical calculations and discussions}
\subsection{Numerical calculations}
In the exact alignment limit, the SM-like Higgs has no tree-level
LFV coupling. In order to explain the $h\to \mu\tau$ excess
reported by CMS, we assume the signal comes from $H$ or
$A$, which is nearly degenerate with the SM-like Higgs at 125 GeV.
We simply consider two scenarios: (i) $m_A$=126 GeV and (ii)
$m_H$=126 GeV.
In our calculations, the other involved parameters are randomly
scanned in the following ranges:
\begin{eqnarray}
&&-(400~{\rm GeV})^2 \leq m_{12}^2 \leq (400~{\rm GeV})^2,~~~0.1\leq\tan\beta\leq 10,\nonumber\\
&&100 {\rm\ GeV} \leq ~m_{H^\pm} \leq 700 {\rm\ GeV},\nonumber\\
&&0\leq \kappa_u \leq 1.2, ~~~ -150\leq \kappa_\ell \leq 150,~~~-0.3\leq\rho_{\tau\mu}\leq 0.3,\nonumber\\
&&{\rm Scenario~i}:~~ m_A=126~{\rm GeV}, ~~150 {\rm\ GeV} \leq ~m_{H} \leq 700 {\rm\ GeV},~~ \rho_{\mu\tau}=-\rho_{\tau\mu},\nonumber\\
&&{\rm Scenario~ii}:~m_H=126~{\rm GeV},~~150 {\rm\ GeV} \leq ~m_{A}
\leq 700 {\rm\ GeV},~~ \rho_{\mu\tau}=\rho_{\tau\mu}.
\end{eqnarray}
In order to relax the constraints from the observables of down-type
quarks, we take $\kappa_d=0$. For the cases of $m_A=126$ GeV and
$m_H$=126 GeV, we respectively take $\rho_{\mu\tau}=-\rho_{\tau\mu}$
and $\rho_{\mu\tau}=\rho_{\tau\mu}$ to produce a large positive
contribution to the muon g-2. The pseudoscalar $A$ can give positive
contributions to the muon g-2 via the two-loop Barr-Zee diagrams
with the lepton-flavor-conserving (LFC) coupling. Therefore, we take
$|\kappa_\ell|<150$ to examine the possibility of explaining
the muon g-2. In the exact alignment limit, the $h\tau\bar{\tau}$
coupling is independent of $\kappa_\ell$ and equals the SM value.
However, the $A\tau\bar{\tau}$ and $H\tau\bar{\tau}$ couplings can
reach 1.08, slightly larger than 1, for
$|\kappa_\ell|=150$; this does not cause a perturbativity
problem owing to the loop suppression factor. In
addition, for such large $\kappa_\ell$ the $Br(A\to \tau\bar{\tau})$
and $Br(H\to \tau\bar{\tau})$ can reach 1. Due to $\kappa_d=0$ and
$\cos(\beta-\alpha)=0$, the cross sections of $A$ and $H$ vanish
in the $b\bar{b}$-associated production mode and the vector-boson-fusion
production mode. However, the searches for $gg\to A/H \to \tau\bar{\tau}$ can
constrain $\kappa_u$. We discuss these constraints in item (5)
below.
During the scan, we consider the following experimental constraints
and observables:
{\bf (1) Theoretical constraints and precision electroweak data}. We
use $\textsf{2HDMC-1.6.5}$ \cite{2hc-1} to implement the theoretical
constraints from the vacuum stability, unitarity and
coupling-constant perturbativity, as well as the constraints from
the oblique parameters ($S$, $T$, $U$) and $\delta\rho$.
{\bf (2) $B$-meson decays and $R_b$}. Although the tree-level FCNCs
in the quark sector are absent, they will appear at the one-loop
level in this model. We consider the constraints of $B$-meson decays
from $\Delta m_{B_s}$, $\Delta m_{B_d}$, $B\to X_s\gamma$, and
$B_s\to \mu^+\mu^-$, which are respectively calculated using the
formulas in \cite{deltmq,bsr,bsmumu}. In addition, we consider the $R_b$
constraints, which are
calculated following the formulas in \cite{rb}. In fact, in this
paper we take $\kappa_d=0$ and $0\leq\kappa_u\leq 1.2$, which will
relax the constraints from the bottom-quark observables sizably.
{\bf (3) $\tau$ decays}. In this model, the non-SM-like Higgses have
the tree-level LFV couplings to the $\tau$ lepton, and the LFC
couplings to leptons can be sizably enhanced for
$-150\leq\kappa_\ell\leq150$. Therefore, some $\tau$ decay processes
can give very strong constraints on the model.
\begin{itemize}
\item[(i)] $\tau\to 3\mu$. In the exact alignment limit, the LFV $A\tau\mu$ and $H\tau\mu$ couplings
generate the $\tau\to\mu^+\mu^-\mu^-$ process at the tree level, and the corresponding Feynman diagrams
are shown in Fig. \ref{fmtta3mu}.
The branching ratio of $\tau\to 3\mu$ is given as \cite{ta3mu} \begin{equation}
\frac{Br(\tau\to3\mu)}{Br(\tau\to\mu\bar{\nu}\nu)}=\sum_{\phi_1,\phi_2=A,H}\frac{I(\phi_1,\phi_2)}{64G_F^2},\end{equation}
where \bea
I(\phi_1,\phi_2)=&&2\frac{y_{\phi_1\mu\tau}y_{\phi_1\mu\mu}^*}{m_{\phi_1}^2}
\frac{y_{\phi_2\mu\tau}^*y_{\phi_2\mu\mu}}{m_{\phi_2}^2}+
2\frac{y_{\phi_1\tau\mu}y_{\phi_1\mu\mu}^*}{m_{\phi_1}^2}
\frac{y_{\phi_2\tau\mu}^*y_{\phi_2\mu\mu}}{m_{\phi_2}^2}\nonumber\\
&&+\frac{y_{\phi_1\mu\tau}y_{\phi_1\mu\mu}}{m_{\phi_1}^2}
\frac{y_{\phi_2\mu\tau}^*y_{\phi_2\mu\mu}^*}{m_{\phi_2}^2}+
\frac{y_{\phi_1\tau\mu}y_{\phi_1\mu\mu}}{m_{\phi_1}^2}
\frac{y_{\phi_2\tau\mu}^*y_{\phi_2\mu\mu}^*}{m_{\phi_2}^2}.\label{ta3mu}\eea
The current experimental upper bound of $Br(\tau\to 3\mu)$ is
\cite{expta3mu}, \begin{equation} Br(\tau\to 3\mu) < 2.1\times 10^{-8}. \end{equation}
From Eq. (\ref{ta3mu}), we find that the experimental data on $Br(\tau\to
3\mu)$ give very strong constraints on the products
$\rho_{\tau\mu}\times \kappa_\ell$ and $\rho_{\mu\tau}\times
\kappa_\ell$.
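To see how sharply this bound constrains the coupling products, the ratio can be evaluated numerically. The sketch below implements Eq. (\ref{ta3mu}) in the exact alignment limit with $\sin(\beta-\alpha)=1$, using the coupling conventions of Eq. (\ref{hffcoupling}); the parameter values are purely illustrative.

```python
import math

G_F = 1.166e-5   # Fermi constant in GeV^-2
m_mu = 0.1057    # GeV
v = 246.0        # GeV

def br_ratio(rho_tm, rho_mt, kappa_l, mA, mH, sba=1.0):
    """Br(tau->3mu)/Br(tau->mu nu nu), Eq. (ta3mu), exact alignment."""
    phis = [  # (y_{phi tau mu}, y_{phi mu tau}, y_{phi mu mu}, m_phi^2)
        (1j * rho_tm / math.sqrt(2), 1j * rho_mt / math.sqrt(2),
         1j * m_mu * kappa_l / v, mA**2),
        (-sba * rho_tm / math.sqrt(2), -sba * rho_mt / math.sqrt(2),
         -sba * m_mu * kappa_l / v, mH**2),
    ]
    total = 0.0
    for y1tm, y1mt, y1mm, m1 in phis:
        for y2tm, y2mt, y2mm, m2 in phis:
            I = (2 * (y1mt * y1mm.conjugate()) * (y2mt.conjugate() * y2mm)
                 + 2 * (y1tm * y1mm.conjugate()) * (y2tm.conjugate() * y2mm)
                 + (y1mt * y1mm) * (y2mt.conjugate() * y2mm.conjugate())
                 + (y1tm * y1mm) * (y2tm.conjugate() * y2mm.conjugate())
                 ) / (m1 * m2)
            total += complex(I).real
    return total / (64 * G_F**2)

# Doubling rho and kappa_l rescales the rate by 2^4 = 16.
r1 = br_ratio(0.1, 0.1, 10.0, 126.0, 400.0)
r2 = br_ratio(0.2, 0.2, 20.0, 126.0, 400.0)
assert r1 > 0.0 and abs(r2 / r1 - 16.0) < 1e-9
```

With $\rho_{\tau\mu}=\rho_{\mu\tau}=0.1$ and $\kappa_\ell=10$, this ratio times $Br(\tau\to\mu\bar{\nu}\nu)\simeq0.17$ is already of the same order as the experimental bound, which is why the allowed $|\rho_{\tau\mu}|$ shrinks quickly as $|\kappa_\ell|$ grows.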
\begin{figure}[tb]
\epsfig{file=fig1.ps,height=3.5cm}
\vspace{-0.5cm} \caption{The main Feynman diagrams of $\tau^-\to\mu^-\mu^-\mu^+$.
The two $\mu^-$ in the final state can be exchanged.
In the exact alignment limit $H^0_k$ denotes $A$ and $H$.}
\label{fmtta3mu}
\end{figure}
\begin{figure}[tb]
\epsfig{file=fig2.ps,height=3.5cm}
\vspace{-0.5cm} \caption{The main Feynman diagrams of $\tau\to\mu\gamma$.
In the exact alignment limit $H^0_k$ denotes $A$ and $H$.}
\label{fmttamur}
\end{figure}
\item[(ii)] $\tau\to \mu\gamma$. The main Feynman diagrams of $\tau\to \mu\gamma$ in the model
are shown in Fig. \ref{fmttamur}. In the exact alignment limit, the
SM-like Higgs has no tree-level LFV coupling, and the heavy CP-even
Higgs couplings to the gauge bosons are equal to zero. Therefore, the
SM-like Higgs does not contribute to $\tau\to
\mu\gamma$, and $\tau\to \mu\gamma$ receives no correction
from the two-loop Barr-Zee diagrams with the $W$ loop. $Br(\tau\to \mu\gamma)$ in this model is given by \begin{equation}
\frac{{\rm BR}(\tau \rightarrow \mu \gamma)}{{\rm BR}(\tau
\rightarrow \mu \bar{\nu} \nu)}
=\frac{48\pi^3\alpha\left(|A_{1L0}+A_{1Lc}+A_{2L}|^2+|A_{1R0}+A_{1Rc}+A_{2R}|^2\right)}
{G_F^2},
\end{equation} where $A_{1L0}$, $A_{1Lc}$, $A_{1R0}$ and $A_{1Rc}$ are from
the one-loop diagrams with the Higgs boson and $\tau$ lepton
\cite{2hdmtamu2}, \bea\label{tamura1}
A_{1L0}&=&\sum_{\phi=H,~A}\frac{y^{*}_{\phi\;\tau \mu}}{16\pi^2
m_\phi^2}
\left[y^{*}_{\phi\;\tau\tau}\left(\log\frac{m_\phi^2}{m_\tau^2}-\frac{3}{2}\right)
+\frac{y_{\phi\;\tau\tau}}{6}\right], \\
A_{1Lc}&=&-\frac{(\rho^{e\dagger} \rho^e)^{\mu \tau}}{192\pi^2
m_{H^-}^2}, \\\label{tamura1r} A_{1R0}&=&A_{1L0}\left({y^{*}_{\phi\;
\tau\mu}\rightarrow y_{\phi\; \mu\tau},
~~y_{\phi \;\tau \tau} \leftrightarrow y^{*}_{\phi\; \tau\tau}}\right),\\
A_{1Rc}&=&0. \eea The $A_{2L}$ and $A_{2R}$ are from the two-loop
Barr-Zee diagrams with the third-generation fermion loop
\cite{2hdmtamu2}, \bea\label{tamura2}
A_{2L}&&=-\sum_{\phi=H,A;f=t,b,\tau}
\frac{N_C Q_f \alpha}{8\pi^3}
\frac{y^{*}_{\phi\;\tau\mu}}{m_\tau m_{f}}
\left[
Q_f\left\{
{\rm Re} (y_{\phi\; ff})
F_H\left(x_{f\phi}\right)
- i {\rm Im} (y_{\phi\; ff})
F_A\left(x_{f\phi}\right)\right\}\right.
\nonumber \\
&&\left. +\frac{(1-4 s_W^2)(2T_{3f}-4Q_f s_W^2)}{16s_W^2 c_W^2}
\left\{
{\rm Re} (y_{\phi\; ff})
\tilde{F}_H\left(x_{f\phi},x_{fZ}\right)
- i {\rm Im} (y_{\phi\; ff})
\tilde{F}_A\left(x_{f\phi},x_{f Z}\right)\right\}\right], \nonumber \\
A_{2R} &=&A_{2L}\left(y^{*}_{\phi\;\tau\mu}\rightarrow
y_{\phi\;\mu\tau},~i\rightarrow -i\right),
\label{Barr-Zee}\eea where $T_{3f}$ denotes the isospin of the
fermion, and
\begin{align}
F_{H}(y)&=\frac{y}{2}\int_0^1 dx \frac{1-2x(1-x)}{x(1-x)-y}\log
\frac{x(1-x)}{y}
~~({\rm for}~\phi=H), \nonumber\\
F_A(y) &=\frac{y}{2}\int_0^1 dx \frac{1}{x(1-x)-y}\log
\frac{x(1-x)}{y}~~
({\rm for}~\phi=A), \nonumber\\
\tilde{F}_H(x,y)&=\frac{xF_H(y)-yF_H(x)}{x-y},\nonumber\\
\tilde{F}_A(x,y)&=\frac{xF_A(y)-yF_A(x)}{x-y}.
\end{align}
The two terms of $A_{2L}$ come from the effective $\phi
\gamma\gamma$ vertex and $\phi Z\gamma$ vertex induced by the
third-generation fermion loop. The current experimental data give an
upper bound of $Br(\tau\to\mu\gamma)$ \cite{exptamur}, \begin{equation}
Br(\tau\to\mu\gamma) < 4.4\times 10^{-8}. \end{equation}
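Numerically, the loop integrands above are finite even at $x(1-x)=y$ (since $\ln(u/y)/(u-y)\to1/y$ there), so a simple midpoint rule suffices. The sketch below evaluates $F_H$ and $F_A$ and illustrates that $F_A(y)>F_H(y)>0$ for a light fermion in the loop; this is the origin of the larger CP-odd loop form factor invoked later for the effective $ggA$ coupling.

```python
import math

def _midpoint(f, n=20000):
    # Midpoint rule on (0, 1); the integrands below are continuous.
    h = 1.0 / n
    return h * sum(f((i + 0.5) * h) for i in range(n))

def F_H(y):
    def g(x):
        u = x * (1.0 - x)
        if abs(u - y) < 1e-12:            # removable point: ratio -> 1/y
            return (1.0 - 2.0 * u) / y
        return (1.0 - 2.0 * u) / (u - y) * math.log(u / y)
    return 0.5 * y * _midpoint(g)

def F_A(y):
    def g(x):
        u = x * (1.0 - x)
        if abs(u - y) < 1e-12:
            return 1.0 / y
        return math.log(u / y) / (u - y)
    return 0.5 * y * _midpoint(g)

# tau loop with a 126 GeV mediator: y = m_tau^2 / m_phi^2
y = (1.777 / 126.0) ** 2
assert F_A(y) > F_H(y) > 0.0
```

Since $1\ge 1-2x(1-x)$ on the whole integration range and the common factor $\ln(u/y)/(u-y)$ is non-negative, the inequality $F_A>F_H$ holds pointwise in the integrand.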
\begin{figure}[tb]
\epsfig{file=fig3.ps,height=3.5cm}
\vspace{-0.2cm} \caption{The main Feynman diagram of $\tau\to\mu\pi^0$.}
\label{fmttamupi}
\end{figure}
\item[(iii)] $\tau\to \mu\pi^0$. The $\tau$ can decay into a lepton and a pseudoscalar meson at the tree level
via the CP-odd Higgs with the LFV couplings, such as $\tau\to \mu\pi^0$.
The corresponding Feynman diagrams are shown in Fig.
\ref{fmttamupi}. The width of $\tau\to \mu\pi^0$ is given as
\cite{tamupi}, \begin{equation}\Gamma(\tau\to\mu\pi^0)=\frac{f^2_\pi m^4_\pi
m_\tau}{512\pi m^4_A
v^2}(|\rho_{\tau\mu}|^2+|\rho_{\mu\tau}|^2)(\kappa_u+\kappa_d)^2.
\end{equation} The current upper bound of $Br(\tau\to\mu\pi^0)$ is
\cite{exptamupi}, \begin{equation} Br(\tau\to\mu\pi^0) < 1.1\times 10^{-7}. \end{equation}
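As an order-of-magnitude check, this width can be converted into a branching ratio with the total $\tau$ width, $\Gamma_\tau=\hbar/\tau_\tau$. The sketch below uses standard values ($f_\pi\approx0.130$ GeV, $\tau_\tau\approx2.9\times10^{-13}$ s) and couplings at the edge of the scan ranges; the predicted rate sits far below the bound, so this channel is not the dominant constraint here.

```python
import math

f_pi, m_pi, m_tau = 0.130, 0.135, 1.777   # GeV
v, mA = 246.0, 126.0                      # GeV
tau_life = 2.903e-13                      # tau lifetime in s
hbar = 6.582e-25                          # GeV * s

def br_tau_to_mu_pi0(rho_tm, rho_mt, kappa_u, kappa_d=0.0):
    """Br(tau -> mu pi0) from the tree-level A-exchange width above."""
    width = (f_pi**2 * m_pi**4 * m_tau / (512.0 * math.pi * mA**4 * v**2)
             * (rho_tm**2 + rho_mt**2) * (kappa_u + kappa_d)**2)
    return width * tau_life / hbar

# Even at the edge of the scan ranges the rate is far below 1.1e-7.
assert br_tau_to_mu_pi0(0.3, 0.3, 1.2) < 1.1e-7
```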
\end{itemize}
{\bf(4) muon g-2}. The dominant contributions to the muon g-2 are
from the one-loop diagrams
with the Higgs LFV coupling \cite{mua1loop}, and the corresponding Feynman diagrams
can be obtained by replacing the initial-state $\tau$ with $\mu$ in Fig. \ref{fmttamur} (a) and
Fig. \ref{fmttamur} (b). In the exact alignment
limit,
\bea
\delta a_{\mu1}^{LFV}&=&\frac{m_\mu m_\tau \rho_{\mu\tau}\rho_{\tau\mu}}{16\pi^2}
\left[\frac{(\log\frac{m_H^2}{m_\tau^2}-\frac{3}{2})}{m_H^2}
-\frac{(\log\frac{m_A^2}{m_\tau^2}-\frac{3}{2})}{m_A^2}
\right].
\label{mua1}
\eea
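Eq. (\ref{mua1}) is straightforward to evaluate. For scenario (i), with the light $A$ and $\rho_{\mu\tau}=-\rho_{\tau\mu}$, the $A$ term dominates the bracket and the resulting shift is positive and naturally of order $10^{-9}$; the values below are illustrative.

```python
import math

m_mu, m_tau = 0.1057, 1.777   # GeV

def da_mu_lfv(rho_mt, rho_tm, mH, mA):
    """One-loop LFV contribution to the muon g-2, Eq. (mua1)."""
    def bracket_term(m):
        return (math.log(m**2 / m_tau**2) - 1.5) / m**2
    return (m_mu * m_tau * rho_mt * rho_tm / (16.0 * math.pi**2)
            * (bracket_term(mH) - bracket_term(mA)))

# Scenario (i): m_A = 126 GeV with rho_mt = -rho_tm gives a positive
# shift of the size needed for the ~2.6e-9 anomaly.
da = da_mu_lfv(-0.1, 0.1, mH=400.0, mA=126.0)
assert 1e-10 < da < 1e-8
```

Flipping the relative sign ($\rho_{\mu\tau}=\rho_{\tau\mu}$) makes this one-loop piece negative for $m_A<m_H$, which is why scenario (ii) instead relies on the lighter $H$ term with $m_A$ heavy.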
At the one-loop level, the diagrams with the Higgs LFC coupling can
also give the contributions to the muon g-2, especially for a large
lepton Yukawa coupling \cite{mua1looplfc}. The corresponding
Feynman diagrams can be obtained by replacing $\tau$ in the initial
state and loop with $\mu$ in Fig. \ref{fmttamur} (a) as well as
replacing the initial state $\tau$ with $\mu$ and $\nu_\tau$ in the
loop with $\nu_\mu$ in Fig. \ref{fmttamur} (b). The contributions
from the one-loop diagrams with the Higgs LFC coupling are given as
\begin{equation}
\delta a_{\mu1}^{LFC} =
\frac{1}{8 \pi^2 } \, \sum_{\phi = h,~ H,~ A ,~ H^\pm}
|y_{\phi\mu\mu}|^2 r_{\phi\mu} \, f_\phi(r_{\phi\mu}),
\label{amuoneloop}
\end{equation}
where $r_{\phi\mu} = m_\mu^2/m_\phi^2$ and $y_{H^\pm\mu\mu}=
y_{A\mu\mu}$. For $r_{\phi\mu}\ll$ 1, \begin{equation}
f_{h,H}(r) \simeq- \ln r - 7/6,~~
f_A (r) \simeq \ln r +11/6, ~~
f_{H^\pm} (r) \simeq -1/6.
\label{oneloopintegralsapprox3}
\end{equation}
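The sign pattern at one loop follows directly from these approximations: $f_{h,H}>0$ while $f_A<0$ for $r\ll1$. The sketch below assumes a common alignment-limit magnitude $|y_{\phi\mu\mu}|=m_\mu|\kappa_\ell|/v$ for $\phi=H,A,H^\pm$ (the SM-like $h$ piece, being $\kappa_\ell$-independent, is omitted); with a light $A$, the one-loop LFC sum alone comes out negative, so the positive two-loop Barr-Zee piece discussed next is essential.

```python
import math

m_mu, v = 0.1057, 246.0   # GeV

def f_phi(phi, r):
    # Small-r approximations of Eq. (oneloopintegralsapprox3).
    if phi in ('h', 'H'):
        return -math.log(r) - 7.0 / 6.0
    if phi == 'A':
        return math.log(r) + 11.0 / 6.0
    return -1.0 / 6.0                      # charged Higgs

def da_mu_lfc(kappa_l, masses):
    """One-loop LFC muon g-2, Eq. (amuoneloop), with an assumed common
    |y_{phi mu mu}| = m_mu*kappa_l/v (SM-like h piece omitted)."""
    y2 = (m_mu * kappa_l / v) ** 2
    total = 0.0
    for phi, m in masses.items():
        r = (m_mu / m) ** 2
        total += y2 * r * f_phi(phi, r) / (8.0 * math.pi ** 2)
    return total

# Light A, heavier H: the CP-odd term wins and the one-loop sum is negative.
da = da_mu_lfc(100.0, {'H': 300.0, 'A': 126.0, 'H+-': 300.0})
assert f_phi('H', 1e-6) > 0.0 > f_phi('A', 1e-6) and da < 0.0
```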
The muon g-2 can be corrected by the two-loop Barr-Zee diagrams with
the fermions loops by replacing the initial state $\tau$ with $\mu$
in Fig. \ref{fmttamur} (c). Further replacing the fermion loop with
$W$ loop, we obtain the two-loop Barr-Zee diagrams with $W$ loop
which can contribute to muon g-2 for the SM-like Higgs $h$ as the
mediator in the exact alignment limit. Using the well-known
classical formulas in \cite{mua2loop}, the main contributions of the two-loop Barr-Zee diagrams
in the exact alignment limit are given as
\bea \label{mua2} \delta a_{\mu2} &=&-\frac{\alpha
m_\mu}{4\pi^3m_f}\sum_{\phi=h,H,A;f=t,b,\tau} N_f^c~Q_f^2~
y_{\phi\mu\mu}~ y_{\phi ff}~ F_\phi(x_{f\phi})
\nonumber\\
&&+\frac{\alpha m_\mu}{8\pi^3v}\sum_{\phi=h} y_{\phi\mu\mu}~g_{\phi
WW} \left[3F_H\left(x_{W\phi}\right)
+\frac{23}{4} F_A\left(x_{W\phi}\right)
\right.\nonumber\\
&&\left.
+\frac{3}{4} G\left(x_{W\phi}\right) +\frac{m_\phi^2}{2
m_W^2}\left\{
F_H\left(x_{W\phi}\right)-F_A\left(x_{W\phi}\right)
\right\}\right],
\eea where $x_{f\phi}=m_{f}^2/m_\phi^2$, $x_{W\phi}=m_W^2/m_\phi^2$,
$g_{h WW}=1$ and \begin{equation}
G(y)=-\frac{y}{2}\int_0^1 dx \frac{1}{x(1-x)-y}\left[
1-\frac{y}{x(1-x)-y}\log \frac{x(1-x)}{y}
\right].
\end{equation}
The experimental value of the muon g-2 excess is \cite{muaexp}
\begin{equation} \delta a_{\mu} =(26.2\pm8.5) \times 10^{-10}. \end{equation}
{\bf (5) Higgs searches experiments}.
\begin{itemize}
\item[(i)] Non-observation of additional Higgs bosons. We employ
$\textsf{HiggsBounds-4.3.1}$ \cite{hb} to implement the exclusion
constraints from the neutral and charged Higgses searches at LEP,
Tevatron and LHC at 95\% confidence level.
\item[(ii)] The global fit to the 125 GeV Higgs signal data. In the exact alignment
limit, the SM-like Higgs has the same couplings to the gauge bosons
and fermions as the Higgs in the SM, which is favored by the 125 GeV Higgs signal
data. However, in order to explain the $\mu\tau$ excess around 125
GeV, we assume that the $A$ ($H$) is nearly degenerate with the SM-like
Higgs at 125 GeV. Since the mass splitting of $A$ ($H$) and $h$
is smaller than the mass resolution of the detector, $A$ ($H$) can
affect the global fit to the 125 GeV Higgs signal data. Following
the method in \cite{chi}, we perform a global fit to the 125 GeV
Higgs data of 29 channels, which are given in the appendix A. The signal strength for
a channel is defined as \begin{equation}
\mu_i=\sum_{\hat{H}=h,~\phi}\epsilon_{gg\hat{H}}^i
R_{gg\hat{H}}+\epsilon_{VBF\hat{H}}^i
R_{VBF\hat{H}}+\epsilon_{V\hat{H}}^i
R_{V\hat{H}}+\epsilon_{t\bar{t}\hat{H}}^i R_{t\bar{t}\hat{H}}. \end{equation}
where $R_{j}=(\sigma \times BR)_j/(\sigma\times BR)_j^{SM}$
with $j$ denoting the partonic process
$gg\hat{H},~VBF\hat{H},~V\hat{H},$ or $t\bar{t}\hat{H}$.
$\epsilon_{j}^i$ denotes the assumed signal composition of the
partonic process $j$. If $A$ ($H$) is nearly degenerate with the SM-like Higgs, $\phi$ denotes
$A$ ($H$). For an uncorrelated observable $i$, \begin{equation}
\chi^2_i=\frac{(\mu_i-\mu^{exp}_i)^2}{\sigma_i^2}, \end{equation} where
$\mu^{exp}_i$ and $\sigma_i$ denote the experimental central value
and uncertainty for the $i$-channel. We retain the uncertainty
asymmetry in the calculation. For the two correlated observables, we
take \begin{equation} \chi^2_{i,j}=\frac{1}{1-\rho^2}
\left[\frac{(\mu_i-\mu^{exp}_i)^2}{\sigma_i^2}+\frac{(\mu_j-\mu^{exp}_j)^2}{\sigma_j^2}
-2\rho\frac{(\mu_i-\mu^{exp}_i)}{\sigma_i}\frac{(\mu_j-\mu^{exp}_j)}{\sigma_j}\right],
\end{equation} where $\rho$ is the correlation coefficient. We sum over
$\chi^2$ in the 29 channels, and pay particular attention to the
surviving samples with $\chi^2-\chi^2_{\rm min} \leq 6.18$, where
$\chi^2_{\rm min}$ denotes the minimum of $\chi^2$. These samples
correspond to the 95.4\% confidence level region in any
two-dimension plane of the model parameters when explaining the
Higgs data (corresponding to the $2\sigma$ range).
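For reference, the correlated two-observable $\chi^2$ above reduces to the plain sum of squares as $\rho\to0$. A minimal sketch of the per-channel computation:

```python
def chi2_pair(mu, mu_exp, sigma, rho=0.0):
    """Chi^2 for two signal-strength observables with correlation rho."""
    d0 = (mu[0] - mu_exp[0]) / sigma[0]
    d1 = (mu[1] - mu_exp[1]) / sigma[1]
    return (d0**2 + d1**2 - 2.0 * rho * d0 * d1) / (1.0 - rho**2)

# rho = 0 reduces to the uncorrelated sum of the two chi^2 terms.
assert abs(chi2_pair([1.1, 0.9], [1.0, 1.0], [0.2, 0.2]) - 0.5) < 1e-9
```

Anticorrelated deviations ($d_0 d_1<0$) are penalized further for $\rho>0$; the same inputs with $\rho=0.5$ give $\chi^2=1$ instead of $0.5$.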
\item[(iii)] The Higgs decays into $\tau\mu$. In the exact
alignment limit, the $\mu\tau$ excess around 125 GeV is from
$A~(H)\to \tau\mu$, where the $A$ ($H$) is nearly degenerate with the
SM-like Higgs. The width of $A~(H)\to \mu\tau$ is given by \begin{equation}
\Gamma(A~(H)\to \mu\tau)=\frac{
(\rho_{\mu\tau}^2+\rho_{\tau\mu}^2)m_{A~(H)}}{16\pi}. \end{equation}
We take the best fit value of $Br(h\to \mu\tau)=(0.84^{+0.39}_{-0.37})\%$
based on the CMS search for the $h\to \mu\tau$
at the LHC run-I.
Since the $\mu\tau$ excess is assumed to be from the $A$ ($H$), we require the
production rates of $pp\to A~(H)\to \mu\tau$ to vary from
$\sigma(pp\to h)\times 0.1\%$ to $\sigma(pp\to h)\times 1.62\%$.
In addition, the CMS collaboration did not publish the bound on the
heavy Higgs decaying into $\mu\tau$. Ref. \cite{xp1601.02616} gave the
bound on the production rate of $pp\to\phi\to\mu\tau$ by recasting
results from the original $h\to\mu\tau$ analysis of CMS.
\end{itemize}
\subsection{Results and discussions}
In Fig. \ref{cpvscp}, we project the surviving samples on the planes
of $\rho_{\tau\mu}$ versus $\kappa_\ell$ and $\kappa_u$ versus
$\rho_{\tau\mu}$. The lower panels show that $\kappa_u$ is required
to be smaller than 1 due to the constraints of $B$-meson decays and
$R_b$. The upper panels show that there is a strong correlation
between $\rho_{\tau\mu}$ and $\kappa_\ell$, which is mainly due to
the constraints of $Br(\tau\to 3\mu)$ on the product
$|\rho_{\tau\mu}\times \kappa_\ell|$, and obviously affected
by the constraints of $Br(\tau\to\mu\gamma)$. For example, in the case of $m_A=126$ GeV,
$|\rho_{\tau\mu}|$ is required to be smaller than 0.06 for
$\kappa_\ell=-10$.
\begin{figure}[tb]
\epsfig{file=fig4a.ps,height=7.0cm}
\epsfig{file=fig4b.ps,height=7.0cm}
\epsfig{file=fig4c.ps,height=7.1cm}
\epsfig{file=fig4d.ps,height=7.0cm}
\vspace{-0.5cm} \caption{The surviving samples projected on the
planes of $\rho_{\tau\mu}$ versus $\kappa_\ell$ and $\kappa_u$
versus $\rho_{\tau\mu}$. The circles (green) are allowed by the
"pre-muon g-2" constraints: theoretical constraints, precision
electroweak data, $R_b$, $B$ meson decays, $\tau$ decays, the
exclusion limits of Higgses, and the 125 GeV Higgs data; the pluses
(red) allowed by the pre-muon g-2 and muon g-2 excess; the bullets
(black) and triangles (blue) allowed by the pre-muon g-2, the
muon g-2 anomaly and $\mu\tau$ excess around 125 GeV, and the
triangles (blue)
further allowed by the experimental constraints of the heavy Higgs
decaying into $\mu\tau$.} \label{cpvscp}
\end{figure}
In the case of $m_A=126$ GeV, there are two different regions where
the muon g-2 anomaly can be explained. (i) $\rho_{\tau\mu}=0$ and
$|\kappa_\ell|>100$: The Higgs LFV couplings are
absent due to $\rho_{\tau\mu}=0$, and the muon g-2 can only be corrected via the diagrams with
the Higgs LFC couplings. Without the contributions of top quark
loops, the contributions of the CP-even (CP-odd) Higgs to muon g-2
are negative (positive) at the two-loop level and positive
(negative) at one-loop level. As $m^2_f/m^2_\mu$ could easily
overcome the loop suppression factor $\alpha/\pi$, the two-loop
contributions may be larger than one-loop ones. Therefore, the muon g-2 can obtain the
positive contributions from $A$ loop and negative contributions from
$H$ loop. For a sufficiently large mass splitting between $H$ and $A$, the muon g-2
can be sizably enhanced by the diagrams with the large Higgs LFC
couplings. The corresponding $\kappa_u$ is required to be
smaller than 0.2 due to the constraints of the search for $gg\to A\to \tau\bar{\tau}$ at the LHC, see the pluses (red) with
$\rho_{\tau\mu}=0$ shown in the lower-left panel of Fig. \ref{cpvscp}. (ii) $0.04<|\rho_{\tau\mu}|<0.18$ and
$-9<\kappa_\ell<3$: The muon g-2 can be corrected by the diagrams with
the Higgs LFV interactions and the Higgs LFC interactions, and the contributions of the former dominate over those of the latter due
to the small $\mid\kappa_\ell\mid$. For the diagrams with the Higgs LFV couplings, the
muon g-2 obtains the positive contributions from $A$ loop and
negative contributions from $H$ loop due to
$\rho_{\mu\tau}=-\rho_{\tau\mu}$. For a sufficiently large mass splitting between
$H$ and $A$, the muon g-2 can be sizably enhanced by the diagrams
with the large Higgs LFV couplings, and slightly corrected by
those with the Higgs LFC couplings.
In the case of $m_H=126$ GeV, the contributions of the CP-even
Higgs dominate over those of the CP-odd Higgs due to $m_A > m_H$.
The muon g-2 obtains
negative contributions from the diagrams with the Higgs LFC couplings and
positive contributions from the diagrams with the LFV couplings due
to $\rho_{\mu\tau}=\rho_{\tau\mu}$. Therefore, a proper
$\rho_{\tau\mu}$ is required to explain the muon g-2 excess,
$0.04<|\rho_{\tau\mu}|<0.18$ and $-3<\kappa_\ell<8$ as shown in the
right panels of Fig. \ref{cpvscp}.
\begin{figure}[tb]
\epsfig{file=fig5a.ps,height=6.5cm}
\epsfig{file=fig5b.ps,height=6.5cm}
\vspace{-0.5cm} \caption{Same as Fig. \ref{cpvscp}, but $\kappa_u$
versus $m_H$ and $\kappa_u$ versus $m_A$.} \label{csbigh}
\end{figure}
In the case of $m_A=126$ GeV, $0.02<\kappa_u<0.1$ is favored by
the $\mu\tau$ excess around 125 GeV and allowed by
the experimental constraints of the heavy Higgs decaying into $\mu\tau$. In the case of $m_H=126$ GeV, $0.03<\kappa_u<0.15$
is favored by the $\mu\tau$ excess around 125 GeV, but some samples with a relatively large
$\kappa_u$ are excluded by the experimental constraints of the heavy Higgs decaying into $\mu\tau$. As
is well known, the effective $ggA$ coupling is larger than the $ggH$ coupling for
the same Yukawa couplings and Higgs masses, since the form factor of the CP-odd
Higgs loop is larger than that of the CP-even Higgs loop. Thus, $\kappa_u$
in the case of $m_H=126$ GeV is required to be larger than that in the case of
$m_A=126$ GeV in order to obtain the correct $\mu\tau$ excess around 125 GeV.
$\sigma(pp\to A \to \mu\tau)$ in the case of $m_H=126$ GeV ($A$ as
the heavy Higgs) is much larger than $\sigma(pp\to H \to \mu\tau)$
in the case of $m_A=126$ GeV ($H$ as the heavy Higgs) due to the
enhancements of the large top Yukawa coupling and the form factor of
the CP-odd Higgs. Therefore, the experimental data of the heavy
Higgs decaying into $\mu\tau$ give stronger constraints on the case
of $m_H=126$ GeV than on the case of $m_A=126$ GeV.
In Fig. \ref{csbigh}, we project the surviving samples on the plane
of $\kappa_u$ versus $m_A$ ($m_H$) in the case of $m_H=126$ GeV ($m_A=126$ GeV). The upper
bound of $\sigma(pp\to A/H \to \mu\tau)$ is taken from Ref.
\cite{xp1601.02616}, which is obtained by recasting results from the
original CMS $h\to\mu\tau$ analysis for the heavy Higgs in the range
of 125 GeV to 275 GeV. From the right panel, for the case of $m_H=126$ GeV
we find that the experimental data of the
heavy Higgs decaying into $\mu\tau$ can exclude most samples in the
ranges of $0.07<\kappa_u<0.15$ and $m_A<230$ GeV, which can explain
the excesses of muon g-2 and $\mu\tau$ around 125 GeV. For $m_A>230$
GeV, all the surviving samples which are consistent with the
$\mu\tau$ excess around 125 GeV are allowed by the experimental
constraints of the heavy Higgs decaying into $\mu\tau$. As discussed before, the left panel shows that
all the surviving samples are allowed by the experimental constraints of the
heavy Higgs decaying into $\mu\tau$ in the case of $m_A=126$ GeV.
Note that there is a $\kappa_\ell$ asymmetry in the regions of
$0.04<|\rho_{\tau\mu}|<0.18$, $-9<\kappa_\ell<3$ and
$0.02<\kappa_u<0.1$ for $m_A=126$ GeV where the muon g-2 can be
explained. The main reason stems from the constraints of
$\tau\to\mu\gamma$. In the above regions, the top quark can give
sizable contributions to $\tau\to\mu\gamma$ via the $A_{2L}$ and
$A_{2R}$ terms shown in Eq. (\ref{tamura2}), which have
destructive (constructive) interferences with the $A_{1L0}$ of Eq.
(\ref{tamura1}) and $A_{1R0}$ of Eq. (\ref{tamura1r})
induced by the one-loop contributions of $\tau$ for $\kappa_\ell<0$
($\kappa_\ell>0$). Therefore, $|\kappa_\ell|$ for
$\kappa_\ell<0$ is allowed to be much larger than that for
$\kappa_\ell>0$. A similar reason applies to the $\kappa_\ell$ asymmetry in
the case of $m_H=126$ GeV, but with destructive (constructive)
interference for $\kappa_\ell>0$ ($\kappa_\ell<0$).
In Fig. \ref{cpmh}, we project the surviving samples on the planes
of $\rho_{\tau\mu}$ versus $m_H$ and $\rho_{\tau\mu}$ versus $m_A$
in the cases of $m_A=126$ GeV and $m_H=$ 126 GeV, respectively. We
find that $\rho_{\tau\mu}$ is sensitive to the mass of the heavy Higgs:
its absolute value decreases with increasing heavy-Higgs
mass in order to explain the muon g-2 anomaly and the $\mu\tau$ excess around
125 GeV. As discussed above, the contributions of the $H$ loops
and $A$ loops to the muon g-2 have opposite signs.
Therefore, as the mass splitting between $H$ and $A$ decreases,
the cancellation between the contributions of the $H$ and $A$ loops
becomes sizable, so that a large absolute value of $\rho_{\mu\tau}$
is required to enhance the muon g-2.
\begin{figure}[tb]
\epsfig{file=fig6a.ps,height=6.5cm}
\epsfig{file=fig6b.ps,height=6.5cm}
\vspace{-0.5cm} \caption{Same as Fig. \ref{cpvscp}, but
$\rho_{\tau\mu}$ versus $m_H$ and $\rho_{\tau\mu}$ versus $m_A$.}
\label{cpmh}
\end{figure}
\begin{figure}[tb]
\epsfig{file=fig7a.ps,height=6.5cm}
\epsfig{file=fig7b.ps,height=6.5cm}
\vspace{-0.5cm} \caption{Same as Fig. \ref{cpvscp}, but $m_{H^\pm}$
versus $m_H$ and $m_{H^\pm}$ versus $m_A$.} \label{hmass}
\end{figure}
In Fig. \ref{hmass}, we project the surviving samples on the planes
of $m_{H^{\pm}}$ versus $m_H$ and $m_{H^{\pm}}$ versus $m_A$ in the
cases of $m_A=126$ GeV and $m_H=$ 126 GeV, respectively. We find
that the mass splitting of $H^{\pm}$ and
$H$ ($A$) decreases with increasing $m_{H^{\pm}}$ in the case of
$m_A=126$ GeV ($m_H=$ 126 GeV), which is due to the constraints of
the oblique parameters and $\delta \rho$. However, for $m_{H^{\pm}}<130$
GeV, $m_H$ ($m_A$) is allowed to be as large as 625 GeV in the case
of $m_A=126$ GeV ($m_H=$ 126 GeV).
In this paper we focus on the exact alignment limit. If the
alignment limit is approximately realized, the $\mu\tau$ excess can
be from the SM-like Higgs ($h$) in addition to the $H$ or $A$ around
125 GeV. Therefore, the upper limits of $\kappa_u$ become more
stringent. When the $\mu\tau$ excess is mainly from $h$, the lower
limit of $\kappa_u$ will disappear since the $ht\bar{t}$ coupling
hardly changes with $\kappa_u$, and the $A(H)t\bar{t}$ coupling is
(nearly) proportional to $\kappa_u$. In addition, the upper limit of
$\rho_{\mu\tau}$ can become more stringent for a proper deviation
from the alignment limit. For example, for
$\sin(\beta-\alpha)=0.996$, $Br(h\to\mu\tau)<1.62\%$ will give an
upper limit of $|\rho_{\mu\tau}|<0.0408$, which is much
smaller than that in the exact alignment limit. In the exact
alignment limit, the widths of $H\to hh,~WW^{(*)},~ZZ^{(*)}$ and
$A\to hZ$ vanish, and they increase with decreasing
$|\sin(\beta-\alpha)|$. Therefore, the searches for $H\to
hh,~WW^{(*)},~ZZ^{(*)}$ and $A\to hZ$ can be used to probe the
deviation from the alignment limit. These signatures refer to the $H$ or $A$ whose mass is not near 125 GeV.
Otherwise, its signal would be indistinguishable from that coming from the SM-like light Higgs,
and even $H\to hh$ ($A\to hZ$) is absent for $H$ ($A$) near 125 GeV.
Some similar studies have been
done in the singlet extension of the SM \cite{150102234}.
In the previous studies, the $\mu\tau$ excess is assumed to be from
the SM-like Higgs $h$. In this paper we discuss another interesting
scenario where the $\mu\tau$ excess is from either $H$ or $A$ near
the observed Higgs signal. There is no $AVV$ coupling due to
CP conservation. The $HVV$ coupling is absent and the $hVV$ coupling is the same as the SM value
in the exact alignment limit.
Therefore, the two scenarios can be distinguished by observing the
$\mu\tau$ signal via the vector boson fusion production process at
the LHC with high integrated luminosity. In other words, if the
$\mu\tau$ signal excess is observed in the gluon fusion process and
not observed in the vector boson fusion process, the scenario in
this paper will be strongly favored. Even when $\sin(\beta-\alpha)$
deviates from the alignment limit sizably, the production rates of
$\mu\tau$ signal via the gluon fusion and vector boson fusion can
still have different correlations in the two different
scenarios.
\section{Conclusion}
In this paper we examine the muon g-2 anomaly and the $\mu\tau$
excess around 125 GeV in the exact alignment limit of the 2HDM. In this scenario, the
SM-like Higgs couplings to the SM particles are
the same as the Higgs couplings in the SM at the tree level, and the tree-level LFV coupling $h\mu\tau$ is
absent. We assume the $\mu\tau$ signal excess observed by CMS comes
from either $H$ or $A$, which is nearly degenerate with the
SM-like Higgs at 125 GeV. After imposing various relevant
theoretical constraints and experimental constraints from precision
electroweak data, $B$-meson decays, $\tau$ decays and Higgs
searches, we obtain the following observations:
For the case of $m_A=126$ GeV, the muon g-2 anomaly can be explained
in two different regions: (i) $\rho_{\tau\mu}=0$ and
$|\kappa_\ell|>100$; (ii) $0.04<|\rho_{\tau\mu}|<0.18$
($|\rho_{\tau\mu}|$ is sensitive to $m_H$) and $-9<\kappa_\ell<3$.
Further, the $\mu\tau$ excess around 125 GeV can be explained in the
region (ii) with $0.02<\kappa_u<0.1$, where all the surviving samples
are allowed by the experimental constraints from searches for the heavy Higgs decaying into $\mu\tau$.
For the case of $m_H=126$ GeV, the muon g-2 anomaly excludes the
region of $\rho_{\tau\mu}=0$, and can be only explained in the
region with a proper $\rho_{\tau\mu}$. The muon g-2 anomaly
and $\mu\tau$ excess favor $0.04<|\rho_{\tau\mu}|<0.18$ ($|\rho_{\tau\mu}|$ is sensitive to $m_A$),
$-3<\kappa_\ell<8$ and
$0.03<\kappa_u<0.15$. However, most samples in the ranges
of $0.07<\kappa_u<0.15$ and $m_A<230$ GeV are further excluded by the experimental constraints from searches for the heavy
Higgs decaying into $\mu\tau$.
\section*{Acknowledgment}
This work has been supported by the National Natural Science
Foundation of China under grant Nos. 11575152, 11375248.
\section{Introduction}
\label{sec:intro}
Unexpected emergencies can cause substantial loss of both life and property if assistance is not available in a timely manner. Recent studies have sought solutions for more efficient emergency response (ER) using computational techniques~\cite{caragea2011classifying,vieweg2012situational,imran2015processing}. Among these works, social media is acknowledged as a promising venue for mining important messages for ER: since some people seek help by posting messages on social media as a crisis unfolds, these messages may contain critical information of relevance to emergency responders~\cite{imran2015processing,McCreadie2019,McCreadie2020}.
This motivated the Incident Streams (IS) track~\cite{McCreadie2019,McCreadie2020}, which challenges the community to explore effective approaches for identifying important messages from user-posted streams on social media during crises. The IS track is a research challenge consisting of two main tasks. The first asks participating systems to classify a stream of crisis-related tweets into humanitarian-aid-related categories, and is known as the multi-label information types (ITs) classification task. IS comprises a total of 25 information types, defined as categories of possible aid needs in a crisis such as requesting donations, calls for search and rescue, reports of weather conditions, etc. The 25 ITs are further divided into two subcategories; 6 are defined as ``actionable'' ITs (e.g., search and rescue) and the remaining 19 are ``non-actionable'' ones (e.g., reporting weather)\footnote{For a full list of the ITs, see the official website at~\url{http://dcs.gla.ac.uk/\~richardm/TREC\_IS/}}. The second task is known as the priority estimation task. It requires participants to estimate the criticality of those tweets that have been classified into ITs. This criticality is represented by a numeric value from 0 to 1, indicating least to greatest importance.
Having participated in this track since 2019 (the second iteration of the IS track), our system has evolved based on the experience learnt from our prior participations in past TREC-IS editions~\footnote{The IS track normally runs two editions every year and a new test set is annotated and added to the training set after each edition.}. Unlike previous IS editions~\cite{McCreadie2019,McCreadie2020}, TREC-IS 2021 initiated an online leaderboard for participants~\footnote{\url{https://trecis.github.io/}}. It is noted that the leaderboard only reports the performance of participating runs in the 2021A Edition where the test set is partially annotated within events based on pooling by priority (the submitted test tweets are predicted by ITs and sorted by priority score within each event). In the 2021B Edition, the test set comprises the tweets of more annotated events and deeper pooling (new judgements). Hence, the 2021B Edition acted as an enhanced evaluation for the participating runs that had been submitted to the 2021A leaderboard. Given the timeliness of performance feedback from the leaderboard, we explored a wide range of approaches including a Na\"ive Bayes classifier using contextual sentence embeddings as the features, multi-task learning approaches with text augmentations, and an ensemble technique. We found our runs perform consistently well in both A and B editions and in particular our multi-task learning runs and ensemble runs perform the best in many metrics amongst all participating runs. However, the results did not show that text augmentations can bring overall improvements.
\section{Related Work}
Since the launch of TREC-IS, many works have been produced on the topic of crisis tweet classification and priority estimation (CTC-PE). \citet{wangcmu2019} applied Na\"ive Bayes, Support Vector Machine (SVM), Random Forest, and the ensemble of these models with hand-crafted features for CTC-PE. \citet{choi2018cbnu} applied SVM and deep learning models which combine class activation mapping with one-shot learning in convolutional neural networks for CTC-PE. \citet{miyazaki2019label} applied a BiLSTM model for CTC-PE by incorporating the hierarchical structure of labels into the model. \citet{congcong2020cls} applied a BiLSTM model along with pre-trained ELMo embeddings and trainable embeddings as the input features for CTC-PE. \citet{wang2021} fine-tuned BERT~\cite{BERT2018} in a multi-task learning manner for CTC-PE while \citet{wang2021multi} extended the multi-task learning approach to a sequence-to-sequence transformer-based model T5~\cite{raffel2019exploring}. To alleviate the class imbalanced problem, \citet{sharmaimproving2020} applied synonym replacements as well as crisis image labels to augment the original training data. Other techniques such as downsampling the training data or generating new examples via GPT-2 are also found in the literature~\cite{congcong2020cls,hepburn8university}.
\section{Methods and Experiments}
\label{sec:method}
Table~\ref{tab:runs-overview} summarises the runs we submitted to TREC-IS 2021. The major techniques used in the runs are described as follows.
\textbf{ML run}: In this run, we convert each tweet to a representation via pre-trained sentence embedding (SBERT) models~\cite{reimers-2019-sentence-bert}. Having tested multiple combinations of the publicly-available pre-trained variants of SBERT~\footnote{\url{https://huggingface.co/sentence-transformers}}, we finally choose \texttt{all-mpnet-base-v2} and \texttt{paraphrase-xlm-r-multilingual-v1} to embed the tweets, where each tweet's representation is the concatenation of the outputs of the two models. Similarly, in choosing the downstream classifier, we exhaustively searched over a list of candidates including SVC, logistic regression, decision tree and random forest. We finally used GaussianNB as the downstream classifier as it gave the best result on the development set. Here the classifier handles only IT prediction, whereas the priority is simply mapped from the predicted ITs (an IT's mapped priority score is the average priority of all tweets belonging to this IT in the training set). In this approach, priority is assumed to be a function of the IT.
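The IT-to-priority mapping in this run can be sketched as follows. This is a minimal illustration: the rule for combining the mapped scores of several predicted ITs (here, taking the maximum) and the default of $0$ for tweets with no predicted IT are our assumptions, and the function names are illustrative.

```python
from collections import defaultdict

def build_priority_map(train_labels, train_priorities):
    """Map each IT to the average priority of the training tweets labelled
    with it (a tweet may carry several ITs)."""
    sums, counts = defaultdict(float), defaultdict(int)
    for its, priority in zip(train_labels, train_priorities):
        for it in its:
            sums[it] += priority
            counts[it] += 1
    return {it: sums[it] / counts[it] for it in sums}

def estimate_priority(predicted_its, priority_map, default=0.0):
    """Priority of a test tweet derived purely from its predicted ITs."""
    scores = [priority_map[it] for it in predicted_its if it in priority_map]
    return max(scores) if scores else default
```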
\textbf{Multi-task and ensemble run}. Similar to~\citet{wang2021}, we train a single model for both the downstream IT classification and priority estimation tasks in a multi-task learning manner. In simple terms, we fine-tune a pre-trained DeBERTa model~\cite{he2020deberta} jointly on the two tasks by adding a multi-label classification head and a regression head on top of the model. The model is optimised on a linear combination of the classification cross-entropy loss and the MSE regression loss. By doing so, the model is capable of making predictions on both tasks for the test tweets with only one forward pass at inference time. Based on this idea, we train multiple individual models varying in model size and training data size. Ultimately the individual models consist of a fine-tuned \texttt{deberta-base}, \texttt{deberta-base} with Easy Data Augmentation (EDA)~\cite{wei2019eda} and \texttt{deberta-large}. EDA is used in our system to augment the training data so that every IT has at least 500 examples; we apply this augmentation because the original training data is heavily class-imbalanced. Moreover, we adopt the ensemble approach from~\citet{wang2021} to leverage the predictions of the individual models for IT classification and priority estimation. The ensemble approach is simple: the union of the ITs predicted by the individual models is the final IT prediction, and the highest of the individual priority predictions is the final priority score for each test tweet.
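The ensemble rule itself is simple enough to state in a few lines (a sketch with an illustrative function name, not the exact implementation):

```python
def combine_predictions(per_model):
    """Combine per-model outputs for one tweet.

    per_model: one (predicted_ITs, priority) pair per individual model.
    Final ITs = union of the individual label sets; final priority = the
    highest individual priority estimate.
    """
    labels = set().union(*(its for its, _ in per_model))
    priority = max(p for _, p in per_model)
    return labels, priority
```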
\textbf{Ensemble run with post-processing}. Among the pre-defined 25 ITs~\footnote{\url{http://dcs.gla.ac.uk/\~richardm/TREC\_IS/}}, there is an IT called ``Irrelevant''. The multi-label ITs predicted by the above ensemble approach can contain this class along with other ITs. However, a tweet that is classified as ``Irrelevant'' cannot also be labelled with other ITs. We thus adopt a post-processing step to handle this issue. For any tweet with this type of prediction, we compare the prediction probability for ``Irrelevant'' with the probabilities of other ITs. The tweet is assigned ``Irrelevant'' if its probability for ``Irrelevant'' is greater than all the individual probabilities of the other ITs. Otherwise it is predicted to be one of the other ITs. As a result, the tweet's priority score also becomes $0$ if it is considered to be ``Irrelevant''.
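A sketch of this post-processing step follows. The helper name, the decision threshold of $0.5$, and the choice to compare ``Irrelevant'' only against the co-predicted ITs are our assumptions for illustration.

```python
def resolve_irrelevant(probs, threshold=0.5):
    """probs: dict mapping each IT to its predicted probability for one tweet.

    ITs above `threshold` form the multi-label prediction. If 'Irrelevant'
    co-occurs with other ITs, it either wins outright (higher probability
    than every other predicted IT) or is dropped. A tweet left with only
    'Irrelevant' gets priority 0; otherwise the model's score is kept (None
    here as a placeholder).
    """
    labels = {it for it, p in probs.items() if p >= threshold}
    if "Irrelevant" in labels and len(labels) > 1:
        best_other = max(probs[it] for it in labels if it != "Irrelevant")
        labels = {"Irrelevant"} if probs["Irrelevant"] > best_other else labels - {"Irrelevant"}
    priority = 0.0 if labels == {"Irrelevant"} else None
    return labels, priority
```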
\textbf{Direct-Generation Augmentation (DGA) and Noise Label Annealing (NLA)}. Aside from the EDA augmentation described above, we also explored other augmentation techniques. Inspired by~\citet{wang2021towards}, who applied large pre-trained language models to generate training data without any human annotation or model training but through carefully-crafted prompts, we utilise a similar approach using a small number of examples as the prompt. We choose the pre-trained checkpoint \texttt{gpt-neo-2.7B}~\footnote{\url{https://huggingface.co/EleutherAI/gpt-neo-2.7B}} as the generation model and the prompt template is formulated as follows:
\begin{small}
\vspace{0.2cm}
\colorbox{lightgray}{Tweet for help in disaster}\\
\colorbox{lightgray}{Title: \{IT name\}} \\
\colorbox{lightgray}{Content: \{Tweet text\}}
\end{small}
\vspace{0.2cm}
The template constructs a stream of natural language, starting with a task description~\footnote{The task description is carefully chosen based on our preliminary experiments evaluated on the development set.}, followed by the title and content fields, which are replaced by the IT name and the tweet text respectively. We refer to this as Direct-Generation Augmentation (DGA). In our DGA-based runs, we sampled two examples of non-target ITs from the training data to construct the prompt. To generate a new example for a target IT, we omit the textual part of the content so that the model learns from the prompt (the two sampled non-target examples) to complete the content part of the target IT. Finally, we used DGA to augment the training data, thus ensuring that every IT has at least $1000$ examples. One challenge associated with this kind of augmentation is that the generated texts may not align with their intended labels; such noisy, label-incompatible data can harm downstream task performance. We adopt a strategy called Noisy Label Annealing (NLA), introduced in~\citet{wang2021towards}, to filter out noisy training signals as training progresses. The general idea is that we check the predictions of the augmented training examples at the end of each epoch of downstream model training and remove an example if the model disagrees with its label with high confidence.
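The few-shot prompt construction can be sketched as follows (illustrative function and IT names; the two demonstrations are sampled from non-target ITs as described above, and the final block leaves the content empty for the generation model to complete):

```python
def build_prompt(shots, target_it):
    """shots: (IT name, tweet text) pairs sampled from non-target ITs.

    Each demonstration follows the template
        Tweet for help in disaster / Title: <IT> / Content: <tweet>,
    and the final block truncates after 'Content:' so the language model
    generates a new tweet for `target_it`.
    """
    block = "Tweet for help in disaster\nTitle: {}\nContent: {}"
    demos = [block.format(it, text) for it, text in shots]
    demos.append(block.format(target_it, "").rstrip())
    return "\n\n".join(demos)
```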
Regarding model training, we remove approximately $10\%$ of the original training data to use as the development set. We fine-tune the multi-task learning model for $10$ epochs and select the best checkpoint based on the IT macro-F1 score on the development set. The model's parameters are tuned on batches (batch size $=16$) of training data using Adam~\cite{kingma2014adam} as the optimizer with a linear warm-up scheduler that increases the learning rate from $0$ to $5e-5$ within the first $10\%$ of total training steps and then decays it linearly to $0$. Apart from these, the remaining hyper-parameters are left at the defaults of the transformers library~\cite{Wolf2019HuggingFacesTS}.
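The learning-rate schedule described above corresponds to the following step-wise rule (a sketch; \texttt{lr\_at} is an illustrative name):

```python
def lr_at(step, total_steps, peak=5e-5, warmup_frac=0.1):
    """Linear warm-up from 0 to `peak` over the first 10% of training
    steps, then linear decay back to 0 at the final step."""
    warmup = max(1, int(total_steps * warmup_frac))
    if step < warmup:
        return peak * step / warmup
    return peak * (total_steps - step) / (total_steps - warmup)
```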
\section{Results and Discussions}
\begin{table*}[t]
\small
\centering
\begin{tabular}{ll}
\hline
Run names & Description \\
\hline
ucdcs-strans.nb & \textbf{ML run} of using SBERT as the fixed features and GaussianNB as the downstream classifier \\
ucdcs-run1 & \textbf{Multi-task run} using deberta-base \\
ucdcs-run2 & \textbf{Multi-task run} using deberta-base with \textbf{EDA augmentation} \\
ucdcs-run3 & \textbf{Multi-task run} using deberta-large \\
ucdcs-mtl.ens (run4) & \textbf{Ensemble run} of run 1, 2 and 3 \\
ucdcs-mtl.ens.new & \textbf{Ensemble run} with \textbf{post processing} \\
ucdcs-mtl.fta & \textbf{Multi-task run} of deberta-base with \textbf{direct-generation augmentation (DGA)} \\
ucdcs-mtl.fta.nla & \textbf{Multi-task run} of deberta-base with \textbf{DGA} plus \textbf{noise label annealing (NLA)} \\
ucdcs-mtl.ens.fta & \textbf{Ensemble run} of run1, 3 and mtl.fta.nla \\
\hline
\end{tabular}
\caption{Overview of UCD-CS runs at TREC-IS 2021. Details of the techniques in bold are elaborated in Section~\ref{sec:method}.}
\label{tab:runs-overview}
\end{table*}
\begin{table*}[]
\centering
\small
\begin{tabular}{lllllllll}
\hline
& \textbf{nDCG} & \textbf{IT F1 {[}A{]}} & \textbf{IT F1 {[}All{]}} & \textbf{IT Acc.} & \textbf{Pri F1 {[}A{]}} & \textbf{Pri F1 {[}All{]}} & \textbf{Pri R {[}A{]}} & \textbf{Pri R {[}All{]}} \\
\hline
ucdcs-strans.nb & 0.4297 & 0.2423 & 0.2695 & 0.8294 & 0.1998 & 0.1988 & 0.147 & 0.1514 \\
ucdcs-run1 & \textbf{0.6115} & 0.215 & 0.2951 & 0.8837 & 0.3032 & 0.3068 & 0.2592 & 0.297 \\
ucdcs-run2 & 0.5848 & 0.2215 & 0.2984 & 0.8835 & 0.25 & 0.2781 & 0.2305 & 0.2748 \\
ucdcs-run3 & 0.6051 & 0.2391 & 0.31 & 0.8852 & 0.272 & 0.3066 & 0.3112 & 0.3325 \\
ucdcs-mtl.ens (run4) & 0.5907 & 0.2579 & \textbf{0.3211} & 0.8646 & 0.3052 & 0.3125 & 0.325 & 0.3416 \\
ucdcs-mtl.ens.new & 0.5951 & 0.2627 & 0.3205 & 0.8686 & 0.305 & \textbf{0.3211} & 0.2892 & 0.3089 \\
ucdcs-mtl.fta & 0.589 & 0.1986 & 0.2793 & \textbf{0.8902} & 0.2769 & 0.2807 & 0.2471 & 0.3001 \\
ucdcs-mtl.fta.nla & 0.529 & 0.2007 & 0.2751 & 0.8815 & 0.262 & 0.281 & 0.1721 & 0.2193 \\
ucdcs-mtl.ens.fta & 0.5755 & 0.1592 & 0.2597 & 0.8034 & \textbf{0.306} & 0.3141 & 0.2786 & 0.2855 \\
\hline
med & 0.5695 & 0.206 & 0.2823 & 0.8827 & 0.2113 & 0.2175 & 0.1728 & 0.2099 \\
max & \textbf{0.6115} & 0.2815 & \textbf{0.3211} & \textbf{0.8902} & \textbf{0.306} & \textbf{0.3211} & 0.4349 & 0.3585 \\
\hline
\end{tabular}
\caption{The performance of UCD-CS runs at TREC-IS 2021 based on results using only the judgments in 2021A Edition. The figures in \textbf{bold} indicate the best scores across all participating runs. The med and max rows present the median and maximum scores of each metric respectively across all participating runs.}
\label{tab:2021a-results}
\end{table*}
\begin{table*}[]
\centering
\small
\begin{tabular}{lllllllll}
\hline
& \textbf{nDCG} & \textbf{IT F1 {[}A{]}} & \textbf{IT F1 {[}All{]}} & \textbf{IT Acc.} & \textbf{Pri F1 {[}A{]}} & \textbf{Pri F1 {[}All{]}} & \textbf{Pri R {[}A{]}} & \textbf{Pri R {[}All{]}} \\
\hline
ucdcs-strans.nb & 0.338 & 0.1861 & 0.2395 & 0.8557 & 0.1733 & 0.1688 & 0.0425 & 0.1003 \\
ucdcs-run1 & 0.4499 & 0.2177 & 0.247 & 0.8966 & 0.2376 & 0.2566 & 0.1547 & 0.2525 \\
ucdcs-run2 & 0.4361 & 0.2087 & 0.2433 & 0.8947 & 0.242 & 0.2528 & 0.207 & 0.2622 \\
ucdcs-run3 & 0.4583 & 0.2218 & 0.2539 & 0.8964 & 0.2543 & 0.2756 & 0.221 & 0.2571 \\
ucdcs-mtl.ens (run4) & 0.4521 & 0.2361 & 0.2591 & 0.8708 & 0.2753 & 0.2582 & \textbf{0.2302} & \textbf{0.2952} \\
ucdcs-mtl.ens.new & 0.4555 & \textbf{0.251} & \textbf{0.2623} & 0.8753 & 0.2783 & 0.2703 & 0.2116 & 0.2604 \\
ucdcs-mtl.fta & 0.4464 & 0.1958 & 0.2369 & \textbf{0.9067} & 0.2309 & 0.2333 & 0.2044 & 0.2454 \\
ucdcs-mtl.fta.nla & 0.4069 & 0.1687 & 0.2187 & 0.8945 & 0.2588 & 0.2635 & 0.1861 & 0.2122 \\
ucdcs-mtl.ens.fta & 0.4448 & 0.0946 & 0.1889 & 0.8113 & \textbf{0.2798} & 0.2724 & 0.2149 & 0.2629 \\
\hline
med & 0.4272 & 0.1842 & 0.233 & 0.8947 & 0.2107 & 0.2031 & 0.1495 & 0.1993 \\
max & 0.4791 & \textbf{0.251} & \textbf{0.2623} & \textbf{0.9067} & \textbf{0.2798} & 0.2756 & \textbf{0.2302} & \textbf{0.2952} \\
\hline
\end{tabular}
\caption{The performance of UCD-CS runs at TREC-IS 2021 based on results using only the judgments in 2021B Edition. The figures in \textbf{bold} indicate the best scores across all participating runs. The med and max rows present the median and maximum scores of each metric respectively across all participating runs.}
\label{tab:2021b-results}
\end{table*}
\begin{table*}[]
\centering
\footnotesize
\begin{tabular}{lllllllll}
\hline
& \textbf{nDCG} & \textbf{IT F1 {[}A{]}} & \textbf{IT F1 {[}All{]}} & \textbf{IT Acc.} & \textbf{Pri F1 {[}A{]}} & \textbf{Pri F1 {[}All{]}} & \textbf{Pri R {[}A{]}} & \textbf{Pri R {[}All{]}} \\
\hline
ucdcs-strans.nb & 0.3368 & 0.2083 & 0.2575 & 0.8474 & 0.1959 & 0.1712 & 0.1096 & 0.1417 \\
ucdcs-run1 & 0.4727 & 0.2433 & 0.2772 & 0.8926 & 0.2657 & 0.2632 & 0.259 & 0.2888 \\
ucdcs-run2 & 0.4569 & 0.2326 & 0.2753 & 0.8911 & 0.2536 & 0.2524 & 0.1995 & 0.2686 \\
ucdcs-run3 & 0.4707 & 0.2538 & 0.286 & 0.893 & 0.253 & 0.2694 & 0.2741 & 0.3053 \\
ucdcs-mtl.ens (run4) & 0.4617 & 0.267 & 0.2923 & 0.8685 & 0.2817 & 0.2623 & 0.2886 & \textbf{0.3182} \\
ucdcs-mtl.ens.new & 0.4643 & \textbf{0.2784} & \textbf{0.2946} & 0.8728 & \textbf{0.2864} & \textbf{0.2734} & 0.2748 & 0.2827 \\
ucdcs-mtl.fta & 0.463 & 0.2159 & 0.2647 & \textbf{0.9016} & 0.2497 & 0.2361 & 0.2779 & 0.2834 \\
ucdcs-mtl.fta.nla & 0.4193 & 0.1936 & 0.2488 & 0.8907 & 0.2727 & 0.2609 & 0.265 & 0.2339 \\
ucdcs-mtl.ens.fta & 0.4515 & 0.1131 & 0.217 & 0.8073 & 0.2852 & 0.2724 & 0.2479 & 0.2665 \\
\hline
med & 0.4381 & 0.2008 & 0.26 & 0.8911 & 0.2086 & 0.2044 & 0.2087 & 0.2431 \\
max & 0.4904 & \textbf{0.2784} & \textbf{0.2946} & \textbf{0.9016} & \textbf{0.2864} & \textbf{0.2734} & 0.3072 & \textbf{0.3182} \\
\hline
\end{tabular}
\caption{The performance of UCD-CS runs at TREC-IS 2021 based on results using the judgments in 2021A and 2021B Editions. The figures in \textbf{bold} indicate the best scores across all participating runs. The med and max rows present the median and maximum scores of each metric respectively across all participating runs.}
\label{tab:all-results}
\end{table*}
In order to measure a system's performance from different perspectives, the IS track defines multiple metrics. The metrics can be broadly divided into two categories: IT classification measurements and priority estimation measurements. They are described as follows:
\begin{itemize}
\item \textbf{IT classification measurements}: To measure the performance of ITs classification, three metrics are defined. They are IT F1 [A], IT F1 [All] and IT Acc., referring to the F1 score of only actionable ITs classification, the F1 score and the accuracy of all 25 ITs classification respectively.
\item \textbf{Priority estimation measurements}: There are five metrics related to the evaluation of priority estimation. Four of them are: Pri F1 [A], Pri F1 [All], Pri R [A] and Pri R [All], referring to the F1 scores and recall scores of only actionable and all ITs classification respectively. Besides these, nDCG is a ranking metric included in this category to measure a run's average performance in ranking the top 100 test tweets per event by priority.
\end{itemize}
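For concreteness, a minimal sketch of how the IT F1 metrics can be computed from per-tweet label sets; the macro-averaging over ITs (restricted to the six actionable ITs for the {[}A{]} variants) reflects our reading of the track's evaluation and is an assumption:

```python
def macro_f1(gold, pred, its):
    """Macro-averaged F1 over the listed ITs.

    gold/pred: one set of IT labels per tweet. Passing the actionable
    subset as `its` gives the [A] variant; the full 25 ITs give [All].
    """
    scores = []
    for it in its:
        tp = sum((it in g) and (it in p) for g, p in zip(gold, pred))
        fp = sum((it not in g) and (it in p) for g, p in zip(gold, pred))
        fn = sum((it in g) and (it not in p) for g, p in zip(gold, pred))
        denom = 2 * tp + fp + fn
        scores.append(2 * tp / denom if denom else 0.0)
    return sum(scores) / len(scores)
```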
As TREC-IS 2021 has been run with two editions (A and B) that produce two sets of judgements, we report our runs' performance separately in the judgements of each edition as well as in the combined judgements of both editions, as presented in Tables~\ref{tab:2021a-results},~\ref{tab:2021b-results} and~\ref{tab:all-results}.
First, in overview, most of our runs perform consistently well in both editions relative to the other participating runs. When compared to the median and maximum of each metric, we find that our multi-task and ensemble runs in particular achieve strong performance, hitting the best scores in many cases. Examining the figures by task, we notice that some of our runs perform well in one task while under-performing in the other. For example, the \texttt{ML run} achieves decent scores in IT classification but its scores for priority estimation are relatively poor. This is likely due to the simple mapping from IT predictions to priority estimates in that run. We expect the \texttt{ML run} to be improved upon by modelling not just the IT classification but also the priority estimation (a regression task).
In terms of our multi-task runs, we find that these runs tend to achieve strong scores in both tasks. This indicates that joint learning on both tasks through fine-tuning pre-trained language models (DeBERTa in our case) can help achieve strong performance, which adds support to the results of~\cite{wang2021} in previous TREC-IS editions. Also unsurprisingly, the bigger fine-tuned model brings slightly improved performance when comparing our \texttt{run3} to our \texttt{run1}. Regarding our ensemble runs without augmentation, they outperform our other runs in almost every metric for both tasks, as well as achieving the highest scores in many metrics among the participating runs. It is noted that the \texttt{mtl.ens.new} run with post-processing (to deal with the ``Irrelevant'' IT) further improves the performance in IT classification as compared to \texttt{run4}.
To check the effect of EDA augmentation when comparing \texttt{run2} to \texttt{run1}, we see marginal improvements in priority estimation in 2021B (Table~\ref{tab:2021b-results}) but not in the other two tables. Hence, it is difficult to conclude whether the EDA augmentation adds benefit in this scenario. We also see similar results for our DGA-based runs (see our \texttt{mtl.fta}, \texttt{mtl.fta.nla} and \texttt{mtl.ens.fta} runs). This seems somewhat inconsistent with the previous study~\cite{congcong2020cls}. We attribute these results to two possible reasons. First, the training data has grown from around 5,000 to 50,000 examples since then, so it is quite possible that the advantage of text augmentation was more pronounced in the earlier low-data setting. Second,
since the downstream model used in our current runs is pre-trained on large general-domain text corpora, the new examples generated by text augmentation may be noisy as well as redundant (the model learns general language features during pre-training and likely augments similar examples itself implicitly during fine-tuning). Hence, better approaches for de-noising and diversifying the augmented examples are avenues of research that we seek to explore in the future.
\section{Conclusion}
In this paper, we report UCD-CS's participation in the TREC 2021 Incident Streams track (TREC-IS). We submitted multiple runs, and our approaches included machine learning algorithms, multi-task learning techniques and ensemble approaches. Among these runs, we find in particular that our multi-task and ensemble runs achieve strong performance in both the information type classification and priority estimation tasks through two rounds of evaluation: the TREC-IS 2021A and B editions. Although we explored some text augmentation approaches with the intent of boosting performance, the results did not indicate consistent improvements, and thus we plan to explore better augmentation techniques in the future.
\section{Introduction}
We shall study the following formulations of the initial-boundary value problem for the Korteweg-de Vries (KdV) equation. On the right half-line $\mathbb{R}^+=(0,+\infty)$, we consider
\begin{equation} \label{SE:100}
\left\{
\begin{aligned}
&\partial_tu + \partial_x^3 u + u\partial_xu = 0 && \text{for }(x,t)\in (0,+\infty)\times (0,T) \\
& u(0,t) = f(t) && \text{for }t\in (0,T)\\
&u(x,0) = \phi(x) && \text{for }x\in (0,+\infty)
\end{aligned}
\right.
\end{equation}
On the left half-line $\mathbb{R}^-=(-\infty,0)$, we consider
\begin{equation} \label{SE:101}
\left\{
\begin{aligned}
&\partial_tu + \partial_x^3 u + u\partial_xu = 0 && \text{for }(x,t)\in (-\infty,0)\times (0,T) \\
& u(0,t) = g_1(t) && \text{for }t\in (0,T)\\
&\partial_xu(0,t) = g_2(t) && \text{for }t\in (0,T)\\
&u(x,0) = \phi(x) && \text{for }x\in (-\infty,0)
\end{aligned}
\right.
\end{equation}
The presence of one boundary condition in the right half-line problem \eqref{SE:100} versus two boundary conditions in the left half-line problem \eqref{SE:101} can be motivated by uniqueness calculations for smooth decaying solutions to the linear equation $\partial_t u + \partial_x^3u=0$. Indeed, for such $u$ and $T>0$, we have
\begin{equation} \label{E:1}
\int_{x=0}^{+\infty} u(x,T)^2 \, dx=
\begin{aligned}[t]
&\int_{x=0}^{+\infty} u(x,0)^2\, dx \\
&+\int_{t=0}^T (2u(0,t) \partial_x^2 u(0,t) - \partial_xu(0,t)^2) \, dt
\end{aligned}
\end{equation}
and
\begin{equation} \label{E:2}
\int_{x=-\infty}^0 u(x,T)^2 \, dx =
\begin{aligned}[t]
&\int_{x=-\infty}^0 u(x,0)^2 \, dx\\
&-\int_{t=0}^T (2u(0,t) \partial_x^2 u(0,t) - \partial_xu(0,t)^2) \, dt .
\end{aligned}
\end{equation}
Assuming $u(x,0)=0$ for $x>0$ and $u(0,t)=0$ for $0<t<T$, we can conclude from \eqref{E:1} that $u(x,T)=0$ for $x>0$. However, the existence of $u(x,t)\neq 0$ for $x<0$ such that $u(x,0)=0$ for $x<0$ and $u(0,t)=0$ for $0<t<T$ is not precluded by \eqref{E:2}. In fact, such nonzero solutions do exist (see \S \ref{S:linear}). On the other hand, \eqref{E:2} does show that assuming $u(x,0)=0$ for $x<0$, $u(0,t)=0$ for $0<t<T$, and $\partial_xu(0,t)=0$ for $0<t<T$ forces $u(x,t)=0$ for $x<0$, $0<t<T$. These uniqueness considerations carry over to the nonlinear equation $\partial_t u + \partial_x^3 u + u\partial_xu =0$, at least in the high regularity setting.
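Both identities follow from the pointwise identity, valid for smooth solutions of $\partial_tu + \partial_x^3u = 0$,
\begin{equation*}
\partial_t(u^2) = -2u\,\partial_x^3u = \partial_x\big( (\partial_xu)^2 - 2u\,\partial_x^2u \big) ,
\end{equation*}
which one verifies by expanding the right-hand side. Integrating in $x$ over the relevant half-line (using decay at infinity) and then in $t$ over $(0,T)$ produces the boundary terms at $x=0$; the opposite signs of the boundary contributions on the two half-lines reflect the orientation of the boundary.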
Given the formulations \eqref{SE:100} and \eqref{SE:101}, it is natural to consider the following configuration for the line segment $0<x<L$ problem:
\begin{equation} \label{SE:102}
\left\{
\begin{aligned}
&\partial_tu + \partial_x^3 u + u\partial_xu = 0 && \text{for }(x,t)\in (0,L)\times (0,T) \\
& u(0,t) = f(t) && \text{for }t\in (0,T)\\
& u(L,t) = g_1(t) && \text{for }t\in (0,T)\\
&\partial_xu(L,t) = g_2(t) && \text{for }t\in (0,T)\\
&u(x,0) = \phi(x) && \text{for }x\in (0,L)
\end{aligned}
\right.
\end{equation}
Now we discuss appropriate spaces for the initial and boundary data, again examining the behavior of solutions to the linear problem on $\mathbb{R}$ for motivation. On $\mathbb{R}$, we define the $L^2$-based inhomogeneous Sobolev spaces $H^s=H^s(\mathbb{R})$ by the norm $\| \phi \|_{H^s} = \| \langle \xi \rangle^s \hat{\phi}(\xi) \|_{L^2_\xi}$, where $\langle \xi \rangle = (1+|\xi|^2)^{1/2}$. Let $e^{-t\partial_x^3}$ denote the linear homogeneous solution group on $\mathbb{R}$, defined by
\begin{equation} \label{E:3}
e^{-t\partial_x^3}\phi(x) = \tfrac{1}{2\pi}\int_\xi e^{ix\xi} e^{it\xi^3} \hat{\phi}(\xi) \, d\xi ,
\end{equation}
so that $(\partial_t + \partial_x^3)e^{-t\partial_x^3}\phi(x)=0$ and $e^{-t\partial_x^3}\phi(x)\big|_{t=0} = \phi(x)$.
The \textit{local smoothing} inequalities of \cite{KPV91} for the operator \eqref{E:3} are
\begin{align*}
\|\theta(t)e^{-t\partial_x^3}\phi \|_{L_x^\infty H_t^\frac{s+1}{3}} &\leq c \|\phi\|_{H^s}\\
\|\theta(t)\partial_x e^{-t\partial_x^3}\phi \|_{L_x^\infty H_t^\frac{s}{3}} &\leq c \|\phi\|_{H^s},
\end{align*}
which can be deduced directly from the definition \eqref{E:3} by a change of variable. These are sharp in the sense that the Sobolev exponents $\frac{s+1}{3}$ and $\frac{s}{3}$ cannot be replaced by higher numbers. In \S \ref{S:N}, we shall define analogues of the inhomogeneous Sobolev spaces on the half-line, $H^s(\mathbb{R}^+)$, $H^s(\mathbb{R}^-)$, and on the line segment, $H^s(0,L)$. We are thus motivated to consider initial-boundary data pairs $(\phi, f) \in H^s(\mathbb{R}^+) \times H^\frac{s+1}{3}(\mathbb{R}^+)$ for \eqref{SE:100}, $(\phi, g_1, g_2) \in H^s(\mathbb{R}^-)\times H^\frac{s+1}{3}(\mathbb{R}^+) \times H^\frac{s}{3}(\mathbb{R}^+)$ for \eqref{SE:101}, and $(\phi, f, g_1, g_2) \in H^s(0,L)\times H^\frac{s+1}{3}(\mathbb{R}^+)\times H^\frac{s+1}{3}(\mathbb{R}^+) \times H^\frac{s}{3}(\mathbb{R}^+)$ for \eqref{SE:102}. From these motivations, we are inclined to consider this configuration optimal in the scale of $L^2$-based Sobolev spaces.
Local well-posedness (LWP), i.e.\ existence, uniqueness, and uniform continuity of the data-to-solution map, of the initial-value problem (IVP)
\begin{equation} \label{E:4}
\left\{
\begin{aligned}
&\partial_tu + \partial_x^3 u + u\partial_x u = 0 && \text{for }(x,t)\in \mathbb{R}\times \mathbb{R}\\
& u(x,0) = \phi(x) && \text{for }(x,t)\in \mathbb{R}
\end{aligned}
\right.
\end{equation}
has been studied by a number of authors over the past three decades. For $s>\frac{3}{2}$, an \textit{a priori} bound can be obtained by the energy method and a solution can be constructed via the artificial viscosity method. To progress to rougher spaces, it is necessary to invoke techniques of harmonic analysis to quantitatively capture the dispersion of higher frequency waves. For $s>\frac{3}{4}$, \cite{KPV91} proved LWP of \eqref{E:4} by the contraction method in a space built out of various space-time norms, using oscillatory integral and local smoothing estimates. For $s>-\frac{3}{4}$, \cite{B93} \cite{KPV94} \cite{KPV96} proved LWP of \eqref{E:4} via the contraction method in Bourgain spaces (denoted in the literature as $X_{s,b}$), which are constructed to delicately analyze the interaction of waves in different frequency zones. LWP for $s=-\frac{3}{4}$ is proved in \cite{CCT03} by using the Miura transform to convert KdV to mKdV (nonlinearity $u^2\partial_xu$) where the corresponding endpoint result is known. These authors also prove local ill-posedness of \eqref{E:4} for $s<-\frac{3}{4}$ in the sense that the data-to-solution map fails to be uniformly continuous. If one only requires that the data-to-solution map be continuous ($C^0$ well-posedness), and not uniformly continuous, then the regularity requirements can possibly be relaxed further. Although this has not yet been shown for the KdV equation on the line, \cite{KT03} have proved, for the KdV equation on the circle $\mathbb{T}$, $C^0$ local well-posedness in $H^{-1}(\mathbb{T})$, whereas it has been shown by \cite{CCT03} that the data-to-solution map cannot be uniformly continuous in $H^s(\mathbb{T})$ for $s<-\frac{1}{2}$.
Our goal in studying \eqref{SE:100} is to obtain low regularity results. It therefore seems reasonable to restrict to $-\frac{3}{4}<s<\frac{3}{2}$. We shall omit $s=\frac{1}{2}$ due to difficulties in formulating the compatibility condition (see below). A Dini integral type compatibility condition would probably suffice at this point, although we have decided not to explore it. We have also decided not to explore the case $s=-\frac{3}{4}$ or the likely ill-posedness result for \eqref{SE:100} and \eqref{SE:101} when $s<-\frac{3}{4}$.
Note that the trace map $\phi \to \phi(0)$ is well-defined on $H^s(\mathbb{R}^+)$ when $s>\frac{1}{2}$. If $s>\frac{1}{2}$, then $\frac{s+1}{3}>\frac{1}{2}$, and both $\phi(0)$ and $f(0)$ are well-defined quantities. Since $\phi(0)$ and $f(0)$ are both meant to represent $u(0,0)$, they must agree. On the other hand, if $s<\frac{3}{2}$, then $s-1<\frac{1}{2}$ and $\frac{s}{3}<\frac{1}{2}$, so in \eqref{SE:101}, neither $\partial_x u \in H^{s-1}$ nor $g_2\in H^\frac{s}{3}$ have a well-defined trace at $0$.
Therefore, we consider \eqref{SE:100} for $-\frac{3}{4}< s<\frac{3}{2}$, $s\neq \frac{1}{2}$ in the setting
\begin{equation} \label{E:111}
\phi\in H^s(\mathbb{R}^+), \; f\in H^\frac{s+1}{3}(\mathbb{R}^+), \; \text{and if }\tfrac{1}{2}<s<\tfrac{3}{2}, \; \phi(0)=f(0) .
\end{equation}
We consider \eqref{SE:101} for $-\frac{3}{4}< s<\frac{3}{2}$, $s\neq \frac{1}{2}$ in the setting
\begin{equation} \label{E:112}
\begin{gathered}
\phi\in H^s(\mathbb{R}^-), \; g_1\in H^\frac{s+1}{3}(\mathbb{R}^+), \; g_2\in H^\frac{s}{3}(\mathbb{R}^+) \\
\text{and if }\tfrac{1}{2}<s<\tfrac{3}{2}, \; \phi(0)=g_1(0) .
\end{gathered}
\end{equation}
We consider \eqref{SE:102} for $-\frac{3}{4}< s<\frac{3}{2}$, $s\neq \frac{1}{2}$ in the setting
\begin{equation}\label{E:113}
\begin{gathered}
\phi\in H^s(0,L), \; f\in H^\frac{s+1}{3}(\mathbb{R}^+), \; g_1\in H^\frac{s+1}{3}(\mathbb{R}^+), \; g_2\in H^\frac{s}{3}(\mathbb{R}^+) \\
\text{and if }\tfrac{1}{2}<s<\tfrac{3}{2}, \; \phi(0)=f(0), \; \phi(L)=g_1(0) .
\end{gathered}
\end{equation}
The solutions we construct shall have the following characteristics.
\begin{definition} \label{D:strong}
$u(x,t)$ will be called a \emph{distributional solution of \eqref{SE:100}, \eqref{E:111} [resp. \eqref{SE:101}, \eqref{E:112}] on $[0,T]$} if
\begin{enumerate}
\item \emph{Well-defined nonlinearity}: $u$ belongs to some space $X$ with the property that $u\in X \Longrightarrow \partial_x u^2$ is a well-defined distribution.
\item $u(x,t)$ satisfies the equation \eqref{SE:100} [resp. \eqref{SE:101}] in the sense of distributions on the set $(x,t) \in (0,+\infty) \times (0,T)$ [resp. $(x,t) \in (-\infty,0) \times (0,T)$].
\item \emph{Space traces:} $u\in C( [0,T]; \; H^s_x)$ and in this sense $u(\cdot,0) = \phi$ in $H^s(\mathbb{R}^+)$ [resp. $u(\cdot,0) = \phi$ in $H^s(\mathbb{R}^-)$].
\item \emph{Time traces:} $u\in C( \mathbb{R}_x ; H^\frac{s+1}{3}(0,T))$ and in this sense $u(0, \cdot )=f$ in $H^\frac{s+1}{3}(0,T)$ [resp. $u(0, \cdot )=g_1$ in $H^\frac{s+1}{3}(0,T)$].
\item \emph{Derivative time traces:} $\partial_x u\in C( \mathbb{R}_x ; H^\frac{s}{3}(0,T))$, and, for \eqref{SE:101}, \eqref{E:112} only, we require that in this sense $\partial_x u(0, \cdot )=g_2$ in $H^\frac{s}{3}(0,T)$.
\end{enumerate}
\end{definition}
In our case, $X$ shall be the modified Bourgain space $X_{s,b}\cap D_\alpha$ with $b<\frac{1}{2}$ and $\alpha>\frac{1}{2}$, where
\begin{equation}
\label{E:400}
\begin{gathered}
\|u\|_{X_{s,b}} = \left( \iint_{\xi,\tau} \langle\xi\rangle^{2s}\langle\tau-\xi^3\rangle^{2b}|\hat{u}(\xi,\tau)|^2 \, d\xi \, d\tau \right)^{1/2},\\
\|u\|_{D_\alpha} = \left( \iint_{|\xi|\leq 1} \langle\tau\rangle^{2\alpha} |\hat{u}(\xi,\tau)|^2 \, d\xi \,d \tau \right)^{1/2}.
\end{gathered}
\end{equation}
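For concreteness, the weighted integrals in \eqref{E:400} can be transcribed directly as Riemann sums. The following Python sketch (ours, not part of the analysis; all function and variable names are our own) evaluates both norms for a function given by samples of its space-time Fourier transform $\hat u$ on a rectangular $(\xi,\tau)$ grid.

```python
import math

# Numerical sketch (ours, not from the paper): the X_{s,b} and D_alpha norms
# of (E:400), approximated by Riemann sums over samples of the space-time
# Fourier transform uhat on a rectangular (xi, tau) grid.

def bracket(x):
    # Japanese bracket <x> = (1 + x^2)^(1/2)
    return math.sqrt(1.0 + x * x)

def xsb_norm(uhat, xis, taus, s, b):
    """Riemann-sum approximation of ||u||_{X_{s,b}}."""
    dxi, dtau = xis[1] - xis[0], taus[1] - taus[0]
    total = 0.0
    for i, xi in enumerate(xis):
        for j, tau in enumerate(taus):
            weight = bracket(xi) ** (2 * s) * bracket(tau - xi ** 3) ** (2 * b)
            total += weight * abs(uhat[i][j]) ** 2 * dxi * dtau
    return math.sqrt(total)

def dalpha_norm(uhat, xis, taus, alpha):
    """Riemann-sum approximation of ||u||_{D_alpha}: low frequencies |xi| <= 1 only."""
    dxi, dtau = xis[1] - xis[0], taus[1] - taus[0]
    total = 0.0
    for i, xi in enumerate(xis):
        if abs(xi) > 1.0:
            continue
        for j, tau in enumerate(taus):
            total += bracket(tau) ** (2 * alpha) * abs(uhat[i][j]) ** 2 * dxi * dtau
    return math.sqrt(total)
```

On a grid concentrated at a single low-frequency cell, both sums collapse to the pointwise weights, which gives a convenient correctness check.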
The space $X_{s,b}$, with $b>\frac{1}{2}$, is typically employed in the study of the IVP \eqref{E:4}. For $b>\frac{1}{2}$, the bilinear estimate (Lemma \ref{L:bilinear}) holds without the low frequency modification $D_\alpha$, and thus $D_\alpha$ is not necessary in the study of the IVP. The introduction of the Duhamel boundary forcing operator in our study of the IBVP, however, forces us to take $b<\frac{1}{2}$, and then $D_\alpha$ must be added in order for Lemma \ref{L:bilinear} to hold.
A definition for \eqref{SE:102}, \eqref{E:113} can be given in the obvious manner. We shall next introduce the concept of mild solution used by \cite{BSZ04}.
\begin{definition}
$u(x,t)$ is a \emph{mild solution} of \eqref{SE:100} [resp. \eqref{SE:101}] on $[0,T]$ if $\exists$ a sequence $\{ u_n \}$ in $C( [0,T]; \; H^3(\mathbb{R}_x^+) )\cap C^1([0,T]; \; L^2(\mathbb{R}_x^+) )$ [resp. $C( [0,T]; \; H^3(\mathbb{R}_x^-) )\cap C^1([0,T]; \; L^2(\mathbb{R}_x^-) )$] such that
\begin{enumerate}
\item $u_n(x,t)$ solves \eqref{SE:100} in $L^2(\mathbb{R}_x^+)$ [resp. \eqref{SE:101} in $L^2(\mathbb{R}_x^-)$] for $0<t<T$.
\item $\displaystyle \lim_{n\to +\infty} \|u_n -u \|_{C([0,T];\, H^s(\mathbb{R}_x^+))} =0$ [resp. $\displaystyle \lim_{n\to +\infty} \|u_n -u \|_{C([0,T];\, H^s(\mathbb{R}_x^-))} =0$].
\item $\displaystyle \lim_{n\to +\infty} \|u_n(0,\cdot)-f\|_{H^\frac{s+1}{3}(0,T)} =0$ [resp. $\displaystyle \lim_{n\to +\infty} \|u_n(0,\cdot)-g_1\|_{H^\frac{s+1}{3}(0,T)} =0$, $\displaystyle \lim_{n\to +\infty} \|\partial_x u_n(0,\cdot)-g_2\|_{H^\frac{s}{3}(0,T)} =0$].
\end{enumerate}
\end{definition}
\cite{BSZ05} have recently introduced a method for proving uniqueness of mild solutions for \eqref{SE:100}, \eqref{E:111}.
Our main result is the following existence statement.
\begin{theorem} \label{T:main} Let $-\frac{3}{4}<s<\frac{3}{2}$, $s\neq \frac{1}{2}$.
\begin{enumerate}
\item \label{I:right}Given $(\phi,f)$ satisfying \eqref{E:111}, $\exists \; T>0$ depending only on the norms of $\phi$, $f$ in \eqref{E:111} and $\exists\; u(x,t)$ that is both a mild and distributional solution to \eqref{SE:100}, \eqref{E:111} on $[0,T]$.
\item \label{I:left}Given $(\phi,g_1,g_2)$ satisfying \eqref{E:112}, $\exists \; T>0$ depending only on the norms of $\phi$, $g_1$, $g_2$ in \eqref{E:112} and $\exists\; u(x,t)$ that is both a mild and distributional solution to \eqref{SE:101}, \eqref{E:112} on $[0,T]$.
\item \label{I:lineseg}Given $(\phi,f, g_1,g_2)$ satisfying \eqref{E:113}, $\exists \; T>0$ depending only on the norms of $\phi$, $f$, $g_1$, $g_2$ in \eqref{E:113} and $\exists\; u(x,t)$ that is both a mild and distributional solution to \eqref{SE:102}, \eqref{E:113} on $[0,T]$.
\end{enumerate}
In each of the above cases, the data-to-solution map is analytic as a map from the spaces in \eqref{E:111}, \eqref{E:112}, \eqref{E:113} to the spaces in Definition \ref{D:strong}.
\end{theorem}
The proof of Theorem \ref{T:main} involves the introduction of an analytic family of boundary forcing operators extending the single operator introduced by \cite{CK02} (further comments in \S \ref{S:overview}).
The main new feature of our work is the low regularity requirements for $\phi$ and $f$. Surveys of the literature are given in \cite{BSZ02}, \cite{BSZ03}, and \cite{CK02}. Here, we briefly mention some of the more recent contributions. The problem \eqref{SE:102}, \eqref{E:113} for $s\geq 0$ is treated in \cite{BSZ03}, and \eqref{SE:100}, \eqref{E:111} for $s>\frac{3}{4}$ in \cite{BSZ02}, by a Laplace transform technique. In a preprint appearing after this paper was submitted, \cite{BSZ06} have shown LWP of the problem \eqref{SE:100} for $s>-1$ with $H^s(\mathbb{R}^+)$ in \eqref{E:111} replaced by the weighted space
$$H^s_\nu(\mathbb{R}^+) = \{ \, \phi \in H^s(\mathbb{R}^+) \; \mid \: e^{\nu x}\phi(x) \in H^s(\mathbb{R}^+) \, \}$$
for $\nu>0$. They further show LWP of the problem \eqref{SE:102}, \eqref{E:113} for $s>-1$, thus improving Theorem \ref{T:main}\eqref{I:lineseg}. In both of these results, the data-to-solution map is analytic, in contrast to the results of \cite{KT03} mentioned above. A global well-posedness result for the problem \eqref{SE:100}, \eqref{E:111} is obtained by \cite{Fam03} for $s\geq 0$. Inverse scattering techniques have been applied to the problem \eqref{SE:101} by \cite{Fok02} and to the linear analogue of the problem \eqref{SE:102} by \cite{FP01b} for Schwartz class data.
I have obtained analogous results for the nonlinear Schr\"odinger equation in \cite{Hol05}.
\noindent\textbf{Acknowledgements}. I would like to thank my Ph.D.\ advisor Carlos Kenig for invaluable guidance on this project. I would also like to thank the referee for a careful reading and helpful suggestions.
\section{Overview}
\label{S:overview}
In this section, after giving some needed preliminaries, we introduce the Duhamel boundary forcing operator of \cite{CK02} and first apply it and a related operator to solve linear versions of the problems \eqref{SE:100}, \eqref{SE:101}. Then we explain the need for considering a more general class of operators to address the nonlinear versions in $H^s$ for $-\frac{3}{4}<s<\frac{3}{2}$, $s\neq \frac{1}{2}$.
Since precise numerical coefficients become important, let us set down the convention
$$\hat{f}(\xi) = \int_x e^{-ix\xi} f(x)\, dx .$$
Also, define $C_0^\infty(\mathbb{R}^+)$ as those smooth functions on $\mathbb{R}$ with support contained in $[0,+\infty)$. Let $C_{0,c}^\infty(\mathbb{R}^+) = C_0^\infty(\mathbb{R}^+)\cap C_c^\infty(\mathbb{R})$.
The tempered distribution $\frac{t_+^{\alpha-1}}{\Gamma(\alpha)}$ is defined as a locally integrable function for $\text{Re }\alpha>0$, i.e.\
$$\left< \frac{t_+^{\alpha-1}}{\Gamma(\alpha)}, f \right> = \frac{1}{\Gamma(\alpha)}\int_0^{+\infty} t^{\alpha-1} f(t) \, dt .$$
Integration by parts gives, for $\text{Re}\,\alpha>0$, that
\begin{equation}
\label{E:rev1}
\frac{t_+^{\alpha-1}}{\Gamma(\alpha)} = \partial_t^k \left[ \frac{t_+^{\alpha+k-1}}{\Gamma(\alpha+k)} \right]
\end{equation}
for all $k\in \mathbb{N}$.
This formula can be used to extend the definition (in the sense of distributions) of $\frac{t_+^{\alpha-1}}{\Gamma(\alpha)}$ to all $\alpha \in \mathbb{C}$. In particular, we obtain
$$\left. \frac{t_+^{\alpha-1}}{\Gamma(\alpha)}\right|_{\alpha=0} = \delta_0(t) .$$
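As a quick numerical sanity check of this limit (ours, stdlib Python only): by \eqref{E:rev1} with $k=1$, the pairing equals $-\frac{1}{\Gamma(\alpha+1)}\int_0^{+\infty} t^\alpha f'(t)\, dt$, which has no singularity and, for small $\alpha>0$, should be close to $f(0)$. The test function below is our own choice.

```python
import math

# Sanity check: <t_+^(alpha-1)/Gamma(alpha), f> -> f(0) as alpha -> 0+.
# By (E:rev1) with k = 1 and one integration by parts, the pairing equals
#     -(1/Gamma(alpha+1)) * int_0^infty t^alpha f'(t) dt,
# which is nonsingular and easy to integrate numerically.
# Test function (ours): f(t) = exp(-t^2), so f(0) = 1, f'(t) = -2 t exp(-t^2).

def pairing(alpha, n=200000, T=8.0):
    h = T / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h          # midpoint rule on [0, T]; tail beyond T is negligible
        total += t ** alpha * (-2.0 * t * math.exp(-t * t))
    return -total * h / math.gamma(alpha + 1.0)

print(pairing(0.01))   # close to f(0) = 1
print(pairing(1.0))    # the alpha = 1 pairing is int_0^infty f = sqrt(pi)/2
```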
A change of contour calculation shows that
\begin{equation}
\label{E:410}
\left[\frac{t_+^{\alpha-1}}{\Gamma(\alpha)} \right]\sphat(\tau)=e^{-\frac{1}{2}\pi i \alpha}(\tau - i0)^{-\alpha}
\end{equation}
where $(\tau-i0)^{-\alpha}$ is the distributional limit. If $f\in C_0^\infty(\mathbb{R}^+)$, we define
$$\mathcal{I}_\alpha f = \frac{t_+^{\alpha-1}}{\Gamma(\alpha)} * f .$$
Thus, when $\text{Re }\alpha>0$,
$$\mathcal{I}_\alpha f(t) = \frac{1}{\Gamma(\alpha)}\int_0^t (t-s)^{\alpha-1} f(s) \, ds$$
and $\mathcal{I}_0f=f$, $\mathcal{I}_1f(t)=\int_0^t f(s) \, ds$, and $\mathcal{I}_{-1}f=f'$. Also $\mathcal{I}_{\alpha}\mathcal{I}_\beta = \mathcal{I}_{\alpha+\beta}$, which follows from \eqref{E:410}. For further details on the distribution $\frac{t_+^{\alpha-1}}{\Gamma(\alpha)}$, see \cite{F98}.
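The identities $\mathcal{I}_1f(t)=\int_0^t f$ and $\mathcal{I}_\alpha\mathcal{I}_\beta=\mathcal{I}_{\alpha+\beta}$ are easy to confirm numerically when $\text{Re}\,\alpha>0$; the sketch below (ours, not part of the analysis) uses a midpoint rule, which sidesteps the integrable singularity of $(t-s)^{\alpha-1}$ at $s=t$.

```python
import math

# Numerical sketch (ours) of the Riemann-Liouville integral I_alpha for
# Re(alpha) > 0, checked against the classical identity
#   I_alpha t^beta = Gamma(beta+1)/Gamma(beta+1+alpha) * t^(beta+alpha).

def riemann_liouville(alpha, f, t, n=4000):
    """Midpoint-rule approximation of I_alpha f(t) for alpha > 0.

    Midpoints avoid evaluating (t - s)^(alpha - 1) at its integrable
    singularity s = t when 0 < alpha < 1."""
    h = t / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * h
        total += (t - s) ** (alpha - 1.0) * f(s)
    return total * h / math.gamma(alpha)

# I_1 f(t) = int_0^t f: for f(s) = s this gives t^2/2.
print(riemann_liouville(1.0, lambda s: s, 1.0))    # ~ 0.5

# Semigroup property I_{1/2} I_{1/2} = I_1 on f(s) = s; the inner
# half-integral is known in closed form: I_{1/2} s = s^(3/2)/Gamma(5/2).
g = lambda s: s ** 1.5 / math.gamma(2.5)
print(riemann_liouville(0.5, g, 1.0))              # ~ 0.5 again
```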
\begin{lemma} \label{L:RL}
If $f\in C_0^{\infty}(\mathbb{R}^+)$, then $\mathcal{I}_{\alpha}f \in C_0^{\infty}(\mathbb{R}^+)$, for all $\alpha\in \mathbb{C}$.
\end{lemma}
\begin{proof}
By \eqref{E:rev1} and integration by parts, it suffices to consider the case $\text{Re}\, \alpha>1$. In this case, it is clear that $\text{supp}\, \mathcal{I}_\alpha f \subset [0,+\infty)$ and it remains only to show that $\mathcal{I}_\alpha f(t)$ is smooth. By a change of variable,
$$\mathcal{I}_\alpha f(t) = \frac{1}{\Gamma(\alpha)} \int_0^t s^{\alpha-1} f(t-s) \, ds .$$
Smoothness of $\mathcal{I}_\alpha f(t)$ then follows by the fundamental theorem of calculus, differentiation under the integral sign, and the fact that $\partial_t^kf(0)=0$ for all $k$.
\end{proof}
The \textit{Airy function} is
$$A(x) = \frac{1}{2\pi} \int_\xi e^{ix\xi}e^{i\xi^3} \, d\xi .$$
$A(x)$ is a smooth function with the asymptotic properties
\begin{align*}
A(x) &\sim c_1 x^{-1/4} e^{-c_2 x^{3/2}}(1+O(x^{-3/4})) && \text{as }x\to +\infty \\
A(-x) &\sim c_3 x^{-1/4}\cos(c_4x^{3/2}-\tfrac{\pi}{4})(1+O(x^{-3/4})) && \text{as }x\to +\infty
\end{align*}
for specific constants $c_1,\ldots,c_4>0$ (see, e.g.\ \cite{SS04}, p.\ 328). We shall need below the values of $A(0)$, $A'(0)$, and $\int_0^{+\infty} A(y)\, dy$, and so we now compute them.
$$A(0)= \frac{1}{2\pi} \int_\xi e^{i\xi^3} \, d\xi = \frac{1}{6\pi} \int_\eta \eta^{-2/3}e^{i\eta} \, d\eta = \frac{\frac{\sqrt 3}{2}\Gamma(\frac{1}{3})}{3\pi} = \frac{1}{3\Gamma(\frac{2}{3})}$$
by a change of contour calculation, and in the final step, an application of the identity $\Gamma(z)\Gamma(1-z)=\pi/\sin \pi z$. Similarly one finds
$$A'(0) = \frac{1}{2\pi} \int_\xi i\xi e^{i\xi^3} \, d\xi = -\frac{1}{3\Gamma(\frac{1}{3})} .$$
Also,
$$\int_{y=0}^{+\infty} A(y)\, dy = \frac{1}{2\pi} \int_\xi \int_{y=0}^{+\infty} e^{iy\xi} \, dy \; e^{i\xi^3} \, d\xi = \frac{1}{2\pi} \int_\xi \hat{H}(-\xi) e^{i\xi^3} \, d\xi$$
where $H(y)=0$ for $y<0$, $H(y)=1$ for $y>0$ is the Heaviside function. Now (see \cite{F98}, p.\ 101) $\hat{H}(\xi) = \text{p.v.}\frac{1}{i\xi} + \pi \delta_0(\xi)$, which inserted above and combined with the identity $(\text{p.v.}1/x)\sphat(\xi) = -i\pi\text{sgn}\, \xi$ yields
$$\int_0^{+\infty} A(y)\, dy = \frac{1}{3} .$$
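The two expressions obtained for $A(0)$, and the reflection-formula step connecting them, can be checked numerically. The short computation below is a sanity check of ours and is not needed for the argument.

```python
import math

# Consistency check (ours) on the Airy constants just computed.  The step
# A(0) = (sqrt(3)/2) Gamma(1/3) / (3 pi) = 1 / (3 Gamma(2/3)) rests on the
# reflection formula Gamma(z) Gamma(1-z) = pi / sin(pi z) at z = 1/3.

A0       = 1.0 / (3.0 * math.gamma(2.0 / 3.0))                       # A(0)
A0_alt   = (math.sqrt(3.0) / 2.0) * math.gamma(1.0 / 3.0) / (3.0 * math.pi)
A_prime0 = -1.0 / (3.0 * math.gamma(1.0 / 3.0))                      # A'(0) < 0

print(A0, A0_alt)      # the two expressions for A(0) agree
```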
\subsection{Linear versions}
\label{S:linear}
We define the Airy \textit{group} as
\begin{equation}
\label{E:G}
e^{-t\partial_x^3}\phi(x) = \frac{1}{2\pi} \int_\xi e^{ix\xi} e^{it\xi^3} \hat{\phi}(\xi) \, d\xi
\end{equation}
so that
\begin{equation}
\label{E:Geq}
\left\{
\begin{aligned}
& (\partial_t + \partial_x^3) [e^{-t\partial_x^3}\phi](x,t) = 0 && \text{for }(x,t)\in \mathbb{R}\times \mathbb{R}\\
& [e^{-t\partial_x^3}\phi](x,0) = \phi(x) && \text{for }x\in \mathbb{R}
\end{aligned}
\right.
\end{equation}
We now introduce the \textit{Duhamel boundary forcing operator} of \cite{CK02}. For $f\in C_0^\infty(\mathbb{R}^+)$, let
\begin{equation}
\label{E:Dbf}
\begin{aligned}
\mathcal{L}^0f(x,t) &= 3 \int_0^t e^{-(t-t')\partial_x^3}\delta_0(x)\mathcal{I}_{-2/3}f(t')\, dt' \\
&= 3\int_0^t A\left( \frac{x}{(t-t')^{1/3}} \right) \frac{\mathcal{I}_{-2/3}f(t')}{(t-t')^{1/3}} \, dt'
\end{aligned}
\end{equation}
so that
\begin{equation}
\label{E:Dbfeq1}
\left\{
\begin{aligned}
& (\partial_t + \partial_x^3) \mathcal{L}^0f(x,t) = 3\delta_0(x)\mathcal{I}_{-2/3}f(t) && \text{for }(x,t)\in \mathbb{R}\times \mathbb{R}\\
& \mathcal{L}^0f(x,0) = 0 && \text{for }x\in \mathbb{R}\\
\end{aligned}
\right.
\end{equation}
We begin with the spatial continuity and decay properties of $\mathcal{L}^0f$, $\partial_x \mathcal{L}^0f$, and $\partial_x^2\mathcal{L}^0f$, for $f\in C_0^\infty(\mathbb{R}^+)$.
\begin{lemma} \label{L:Dbf2}
Let $f\in C_0^\infty(\mathbb{R}^+)$. Then for fixed $0\leq t\leq 1$, $\mathcal{L}^0f(x,t)$ and $\partial_x\mathcal{L}^0f(x,t)$ are continuous in $x$ for all $x\in \mathbb{R}$ and satisfy the spatial decay bounds
\begin{equation} \label{AE:463}
|\mathcal{L}^0f(x,t)| + |\partial_x \mathcal{L}^0f(x,t)| \leq c_k\|f\|_{H^{k+1}} \langle x \rangle^{-k} \quad \forall \; k\geq 0 .
\end{equation}
For fixed $0\leq t\leq 1$, $\partial_x^2\mathcal{L}^0f(x,t)$ is continuous in $x$ for $x\neq 0$ and has a step discontinuity of size $3\mathcal{I}_{-2/3}f(t)$ at $x=0$. Also, $\partial_x^2\mathcal{L}^0f(x,t)$ satisfies the spatial decay bounds
\begin{equation} \label{AE:2021}
|\partial_x^2 \mathcal{L}^0f(x,t)| \leq c_k \|f\|_{H^{k+2}} \langle x \rangle^{-k} \qquad \forall \; k\geq 0
\end{equation}
\end{lemma}
\begin{proof}
To establish \eqref{AE:463}, it suffices to show that $\| \langle \xi \rangle \partial_\xi^k\, \widehat{\mathcal{L}^0f}(\xi,t)\|_{L^1_\xi} \leq c_k\|f\|_{H^{k+1}}$, $\forall \; k\geq 0$. Let $\phi(\xi,t) = \int_0^t e^{i(t-t')\xi}h(t') \, dt'$ for some (yet to be prescribed) $h\in C_0^\infty(\mathbb{R}^+)$.
We have
\begin{equation}\label{AE:2000}
\partial_\xi^k \phi(\xi,t) = i^k \int_0^t (t-t')^k e^{i(t-t')\xi} h(t') \, dt' .
\end{equation}
By integration by parts in $t'$,
\begin{equation} \label{AE:2001}
\partial_\xi^k\phi(\xi,t) =
\begin{aligned}[t]
&\frac{i(-1)^{k+1}k!}{\xi^{k+1}} \int_0^t e^{i(t-t')\xi} \partial_{t'}h(t') \, dt' + \frac{i(-1)^kk!}{\xi^{k+1}}h(t)\\
& + \frac{i(-1)^{k+1}}{\xi^{k+1}} \int_0^t e^{i(t-t')\xi} \partial_{t'} \sum_{\substack{\alpha+\beta=k \\ \alpha\leq k-1}} c_{\alpha, \beta} \partial_{t'}^\alpha(t-t')^k \partial_{t'}^\beta h(t') \, dt'
\end{aligned}
\end{equation}
By \eqref{AE:2000}, \eqref{AE:2001} and the time localization, $|\partial_\xi^k\phi(\xi,t)| \leq c_k \|h\|_{H^k} \langle \xi \rangle^{-k-1}$. Since $\widehat{\mathcal{L}^0f}(\xi,t) = \phi(\xi^3,t)$ with $h=3\mathcal{I}_{-2/3}f$, we have by Lemma \ref{L:RL1} that $|\partial_\xi^k\widehat{\mathcal{L}^0f}(\xi,t)|\leq c_k\|f\|_{H^{k+1}}\langle \xi \rangle^{-k-3}$, establishing \eqref{AE:463}. By integration by parts in $t'$ in \eqref{E:Dbf},
\begin{equation} \label{AE:2006}
\partial_x^3 \mathcal{L}^0f(x,t) = 3\delta_0(x)\mathcal{I}_{-2/3}f(t) - \mathcal{L}^0(\partial_tf)(x,t) .
\end{equation}
This, together with the continuity properties of $\mathcal{L}^0(\partial_t f)$, shows that $\partial_x^2 \mathcal{L}^0f(x,t)$ is continuous in $x$ for $x\neq 0$ and has a step discontinuity of size $3\mathcal{I}_{-2/3}f(t)$ at $x=0$. To see that $\partial_x^2\mathcal{L}^0f(x,t) \to 0$ as $x\to \pm \infty$, we first note that for $x<-1$, $\partial_x^2\mathcal{L}^0f(x,t) = \partial_x^2\mathcal{L}^0f(-1,t) - \int_x^{-1} \partial_y^3 \mathcal{L}^0f(y,t) \, dy$. By \eqref{AE:2006} and \eqref{AE:463}, we can send $x\to -\infty$ and obtain that $\partial_x^2\mathcal{L}^0f(x,t) \to c$, for some constant $c$, as $x\to -\infty$. Since $\partial_x \mathcal{L}^0f(0,t) = \int_{-\infty}^0 \partial_x^2\mathcal{L}^0f(y,t) \, dy$, we must have $c=0$. We can similarly show that $\partial_x^2 \mathcal{L}^0f(x,t) \to 0$ as $x\to +\infty$. For $x<0$, use $\partial_x^2 \mathcal{L}^0f(x,t) = \int_{-\infty}^x \partial_y^3 \mathcal{L}^0f(y,t) \, dy$, and for $x>0$, use $\partial_x^2\mathcal{L}^0f(x,t) = -\int_x^{+\infty} \partial_y^3\mathcal{L}^0f(y,t) \, dy $, together with \eqref{AE:463} and \eqref{AE:2006} to obtain the bound \eqref{AE:2021}.
\end{proof}
By Lemma \ref{L:Dbf2}, if $f\in C_0^\infty(\mathbb{R}^+)$, then $\mathcal{L}^0f(x,t)$ is continuous in $x$ on $\mathbb{R}$. Since $A(0)=(3\Gamma(\frac{2}{3}))^{-1}$, the second representation of $\mathcal{L}^0f(x,t)$ in \eqref{E:Dbf} gives
\begin{equation}
\label{E:Dbfeq2}
\mathcal{L}^0f(0,t) = f(t).
\end{equation}
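Indeed, writing $(t-t')^{-1/3} = \Gamma(\tfrac{2}{3})\,\frac{(t-t')^{\frac{2}{3}-1}}{\Gamma(\frac{2}{3})}$ and using $\mathcal{I}_{2/3}\mathcal{I}_{-2/3}f=f$,
$$\mathcal{L}^0f(0,t) = 3A(0)\int_0^t \frac{\mathcal{I}_{-2/3}f(t')}{(t-t')^{1/3}} \, dt' = 3A(0)\,\Gamma(\tfrac{2}{3})\,\mathcal{I}_{2/3}\mathcal{I}_{-2/3}f(t) = \frac{3\,\Gamma(\frac{2}{3})}{3\,\Gamma(\frac{2}{3})}\, f(t) = f(t).$$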
It is thus clear that if we set
$$u(x,t) = e^{-t\partial_x^3}\phi(x) + \mathcal{L}^0\left( f- e^{-\cdot\partial_x^3}\phi\big|_{x=0} \right)(x,t)$$
then $u(x,t)$ solves the linear problem
$$\left\{
\begin{aligned}
&(\partial_t + \partial_x^3)u(x,t) = 0 && \text{for }x\neq 0 \\
&u(x,0) = \phi(x) &&\text{for }x\in \mathbb{R}\\
&u(0,t) = f(t) &&\text{for }t\in \mathbb{R}
\end{aligned}
\right.
$$
This would suffice, then, to solve the linear analogue of the right half-line problem \eqref{SE:100}, which has only one boundary condition.
Now we consider the linear analogue of the left half-line problem \eqref{SE:101}, which has two boundary conditions.
Consider, in addition to $\mathcal{L}^0$, the second boundary forcing operator
\begin{equation}
\label{E:Dbf2}
\begin{aligned}
\mathcal{L}^{-1}f(x,t) &= \partial_x \mathcal{L}^0\mathcal{I}_{1/3}f(x,t) \\
&= 3\int_0^t A'\left( \frac{x}{(t-t')^{1/3}} \right) \frac{\mathcal{I}_{-1/3}f(t')}{(t-t')^{2/3}} \, dt'
\end{aligned}
\end{equation}
By Lemma \ref{L:Dbf2}, if $f\in C_0^\infty(\mathbb{R}^+)$, then $\mathcal{L}^{-1}f(x,t)$ is continuous in $x$ for all $x\in \mathbb{R}$ and, since $A'(0)=-(3\Gamma(\frac{1}{3}))^{-1}$, the second representation of $\mathcal{L}^{-1}f(x,t)$ in \eqref{E:Dbf2} gives
\begin{equation}
\label{E:Dbfeq4}
\mathcal{L}^{-1}f(0,t) = -f(t).
\end{equation}
By \eqref{E:Dbfeq1}, $\mathcal{L}^{-1}$ satisfies
$$
\left\{
\begin{aligned}
& (\partial_t + \partial_x^3) \mathcal{L}^{-1}f(x,t) = 3\delta_0'(x)\mathcal{I}_{-1/3}f(t) && \text{for }(x,t)\in \mathbb{R}\times \mathbb{R}\\
& \mathcal{L}^{-1}f(x,0) = 0 && \text{for }x\in \mathbb{R}
\end{aligned}
\right.
$$
By Lemma \ref{L:Dbf2}, $\partial_x \mathcal{L}^0f(x,t)$ is continuous in $x$ for all $x\in \mathbb{R}$ and since $A'(0)=-(3\Gamma(\frac{1}{3}))^{-1}$,
\begin{equation}
\label{E:Dbfeq10}
\partial_x \mathcal{L}^0f(0,t) = -\mathcal{I}_{-1/3}f(t).
\end{equation}
Again by Lemma \ref{L:Dbf2}, $\partial_x \mathcal{L}^{-1}f(x,t)=\partial_x^2 \mathcal{L}^0\mathcal{I}_{1/3}f(x,t)$ is continuous in $x$ for $x\neq 0$ and has a step discontinuity of size $3\mathcal{I}_{-1/3}f(t)$ at $x=0$. Since
\begin{align*}
\lim_{x\downarrow 0} \partial_x^2 \mathcal{L}^0f(x,t) &= -\int_0^{+\infty} \partial_y^3 \mathcal{L}^0f(y,t) \, dy \\
&= +\int_0^{+\infty} \mathcal{L}^0(\partial_t f)(y,t) \, dy && \text{by \eqref{AE:2006}} \\
&= 3 \int_{y=0}^{+\infty} A(y) \, dy \int_0^t \partial_t\mathcal{I}_{-2/3}f(t') \, dt' && \text{by \eqref{E:Dbf} and Fubini}\\
&= \mathcal{I}_{-2/3}f(t)
\end{align*}
we have
\begin{equation}
\label{E:Dbfeq3}
\lim_{x\uparrow 0} \partial_x \mathcal{L}^{-1}f(x,t) = -2\mathcal{I}_{-1/3}f(t), \qquad \lim_{x\downarrow 0} \partial_x \mathcal{L}^{-1}f(x,t) = \mathcal{I}_{-1/3}f(t).
\end{equation}
By \eqref{E:Dbfeq2}, \eqref{E:Dbfeq4}, \eqref{E:Dbfeq10}, \eqref{E:Dbfeq3}, for yet to be assigned $h_1$ and $h_2$, we have
\begin{align}
\mathcal{L}^0h_1(0,t)+\mathcal{L}^{-1}h_2(0,t) &= h_1(t) - h_2(t) \label{E:Dbfeq5}\\
\lim_{x\uparrow 0} \mathcal{I}_{1/3}\partial_x (\mathcal{L}^0h_1(x,-)+\mathcal{L}^{-1}h_2(x,-))(t) &= -h_1(t) -2h_2(t) \label{E:Dbfeq6}\\
\lim_{x\downarrow 0} \mathcal{I}_{1/3}\partial_x (\mathcal{L}^0h_1(x,-)+\mathcal{L}^{-1}h_2(x,-))(t) &= -h_1(t) +h_2(t) \label{E:Dbfeq7}
\end{align}
If we are given $g_1(t)$, $g_2(t)$, $\phi$, and set
$$
\begin{bmatrix}
h_1 \\ h_2
\end{bmatrix}
= \frac{1}{3}\begin{bmatrix}
2 & -1 \\
-1 & -1
\end{bmatrix}
\begin{bmatrix}
g_1 -e^{-\cdot \partial_x^3}\phi\big|_{x=0} \\
\mathcal{I}_{1/3}(g_2-\partial_x e^{-\cdot \partial_x^3}\phi\big|_{x=0})
\end{bmatrix}
$$
then by letting $u(x,t)= e^{-t\partial_x^3}\phi(x) + \mathcal{L}^0h_1(x,t) + \mathcal{L}^{-1}h_2(x,t)$, we have
$$
\left\{
\begin{aligned}
&(\partial_t + \partial_x^3)u(x,t) = 0 && \text{for }x\neq 0\\
&u(x,0) = \phi(x) && \text{for }x\in \mathbb{R}\\
&u(0,t) = g_1(t) && \text{for }t\in \mathbb{R}\\
&\lim_{x\uparrow 0}\partial_xu(x,t) = g_2(t) && \text{for }t\in \mathbb{R}\\
\end{aligned}
\right.
$$
Owing to the degeneracy in the right-hand limits \eqref{E:Dbfeq5}, \eqref{E:Dbfeq7}, we see that we cannot specify both boundary data $u(0,t)$ and derivative boundary data $\lim_{x\downarrow 0}\partial_x u(x,t)$ for the right half-line problem, which is consistent with the uniqueness calculation \eqref{E:1}.
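The $2\times 2$ linear algebra behind the choice of $h_1$, $h_2$, and the degeneracy just noted, can be verified mechanically. In the Python sketch below (exact rational arithmetic; all names are ours), the system coming from \eqref{E:Dbfeq5}, \eqref{E:Dbfeq6} is inverted, recovering the $\frac{1}{3}$-matrix used above, while the system coming from \eqref{E:Dbfeq5}, \eqref{E:Dbfeq7} is singular.

```python
from fractions import Fraction as F

# Exact-arithmetic check (ours) of the 2x2 systems coming from the traces:
#   (E:Dbfeq5):  u(0,t)                      =  1*h1 - 1*h2
#   (E:Dbfeq6):  I_{1/3} d_x u  as x -> 0-   = -1*h1 - 2*h2
#   (E:Dbfeq7):  I_{1/3} d_x u  as x -> 0+   = -1*h1 + 1*h2

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def inv2(m):
    d = det2(m)
    return [[ m[1][1] / d, -m[0][1] / d],
            [-m[1][0] / d,  m[0][0] / d]]

left_system  = [[F(1), F(-1)], [F(-1), F(-2)]]   # rows (E:Dbfeq5), (E:Dbfeq6)
right_system = [[F(1), F(-1)], [F(-1), F(1)]]    # rows (E:Dbfeq5), (E:Dbfeq7)

print(inv2(left_system))    # (1/3)[[2, -1], [-1, -1]], the matrix used above
print(det2(right_system))   # 0: u(0,t) and the x -> 0+ derivative cannot both be prescribed
```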
\subsection{Nonlinear versions}
We define the \textit{Duhamel inhomogeneous solution operator} $\mathcal{D}$ as
\begin{equation}
\label{E:Di}
\mathcal{D}w(x,t) = \int_0^t e^{-(t-t')\partial_x^3}w(x,t') \, dt'
\end{equation}
so that
\begin{equation}
\label{E:Dieq}
\left\{
\begin{aligned}
& (\partial_t + \partial_x^3) \mathcal{D}w(x,t) = w(x,t) && \text{for }(x,t) \in \mathbb{R}\times \mathbb{R} \\
& \mathcal{D}w(x,0) = 0 && \text{for }x\in \mathbb{R}
\end{aligned}
\right.
\end{equation}
For the right half-line problem \eqref{SE:100}, let
\begin{equation}
\label{E:370}
\Lambda_+ w = e^{-t\partial_x^3} \phi - \tfrac{1}{2}\mathcal{D}(\partial_x w^2) + \mathcal{L}^0h
\end{equation}
where
$$h(t) = f(t) - e^{-t\partial_x^3}\phi\big|_{x=0} + \tfrac{1}{2}\mathcal{D}(\partial_x w^2)(0,t)$$
and observe that if $u$ is such that $\Lambda_+ u = u$, then $u$ solves \eqref{SE:100}.
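Indeed, evaluating \eqref{E:370} at $x=0$ and using \eqref{E:Dbfeq2} gives
$$\Lambda_+u(0,t) = e^{-t\partial_x^3}\phi\big|_{x=0} - \tfrac{1}{2}\mathcal{D}(\partial_xu^2)(0,t) + h(t) = f(t) ,$$
so a fixed point attains the boundary condition.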
For the left half-line problem \eqref{SE:101}, let
\begin{equation}
\label{E:371}
\Lambda_- w = e^{-t\partial_x^3} \phi - \tfrac{1}{2}\mathcal{D}(\partial_x w^2) + \mathcal{L}^0h_1 + \mathcal{L}^{-1}h_2
\end{equation}
where
$$
\begin{bmatrix}
h_1(t) \\ h_2(t)
\end{bmatrix}
= \frac{1}{3}\begin{bmatrix}
2 & -1 \\
-1 & -1
\end{bmatrix}
\begin{bmatrix}
g_1(t) - e^{-t\partial_x^3}\phi\big|_{x=0} + \tfrac{1}{2}\mathcal{D}(\partial_xw^2)(0,t) \\ \mathcal{I}_{1/3}(g_2(\cdot) - \partial_x e^{-\cdot\partial_x^3}\phi\big|_{x=0} + \tfrac{1}{2}\partial_x \mathcal{D}(\partial_x w^2)(0,\cdot))
\end{bmatrix}
$$
and observe that if $u$ is such that $\Lambda_- u=u$, then $u$ solves \eqref{SE:101}.
One approach, then, to solving \eqref{SE:100} and \eqref{SE:101} is to prove that $\Lambda_+$, $\Lambda_-$ (or actually time-truncated versions of them) are contraction mappings in suitable Banach spaces. As is the case for the IVP, we need the auxiliary Bourgain space \eqref{E:400}.
\begin{remark} In order to prove Lemma \ref{L:Dbf}\eqref{I:DbfBse}, we shall need to take $b<\frac{1}{2}$. The $D_\alpha$ norm is a low frequency correction for the $X_{s,b}$ norm that is needed in order for the bilinear estimates (Lemma \ref{L:bilinear}) to hold for $b<\frac{1}{2}$. This problem is particular to our treatment of initial-boundary value problems and does not arise in the standard treatment of the initial-value problem (IVP) using the $X_{s,b}$ spaces (see \cite{KPV96}). In treating the IVP, one does not need the Duhamel boundary forcing operators and is thus at liberty to take $b>\frac{1}{2}$, and the bilinear estimate Lemma \ref{L:bilinear} holds in this case without the low frequency modification $D_\alpha$.
\end{remark}
Consider the space $Z$ consisting of all $w$ such that $w\in C(\mathbb{R}_t; H_x^s) \cap C(\mathbb{R}_x; H_t^\frac{s+1}{3}) \cap X_{s,b}\cap D_\alpha$ and $\partial_x w\in C(\mathbb{R}_x; H_t^\frac{s}{3})$. Suppose we wanted to show that the maps $\Lambda_\pm$ above are contractions in a ball in $Z$ with radius determined by the norms of the initial and boundary data. (This was done by \cite{CK02} for $\Lambda_+$ with $s=0$ without the estimates on $\partial_x u$ in $C(\mathbb{R}_x; H_t^\frac{s}{3})$, and their arguments easily extend to $-\frac{1}{2}<s<\frac{1}{2}$.) The needed estimates for such an argument appear below in \S\ref{S:estimates} as Lemma \ref{L:G} for $e^{-t\partial_x^3}$, Lemma \ref{L:Di} for $\mathcal{D}$, Lemma \ref{L:Dbf} with $\lambda=0$ for $\mathcal{L}^0$, and Lemma \ref{L:Dbf} with $\lambda=-1$ for $\mathcal{L}^{-1}$. The constraints in Lemma \ref{L:Dbf}\eqref{I:DbfBse} for $\lambda=0$ are $-\frac{1}{2}<s\leq 1$, and the constraints in Lemma \ref{L:Dbf}\eqref{I:DbfBse} for $\lambda=-1$ are $-\frac{3}{2}<s\leq 0$, thus restricting us to $-\frac{1}{2}<s\leq 0$. In order to achieve the results in the wider range $-\frac{3}{4}<s<\frac{3}{2}$, $s\neq \frac{1}{2}$, we next introduce (in \S \ref{S:Dbf}) two analytic families of operators $\mathcal{L}_+^\lambda$ and $\mathcal{L}_-^\lambda$ such that $\mathcal{L}_\pm^0=\mathcal{L}^0$, $\mathcal{L}_\pm^{-1}=\mathcal{L}^{-1}$. The solution properties are:
$$
\left\{
\begin{aligned}
&(\partial_t + \partial_x^3)\mathcal{L}_+^\lambda f(x,t) = 3\frac{x_-^{\lambda-1}}{\Gamma(\lambda)} \mathcal{I}_{-\frac{2}{3}-\frac{\lambda}{3}}f(t) \\
&\mathcal{L}_+^\lambda f(x,0) = 0 \\
&\mathcal{L}_+^\lambda f(0,t) = e^{\pi i \lambda} f(t)
\end{aligned}
\right.
$$
and
$$
\left\{
\begin{aligned}
&(\partial_t + \partial_x^3)\mathcal{L}_-^\lambda f(x,t) = 3\frac{x_+^{\lambda-1}}{\Gamma(\lambda)} \mathcal{I}_{-\frac{2}{3}-\frac{\lambda}{3}}f(t) \\
&\mathcal{L}_-^\lambda f(x,0) = 0 \\
&\mathcal{L}_-^\lambda f(0,t) = 2\sin(\tfrac{\pi}{3}\lambda+\tfrac{\pi}{6}) f(t)
\end{aligned}
\right.
$$
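As a consistency check (a remark of ours), at $\lambda=0$ and $\lambda=-1$ the trace coefficients above reduce to the values $1$ and $-1$ found for $\mathcal{L}^0$ and $\mathcal{L}^{-1}$ in \S\ref{S:linear}; this is immediate, and is confirmed by the short computation below.

```python
import cmath, math

# Consistency check (ours) on the boundary trace coefficients:
#   L_+^lambda f(0,t) = exp(i pi lambda) f(t)
#   L_-^lambda f(0,t) = 2 sin(pi lambda/3 + pi/6) f(t)
# At lambda = 0 both must give +1 (since L^0 f(0,t) = f(t)); at
# lambda = -1 both must give -1 (since L^{-1} f(0,t) = -f(t)).

def plus_coeff(lam):
    return cmath.exp(1j * math.pi * lam)

def minus_coeff(lam):
    return 2.0 * math.sin(math.pi * lam / 3.0 + math.pi / 6.0)

for lam in (0.0, -1.0):
    print(lam, plus_coeff(lam), minus_coeff(lam))
```

In particular, $2\sin(\tfrac{\pi}{3}\lambda+\tfrac{\pi}{6})$ vanishes only for $\lambda\in-\tfrac{1}{2}+3\mathbb{Z}$, so distinct $\lambda_1\neq\lambda_2$ with nonvanishing coefficients are always available.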
Due to the support properties of $\frac{x_-^{\lambda-1}}{\Gamma(\lambda)}$ and $\frac{x_+^{\lambda-1}}{\Gamma(\lambda)}$, $(\partial_t + \partial_x^3)\mathcal{L}_+^{\lambda}f(x,t) = 0$ for $x>0$ and $(\partial_t + \partial_x^3)\mathcal{L}_-^\lambda f(x,t)=0$ for $x<0$. For any $-\frac{3}{4}<s<\frac{3}{2}$, $s\neq \frac{1}{2}$, we will be able to address the right half-line problem \eqref{SE:100} by replacing $\mathcal{L}^0$ in \eqref{E:370} with $\mathcal{L}_+^\lambda$ for suitable $\lambda=\lambda(s)$ and address the left half-line problem \eqref{SE:101} by replacing $\mathcal{L}^0$, $\mathcal{L}^{-1}$ in \eqref{E:371} with $\mathcal{L}_-^{\lambda_1}$, $\mathcal{L}_-^{\lambda_2}$ for suitable $\lambda_1\neq \lambda_2$ chosen in terms of $s$.
After the classes $\mathcal{L}_\pm^{\lambda}$ have been defined and examined in \S \ref{S:Dbf}, some properties of the half-line Sobolev spaces $H_0^s(\mathbb{R}^+)$, $H^s(\mathbb{R}^+)$ will be given in \S\ref{S:N}. The needed estimates for the contraction arguments are given in \S\ref{S:estimates}. Finally in \S\ref{S:left}-\ref{S:linesegment}, we prove the local well-posedness results in Theorem \ref{T:main}.
\section{The Duhamel boundary forcing operator class}
\label{S:Dbf}
Define, for $\text{Re}\,\lambda >0$ and $f\in C_0^\infty(\mathbb{R}^+)$,
\begin{equation} \label{AE:2020}
\begin{aligned}
\mathcal{L}_-^\lambda f(x,t) &= \left[ \frac{x_+^{\lambda-1}}{\Gamma(\lambda)} \ast \mathcal{L}^0(\mathcal{I}_{-\lambda/3}f)(-,t) \right](x) \\
&= \frac{1}{\Gamma(\lambda)} \int_{-\infty}^x (x-y)^{\lambda-1} \mathcal{L}^0(\mathcal{I}_{-\lambda/3}f)(y,t) \, dy
\end{aligned}
\end{equation}
and, with $\frac{x_-^{\lambda-1}}{\Gamma(\lambda)} = e^{i\pi\lambda}\frac{(-x)_+^{\lambda-1}}{\Gamma(\lambda)}$, define
\begin{equation} \label{AE:2020A}
\begin{aligned}
\mathcal{L}_+^\lambda f(x,t) &= \left[ \frac{x_-^{\lambda-1}}{\Gamma(\lambda)} \ast \mathcal{L}^0(\mathcal{I}_{-\lambda/3}f)(-,t) \right](x) \\
&= \frac{e^{i\pi\lambda}}{\Gamma(\lambda)} \int_x^{+\infty} (y-x)^{\lambda-1}\mathcal{L}^0(\mathcal{I}_{-\lambda/3}f)(y,t) \, dy
\end{aligned}
\end{equation}
By integration by parts in \eqref{AE:2020}, the decay bounds provided by Lemma \ref{L:Dbf2}, and \eqref{AE:2006},
\begin{align}
\mathcal{L}_-^\lambda f(x,t) &= \left[ \frac{x_+^{(\lambda+3)-1}}{\Gamma(\lambda+3)} \ast \partial_x^3\mathcal{L}^0(\mathcal{I}_{-\lambda/3}f)(-,t) \right](x) \notag \\
&=
\begin{aligned}[t]
&3\frac{x_+^{(\lambda+3)-1}}{\Gamma(\lambda+3)}\mathcal{I}_{-\frac{2}{3}-\frac{\lambda}{3}}f(t) \\
&- \int_{-\infty}^x \frac{(x-y)^{(\lambda+3)-1}}{\Gamma(\lambda+3)} \mathcal{L}^0(\partial_t \mathcal{I}_{-\frac{\lambda}{3}}f)(y,t) \, dy
\end{aligned}
\label{AE:2004}
\end{align}
For $\text{Re }\lambda >-3$, we may thus take \eqref{AE:2004} as the definition for $\mathcal{L}_-^\lambda f$. By integration by parts in \eqref{AE:2020A}, the decay bounds provided by Lemma \ref{L:Dbf2}, and \eqref{AE:2006},
\begin{align}
\mathcal{L}_+^\lambda f(x,t) &= \left[ \frac{x_-^{(\lambda+3)-1}}{\Gamma(\lambda+3)} \ast \partial_x^3\mathcal{L}^0(\mathcal{I}_{-\lambda/3}f)(-,t) \right](x) \notag \\
&=
\begin{aligned}[t]
&3\frac{x_-^{(\lambda+3)-1}}{\Gamma(\lambda+3)}\mathcal{I}_{-\frac{2}{3}-\frac{\lambda}{3}}f(t) \\
&+ e^{i\pi\lambda} \int_x^{+\infty} \frac{(y-x)^{(\lambda+3)-1}}{\Gamma(\lambda+3)} \mathcal{L}^0(\partial_t \mathcal{I}_{-\frac{\lambda}{3}}f)(y,t) \, dy
\end{aligned}
\label{AE:2004A}
\end{align}
For $\text{Re }\lambda >-3$, we may thus take \eqref{AE:2004A} as the definition for $\mathcal{L}_+^\lambda f$.
It is straightforward from these definitions that, in the sense of distributions,
$$(\partial_t + \partial_x^3)\mathcal{L}_-^\lambda f(x,t) = 3\frac{x_+^{\lambda-1}}{\Gamma(\lambda)}\mathcal{I}_{-\frac{2}{3}-\frac{\lambda}{3}}f(t)$$
and
$$(\partial_t + \partial_x^3)\mathcal{L}_+^\lambda f(x,t) = 3\frac{x_-^{\lambda-1}}{\Gamma(\lambda)}\mathcal{I}_{-\frac{2}{3}-\frac{\lambda}{3}}f(t)$$
\begin{lemma}[Spatial continuity and decay properties for $\mathcal{L}_\pm^\lambda f(x,t)$]
\label{L:decayest}
Let $f\in C_0^\infty(\mathbb{R}^+)$, and fix $t\geq 0$. We have
$$\mathcal{L}_\pm^{-2}f=\partial_x^2\mathcal{L}^0\mathcal{I}_{\frac{2}{3}}f, \qquad \mathcal{L}_\pm^{-1}f=\partial_x\mathcal{L}^0\mathcal{I}_{\frac{1}{3}}f, \qquad \mathcal{L}_\pm^0f=\mathcal{L}^0f$$
Also, $\mathcal{L}_\pm^{-2}f(x,t)$ has a step discontinuity of size $3f(t)$ at $x=0$ and is otherwise continuous in $x$ for $x\neq 0$. For $\lambda >-2$, $\mathcal{L}_\pm^\lambda f(x,t)$ is continuous in $x$ for all $x\in \mathbb{R}$. For $-2\leq \lambda \leq 1$, $0\leq t \leq 1$, $\mathcal{L}_-^\lambda f(x,t)$ satisfies the decay bounds
\begin{align*}
|\mathcal{L}_-^\lambda f(x,t)| &\leq c_{k,\lambda,f} \langle x \rangle^{-k} && \forall \; x \leq 0, \quad \forall \; k\geq 0 \\
|\mathcal{L}_-^\lambda f(x,t)| &\leq c_{\lambda,f}\langle x \rangle^{\lambda-1} && \forall \; x \geq 0
\end{align*}
For $-2\leq \lambda \leq 1$, $0\leq t \leq 1$, $\mathcal{L}_+^\lambda f(x,t)$ satisfies the decay bounds
\begin{align*}
|\mathcal{L}_+^\lambda f(x,t)| &\leq c_{k,\lambda,f} \langle x \rangle^{-k} && \forall \; x \geq 0, \quad \forall \; k\geq 0 \\
|\mathcal{L}_+^\lambda f(x,t)| &\leq c_{\lambda,f}\langle x \rangle^{\lambda-1} && \forall \; x \leq 0
\end{align*}
\end{lemma}
\begin{proof}
We only prove the bounds for $\mathcal{L}^\lambda_- f$, since the corresponding results for $\mathcal{L}^\lambda_+ f$ are obtained similarly; moreover, for $|x|\leq 2$ the stated bounds follow from continuity, so it suffices to treat $x\leq -2$ and $x\geq 2$. For $x\leq -2$, the result follows by direct estimation in \eqref{AE:2004}, using $|\mathcal{L}^0(\partial_t\mathcal{I}_{-\frac{\lambda}{3}}f)(y,t)| \leq c_{k,f}\langle y \rangle^{-k}\langle x \rangle^{-k}$, obtained from \eqref{AE:463} (since $|y|\geq |x|$ there). Assume now $x\geq 2$. Let $\psi\in C^\infty(\mathbb{R})$ be such that $\psi(y)=1$ for $y\leq \frac{1}{4}$ and $\psi(y)=0$ for $y\geq \frac{3}{4}$. Then
\begin{align*}
\mathcal{L}_-^\lambda f(x,t) &= \frac{x_+^{(\lambda+3)-1}}{\Gamma(\lambda+3)} \ast \partial_x^3 \mathcal{L}^0\mathcal{I}_{-\frac{\lambda}{3}}f(-,t) \\
&=
\begin{aligned}[t]
&\int_{-\infty}^x \frac{(x-y)^{\lambda+2}}{\Gamma(\lambda+3)} \psi\left(\frac{y}{x}\right) \partial_y^3\mathcal{L}^0\mathcal{I}_{-\frac{\lambda}{3}}f(y,t) \, dy \\
&+ \int_{-\infty}^x \frac{(x-y)^{\lambda+2}}{\Gamma(\lambda+3)} \left[ 1 - \psi\left(\frac{y}{x}\right)\right] \partial_y^3\mathcal{L}^0\mathcal{I}_{-\frac{\lambda}{3}}f(y,t) \, dy
\end{aligned} \\
&= \text{I} + \text{II}
\end{align*}
In I, where $y\leq \frac{3}{4}x$, we integrate by parts:
\begin{align*}
\text{I} &= -\int_{-\infty}^x \partial_y^3 \left[ \frac{(x-y)^{\lambda+2}}{\Gamma(\lambda+3)} \psi\left(\frac{y}{x}\right)\right] \mathcal{L}^0\mathcal{I}_{-\frac{\lambda}{3}}f(y,t) \, dy \\
&=
\begin{aligned}[t]
&\int_{-\infty}^x \frac{(x-y)^{\lambda-1}}{\Gamma(\lambda)} \psi\left( \frac{y}{x}\right) \mathcal{L}^0\mathcal{I}_{-\frac{\lambda}{3}}f(y,t) \, dy \\
&+\sum_{j=1}^3 c_j\int_{-\infty}^x \frac{(x-y)^{\lambda+j-1}}{\Gamma(\lambda+j)} \frac{1}{x^j} \psi^{(j)}\left( \frac{y}{x} \right) \mathcal{L}^0\mathcal{I}_{-\frac{\lambda}{3}}f(y,t) \, dy
\end{aligned}
\end{align*}
In the first of these terms, since $y\leq \frac{3}{4}x$ gives $x-y\geq \frac{1}{4}x$ and $\lambda\leq 1$, we have $(x-y)^{\lambda-1} \leq (\frac{1}{4})^{\lambda-1} x^{\lambda-1}$. In the second term, $\frac{1}{4}x \leq y \leq \frac{3}{4}x$, and thus we can use the decay of $\mathcal{L}^0 \mathcal{I}_{-\lambda/3} f(y,t)$.
In II, where $y\geq \frac{1}{4}x$, we apply \eqref{AE:2006}:
\begin{align*}
\text{II} &= \int_{-\infty}^x \frac{(x-y)^{\lambda+2}}{\Gamma(\lambda+3)} \left[ 1 - \psi\left(\frac{y}{x}\right)\right] ( 3\delta_0(y)\mathcal{I}_{-2/3}f(t)-\mathcal{L}^0(\partial_t\mathcal{I}_{-2/3}f)(y,t) ) \, dy \\
&= -\int_{-\infty}^x \frac{(x-y)^{\lambda+2}}{\Gamma(\lambda+3)} \left[ 1 - \psi\left(\frac{y}{x}\right)\right] \mathcal{L}^0(\partial_t\mathcal{I}_{-2/3}f)(y,t) \, dy
\end{align*}
Since $y\geq \tfrac{1}{4}x$, we have by Lemma \ref{L:Dbf2}, $$|\mathcal{L}^0(\partial_t \mathcal{I}_{-2/3}f)(y,t)| \leq c_k \|f\|_{H^{2k+1}} \langle x \rangle^{-k} \langle y \rangle^{-k},$$ which establishes the bound.
\end{proof}
\begin{lemma}[Values of $\mathcal{L}_\pm^\lambda f(x,t)$ at $x=0$]
\label{L:valueatzero}
For $\text{Re }\lambda>-2$,
\begin{equation}
\label{AE:2016}
\mathcal{L}_-^\lambda f(0,t) = 2\sin(\tfrac{\pi}{3}\lambda+\tfrac{\pi}{6})f(t)
\end{equation}
\begin{equation}
\label{AE:2016A}
\mathcal{L}_+^\lambda f(0,t) = e^{i\pi \lambda}f(t)
\end{equation}
\end{lemma}
In order to prove this, we need to compute the Mellin transform of the Airy function over each half-line.
\begin{lemma}[Mellin transform of the Airy function] \label{L:leftMellin}
If $0< \textnormal{Re }\lambda < \frac{1}{4}$, then
\begin{equation} \label{AE:2013}
\int_0^{+\infty} x^{\lambda-1}A(-x) \, dx = \tfrac{1}{3\pi}\Gamma(\lambda)\Gamma(-\tfrac{1}{3}\lambda + \tfrac{1}{3})\cos(\tfrac{2\pi}{3}\lambda - \tfrac{\pi}{6})
\end{equation}
If $\textnormal{Re }\lambda >0 $, then
\begin{equation} \label{AE:2410}
\int_0^{+\infty} x^{\lambda-1} A(x) \, dx = \tfrac{1}{3\pi} \Gamma(\lambda) \Gamma(\tfrac{1}{3}-\tfrac{1}{3}\lambda) \cos( \tfrac{\pi}{3}\lambda+\tfrac{\pi}{6})
\end{equation}
Note that although $\Gamma(\frac{1}{3}-\frac{1}{3}\lambda)$ has poles at $\lambda=1,4,7, \cdots$, $\cos( \tfrac{\pi}{3}\lambda+\tfrac{\pi}{6})$ vanishes at these positions.
\end{lemma}
\begin{proof}
We shall only carry out the computation leading to \eqref{AE:2013}, since the one for \eqref{AE:2410} is similar. Owing to the decay of the Airy function, $|A(-x)| \leq c\langle x \rangle^{-1/4}$ for $x\geq 0$, the given expression is defined as an absolutely convergent integral. In the calculation, we assume that $\lambda$ is real and $0<\lambda <\frac{1}{4}$; by analyticity, this suffices to establish \eqref{AE:2013}. Let $A_1(x) = \frac{1}{2\pi} \int_0^{+\infty} e^{ix\xi} e^{i\xi^3} \, d\xi$, so that $A(x)=2\text{Re }A_1(x)$. Let $A_{1,\epsilon}(x) = \frac{1}{2\pi} \int_{\xi=0}^{+\infty} e^{ix\xi} e^{i\xi^3} e^{-\epsilon \xi} \, d\xi$. Then, by dominated convergence and Fubini,
\begin{align}
\hspace{0.3in}&\hspace{-0.3in} \int_0^{+\infty} x^{\lambda-1}A_1(-x)\, dx \\
&= \lim_{\epsilon \downarrow 0} \lim_{\delta \downarrow 0} \int_{x=0}^{+\infty} x^{\lambda-1} e^{-\delta x} A_{1,\epsilon}(-x) \, dx \notag\\
&= \lim_{\epsilon \downarrow 0} \lim_{\delta \downarrow 0} \frac{1}{2\pi} \int_{\xi=0}^{+\infty} e^{i\xi^3}e^{-\epsilon\xi} \int_{x=0}^{+\infty} x^{\lambda-1}e^{-\delta x}e^{-ix\xi} \, dx \, d\xi . \label{E:301}
\end{align}
By a change of contour,
\begin{equation}
\label{E:300}
\int_{x=0}^{+\infty} x^{\lambda-1}e^{-\delta x} e^{-ix\xi} \, dx = \xi^{-\lambda} e^{-i\lambda \frac{\pi}{2}} \Gamma(\lambda, \delta/\xi)
\end{equation}
where $\Gamma(\lambda,z)= \int_{r=0}^{+\infty} r^{\lambda-1} e^{irz}e^{-r} \, dr$. By dominated convergence,
$$\lim_{\delta \downarrow 0} \int_{x=0}^{+\infty} x^{\lambda-1}e^{-\delta x} e^{-ix\xi} \, dx = \xi^{-\lambda} e^{-i\lambda \frac{\pi}{2}} \Gamma(\lambda)$$
Since \eqref{E:300} is bounded independently of $\delta>0$, we have by dominated convergence
$$\eqref{E:301} = \frac{1}{2\pi} \Gamma(\lambda) e^{-i\lambda \frac{\pi}{2}} \lim_{\epsilon \downarrow 0} \int_{\xi=0}^{+\infty} e^{i\xi^3}e^{-\epsilon\xi} \xi^{-\lambda} \, d\xi$$
Changing variables $\eta=\xi^3$ and deforming the contour, this becomes
$$ \frac{1}{6\pi} \Gamma(\lambda) e^{-\frac{2\pi \lambda i}{3}} e^{\frac{\pi i}{6}} \lim_{\epsilon \downarrow 0} \int_0^{+\infty} e^{-r} e^{-\epsilon( \frac{\sqrt{3}}{2}+i\frac{1}{2})r^{1/3}} r^{-\frac{2}{3}-\frac{\lambda}{3}} \, dr$$
Finally, dominated convergence yields
$$\int_0^{+\infty} x^{\lambda-1}A_1(-x)\, dx = \tfrac{1}{6\pi} e^{-\frac{2\pi \lambda i}{3}} e^{\frac{\pi i}{6}} \Gamma(\lambda)\Gamma(\tfrac{1}{3}-\tfrac{\lambda}{3})$$
Using $A(x) = 2\text{Re }A_1(x)$, we obtain \eqref{AE:2013}.
\end{proof}
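As a sanity check (not part of the argument), \eqref{AE:2410} can be verified numerically. The sketch below assumes the normalization $A(x)=3^{-1/3}\operatorname{Ai}(3^{-1/3}x)$ for the standard Airy function $\operatorname{Ai}$, which follows from $A(x)=2\,\text{Re}\,A_1(x)$ with $A_1$ as in the proof; the function names are illustrative only.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import airy, gamma

# Assumed normalization: A(x) = (1/pi) int_0^inf cos(x xi + xi^3) dxi
#                              = 3^{-1/3} Ai(3^{-1/3} x).
def A(x):
    return 3.0 ** (-1.0 / 3.0) * airy(3.0 ** (-1.0 / 3.0) * x)[0]

def lhs(lam):
    # int_0^inf x^{lam-1} A(x) dx; the substitution x = u^2 removes the
    # integrable endpoint singularity for 0 < lam < 1, and A(u^2) is
    # negligible beyond u ~ 8 by the superexponential decay of Ai.
    return quad(lambda u: 2.0 * u ** (2.0 * lam - 1.0) * A(u * u), 0.0, 8.0)[0]

def rhs(lam):
    # (1/(3 pi)) Gamma(lam) Gamma(1/3 - lam/3) cos(pi lam/3 + pi/6)
    return gamma(lam) * gamma((1.0 - lam) / 3.0) \
        * np.cos(np.pi * lam / 3.0 + np.pi / 6.0) / (3.0 * np.pi)

errors = [abs(lhs(l) - rhs(l)) for l in (0.3, 0.5, 0.8)]
```

At $\lambda=1$ both sides degenerate: the pole of $\Gamma(\tfrac{1}{3}-\tfrac{1}{3}\lambda)$ cancels against the zero of the cosine, and the limiting value is $\int_0^{+\infty}A(x)\,dx=\tfrac{1}{3}$.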
Now we return to the proof of Lemma \ref{L:valueatzero}.
\begin{proof}[Proof of Lemma \ref{L:valueatzero}]
From \eqref{AE:2004},
\begin{equation} \label{AE:2015}
\mathcal{L}_-^\lambda f(0,t) = \int_{-\infty}^0 \frac{(-y)^{\lambda+2}}{\Gamma(\lambda+3)} \mathcal{L}^0(\partial_t \mathcal{I}_{-\frac{\lambda}{3}}f)(y,t) \, dy
\end{equation}
and from \eqref{AE:2004A},
\begin{equation} \label{AE:2015A}
\mathcal{L}_+^\lambda f(0,t) = e^{i\pi\lambda}\int_0^{+\infty} \frac{y^{\lambda+2}}{\Gamma(\lambda+3)} \mathcal{L}^0(\partial_t \mathcal{I}_{-\frac{\lambda}{3}}f)(y,t) \, dy
\end{equation}
By complex differentiation under the integral sign, \eqref{AE:2015} demonstrates that $\mathcal{L}_-^\lambda f(0,t)$ is analytic in $\lambda$ for $\text{Re }\lambda>-2$. We shall only compute \eqref{AE:2016} for $0<\lambda<\tfrac{1}{4}$, $\lambda$ real. By analyticity, the result will extend to the full range $\text{Re }\lambda>-2$. For the computation in the range $0<\lambda<\tfrac{1}{4}$, we use the representation \eqref{AE:2020} in place of \eqref{AE:2015} to give
$$\mathcal{L}_-^\lambda f(0,t) = \int_{y=-\infty}^0 \frac{(-y)^{\lambda-1}}{\Gamma(\lambda)} \mathcal{L}^0\mathcal{I}_{-\frac{\lambda}{3}}f(y,t) \, dy$$
By the decay for $A(-y)$, $y\geq 0$, we can apply Fubini to the above equation after inserting \eqref{E:Dbf} and then apply \eqref{AE:2013} to obtain
\begin{equation*}
\mathcal{L}_-^\lambda f(0,t) = \tfrac{1}{\pi} \Gamma\left( -\tfrac{1}{3}\lambda+\tfrac{1}{3} \right) \Gamma\left( \tfrac{1}{3}\lambda + \tfrac{2}{3} \right) \cos\left( \tfrac{2\pi}{3}\lambda - \tfrac{\pi}{6} \right) \mathcal{I}_{\frac{1}{3}\lambda+\frac{2}{3}}(\mathcal{I}_{-\frac{\lambda}{3}-\frac{2}{3}}f)(t)
\end{equation*}
Using the identities $\Gamma(z)\Gamma(1-z)=\dfrac{\pi}{\sin \pi z}$, $\cos x = \sin( \frac{\pi}{2} - x)$, and $\sin 2x = 2\cos x \sin x$,
\begin{align*}
\mathcal{L}_-^\lambda f(0,t) &= \frac{\cos \left( \frac{2\pi}{3} \lambda -\frac{\pi}{6} \right)}{\sin \left( -\frac{\pi}{3}\lambda + \frac{\pi}{3} \right)} \mathcal{I}_{\frac{1}{3}\lambda+\frac{2}{3}}(\mathcal{I}_{-\frac{\lambda}{3}-\frac{2}{3}}f)(t) \\
&= 2 \sin \left( \tfrac{\pi}{3}\lambda + \tfrac{\pi}{6} \right) \mathcal{I}_{\frac{1}{3}\lambda+\frac{2}{3}}(\mathcal{I}_{-\frac{\lambda}{3}-\frac{2}{3}}f)(t)
\end{align*}
giving \eqref{AE:2016}, since $\mathcal{I}_{\frac{1}{3}\lambda+\frac{2}{3}}\mathcal{I}_{-\frac{1}{3}\lambda-\frac{2}{3}}f = f$.
By complex differentiation under the integral sign, \eqref{AE:2015A} demonstrates that $\mathcal{L}_+^\lambda f(0,t)$ is analytic in $\lambda$ for $\text{Re }\lambda>-3$. We shall only compute \eqref{AE:2016A} for $0<\lambda$, $\lambda$ real. By analyticity, the result will extend to the full range $\text{Re }\lambda>-3$. For the computation in the range $0<\lambda$, we use the representation \eqref{AE:2020A} in place of \eqref{AE:2015A} to give
$$\mathcal{L}_+^\lambda f(0,t) = e^{i\pi\lambda}\int_{y=0}^{+\infty} \frac{y^{\lambda-1}}{\Gamma(\lambda)} \mathcal{L}^0\mathcal{I}_{-\frac{\lambda}{3}}f(y,t) \, dy$$
By the decay of $A(y)$, $y\geq 0$, we can apply Fubini after inserting \eqref{E:Dbf} and then apply \eqref{AE:2410} to obtain
$$\mathcal{L}_+^\lambda f(0,t) = \tfrac{1}{\pi} \Gamma(\tfrac{1}{3}-\tfrac{1}{3}\lambda) \Gamma(\tfrac{1}{3}\lambda+\tfrac{2}{3}) \cos(\tfrac{\pi}{3}\lambda+ \tfrac{\pi}{6}) e^{i\pi\lambda} \mathcal{I}_{\frac{1}{3}\lambda+\frac{2}{3}}(\mathcal{I}_{-\frac{1}{3}\lambda-\frac{2}{3}}f)(t)$$
Using the same identities as above, we obtain \eqref{AE:2016A}.
\end{proof}
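The elementary identities invoked at the end of the proof can be checked independently. The following sketch (illustrative only; standard library) verifies the product-to-sum step behind \eqref{AE:2016}, the corresponding step behind \eqref{AE:2016A}, and the reflection-formula evaluation at a sample point.

```python
import math

def minus_step(lmb):
    # step behind (AE:2016): cos(2a - pi/6) = 2 sin(a + pi/6) sin(pi/3 - a),  a = pi*lmb/3
    a = math.pi * lmb / 3
    return math.cos(2 * a - math.pi / 6) \
        - 2 * math.sin(a + math.pi / 6) * math.sin(math.pi / 3 - a)

def plus_step(lmb):
    # step behind (AE:2016A): sin(pi(lmb/3 + 2/3)) = cos(pi*lmb/3 + pi/6)
    return math.sin(math.pi * (lmb / 3 + 2 / 3)) \
        - math.cos(math.pi * lmb / 3 + math.pi / 6)

def reflection(lmb):
    # Gamma(1/3 - lmb/3) Gamma(lmb/3 + 2/3) cos(pi*lmb/3 + pi/6) = pi  for -2 < lmb < 1
    return math.gamma(1 / 3 - lmb / 3) * math.gamma(lmb / 3 + 2 / 3) \
        * math.cos(math.pi * lmb / 3 + math.pi / 6)

residual = max(max(abs(minus_step(l)), abs(plus_step(l)))
               for l in (-1.5, -0.25, 0.0, 0.37, 0.9))
gamma_err = abs(reflection(0.37) - math.pi)
```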
\section{Notations and some function space properties} \label{S:N}
We use the notation $H^s$ to mean $H^s(\mathbb{R})$ (and not $H^s(\mathbb{R}^+)$ or $H_0^s(\mathbb{R}^+)$). The trace operator $\phi \mapsto\phi(0)$ is defined for $\phi\in H^s(\mathbb{R})$ when $s>\frac{1}{2}$. For $s\geq 0$, define $\phi \in H^s(\mathbb{R}^+)$ if $\exists \; \tilde{\phi} \in H^s(\mathbb{R})$ such that $\tilde{\phi}(x)=\phi(x)$ for $x>0$; in this case we set $\|\phi\|_{H^s(\mathbb{R}^+)} = \inf_{\tilde{\phi}} \|\tilde{\phi} \|_{H^s(\mathbb{R})}$. For $s \in \mathbb{R}$, define $\phi \in H_0^s(\mathbb{R}^+)$ if, when $\phi(x)$ is extended to $\tilde{\phi}(x)$ on $\mathbb{R}$ by setting $\tilde{\phi}(x)=0$ for $x<0$, then $\tilde{\phi}\in H^s(\mathbb{R})$; in this case we set $\| \phi \|_{H_0^s(\mathbb{R}^+)} = \|\tilde{\phi}\|_{H^s(\mathbb{R})}$. For $s<0$, define $H^s(\mathbb{R}^+)$ as the dual space to $H_0^{-s}(\mathbb{R}^+)$, and define $H^s_0(\mathbb{R}^+)$ as the dual space to $H^{-s}(\mathbb{R}^+)$. A definition for $H^s(0,L)$ can be given analogous to that for $H^s(\mathbb{R}^+)$.
Define $\phi \in C_0^\infty(\mathbb{R}^+)$ if $\phi \in C^\infty(\mathbb{R})$ with $\text{supp }\phi \subset [0,+\infty)$ (so that, in particular, $\phi$ and all of its derivatives vanish at $0$), and $C_{0,c}^{\infty}(\mathbb{R}^+)$ as those members of $C_0^\infty(\mathbb{R}^+)$ with compact support. We remark that $C_{0,c}^{\infty}(\mathbb{R}^+)$ is dense in $H_0^s(\mathbb{R}^+)$ for all $s\in \mathbb{R}$.
\begin{lemma}[\cite{CK02} Lemma 2.8]\label{CK28} If $0\leq \alpha < \frac{1}{2}$, then $\| \theta h \|_{H^\alpha} \leq c\|h\|_{\dot{H}^\alpha}$ and $\| \theta h \|_{\dot{H}^{-\alpha}} \leq c\|h\|_{H^{-\alpha}}$, where $c=c(\alpha, \theta)$.
\end{lemma}
\begin{lemma}[\cite{JK95} Lemma 3.5] \label{JK35}
If $-\frac{1}{2}< \alpha< \frac{1}{2}$, then $\| \chi_{(0,+\infty)}f \|_{H^\alpha} \leq c \| f \|_{H^\alpha}$, where $c=c(\alpha)$.
\end{lemma}
\begin{lemma}[\cite{CK02} Prop.\ 2.4, \cite{JK95} Lemma 3.7, 3.8] \label{JK37}
If $\frac{1}{2}<\alpha<\frac{3}{2}$, then
$$H_0^\alpha(\mathbb{R}^+)=\{ f\in H^\alpha(\mathbb{R}^+) \; \mid \; f(0)=0 \}.$$
If $\frac{1}{2}<\alpha<\frac{3}{2}$ and $f\in H^\alpha(\mathbb{R}^+)$ with $f(0)=0$, then $\|\chi_{(0,+\infty)} f\|_{H_0^\alpha(\mathbb{R}^+)} \leq c\|f\|_{H^\alpha(\mathbb{R}^+)}$, where $c=c(\alpha)$.
\end{lemma}
\begin{lemma}[\cite{CK02}, Lemma 5.1]
If $s\in \mathbb{R}$ and $0<b<1$, $0<\alpha<1$ then
$$\|\theta(t)w(x,t)\|_{X_{s,b}\cap D_\alpha} \leq c\|w\|_{X_{s,b}}$$
where $c=c(\theta)$.
\end{lemma}
\begin{lemma}[\cite{CK02} Cor.\ 2.1, Prop.\ 2.2]
For $\alpha\geq 0$, $H_0^{-\alpha}(\mathbb{R}^+)$ is a complex interpolation scale. For $\alpha\geq 0$, $H_0^\alpha(\mathbb{R}^+)$ is a complex interpolation scale.
\end{lemma}
\section{Estimates}
\label{S:estimates}
\subsection{Estimates for the Riemann-Liouville fractional integral}
In this section, we shall use the notation $\mathcal{J}_\alpha f = \frac{t_+^{\alpha-1}}{\Gamma(\alpha)} \ast f $ for $f\in C_0^\infty(\mathbb{R})$ (no restriction on the support of $f$ to $[0,+\infty)$). This is in contrast to the definition of $\mathcal{I}_\alpha$, where the convolution is with a function $f$ supported in $[0,+\infty)$.
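As a concrete illustration of $\mathcal{J}_\alpha$ (a numerical sketch under the assumption $\textnormal{Re}\,\alpha>0$; it plays no role in the estimates below), the product-integration rule here evaluates $\mathcal{J}_\alpha f(t)$ and checks it against the closed form $\mathcal{J}_\alpha t^\beta = \frac{\Gamma(\beta+1)}{\Gamma(\alpha+\beta+1)}\,t^{\alpha+\beta}$ and the semigroup identity $\mathcal{J}_{1/2}\mathcal{J}_{1/2}=\mathcal{J}_1$.

```python
import math

def J(alpha, f, t, n=2000):
    """Riemann-Liouville integral J_alpha f(t) = (1/Gamma(alpha)) int_0^t (t-s)^(alpha-1) f(s) ds.

    The kernel (t-s)^(alpha-1) is integrated exactly on each cell (product
    integration), so the endpoint singularity for 0 < alpha < 1 is harmless."""
    acc = 0.0
    for k in range(n):
        a = t * k / n
        b = t * (k + 1) / n
        # exact integral of the kernel over the cell [a, b]
        w = ((t - a) ** alpha - (t - b) ** alpha) / alpha
        acc += f(0.5 * (a + b)) * w
    return acc / math.gamma(alpha)

t0 = 2.0
# closed form: J_{1/2} applied to f(s) = s gives (4 / (3 sqrt(pi))) t^{3/2}
err_monomial = abs(J(0.5, lambda s: s, t0)
                   - 4.0 / (3.0 * math.sqrt(math.pi)) * t0 ** 1.5)
# semigroup: J_{1/2} J_{1/2} f = J_1 f, i.e. int_0^t s ds = t^2/2 for f(s) = s
nested = J(0.5, lambda s: J(0.5, lambda r: r, s, n=200), t0, n=200)
err_semigroup = abs(nested - t0 ** 2 / 2.0)
```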
\begin{lemma}\label{L:RL5} Let $\alpha\in\mathbb{C}$. If $\mu_1\in C_0^\infty(\mathbb{R})$ and $\mu_2\in C^\infty(\mathbb{R})$ such that $\mu_2=1$ on a neighborhood of $(-\infty, b]$, where $b=\sup \{ \, t \, | \, t\in \textnormal{supp }\mu_1 \, \}$, then $\mu_1 \mathcal{J}_\alpha \mu_2 h = \mu_1 \mathcal{J}_\alpha h$. If $\mu_2\in C^\infty_0(\mathbb{R})$ and $\mu_1\in C^\infty(\mathbb{R})$ such that $\mu_1=1$ on a neighborhood of $[a, +\infty)$, where $a=\inf \{ \, t \, | \, t\in \textnormal{supp }\mu_2 \, \}$, then $\mu_1 \mathcal{J}_\alpha \mu_2 h = \mathcal{J}_\alpha \mu_2 h$.
\end{lemma}
\begin{proof}
The first identity is clear from the integral definition if $\text{Re } \alpha>0$. If $\text{Re } \alpha<0$, let $k\in \mathbb{N}$ be such that $-k<\text{Re }\alpha \leq -k+1$ so that $\mathcal{J}_\alpha=\partial_t^k\mathcal{J}_{\alpha+k}$. Let $U$ be an open set such that
$$\text{supp }\mu_1 \subset (-\infty, b] \subset U \subset \{ \, t\, | \, \mu_2(t)=1 \, \}$$
Then $\forall \; t\in U$, $\mathcal{J}_{\alpha+k}h = \mathcal{J}_{\alpha+k}\mu_2 h$, which implies that $\forall \; t\in (-\infty,b]$, $\partial_t^k \mathcal{J}_{\alpha+k}h = \partial_t^k \mathcal{J}_{\alpha+k} \mu_2 h$, which implies that $\forall \; t\in\mathbb{R}$, $\mu_1\partial_t^k \mathcal{J}_{\alpha+k}h = \mu_1\partial_t^k \mathcal{J}_{\alpha+k} \mu_2 h$. The second claim is clear by the integral definition if $\text{Re }\alpha >0$. If $\text{Re }\alpha<0$, let $k\in \mathbb{N}$ be such that $-k<\text{Re }\alpha \leq -k+1$ so that $\mathcal{J}_\alpha=\mathcal{J}_{\alpha+k}\partial_t^k$. Since $\text{supp }\partial_t^j \mu_2 \subset [a,+\infty)\subset \{ \, t\, | \, \mu_1(t)=1 \, \}$, we have
$$\mu_1 \mathcal{J}_{\alpha+k} (\partial_t^j\mu_2) (\partial_t^{k-j}h) = \mathcal{J}_{\alpha+k}(\partial_t^j\mu_2)(\partial_t^{k-j}h)$$
and thus $\mu_1\mathcal{J}_{\alpha+k}\partial_t^k \mu_2 h = \mathcal{J}_{\alpha+k}\partial_t^k \mu_2 h$.
\end{proof}
\begin{lemma} \label{L:RL6} For $\gamma\in\mathbb{R}$, $s\in \mathbb{R}$, $\| \mathcal{J}_{i\gamma} h \|_{H^s(\mathbb{R})} \leq 2\cosh(\tfrac{1}{2}\pi\gamma)\| h \|_{H^s(\mathbb{R})}$.
\end{lemma}
\begin{proof}
From \eqref{E:400}, we have
$$\left( \frac{x_+^{i\gamma-1}}{\Gamma(i\gamma)} \right)\sphat(\xi) =
\left\{
\begin{aligned}
&e^{\frac{1}{2}\pi \gamma} e^{-i\gamma\ln |\xi|} && \text{if }\xi >0 \\
&e^{-\frac{1}{2}\pi \gamma} e^{-i\gamma\ln |\xi|} && \text{if }\xi<0
\end{aligned}
\right.
$$
and thus $\left| \left( \frac{x_+^{i\gamma-1}}{\Gamma(i\gamma)} \right)\sphat(\xi) \right| \leq 2\cosh(\frac{1}{2}\pi \gamma)$.
\end{proof}
\begin{lemma} \label{L:RL1}
If $0\leq \textnormal{Re }\alpha < +\infty$ and $s\in \mathbb{R}$, then
\begin{gather}
\| \mathcal{I}_{-\alpha}h \|_{H_0^s(\mathbb{R}^+)}\leq c e^{\frac{\pi}{2}|\textnormal{Im }\alpha|}\|h\|_{H_0^{s+\alpha}(\mathbb{R}^+)} \label{AE:315} \\
\| \mathcal{J}_{-\alpha}h \|_{H^s(\mathbb{R})}\leq c e^{\frac{\pi}{2}|\textnormal{Im }\alpha|} \|h\|_{H^{s+\alpha}(\mathbb{R})} \label{AE:316}
\end{gather}
\end{lemma}
\begin{proof}
\eqref{AE:316} is immediate from \eqref{E:410}. \eqref{AE:315} then follows from \eqref{AE:316} by Lemma \ref{L:RL} and a density argument.
\end{proof}
\begin{lemma} \label{L:RL2}
If $0\leq \textnormal{Re }\alpha < +\infty$, $s\in \mathbb{R}$, and $\mu, \mu_2 \in C_0^\infty(\mathbb{R})$, then
\begin{align}
\| \mu \mathcal{I}_\alpha h \|_{H_0^s(\mathbb{R}^+)} &\leq c e^{\frac{\pi}{2}|\textnormal{Im }\alpha|}\|h\|_{H_0^{s-\alpha}(\mathbb{R}^+)} & c&=c(\mu)\label{AE:318} \\
\| \mu \mathcal{J}_\alpha \mu_2 h \|_{H^s(\mathbb{R})} &\leq c e^{\frac{\pi}{2}|\textnormal{Im }\alpha|} \|h\|_{H^{s-\alpha}(\mathbb{R})} & c&= c(\mu,\mu_2)\label{AE:319}
\end{align}
\end{lemma}
\begin{proof} We first explain how \eqref{AE:318} follows from \eqref{AE:319}. Given $\mu$, let $b= \sup\{ \, t \, | \, t \in \text{supp } \mu \, \}$. Take $\mu_2\in C_0^\infty(\mathbb{R})$, $\mu_2=1$ on $[0,b]$. Then, when restricting to $h\in C_0^\infty(\mathbb{R}^+)$, we have $\mu \mathcal{I}_\alpha h = \mu \mathcal{J}_\alpha \mu_2 h$. By Lemma \ref{L:RL} and a density argument, we obtain \eqref{AE:318}. Now we prove \eqref{AE:319}. We first need the special case $s=0$.\\
\textit{Claim}. If $k\in \mathbb{Z}_{\geq 0}$, then $\|\mu \mathcal{J}_k \mu_2 h\|_{L^2(\mathbb{R})} \leq c \| h \|_{H^{-k}(\mathbb{R})}$, where $c=c(\mu, \mu_2)$.\\
To prove this claim, consider $k\in \mathbb{N}$. If $g\in C_0^\infty(\mathbb{R})$ with $\| g\|_{L^2}\leq 1$, then
\begin{align*}
\| \mu \mathcal{J}_k \mu_2 h \|_{L^2} &= \frac{1}{\Gamma(k)}\sup_g \int_t \mu(t) \int_{s=-\infty}^t(t-s)^{k-1} \mu_2(s)h(s) \, ds \, g(t) \, dt \\
&= \frac{1}{\Gamma(k)} \sup_g \int_s h(s) \, \mu_2(s) \int_{t=s}^{+\infty} \mu(t)(t-s)^{k-1}g(t)\, dt \, ds \\
&\leq \frac{1}{\Gamma(k)}\| h \|_{H^{-k}} \left\| \mu_2(s) \int_{t=s}^{+\infty} \mu(t)(t-s)^{k-1}g(t)\, dt \right\|_{H^k(ds)} \\
& \leq c\| h\|_{H^{-k}} \|g\|_{L^2}
\end{align*}
The case $k =0$ is trivial, concluding the proof of the claim.\\
To prove \eqref{AE:319}, we first take $\alpha=k\in \mathbb{Z}_{\geq 0}$, $s=m\in\mathbb{Z}$, $h\in C_0^\infty(\mathbb{R})$.\\
\textit{Case 1}. $m\geq 0$.
\begin{align*}
\|\mu \mathcal{J}_k \mu_2 h \|_{H^m} &\leq \| \mu \mathcal{J}_k \mu_2 h \|_{L^2} + \sum_{j=0}^m \| \mu^{(j)} \mathcal{J}_{k-m+j} \mu_2 h \|_{L^2} \\
&\leq c(\| h\|_{H^{-k}} + \sum_{j=0}^m\|h\|_{H^{m-k-j}})\leq c\|h\|_{H^{m-k}}
\end{align*}
by appealing to the claim or Lemma \ref{L:RL1}.\\
\textit{Case 2}. $m<0$.
Let $\mu_3=1$ on $\text{supp }\mu$, $\mu_3\in C_0^\infty(\mathbb{R}^+)$.
$$\mu \mathcal{J}_k \mu_2 h = \mu \partial_t^{-m} \mathcal{J}_{k-m} \mu_2 h = \mu \partial_t^{-m} \mu_3 \mathcal{J}_{k-m} \mu_2 h$$
and therefore
$$\| \mu \mathcal{J}_k \mu_2 h \|_{H^m} \leq \| \mu_3 \mathcal{J}_{k-m} \mu_2 h \|_{L^2}$$
and we conclude by applying the claim.
Next, we extend to $\alpha=k+i\gamma$ for $k\in\mathbb{Z}_{\geq 0}$, $\gamma \in \mathbb{R}$, as follows. Let $\mu_3=1$ on a neighborhood of $(-\infty,b]$, where $b=\sup \{ \, t \, | \, t\in \text{supp }\mu \, \}$, and let $\mu_4=1$ on a neighborhood of $[a,+\infty)$, where $a=\inf \{ \, t \, | \, t\in \text{supp }\mu_2 \, \}$, so that $\mu_3\mu_4 \in C_0^\infty(\mathbb{R})$. By Lemma \ref{L:RL5},
$$\mu \mathcal{J}_{k+i\gamma} \mu_2 h = \mu \mathcal{J}_{i\gamma} \mu_3 \mu_4 \mathcal{J}_k \mu_2 h$$
By Lemma \ref{L:RL6},
$$\| \mu \mathcal{J}_{k+i\gamma} \mu_2 h \|_{H^m} \leq c \cosh (\tfrac{1}{2}\pi \gamma)\| \mu_3 \mu_4 \mathcal{J}_k \mu_2 h\|_{H^m}$$
which is bounded as above. We can now apply interpolation to complete the proof.
\end{proof}
\subsection{Estimates for the group}
The operator $e^{-t\partial_x^3}$ was defined above in \eqref{E:G} satisfying \eqref{E:Geq}.
\begin{lemma} \label{L:G}
Let $s\in \mathbb{R}$. Then
\begin{enumerate}
\item \label{I:Gst} \textnormal{(Space traces)} $\| e^{-t\partial_x^3}\phi(x)\|_{C(\mathbb{R}_t; H^s_x)} \leq c\| \phi\|_{H^s}$.
\item \label{I:Gtt} \textnormal{(Time traces)} $\| \theta(t) e^{-t\partial_x^3} \phi(x) \|_{C(\mathbb{R}_x; H_t^\frac{s+1}{3})} \leq c\|\phi\|_{H^s}$.
\item \label{I:Gdtt} \textnormal{(Derivative time traces)} $\| \theta(t) \partial_x e^{-t\partial_x^3} \phi(x) \|_{C(\mathbb{R}_x; H_t^\frac{s}{3})} \leq c\|\phi\|_{H^s}$.
\item \label{I:GBse} \textnormal{(Bourgain space estimate)} If $0<b<1$ and $0<\alpha<1$, then $\| \theta(t) e^{-t\partial_x^3} \phi(x) \|_{X_{s,b}\cap D_\alpha} \leq c\|\theta\|_{H^1}\|\phi\|_{H^s}$, where $c$ is independent of $\theta$.
\end{enumerate}
\end{lemma}
\begin{proof}
Parts \eqref{I:Gst} and \eqref{I:GBse} follow from the definition \eqref{E:G}, while \eqref{I:Gtt} and \eqref{I:Gdtt} appear in \cite{KPV91}.
\end{proof}
\subsection{Estimates for the Duhamel inhomogeneous solution operator}
The operator $\mathcal{D}$ was defined above in \eqref{E:Di} satisfying \eqref{E:Dieq}.
Let
$$\|u\|_{Y_{s,b}} = \left( \iint_{\xi,\tau} \langle\tau\rangle^{2s/3}\langle\tau-\xi^3\rangle^{2b} |\hat{u}(\xi,\tau)|^2 \, d\xi \, d\tau \right)^{1/2}$$
\begin{lemma} \label{L:Di}
Let $s\in \mathbb{R}$. Then
\begin{enumerate}
\item \label{I:Dist} \textnormal{(Space traces)} If $0\leq b < \frac{1}{2}$, then $$ \| \theta(t) \mathcal{D}w(x,t) \|_{C(\mathbb{R}_t; H^s_x)} \leq c\|w\|_{X_{s,-b}}.$$
\item \label{I:Ditt} \textnormal{(Time traces)} If $0<b<\frac{1}{2}$, then
\begin{align*}
\hspace{0.3in}&\hspace{-0.3in}
\| \theta(t) \mathcal{D}w(x,t) \|_{C(\mathbb{R}_x; H^\frac{s+1}{3}_t)} \\
&\leq \left\{
\begin{aligned}
& c\|w\|_{X_{s,-b}} && \textnormal{if }-1\leq s\leq \tfrac{1}{2} \\
& c(\|w\|_{X_{s,-b}}+ \|w\|_{Y_{s,-b}}) && \textnormal{for any }s
\end{aligned}
\right.
\end{align*}
If $s<\frac{7}{2}$, then $\|\theta(t)\mathcal{D}w(x,t) \|_{C(\mathbb{R}_x;H_0^\frac{s+1}{3}(\mathbb{R}_t^+))}$ has the same bound.
\item \label{I:Didtt} \textnormal{(Derivative time traces)} If $0<b<\frac{1}{2}$, then
\begin{align*}
\hspace{0.3in}&\hspace{-0.3in}
\| \theta(t) \partial_x \mathcal{D}w(x,t) \|_{C(\mathbb{R}_x; H^\frac{s}{3}_t)} \\
&\leq \left\{
\begin{aligned}
& c\|w\|_{X_{s,-b}} && \textnormal{if }0\leq s\leq \tfrac{3}{2} \\
& c(\|w\|_{X_{s,-b}}+ \|w\|_{Y_{s,-b}}) && \textnormal{for any }s
\end{aligned}
\right.
\end{align*}
If $s<\frac{9}{2}$, then $\|\theta(t)\partial_x\mathcal{D}w(x,t) \|_{C(\mathbb{R}_x;H_0^\frac{s}{3}(\mathbb{R}_t^+))}$ has the same bound.
\item \label{I:DiBse} \textnormal{(Bourgain space estimate)} If $0\leq b < \frac{1}{2}$ and $\alpha\leq 1-b$, then $\displaystyle \|\theta(t) \mathcal{D}w(x,t) \|_{X_{s,b}\cap D_\alpha} \leq c\|w\|_{X_{s,-b}}$.
\end{enumerate}
\end{lemma}
\begin{remark} The need for the $Y_{s,b}$ (time-adapted) Bourgain space arises here in Lemma \ref{L:Di}\eqref{I:Ditt}\eqref{I:Didtt} in order to cover the full interval $-\frac{3}{4}<s<\frac{3}{2}$ ($s\neq \frac{1}{2}$). It is, however, only an intermediate device since the bilinear estimate in Lemma \ref{L:bilinear}\eqref{I:BYX} enables us to avoid carrying out the contraction argument in $Y_{s,b}$.
\end{remark}
\begin{proof}
\eqref{I:DiBse} is Lemma 5.4 in \cite{CK02} (although $Y_b$ has a different definition from ours) and \eqref{I:Dist} is a standard estimate (see the techniques of Lemmas 5.4, 5.5 in \cite{CK02}). \eqref{I:Ditt} is Lemma 5.5 in \cite{CK02} and the proof of \eqref{I:Didtt} is modelled on the proof of Lemma 5.5 in \cite{CK02}.
\end{proof}
\subsection{Estimates for the Duhamel boundary forcing operator class}
The operators $\mathcal{L}_\pm^\lambda$ were defined above in \eqref{E:Dbf} solving \eqref{E:Dbfeq1}, \eqref{E:Dbfeq2}.
\begin{lemma} \label{L:Dbf}
Let $s\in \mathbb{R}$. Then
\begin{enumerate}
\item \label{I:Dbfst} \textnormal{(Space traces)} If $s-\frac{5}{2}<\lambda<s+\frac{1}{2}$, $\lambda<\frac{1}{2}$, and $\textnormal{supp }f\subset[0,1]$, then $\| \mathcal{L}_\pm^\lambda f(x,t) \|_{C(\mathbb{R}_t; H^s_x)} \leq c\|f\|_{H_0^\frac{s+1}{3}(\mathbb{R}^+)}$.
\item \label{I:Dbftt} \textnormal{(Time traces)} If $-2< \lambda <1$, then
$$\|\theta(t) \mathcal{L}_\pm^\lambda f(x,t) \|_{C(\mathbb{R}_x; H_0^\frac{s+1}{3}(\mathbb{R}_t^+))} \leq c\|f\|_{H_0^\frac{s+1}{3}(\mathbb{R}^+)}.$$
\item \label{I:Dbfdtt} \textnormal{(Derivative time traces)} If $-1 < \lambda < 2$, then $$\| \theta(t) \partial_x \mathcal{L}_\pm^\lambda f (x,t) \|_{C(\mathbb{R}_x; H_0^\frac{s}{3}(\mathbb{R}_t^+))} \leq c\|f\|_{H_0^\frac{s+1}{3}(\mathbb{R}_t^+)}.$$
\item \label{I:DbfBse} \textnormal{(Bourgain space estimate)} If $s-1\leq \lambda < s+\frac{1}{2}$, $\lambda<\frac{1}{2}$, $\alpha\leq \frac{s-\lambda+2}{3}$, and $0\leq b<\frac{1}{2}$, then $\|\theta(t) \mathcal{L}_\pm^\lambda f(x,t) \|_{X_{s,b}\cap D_\alpha} \leq c\|f\|_{H_0^\frac{s+1}{3}(\mathbb{R}_t^+)}$.
\end{enumerate}
\end{lemma}
\begin{remark} The restrictions on $s$, $\lambda$ in Lemma \ref{L:Dbf}\eqref{I:Dbfst}\eqref{I:DbfBse} are the primary purpose for introducing the analytic families $\mathcal{L}_\pm^\lambda$ and not simply using $\mathcal{L}^0$ for the right half-line problem and $\mathcal{L}^0$, $\mathcal{L}^{-1}$ for the left half-line problem. Note that by the assumption $\lambda<s+\frac{1}{2}$, we have $\frac{s-\lambda+2}{3}>\frac{1}{2}$, and thus we may take $\frac{1}{2}<\alpha\leq \frac{s-\lambda+2}{3}$, which is needed in order to meet the hypotheses of the bilinear estimates in Lemma \ref{L:bilinear}.
\end{remark}
\begin{proof}
We restrict to $\mathcal{L}_-^\lambda$ for notational convenience. Also, we assume in the proof that $f\in C_0^\infty(\mathbb{R}^+)$. The estimates, of course, extend by density.
To prove (a), we use ($\widehat{\quad}$ denoting the Fourier transform in $x$ alone)
$$( \mathcal{L}_-^\lambda f)\sphat(\xi,t) = (\xi-i0)^{-\lambda} \int_0^t e^{i(t-t')\xi^3} \mathcal{I}_{-\frac{\lambda}{3}-\frac{2}{3}}f(t') \, dt'$$
By the change of variable $\eta=\xi^3$ and the support properties of $ \mathcal{I}_{-\frac{\lambda}{3}-\frac{2}{3}}f(t')$,
\begin{align*}
\|\mathcal{L}_-^\lambda f(\cdot,t)\|_{H^s}^2 &\leq \int_\eta |\eta|^{-\frac{2\lambda}{3}-\frac{2}{3}} \langle \eta \rangle^{\frac{2s}{3}} \left| \int_0^t e^{i(t-t')\eta} \mathcal{I}_{-\frac{\lambda}{3}-\frac{2}{3}} f(t') \, dt' \right|^2 \, d\eta \\
&= \int_\eta |\eta|^{-\frac{2\lambda}{3}-\frac{2}{3}} \langle \eta \rangle^{\frac{2s}{3}} |( \chi_{(-\infty,t)}\mathcal{I}_{-\frac{\lambda}{3}-\frac{2}{3}}f)\sphat(\eta)|^2 \, d\eta
\end{align*}
noting that $\lambda <\frac{1}{2} \Longrightarrow -\frac{2}{3}\lambda-\frac{2}{3} > -1$ and $s-\frac{5}{2}<\lambda<s+\frac{1}{2} \Longrightarrow -1<-\frac{2\lambda}{3}-\frac{2}{3}+\frac{2s}{3}<1$.
By Lemma \ref{CK28} (to replace $|\eta|^{-\frac{2\lambda}{3}-\frac{2}{3}}$ by $\langle \eta \rangle^{-\frac{2\lambda}{3}-\frac{2}{3}}$), Lemma \ref{JK35} (to remove the time cutoff factor $\chi_{(-\infty,t)}$), and Lemma \ref{L:RL1} (to estimate $\mathcal{I}_{-\frac{\lambda}{3}-\frac{2}{3}}$) we obtain the estimate in (a).
To prove (b), we first note that the change of variable $t' \rightarrow t-t'$ shows that
$$(I-\partial_t^2)^\frac{s+1}{6} \int_{-\infty}^t e^{-(t-t')\partial_x^3} h(t') \, dt' = \int_{-\infty}^t e^{-(t-t')\partial_x^3} (I-\partial_t^2)^{\frac{s+1}{6}} h(t') \, dt'$$
and thus (b) is equivalent to
$$\left\| \int_\xi e^{ix\xi}(\xi-i0)^{-\lambda} \int_{-\infty}^t e^{+i(t-t')\xi^3} (\mathcal{I}_{-\frac{\lambda}{3}-\frac{2}{3}}f)(t') \, dt' \, d\xi \right\|_{L_t^2} \leq c\|f\|_{L_t^2}$$
Using that $\chi_{(-\infty,t)}(t') = \frac{1}{2}\text{sgn}\,(t-t')+\frac{1}{2}$,
\begin{align*}
\hspace{0.3in}&\hspace{-0.3in}
\int_\xi e^{ix\xi}(\xi-i0)^{-\lambda} \int_{-\infty}^t e^{+i(t-t')\xi^3} (\mathcal{I}_{-\frac{\lambda}{3}-\frac{2}{3}}f)(t') \, dt' \, d\xi \\
&=
\begin{aligned}[t]
&\int_\tau e^{it\tau} \left[ \lim_{\epsilon \downarrow 0} \int_{|\tau-\xi^3|> \epsilon} e^{ix\xi} \frac{(\tau-i0)^{\frac{\lambda}{3}+\frac{2}{3}}(\xi-i0)^{-\lambda}}{\tau-\xi^3} \, d\xi \right] \hat{f}(\tau) \, d\tau\\
&+\int_\xi e^{ix\xi}(\xi-i0)^{-\lambda} \int_{-\infty}^{+\infty} e^{+i(t-t')\xi^3} (\mathcal{I}_{-\frac{\lambda}{3}-\frac{2}{3}}f)(t') \, dt' \, d\xi
\end{aligned} \\
&= \text{I}+ \text{II}
\end{align*}
We can rewrite II as
$$\text{II} = \int_\xi e^{ix\xi} (\mathcal{I}_{-\frac{\lambda}{3}-\frac{2}{3}}f)\sphat(\xi^3) (\xi-i0)^{-\lambda} e^{it\xi^3} \, d\xi$$
The substitution $\eta =\xi^3$ and \eqref{E:410} gives
$$\text{II} = \int_\eta e^{it\eta} e^{ix\eta^{1/3}} (\eta-i0)^{\frac{\lambda}{3}+\frac{2}{3}} (\eta^{1/3}-i0)^{-\lambda} \eta^{-2/3}\hat{f}(\eta) \, d\eta$$
which is clearly $L^2_t \to L^2_t$ bounded. In addressing term I, it suffices to show that
\begin{equation} \label{AE:434}
\lim_{\epsilon\downarrow 0} \int_{|\tau-\xi^3|>\epsilon} e^{ix\xi} \frac{(\tau-i0)^{\frac{\lambda}{3}+\frac{2}{3}}(\xi-i0)^{-\lambda}}{\tau-\xi^3} \, d\xi
\end{equation}
is bounded independently of $\tau$. Changing variable $\xi \rightarrow \tau^{1/3}\xi$, and using that
$$(\tau^{1/3}\xi-i0)^{-\lambda} = \tau_+^{-\lambda/3}(c_1\xi_+^{-\lambda}+c_2\xi_-^{-\lambda}) + \tau_-^{-\lambda/3}(c_1\xi_-^{-\lambda}+c_2\xi_+^{-\lambda})$$
we get
$$\eqref{AE:434}=\chi_{\tau > 0} \int_\xi e^{i\tau^{1/3}x\xi} \frac{c_1\xi_+^{-\lambda}+c_2\xi_-^{-\lambda}}{1-\xi^3} \, d\xi + \chi_{\tau < 0} \int_\xi e^{i\tau^{1/3}x\xi} \frac{c_1 \xi_-^{-\lambda} + c_2 \xi_+^{-\lambda}}{1-\xi^3} \, d\xi$$
The treatment of both integrals is similar, so we will only consider the first of the two. Let $\psi(\xi)=1$ near $\xi=1$, and $0$ outside $[\frac{1}{2},\frac{3}{2}]$. Then this term breaks into
$$c_1\int_\xi e^{ix\tau^{1/3}\xi} \frac{\psi(\xi)\xi_+^{-\lambda}}{1-\xi^3} \, d\xi + \int_\xi e^{ix\tau^{1/3}\xi} \frac{(1-\psi(\xi)) (c_1 \xi_+^{-\lambda} + c_2 \xi_-^{-\lambda})}{1-\xi^3} \, d\xi = \text{I}_a+\text{I}_b$$
The integrand in term $\text{I}_b$ is an $L^1$ function (provided $\lambda>-2$), so $|\text{I}_b| \leq c$. Term $\text{I}_a$ is
$$c_1 \int_\xi e^{ix\tau^{1/3}\xi} \frac{\psi(\xi) \xi_+^{-\lambda}}{1+\xi+\xi^2} \frac{1}{1-\xi} \, d\xi$$
This is the convolution of a Schwartz class function with a phase-shifted $\text{sgn}\,x$ function, and is therefore bounded uniformly in $x$ and $\tau$; hence I is bounded on $L_t^2$, completing the proof of (b).
Part (c) of the lemma is a corollary of (b) and the fact that $\partial_x \mathcal{L}_\pm^\lambda = \mathcal{L}_\pm^{\lambda-1} \mathcal{I}_{-1/3}$.
To prove (d), first note that by \eqref{E:410}
$$( \mathcal{L}_-^\lambda f)\sphat (\xi,t) = (\xi-i0)^{-\lambda} \int_\tau \frac{ e^{it\tau}-e^{it\xi^3}}{\tau-\xi^3} (\tau-i0)^{\frac{\lambda}{3}+\frac{2}{3}}\hat{f}(\tau) \, d\tau$$
Let $\psi(\tau)\in C^\infty(\mathbb{R})$ such that $\psi(\tau)=1$ for $|\tau|\leq 1$ and $\psi(\tau)=0$ for $|\tau|\geq 2$. Set
$$\hat{u}_1(\xi,t) = (\xi-i0)^{-\lambda} \int_\tau \frac{ e^{it\tau}-e^{it\xi^3}}{\tau-\xi^3} \psi(\tau-\xi^3)(\tau-i0)^{\frac{\lambda}{3}+\frac{2}{3}}\hat{f}(\tau) \, d\tau$$
$$\hat{u}_{2,1}(\xi,t) = (\xi-i0)^{-\lambda} \int_\tau \frac{ e^{it\tau}}{\tau-\xi^3} (1-\psi(\tau-\xi^3))(\tau-i0)^{\frac{\lambda}{3}+\frac{2}{3}}\hat{f}(\tau) \, d\tau$$
$$\hat{u}_{2,2}(\xi,t) = (\xi-i0)^{-\lambda} \int_\tau \frac{ e^{it\xi^3}}{\tau-\xi^3} (1-\psi(\tau-\xi^3))(\tau-i0)^{\frac{\lambda}{3}+\frac{2}{3}}\hat{f}(\tau) \, d\tau$$
so that $\mathcal{L}_-^\lambda f = u_1 + u_{2,1} - u_{2,2}$.
For $-1<\lambda<\frac{1}{2}$, both $(\xi-i0)^{-\lambda}$, $(\tau-i0)^{\frac{\lambda}{3}+\frac{2}{3}}$ are square integrable functions and thus
\begin{equation}
\label{E:411}
\|u_{2,1}\|_{X_{s,b}}^2 \leq c \int_\tau |\tau|^{\frac{2\lambda}{3}+\frac{4}{3}} \left( \int_\xi \frac{|\xi|^{-2\lambda}\langle \xi \rangle^{2s}}{\langle \tau-\xi^3 \rangle^{2-2b}} \, d\xi \right) |\hat{f}(\tau)|^2 \, d\tau
\end{equation}
Since $-1<\lambda<\frac{1}{2}$, we have $-1<-\frac{2\lambda}{3}-\frac{2}{3}<0$ and
\begin{equation}
\label{E:412}
\int_\xi \frac{|\xi|^{-2\lambda}\langle \xi \rangle^{2s}}{\langle \tau-\xi^3 \rangle^{2-2b}} \, d\xi = \int_\eta |\eta|^{-\frac{2\lambda}{3}-\frac{2}{3}} \langle \eta \rangle^{\frac{2s}{3}} \langle \tau-\eta \rangle^{-2+2b} \, d\eta \leq c\langle \tau \rangle^{-\frac{2\lambda}{3}-\frac{2}{3}+\frac{2s}{3}}
\end{equation}
This is obtained by separately considering the cases $|\eta|\leq 1$, $|\tau|\ll|\eta|$, and $|\eta|\ll|\tau|$, and using that $s-1 \leq \lambda < s+\frac{1}{2}$ implies $-1<\frac{2s}{3}-\frac{2\lambda}{3}-\frac{2}{3}\leq 0$. Combining \eqref{E:411} and \eqref{E:412} gives the appropriate bound for $\|u_{2,1}\|_{X_{s,b}}$.
To address the term $u_{2,2}$, we first note that $u_{2,2}(x,t) = \theta(t)e^{-t\partial_x^3}\phi(x)$, where
\begin{equation}
\label{E:414}
\hat{\phi}(\xi) = (\xi-i0)^{-\lambda} \int_\tau \frac{1-\psi(\tau-\xi^3)}{\tau-\xi^3} (\mathcal{I}_{-\frac{\lambda}{3}-\frac{2}{3}}f)\sphat(\tau) \, d\tau
\end{equation}
Taking $h=\mathcal{I}_{-\frac{\lambda}{3}-\frac{2}{3}}f$ (so that $h\in C_0^\infty(\mathbb{R}^+)$ by Lemma \ref{L:RL}), we claim that
\begin{equation}
\label{E:413}
\int_\tau \hat{h}(\tau) \frac{1-\psi(\tau-\xi^3)}{\tau-\xi^3} \, d\tau = \int_\tau \hat{h}(\tau) \beta(\tau-\xi^3) \, d\tau
\end{equation}
where $\beta\in \mathcal{S}(\mathbb{R})$. This follows from the fact that $\text{supp }h\subset [0,+\infty)$ as follows:
Let $ \hat{g_1}(\tau)=\frac{1-\psi(-\tau)}{\tau}$. Then
$$ g_1(t)=\tfrac{i}{2}\text{sgn } t - \tfrac{i}{4\pi}\int_s \text{sgn}(t-s)\hat{\psi}(s) \, ds$$
Let $\alpha\in C^\infty(\mathbb{R})$ be such that $\alpha(t)=1$ for $t>0$ and $\alpha(t)=-1$ for $t<-1$, and set
$$g_2(t)=\tfrac{i}{2}\alpha(t) - \tfrac{i}{4\pi}\int_s \text{sgn}(t-s)\hat{\psi}(s) \, ds$$
To show that $g_2\in \mathcal{S}(\mathbb{R})$, note that by the definition and the fact that $\hat{\psi}\in \mathcal{S}$, we have $g_2\in C^\infty(\mathbb{R})$. If $t>0$, then since $\tfrac{1}{2\pi}\int\hat{\psi}(\tau)\, d\tau = \psi(0)=1$, we have
$$
g_2(t)=\tfrac{i}{2} - \tfrac{i}{4\pi}\int_s \text{sgn}(t-s)\hat{\psi}(s) \, ds = \tfrac{i}{2\pi} \int_{s>t} \hat{\psi}(s) \, ds
$$
If $t<-1$, then likewise we have
$$
g_2(t)=-\tfrac{i}{2} - \tfrac{i}{4\pi}\int_s \text{sgn}(t-s)\hat{\psi}(s) \, ds = \tfrac{i}{2\pi} \int_{s<t} \hat{\psi}(s) \, ds$$
which provide the decay estimates at $\pm\infty$ for $g_2$ and all of its derivatives, establishing that $g_2\in \mathcal{S}(\mathbb{R})$. Since $g_1(t)=g_2(t)$ for $t>0$ and $h\in C_0^\infty(\mathbb{R}^+)$ we have
\begin{align*}
\hspace{0.3in}&\hspace{-0.3in} \int_\tau \hat{h}(\tau) \frac{1-\psi(\tau-\xi^3)}{\tau-\xi^3} \, d\tau = -(\hat{h}*\hat{g_1})(\xi^3) = -2\pi \widehat{hg_1}(\xi^3) \\
& = -2\pi \widehat{hg_2}(\xi^3) = \int_\tau \hat{h}(\tau) \beta(\tau-\xi^3) \, d\tau
\end{align*}
where $\beta(\tau)=-\hat{g_2}(-\tau)$, and $\beta\in\mathcal{S}(\mathbb{R})$ since $g_2\in \mathcal{S}(\mathbb{R})$, thus establishing \eqref{E:413}.
To complete the treatment of $u_{2,2}$, it suffices to show, by Lemma \ref{L:G}\eqref{I:GBse}, that $\|\phi\|_{H^s} \leq c\|f\|_{H^{\frac{s+1}{3}}}$.
By \eqref{E:414}, \eqref{E:413}, Cauchy-Schwarz and the fact that $|\beta(\tau-\xi^3)| \leq c\langle \tau-\xi^3 \rangle^{-N}$ for $N>>0$,
\begin{align*}
\|\phi\|_{H^s}^2 &\leq \int_\xi \langle \xi \rangle^{2s} |\xi|^{-2\lambda} \left( \int_\tau |\beta(\tau-\xi^3)| |\tau|^{\frac{\lambda}{3}+\frac{2}{3}}|\hat{f}(\tau)| \, d\tau \right)^2 \, d\xi \\
&\leq \int_\tau \left( \int_\xi |\xi|^{-2\lambda} \langle \xi \rangle^{2s} \langle \tau-\xi^3 \rangle^{-2N+2} \, d\xi \right) |\tau|^{\frac{2\lambda}{3}+\frac{4}{3}} |\hat{f}(\tau)|^2 \, d\tau
\end{align*}
After the change of variable $\eta=\xi^3$, the inner integral becomes ($\lambda<\frac{1}{2} \Longrightarrow -\frac{2\lambda}{3}-\frac{2}{3}>-1$)
$$ \int_\eta |\eta|^{-\frac{2\lambda}{3}-\frac{2}{3}} \langle \eta \rangle^{\frac{2s}{3}} \langle \tau -\eta \rangle^{-2N+2} \, d\eta \leq c\langle \tau \rangle^{-\frac{2\lambda}{3}-\frac{2}{3}+\frac{2s}{3}}$$
This latter estimate can be obtained by considering the cases $|\eta|\leq 1$, $|\eta|\leq \frac{1}{2}|\tau|$, and $|\eta| \geq \frac{1}{2}|\tau|$ (using $-1\leq \lambda-s \Longrightarrow -\frac{2\lambda}{3}-\frac{2}{3}+\frac{2s}{3}\leq 0$). It remains to treat $u_1$. By the power series expansion for $e^{it(\tau-\xi^3)}$, $u_1(x,t) = \sum_{k=1}^{+\infty} \frac{1}{k!} \theta_k(t) e^{-t\partial_x^3} \phi_k(x)$, where $\theta_k(t) = i^kt^k\theta(t)$ and
$$\hat{\phi}_k(\xi) = (\xi-i0)^{-\lambda} \int_\tau (\tau-\xi^3)^{k-1} \psi(\tau-\xi^3) ( \mathcal{I}_{-\frac{2}{3}-\frac{\lambda}{3}} f)\sphat (\tau) \, d\tau$$
By Lemma \ref{L:G} \eqref{I:GBse}, it suffices to show that $\|\phi_k \|_{H^s} \leq c \|f\|_{H^\frac{s+1}{3}}$. Note that
\begin{align*}
\|\phi_k\|_{H^s}^2 &\leq \int_\xi \langle \xi \rangle^{2s} |\xi|^{-2\lambda} \left( \int_{|\tau-\xi^3| \leq 1} |\tau|^{\frac{\lambda}{3}+\frac{2}{3}} |\hat{f}(\tau)| \, d\tau \right)^2 \, d\xi \\
& \leq \int_\tau \left( \int_{|\tau-\xi^3|\leq 1} \langle \xi \rangle^{2s} |\xi|^{-2\lambda} \, d\xi \right) |\tau|^{\frac{2\lambda}{3}+\frac{4}{3}} |\hat{f}(\tau)|^2 \, d\tau
\end{align*}
The substitution $\eta=\xi^3$ on the inner integral provides the needed bound.
\end{proof}
\subsection{Bilinear estimates}
\begin{lemma} \label{L:bilinear}
\begin{enumerate}
\item \label{I:BXX} For $s>-\frac{3}{4}$, $\exists \; b=b(s)<\frac{1}{2}$ such that $\forall \; \alpha>\frac{1}{2}$, we have
\begin{equation}
\label{AE:230}
\|\partial_x(uv)\|_{X_{s,-b}} \leq c\|u\|_{X_{s,b}\cap D_{\alpha}}\|v\|_{X_{s,b}\cap D_\alpha}
\end{equation}
\item \label{I:BYX} For $-\frac{3}{4}<s<3$, $\exists \; b=b(s)<\frac{1}{2}$ such that $\forall \; \alpha>\frac{1}{2}$, we have
\begin{equation}
\label{AE:3151}
\|\partial_x(uv)\|_{Y_{s,-b}} \leq c\|u\|_{X_{s,b}\cap D_\alpha}\|v\|_{X_{s,b}\cap D_\alpha}
\end{equation}
\end{enumerate}
\end{lemma}
\begin{remark} The purpose of introducing the $D_\alpha$ low frequency correction factor is to validate the bilinear estimates above for $b<\frac{1}{2}$. Recall that the need to take $b<\frac{1}{2}$ arose in Lemma \ref{L:Dbf}\eqref{I:DbfBse}.
\end{remark}
We shall prove Lemma \ref{L:bilinear} by the calculus techniques of \cite{KPV96}. We begin with some elementary integral estimates.
\begin{lemma} \label{calc1}
If $\frac{1}{4}<b<\frac{1}{2}$, then
\begin{equation}
\int_{-\infty}^{+\infty} \frac{dx}{\langle x-\alpha\rangle^{2b}\langle x-\beta\rangle^{2b}} \leq \frac{c}{\langle \alpha-\beta\rangle^{4b-1}}
\end{equation}
\end{lemma}
\begin{proof}
By translation, it suffices to prove the inequality for $\beta=0$. One then treats the cases $|\alpha|\leq 1$ and $|\alpha|\geq 1$ separately, and for the latter case, uses $\langle x - \alpha \rangle^{-2b} \langle x \rangle ^{-2b} \leq |x-\alpha|^{-2b}|x|^{-2b}$ and scaling.
\end{proof}
The following is \cite{KPV96} Lemma 2.3 (2.11), taken verbatim with $2b-\frac{1}{2}=1-l$.
\begin{lemma} \label{calc2}
If $b<\frac{1}{2}$, then
\begin{equation}
\int_{|x|\leq \beta} \frac{dx}{\langle x\rangle^{4b-1}|\alpha-x|^{1/2}} \leq \frac{c(1+\beta)^{2-4b}}{\langle \alpha\rangle^\frac{1}{2}}
\end{equation}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{L:bilinear} \eqref{I:BXX}]
We begin by addressing $-\frac{3}{4}<s<-\frac{1}{2}$. The proof is modelled on the proof for $b>\frac{1}{2}$ given by \cite{KPV96}. Essentially, we only need to replace one of the calculus estimates (\cite {KPV96} Lemma 2.3 (2.8)) in that paper with a suitable version for $b<\frac{1}{2}$ (Lemma \ref{calc1}). Let $\rho=-s$. It suffices to prove
\begin{equation} \label{AE:2100}
\iint_\ast \frac{|\xi| \hat{d}(\xi,\tau)}{\langle \tau-\xi^3 \rangle^b \langle \xi \rangle^\rho} \frac{ \langle \xi_1 \rangle^\rho \hat{g}_1(\xi_1,\tau_1)}{\beta(\xi_1,\tau_1)} \frac{\langle \xi_2 \rangle^\rho\hat{g}_2(\xi_2,\tau_2)}{\beta(\xi_2,\tau_2)} \leq c \|d\|_{L^2} \|g_1\|_{L^2} \|g_2\|_{L^2}
\end{equation}
for $\hat{d}\geq 0$, $\hat{g}_1\geq 0$, $\hat{g}_2\geq 0$, where $\ast$ indicates integration over $\xi$, $\xi_1$, $\xi_2$, subject to the constraint $\xi=\xi_1+\xi_2$, and over $\tau$, $\tau_1$, $\tau_2$, subject to the constraint $\tau=\tau_1+\tau_2$, and where $\beta(\xi_j,\tau_j) = \langle \tau_j-\xi_j^3\rangle^b + \chi_{|\xi_j|\leq 1}\langle \tau_j \rangle^\alpha$ for $j=1,2$. By symmetry, it suffices to consider the case $|\tau_2-\xi_2^3| \leq |\tau_1-\xi_1^3|$. We address \eqref{AE:2100} in pieces by the Cauchy-Schwarz method of \cite{KPV96}. We shall assume that $|\xi_1|\geq 1$ and $|\xi_2|\geq 1$, since otherwise the bound \eqref{AE:2100} reduces to the case $\rho=0$, which has already been established in \cite{CK02}.\\
\noindent \textit{Case 1}. If $|\tau_2-\xi_2^3|\leq |\tau_1-\xi_1^3| \leq |\tau-\xi^3|$, then we shall show
\begin{equation} \label{AE:2101}
\frac{|\xi|}{\langle \tau-\xi^3 \rangle^b \langle \xi \rangle^\rho} \left( \iint_{\tau_1,\xi_1} \frac{\langle \xi_1 \rangle^{2\rho} \langle \xi_2 \rangle^{2\rho}}{ \langle \tau_1-\xi_1^3 \rangle^{2b} \langle \tau_2-\xi_2^3 \rangle^{2b}} \, d\xi_1 \, d\tau_1 \right)^{1/2} \leq c
\end{equation}
To prove this, we note that
\begin{equation} \label{AE:2102}
\tau-\xi^3 + 3\xi\xi_1\xi_2 = (\tau_2-\xi_2^3) + (\tau_1-\xi_1^3)
\end{equation}
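Identity \eqref{AE:2102} is a direct consequence of the convolution constraints $\xi=\xi_1+\xi_2$, $\tau=\tau_1+\tau_2$:
$$\xi^3=(\xi_1+\xi_2)^3=\xi_1^3+\xi_2^3+3\xi_1\xi_2(\xi_1+\xi_2)=\xi_1^3+\xi_2^3+3\xi\xi_1\xi_2$$
so that $(\tau_1-\xi_1^3)+(\tau_2-\xi_2^3)=\tau-\xi^3+3\xi\xi_1\xi_2$.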
By Lemma \ref{calc1} with $\alpha =\xi_1^3$ and $\beta=\xi_1^3+\tau-\xi^3+3\xi\xi_1\xi_2$, we get that \eqref{AE:2101} is bounded by
$$
\frac{|\xi|}{\langle \tau-\xi^3 \rangle^b \langle \xi \rangle^\rho} \left( \int_{\xi_1} \frac{\langle \xi_1 \rangle^{2\rho} \langle \xi_2 \rangle^{2\rho}}{\langle \tau-\xi^3 + 3\xi\xi_1\xi_2 \rangle^{4b-1}} \, d\xi_1 \right)^{1/2}
$$
By \eqref{AE:2102}, $|\xi\xi_1\xi_2|\leq |\tau-\xi^3|$. Substituting $|\xi_1\xi_2| \leq |\tau-\xi^3| |\xi|^{-1}$ into the above gives that it is bounded by
\begin{equation} \label{AE:2104}
\frac{|\xi|^{1-\rho} \langle \tau-\xi^3 \rangle^{\rho-b}}{\langle \xi \rangle^\rho} \left( \int_{\xi_1} \frac{d\xi_1}{\langle \tau-\xi^3+3\xi\xi_1\xi_2 \rangle^{4b-1}} \right)^{1/2}
\end{equation}
Let $u=\tau-\xi^3 + 3\xi\xi_1\xi_2$, so that, by \eqref{AE:2102}, we have $|u| \leq 2|\tau-\xi^3|$. The corresponding differential is
$$d\xi_1 = \frac{cdu}{|\xi|^{1/2} |u-(\tau-\frac{1}{4}\xi^3)|^{1/2}}$$
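One can compute the constant explicitly by completing the square: since $\xi_2=\xi-\xi_1$,
$$u-(\tau-\tfrac{1}{4}\xi^3) = -\tfrac{3}{4}\xi^3+3\xi^2\xi_1-3\xi\xi_1^2 = -3\xi(\xi_1-\tfrac{1}{2}\xi)^2$$
so that $|u-(\tau-\frac{1}{4}\xi^3)|^{1/2}=\sqrt{3}\,|\xi|^{1/2}|\xi_1-\frac{1}{2}\xi|$, while $|du|=3|\xi||\xi-2\xi_1|\,d\xi_1$, giving $c=\frac{1}{2\sqrt{3}}$.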
Substituting into \eqref{AE:2104}, we obtain that \eqref{AE:2104} is bounded by
$$
\frac{ |\xi|^{\frac{3}{4}-\rho} \langle \tau-\xi^3 \rangle^{\rho-b}}{\langle \xi \rangle^\rho} \left( \int_{|u|\leq 2|\tau-\xi^3|} \frac{du}{\langle u \rangle^{4b-1} |u-(\tau-\frac{1}{4}\xi^3)|^{1/2}} \right)^{1/2}
$$
By Lemma \ref{calc2}, this is controlled by
$$\frac{\langle \tau-\xi^3 \rangle^{\rho+1-3b}}{\langle \xi \rangle^{2\rho-\frac{3}{4}}\langle \tau-\frac{1}{4}\xi^3 \rangle^{1/4}}$$
This expression is bounded, provided $b\geq \frac{1}{9}\rho + \frac{5}{12}$.\\
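To see this threshold, consider the two extreme regimes. If $\langle \tau-\xi^3 \rangle \sim \langle \tau-\frac{1}{4}\xi^3 \rangle$, then boundedness requires
$$\rho+1-3b-\tfrac{1}{4}\leq 0 \iff b\geq \tfrac{1}{3}\rho+\tfrac{1}{4}$$
while if $\tau\sim\frac{1}{4}\xi^3$, so that $\langle \tau-\xi^3 \rangle\sim\langle\xi\rangle^3$ and $\langle \tau-\frac{1}{4}\xi^3\rangle\sim 1$, it requires
$$3(\rho+1-3b)-(2\rho-\tfrac{3}{4})\leq 0 \iff b\geq \tfrac{1}{9}\rho+\tfrac{5}{12}$$
The latter condition is the stronger one precisely when $\rho\leq\frac{3}{4}$, and the intermediate regimes are handled similarly.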
\noindent \textit{Case 2}. $|\tau_2-\xi_2^3| \leq |\tau_1-\xi_1^3|$, $|\tau-\xi^3|\leq |\tau_1-\xi_1^3|$. In this case, we shall prove the bound
\begin{equation} \label{AE:2300}
\frac{1}{\langle \tau_1-\xi_1^3 \rangle^b} \left( \iint_{\xi,\tau} \frac{|\xi|^{2-2\rho} |\xi\xi_1\xi_2|^{2\rho}}{\langle \xi \rangle^{2\rho} \langle \tau-\xi^3 \rangle^{2b} \langle \tau_2-\xi_2^3 \rangle^{2b}} \, d\xi \, d\tau \right)^{1/2} \leq c
\end{equation}
Since
\begin{equation} \label{AE:2301}
(\tau_1-\xi_1^3)+(\tau_2-\xi_2^3)-(\tau-\xi^3) = 3\xi\xi_1\xi_2
\end{equation}
we have, by Lemma \ref{calc1} with $\alpha=\xi^3$, $\beta=\xi^3+(\tau_1-\xi_1^3)-3\xi\xi_1\xi_2$, that \eqref{AE:2300} is bounded by
\begin{equation} \label{AE:2302}
\frac{1}{\langle \tau_1-\xi_1^3 \rangle^b} \left( \int_\xi \frac{ \langle \xi \rangle^{2-4\rho} |\xi\xi_1\xi_2|^{2\rho}}{\langle \tau_1-\xi_1^3 -3\xi\xi_1\xi_2\rangle^{4b-1}} \, d\xi \right)^{1/2}
\end{equation}
We address \eqref{AE:2302} in cases. Cases 2A and 2B differ only in the bound used for $\langle \xi \rangle^{2-4\rho}$, while Case 2C is treated somewhat differently. \\
\noindent \textit{Case 2A}. $|\xi_1|\sim |\xi|$ or $|\xi_1|<<|\xi|$. Here, we use $\langle \xi \rangle^{2-4\rho} \leq \langle \xi_1 \rangle^{2-4\rho}$.\\
\noindent \textit{Case 2B}. $|\xi|<<|\xi_1|$ and [$|\tau_1|>>\frac{1}{4}|\xi_1|^3$ or $|\tau_1|<<\frac{1}{4}|\xi_1|^3$]. Here, we use $\langle \xi \rangle^{2-4\rho} \leq 1$.\\
\noindent \textit{Cases 2A and 2B}. In the setting of Case 2A, let $g(\xi_1)=\langle \xi_1\rangle^{1-2\rho}$, and in the setting of Case 2B, let $g(\xi_1) = 1$. Since by \eqref{AE:2301}, $|\xi\xi_1\xi_2|\leq |\tau_1-\xi_1^3|$, \eqref{AE:2302} is bounded by
\begin{equation} \label{AE:2305}
g(\xi_1) \langle \tau_1-\xi_1^3\rangle^{\rho-b} \left( \int_\xi \frac{d\xi}{\langle \tau_1-\xi_1^3-3\xi\xi_1\xi_2 \rangle^{4b-1}} \right)^{1/2}
\end{equation}
Set $u=\tau_1-\xi_1^3-3\xi\xi_1\xi_2$. Then
$$du=3\xi_1(\xi_1-2\xi)d\xi=c|\xi_1|^{1/2}|u-(\tau_1-\tfrac{1}{4}\xi_1^3)|^{1/2}d\xi$$
which, upon substituting in \eqref{AE:2305}, gives that it is bounded by
$$ \frac{g(\xi_1)\langle \tau_1-\xi_1^3 \rangle^{\rho-b}}{|\xi_1|^{1/4}} \left( \int_{|u|\leq 2|\tau_1-\xi_1^3|} \frac{du}{\langle u \rangle^{4b-1}|u-(\tau_1-\frac{1}{4}\xi_1^3)|^{1/2}} \right)^{1/2}
$$
By Lemma \ref{calc2}, this is controlled by
\begin{equation} \label{AE:2307}
\frac{g(\xi_1) \langle \tau_1-\xi_1^3 \rangle^{\rho+1-3b}}{|\xi_1|^{1/4}\langle \tau_1-\frac{1}{4}\xi_1^3 \rangle^{1/4}}
\end{equation}
In Case 2A, $g(\xi_1) = \langle \xi_1 \rangle^{1-2\rho}$, and \eqref{AE:2307} becomes
$$\frac{\langle \tau_1-\xi_1^3 \rangle^{\rho+1-3b}}{\langle \xi_1 \rangle^{2\rho-\frac{3}{4}} \langle \tau_1-\frac{1}{4}\xi_1^3 \rangle^{1/4}}$$
which is bounded provided $b>\frac{1}{9}\rho+\frac{5}{12}$. In Case 2B, $g(\xi_1) = 1$, and \eqref{AE:2307} becomes
$$ \frac{ \langle \tau_1-\xi_1^3 \rangle^{\rho+1-3b}}{\langle \xi_1 \rangle^{1/4}\langle \tau_1-\frac{1}{4}\xi_1^3 \rangle^{1/4}}$$
which is bounded (under the restrictions of Case 2B) provided $b\geq \frac{1}{3}\rho + \frac{1}{4}$.\\
\noindent \textit{Case 2C}. $|\xi|<<|\xi_1|$ and $|\tau_1|\sim \frac{1}{4}|\xi_1|^3$. Here, we return to \eqref{AE:2302} and use that $|\tau_1|\sim \frac{1}{4}|\xi_1|^3$ and $3|\xi\xi_1\xi_2| \leq \frac{1}{4}|\xi_1|^3$ implies $\langle \tau_1-\xi_1^3 -3\xi\xi_1\xi_2 \rangle \sim \langle \xi_1 \rangle^3$. Substituting into \eqref{AE:2302}, we find that it is bounded by
$$ \langle \xi_1\rangle^{3\rho-9b+\frac{3}{2}} \left( \int_{|\xi|\leq |\xi_1|} \langle \xi \rangle^{2-4\rho} \, d\xi \right)^{1/2} \leq c\langle \xi_1 \rangle^{\rho-9b+3}$$
which is bounded provided $b\geq \frac{1}{9}\rho+\frac{1}{3}$.
We have completed the proof for $-\frac{3}{4}<s<-\frac{1}{2}$, and we shall now extend this result to all $s>-\frac{3}{4}$ by interpolation. From the above, we have \eqref{AE:230} for $s=-\frac{5}{8}$ and some $b<\frac{1}{2}$. As a consequence,
\begin{align*}
\hspace{0.3in}&\hspace{-0.3in} \| \partial_x (uv) \|_{X_{\frac{3}{8},-b}} \\
&\leq \| \partial_x (uv) \|_{X_{-\frac{5}{8},-b}} + \| \partial_x [ (\partial_x u) v] \|_{X_{-\frac{5}{8},-b}} + \| \partial_x [ u (\partial_xv) ] \|_{X_{-\frac{5}{8},-b}} \\
& \leq (\| u \|_{X_{-\frac{5}{8},b}\cap D_\alpha} + \| \partial_x u \|_{X_{-\frac{5}{8},b}\cap D_\alpha}) (\| v \|_{X_{-\frac{5}{8},b}\cap D_\alpha} + \| \partial_x v \|_{X_{-\frac{5}{8},b}\cap D_\alpha})\\
& \leq \| u \|_{X_{\frac{3}{8},b}\cap D_\alpha} \| v \|_{X_{\frac{3}{8},b}\cap D_\alpha}
\end{align*}
thus establishing \eqref{AE:230} for $s=\frac{3}{8}$. Interpolating between the cases $s=-\frac{5}{8}$ and $s=\frac{3}{8}$, and combining with the direct result for $-\frac{3}{4}<s<-\frac{1}{2}$, we obtain \eqref{AE:230} for $-\frac{3}{4} < s \leq \frac{3}{8}$. Iterating the argument above extends \eqref{AE:230} to all $s>-\frac{3}{4}$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{L:bilinear}\eqref{I:BYX}]
First we address the range $-\frac{3}{4}<s<-\frac{1}{2}$. Let $\rho=-s$. Note that by the $X_{s,b}$ bilinear estimate Lemma \ref{L:bilinear}\eqref{I:BXX}, it suffices to prove the lemma under the assumption $|\tau| \leq \frac{1}{8}|\xi|^3$. Constant multiples are routinely omitted from the calculation. \\
\noindent \textit{Step 1}. If $|\xi_1|\geq 1$, $|\xi_2|\geq 1$, $|\tau_2-\xi_2^3|\leq |\tau_1-\xi_1^3|$, $|\tau_1-\xi_1^3|\leq 1000|\tau-\xi^3|$, and $|\tau|\leq \frac{1}{8}|\xi|^3$, then the expression
\begin{equation} \label{AE:200}
\frac{|\xi|}{\langle \tau \rangle^\frac{\rho}{3} \langle \xi \rangle^{3b}} \left( \int_{\xi_1} \int_{\tau_1} \frac{|\xi_1|^{2\rho} |\xi_2|^{2\rho}}{\langle \tau_1-\xi_1^3 \rangle^{2b} \langle \tau_2-\xi_2^3 \rangle^{2b}} \, d\tau_1 \, d\xi_1 \right)^{1/2}
\end{equation}
is bounded.
\noindent \textit{Proof.} Applying Lemma \ref{calc1}, using $\tau_2-\xi_2^3=(\tau-\xi^3)-(\tau_1-\xi_1^3)+3\xi\xi_1\xi_2$, we get that \eqref{AE:200} is bounded by
$$\frac{|\xi|}{\langle \tau \rangle^\frac{\rho}{3} \langle \xi \rangle^{3b}} \left( \int_{\xi_1} \frac{|\xi_1|^{2\rho} |\xi_2|^{2\rho}}{\langle \tau-\xi^3+3\xi\xi_1\xi_2 \rangle^{4b-1}} \, d\xi_1 \right)^{1/2}
$$
Using that $|\xi_1||\xi_2| \leq \dfrac{|\tau-\xi^3|}{|\xi|}$, this is controlled by
\begin{equation} \label{AE:202}
\frac{|\xi|^{1-\rho}|\tau-\xi^3|^\rho}{\langle \xi \rangle^{3b} \langle \tau \rangle^{\rho/3}} \left( \int_{\xi_1} \frac{1}{\langle \tau-\xi^3+3\xi\xi_1\xi_2 \rangle^{4b-1}} \, d\xi_1 \right)^{1/2}
\end{equation}
Set
$$
u=\tau-\xi^3+3\xi\xi_1(\xi-\xi_1)
$$
so that $-3\xi(\xi_1-\frac{1}{2}\xi)^2=u-(\tau-\frac{1}{4}\xi^3)$, and thus
$$\tfrac{\sqrt{3}}{2}|\xi||2\xi_1-\xi| = |\xi|^{1/2}|u-(\tau-\tfrac{1}{4}\xi^3)|^{1/2}$$
Also, $du=3\xi(\xi-2\xi_1)\, d\xi_1$. It follows from the hypotheses of this step that the range of integration is a subset of $|u| \leq |\tau-\xi^3|$. With this substitution, we see that \eqref{AE:202} is bounded by
$$ \frac{|\xi|^{1-\rho}|\tau-\xi^3|^\rho}{\langle \xi \rangle^{3b} \langle \tau \rangle^{\rho/3}} \left( \int_{|u|\leq |\tau-\xi^3|} \frac{du}{\langle u \rangle^{4b-1} |\xi|^{1/2} |u-(\tau-\tfrac{1}{4}\xi^3)|^{1/2}} \right)^{1/2}
$$
By Lemma \ref{calc2}, this is controlled by
$$\frac{|\xi|^{\frac{3}{4}-\rho}|\tau-\xi^3|^\rho \langle \tau-\xi^3 \rangle^{1-2b}}{\langle \xi \rangle^{3b} \langle \tau \rangle^{\rho/3} \langle \tau-\frac{1}{4}\xi^3 \rangle^{1/4}}
$$
If $|\tau|\leq \frac{1}{8}|\xi|^3$, then this reduces to
$$\frac{|\xi|^{\frac{3}{4}-\rho} \langle\xi\rangle^{3\rho} \langle \xi \rangle^{3(1-2b)}}{\langle \xi \rangle^{3b} \langle \xi \rangle^{3/4}}$$
and the exponent $2\rho-9b+3\leq 0$ provided $b\geq \frac{2}{9}\rho+\frac{1}{3}$.\\
\noindent \textit{Step 2}. If $|\xi_1|\geq 1$, $|\xi_2|\geq 1$, $|\tau_2-\xi_2^3| \leq |\tau_1-\xi_1^3|$, $|\tau-\xi^3| \leq \frac{1}{1000}|\tau_1-\xi_1^3|$, and $|\tau|\leq \frac{1}{8}|\xi|^3$, then
\begin{equation} \label{AE:204}
\frac{|\xi_1|^\rho}{\langle \tau_1-\xi_1^3 \rangle^b} \left( \int_\xi \int_\tau \frac{|\xi|^2 |\xi_2|^{2\rho}}{\langle \tau \rangle^{2\rho/3} \langle \xi \rangle^{6b} \langle \tau_2-\xi_2^3 \rangle^{2b}} \, d\xi \, d\tau \right)^{1/2}
\end{equation}
is bounded. \\
\textit{Proof.} Since $|\tau|\leq|\xi|^3$, we have $\dfrac{1}{\langle \xi \rangle^{6b-2\rho}} \leq \dfrac{1}{\langle \tau \rangle^{2b-\frac{2\rho}{3}}}$, and thus \eqref{AE:204} is bounded by
$$ \frac{|\xi_1|^\rho}{\langle \tau_1-\xi_1^3 \rangle^b} \left( \int_\xi \int_\tau \frac{|\xi|^2 |\xi_2|^{2\rho}}{\langle \xi \rangle^{2\rho} \langle \tau \rangle^{2b} \langle \tau_2-\xi_2^3 \rangle^{2b}} \, d\xi \, d\tau \right)^{1/2}
$$
Carrying out the $\tau$ integral and applying Lemma \ref{calc1}, we see that this is controlled by
\begin{equation}\label{AE:210}
\frac{|\xi_1|^\rho}{\langle \tau_1-\xi_1^3 \rangle^b} \left( \int_\xi \frac{|\xi|^2 |\xi_2|^{2\rho}}{\langle \xi \rangle^{2\rho} \langle \tau_1-\xi_1^3 -3\xi\xi_1\xi_2 + \xi^3 \rangle^{4b-1}} \, d\xi \right)^{1/2}
\end{equation}
\textit{Case 1.} $3|\xi\xi_1\xi_2| \leq \frac{1}{2}|\tau_1-\xi_1^3|$. \\ Since $|\tau-\xi^3|<<|\tau_1-\xi_1^3|$ and $|\tau|\leq \frac{1}{8}|\xi|^3$, we have $|\xi|^3 << |\tau_1-\xi_1^3|$, giving $$\langle \tau_1-\xi_1^3-3\xi\xi_1\xi_2 + \xi^3 \rangle \sim \langle \tau_1-\xi_1^3 \rangle$$ and thus \eqref{AE:210} is bounded by
$$ \frac{|\xi_1|^\rho}{\langle \tau_1-\xi_1^3 \rangle^{3b-\frac{1}{2}}} \left( \int_\xi \frac{ |\xi|^2 |\xi_2|^{2\rho}}{\langle \xi \rangle^{2\rho}} \, d\xi \right)^{1/2}
$$
Using that $|\xi\xi_1\xi_2| \leq |\tau_1-\xi_1^3|$, this is controlled by
\begin{equation} \label{AE:207}
\frac{|\tau_1-\xi_1^3|^\rho}{\langle \tau_1-\xi_1^3 \rangle^{3b-\frac{1}{2}}} \left( \int_\xi \frac{ |\xi|^{2-2\rho}}{\langle \xi \rangle^{2\rho}} \, d\xi \right)^{1/2}
\end{equation}
Carrying out the $\xi$ integral over the region $|\xi|\leq |\tau_1-\xi_1^3|^{1/3}$ gives
\begin{equation*}
\int_\xi \frac{ |\xi|^{2-2\rho}}{\langle \xi \rangle^{2\rho}} \, d\xi \leq \langle \tau_1-\xi_1^3 \rangle^{1-\frac{4}{3}\rho}
\end{equation*}
and thus \eqref{AE:207} is bounded by
$$
\langle \tau_1-\xi_1^3 \rangle^{1+\frac{1}{3}\rho-3b}
$$
which is bounded provided $b\geq \frac{5}{12}$.\\
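The numerology here: $1+\frac{1}{3}\rho-3b\leq 0$ exactly when
$$b\geq \tfrac{1}{9}\rho+\tfrac{1}{3}$$
and since $\rho<\frac{3}{4}$, we have $\frac{1}{9}\rho+\frac{1}{3}<\frac{5}{12}$, so $b\geq\frac{5}{12}$ suffices.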
\textit{Case 2}. $3|\xi\xi_1\xi_2| \geq \frac{1}{2} |\tau_1-\xi_1^3|$. \\
In this case, $|\xi|\leq \frac{1}{10}|\xi_1|$. Indeed, if $|\xi_1|\leq 10|\xi|$, then $3|\xi\xi_1\xi_2| \leq 330 |\xi|^3 \leq \frac{1}{3}|\tau_1-\xi_1^3|$. Let $u=\tau_1-\xi_1^3-3\xi_1(\xi-\xi_1)\xi + \xi^3$, $du=3\xi_1(-2\xi+\xi_1)+3\xi^2$. Now $3|\xi|^2 \leq \frac{3}{100}|\xi_1|^2$ and $3|\xi_1(-2\xi+\xi_1)| \geq \frac{12}{5}|\xi_1|^2$, and thus $3|\xi|^2 << 3|\xi_1(2\xi-\xi_1)|$. We see that \eqref{AE:210} is bounded by
$$
\frac{|\xi_1|^\rho}{\langle \tau_1 -\xi_1^3 \rangle^b} \left( \int_\xi \frac{|\xi|^2 |\xi_2|^{2\rho} |3 \xi_1(\xi_1-2\xi) + 3\xi^2|}{\langle \xi \rangle^{2\rho} \langle \tau_1-\xi_1^3 - 3\xi\xi_1\xi_2 + \xi^3 \rangle^{4b-1} |\xi_1(\xi_1-2\xi)|} \, d\xi \right)^{1/2}
$$
Using $|\xi_2|\sim |\xi_1|$, and $|\xi_1(\xi_1-2\xi)| \sim |\xi_1|^2$, this is controlled by
$$ \frac{|\xi_1|^{2\rho-1}}{\langle \tau_1-\xi_1^3 \rangle^b} \left( \int_{|u|\leq |\tau_1-\xi_1^3|} \frac{|\xi|^{2-2\rho}}{\langle u \rangle^{4b-1}} \, du \right)^{1/2}
$$
Using that $|\xi| \leq \dfrac{|\tau_1-\xi_1^3|}{|\xi_1|^2}$, this is controlled by
$$ \frac{|\xi_1|^{2\rho-1}|\tau_1-\xi_1^3|^{1-\rho}}{|\xi_1|^{2(1-\rho)}\langle \tau_1-\xi_1^3 \rangle^b} \left( \int_{|u|\leq |\tau_1-\xi_1^3|} \frac{du}{\langle u \rangle^{4b-1}} \right)^{1/2}
$$
Carrying out the $u$ integral, this is bounded by
$$ \frac{|\xi_1|^{4\rho-3}}{\langle \tau_1-\xi_1^3 \rangle^{\rho+3b-2}}
$$
which is bounded provided $b\geq \frac{2}{3}-\frac{1}{3}\rho$.\\
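One can check this directly: $|\xi_1|^{4\rho-3}$ is bounded since $\rho<\frac{3}{4}$, and the exponent of $\langle \tau_1-\xi_1^3 \rangle$ is nonnegative exactly when
$$\rho+3b-2\geq 0 \iff b\geq \tfrac{2}{3}-\tfrac{1}{3}\rho$$
Note that $\frac{2}{3}-\frac{1}{3}\rho<\frac{1}{2}$ forces $\rho>\frac{1}{2}$, consistent with the range $-\frac{3}{4}<s<-\frac{1}{2}$ under consideration.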
Now we address the range $\frac{3}{2}<s<3$.
It suffices to show
$$\iint_\ast \frac{|\xi| \langle \tau \rangle^{s/3} \hat{d}(\xi,\tau) }{\langle \tau -\xi^3 \rangle^b} \frac{\hat{g}_1(\xi_1,\tau_1)}{\langle \tau_1-\xi_1^3 \rangle^b \langle \xi_1 \rangle^s} \frac{\hat{g}_2(\xi_2,\tau_2)}{\langle \tau_2 -\xi_2^3 \rangle^b \langle \xi_2 \rangle^s} \leq c\|d\|_{L^2} \|g_1\|_{L^2} \|g_2\|_{L^2}$$
for $\hat{d}\geq 0$, $\hat{g_1}\geq 0$, $\hat{g}_2\geq 0$, where $\ast$ indicates integration over $\xi$, $\xi_1$, $\xi_2$, subject to the constraint $\xi=\xi_1+\xi_2$, and over $\tau$, $\tau_1$, $\tau_2$, subject to the constraint $\tau=\tau_1+\tau_2$, under the assumption $|\tau|>>|\xi|^3$, since, for $s>0$ in the region $|\tau|\leq 2|\xi|^3$, $\|\partial_x(uv)\|_{Y_{s,-b}} \leq c\|\partial_x(uv)\|_{X_{s,-b}}$. We shall show
\begin{equation} \label{AE:2200}
\frac{|\xi| \langle \tau \rangle^{s/3}}{\langle \tau-\xi^3 \rangle^b} \left( \int_{\xi_1} \int_{\tau_1} \frac{ d\xi_1 \, d\tau_1}{\langle \tau_1-\xi_1^3 \rangle^{2b} \langle \xi_1 \rangle^{2s} \langle \tau_2-\xi_2^3 \rangle^{2b} \langle \xi_2 \rangle^{2s}} \right)^{1/2} \leq c
\end{equation}
Since $\tau-\xi^3 +3\xi\xi_1\xi_2 = (\tau_2-\xi_2^3)+(\tau_1-\xi_1^3)$, by Lemma \ref{calc1}, we have that \eqref{AE:2200} is bounded by
\begin{equation} \label{AE:2201}
|\xi|\langle \tau \rangle^{\frac{s}{3}-b} \left( \int_{\xi_1} \frac{1}{\langle \xi_1 \rangle^{2s} \langle \xi_2 \rangle^{2s}} \frac{1}{\langle \tau-\xi^3 + 3 \xi\xi_1\xi_2 \rangle^{4b-1}} \, d\xi_1 \right)^{1/2}
\end{equation}
\noindent \textit{Case 1}. $|\xi_1|<< |\xi_2|$ or $|\xi_2|<<|\xi_1|$. In this case, $3|\xi\xi_1\xi_2| << |\xi|^3$, which combined with $|\xi|^3<<|\tau|$, implies $\langle \tau-\xi^3 + 3\xi\xi_1\xi_2 \rangle \sim \langle \tau \rangle$. Thus
\begin{align}
\eqref{AE:2201} &\leq \langle \tau \rangle^{\frac{s}{3}-3b+\frac{1}{2}} \left( \int_{\xi_1} \frac{|\xi|^2 \, d\xi_1}{\langle \xi_1 \rangle^{2s} \langle \xi_2 \rangle^{2s}} \right)^{1/2} \notag \\
&\leq \langle \tau \rangle^{\frac{s}{3}-3b+\frac{1}{2}} \left( \int_{\xi_1} \frac{ d\xi_1}{\langle \xi_1 \rangle^{2s-2} \langle \xi_2 \rangle^{2s-2}} \right)^{1/2} \label{AE:2202}
\end{align}
Provided $s>\frac{3}{2}$ and $b>\frac{1}{9}s+\frac{1}{6}$, \eqref{AE:2202} is bounded.\\
\noindent \textit{Case 2}. $|\xi_1|\sim |\xi_2|$. \\
\noindent \textit{Case 2A}. $3|\xi\xi_1\xi_2|\sim |\tau|$ or $3|\xi\xi_1\xi_2| >> |\tau|$. Then we ignore $\langle \tau -\xi^3 +3\xi\xi_1\xi_2 \rangle^{4b-1}$ in \eqref{AE:2201} and bound as:
$$\left( \int_{\xi_1} \frac{ |\xi|^2 \langle \tau \rangle^{\frac{2s}{3}-2b}}{\langle \xi_1 \rangle^{2s} \langle \xi_2 \rangle^{2s}} \, d\xi_1 \right)^{1/2}
$$
Using that $\langle \tau \rangle \leq c \langle \xi \rangle \langle \xi_1 \rangle \langle \xi_2 \rangle$, $\langle \xi \rangle \leq \langle \xi_1 \rangle+\langle \xi_2 \rangle$, and $\langle \xi_1 \rangle \sim \langle \xi_2 \rangle$, this is controlled by
$$\left( \int \frac{1}{\langle \xi_1 \rangle^{2s+6b-2}} \, d\xi_1 \right)^{1/2}
$$
Thus, we need $2s+6b-2>1$, which is automatically satisfied if $s>\frac{3}{2}$ and $b>0$.\\
\noindent \textit{Case 2B}. $3|\xi\xi_1\xi_2| << |\tau|$. Here, we just follow the method of Case 1.
Thus we have established estimate \eqref{AE:3151} for $-\frac{3}{4}<s<-\frac{1}{2}$ and for $\frac{3}{2}<s<3$. The result in the full range $-\frac{3}{4}<s<3$ follows by interpolation.
\end{proof}
\section{The left half-line problem}
\label{S:left}
We now carry out the proof of Theorem \ref{T:main}\eqref{I:left}. We first return to the linearized version of \eqref{SE:101}. Consider $-1< \lambda_1, \lambda_2 < 1$, $h_1,h_2 \in C_0^\infty(\mathbb{R}^+)$, and let
$$u(x,t) = \mathcal{L}_-^{\lambda_1}h_1(x,t) + \mathcal{L}_-^{\lambda_2}h_2(x,t)$$
By Lemma \ref{L:decayest}, $u(x,t)$ is continuous in $x$ at $x=0$ and by Lemma \ref{L:valueatzero},
$$u(0,t) = 2\sin(\tfrac{\pi}{3}\lambda_1 + \tfrac{\pi}{6})h_1(t) + 2 \sin(\tfrac{\pi}{3}\lambda_2+\tfrac{\pi}{6})h_2(t)$$
By the definition \eqref{AE:2020},
$$\partial_x u(x,t) = \mathcal{L}_-^{\lambda_1-1}\mathcal{I}_{-1/3}h_1(x,t) + \mathcal{L}_-^{\lambda_2-1}\mathcal{I}_{-1/3}h_2(x,t)$$
By Lemma \ref{L:decayest}, $\partial_xu(x,t)$ is continuous in $x$ at $x=0$ and by Lemma \ref{L:valueatzero},
$$\partial_xu(0,t) = 2\sin(\tfrac{\pi}{3}\lambda_1 - \tfrac{\pi}{6})\mathcal{I}_{-1/3}h_1(t) + 2 \sin(\tfrac{\pi}{3}\lambda_2-\tfrac{\pi}{6})\mathcal{I}_{-1/3}h_2(t)$$
Applying $\mathcal{I}_{1/3}$ to this identity and combining with the formula for $u(0,t)$,
$$\begin{bmatrix} u(0,t) \\ \mathcal{I}_{1/3}[\partial_x u(0,\cdot)](t) \end{bmatrix} = 2 \begin{bmatrix} \sin(\tfrac{\pi}{3}\lambda_1 + \tfrac{\pi}{6}) & \sin(\tfrac{\pi}{3}\lambda_2 + \tfrac{\pi}{6}) \\ \sin(\tfrac{\pi}{3}\lambda_1 - \tfrac{\pi}{6}) & \sin(\tfrac{\pi}{3}\lambda_2 - \tfrac{\pi}{6}) \end{bmatrix} \begin{bmatrix} h_1(t) \\ h_2(t) \end{bmatrix}$$
By the product-to-sum identities, the $2\times 2$ matrix appearing here (without the prefactor $2$) has determinant
$$\sin(\tfrac{\pi}{3}\lambda_1 + \tfrac{\pi}{6})\sin(\tfrac{\pi}{3}\lambda_2 - \tfrac{\pi}{6}) - \sin(\tfrac{\pi}{3}\lambda_2 + \tfrac{\pi}{6})\sin(\tfrac{\pi}{3}\lambda_1 - \tfrac{\pi}{6}) = \tfrac{\sqrt{3}}{2}\sin \tfrac{\pi}{3}(\lambda_2-\lambda_1)$$
which is $\neq 0$ provided $\lambda_1-\lambda_2 \neq 3n$ for $n\in \mathbb{Z}$. Thus, for any $-1<\lambda_1, \lambda_2 <1$, with $\lambda_1\neq \lambda_2$, if we are given $g_1(t)$, $g_2(t)$ and we set
$$
\begin{bmatrix}
h_1(t) \\ h_2(t)
\end{bmatrix}
= A
\begin{bmatrix}
g_1(t) \\ \mathcal{I}_{1/3}g_2(t)
\end{bmatrix}
$$
where
$$A= \frac{1}{\sqrt 3\sin [ \frac{\pi}{3}(\lambda_2-\lambda_1)]}
\begin{bmatrix}
\sin(\tfrac{\pi}{3}\lambda_2 - \tfrac{\pi}{6}) & -\sin(\tfrac{\pi}{3}\lambda_2 + \tfrac{\pi}{6}) \\
-\sin(\tfrac{\pi}{3}\lambda_1 - \tfrac{\pi}{6}) & \sin(\tfrac{\pi}{3}\lambda_1 + \tfrac{\pi}{6})
\end{bmatrix}$$
is the inverse of twice the above matrix,
then $u(x,t)$ solves
$$
\left\{
\begin{aligned}
& \partial_t u + \partial_x^3u = 0 && \text{for }x<0 \\
&u(x,0) = 0 \\
&u(0,t) = g_1(t) \\
&\partial_x u(0,t)=g_2(t)
\end{aligned}
\right.
$$
If we take $-1<\lambda_1, \lambda_2 <1$, $\lambda_1 \neq \lambda_2$, and set
$$\Lambda w(x,t) = \theta(t) e^{-t\partial_x^3}\phi(x) - \tfrac{1}{2}\theta(t) \mathcal{D}\partial_x w^2(x,t) + \theta(t)\mathcal{L}_-^{\lambda_1}h_1(x,t) + \theta(t)\mathcal{L}_-^{\lambda_2}h_2(x,t)$$
where
$$
\begin{bmatrix}
h_1(t) \\ h_2(t)
\end{bmatrix}
= A
\begin{bmatrix}
g_1(t) - \theta(t) e^{-t\partial_x^3}\phi|_{x=0} + \tfrac{1}{2}\theta(t)\mathcal{D}\partial_xw^2(0,t) \\
\theta(t)\mathcal{I}_{1/3}( g_2 - \theta \partial_xe^{-\cdot \partial_x^3}\phi|_{x=0} + \tfrac{1}{2}\theta \partial_x\mathcal{D}\partial_xw^2(0,\cdot))(t)
\end{bmatrix}
$$
Then $(\partial_t + \partial_x^3)\Lambda w(x,t) = -\frac{1}{2}\partial_x w^2(x,t)$ for $x<0$, $0<t<1$, in the sense of distributions.
We have
\begin{equation}
\label{E:351}
\begin{aligned}
\hspace{0.3in}&\hspace{-0.3in} \|h_1\|_{H_0^\frac{s+1}{3}(\mathbb{R}^+)} + \|h_2\|_{H_0^\frac{s+1}{3}(\mathbb{R}^+)} \\
&\leq
\begin{aligned}[t]
&c\|g_1(t) -\theta(t)e^{-t\partial_x^3}\phi|_{x=0} + \tfrac{1}{2}\theta(t)\mathcal{D}\partial_xw^2(0,t)\|_{H_0^\frac{s+1}{3}(\mathbb{R}^+)} \\
&+c\|\theta(t)\mathcal{I}_{1/3}( g_2 - \theta \partial_xe^{-\cdot \partial_x^3}\phi|_{x=0} + \tfrac{1}{2}\theta \partial_x\mathcal{D}\partial_xw^2(0,\cdot))(t)\|_{H_0^\frac{s+1}{3}(\mathbb{R}^+)}
\end{aligned}
\end{aligned}
\end{equation}
By Lemma \ref{L:G}\eqref{I:Gtt}, $\|g_1(t)-\theta(t)e^{-t\partial_x^3}\phi|_{x=0}\|_{H_t^\frac{s+1}{3}}\leq c\|g_1\|_{H^\frac{s+1}{3}}+c\|\phi\|_{H^s}$. If $-\frac{3}{4}<s<\frac{1}{2}$, then $\frac{1}{12}<\frac{s+1}{3}<\frac{1}{2}$, and Lemma \ref{JK35} shows that $g_1(t)-\theta(t)e^{-t\partial_x^3}\phi|_{x=0}\in H_0^\frac{s+1}{3}(\mathbb{R}_t^+)$ with comparable norm. If $\frac{1}{2}<s<\frac{3}{2}$, then $\frac{1}{2}<\frac{s+1}{3}<\frac{5}{6}$ and by the compatibility condition, $g_1(t)-\theta(t)e^{-t\partial_x^3}\phi|_{x=0}$ has a well-defined value of $0$ at $t=0$. By Lemma \ref{JK37}, $g_1(t)-\theta(t)e^{-t\partial_x^3}\phi|_{x=0}$ also belongs to $H_0^\frac{s+1}{3}(\mathbb{R}_t^+)$ with comparable norm. The conclusion, then, is that if $-\frac{3}{4}<s<\frac{3}{2}$, $s\neq \frac{1}{2}$, then
$$\|g_1(t)-\theta(t)e^{-t\partial_x^3}\phi|_{x=0}\|_{H_0^\frac{s+1}{3}(\mathbb{R}^+)} \leq c\|g_1\|_{H^\frac{s+1}{3}} + c\|\phi\|_{H^s}$$
By Lemmas \ref{L:Di}\eqref{I:Ditt}, \ref{L:bilinear},
$$\| \theta(t)\mathcal{D}\partial_xw^2(0,t) \|_{H_0^\frac{s+1}{3}(\mathbb{R}_t^+)} \leq c\|w\|_{X_{s,b}\cap D_\alpha}^2$$
By Lemma \ref{L:G}\eqref{I:Gdtt}, $\|g_2(t) - \theta(t)\partial_xe^{-t\partial_x^3}\phi|_{x=0}\|_{H_t^{s/3}} \leq c\|g_2\|_{H^{s/3}} + c\|\phi\|_{H^s}$. If $-\frac{3}{4}<s<\frac{3}{2}$, then $\frac{s}{3}<\frac{1}{2}$ and by Lemma \ref{JK35}, $g_2(t) - \theta(t)\partial_xe^{-t\partial_x^3}\phi|_{x=0}\in H_0^{s/3}(\mathbb{R}^+)$ with comparable norm. By Lemma \ref{L:RL2},
$$\| \theta(t)\mathcal{I}_{1/3}( g_2 - \theta \partial_xe^{-\cdot \partial_x^3}\phi|_{x=0}) \|_{H_0^\frac{s+1}{3}(\mathbb{R}^+)} \leq c\|g_2\|_{H^{s/3}} + c\|\phi\|_{H^s}$$
By Lemmas \ref{L:RL2}, \ref{L:Di}\eqref{I:Didtt}, \ref{L:bilinear},
$$\| \theta(t)\mathcal{I}_{1/3}(\theta \partial_x\mathcal{D}\partial_xw^2(0,\cdot))(t) \|_{H_0^\frac{s+1}{3}(\mathbb{R}_t^+)} \leq c\|w\|_{X_{s,b}\cap D_\alpha}^2$$
Combining the above estimates with \eqref{E:351}, we obtain
\begin{equation}
\label{E:350}
\|h_1\|_{H_0^{\frac{s+1}{3}}(\mathbb{R}^+)} + \|h_2\|_{H_0^{\frac{s+1}{3}}(\mathbb{R}^+)} \leq c\|g_1\|_{H_t^\frac{s+1}{3}}+ c\|g_2\|_{H_t^{s/3}} + c\|\phi\|_{H^s} + c\|w\|_{X_{s,b}\cap D_\alpha}^2
\end{equation}
By Lemmas \ref{L:G}\eqref{I:Gst}, \ref{L:Di}\eqref{I:Dist}, \ref{L:Dbf}\eqref{I:Dbfst}, \ref{L:bilinear}, and \eqref{E:350}
$$\| \Lambda w(x,t) \|_{C(\mathbb{R}_t; H_x^s)} \leq c\|\phi\|_{H^s} + c\|g_1\|_{H^\frac{s+1}{3}} + c\|g_2\|_{H^\frac{s}{3}}+ c\|w\|_{X_{s,b}\cap D_\alpha}^2$$
provided $b(s)\leq b< \frac{1}{2}$ (where $b(s)$ is specified by Lemma \ref{L:bilinear}), $s-\frac{5}{2}<\lambda_1<s+\frac{1}{2}$, $s-\frac{5}{2}<\lambda_2<s+\frac{1}{2}$, $\alpha>\frac{1}{2}$. In the sense of $C(\mathbb{R}_t; H_x^s)$, $\Lambda w(x,0)=\phi(x)$. By Lemmas \ref{L:G} \eqref{I:Gtt}, \ref{L:Di}\eqref{I:Ditt}, \ref{L:Dbf}\eqref{I:Dbftt}, \ref{L:bilinear}, and \eqref{E:350}
$$\|\Lambda w(x,t) \|_{C(\mathbb{R}_x;H_t^\frac{s+1}{3})} \leq c\|\phi\|_{H^s} + c\|g_1\|_{H^\frac{s+1}{3}} + c\|g_2\|_{H^\frac{s}{3}}+ c\|w\|_{X_{s,b}\cap D_\alpha}^2$$
provided $b(s)<b<\frac{1}{2}$. In the sense of $C(\mathbb{R}_x; H_t^\frac{s+1}{3})$, $\Lambda w(0,t) = g_1(t)$ for $0\leq t\leq 1$. By Lemmas \ref{L:G}\eqref{I:Gdtt}, \ref{L:Di}\eqref{I:Didtt}, \ref{L:Dbf}\eqref{I:Dbfdtt}, \ref{L:bilinear}, and \eqref{E:350}
$$\|\partial_x \Lambda w(x,t) \|_{C(\mathbb{R}_x;H_t^\frac{s}{3})} \leq c\|\phi\|_{H^s} + c\|g_1\|_{H^\frac{s+1}{3}} + c\|g_2\|_{H^\frac{s}{3}}+ c\|w\|_{X_{s,b}\cap D_\alpha}^2$$
provided $b(s)<b<\frac{1}{2}$, and in the sense of $C(\mathbb{R}_x;H^{s/3}_t)$, $\partial_x\Lambda w(0,t)=g_2(t)$ for $0\leq t\leq 1$. By Lemmas \ref{L:G}\eqref{I:GBse}, \ref{L:Di}\eqref{I:DiBse}, \ref{L:Dbf}\eqref{I:DbfBse}, \ref{L:bilinear}, and \eqref{E:350}, we have
$$\|\Lambda w \|_{X_{s,b}\cap D_\alpha} \leq c\|\phi\|_{H^s} + c\|g_1\|_{H^\frac{s+1}{3}} + c\|g_2\|_{H^\frac{s}{3}}+ c\|w\|_{X_{s,b}\cap D_\alpha}^2$$
provided $s-1\leq \lambda_1<s+\frac{1}{2}$, $s-1\leq \lambda_2 < s+\frac{1}{2}$, $\lambda_1<\frac{1}{2}$, $\lambda_2<\frac{1}{2}$, $\alpha\leq \frac{s-\lambda_1+2}{3}$, $\alpha\leq \frac{s-\lambda_2+2}{3}$, $b(s)<b<\frac{1}{2}$, and $\frac{1}{2}<\alpha\leq 1-b$.
Collectively, the restrictions are $-\frac{3}{4}<s<\frac{3}{2}$, $s\neq \frac{1}{2}$, $b(s)<b<\frac{1}{2}$,
\begin{equation}
\label{E:360}
\begin{aligned}
&s-1\leq \lambda_1 <s+\tfrac{1}{2} &\qquad & -1< \lambda_1<\tfrac{1}{2}\\
& s-1\leq \lambda_2 < s+\tfrac{1}{2} && -1<\lambda_2 < \tfrac{1}{2}
\end{aligned}
\end{equation}
\begin{equation}
\label{E:361}
\begin{aligned}
&\tfrac{1}{2}< \alpha \leq \tfrac{s-\lambda_1+2}{3} \\
&\tfrac{1}{2}< \alpha \leq \tfrac{s-\lambda_2+2}{3} \\
&\alpha \leq 1-b
\end{aligned}
\end{equation}
Since $s<\frac{3}{2}$ implies $s-1<\frac{1}{2}$, and $s>-\frac{3}{4}$ implies $s+\frac{1}{2}> -\frac{1}{4}$, we can find $\lambda_1\neq \lambda_2$ meeting the restrictions \eqref{E:360}. (Note that for $s<-\frac{1}{2}$, we cannot use $\lambda=0$, the operator used in \cite{CK02}.) The conditions $\lambda_1<s+\frac{1}{2}$, $\lambda_2<s+\frac{1}{2}$ imply that $\frac{s-\lambda_1+2}{3}>\frac{1}{2}$, $\frac{s-\lambda_2+2}{3}>\frac{1}{2}$, and thus we can also meet the requirements expressed in \eqref{E:361}.
Define a space $Z$ by the norm
$$\|w\|_Z = \|w\|_{C(\mathbb{R}_t;H_x^s)} + \|w\|_{C(\mathbb{R}_x;H_t^\frac{s+1}{3})} + \|\partial_x w\|_{C(\mathbb{R}_x;H_t^\frac{s}{3})} + \|w\|_{X_{s,b}\cap D_\alpha}$$
By the above estimates
$$\|\Lambda w \|_Z \leq c\|\phi\|_{H^s} + c\|g_1\|_{H^\frac{s+1}{3}} + c\|g_2\|_{H^\frac{s}{3}}+ c\|w\|_Z^2$$
Now
\begin{align*}
\hspace{0.3in}&\hspace{-0.3in} \Lambda w_1(x,t) - \Lambda w_2(x,t) \\
&=
\begin{aligned}[t]
&-\tfrac{1}{2}\theta(t)\mathcal{D}\partial_x(w_1-w_2)(w_1+w_2)(x,t) + \theta(t)\mathcal{L}_-^{\lambda_1}h_1(x,t) \\
&+ \theta(t)\mathcal{L}_-^{\lambda_2}h_2(x,t)
\end{aligned}
\end{align*}
where
$$
\begin{bmatrix}
h_1(t) \\ h_2(t)
\end{bmatrix}
=\tfrac{1}{2}A
\begin{bmatrix}
\theta(t) \mathcal{D}\partial_x(w_1-w_2)(w_1+w_2)(0,t) \\
\theta(t) \mathcal{I}_{1/3}( \theta \partial_x \mathcal{D} \partial_x(w_1-w_2)(w_1+w_2)(0,\cdot))(t)
\end{bmatrix}
$$
By similar arguments, we can show
$$\|\Lambda w_1- \Lambda w_2 \|_Z \leq c(\|w_1\|_Z+\|w_2\|_Z)(\|w_1-w_2\|_Z)$$
By taking $\|\phi\|_{H^s} + \|g_1\|_{H^\frac{s+1}{3}} + \|g_2\|_{H^\frac{s}{3}}\leq \delta$ for $\delta>0$ suitably small, we obtain a fixed point ($\Lambda u =u$) in $Z$.
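The quadratic bound above closes via the standard contraction argument. As an illustrative aside (not part of the original argument), the norm bookkeeping can be mimicked numerically by iterating the scalar majorant $r \mapsto c\delta + c r^2$, whose Picard iterates converge to a fixed point $r^\ast \leq 2c\delta$ whenever $c^2\delta < \frac{1}{4}$:

```python
# Illustrative scalar majorant for the contraction argument:
# if ||Lambda w||_Z <= c*delta + c*||w||_Z^2, iterate the bound itself.
def iterate_bound(delta, c=1.0, steps=60):
    """Picard iterates of r -> c*delta + c*r**2, starting from r = 0."""
    r = 0.0
    for _ in range(steps):
        r = c * delta + c * r * r
    return r
```

For $c^2\delta<\frac14$ the iterates converge to $r^\ast=\big(1-\sqrt{1-4c^2\delta}\big)/(2c)\leq 2c\delta$, mirroring the smallness condition imposed on the data.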
Theorem \ref{T:main}\eqref{I:left} follows by the standard scaling argument. Suppose we are given data $\tilde \phi$, $\tilde g_1$, and $\tilde g_2$ of arbitrary size for the problem \eqref{SE:101}, and we seek a solution $\tilde u$. For $0 < \lambda \ll 1$ (to be selected in a moment) set $\phi(x) = \lambda^2\tilde \phi(\lambda x)$, $g_1(t) = \lambda^2\tilde g_1(\lambda^3 t)$, $g_2(t) = \lambda^3 \tilde g_2(\lambda^3 t)$. Take $\lambda$ sufficiently small so that
\begin{align*}
\hspace{0.3in}&\hspace{-0.3in} \|\phi\|_{H^s} + \|g_1\|_{H^\frac{s+1}{3}}+ \|g_2\|_{H^\frac{s}{3}} \\
&\leq \lambda^\frac{3}{2}\langle \lambda \rangle^{s} \|\tilde \phi \|_{H^s} + \lambda^\frac{1}{2} \langle \lambda \rangle^{s+1}\| \tilde g_1 \|_{H^\frac{s+1}{3}} + \lambda^\frac{3}{2} \langle \lambda \rangle^{s}\| \tilde g_2 \|_{H^\frac{s}{3}}\\
&\leq \delta
\end{align*}
By the above argument, there is a solution $u(x,t)$ on $0\leq t\leq 1$. Then $\tilde u(x,t) = \lambda^{-2} u( \lambda^{-1} x, \lambda^{-3} t)$ is the desired solution on $0\leq t \leq \lambda^3$.
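For the reader's convenience, the scaling invariance invoked here can be checked directly (with the convention $(\partial_t+\partial_x^3)u = -\frac{1}{2}\partial_x u^2$ used above):

```latex
u_\mu(x,t) = \mu^2 u(\mu x, \mu^3 t)
\quad\Longrightarrow\quad
\left(\partial_t + \partial_x^3\right) u_\mu + \tfrac{1}{2}\partial_x u_\mu^2
= \mu^5 \left[ \left(\partial_t + \partial_x^3\right) u + \tfrac{1}{2}\partial_x u^2 \right](\mu x, \mu^3 t),
```

so with $\mu=\lambda^{-1}$ the function $\tilde u(x,t)=\lambda^{-2}u(\lambda^{-1}x,\lambda^{-3}t)$ solves the same equation, and a solution on $0\leq t\leq 1$ rescales to one on $0\leq t\leq\lambda^3$.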
\section{The right half-line problem}
\label{S:right}
Now we prove Theorem \ref{T:main}\eqref{I:right}. Suppose $-1<\lambda<1$ and we are given $f\in C_0^\infty(\mathbb{R}^+)$. Let $u(x,t)=e^{-\pi \lambda i}\mathcal{L}_+^\lambda f(x,t)$. Then by Lemma \ref{L:decayest}, $u(x,t)$ is continuous in $x$ at $x=0$ and by Lemma \ref{L:valueatzero}, $u(0,t)=f(t)$. Then $u(x,t)$ solves
$$
\left\{
\begin{aligned}
&\partial_t u +\partial_x^3 u = 0 \\
&u(x,0) = 0 \\
&u(0,t) = f(t)
\end{aligned}
\right.
$$
Therefore, to address the nonlinear problem \eqref{SE:101} with given data $f$ and $\phi$, take $-1<\lambda<1$ and set
$$\Lambda w(x,t) = \theta(t) e^{-t\partial_x^3}\phi(x) - \tfrac{1}{2}\theta(t)\mathcal{D}\partial_x w^2(x,t) + \theta(t)\mathcal{L}_+^\lambda h(x,t)$$
where
$$h(t) = e^{-\pi i\lambda}[ f(t) - \theta(t)e^{-t\partial_x^3}\phi|_{x=0} + \tfrac{1}{2}\theta(t)\mathcal{D}\partial_xw^2(0,t)]$$
Then
$$(\partial_t + \partial_x^3)\Lambda w(x,t) = -\frac{1}{2}\partial_x w^2(x,t).$$
By Lemma \ref{L:G}\eqref{I:Gtt}, $\|f(t)-\theta(t)e^{-t\partial_x^3}\phi|_{x=0} \|_{H^\frac{s+1}{3}} \leq c\|f\|_{H^\frac{s+1}{3}}+c\|\phi\|_{H^s}$. If $-\frac{3}{4}<s<\frac{1}{2}$, then $\frac{1}{12}<\frac{s+1}{3}<\frac{1}{2}$ and Lemma \ref{JK35} shows that $f(t)-\theta(t)e^{-t\partial_x^3}\phi|_{x=0}\in H_0^\frac{s+1}{3}$ with comparable norm. If $\frac{1}{2}<s<\frac{3}{2}$, then $\frac{1}{2}<\frac{s+1}{3}<\frac{5}{6}$ and by the compatibility condition, $f(t)-\theta(t)e^{-t\partial_x^3}\phi|_{x=0}$ has a well-defined value of $0$ at $t=0$. By Lemma \ref{JK37}, $f(t)-\theta(t)e^{-t\partial_x^3}\phi|_{x=0}\in H_0^\frac{s+1}{3}(\mathbb{R}^+)$ with comparable norm. The conclusion, then, is that if $-\frac{3}{4}<s<\frac{3}{2}$, $s\neq \frac{1}{2}$, then
$$\|f(t)-\theta(t)e^{-t\partial_x^3}\phi|_{x=0}\|_{H^\frac{s+1}{3}(\mathbb{R}^+)} \leq c\|f\|_{H^\frac{s+1}{3}} + c\|\phi\|_{H^s}$$
By Lemma \ref{L:Di}\eqref{I:Ditt}, \ref{L:bilinear},
$$\|\theta(t)\mathcal{D}\partial_x w^2(0,t)\|_{H_0^\frac{s+1}{3}(\mathbb{R}^+)} \leq c\|w\|_{X_{s,b}\cap D_\alpha}^2$$
Combining, we obtain
\begin{equation}
\label{E:362}
\|h\|_{H_0^\frac{s+1}{3}(\mathbb{R}^+)} \leq c\|f\|_{H^\frac{s+1}{3}} + c\|\phi\|_{H^s} + c\|w\|_{X_{s,b}\cap D_\alpha}^2
\end{equation}
We then proceed in the manner of \S \ref{S:left} to complete the proof of Theorem \ref{T:main}\eqref{I:right}.
\section{The line segment problem}
\label{S:linesegment}
We now turn to the line segment problem \eqref{SE:102}. By the standard scaling argument, it suffices to show that $\exists \; \delta>0$ and $\exists \; L_1\gg 0$ such that for any $L>L_1$ and data $f$, $g_1$, $g_2$, $\phi$ satisfying
$$\|f\|_{H^\frac{s+1}{3}(\mathbb{R}^+)} + \|g_1\|_{H^\frac{s+1}{3}(\mathbb{R}^+)} + \|g_2\|_{H^\frac{s}{3}(\mathbb{R}^+)}+\|\phi\|_{H^s(0,L)} \leq \delta$$
we can solve \eqref{SE:102} with $T=1$. By the techniques employed in the previous two sections, it suffices to show that for all boundary data $f$, $g_1$, $g_2$, there exists $u$ solving the linear problem
\begin{equation}
\label{E:366}
\left\{
\begin{aligned}
&\partial_tu + \partial_x^3 u = 0 && \text{for }(x,t)\in (0,L)\times (0,1) \\
& u(0,t) = f(t) && \text{for }t\in (0,1)\\
& u(L,t) = g_1(t) && \text{for }t\in (0,1)\\
&\partial_xu(L,t) = g_2(t) && \text{for }t\in (0,1)\\
&u(x,0) = 0 && \text{for }x\in (0,L)
\end{aligned}
\right.
\end{equation}
such that
\begin{equation}
\label{E:367}
\begin{aligned}
\hspace{0.3in}&\hspace{-0.3in} \|u\|_{C(\mathbb{R}_t; H^s_x)} + \| u\|_{C(\mathbb{R}_x; H^\frac{s+1}{3}_t)} + \|\partial_x u \|_{C(\mathbb{R}_x; H^\frac{s}{3}_t)}+ \|u\|_{X_{s,b}\cap D_\alpha} \\
&\leq c\left( \|f\|_{H^\frac{s+1}{3}(\mathbb{R}^+)} + \|g_1\|_{H^\frac{s+1}{3}(\mathbb{R}^+)} + \|g_2\|_{H^\frac{s}{3}(\mathbb{R}^+)} \right)
\end{aligned}
\end{equation}
Let
\begin{align*}
\mathcal{L}_1h_1(x,t)&= \mathcal{L}_-^{\lambda_1}h_1(x-L,t) \\
\mathcal{L}_2h_2(x,t)&= \mathcal{L}_-^{\lambda_2}h_2(x-L,t) \\
\mathcal{L}_3h_3(x,t)&=\mathcal{L}_+^{\lambda_3}h_3(x,t)
\end{align*}
By Lemma \ref{L:valueatzero} and the estimates in \S \ref{S:estimates}, solving \eqref{E:366}--\eqref{E:367} amounts to showing that the operator $E_L+K_L$ in the matrix equation
\begin{equation}
\label{E:364}
(g_1,\mathcal{I}_{1/3}g_2, f)^T = (E_L+K_L)(h_1,h_2,h_3)^T
\end{equation}
has a bounded inverse, where
$$E_L=\begin{bmatrix}
2\sin( \frac{\pi}{3}\lambda_1+\frac{\pi}{6}) &
2\sin( \frac{\pi}{3}\lambda_2+\frac{\pi}{6}) &
0 \\
2\sin( \frac{\pi}{3}\lambda_1-\frac{\pi}{6}) &
2\sin( \frac{\pi}{3}\lambda_2-\frac{\pi}{6}) &
0 \\
\mathcal{L}_1\big|_{x=0} & \mathcal{L}_2\big|_{x=0} & e^{i\pi \lambda_3}
\end{bmatrix},$$
$$
K_L=\begin{bmatrix}
0 & 0 & \mathcal{L}_3\big|_{x=L} \\
0 & 0 & \mathcal{I}^t_{1/3}(\partial_x\mathcal{L}_3)\big|_{x=L} \\
0 & 0 & 0
\end{bmatrix}$$
The matrix operator $E_L$ is invertible with inverse
$$E_L^{-1}= \begin{bmatrix}
\dfrac{\sin(\frac{\pi}{3}\lambda_2-\frac{\pi}{6})}{\sqrt{3}\sin(\frac{\pi}{3}\lambda_2-\frac{\pi}{3}\lambda_1)} &
\dfrac{-\sin(\frac{\pi}{3}\lambda_2+\frac{\pi}{6})}{\sqrt{3}\sin(\frac{\pi}{3}\lambda_2-\frac{\pi}{3}\lambda_1)} &
0 \\
\dfrac{-\sin(\frac{\pi}{3}\lambda_1-\frac{\pi}{6})}{\sqrt{3}\sin(\frac{\pi}{3}\lambda_2-\frac{\pi}{3}\lambda_1)} &
\dfrac{\sin(\frac{\pi}{3}\lambda_1+\frac{\pi}{6})}{\sqrt{3}\sin(\frac{\pi}{3}\lambda_2-\frac{\pi}{3}\lambda_1)} &
0 \\
A_1 & A_2 & e^{-i\pi\lambda_3}
\end{bmatrix}$$
where
$$A_1 =\frac{ \sqrt{3} e^{-i\pi\lambda_3}\sin( \frac{\pi}{3}\lambda_1-\frac{\pi}{6})}{\sin(\frac{\pi}{3}\lambda_2-\frac{\pi}{3}\lambda_1)} \mathcal{L}_2\big|_{x=0} - \frac{ \sqrt{3} e^{-i\pi\lambda_3} \sin( \frac{\pi}{3}\lambda_2-\frac{\pi}{6})}{\sin(\frac{\pi}{3}\lambda_2-\frac{\pi}{3}\lambda_1)}\mathcal{L}_1\big|_{x=0}$$
and
$$A_2 = \frac{ -\sqrt{3}e^{-i\pi\lambda_3} \sin( \frac{\pi}{3}\lambda_1+\frac{\pi}{6})}{\sin(\frac{\pi}{3}\lambda_2-\frac{\pi}{3}\lambda_1)} \mathcal{L}_2\big|_{x=0} + \frac{ \sqrt{3}e^{-i\pi\lambda_3} \sin( \frac{\pi}{3}\lambda_2+\frac{\pi}{6})}{\sin(\frac{\pi}{3}\lambda_2-\frac{\pi}{3}\lambda_1)}\mathcal{L}_1\big|_{x=0}$$
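The upper-left $2\times2$ block of $E_L^{-1}$ can be verified numerically; the following sketch (with arbitrary illustrative values of $\lambda_1$, $\lambda_2$) checks that the stated entries invert the corresponding block of $E_L$:

```python
import math

def E_block(l1, l2):
    """Upper-left 2x2 block of E_L (independent of L and lambda_3)."""
    a, b = math.pi * l1 / 3, math.pi * l2 / 3
    return [[2 * math.sin(a + math.pi / 6), 2 * math.sin(b + math.pi / 6)],
            [2 * math.sin(a - math.pi / 6), 2 * math.sin(b - math.pi / 6)]]

def E_block_inv(l1, l2):
    """The corresponding 2x2 block of E_L^{-1} as stated in the text."""
    a, b = math.pi * l1 / 3, math.pi * l2 / 3
    d = math.sqrt(3) * math.sin(b - a)
    return [[ math.sin(b - math.pi / 6) / d, -math.sin(b + math.pi / 6) / d],
            [-math.sin(a - math.pi / 6) / d,  math.sin(a + math.pi / 6) / d]]

def matmul2(A, B):
    """Plain 2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
```

The product returns the identity for any $\lambda_1\neq\lambda_2$, since the block determinant is $2\sqrt{3}\sin(\frac{\pi}{3}\lambda_2-\frac{\pi}{3}\lambda_1)$.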
Since $\mathcal{L}_1\big|_{x=0}: H_0^\frac{s+1}{3}(\mathbb{R}^+)\to H_0^\frac{s+1}{3}(\mathbb{R}^+)$, $\mathcal{L}_2\big|_{x=0}: H_0^\frac{s+1}{3}(\mathbb{R}^+)\to H_0^\frac{s+1}{3}(\mathbb{R}^+)$ are bounded uniformly as $L\to +\infty$, the norm of $E_L^{-1}$ is uniformly bounded as $L\to +\infty$. \eqref{E:364} becomes
\begin{equation} \label{E:363}
E_L^{-1}(g_1,\mathcal{I}_{1/3}g_2, f)^T = (I+E_L^{-1}K_L)(h_1,h_2,h_3)^T
\end{equation}
and we see that it suffices to show that $(I+E_L^{-1}K_L)$ is invertible. We claim that $K_L:[ H_0^\frac{s+1}{3}(\mathbb{R}^+)]^3 \to [H_0^\frac{s+1}{3}(\mathbb{R}^+)]^3$ is bounded with norm $\to 0$ as $L\to +\infty$. To show this, we need a refinement of Lemma \ref{L:Dbf}\eqref{I:Dbftt}.
\begin{lemma}
For $-2<\lambda<1$ and $x>0$
$$\|\theta(t)\mathcal{L}_+^\lambda h(x,t) \|_{H_0^\frac{s+1}{3}(\mathbb{R}^+)} \leq c(x)\|h\|_{H_0^\frac{s+1}{3}(\mathbb{R}^+)}$$
where $c(x)\to 0$ as $x\to +\infty$.
\end{lemma}
\begin{proof}
$\mathcal{L}_+^\lambda h(x,t) = \mathcal{L}^0h(x,t)$ for $x>0$ by a uniqueness calculation. By \eqref{E:Dbf},
\begin{align*}
\theta(t)\mathcal{L}^0h(x,t) &= \theta(t) \int_0^t \frac{\theta(2(t-t'))}{(t-t')^{1/3}} A\left( \frac{x}{(t-t')^{1/3}} \right) \mathcal{I}_{-2/3}h(t') \, dt' \\
&= -\theta(t) \int_0^t \partial_{t'}\left[ \frac{\theta(2(t-t'))}{(t-t')^{1/3}} A\left( \frac{x}{(t-t')^{1/3}} \right)\right] \theta(4t')\mathcal{I}_{1/3}h(t') \, dt'
\end{align*}
Since $A(x)$ decays rapidly as $x\to +\infty$, we have
\begin{align*}
H(t) &:= -\partial_t \left[ \frac{\theta(2t)}{t^{1/3}} A \left( \frac{x}{t^{1/3}} \right) \chi_{t\geq 0} \right] \\
&=
\begin{aligned}[t]
&-2\theta'(2t) x^{-1} \left( \frac{x}{t^{1/3}} \right) A\left( \frac{x}{t^{1/3}} \right) \chi_{t\geq 0} \\
&+ \tfrac{1}{3}\theta(2t) x^{-4} \left( \frac{x}{t^{1/3}} \right)^4 A\left( \frac{x}{t^{1/3}} \right) \chi_{t\geq 0} \\
&+ \tfrac{1}{3}\theta(2t) x^{-4}\left( \frac{x}{t^{1/3}} \right)^5 A' \left( \frac{x}{t^{1/3}} \right) \chi_{t\geq 0}
\end{aligned}
\end{align*}
so that $\theta(t)\mathcal{L}^0h(x,t) = \theta(t)\, \big(H\ast (\theta(4\cdot)\mathcal{I}_{1/3}h)\big)(t)$. By the asymptotic properties of $A(x)$ as $x\to +\infty$,
$$\|\hat{H}\|_{L^\infty} \leq \|H\|_{L^1} \leq c\sup_{y\geq \frac{x}{2}} \big( |y^4A(y)| + |y^5A'(y)|\big)\to 0 \text{ as }x\to +\infty$$
and we have
$$\| \mathcal{L}^0h(x,t)\|_{H^\frac{s+1}{3}} \leq \|\hat{H}\|_{L^\infty}\| \theta(4t)\mathcal{I}_{1/3}h(t)\|_{H^\frac{s+1}{3}} \leq c(x)\|h\|_{H^\frac{s+1}{3}}$$
with $c(x)\to 0$ as $x\to +\infty$.
\end{proof}
From the lemma, $\mathcal{L}_3|_{x=L}: H_0^\frac{s+1}{3}(\mathbb{R}^+)\to H_0^\frac{s+1}{3}(\mathbb{R}^+)$ and $\mathcal{I}_{1/3}(\partial_x \mathcal{L}_3)|_{x=L}= \mathcal{I}_{1/3}(\mathcal{L}_+^{\lambda_3-1}\mathcal{I}_{-1/3})|_{x=L}: H_0^\frac{s+1}{3}(\mathbb{R}^+)\to H_0^\frac{s+1}{3}(\mathbb{R}^+)$ are bounded with norm $\to 0$ as $L\to +\infty$. Thus $K_L:[ H_0^\frac{s+1}{3}(\mathbb{R}^+)]^3 \to [H_0^\frac{s+1}{3}(\mathbb{R}^+)]^3$ enjoys the same property, and $(I+E_L^{-1}K_L)$ in \eqref{E:363} has an inverse that is bounded uniformly in $L$ as $L\to +\infty$.
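Since $\|E_L^{-1}K_L\|\to 0$ as $L\to+\infty$, the inverse of $I+E_L^{-1}K_L$ is given by the Neumann series $\sum_{n\geq 0}(-E_L^{-1}K_L)^n$. A finite-dimensional sketch of this inversion (illustrative only, with a hypothetical small matrix standing in for the operator):

```python
def neumann_solve(K_apply, y, terms=80):
    """Approximate (I + K)^{-1} y via the Neumann series sum_n (-K)^n y,
    valid when the operator norm of K is < 1."""
    x = list(y)
    term = list(y)
    for _ in range(terms):
        term = [-v for v in K_apply(term)]   # next series term (-K)^n y
        x = [xi + ti for xi, ti in zip(x, term)]
    return x
```

The series converges geometrically once the norm of $K$ drops below 1, which is exactly what the lemma provides for large $L$.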
\section{Introduction}
The majority of stars are thought to form in binary or
multiple\footnote{For brevity, by `binaries' we generally also mean
`multiples', since many stars are found in triple and higher-order
multiple systems. We will distinguish the two only when it is
important to do so. Note that the analysis in this paper is of {\em
binary} systems.} systems (Goodwin \& Kroupa 2005; Duch\^ene et
al. 2007; Goodwin et al. 2007), and the initial binary properties of
stars place important constraints on star formation and the origin of
the stellar initial mass function (IMF; Goodwin et al. 2007,
2008). The majority of stars with masses greater than approximately
$0.5 M_\odot$ are also thought to form in star clusters (Lada \& Lada
2003), and the binary content of a star cluster plays an important
role in both its observational properties and its dynamical evolution
(e.g., Kroupa et al. 1999). In addition, many exotic objects observed
in star clusters, such as blue stragglers, cataclysmic variables, and
X-ray sources, are believed to be related to binary systems. Almost
all studies of binarity have been limited to nearby, solar-metallicity
populations. However, it might be expected that metallicity (e.g.,
through its effects on cooling and, hence, on the opacity limit for
fragmentation) will play a role in the fragmentation of cores to
produce binary systems (Bate 2005; Goodwin et al. 2007).
In general, the most direct way in which to study binary fractions is
by examining whether a given star is part of a binary system, on an
individual basis. Over the past two decades, the binary fractions of
field stars in the solar neighborhood have been studied carefully in
this conventional fashion (e.g., Abt 1983 for B stars; Duquennoy \&
Mayor 1991 for G dwarfs; Fischer \& Marcy 1992 for M dwarfs;
Kouwenhoven et al. 2006 for A and B stars; see also Goodwin et
al. 2007 for a review). Nearby clusters and associations have also
been examined in detail (see Duch\^ene 1999 and Duch\^ene et al. 2007
for reviews and comparisons). However, the binary fractions in more
distant, massive clusters have not yet been studied thoroughly,
because these environments are too crowded and their distances too
great, so that their member stars are too faint to be examined
individually for binarity. Fortunately, there is an alternative
approach, i.e., by means of an artificial-star-test technique, which
allows us to estimate the binary fractions in crowded environments. By
studying the morphology of their color-magnitude diagrams (CMDs),
Romani \& Weinberg (1991) determined the observed binary fractions in
M92 and M30 at $\la 9$ and $\la 4$\%, respectively, for the full
populations of cluster stars down to $m_V \sim 23$ and $\sim 22$ mag,
respectively, based on two-dimensional maximum-likelihood
analysis. Rubenstein \& Bailyn (1997, hereafter RB97) investigated the
binary fraction of main-sequence stars with $15.8 < V < 28.4$ mag in
the $\sim 13.5$ Gyr-old (e.g., Pasquini et al. 2007)
post-core-collapse Galactic globular cluster NGC~6752. They found a
binary fraction of 15--38\% inside the inner core radius, falling to
$\la 16$\% at larger radii, with a power-law mass-ratio
distribution. For other old globular clusters, Bellazzini et
al. (2002) estimated the binary fraction in NGC 288 for stars with $20
< V < 23$ mag (corresponding to masses of $0.54 \la M_\star/M_\odot
\la 0.77$) at 8--38\% inside the cluster's half-mass radius (and at
$\la 10$\% in the outer regions, most likely close to zero), regardless of
the actual mass-ratio distribution. Zhao \& Bailyn (2005) claimed
6--22\% of main-sequence binaries ($19.2 \le m_{\rm F555W} \le 21.2$
mag) for M3, within the cluster's core radius, and between 1 and 3\%
for stars between 1 and 2 core radii. By applying similar techniques
to the post-core-collapse Galactic globular cluster NGC 6397, Cool \&
Bolton (2002) derived a binary fraction of 3\% for main-sequence stars
with primary masses between 0.45 and $0.80 M_\odot$, for a range of
mass ratios. Based on an extrapolation to all mass ratios, they
estimated the total main-sequence binary fraction in the cluster core
at 5 to 7\%. Davis et al. (2008), using the method pioneered by Romani
\& Weinberg (1991) in combination with numerical simulations by Hurley
et al. (2007), concluded that the outer regions of this cluster (at
1.3--2.8 half-mass radii) exhibit a binary fraction ($1.2 \pm 0.4$\%)
close to the primordial fraction of $\sim$1\% predicted by the
simulations, while the inner region has a substantially higher
fraction, $5.1 \pm 1.0$\%. However, {\it all clusters thus far studied
in this way are old stellar systems} (cf. table 1 in Davis et
al. 2008) in which dynamical evolution is expected to have
significantly altered the initial binary population.
In this paper, we use accurate photometric observations of the young,
low-metallicity star cluster NGC 1818 in the Large Magellanic Cloud,
taken with the Wide Field and Planetary Camera-2 (WFPC2) on board the
{\sl Hubble Space Telescope (HST)}, to study its binary fraction. The
photometric data and the cluster CMD are discussed in \S 2. Our newly
developed method to correct for stellar blends and superpositions,
based on the artificial-star-test technique, is presented in \S 3. The
fitting of the binary fraction is discussed in \S 4, and \S 5 contains
a further discussion and our conclusions.
\section{Observations, data reduction, and background decontamination}
Our data set was obtained from {\sl HST} program GO-7307 (PI Gilmore),
which included three images in both the F555W and F814W filters (with
exposure times of 800, 800, and 900s per filter). Specifically,
we used the observations with the PC1 chip centered on the cluster's
half-light radius (see Fig. 1). The origin of this
data set and the program's scientic rationale were described in detail
in de Grijs et al. (2002a, and references therein). The observations
were reduced with {\sc HSTPhot} (version 1.1, May 2003; Dolphin 2002)
using the point-spread-function (PSF)-fitting option. Bad and (close
to) saturated pixels were masked using data-quality images. Cosmic
rays were removed using {\sc HSTphot}'s {\sc crmask} routine. The
three images were combined into a single deep frame (with a total
exposure time of 2500s) using the {\sc coadd} routine and hot pixels
were removed following the procedure recommended in the {\sc HSTphot}
manual. Photometry was performed on both the F555W and F814W images
simultaneously, using a weighted PSF fit (option `2048' in {\sc
HSTphot}), as suggested by Holtzman et al. (2006). The instrumental
magnitudes were converted to the standard Johnson/Cousins $V$ and $I$
filters using the transformation solutions of Dolphin (2000).
We show the CMD around the cluster's half-light radius (as
defined in HST program GO-7307) in Fig. 2 (left-hand
panel) and provide the current-best estimates of a few important
cluster parameters including the core and half-mass radii, in
Table 1, as well as the relevant bibliographic references. We use the
Padova isochrones (Girardi et al. 2000) to perform our fits to the
cluster's CMD (see Fig. 3, left-hand panel). For
comparison, we also show the CMD obtained by Liu et al. (2009; see
also de Grijs et al. 2002a) using the {\sc iraf apphot} (aperture
photometry) software package in Fig. 2 (right-hand
panel). Our newly determined CMD is cleaner and its main sequence is
much tighter than that of Liu et al. (2009), because the {\sc HSTphot}
software package we used handles stellar photometry in crowded fields
much better than {\sc iraf}'s {\sc apphot} routine, while
our PSF-fitting approach ensures higher-precision
photometric measurements than Liu et al.'s (2009) aperture photometry
(cf. Fig. 2). We will provide a careful, quantitative comparison of
the results from both approaches in Hu et al. (in prep.).
As noted by Castro et al. (2001), an old red-giant population and an
intermediate-age red-giant clump are clearly seen in the CMD of NGC
1818. If we adopt an age for this cluster of approximately 25 Myr
(e.g., de Grijs et al. 2002a; and references therein), these older
components can only be interpreted as background field stars in the
LMC's disk. Therefore, the main sequence of NGC 1818 is severely
contaminated by field stars. Here, we adopt a statistical approach
similar to that adopted by Bonatto et al. (2006) to subtract
background stars. The background data set (to which we applied
exactly the same photometric procedures as for our science field) was
obtained as part of {\sl HST} program GO-6277 (PI Westphal); it is
suitably located at a distance of $\sim$8 arcmin from the cluster's
half-mass radius.\footnote{We initially selected a number of suitable
LMC fields for our background subtraction, including the field
specifically associated with the cluster from {\sl HST} program
GO-7307. Eventually, we decided to use the field that best represented
the background isochrone shown in the left-hand panel of Fig. 2.}
The middle panel of Fig. 3 shows the CMD of the LMC
background field near NGC 1818, which was specifically observed for
background characterization (see, for more details, Castro et
al. 2001; Santiago et al. 2001; de Grijs et al. 2002a). We only remove
the stars in the region $19\leq V \leq25$ mag and $-0.1\leq (V-I) \leq
1.5$ mag, since this region contains almost all stars for which the
observational completeness fraction is greater than 50\%. To perform
the field-star decontamination procedure, we divide both the
background and cluster CMDs into grids of cells, in color and
magnitude. We count the number of stars in each cell in the background
CMD, and then randomly remove the corresponding number of stars,
corrected for the difference in area covered, from the respective cell
of the cluster CMD. The choice of cell size affects the appearance of
the resulting background-subtracted cluster CMD significantly. After
extensive experimentation we chose a cell size of three times the
observational uncertainty in both magnitude and color of single
stars (i.e., $3 \sigma = 0.06$ mag) to minimize any significant
effects due to stochasticity. The right-hand panel of
Fig. 3 shows the results of the background
decontamination.
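The cell-by-cell subtraction described above can be sketched as follows (a minimal illustration with a hypothetical star list; the actual analysis uses the $3\sigma = 0.06$ mag cell size and the measured area ratio between the two fields):

```python
import random

def decontaminate(cluster, field, area_ratio, cell=0.06):
    """Statistical field-star subtraction on a CMD.
    cluster, field: lists of (color, mag) pairs; area_ratio: cluster/field area.
    For each (color, mag) cell, remove from the cluster CMD as many randomly
    chosen stars as the area-scaled field count in that cell."""
    def key(star):
        return (int(star[0] // cell), int(star[1] // cell))
    field_counts = {}
    for s in field:
        k = key(s)
        field_counts[k] = field_counts.get(k, 0) + 1
    by_cell = {}
    for s in cluster:
        by_cell.setdefault(key(s), []).append(s)
    cleaned = []
    for k, stars in by_cell.items():
        n_remove = round(field_counts.get(k, 0) * area_ratio)
        random.shuffle(stars)          # remove a random subset of the cell
        cleaned.extend(stars[n_remove:])
    return cleaned
```

The random selection within each cell is what makes the procedure statistical rather than star-by-star.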
We performed additional tests to validate our approach, based on the
simplest assumption that if we subtract the background stellar
population from the original background field, we should be left with
only statistical noise, which should, therefore, not lead to
systematics in our analysis. These tests indeed showed that our
background-subtraction procedure is robust. Nevertheless, a close
examination of the right-hand panel of Fig. 3 shows that
there is some residual contamination from the background stellar
population, as indicated by the presence of a faint red-giant
branch/clump feature. This is most likely caused by the local
background in the cluster region being somewhat different from that in
our comparison field (we applied two slightly different field regions
in an attempt to optimally subtract the local cluster background, but
a small residual effect remains). For the statistical comparison
carried out in the remainder of this paper, we are confident that this
residual background population affects our results negligibly,
however. We base this conclusion on (i) the fact that the dominance of
the background population is significantly redward with respect to the
expected locus of NGC 1818's main-sequence binary population (i.e.,
the residual background contamination dominates the region {\it
outside} the parallelogram shown in the right-hand panel of
Fig. 3; our background-subtraction procedure at and near
the single- and binary-star main sequence is unaffected by these
stellar types) and (ii) a statistical analysis of the relative
importance of the genuine binaries versus the residual background
contamination: we estimate that in the region in the CMD space of
interest (i.e., the parallelogram), the fractional contamination by
the background population (i.e., our systematic error) is $\la 3$\%
(based on a detailed examination of the stellar population using star
counts). Finally, in Fig. 4 we present the completeness
curves for both the NGC 1818 observations and the background field,
for two of the four WFPC2 chips. We emphasize that we performed
completeness analyses for all four chips, so that we can properly and
consistently correct our observational data for the effects of
incompleteness. We also note that for the entire magnitude range of
interest, $V \la 22$ mag (see the parallellogram in
Fig. 3), the completeness of our data is well in excess
of 80\%, so that corrections for incompleteness are straightforward.
\section{Artificial-star tests}
Ideally, if there is no binary population, nor any observational
errors, all stars in a cluster should lie on the same isochrone,
because they were all born at approximately the same time in the same
giant molecular cloud (i.e., they have the same metallicity). However,
in Fig. 3 we clearly see a broadening of the cluster's
main sequence. There are three factors that contribute to this
broadening: (i) photometric errors, (ii) superposition effects, and
(iii) the presence of true binary and/or multiple systems.
Photometric errors broaden the main sequence symmetrically if we
assume the magnitude errors to be Gaussian, which is not unreasonable
(given that the magnitude range of interest is well away from the
observational completeness limit). The other two factors, however, skew
the stars to the brighter, and redder, side relative to the
corresponding best-fitting isochrone. It is difficult to
distinguish between superpositions and physical binaries on the basis
of CMD morphology alone.
NGC 1818, we therefore perform Monte Carlo tests, where we produce
artificial-star catalogs and compare the spread of real and artificial
stars around the best-fitting isochrone.
Since the photometric errors of the observed stars strongly depend on
their magnitudes and their positions on the {\sl HST}/WFPC2 chips used
for the observations, exponential functions (numerically following the
densest concentration of data points, closely matched to their lower
boundary) were adopted to fit the relation between the magnitude and
standard deviation of the photometric errors (these relations vary
between the WFPC2 chips; we have taken great care to use the
appropriate relations for our analysis). Each artificial star (see
below) is randomly assigned Gaussian photometric errors, of which the
standard deviations are given by these fitted relations. Fig. 5 shows the photometric uncertainties
as a function of $V$ and $I$ magnitude for the {\sl HST}/WF3
observations (containing 2473 stars); the center of the
corresponding PC1 chip (containing 886 stars) is located at the
half-light radius. The solid curves in Fig. 5 show the
functions adopted to assign uncertainties to the individual
artificial stellar magnitudes; 80\% of the data points are located
within $\sim$1.2 times the standard deviation from the curve.
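Schematically, the error-assignment step works as below; the coefficients of the exponential law are placeholders for illustration, not the per-chip fits used in this paper:

```python
import math, random

def sigma_of_mag(m, a=0.005, b=0.4, m0=18.0):
    """Hypothetical exponential magnitude--error relation; the actual
    coefficients (a, b, m0) are fitted separately for each WFPC2 chip."""
    return a * math.exp(b * (m - m0))

def perturb(m):
    """Assign a Gaussian photometric error with magnitude-dependent sigma."""
    return random.gauss(m, sigma_of_mag(m))
```

Each artificial magnitude is perturbed once per filter, so fainter stars receive proportionately larger scatter, as in the observations.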
The global stellar mass function of NGC 1818 is well approximated by a
Salpeter (1955) power law for masses $>0.6 M_\odot$ (de Grijs et
al. 2002b; Kerber et al. 2007; Liu et al. 2009).\footnote{We note that
these mass-function fits were obtained on the basis of a single-star
mass-function assumption. However, as Kerber et al. (2007; see also
Liu et al. 2009) illustrate in their Fig. 9, the inclusion of binaries
affects the derived mass-function slope only slightly: the deviation
between the input (single-star) and output (single+binary) mass
functions they derive is less than 0.15 (in units of the mass-function
slope, $\alpha$), as long as the input slope is sufficiently close to
the Salpeter (1955) index and assuming a 100\% binary fraction. A
smaller binary fraction, as found in this study, will change the
mass-function slope proportionately less (see Kerber et al. 2007).}
Therefore, we draw the masses of single stars and the primary stars of
the binary population from a Salpeter (1955) $\alpha=2.35$ power-law
IMF in the mass range $0.6 \leq M_{\star}/M_\odot \leq 6.0$. The
masses of the secondaries are drawn from a given mass-ratio
distribution for all primary masses (we discuss the choice of the
mass-ratio distribution in detail in the next section). We note that
this produces a {\em total} IMF (of single stars plus each component
of binary systems) which is {\em not} equal to a Salpeter
IMF. However, the deviation from a Salpeter IMF is fairly minor (see
Kerber et al. 2007). The alternative is to draw both primary and
secondary masses from a Salpeter IMF. However, random pairing is
excluded in all observed multiple populations (see, e.g., Duquennoy \&
Mayor 1991; Fischer \& Marcy 1992; Kouwenhoven et al. 2005; Duch\^ene
et al. 2007).
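The sampling scheme just described can be sketched via inverse-transform sampling of the Salpeter power law (illustrative; the flat mass-ratio default merely stands in for whichever $q$ distribution is adopted below):

```python
import random

ALPHA, M_LO, M_HI = 2.35, 0.6, 6.0   # Salpeter slope and mass range (Msun)

def sample_salpeter():
    """Inverse-transform draw from dN/dm ~ m^-2.35 on [0.6, 6.0] Msun."""
    g = 1.0 - ALPHA                  # exponent of the cumulative distribution
    u = random.random()
    return (M_LO**g + u * (M_HI**g - M_LO**g)) ** (1.0 / g)

def sample_binary(q_dist=random.random):
    """Primary from the IMF; secondary mass = q * primary, with q in (0, 1]
    drawn from a chosen mass-ratio distribution (flat by default)."""
    m1 = sample_salpeter()
    return m1, q_dist() * m1
```

Drawing the secondary from a mass-ratio distribution, rather than independently from the IMF, implements the rejection of random pairing noted above.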
The magnitudes and colors of all artificial stars are then obtained by
interpolating from the relevant isochrone. For binary stars, we simply
add the fluxes of the primaries and secondaries to obtain the
magnitudes and colors of the system. The results from this procedure
are shown in Fig. 6 (left-hand panel).
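Adding the component fluxes amounts to the usual magnitude combination; note that an equal-mass (equal-flux) pair brightens by $2.5\log_{10}2 \simeq 0.752$ mag, the threshold used below to flag chance superpositions:

```python
import math

def combine_mags(m1, m2):
    """Magnitude of an unresolved pair: add the fluxes, convert back."""
    flux = 10.0 ** (-0.4 * m1) + 10.0 ** (-0.4 * m2)
    return -2.5 * math.log10(flux)
```

The same conversion is applied in each filter, so the combined system also shifts redward when the secondary is cooler than the primary.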
The best way to simulate the superposition effect is by adding
artificial stars to the original images (see details in RB97; and
references therein). Alternatively, in each run we randomly distribute
$5 \times 10^6$ artificial stars on the spatial-distribution diagram
of the real stars (while properly accounting for the
position-dependent photometric uncertainties using the chip-dependent
magnitude--uncertainty relations discussed above), instead of on the
original images. If an artificial star has an angular distance from
any real star within 2 pixels (corresponding to the size of our
aperture), it is assumed to be `blended' [see below; see also Reipurth
et al. (2007) for a blending analysis relevant to this study]. Its new
magnitude and color are re-calculated in the same way as for a binary
system. To avoid double counting, if the output $V$-band magnitude of
any artificial star is 0.752 mag brighter than its input magnitude (as
expected for equal-mass binary systems), we assume that we are dealing
with a chance superposition and remove the star from the output
catalog. However, we do not allow the artificial stars to blend with
each other, even when their angular distance is less than 2
pixels. Therefore, we do not need to add artificial stars multiple
times to avoid blends between them. This is one of the main
differences between our novel approach and that followed by RB97 (the
latter authors added much smaller numbers of artificial stars to their
images, precisely to avoid this blending issue).\footnote{Using this
approach, we can generate any number of artificial stars, with which
we can construct an artificial-star catalog that has the same
luminosity function and spatial distribution as the observations, thus
requiring much less computing time than when using the full images for
our analysis.} The CMDs of the artificial stars are shown in
Fig. 6. Fig. 8 presents a comparison of the
observed, background-subtracted CMD with the artificial CMD containing
50\% binaries. By adding artificial stars to the raw images, we
robustly verified that the 2-pixel threshold we adopted is a good
approximation to simulate stellar blending (see Hu et al., in prep.,
for the full quantitative analysis).
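For illustration, the blend/binary magnitude recombination described above amounts to adding the fluxes of the two components; the 0.752 mag offset quoted for equal-mass pairs is simply $2.5\log_{10}2 \approx 0.7526$. A minimal sketch (not the actual pipeline code) is:

```python
import math

def combine_mags(m1, m2):
    """Magnitude of the combined light of two stars (fluxes add)."""
    f1 = 10.0 ** (-0.4 * m1)
    f2 = 10.0 ** (-0.4 * m2)
    return -2.5 * math.log10(f1 + f2)

# An equal-mass (equal-brightness) pair is brighter than either
# component by 2.5*log10(2) ~= 0.753 mag.
delta_equal = 15.0 - combine_mags(15.0, 15.0)
print(round(delta_equal, 3))  # 0.753
```

For a very faint companion the combined magnitude approaches that of the primary, which is why low-$q$ systems are hard to detect photometrically.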
For each observed star, we find all artificial stars in the input
catalog that are located within 20 pixels and within 0.2 mag in
brightness.\footnote{This choice is driven by the need to have a
statistically sound procedure: we generated artificial stars that may
or may not fall exactly on top of a real star. Therefore, we needed to
choose a region around any real star in which any artificial star(s)
correspond(s) to the real star for statistical comparison
purposes. RB97 used stars within 100 pixels, but since we are less
restricted by computational power, a smaller radial range ensures that
the spatial distributions of the real stars and the final catalog of
artificial stars are statistically the same.} We randomly extract one
of these artificial stars as the counterpart to the observed
star. Finally, we construct a synthetic catalog containing the same
total number of {\em systems} (be they single stars or unresolved
binary systems), a similar luminosity function, projected surface
number density, and superposition probability as the original
data. The only differences between the observed and synthetic catalogs
are the binary properties (both the binary fraction and the mass-ratio
distribution).
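The counterpart-matching step (collect all artificial stars within 20 pixels and 0.2 mag of an observed star, then draw one at random) might look like the following sketch; the function name, data layout, and seed are our own illustrative assumptions, not the authors' code:

```python
import math
import random

def draw_counterpart(x_obs, y_obs, v_obs, art_stars,
                     r_max=20.0, dm_max=0.2, rng=random.Random(42)):
    """Randomly pick the index of one artificial star (x, y, V) lying
    within r_max pixels and dm_max mag of an observed star, or None."""
    candidates = [i for i, (x, y, v) in enumerate(art_stars)
                  if math.hypot(x - x_obs, y - y_obs) <= r_max
                  and abs(v - v_obs) <= dm_max]
    return rng.choice(candidates) if candidates else None

# Tiny example: of two artificial stars, only the first is close
# enough in both position and magnitude.
art = [(105.0, 100.0, 18.1), (500.0, 500.0, 18.1)]
print(draw_counterpart(100.0, 100.0, 18.0, art))  # 0
```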
\section{The binary fraction of NGC~1818}
We analyze stars in the mass range from 1.3 to $1.6\,M_\odot$ (roughly
F stars), in the region of the CMD enclosed by the parallelogram shown in the
right-hand panel of Fig.~3, which is where any binary
sequence will show up most clearly. For brighter stars, the isochrone is
almost vertical, and therefore the binary sequence is too close to the
main sequence to allow us to distinguish it. (In addition, we are not
fully convinced that all these bright stars are really main-sequence
stars rather than blue stragglers.) For fainter stars, the larger
photometric errors and lower completeness make it difficult to detect
the binary population. (As we noted above, we applied
position-dependent completeness corrections to our data, which were
all $>80$\% complete in the magnitude range of interest.) Since the
isochrone is almost linear in the region of CMD space of interest, we
rotate the axes of the artificial and observed CMDs such that the
isochrone becomes vertical in a new `pseudo-color,' i.e., a new
function of $V$ and $V-I$ produced by rotating the CMD
(corresponding to the color axis projected onto the long side of the
parallelogram in Fig. 2). Note that the exact form of the
pseudo-color function is unimportant.
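Concretely, the pseudo-color is just a rotation of the $(V-I, V)$ plane; the angle convention and slope value in the following sketch are our own illustrative assumptions:

```python
import math

def pseudo_color(color, mag, theta):
    """Rotated 'color' coordinate of a CMD point (color, mag)."""
    return color * math.cos(theta) + mag * math.sin(theta)

# Points on a linear isochrone V = 16 + 5*(V-I): rotating by
# theta = -atan(1/5) makes the isochrone vertical, i.e., both
# points get the same pseudo-color.
theta = -math.atan(1.0 / 5.0)
pc1 = pseudo_color(0.5, 16.0 + 5.0 * 0.5, theta)
pc2 = pseudo_color(0.7, 16.0 + 5.0 * 0.7, theta)
print(abs(pc1 - pc2) < 1e-12)  # True
```

Any monotonic function of this rotated coordinate would serve equally well, which is why the exact form of the pseudo-color is unimportant.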
In Fig.~9 we show the cumulative distribution function
(CDF) with pseudo-color of the true CMD (solid line) and a stellar
population with photometric errors, but {\em no} binaries (dotted
line).\footnote{As the luminosity function of the final output
artificial stars is similar to the observed luminosity function, the
precise form of the stellar mass function is unimportant in this
context.} This is clearly a very poor fit to the data. We also show
the best-fitting binary fraction ($f_{\rm b}$), with a uniform $q$
distribution, of $f_{\rm b} = 0.55 \pm 0.05$ ($1\sigma$) (dashed
line), as well as the relevant model for 100\% binarity (long-dashed
line).
It is expected that this method will be insensitive to low-$q$ systems
in which the secondary component contributes very little to the
pseudo-color. To test our sensitivity to the mass ratio of a system,
we produce artificial catalogs that do not include binaries below some
mass ratio $q_{\rm cut-off}$, but contain the same number of binaries
above $q_{\rm cut-off}$. We then compare the results based on these
artificial-star catalogs with the best-fitting model, $f_{\rm b}
= 0.55$ with $q_{\rm cut-off} = 0$ (see Fig.~9).
We find that there is very little difference in the fits to the CDF
for $q_{\rm cut-off} < 0.4$, showing that we are insensitive to
binaries with mass ratios smaller than $q \sim 0.4$. In
Fig.~9 we plot the binary fraction versus $\chi^2$
probability (rms error) for different $q_{\rm cut-off}$'s. For $q_{\rm
cut-off} < 0.4$, the best-fitting binary fractions are acceptable,
since the maximum probabilities are much larger than the commonly
adopted significance limit of $p = 0.05$.\footnote{G. E. Dallal (2007),
http://www.jerrydallal.com/LHSP/p05.htm.} However, for $q_{\rm
cut-off} = 0.5$, the probability is only slightly greater than 0.02, and for
$q_{\rm cut-off} = 0.7$, the very low $\chi^2$ probability indicates that
even the best fit is poor. Therefore, we adopt a conservative limit to
the mass ratios to which we are sensitive of $q > 0.4$.
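For reference, the $p > 0.05$ acceptance criterion can be evaluated with the standard $\chi^2$ tail probability; the sketch below uses the closed form valid for an even number of degrees of freedom, with invented numbers purely for illustration:

```python
import math

def chi2_sf_even(x, dof):
    """Survival function P(X >= x) of a chi^2 distribution with an
    even number of degrees of freedom (closed form)."""
    assert dof % 2 == 0 and dof > 0
    half = x / 2.0
    return math.exp(-half) * sum(half ** i / math.factorial(i)
                                 for i in range(dof // 2))

# chi^2 = 15 over 10 dof passes a p > 0.05 criterion; 40 does not.
print(chi2_sf_even(15.0, 10) > 0.05)   # True
print(chi2_sf_even(40.0, 10) > 0.05)   # False
```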
For this discussion, we have assumed that the $q$ distribution is flat
(at least above $q=0.4$). This is consistent with the mass-ratio
distribution of A- and B-type stars in Sco OB2 (Kouwenhoven et
al. 2005, their fig.~14). However, G-dwarf mass ratios in the solar
neighborhood are concentrated toward low $q$ (Duquennoy \& Mayor
1991). Duquennoy \& Mayor (1991) show that the mean local G-dwarf $q$
distribution shows a roughly linear decrease with increasing $q$ for
$q>0.4$ (their fig.~10).
The results indicate that the {\em total} binary fraction of F stars
in NGC~1818 is $\sim 0.55$, with an approximately flat mass-ratio
distribution. However, since we are only sensitive to binaries with $q
> 0.4$, we may only constrain the binary fraction in this mass range
to $f_{{\rm b} (q>0.4)} \sim 0.35$. It is impossible to determine the
total binary fraction without making some assumptions about the form
of the $q$ distribution below 0.4. It is unlikely that the mass-ratio
distribution declines below $0.4$, meaning that a total binary
fraction of $\sim 0.55$ is probably a safe lower limit (cf. the
mass-ratio distributions found by Duquennoy \& Mayor 1991; Kouwenhoven
et al. 2007). {\em Depending on exactly which assumptions are made for
the form of the mass-ratio distribution, the total binary fraction
ranges from 0.55 to (more probably) unity.}
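The arithmetic connecting the two numbers is simple: for a flat mass-ratio distribution on $(0, 1]$, a fraction $1 - 0.4 = 0.6$ of all binaries has $q > 0.4$, so a total $f_{\rm b} = 0.55$ corresponds to $f_{{\rm b}(q>0.4)} \approx 0.33$, consistent with the quoted $\sim$0.35. A one-line sketch:

```python
def fb_above_qcut(fb_total, q_cut):
    """Binary fraction with mass ratio above q_cut, assuming a flat
    q distribution on (0, 1]."""
    return fb_total * (1.0 - q_cut)

print(round(fb_above_qcut(0.55, 0.4), 2))  # 0.33
```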
\section{Discussion and conclusions}
The CMD of NGC 1818, obtained from {\sl HST} photometry, shows a
clearly asymmetric broadening of the main sequence, which implies that
this cluster contains a large fraction of binary systems. Using the
artificial-star-test method, we estimate that the binary fraction in
the mass range from 1.3 to $1.6 M_\odot$ is $f_{\rm b} \sim 0.35$ for
systems with an approximately flat mass-ratio distribution for
$q>0.4$. This is consistent with a {\em total} binary fraction of F
stars of 0.55 to unity. Elson et al. (1998) found the fraction of
roughly equal-mass ($q \sim 1$) systems in NGC 1818 to be 30--40\% in
the core and 15--25\% outside the core, which is consistent with our
result.
Observationally determined binary fractions are rather scant in
general. Binary fractions are a clear function of stellar type, age,
and environment. We note that our result is quite close to the
fraction of binary dwarfs in the field of the same spectral type,
which is a little smaller than the fraction in Sco OB2 (Kouwenhoven
2006) and much higher than in Galactic globular clusters (e.g., RB97;
Bellazzini et al. 2002). The most recently published relevant results
include the determination by Sana et al. (2008) of the binary fraction
of O-type stars in NGC 6231 (at least 63\%), while Reipurth et
al. (2007) reported a visual-binary fraction of late-type stars in the
Orion Nebula Cluster of only 8.8\%.
At 15--25~Myr old, NGC 1818 is several crossing times old, and the
binary population would be expected to have been modified by dynamical
interactions (see Goodwin et al. 2007; and references therein). In
particular, soft (i.e., wide) binaries would be expected to have been
destroyed by this age. Therefore, the high binary fraction found for F
stars suggests that these binaries are relatively `hard' and able to
survive dynamical encounters.\footnote{We argue that the binaries
in NGC 1818 must be `hard,' since the cluster is dynamically old and
soft binaries are destroyed in roughly a crossing time (e.g., Parker
et al. 2009), so soft binaries cannot be a major component of the
binary population (some may form by capture but will be quickly
destroyed).} The relatively flat mass-ratio distribution in NGC~1818
compared to similar-mass stars in the loose association Sco OB2
(Kouwenhoven et al. 2007; $\sim q^{-0.4}$) may be evidence for a
difference in the initial populations (see also Sana et al. 2008 for
an alternative mass-ratio distribution biased toward unity). However,
it is more likely to be a product of the different dynamical evolution
of the two populations. The larger number of encounters suffered by
the binary population in NGC~1818 would be expected to disrupt
less-bound (i.e., wide and/or low-$q$) systems, and to form more
equal-mass systems, leading to a mass-ratio distribution more biased
to high $q$.
We conclude that the binary fraction of F stars in the young,
low-metallicity LMC cluster NGC~1818 is high and consistent with the
field and lower-density clusters. This suggests that, at least among
intermediate-mass stars, metallicity down to $\mbox{[Fe/H]} \sim -0.4$
dex does not suppress fragmentation and binary formation, and the
binarity of these stars is at least as high as at solar metallicity.
\acknowledgments
We gratefully acknowledge joint networking funding from the Royal
Society in the UK and the National Natural Science Foundation of China
(NSFC), supporting a UK-China International Joint Project between the
University of Sheffield and the National Astronomical Observatories
(Chinese Academy of Sciences) of China. YH, LD, and QL acknowledge
financial support from NSFC grants 10973015 and 10778719, and the
Ministry of Science and Technology of China grant 2007CB815406. RdG
acknowledges NSFC grant 11043006. He also acknowledges financial
assistance from the Kavli Institute for Astronomy and Astrophysics
prior to moving from the UK to China. This research has made use of
NASA's Astrophysics Data System Abstract Service. We also thank Thijs
Kouwenhoven for useful discussions and valuable input. Finally, RdG
and LD pay tribute to the 2003 Guillermo Haro workshop for initiating
their collaboration, having thus far resulted in 2 PhD theses and
numerous secondary benefits. This turned out to be a career-defining
moment for RdG.
\section{Introduction}\label{sec:intro}
Ribose was recently identified in the soluble organic matter of carbonaceous chondrites~\cite{Furukawa2019}, with measured concentrations of \SIrange{4.5}{25}{ppb} (parts per billion). Along with comets, the parent bodies of carbonaceous chondrites or their fragments are considered one of the major sources of pristine organic matter exogenously delivered to the Hadean and Eoarchean Earth during the late heavy bombardment, see, e.g., \cite{Chyba1990,Chyba:1992bp,Gomes2005,Kooten2021,Fischer-Godde2020,Pizzarello2017}, and such material still falls to Earth today.
One of the favored theories of the origin of life relies on the finding that RNA molecules can act as both catalysts of self-replication and genetic information storage, see, e.g.,~\mbox{\cite{Rich1962,Gilbert1986,Kruger1982,Guerrier-Takada1983,Guerrier-Takada1984,Zaug1986,Cech1986,Johnston2001,Vaidya2012}.} So-called ribozymes allow solving the fundamental ``chicken-or-the-egg'' problem in the process of abiogenesis and the emergence of life: What came first, the genetic information macromolecules like RNA and DNA coding for protein sequences or the enzymatic proteins catalyzing the formation of information storage molecules and other biomolecular reactions? One mutually compatible molecule is needed for the synthesis and \textit{survival} of the other one, and vice versa.
RNA polymers could solve this dilemma by providing both these functionalities, serving as the universal ``Swiss Army knife'' for the organocatalysis of the first complex biomolecules. The polymerization of oligonucleotides and RNA-like molecules was shown to be possible in hydrothermal field settings, in the presence of clays or salts \cite{Ferris1996,Ferris2004,DaSilva2015}, metal ion catalysts \cite{Orgel2004}, and in lipid bilayers \cite{Toppozini2013}. In particular, wet-dry cycles, which were meant to simulate the natural cycling in warm little ponds (WLPs) on freshly formed volcanic islands on the Hadean Earth, were shown to promote the formation of RNA-like chains made up of up to 300 linked residues \cite{DaSilva2015,Damer2020}. \citet{Becker2018,Becker2019} showed in lab experiments that the formation of RNA nucleosides and nucleotides was plausible in \textit{one pot} in WLPs during wet-dry cycles (requiring specific and changing conditions with respect to temperature and pH). They added ribose without forming it in the same pot, which makes the possibility of an exogenous delivery by meteorites an intriguing hypothesis. The different pathways to nucleosides and nucleotides shown in experiments by Orgel and colleagues \cite{Fuller1972a,Fuller1972b}, \citet{Kim2017,Kim2018}, and \citet{Nam2018} also all required the prior presence of ribose. \citet{Pearce2017} considered the delivery of nucleobases by carbonaceous chondrites to WLPs as a source for the formation and polymerization of oligonucleotides during wet-dry cycles. Similarly, ribose-rich meteorites might provide another crucial ingredient for this process that led to the emergence of the first RNA-like molecules on the early Earth.
The crucial components that make up the backbone of these linked oligonucleotides are the pentose (5C), ribose, and phosphate groups. Our study sought to model the formose reaction pathway leading to the synthesis of ribose inside carbonaceous planetesimals. We used the same model as developed in our previous study by \citet{Paschek2021}, where we studied the formation of nucleobases inside the parent bodies of carbonaceous chondrites. Our model comprised an up-to-date thermochemical equilibrium model coupled with a 1D thermodynamic planetesimal model to calculate the abundances of prebiotic molecules via abiotic synthesis pathways under realistic conditions inside parent body planetesimals. By applying the same model to ribose, we elucidated a more comprehensive understanding of the formation of crucial building blocks of the RNA world in outer space, very early in the history of our solar system.
The theoretical background is provided in Section~\ref{sec:theory}. First, we introduce our approach to model the formose reaction pathway, focusing on the synthesis of ribose. Further, we briefly recall our new concept to estimate the initial concentrations of the volatiles in carbonaceous planetesimals, which was explained in more detail in \citet{Paschek2021}. Next, in Section~\ref{sec:computations}, we outline our computational methods. We also explain how the Gibbs free energy of formation of glycolaldehyde needed for our theoretical model was estimated, as it is missing in the database that we used. In Section~\ref{sec:results}, we first present our experimental results regarding the efficiency of ribose formation among other 5Cs in the presence of various mineral catalysts representative of carbonaceous chondrites. Second, we present our theoretical results, incorporating the ribose yields obtained in our experiments. We analyze our results and compare them to the measured values in carbonaceous chondrites. The discussion and conclusions follow in Section~\ref{sec:discussion}. We discuss the possible decomposition processes of ribose and give a suggestion for the region within the planetesimals with the likely highest ribose abundance.
\section{Materials and Methods}
\subsection{Theory}\label{sec:theory}
\subsubsection{Formose Reaction Pathway}\label{sec:reactions}
The formose reaction, first described by \citet{Butlerow1861}, is a reaction network forming a variety of sugars. It starts with formaldehyde in an aqueous solution. In a self-condensation, a dimerization of formaldehyde leads to the formation of glycolaldehyde, starting an autocatalytic cycle \cite{Breslow1959}. The formation of glycolaldehyde itself is not yet fully understood, since a freshly distilled formaldehyde solution leads only to the nonproductive reaction to methanol and formate \cite{Cannizzaro1853}. Small amounts of sugars or impurities are needed to start the formose reaction, e.g., \SI{3}{ppm} (parts per million) of glycolaldehyde are sufficient \cite{Socha1980}. Recent experimental and numerical studies suggested that glycolaldehyde could be formed in interstellar clouds, either by surface hydrogenation of \ce{CO} molecules on icy dust grains~\cite{Fedoseev2015}, or by formaldehyde reacting with its isomer hydroxymethylene \cite{Eckhardt2018}. Glycolaldehyde drives the formose reaction toward more complex sugars. For this reason, we started our laboratory experiments with a solution of formaldehyde and glycolaldehyde and modeled the reaction accordingly in our theoretical studies.
A multitude of different catalysts is effective in the formose reaction. Very effective catalysts are hydroxides, carbonates, and oxides of alkali/alkaline earth metals, as well as aluminosilicates, tertiary amines, lanthanide hydroxides, thallium hydroxide, and lead oxide, see, e.g., \cite{Iqbal2012}. In particular, hydroxides and carbonates are of great interest, as these are commonly found in carbonaceous chondrites, see, e.g., \cite{Barber1981}.
In the context of prebiotic chemistry, a presumed reaction pathway for the synthesis of molecules involved in the emergence of life has to start from abiotic and naturally abundant molecules.
Our considered reaction pathway for the formation of ribose (\ce{C5H10O5}) from formaldehyde (\ce{CH2O}) and glycolaldehyde (\ce{C2H4O2}) can be summarized as
\begin{equation}\label{equ:formose}
\ce{CH2O{}_{(aq)} + 2C2H4O2{}_{(aq)} ->[catalyst] C5H10O5{}_{(aq)}}.
\end{equation}
Here, ``catalyst'' stands for one of the following hydroxides or carbonates: \ce{Ca(OH)2}, \ce{CaCO3}, \ce{KOH}, or \ce{K2CO3}. We used one of these catalysts in each run of our laboratory experiments. The formose reaction in Equation~\eqref{equ:formose} is an oversimplification of the far more complex reaction network toward sugars. Hence, we performed the laboratory work to correct our theoretical results by using realistic ribose yields measured in our experiments.
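As a consistency check, the stoichiometry of Equation~\eqref{equ:formose} balances atom by atom: one \ce{CH2O} plus two \ce{C2H4O2} supply exactly the \ce{C5H10O5} of ribose. A minimal sketch:

```python
from collections import Counter

# Atom counts for the reaction CH2O + 2 C2H4O2 -> C5H10O5.
CH2O    = Counter({"C": 1, "H": 2, "O": 1})
C2H4O2  = Counter({"C": 2, "H": 4, "O": 2})
C5H10O5 = Counter({"C": 5, "H": 10, "O": 5})

lhs = Counter()
for formula, n in [(CH2O, 1), (C2H4O2, 2)]:
    for atom, count in formula.items():
        lhs[atom] += n * count

print(lhs == C5H10O5)  # True
```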
In the framework of the previous studies of \citet{Pearce2015,Pearce2016} and \citet{Paschek2021}, an ID-number was assigned (maximum two digits) to each considered reaction pathway (in those studies, reaction pathways for the formation of nucleobases). We extended this numbering scheme to the formose reaction and assigned no.~101 to the reaction pathway considered here in Equation~\eqref{equ:formose} (we reserved no.~100 for the potential formation of ribose from formaldehyde only, without initially present glycolaldehyde, which was not considered here).
Theoretical and experimental studies proposed other possible reactants and reaction pathways for the formation of ribose and RNA precursors in combination with nucleobases. \citet{Weber2006} demonstrated a stereospecific formation of tetroses from glycolaldehyde catalyzed by peptides. This could be one of the possible solutions to the unsolved question of how the homochirality of life emerged. \citet{Jeilani2020} used quantum chemistry calculations to verify the possibility of abiotic ribose and RNA nucleoside synthesis by free radical reactions. They started from formaldehyde and used \ce{Ca^{2+}} and \ce{CaOH+} cations as catalysts. The hydroxymethyl radical \ce{^{.}CH2OH} was identified as a potential intermediate in the dimerization of formaldehyde and the autocatalytic cycle. \mbox{\citet{Teichert2019}} and \citet{Kruse2020} presented a new direct formation pathway to DNA nucleosides. They started with nucleobases and specifically formed deoxyribose by condensation with acetaldehyde and sugar-forming precursors. \citet{Saladino2015} and \citet{Sponer2016} showed the formation of nucleosides starting from formamide catalyzed by powdered meteorites. Additionally, \citet{Eschenmoser2007a,Eschenmoser2007b} proposed a hypothetical reaction pathway starting from \ce{HCN}. Amino acids and carbohydrates might be formed with glyoxylate (glyoxylic acid in neutral solution) and its dimer dihydroxyfumarate as intermediates in the so-called ``glyoxylate scenario''. \citet{Banfalvi2021} reviewed this scenario in comparison to the formose reaction. Further reviewed aspects are an alternative mechanism for the formation of RNA starting with the ribose-phosphate backbone, which then binds nucleobases, skipping ribonucleotides as intermediate molecules. Moreover, this study described why ribose is the best fitting aldopentose for the build-up of RNA, as ribose allows for the maximum flexibility of RNA.
\subsubsection{Initial Concentrations of Reactants}\label{sec:concs}
In order to model the formose reaction in Equation~\eqref{equ:formose} with our theoretical chemical model, we used the initial concentrations of the reactants as inputs to determine the resulting abundance of ribose. Comets are believed to have the most pristine composition, and to most closely reflect the conditions that prevailed before or during the early stages in the formation of our solar system. Therefore, comets are the only reservoir of such pristine objects in the solar system that still exists and is accessible to measurements. We took the abundances measured spectroscopically in comets \cite{Mumma2011} (and references therein) as the first reference values. With this, we followed the same approach as described in the previous studies by \citet{Cobb2015} and \citet{Pearce2016}.
Nevertheless, the icy pebbles making up the source material of the parent bodies of carbonaceous chondrites are believed to originate from more inner and warmer regions further inside the solar nebula than those of comets. The main-belt asteroid 19~Fortuna, orbiting the Sun at $\sim$$\SI{2.5}{au}$, was identified as the potential parent body source of CM (Mighei-type) meteorites \cite{Burbine2002}. Therefore, we assumed the inner region of \SIrange{2}{3}{au} to be the forming location of carbonaceous chondrite parent bodies in the solar nebula (in a first approximation neglecting possible radial migration processes). This region was more distant to the proto-Sun than the water snowline at ${T\lesssim\SI{150}{\kelvin}}$ \cite{1977E&PSL..36....1W,2001Sci...293...64A,2001M&PS...36..671C,Lodders03,2005PNAS..10213755B,2018GeCoA.239...17B,2020ApJ...897...82V,Oberg_Bergin21,Lichtenberg2021}. As a result, water ice was supposed to be preserved in the source material of carbonaceous chondrites. On the other hand, this region is inside the sublimation zone of the reactant formaldehyde \cite{1977E&PSL..36....1W,2001Sci...293...64A,2001M&PS...36..671C,Lodders03,2005PNAS..10213755B,2018GeCoA.239...17B,2020ApJ...897...82V,Oberg_Bergin21,Lichtenberg2021}, which has a sublimation temperature of ${\sim}$$\text{\SIrange{40}{45}{\kelvin}}$ \cite{Cuppen_ea17}. Consequently, the icy pebbles in this region are expected to lose a substantial fraction of the most volatile constituents, e.g., formaldehyde, which diffused through the monolayers of water ice and sublimed into space, leading to a more volatile-poor water ice mantle \cite{C5CP00558B}. Thus, the more volatile formaldehyde should have been depleted in the carbonaceous chondrites' building blocks compared to the pristine ices in the comet-forming zone.
It was predicted that carbonaceous planetesimals were rapidly assembled via streaming instabilities from the source material \cite{Johansen_ea07,Ormel2010,Klahr_Schreiber20}. However, unlike comets, these pristine pebbles were not preserved. Therefore, we had to refer to models predicting the depletion of formaldehyde \cite{C5CP00558B} to simulate the physico-chemical processes in the whole solar nebula~\cite{Visser_ea09,SW11,Drozdovskaya_ea16,Bergner_Ciesla21,Lichtenberg2021}. We then compared the remaining volatile abundances between the forming regions of comets and carbonaceous chondrite parent bodies in these solar nebular models. \citet{Drozdovskaya_ea16} simulated two different scenarios of the solar nebula collapsing into a protoplanetary disc. They predicted a depletion of formaldehyde ice by about three or more orders of magnitude when comparing between regions at \SIrange{1}{10}{au} and ${>}$$\SI{30}{au}$. The abundance changed between outer and inner regions by a factor of ${\le}$$\num{4.25e-3}$ (compare to Table~4 in their publication). In the models of \citet{Visser_ea09}, \citet{SW11}, and \citet{Bergner_Ciesla21} similar or even higher depletion factors were predicted. The large range of possible depletion factors results from different model assumptions, different computed scenarios, considered physico-chemical mechanisms, and large uncertainties within and between the models. Note that the water ice was not substantially depleted at \SIrange{2}{3}{au} in all the models, which coincides with the considerations about the water snowline mentioned above. This allowed us to normalize all molecular abundances to that of water~ice.
In the temperature programmed desorption (TPD) experiments in the laboratory, it was shown that the sublimed volatiles were not able to leave the water ice matrix freely. A fraction of the volatile ices remained trapped in the water ice and co-desorbed at higher temperatures of ${T \gtrsim \SI{150}{\kelvin}}$ when the water ice sublimed. However, icy pebbles in the solar nebula might have gradually lost their volatile content over thousands of years and became more volatile-poor compared to the results of the TPD experiments, which were conducted over short timescales (hours--days) \citep{Cuppen_ea17,Potapov_McCoustra21}. Another TPD experiment showed that some of the formaldehyde polymerizes in reaction with water to polyoxymethylene and therefore did not co-desorb with water at high temperatures \cite{Noble2012}. Polymerization of formaldehyde could reduce the amount of its monomer form that was available to the synthesis of ribose in the formose reaction. When the formed parent body planetesimal heated up, a fraction of the formaldehyde could be trapped in its polymeric form, preventing it from participating in the synthesis of more complex organics.
In summary, we used a factor of $10^{-3}$ to reduce the abundance of formaldehyde measured in comets, corresponding to a conservative estimate of an upper bound motivated by the value of ${\le}$$\num{4.25e-3}$, given in the solar nebula model by \citet{Drozdovskaya_ea16}. This value was chosen in this approximate manner since it was based on many assumptions and uncertainties. As the solar nebula models are in general strong simplifications of the different complex desorption, trapping, and chemical mechanisms, which were observed in, and deduced from, the TPD experiments and could have occurred during the formation of the protoplanetary disc of the solar system, they most likely overestimate the depletion of volatiles.
Moreover, different assumed sizes of the icy pebbles forming the carbonaceous planetesimals introduce largely different predictions for the diffusion times of formaldehyde and other volatiles, since the molecules can leave faster through the porous water ice layers in smaller pebbles. The diffusion rate of formaldehyde at ${T \gtrsim \SI{90}{\kelvin}}$ was measured experimentally and predicted via molecular dynamics calculations \cite{C5CP00558B} (and references therein). The resulting diffusion and depletion timescales of volatiles then depend on the thickness of the bulk water ice that needs to be passed, and on the sizes of the icy pebbles accordingly. Further, uncertainties are caused by possible migration processes within the solar nebula and the protoplanetary disc \cite{Burkhardt2021,Johansen2021,Kooten2021}, as reservoirs of pebbles from different regions further outside the solar nebula could contribute to the volatile content in the forming carbonaceous chondrite parent bodies.
The chosen depletion factor of formaldehyde provided us with a conservative estimate of the least depletion predicted in the solar nebula models and thus indicated a possible upper limit for the initial formaldehyde concentrations in the source material of carbonaceous chondrite parent bodies. Other processes, e.g., the outgassing in these porous bodies heated by the decay of short-lived radioactive isotopes, was shown to potentially reduce their most volatile content even further \cite{Lichtenberg2021}.
TPD experiments with glycolaldehyde showed that its desorption was dictated by water ice \cite{Burke2014}. As we assumed water ice to stay frozen in the pebbles, we expected the glycolaldehyde abundance to be similar to cometary values and therefore we introduced no additional depletion factor. A more in-depth and detailed analysis of the depletion of volatiles was described in our previous study \cite{Paschek2021}.
It is important to note that our prediction of the concentrations of the volatile reactants was tailored to the view that carbonaceous planetesimals were assembled mostly instantaneously via streaming instabilities in the expected \SIrange{2}{3}{au} region inside the solar nebula. For this reason, it is strongly dependent on assumptions on the physico-chemical processes dominating the solar nebula and protoplanetary disc of the solar system.
Table~\ref{tab:init_concs} lists the concentrations of the considered reactants. The cometary concentrations were corrected by the depletion factor of formaldehyde in the correction factor column, giving rise to the solar nebula model-guided estimate for the initial concentrations in carbonaceous chondrite parent bodies in the predicted concentration column. The predicted concentrations were then used as the input parameters in our theoretical chemistry calculations for the abundances of ribose in this study.
\begin{table}[ht]
\setlength{\tabcolsep}{2.25mm}
\footnotesize
\caption{Initial concentrations of reactants. The concentrations were normalized to water. The predicted concentrations using solar nebula models (except glycolaldehyde) were already used in the previous study \cite{Paschek2021}, which were the adjusted version of the ones found in comets \cite{Mumma2011} (and references therein) used by \citet{Cobb2015} and \citet{Pearce2016}. This correction was applied to be more representative of the concentrations present in pristine carbonaceous chondrite parent bodies. The predicted concentrations (the last column) are the ones used for the theoretical abundance calculations in this study and were derived by multiplying the cometary concentrations (the third column) with the correction factors (the fourth column). \label{tab:init_concs}}
\begin{tabular}{ccccc}
\toprule
\textbf{Molecule} & \multirow{2}{*}{\textbf{Name}} & \textbf{Cometary Concentration} & \multirow{2}{*}{\textbf{Correction Factor}} & \textbf{Predicted Concentration} \\
\textit{\textbf{i}} & & {\boldmath{${[\mathrm{mol}_i\cdot{}\mathrm{mol}_{\ce{H2O}}^{-1}]}$}} & & {\boldmath{${[\mathrm{mol}_i\cdot{}\mathrm{mol}_{\ce{H2O}}^{-1}]}$}} \\
\midrule
\ce{H2O} & water & \phantom{(0.05--}1\phantom{.60)$\times$~$10^{-0}$} & - & \phantom{(0.05--}1\phantom{.60)$\times$~$10^{-0}$}\\
\ce{CH2O} & formaldehyde & \phantom{(0.05--}6.60\phantom{)}$\times$~$10^{-4}$ & $10^{-3}$ & \phantom{(0.05--}6.60\phantom{)}$\times$~$10^{-7}$ \\
\ce{C2H4O2} & glycolaldehyde & (0.05--4.00)~$\times$~$10^{-4}$ & - & (0.05--4.00)~$\times$~$10^{-4}$ \\
\bottomrule
\end{tabular}
\end{table}
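The predicted concentrations in Table~\ref{tab:init_concs} follow from a single multiplication of the cometary value by the correction factor; a sketch reproducing the formaldehyde entry (dictionary names are our own):

```python
# Cometary concentrations relative to water (Table 1) and the
# depletion factors; glycolaldehyde is given as a (min, max) range.
cometary   = {"H2O": 1.0, "CH2O": 6.60e-4, "C2H4O2": (0.05e-4, 4.00e-4)}
correction = {"H2O": 1.0, "CH2O": 1e-3,    "C2H4O2": 1.0}

predicted_ch2o = cometary["CH2O"] * correction["CH2O"]
print(f"{predicted_ch2o:.2e}")  # 6.60e-07
```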
\subsection{Computations}\label{sec:computations}
\textls[-15]{We calculated the amounts of ribose using the formose reaction pathway (see Equation~\eqref{equ:formose})} under the conditions inside the carbonaceous planetesimals. The thermochemical equilibrium calculations were performed using the software \textit{ChemApp}, distributed by GTT Technologies \cite{Petersen2007} (version 740 (6 April 2020), available online: \url{https://gtt-technologies.de/software/chemapp/}, accessed on 17 November 2021). The central input for these calculations with ChemApp was the Gibbs free energies of formation of the molecules involved in the modeled reaction. This Gibbs energy data were taken from the thermodynamic database \emph{CHNOSZ} \cite{Dick2019} (version 1.3.6 (16 March 2020), authored by Jeffrey M.\ Dick, available online: \url{https://www.chnosz.net}, accessed on 17 November 2021). Further, we used the \textit{GCC} compiler \cite{gcc} and the following software packages: \textit{pybind11} \cite{pybind11}, \textit{rpy2} \cite{rpy2}, \textit{NumPy} \cite{harris2020array}, \textit{SciPy} \cite{2020SciPy-NMeth}, and \textit{Matplotlib} \cite{matplotlib2007}.
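Thermochemical equilibrium calculations of this kind ultimately rest on the relation $K = \exp(-\Delta_\mathrm{r} G / RT)$ between the Gibbs free energy of reaction and the equilibrium constant; a generic sketch (not ChemApp's internals, and with an invented $\Delta_\mathrm{r} G$ value) is:

```python
import math

R = 8.314462618  # molar gas constant, J/(mol K)

def equilibrium_constant(delta_g, temperature):
    """Equilibrium constant from the Gibbs free energy of reaction
    (J/mol) at the given temperature (K)."""
    return math.exp(-delta_g / (R * temperature))

# A hypothetical reaction with Delta_r G = -20 kJ/mol at 300 K is
# strongly product-favored (K >> 1); Delta_r G = 0 gives K = 1.
print(equilibrium_constant(-20e3, 300.0) > 1.0)  # True
print(equilibrium_constant(0.0, 300.0))          # 1.0
```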
\subsubsection{Planetesimal Model}
The environmental conditions, mainly the temperatures, inside the parent bodies were provided by the thermal planetesimal model developed by \citet{Lange2021}. This 1D model considered the radioactive decay of short- and long-lived isotopes as the heat source in the planetesimal interiors. It simulated the thermal evolution of planetesimals from their formation in the solar system until the present time. The available data provided us with the radial, time-dependent temperature profiles inside these bodies. The adjustable parameters for the simulation of these planetesimals were mainly their radius and the time of formation after calcium-aluminium-rich inclusions (CAI). As in our previous study \cite{Paschek2021}, we considered hypothetical planetesimals with radii of \SIrange{3}{150}{\kilo\meter} and times of formation after CAI of \SIrange{0.5}{3.5}{\mega\year}. A porosity of ${\phi = 0.2}$ was assumed in all models, representing a typical value found in carbonaceous chondrites \cite{Mason1963}.
One important aspect to note is that, compared to the more complex models of \citet{Lange2021}, the planetesimal models here are a simplified version adapted to parent bodies of carbonaceous chondrites.
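The dominant early heat source in these models, the decay of \ce{^{26}Al}, follows a simple exponential law. The following minimal Python sketch (not part of the actual model code; the half-life is the commonly quoted literature value and the initial heating rate is an arbitrary placeholder) illustrates why the time of formation after CAI controls the peak temperatures so strongly:

```python
import math

# Half-life of 26Al in Myr (commonly quoted literature value).
T_HALF_AL26 = 0.717

def radiogenic_heating(t_myr, q0=1.0, t_half=T_HALF_AL26):
    """Specific heating rate Q(t) = q0 * exp(-ln(2) * t / t_half).

    q0 is a placeholder initial heating rate (arbitrary units); the
    full model of Lange et al. sums several short- and long-lived
    radionuclides with their individual half-lives.
    """
    return q0 * math.exp(-math.log(2.0) * t_myr / t_half)

# A planetesimal forming 3.5 Myr after CAI starts with far less 26Al
# heating per unit mass than one forming 0.5 Myr after CAI:
print(radiogenic_heating(0.5) / radiogenic_heating(3.5))  # roughly 18
```

Long-lived radionuclides such as \ce{^{40}K} or \ce{^{238}U} follow the same law with half-lives of Gyr order, which is why they only matter for the late, second temperature rise in large bodies.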
\begin{figure}[p]
\includegraphics[width=0.98\columnwidth]{planetesimal_time_evolution_4km.pdf}
\caption{Temperature evolution inside a small- and early-formed model planetesimal over time. The temperature curves are given for different distances from the center inside the planetesimal. Properties of the planetesimal: ${\text{Radius} = \SI{4}{\kilo\meter}}$, porosity ${\phi = 0.2}$, and time of formation after calcium-aluminium-rich inclusions ${\text{(CAI)} = \SI{1}{\mega\year}}$. Reproduced from a simplified and adapted version of the model by \citet{Lange2021}. The temperature evolution for the other available model planetesimals was described in the previous study \cite{Paschek2021}.\label{fig:planetesimal_4km}}
\end{figure}
\begin{figure}[p]
\includegraphics[width=0.98\columnwidth]{planetesimal_time_evolution_149km.pdf}
\caption{Temperature evolution inside a large- and late-formed model planetesimal over time. The temperature curves are given for different distances from the center inside the planetesimal. Properties of the planetesimal: ${\text{Radius} = \SI{150}{\kilo\meter}}$, porosity ${\phi = 0.2}$, and time of formation after ${\text{CAI} = \SI{3.5}{\mega\year}}$. Reproduced from a simplified and adapted version of the model by \citet{Lange2021}. The temperature evolution for the other available model planetesimals was described in the previous study \cite{Paschek2021}.\label{fig:planetesimal_150km}}
\end{figure}
Two examples of this planetesimal model can be seen in Figures~\ref{fig:planetesimal_4km}~and~\ref{fig:planetesimal_150km}. The first rise of temperature is caused by the decay of short-lived radionuclides (mainly \ce{^{26}Al}, with a small contribution of \ce{^{60}Fe}). For larger planetesimals, such as the one shown in Figure~\ref{fig:planetesimal_150km} with a radius of \SI{150}{\kilo\meter}, long-lived radionuclides (\ce{^{40}K}, \ce{^{232}Th}, \ce{^{235}U}, and \ce{^{238}U}) can also have a significant contribution. This results in a second temperature rise or plateau in the outer shells of the planetesimal over time.
For a more comprehensive overview of the software, the Gibbs free energies of formation, and more details about the planetesimal model, we refer to our previous study~\cite{Paschek2021}. The pressure dependence of the Gibbs free energies of formation is very marginal, allowing us to assume \SI{100}{\bar} for all thermochemical calculations inside the entire planetesimal interiors. Further information about the thermochemical equilibrium calculations can be found in \citet{Pearce2016} and \citet{Cobb2015}. The source code, excluding the proprietary ChemApp library, and including the data of the planetesimal models, is openly available on Zenodo at (\cite{klaus_paschek_2021_5774880}, \url{https://doi.org/10.5281/zenodo.5774880}, accessed on 1 March 2022) and as a Git repository: \url{https://github.com/klauspaschek/prebiotic_synthesis_planetesimal}, accessed on 17 November 2021.
\subsubsection{Gibbs Free Energies of Formation of Glycolaldehyde}\label{sec:gibbs_glycolaldehyde}
The CHNOSZ database used in our modeling does not contain the Gibbs energies for glycolaldehyde. Therefore, we provide and compare the two ways of estimating these energies, which we used in our calculations. \citet{Cobb2015} gave an estimate by modeling glycolaldehyde as a mixture of acetaldehyde and acetic acid, for which the CHNOSZ database does have the Gibbs energies. The motivation is that the combination of acetaldehyde and acetic acid roughly resembles glycolaldehyde's functional groups and structure. In the works of \citet{Emberson2010} and \citet{Fernandez2010}, the respective enthalpies of formation $\Delta H_f$ for acetaldehyde and acetic acid were weighted to fit the one of glycolaldehyde given by \citet{Espinosa-Garcia2005} at standard conditions. The weights were found to be \SI{61.1}{\percent} and \SI{38.2}{\percent}, respectively. The same weighting coefficients were used to weight the Gibbs free energies of formation $\Delta G_f$ of acetaldehyde and acetic acid (taken from CHNOSZ) to estimate the one for glycolaldehyde. These weighted energies can be found in Figure~\ref{fig:gibbs_glycolal}a.
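The weighting scheme described above reduces to a one-line estimate. The sketch below (Python; the numerical $\Delta G_f$ inputs in the example are placeholders standing in for the temperature-dependent CHNOSZ values) shows the combination used for glycolaldehyde:

```python
# Weighting coefficients quoted in the text (Emberson 2010; Fernandez 2010).
W_ACETALDEHYDE = 0.611
W_ACETIC_ACID = 0.382

def gibbs_glycolaldehyde(dg_acetaldehyde, dg_acetic_acid):
    """Estimate Delta G_f of glycolaldehyde as the weighted sum of the
    Delta G_f values of acetaldehyde and acetic acid (e.g. in kJ/mol).

    In the actual calculations the inputs are read from the CHNOSZ
    database at each temperature; the values used below are placeholders.
    """
    return W_ACETALDEHYDE * dg_acetaldehyde + W_ACETIC_ACID * dg_acetic_acid

print(gibbs_glycolaldehyde(-140.0, -390.0))
```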
For the second method of estimation, we performed quantum chemistry calculations using the \textit{Gaussian 09} software package \cite{g09} (available online: \url{https://gaussian.com/}, accessed on 17 November 2021). These quantum chemistry calculations were used to obtain the atomic and molecular energies and entropies necessary to directly calculate the Gibbs free energy of formation for glycolaldehyde. All quantum chemistry calculations were performed using the Becke, three-parameter, Lee–Yang–Parr (B3LYP) hybrid density functional~\cite{Stephens1994,Becke1993,Lee1988} and the polarizable continuum model (PCM) for aqueous solution effects \cite{Miertus1981,Cammi1995}. Geometries were optimized using the \text{6-31G(d,p)} basis set, and single-point energies and frequencies were then calculated with the larger \text{6-311++G(2df,2p)} basis set, using the geometries optimized in the previous step. This particular method was used in the past by \citet{Espinosa-Garcia2005} to calculate the enthalpy of glycolaldehyde (see the first estimation method above). All calculations were performed at \SI{100}{\bar} to match the peak pressures inside the meteoritic parent bodies \cite{Pearce2016}, for the reasons explained above (see Section~\ref{sec:computations}). The Gibbs free energies of formation were calculated using the three-step method outlined by \citet{Ochterski2000}, i.e., (1) calculate the enthalpy of formation at \SI{0}{\kelvin}, (2) calculate the enthalpy of formation at \SI{298}{\kelvin} from elements in their standard states, and (3) calculate the entropy of formation $\Delta S_f$ from elements in their standard states at \SI{298}{\kelvin}, and insert everything into the standard Gibbs formula ${\Delta G_f = \Delta H_f - T\Delta S_f}$.
The atomic and molecular energies and entropies used for the $\Delta G_f$ calculations were obtained from our quantum chemistry calculations. The only exceptions were the heats of formation for the atoms and the entropy for standard state carbon (graphite) at \SI{298}{\kelvin}, which were obtained from experiments \cite{Curtiss1997} and National Bureau of Standards (current name: National Institute of Standards and Technology) tables of the chemical thermodynamic properties \cite{Wagman1982}, respectively. Due to the lack of experimental entropic data for carbon (graphite) above \SI{298}{\kelvin}, a \SI{1}{\percent} increase per \SI{25}{\kelvin} was introduced to the experimental carbon (graphite) entropy. This correction was done to match similar entropy increases from our quantum chemistry calculations of hydrogen and oxygen. Lastly, the atomic enthalpy correction was calculated regarding the gas-state carbon rather than carbon (graphite), which introduced ${\sim}$$\SI{3}{\kilo\joule\per\mole}$ error into our calculations.
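The final step of the procedure and the graphite entropy correction described above reduce to a few lines. This minimal Python sketch uses the standard relation ${\Delta G_f = \Delta H_f - T\Delta S_f}$ and the \SI{1}{\percent} per \SI{25}{\kelvin} entropy increase adopted in the text; the standard-state graphite entropy is the commonly tabulated value of about \SI{5.74}{\joule\per\mole\per\kelvin}:

```python
def graphite_entropy(temp_k, s298=5.74):
    """Entropy of carbon (graphite) in J/(mol K).

    Above 298 K the 1% increase per 25 K adopted in the text is applied;
    s298 is the tabulated standard-state value (about 5.74 J/(mol K)).
    """
    if temp_k <= 298.0:
        return s298
    return s298 * 1.01 ** ((temp_k - 298.0) / 25.0)

def gibbs_of_formation(delta_h_f, delta_s_f, temp_k):
    """Standard relation Delta G_f = Delta H_f - T * Delta S_f
    (energies in J/mol, entropies in J/(mol K)); Delta H_f and
    Delta S_f come from the three-step procedure applied to the
    quantum chemistry output."""
    return delta_h_f - temp_k * delta_s_f

# Placeholder inputs, not computed values:
print(gibbs_of_formation(-100.0e3, -50.0, 300.0))  # -85000.0 J/mol
```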
\begin{figure}[t]
\includegraphics[width=\textwidth]{gibbs_glycolaldehyde.pdf}
\caption{Gibbs free energies of formation of different molecules plotted against temperature at \SI{100}{\bar}. The energies for glycolaldehyde were estimated values calculated (\textbf{a},\textbf{b}) either by the technique developed by \citet{Emberson2010} and \citet{Fernandez2010}, and used in \citet{Cobb2015} (denoted in figure legend as ``weighted'', plotted as solid lines with filled symbols), (\textbf{c}) or by using the computational quantum chemistry software Gaussian \cite{g09} (denoted in figure legend as ``Gaussian'', plotted as solid lines with hollow symbols). Data taken from the CHNOSZ database are plotted as solid and dashed lines without symbols.\label{fig:gibbs_glycolal}}
\end{figure}
We validated this method by calculating $\Delta G_f$ for acetaldehyde and formaldehyde at \SI{298}{\kelvin}. Our calculated values were within \SI{6}{\kilo\joule\per\mole} and \SI{10}{\kilo\joule\per\mole} of the values from the CHNOSZ database, respectively. This method could be improved further by calculating the enthalpies and entropies for carbon (graphite) to be consistent with the quantum calculations of the other entropies. However, given that the results of the ribose synthesis in this study were not sensitive to our glycolaldehyde calculations for either the first or this second estimate (see the explanation below), we did not refine these calculations further at this time. The resulting second estimate for $\Delta G_f$ of glycolaldehyde is plotted in Figure~\ref{fig:gibbs_glycolal}c.
Although the two estimates for $\Delta G_f$ of glycolaldehyde were slightly different, they both showed the same theoretical results for the ribose abundances. This was because the adopted reaction pathway for the synthesis of ribose is limited by the initial abundances of reactants, and hence is less sensitive to the $\Delta G_f$ values. This was verified by analyzing the output abundances from ChemApp, which showed that all initially present reactants had zero abundances after ribose was formed.
Consequently, comparing the Gibbs energies of reactants and products allows us to determine whether ribose should be formed. When looking at the modeled reaction pathway for ribose (see Equation~\eqref{equ:formose}), one molecule of formaldehyde and two of glycolaldehyde are combined to form one molecule of ribose. Following this stoichiometry, one has to compare the Gibbs energies of ribose with the sum of the energies of formaldehyde and twice the energies of glycolaldehyde (\ce{1\text{formaldehyde} + 2\text{glycolaldehyde} -> 1\text{ribose}}, Equation~\eqref{equ:formose}). Ribose should only be formed if the energies for ribose are more negative than the stoichiometric sum of those for formaldehyde and glycolaldehyde.
Figure~\ref{fig:gibbs_glycolal}b shows this sum compared to ribose with the first weighted estimate for glycolaldehyde, and Figure~\ref{fig:gibbs_glycolal}c shows the comparison with the second estimate with Gaussian. The stoichiometric sum of the Gibbs energies of formaldehyde and glycolaldehyde was less negative than the Gibbs energies of ribose in both cases. Thus, indeed, only the limitation by the initial abundances of reactants mattered for the ribose formation. If the stoichiometric sum of formaldehyde and glycolaldehyde crossed with the energies of ribose, no ribose would be formed at the respective temperatures. Therefore, both estimates of the Gibbs energies of glycolaldehyde allowed us to model this reaction pathway in the same way.
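The stoichiometric criterion just described amounts to a single comparison. A minimal sketch (Python; the numerical energies in the example are placeholders, not database values):

```python
def ribose_formation_favorable(dg_ribose, dg_formaldehyde, dg_glycolaldehyde):
    """Gibbs-energy criterion for 1 CH2O + 2 C2H4O2 -> 1 ribose:
    the reaction proceeds to the product side only if Delta G_f(ribose)
    is more negative than Delta G_f(CH2O) + 2 * Delta G_f(C2H4O2)."""
    return dg_ribose < dg_formaldehyde + 2.0 * dg_glycolaldehyde

# Placeholder energies (kJ/mol) for illustration only:
print(ribose_formation_favorable(-800.0, -130.0, -300.0))  # True
```

Since both glycolaldehyde estimates satisfy this inequality over the relevant temperature range, the equilibrium is limited only by the initial reactant abundances, as stated above.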
\subsection{Laboratory Experiments}\label{sec:lab}
There are about 40 different products formed in the formose reaction \cite{Rauchfuss2005}. These include plenty of sugars such as aldoses and ketoses, sugar alcohols, sugar acids, branched sugars, and even decomposition products such as lactic acid, see also, e.g., \cite{Omran2020}. Modeling the complex formose network with its different reactions such as aldol reactions, retro-aldol reactions, rearrangements, and decomposition reactions is a major challenge for theoretical and analytical chemistry, see, e.g., \cite{Kim2011}. Accordingly, following the kinetics of every single molecule is almost impossible, and the thermodynamic data of the molecules and reactions are often missing.
Therefore, the reaction pathway considered here and summarized in Equation~\eqref{equ:formose} is a major simplification. To compensate for this, we performed laboratory experiments of the formose reaction in an aqueous solution and measured the amounts of the resulting sugars. This allowed us to find the fraction of ribose in all forming 5Cs.
In our experiments, the reaction was started with a very concentrated solution of formaldehyde (\SI{1.34}{\mole\per\liter}) and glycolaldehyde (\SI{0.269}{\mole\per\liter}, \SI{20}{\mole\percent}) based on the prebiotic DNA formation pathway demonstrated by \citet{Teichert2019} (for more context, see the review by \citet{Kruse2020}). To this solution, \SI{10}{\mole\percent} of one of the catalysts was added, and the temperature was kept stable. At different time intervals, depending on the activity of the catalyst, a sample was taken. The whole solution had a volume of \SI{1}{\milli\liter} at the start of each run, and samples were taken in volumes of \SI{50}{\micro\liter} each. The formose reaction was stopped by adding citric acid and freezing the sample. After lyophilization, the sample was analyzed via gas chromatography in combination with a mass spectrometer (GC-MS), as described in the preceding studies by \citet{Haas2018,Haas2020}.
There are also other possibilities to analyze products in the formose reaction, e.g., coupling liquid chromatography with UV and electrospray ionization-mass spectrometry (abbreviated as LC-UV and ESI-MS) \cite{Zweckmair2013}. This other method could help overcome problems, e.g., thermal instability of some of the analyzed molecules in GC-MS analysis, and could be interesting for follow-up studies.
The maximum yields of ribose in all 5Cs inferred from our measurements were used as a correction factor for the ribose abundances resulting from our theoretical thermochemical equilibrium model.
\section{Results}\label{sec:results}
\subsection{Experimentally Found Yields of Ribose in All 5Cs}\label{sec:yields}
Figure~\ref{fig:yields} shows our experimental results for the fraction of ribose in all 5Cs plotted against the elapsed time of the reaction. Calcium hydroxide was the catalyst with the highest activity. Compared to the other catalysts, it formed the highest amount of ribose in the shortest amount of time. However, one has to be aware that the decomposition of the formose products is also more rapid (see the Discussion in Section~\ref{sec:discussion}). Focusing on ribose, it took less than \SI{20}{\minute} to reach its maximum yield. Likewise, potassium hydroxide and potassium carbonate produced ribose very quickly in significant amounts, although their maximum values were lower than those obtained with calcium hydroxide. Calcium carbonate took significantly longer to produce higher sugars, which can be explained by its lower solubility. Since we used \SI{10}{\mole\percent} of the catalyst in each experiment, calcium carbonate reached its maximum aqueous solubility. This was done in order to stay consistent within the framework of our experiments and with the previous study by \citet{Teichert2019}. Still, after \SI{180}{\minute} the experimental run with calcium carbonate resulted in the second-highest yield for ribose. Overall, the catalysts did not differ much in effectiveness: they all produced ribose with yields of the same order of magnitude.
The maximum yields of ribose reached in Figure~\ref{fig:yields} are listed in Table~\ref{tab:yields}. These values were the correction factors used in the theoretical model to compensate for the oversimplification of the modeled reaction in Equation~\eqref{equ:formose}.
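Applying these correction factors to the model output is a simple multiplication. A short Python sketch using the maximum yields from Table~\ref{tab:yields} (the total pentose abundance in the example is an arbitrary placeholder, not a model result):

```python
# Maximum ribose fractions within all 5Cs measured in the lab
# experiments, per catalyst (dimensionless).
RIBOSE_YIELD = {
    "Ca(OH)2": 4.1e-2,
    "CaCO3": 3.5e-2,
    "K2CO3": 2.7e-2,
    "KOH": 2.4e-2,
}

def ribose_abundance(pentose_abundance, catalyst):
    """Scale the thermochemically calculated total 5C abundance
    (mol per mol H2O) by the experimentally measured ribose fraction
    for the given catalyst."""
    return pentose_abundance * RIBOSE_YIELD[catalyst]

# Placeholder 5C abundance of 1e-6 mol per mol H2O:
print(ribose_abundance(1.0e-6, "Ca(OH)2"))
```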
\begin{figure}[t]
\includegraphics[width=\textwidth]{yields_ribose.pdf}
\caption{Fraction of ribose (R) in all pentoses (5Cs) synthesized over time in lab experiments. The reaction started with concentrations for formaldehyde of \SI{1.34}{\mole\per\liter}, for glycolaldehyde of \SI{0.269}{\mol\per\liter} or \SI{20}{\mole\percent}, and for the respective catalyst of \SI{10}{\mole\percent}, in a solution volume of \SI{1}{\milli\liter}, with samples taken over time in volumes of \SI{50}{\micro\liter} each. The temperature of the solution was kept constant over time at the values denoted in the figure legend for each catalyst run.\label{fig:yields}}
\end{figure}\vspace{-6pt}
\begin{table}[ht]
\setlength{\tabcolsep}{7.65mm}
\small
\caption{Maximum yield of ribose (R) in all 5Cs reached in lab experiments for different present catalysts, respectively (see Figure~\ref{fig:yields}). The reaction always started from formaldehyde and glycolaldehyde as reactants in aqueous solution.\label{tab:yields}}
\begin{tabular}{ccc}
\toprule
\multirow{2}{*}{\textbf{Catalyst}\vspace{-6pt}} & \multirow{2}{*}{\textbf{Name}\vspace{-6pt}} & \textbf{Maximum Yield of Ribose} \\
& & \textbf{{\boldmath{$\left\{\frac{c_{\text{R}}}{c_{\text{5C}}}\right\}_{\mathrm{max}}\,{\left[{\mathrm{mol~L^{-1}}\cdot{}\left(\mathrm{mol~L^{-1}}\right)^{-1}}\right]}$}}}\\
\midrule
\ce{Ca(OH)2} & Calcium hydroxide & 4.1 $\times$ $10^{-2}$ \\
\ce{CaCO3} & Calcium carbonate & 3.5 $\times$ $10^{-2}$ \\
\ce{K2CO3} & Potassium carbonate & 2.7 $\times$ $10^{-2}$ \\
\ce{KOH} & Potassium hydroxide & 2.4 $\times$ $10^{-2}$ \\
\bottomrule
\end{tabular}
\end{table}
\newpage
\subsection{Theoretically Calculated Ribose Abundances in Planetesimals}\label{sec:ribose}
The resulting ribose abundances were calculated for the temperatures taken from a simplified version of the planetesimal model by \citet{Lange2021} (see Section~\ref{sec:computations}). The temperatures inside the planetesimal are plotted as solid and dashed lines in Figures~\ref{fig:R_lower}~and~\ref{fig:R_upper} and correspond to the example planetesimal model shown in Figure~\ref{fig:planetesimal_150km}. Figures~\ref{fig:R_lower}~and~\ref{fig:R_upper} show the resulting ribose abundances for several parameters and are a representative selection among all the available thermal profiles. We also calculated the abundances for the other available planetesimal models with our chemical simulations, but the results share the same characteristics and trends as for the planetesimal model shown in Figures~\ref{fig:R_lower}~and~\ref{fig:R_upper} (for the other available planetesimal models, see also the previous study \cite{Paschek2021}). The resulting ribose abundances for the different catalyst correction factors (see Table~\ref{tab:yields}) are plotted as dashed lines with symbols (see legends). The shaded part of the abundance axis in both figures represents the range of measured ribose concentrations in carbonaceous chondrites~\cite{Furukawa2019}. These measured abundances serve as the benchmark and real-world reference for our theoretically calculated values.
Figures~\ref{fig:R_lower}a~and~\ref{fig:R_upper}a show the radial distribution of the calculated ribose abundances in the planetesimal's interior. The maximum temperature $T_{\mathrm{max}}$ (solid line) reached at a specific distance from the center inside the planetesimal over the entire simulation time (from the planetesimal's formation until today) was taken into account to calculate the ribose abundances. In this case, we inherently assumed that the peak production of the ribose sugar was achieved at the peak temperature at each distance from the center. The left side of these panels (a) defines the center, and the right side the surface of the planetesimal.
In Figures~\ref{fig:R_lower}b~and~\ref{fig:R_upper}b, the dashed curve shows the temperature $T_{\mathrm{core}}$ in the center of the planetesimal over time. The ribose abundances were calculated by iterating over time and using the abundances of reactants and products resulting from previous steps as the initial abundances in each step. This allowed us to follow the equilibrium of the reaction proceeding through time.
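The time iteration described above can be sketched as follows. Since the ChemApp equilibrium solver is proprietary, it is represented here by a caller-supplied placeholder function; this is only an illustration of the iteration scheme, not the actual model code:

```python
def evolve_abundances(initial, temperatures, equilibrate):
    """Follow the reaction equilibrium through a thermal history.

    At each time step the equilibrium abundances of the previous step
    serve as the initial abundances of the next one.  `equilibrate`
    stands in for the proprietary ChemApp solve: it takes the current
    abundance dictionary and a temperature (K) and returns the new
    equilibrium abundances.
    """
    abundances = dict(initial)
    history = []
    for temp in temperatures:
        abundances = equilibrate(abundances, temp)
        history.append(dict(abundances))
    return history

# Toy stand-in for the solver: convert half of the remaining
# formaldehyde per step, regardless of temperature.
toy = lambda ab, temp: {"CH2O": 0.5 * ab["CH2O"]}
print(evolve_abundances({"CH2O": 1.0}, [300.0, 280.0], toy))
```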
In Figure~\ref{fig:R_lower}, the \textit{lower} bound of the initial concentration of glycolaldehyde of $5\times10^{-6}\,\mathrm{mol}\cdot{}\mathrm{mol}_{\ce{H2O}}^{-1}$ from Table~\ref{tab:init_concs} was used. As a result, the simulated ribose abundances coincided with measurements in carbonaceous chondrites \citep{Furukawa2019} within a factor of \numrange{2}{3}. On the other hand, in Figure~\ref{fig:R_upper} we used the \textit{upper} bound of $4\times10^{-4}\,\mathrm{mol}\cdot{}\mathrm{mol}_{\ce{H2O}}^{-1}$, and the calculated ribose abundances were higher than the measured values by about two orders of magnitude.
As the reduction of the initial cometary formaldehyde concentration by a factor of $10^{-3}$ (see Table~\ref{tab:init_concs}) is a rough assumption, any change of this factor would change the resulting ribose abundances accordingly. Keeping this in mind, the theoretically simulated abundances can be considered to be in reasonable agreement with the measured ones. Further, we suspect that decomposition and other complex reactions that we did not take into account could be responsible for the slightly overestimated theoretical ribose abundances.
\begin{figure}[t]
\includegraphics[width=\textwidth]{ribose_100bar_peak_temps_time_iter_amounts_radius_149km_lower.pdf}
\caption{Lower bound theoretical ribose abundances from simulations of formose reaction pathway in Equation~\eqref{equ:formose}. Properties of planetesimal: ${\text{Radius} = \SI{150}{\kilo\meter}}$, densities ${\rho_{\mathrm{rock}} = \SI{3}{\gram\per\centi\meter\cubed}}$, ${\rho_{\mathrm{ice}} = \SI{0.917}{\gram\per\centi\meter\cubed}}$, porosity ${\phi = 0.2}$, and time of formation after ${\text{CAI} = \SI{3.5}{\mega\year}}$. The experimentally found yields of ribose within 5Cs for each catalyst (see Table~\ref{tab:yields}) were multiplied with the theoretically calculated 5C abundance to obtain the ribose abundances (dashed lines with symbols). This simulation was run with the \textit{lower} (opposite to Figure~\ref{fig:R_upper}) bound of the initial concentration of glycolaldehyde of ${5\times10^{-6}\,\mathrm{mol}\cdot{}\mathrm{mol}_{\ce{H2O}}^{-1}}$ (see Table~\ref{tab:init_concs}). All simulations were run at \SI{100}{bar}. In both panels (\textbf{a}) and (\textbf{b}) the left vertical axis corresponds to the abundances (dashed lines with symbols) and the right vertical axis corresponds to the temperatures from the planetesimal model (solid and dotted lines). The shaded part of the abundance axis represents the range of ribose abundances measured in CM2 (Mighei-type, Murchison, upper limit) and CR2 (Renazzo-type, NWA 801, lower limit) meteorites \cite{Furukawa2019}, and has no correlation to the radial location inside the object or the point in time (horizontal axes). (\textbf{a}) Distribution of abundances for the maximum temperature $T_{\mathrm{max}}$ (solid line) reached at a specific distance from the center inside the planetesimal (center at the left and surface at the right). Ribose was synthesized at and below \SI{138}{\kilo\meter} distance from the center. 
(\textbf{b}) Evolution of abundances at temperatures $T_{\mathrm{core}}$ (dotted line) in the center of the planetesimal over time (the same temperature evolution curve can be found in Figure~\ref{fig:planetesimal_150km}). Ribose synthesis started at \SI{2}{\mega\year} after formation.\label{fig:R_lower}}
\end{figure}\vspace{-6pt}
\begin{figure}[t]
\includegraphics[width=\textwidth]{ribose_100bar_peak_temps_time_iter_amounts_radius_149km_upper.pdf}
\caption{Upper bound theoretical ribose abundances from simulations of formose reaction pathway in Equation~\eqref{equ:formose}. Properties of planetesimal: ${\text{Radius} = \SI{150}{\kilo\meter}}$, densities ${\rho_{\mathrm{rock}} = \SI{3}{\gram\per\centi\meter\cubed}}$, ${\rho_{\mathrm{ice}} = \SI{0.917}{\gram\per\centi\meter\cubed}}$, porosity ${\phi = 0.2}$, and time of formation after ${\text{CAI} = \SI{3.5}{\mega\year}}$. The experimentally found yields of ribose within 5Cs for each catalyst (see Table~\ref{tab:yields}) were multiplied with the theoretically calculated 5C abundance to obtain the ribose abundances (dashed lines with symbols). This simulation was run with the \textit{upper} (opposite to Figure~\ref{fig:R_lower}) bound of the initial concentration of glycolaldehyde of ${4\times10^{-4}\,\mathrm{mol}\cdot{}\mathrm{mol}_{\ce{H2O}}^{-1}}$ (see Table~\ref{tab:init_concs}). All simulations were run at \SI{100}{bar}. In both panels (\textbf{a}) and (\textbf{b}) the left vertical axis corresponds to the abundances (dashed lines with symbols) and the right vertical axis corresponds to the temperatures from the planetesimal model (solid and dotted lines). The shaded part of the abundance axis represents the range of ribose abundances measured in CM2 (Murchison, upper limit) and CR2 (NWA 801, lower limit) meteorites \cite{Furukawa2019}, and has no correlation to the radial location inside the object or the point in time (horizontal axes). (\textbf{a}) Distribution of abundances for the maximum temperature $T_{\mathrm{max}}$ (solid line) reached at a specific distance from the center inside the planetesimal (center at the left and surface at the right). Ribose was synthesized at and below \SI{138}{\kilo\meter} distance from the center. (\textbf{b}) Evolution of abundances at temperatures $T_{\mathrm{core}}$ (dotted line) in the center of the planetesimal over time (the same temperature evolution curve can be found in Figure~\ref{fig:planetesimal_150km}). 
Ribose synthesis started at \SI{2}{\mega\year} after formation.\label{fig:R_upper}}
\end{figure}
\section{Discussion and Conclusions}\label{sec:discussion}
In the thermochemical equilibrium calculations, the resulting abundances of the reaction products depend strongly on the initial concentrations of the reactants (see Table~\ref{tab:init_concs}). Therefore, the resulting ribose abundances in Figures~\ref{fig:R_lower}~and~\ref{fig:R_upper} were strongly dependent on the initial abundance of glycolaldehyde. Furthermore, a different estimate for the initial abundance of formaldehyde and the proper rescaling of the pristine cometary values guided by solar nebula models would also change the resulting ribose concentrations. This rescaling had to be made and is hard to verify via measurements, observations, or modeling, since the icy pebbles out of which the carbonaceous chondrite planetesimals formed in the solar nebula are long gone. Considering all these limitations and uncertainties, and the fact that our correction factor for the initial formaldehyde abundance was an upper limit, the simulated and measured ribose abundances shown in Figures~\ref{fig:R_lower}~and~\ref{fig:R_upper} still coincide reasonably well.
We see this as confirmation that the formose reaction could be a pathway to forming sugars such as ribose abiotically. Pristine carbonaceous chondrites are time capsules showing us how the foundations for the emergence of life might have been laid in our early solar system.
Ribose is susceptible to decomposition, see, e.g., \cite{Larralde1995}. Particularly at higher temperatures further away from the freezing point of water, significant portions of ribose and other sugars could be destroyed or converted to even more complex species (e.g., polysaccharides). In this context, typical decomposition processes of sugars are the $\beta$-elimination to dicarbonyls, the benzilic acid rearrangement, oxidation, and others, see, e.g., \cite{DeBruijn1986}. In laboratory experiments, the solution containing the freshly formed sugars starts to turn yellow (the so-called ``yellowing point'') when the maximum abundances of sugars are reached. This corresponds roughly to the point in time when the maximum fraction of ribose was reached in Figure~\ref{fig:yields}. After a while, the solution starts to turn brownish (forming so-called ``brown tar'') as the decomposition proceeds.
Since we used the maximum yields of ribose reached in the experiments (see Table~\ref{tab:yields}) as the correction for our theoretical studies, our results represent an estimate of the possible upper limit. This probably led to the slightly too high ribose abundances when compared to the measurements in carbonaceous chondrites (see Figures~\ref{fig:R_lower}~and~\ref{fig:R_upper}). Decomposition plus other reactions could have lowered the ribose abundance in the planetesimals over time, which might explain the lower values measured in carbonaceous chondrites.
Our results seemed to be reasonable when taking these potentially adverse effects into account. Ribose is still found today in meteorites \cite{Furukawa2019}, indicating that it only decomposed to a certain extent. \citet{Ricardo2004} showed in laboratory experiments that boron from borate minerals stabilized ribose and other 5Cs in their cyclic furanose form. The solution did not turn brownish for 2 months as decomposition was prevented. They also postulated that boron could stabilize glyceraldehyde, an intermediate reaction product in the formose reaction, keeping it from decomposing into ``brown tar'', and therefore enhancing the formation of complex sugars from glycolaldehyde and glyceraldehyde. Since boron was found in carbonaceous chondrites \cite{Mills1968}, it could stabilize the formed ribose in meteorites and their parent bodies.
Ribose destruction rates by hydrolysis can be suppressed by many orders of magnitude at temperatures below \SI{60}{\celsius} and pH values between \numrange{4}{8} \cite{Larralde1995}. The outer shells of planetesimals were heated only for short periods of time to temperatures above the melting point of water before they were frozen again. Therefore, in these outer shells, the freshly formed ribose (besides other prebiotic molecules) might have been quickly frozen in the water. This potentially reduced the chance of decomposition, as the temperatures were lower compared to the core region of the planetesimal and liquid water only existed over a shorter period of time. The frozen ribose might have been preserved until it was distributed in the solar system as fragments of the parent body, and some of the fragments fell to the Earth as meteorites. Extraterrestrial organic and prebiotic molecules have been found in meteorites on the Earth's surface, see, e.g., \cite{Furukawa2019,Cobb2014,Pearce2015,Lai2019,Gilmour2003,Pizzarello2006,Derenne2010}, although the possibility of at least partial terrestrial contamination has to be considered. Theoretical studies coupled with experimental data~\cite{Chyba1990,Chyba:1992bp,Basiuk1998,Pierazzo1999} suggested that a significant portion of organics might survive the heating due to friction in the atmosphere and the energy of the impact \cite{Brinton1996}, arriving intact on the Earth's surface even in comets and interplanetary dust particles. Destruction during the atmospheric entry and impact could be another reason why the detected abundances of ribose in carbonaceous chondrites were lower than our calculated results.
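The strong temperature dependence of such destruction rates can be illustrated with a simple Arrhenius ratio. The activation energy in this Python sketch is purely illustrative (not a fitted value for ribose hydrolysis); the point is only that cooling from \SI{60}{\celsius} toward the freezing point of water suppresses the rate by several orders of magnitude:

```python
import math

R_GAS = 8.314  # gas constant, J/(mol K)

def rate_ratio(temp_k, temp_ref_k, e_activation):
    """Arrhenius ratio k(T)/k(T_ref) = exp(-(Ea/R) * (1/T - 1/T_ref)).

    e_activation (J/mol) is an assumed, illustrative activation energy,
    not a measured value for ribose hydrolysis.
    """
    return math.exp(-e_activation / R_GAS * (1.0 / temp_k - 1.0 / temp_ref_k))

# Cooling from 60 C (333.15 K) to 0 C (273.15 K) with an assumed
# Ea of 130 kJ/mol suppresses the rate by roughly four to five
# orders of magnitude:
print(rate_ratio(273.15, 333.15, 130.0e3))
```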
Figures~\ref{fig:shell_4km}~and~\ref{fig:shell_150km} show the ribose synthesis in the outer shells of the example planetesimal models in Figures~\ref{fig:planetesimal_4km}~and~\ref{fig:planetesimal_150km}. At a distance of \SI{2.76}{\kilo\meter} from the center of the \SI{4}{\kilo\meter}-sized planetesimal model (time of formation after ${\text{CAI} = \SI{1}{\mega\year}}$), ribose synthesis started shortly before the peak temperature was reached in this region, which allowed liquid water to exist for $\lesssim${200,000} yr. When the water froze again, the formed ribose was preserved (see Figure~\ref{fig:shell_4km}). A similar phenomenon occurred in the outer shell at a distance of \SI{138}{\kilo\meter} from the center of the \SI{150}{\kilo\meter}-sized planetesimal model (time of formation after ${\text{CAI} = \SI{3.5}{\mega\year}}$), where the water stayed liquid for $\lesssim\SI{2}{\mega\year}$ (see Figure~\ref{fig:shell_150km}).
\begin{figure}[p]
\includegraphics[width=0.82\columnwidth]{ribose_100bar_time_iter_shell_amounts_radius_4km.pdf}
\caption{Theoretical ribose abundances in an outer shell at \SI{2.76}{\kilo\meter} distance from the center of the \SI{4}{\kilo\meter}-sized planetesimal model. The whole planetesimal model is shown in Figure~\ref{fig:planetesimal_4km}. Properties of planetesimal: ${\text{Radius} = \SI{4}{\kilo\meter}}$, densities ${\rho_{\mathrm{rock}} = \SI{3}{\gram\per\centi\meter\cubed}}$, ${\rho_{\mathrm{ice}} = \SI{0.917}{\gram\per\centi\meter\cubed}}$, porosity ${\phi = 0.2}$, and time of formation after ${\text{CAI} = \SI{1}{\mega\year}}$. The formose reaction pathway in Equation~\eqref{equ:formose} was used in the simulations. The experimentally found yields of ribose within 5Cs for each catalyst (see Table~\ref{tab:yields}) were multiplied with the theoretically calculated 5C abundance to obtain the ribose abundances (dashed lines with symbols). Ribose synthesis started at ${\sim}$210,000 yr after formation. All simulations were run at \SI{100}{bar}. In both panels (\textbf{a},\textbf{b}), the left vertical axis corresponds to the abundances (dashed lines with symbols) and the right vertical axis corresponds to the temperatures $T$ (solid lines) in the outer shell of the planetesimal model. The shaded part of the abundance axis represents the range of ribose abundances measured in CM2 (Murchison, upper limit) and CR2 (NWA 801, lower limit) meteorites \cite{Furukawa2019}, and has no correlation to the point in time (horizontal axis). (\textbf{a}) Time evolution of lower bound abundances simulated using the \textit{lower} (opposite to panel (\textbf{b})) bound of the initial concentration of glycolaldehyde of ${5\times10^{-6}\,\mathrm{mol}\cdot{}\mathrm{mol}_{\ce{H2O}}^{-1}}$ (see Table~\ref{tab:init_concs}). (\textbf{b}) Time evolution of upper bound abundances simulated using the \textit{upper} (opposite to panel (\textbf{a})) bound of the initial concentration of glycolaldehyde of ${4\times10^{-4}\,\mathrm{mol}\cdot{}\mathrm{mol}_{\ce{H2O}}^{-1}}$.\label{fig:shell_4km}}
\end{figure}
\begin{figure}[p]
\includegraphics[width=0.82\columnwidth]{ribose_100bar_time_iter_shell_amounts_radius_149km.pdf}
\caption{Theoretical ribose abundances in an outer shell at \SI{138}{\kilo\meter} distance from the center of the \SI{150}{\kilo\meter}-sized planetesimal model. The whole planetesimal model is shown in Figure~\ref{fig:planetesimal_150km}. Properties of planetesimal: ${\text{Radius} = \SI{150}{\kilo\meter}}$, densities ${\rho_{\mathrm{rock}} = \SI{3}{\gram\per\centi\meter\cubed}}$, ${\rho_{\mathrm{ice}} = \SI{0.917}{\gram\per\centi\meter\cubed}}$, porosity ${\phi = 0.2}$, and time of formation after ${\text{CAI} = \SI{3.5}{\mega\year}}$. The formose reaction pathway in Equation~\eqref{equ:formose} was used in the simulations. The experimentally found yields of ribose within 5Cs for each catalyst (see Table~\ref{tab:yields}) were multiplied with the theoretically calculated 5C abundance to obtain the ribose abundances (dashed lines with symbols). Ribose synthesis started at ${\sim}$$\SI{2.1}{\mega\year}$ after formation. All simulations were run at \SI{100}{bar}. In both panels (\textbf{a},\textbf{b}), the left vertical axis corresponds to the abundances (dashed lines with symbols) and the right vertical axis corresponds to the temperatures $T$ (solid lines) in the outer shell of the planetesimal model. The shaded part of the abundance axis represents the range of ribose abundances measured in CM2 (Murchison, upper limit) and CR2 (NWA 801, lower limit) meteorites \cite{Furukawa2019}, and has no correlation to the point in time (horizontal axis). (\textbf{a}) Time evolution of lower bound abundances simulated using the \textit{lower} (opposite to panel (\textbf{b})) bound of the initial concentration of glycolaldehyde of ${5\times10^{-6}\,\mathrm{mol}\cdot{}\mathrm{mol}_{\ce{H2O}}^{-1}}$ (see Table~\ref{tab:init_concs}). 
(\textbf{b}) Time evolution of upper bound abundances simulated using the \textit{upper} (opposite to panel (\textbf{a})) bound of the initial concentration of glycolaldehyde of ${4\times10^{-4}\,\mathrm{mol}\cdot{}\mathrm{mol}_{\ce{H2O}}^{-1}}$.\label{fig:shell_150km}}
\end{figure}
It is worth noting that the planetesimal model by \citet{Lange2021}, which was used in our study, did not take into account the latent heat of water. Therefore, we were not able to accurately model the phase transitions of water and their effects on the temperature evolution. The phase transitions require a considerable amount of energy and cause the temperature evolution to stagnate until the water is completely melted or frozen. Since the periods estimated above were in between the melting and freezing times of water, our estimates were only approximations of the actual duration.
There should be a layer below the consistently frozen crust of planetesimal parent bodies, which could be the most promising part that contained the highest amounts of ribose. This region should be frozen rapidly due to the low and declining internal radioactive heating, preventing decomposition processes. On the other hand, the cores of the parent bodies were more likely to reach critically high temperatures over longer periods of time and probably led to significant sugar decomposition. Therefore, Figures~\ref{fig:R_lower}b~and~\ref{fig:R_upper}b are unlikely to depict the actual situation accurately, as decomposition was not considered in our calculations.
The surface and outer shells of the parent bodies were more likely to be shattered and blown off by collisions with other asteroids. If impactors were able to penetrate through the surface layers of the parent bodies and reach the potentially ribose-rich intermediate part, the generated fragments would most likely contain the organic complexity that is characteristic of carbonaceous chondrites. This could explain why we were able to find ribose in carbonaceous chondrites on the Earth \cite{Furukawa2019}. The origin of these meteorites was likely biased to the outer shells of the parent bodies in fragmentation events, which could (partly) coincide with the most promising regions identified in Figures~\ref{fig:shell_4km}~and~\ref{fig:shell_150km}.
\citet{Larralde1995} suspected that ribose was not stable enough to take part in the origin of life, questioning the RNA world hypothesis. However, ribose could be preserved in the icy fragments for a long time. This illustrates a possible explanation for how the concerns about the instability of ribose could be resolved in the scope of the chemical synthesis in planetesimals. By eventually falling as meteorites to the Earth into WLPs, the ribose-rich fragments might provide ribose for the origin of life, as described by \mbox{\citet{Pearce2017}}.
Smaller planetesimals, such as the \SI{4}{\kilo\meter}-sized one in Figures~\ref{fig:planetesimal_4km}~and~\ref{fig:shell_4km}, had to be formed earlier than larger planetesimals to become aqueous and allow for the formose reaction to take place in their interiors. At \SI{1}{\mega\year} after CAI, there was enough \ce{^{26}Al} left to even heat the outer shells of the \SI{4}{\kilo\meter}-sized planetesimal above the melting point of water (see Figure~\ref{fig:shell_4km}). The large \SI{150}{\kilo\meter}-sized planetesimal was formed later, at \SI{3.5}{\mega\year} after CAI (see Figures~\ref{fig:planetesimal_150km},~\ref{fig:R_lower},~\ref{fig:R_upper}~and~\ref{fig:shell_150km}). Had it formed as early as the smaller planetesimal, this large body would have reached such high temperatures that strong thermal metamorphism or even siliceous volcanism would have occurred, resulting in hostile conditions for the synthesis of organic molecules. Therefore, when comparing these moderately heated planetesimals, the time intervals of the aqueous phase in the outer shells occurred earlier in the smaller bodies and over a shorter interval than in the larger ones ($\sim${200,000} yr in Figure~\ref{fig:shell_4km} vs. $\sim\SI{2}{\mega\year}$ in Figure~\ref{fig:shell_150km}). This leads to the conclusion that smaller planetesimals with moderate heating might preserve ribose better than larger planetesimals, since the aqueous time interval allowing for decomposition of the formed ribose was shorter by around one order of magnitude.
In follow-up studies, when more detailed thermodynamic and kinetic data with a higher temperature resolution for the decomposition rates become available, it could be interesting to consider the decomposition of ribose in more detail and constrain the region with the likely highest ribose abundances with more accuracy compared to our approximate estimates. This could also help to identify the part of parent bodies where carbonaceous chondrites with high ribose content could have most likely originated.
In this study, we used the same model and the same initial concentrations of reactants as in our previous study for nucleobases \cite{Paschek2021}, in which we found abundances matching the measured values in meteorites. It seems that the formation of crucial RNA-building blocks such as nucleobases and ribose could be explained uniformly with our model and the selected reaction pathways. Note that carbonaceous chondrites also contain \ce{P}-rich minerals, e.g., schreibersite, which could provide the last missing piece for the synthesis of the RNA nucleotides, the phosphates (\ce{[PO4]^{3-}}, \ce{[HPO4]^{2-}}, and \ce{[H2PO4]-} depending on pH) \cite{Gull2015}. Moreover, the clay minerals at the bottom of WLPs could also provide the phosphates needed for the phosphorylation of nucleosides \cite{Ferris1996}. In addition, metal-doped-clays were shown to select ribose from a formose mixture \cite{Zhao2021} and catalyze the formation of ribonucleosides \cite{Chen2021}.
Thus, ribose and nucleobases (see our previous studies \cite{Pearce2016,Paschek2021}) delivered by carbonaceous chondrites could have been an essential ingredient for the build-up of the first RNA molecules in WLPs (including geothermal fields and hot springs), or around subsea hydrothermal vents, or all of them, setting the stage for the emergence of the RNA world and the origin of life on the Earth and elsewhere.
\newpage
\authorcontributions{Conceptualization, D.A.S.; methodology, K.P., K.K., B.K.D.P., K.L., O.T., R.E.P. and D.A.S.; software, K.P., B.K.D.P. and K.L.; validation, K.P., K.K., B.K.D.P., K.L., T.K.H. and D.A.S.; formal analysis, K.P., K.K., B.K.D.P. and K.L.; investigation, K.P., K.K., B.K.D.P., K.L. and D.A.S.; resources, T.K.H., O.T., R.E.P. and D.A.S.; data curation, K.P., K.K., B.K.D.P. and K.L.; writing---original draft preparation, K.P.; writing---review and editing, K.K., B.K.D.P., K.L., T.K.H., O.T., R.E.P. and D.A.S.; visualization, K.P.; supervision, T.K.H., O.T., R.E.P. and D.A.S.; project administration, T.K.H. and D.A.S.; funding acquisition, T.K.H. All authors have read and agreed to the published version of the manuscript.}
\funding{K.P.\ acknowledges financial support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC 2181/1-390900948 (the Heidelberg STRUCTURES Excellence Cluster).
K.P.\ is a fellow of the International Max Planck Research School for Astronomy and Cosmic Physics at the University of Heidelberg (IMPRS-HD).
T.K.H.\ acknowledges financial support by the European Research Council under the Horizon 2020 Framework Program via the ERC Advanced Grant Origins 83 24 28.
D.A.S.\ acknowledges financial support by the Deutsche Forschungsgemeinschaft through SPP 1833: ``Building a Habitable Earth'' (SE 1962/6-1).}
\institutionalreview{Not applicable.}
\informedconsent{Not applicable.}
\dataavailability{The source code, excluding the proprietary ChemApp library, and including the data of the planetesimal models, is openly available on Zenodo at (\cite{klaus_paschek_2021_5774880}, \url{https://doi.org/10.5281/zenodo.5774880}, accessed on 1 March 2022) and as a Git repository: \url{https://github.com/klauspaschek/prebiotic_synthesis_planetesimal}, accessed on 17 November 2021.}
\acknowledgments{The authors thank Cornelis P.~Dullemond for his extensive help in understanding the planetesimal model and the background attached to its theory. We would also like to thank Catharina Fairchild and Hao-En Huang for their stylistic review of the manuscript. We wish to thank an anonymous reviewer for corrections, comments, and suggestions.}
\conflictsofinterest{The authors declare no conflict of interest.}
\abbreviations{Abbreviations}{
The following abbreviations are used in this manuscript:\\
\noindent
\begin{tabular}{@{}ll}
RNA & Ribonucleic acid\\
DNA & Deoxyribonucleic acid\\
5C & Pentose\\
WLP & Warm little pond/Darwinian pond\\
ppb & Parts per billion\\
ppm & Parts per million\\
CAI & Calcium-aluminium-rich inclusions\\
yr & Year(s)\\
GC-MS & Gas chromatography coupled with mass spectrometry\\
CM2 & Mighei-type chondrites (CM), a group of meteorites, in this case of petrologic type 2\\
CR2 & Renazzo-type chondrites (CR), a group of meteorites, in this case of petrologic type 2
\end{tabular}}
\newpage
\begin{adjustwidth}{-\extralength}{0cm}
\reftitle{References}
\section{Introduction}
The interest for networks has continuously grown in the last twenty
years. The focus has progressively shifted from the characterization
of their structure towards the study of the underlying dynamics. The
motivation for this massive attention is at least twofold. On the one
hand, many systems of practical and conceptual interest are \textit{de
facto} networks like, e.g., the mammalian brain \citep{Gerstner2014}
and power-grids \citep{Nishikawa2015}. On the other hand several
complex systems can be represented as networks in suitable abstract
spaces as in the case of climate models \citep{Donges2009a}.
Additionally, networks represent an effective testing ground for theoretical
ideas. Quantum graphs, for instance, offer simplified but non-trivial
models of complex phenonema, such as electron propagation in multiply
connected media, Anderson localization, quantum chaos and even quantum
field theory \citep{Kuchment2008}.
Typically, a network is represented as an ensemble of relatively simple
dynamical systems (the nodes) driven by pairwise interactions schematically
accounted for by a suitable adjacency matrix. This is exemplified
by the Kuramoto model, where the single units are phase oscillators
and the connections are all-to-all \citep{Kuramoto1984}. In some
cases, the connections have their own dynamics, with a massive increase
of the overall computational complexity \citep{gross2008adaptive,Aoki2011,Berner2019}.
This is the case of neural systems, where synaptic plasticity is included
both because of the experimental evidence that the synaptic strength
changes over time and because this mechanism is believed to play a
crucial role in establishing memory \citep{Abbott2000}.
The dynamical properties are so rich that even in ensembles of identical,
globally coupled oscillators, nontrivial and not yet fully understood regimes
are observed \citep{Zillmer2007}. In this paper, we focus on a class
of networks which naturally arise while considering propagation of
waves of various nature (electromagnetic, acoustic, etc.) through
quasi one-dimensional systems (quantum wires, photonic crystals, thin
waveguides). The novelty and crucial difference with respect to many
other networks is that here the self-sustained dynamical regimes depend
sensitively on the network structure and in particular on the lengths
of the individual links. Additionally, they offer the possibility
of experimental tests and even the opportunity to develop new devices
such as nanophotonic networks of waveguides \citep{Gaio2019}.
More specifically, this work formalizes the concept of active optical
networks (LANER), going beyond the linear description proposed in
Ref.~\citep{LepriTronoGiacomelli2017,Giacomelli2019}. In our case,
the \textit{network} is a physical network (such as for power grids),
composed of links each characterized by a potentially bi-directional
propagation of electromagnetic waves. \emph{A priori}, there exist
active and passive links (i.e. the electric fields are either damped
or amplified), loosely analogous to excitatory and inhibitory synaptic
connections in neural systems. A peculiarity of these devices which
distinguishes them from other types of networks is that the wave frequencies
are self-selected and multiple frequencies can coexist. The whole
dynamical structure emerges out of a careful balance and interferences
among the activity along the various links.
This is precisely the reason why it is necessary to include nonlinearities,
as they are ultimately responsible for the saturation of the self-generated
fields, like in standard lasers, a relevant difference being the underlying
complex network LANER structure. In order to keep the model complexity
at a minimum level, without losing physical plausibility, we assume
that the passive links can be all treated as linear damping processes.
As for the active links, the most general approach would require introducing
Maxwell-Bloch equations to account for the spatial structure of polarization
and population along the media \citep{Hess1996}. Given the mathematical
complexity of this type of models, we have restricted our analysis
to active semiconductors-type links, so that the polarization can
be adiabatically eliminated. By following the approach proposed by
Vladimirov and Turaev \citep{Vladimirov2005} for the ring laser,
the spatial dependence of the population dynamics is integrated out
and transformed into a delayed interaction.
As a final simplification, we assume unidirectional propagation along
the active links: this is to avoid the complications arising from
the interactions between counter-propagating waves which would force
us to reintroduce the spatial dependence along the active links.
In Section \ref{sec:LANER-components}, we introduce the mathematical
formalization of the LANER components, starting from the single links
(both active and passive ones) and including the splitters which amount
to a linear coupling between outgoing and incoming fields. The resulting
full LANER network model is introduced in Sec.~\ref{sec:The-LANER-model},
where we show that the most convenient representation consists in
introducing a sort of dual (abstract) network, where the single fields
(with their specific direction of propagation) play the role of nodes,
while the splitters account for the connectivity which is eventually
represented by a nontrivial adjacency matrix. In Sec.~\ref{sec:Networks-single-active}
we consider a general LANER with multiple passive links and a single
active one. The treatment shows that, irrespective of its complexity, the
passive part can be described by a single transfer function, whose resonances
contribute to the selection of the relevant frequencies.
A first exemplification of this approach is the standard ring laser: a single link with a single node.
A less trivial example is presented in Sec.~\ref{sec:A-first-non-trivial},
where we discuss a double-ring configuration in which only one ring
is active. In this case, we compute stationary solutions (LANER modes)
and illustrate the effect of the transfer function of the passive
part of the network. In the last section we summarize the
main results, recall the several open problems and mention possible
directions for future progress.
\section{LANER components\label{sec:LANER-components}}
In this section we outline the mathematical modeling of the main elements of a
LANER: active and passive optical links, and the connecting devices.
\subsection{Link models\label{subsec:Link-models}}
Active links can be realized in several ways, by, e.g., laser-pumped erbium-doped fibers or semiconductor amplifiers connected with optical fibers \citep{Franz2008,Leo2010,Mou2012,Herr2014,Bednyakova2015,Romeira2016,Liu2018,Giacomelli2019}.
Importantly for our modeling approach, we always assume unidirectional propagation along the active links. This avoids dealing with the interaction between counter-propagating waves, substantially simplifying the model structure; experimentally, it can be enforced by inserting, e.g., optical isolators.
We follow the approach introduced in~\citep{Vladimirov2005}, which can be used for
optical systems described by rate equations such as semiconductor
lasers \citep{Soriano2013,Larger2017}. It follows the so-called lumped-element method, where the link
is divided into several components: gain and losses sections, and
the bandwidth limiting element. At variance with~\citep{Vladimirov2005}, here
we do not include the saturable absorber section.
More specifically, let $E(t,z)$ be the slowly-varying amplitude of
the electric field at the position $z$, $E(t,0)$ being
the entrance point and $E(t,L)$ the exit point relative to the propagation
direction -- $L$ is the length of the link (see Fig.~\ref{fig:link}).
The corresponding propagation time is $T=L/v$, where
the light group-velocity $v$ is assumed to be constant. The direct
application of the approach from \citep{Vladimirov2005} leads to
the following relation between the amplitude of the electric field
at the ends of the link
\begin{equation}
\frac{1}{\beta}\frac{\partial}{\partial t}E(t,L) = -\left(1-i\frac{\Omega}{\beta}\right)E(t,L)+\sqrt{\kappa}e^{(1-i\alpha)G(t)/2}E(t-T,0)\label{eq:active link}
\end{equation}
\begin{eqnarray}
\frac{1}{\gamma}\frac{\partial G(t)}{\partial t} = d-G(t)+r\left(1-e^{G(t)}\right)|E(t-T,0)|^{2},\label{eq:gain}
\end{eqnarray}
where $G(t)$ is the integral of the local population inversion $n(t,z)$\footnote{At variance with \citep{Vladimirov2005}, for the sake of elegance,
here we time shift the definition of $G$ by $T$.}
\[
G(t)=\int_{0}^{L}n(t-T,z)dz.
\]
The parameters $\Omega$ and $\beta$ are the central frequency and
the bandwidth of the field filter respectively; $\gamma$ is the carrier
density relaxation rate, $\alpha$ the linewidth enhancement factor;
$d$ is the normalized injection current in the gain section; $\kappa$
accounts for possible additional losses affecting wave propagation;
finally, $r=\frac{vg\Gamma}{\gamma}$, where $g$ is the differential
gain and $\Gamma$ is the transverse modal fill factor.
\begin{figure}
\begin{centering}
\includegraphics[width=0.8\linewidth]{Link.eps}
\par\end{centering}
\caption{\label{fig:link} Schematic representation of the lumped-element approach
for the link containing the gain medium, passive section, and bandwidth
limiting element.}
\end{figure}
Since the derivation of the model (\ref{eq:active link},\ref{eq:gain})
follows closely \citep{Vladimirov2005}, it is not reported here.
We limit ourselves to explaining the role of each term. The differential equation
(\ref{eq:active link}) corresponds to a filter with Lorentzian
line shape $\beta\exp\left[(-\beta+i\Omega)\xi\right]$ of the input signal
$\sqrt{\kappa}\exp\left[(1-i\alpha)G(t)/2\right]E(t-T,0)$,
where $E(t-T,0)$ is the field entering the link $T$ time before, $\sqrt{\kappa}$
is the total attenuation through nonresonant linear intensity losses,
and $\exp\left[(1-i\alpha)G(t)/2\right]$ is the amplification and
phase-shift factor due to the semiconductor gain medium. The rate
of change of the gain in equation (\ref{eq:gain}) is proportional
to the injection current $d$, the term $-G(t)$ describes the gain
decay without emission with the rate $\gamma$. The expression
$r\left(1-\mathrm{e}^{G(t)}\right)|E(t-T,0)|^{2}$
is the contribution of the electric field to the gain rate; it is
proportional to the variation of the intensity during the passage through
the link. For instance, if the electric field is amplified during
the passage, i.e. $|E(t-T,0)|^{2}<|E(t,L)|^{2}$, then this term is
negative, thus contributing to the gain decay.
Notice that the structure of the final model does not depend on the spatial distribution of the pump density, which
enters the final equation only via the integral $d$.
The minimal ring configuration, with periodic boundary condition $E(t,0)=E(t,L)$,
was already considered in~\citep{Vladimirov2005}.
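As an illustration, the ring configuration described by Eqs.~(\ref{eq:active link}) and (\ref{eq:gain}) with $E(t,0)=E(t,L)$ can be integrated with a simple fixed-step Euler scheme, where a circular buffer stores the delayed field. The following sketch is only meant to show the structure of such an integrator; the parameter values are illustrative assumptions, not taken from the references.

```python
import cmath
import math

def simulate_ring(beta=1.0, Omega=0.0, kappa=0.8, alpha=2.0,
                  gamma=0.1, d=1.0, r=1.0, T=1.0,
                  E0=0.0, dt=1e-3, t_max=100.0):
    """Euler integration of the field and gain equations with the ring
    boundary condition E(t,0) = E(t,L): the delayed input is the field
    itself one propagation time T earlier.  All parameter values are
    illustrative placeholders."""
    n_delay = int(round(T / dt))
    E_hist = [complex(E0)] * n_delay      # field history on [-T, 0)
    E, G = complex(E0), 0.0
    for k in range(int(round(t_max / dt))):
        idx = k % n_delay
        E_del = E_hist[idx]               # E(t - T)
        E_hist[idx] = E                   # store E(t) for reuse after T
        # field equation: filter driven by the amplified delayed field
        dE = beta * (-(1.0 - 1j * Omega / beta) * E
                     + math.sqrt(kappa)
                     * cmath.exp((1.0 - 1j * alpha) * G / 2.0) * E_del)
        # gain equation: pumping, relaxation, and field-induced depletion
        dG = gamma * (d - G + r * (1.0 - math.exp(G)) * abs(E_del) ** 2)
        E += dt * dE
        G += dt * dG
    return E, G
```

With a vanishing initial field the intensity stays zero and the gain relaxes towards the pump level $d$, as expected from Eq.~(\ref{eq:gain}); a nonzero seed field lets one explore the self-sustained regimes.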
Here, we discuss true network configurations, starting from
the inclusion of passive links, which can be described by the input-output relation
\begin{equation}
E(t,L)=\sqrt{\kappa}\,E\left(t-T,0\right),\label{eq:passive}
\end{equation}
where $T$ is the propagation time. This equation can
be considered as a special case of equation~(\ref{eq:active link})
for $\beta\to\infty$ (infinite bandwidth) and $G=0$. At variance
with active links, passive ones are allowed to be bidirectional,
since counter-propagating waves do not mutually interfere.
As we show later, some propagation directions of the passive links are not involved in the stationary dynamics, and the corresponding waves decay exponentially to zero. However, for the sake of completeness, we prefer to consider the general case of bidirectional propagation.
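In discrete time, a passive link obeying Eq.~(\ref{eq:passive}) is nothing but an attenuated delay line. The minimal sketch below makes this explicit; the fixed integration step \texttt{dt} is a modeling assumption, not part of the continuous-time description.

```python
from collections import deque

class PassiveLink:
    """Passive link: a pure delay T with attenuation sqrt(kappa).

    Discretized with a fixed step dt; a deque holds the samples that
    are still 'in flight' along the link."""

    def __init__(self, kappa, T, dt):
        n = int(round(T / dt))
        self.att = kappa ** 0.5
        self.buf = deque([0j] * n, maxlen=n)

    def step(self, B_in):
        # the oldest sample entered the link a time T ago
        A_out = self.att * self.buf[0]
        self.buf.append(B_in)            # evicts the oldest sample
        return A_out
```

Feeding a unit pulse into such a link returns $\sqrt{\kappa}$ after exactly $T/\mathrm{dt}$ steps and zero otherwise.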
\subsection{Connecting the links \label{subsec:Connecting}}
The coupling circuit elements (splitters) transform the incoming electric
fields ($A_{j}(t)$ variables) into suitable output fields $B_{j}(t)$.
Here below, we refer to a specific but commonly used linear four-ports,
$2\times2$ power splitter, described by a scattering $4\times4$
matrix $\mathsf{S}_{\text{sp}}$ with the following properties (see
Fig.~\ref{fig:coupling}):
\begin{figure}
\centering{}\includegraphics[width=0.9\linewidth]{Splitter.eps} \caption{(a) General linear coupling between the links and (b) a particular
realization with the 2x2 splitter. \label{fig:coupling}}
\end{figure}
\begin{enumerate}
\item Matched, i.e. no reflections, $\mathsf{S}_{\text{sp}}$ has $2\times2$
block-diagonal structure;
\item Reciprocal (inversion symmetry, $\mathsf{S}_{\text{sp}}$ is symmetric):
$\mathsf{S}_{\text{sp}}^{T}=\mathsf{S}_{\text{sp}}$ (the superscript $T$ denotes the transpose);
\item Lossless (energy conservation, $\mathsf{S}_{\text{sp}}$ is unitary):
$\mathsf{S}_{\text{sp}}^{\dagger}=\mathsf{S}_{\text{sp}}^{-1}$.
\end{enumerate}
Taking into account the above properties, $\mathsf{S}_{\text{sp}}$
can be written in the form \citep{Pozar2012}
\begin{equation}
\mathsf{S}_{\text{sp}}=\left(\begin{array}{cc}
0 & \mathsf{s}\\
\mathsf{s}^{T} & 0
\end{array}\right),\label{eq:smatrix}
\end{equation}
where
\begin{equation}
\mathsf{s}=\mathrm{e}^{i\phi}\left(\begin{array}{cc}
\mathrm{e}^{i\psi_{1}}\cos\theta & \mathrm{e}^{i\psi_{2}}\sin\theta\\
-\mathrm{e}^{-i\psi_{2}}\sin\theta & \mathrm{e}^{-i\psi_{1}}\cos\theta
\end{array}\right).\label{eq:smatrix2}
\end{equation}
Here, $\theta$ measures the splitting ratio, $\psi_{1,2}$ the phase
shifts in the splitting arms, and $\phi$ the overall splitter phase
shift. The $2\times2$ sub-matrix $\mathsf{s}$ describes the input-output
relation
\[
\left(\begin{array}{c}
B_{1}(t)\\
B_{2}(t)
\end{array}\right)=\mathsf{s}\left(\begin{array}{c}
A_{3}(t)\\
A_{4}(t)
\end{array}\right)
\]
for the output of the splitter given the input at the right ports
3 and 4, see Fig.~\ref{fig:coupling}. Similarly, $\mathsf{s}^{T}$
transforms the input from the left ports $A_{1}(t)$ and $A_{2}(t)$
into the output $B_{3}(t)$ and $B_{4}(t)$.
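The three properties listed above are easy to verify numerically. The sketch below builds $\mathsf{s}$ and $\mathsf{S}_{\text{sp}}$ from Eqs.~(\ref{eq:smatrix}) and (\ref{eq:smatrix2}) and checks reciprocity (symmetry) and losslessness (unitarity); the chosen angles are arbitrary test values.

```python
import numpy as np

def splitter_s(theta, psi1, psi2, phi):
    """2x2 sub-matrix s of the splitter, as parametrized in the text."""
    return np.exp(1j * phi) * np.array(
        [[np.exp(1j * psi1) * np.cos(theta),
          np.exp(1j * psi2) * np.sin(theta)],
         [-np.exp(-1j * psi2) * np.sin(theta),
          np.exp(-1j * psi1) * np.cos(theta)]])

def splitter_S(theta, psi1, psi2, phi):
    """Full 4x4 matched splitter matrix: block off-diagonal structure."""
    s = splitter_s(theta, psi1, psi2, phi)
    S = np.zeros((4, 4), dtype=complex)
    S[:2, 2:] = s      # right-port inputs -> left-port outputs
    S[2:, :2] = s.T    # left-port inputs  -> right-port outputs
    return S
```

Energy conservation, $\Vert\mathbf{B}\Vert=\Vert\mathbf{A}\Vert$, follows directly from unitarity, while the vanishing diagonal blocks encode the absence of reflections.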
\section{The LANER model\label{sec:The-LANER-model}}
In order to define the LANER model, it is helpful to refer to a specific
example such as the one depicted in Fig.~\ref{fig: example}. It
is also useful to introduce two network representations. The first
one is the physical network $\mathcal{P}$ (see panel (a)), whose
nodes are the splitters, labelled by the index $k$, while the links
are represented by the physical connections between pairs of splitters
(self-connections are allowed), labelled by the index $m$. The second
one, is the ``abstract'' network $\mathcal{A}$, whose
nodes are the fields $A_{j}(t)$ observed at the end of each given
link ($A_{j}(t)=E_{j}(t,L_{j})$, where $L_{j}$ is the length of
the specific link). Hence, each active link is characterized by a
single $A_{j}(t)$ variable (see, e.g. $A_{9}$ and $A_{10}$ in Fig.~\ref{fig: example}(a)).
In passive links, bidirectional propagation is possible; hence we
introduce two variables, $A_{j_{1}}(t)$ and $A_{j_{2}}(t)$ to represent
counter-propagating waves (according to some unspecified rule): see
e.g., the pairs $\left(A_{1},A_{2}\right)$, $\left(A_{3},A_{4}\right)$,
$\left(A_{5},A_{6}\right)$, and $\left(A_{7},A_{8}\right)$ in Fig.~\ref{fig: example}(a).
The links of $\mathcal{A}$ encode the connections among the fields
intervening in each splitter, see Fig.~\ref{fig: example}(b). For
instance, consider node $A_{4}$ of the LANER network. Since the field
$A_{4}$ affects the field $A_{9}$ through the splitter 2, there
is a directed connection from $A_{4}$ to $A_{9}$. Similarly, the
fields $A_{2}$ and $A_{10}$ affect $A_{4}$ through the splitter
1, leading to the connections $A_{2}\to A_{4}$ and $A_{10}\to A_{4}$.
The resulting LANER network for our example is shown in Fig.~\ref{fig: example}(b).
As it will become progressively clear, this latter representation
is more appropriate for the formulation of the dynamical equations.
It is, nevertheless, necessary to introduce a formal relationship
between the two representations. With reference to $\mathcal{A}$,
its nodes can be ordered as we prefer. For the sake of simplicity,
passive nodes (passive links in the physical network $\mathcal{P}$)
are labelled by an index $j\le N_{p}$, while $N_{p}<j\le N$ refers
to the active nodes in $\mathcal{A}$ (active links in $\mathcal{P}$).
Once the ordering has been chosen, the mapping from $\mathcal{A}$
to $\mathcal{P}$ is determined by two functions $M(j)$ and $Q(j)$,
where $M(j)$ identifies the physical link in $\mathcal{P}$, while
$Q(j)=\pm1$ denotes the corresponding propagation direction (the
value $\pm1$ can be assigned once for all in an arbitrary way but
consistently all over the network). Inversely, given the physical
link $m$ and the propagation direction $q$, the function $J(m,q)$ determines
the corresponding node within $\mathcal{A}$.
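To make the bookkeeping concrete, the maps $M$, $Q$, and $J$ can be stored as plain lookup tables. The toy labels below are hypothetical (they do not correspond to the example of Fig.~\ref{fig: example}); the only structural requirement is the round-trip identity $J(M(j),Q(j))=j$.

```python
# Hypothetical abstract network A with five nodes (fields):
# M[j] is the physical link carrying field j, Q[j] its direction.
# Links 'a' and 'b' are passive (bidirectional), 'c' is active
# (unidirectional), so only one direction of 'c' exists.
M = {1: 'a', 2: 'a', 3: 'b', 4: 'b', 5: 'c'}
Q = {1: +1, 2: -1, 3: +1, 4: -1, 5: +1}

def J(m, q):
    """Inverse map: physical link m and direction q -> node of A
    (None if no field propagates along m in direction q)."""
    for j, link in M.items():
        if link == m and Q[j] == q:
            return j
    return None
```

A splitter port fed by node $j$ then emits into node $J(M(j),-Q(j))$, i.e.\ the counter-propagating field on the same physical link, which is exactly the rule used in the text to wire the scattering matrix.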
\begin{figure}
\includegraphics[width=\linewidth]{Network-2.eps} \caption{\label{fig: example} A LANER example. (a) the physical optical network.
(b) the equivalent graph (the abstract LANER network). (c) the reduced
graph determining the dynamics. Active links and fields are denoted in red, while
passive in black.}
\end{figure}
From Eq.~(\ref{eq:passive}), the input-output relationship of the
passive links are represented as
\begin{equation}
A_{j}(t)=\sqrt{\kappa_{j}}B_{j}\left(t-T_{j}\right)\;,\quad j=1,\dots,N_{p},\label{eq:passive2}
\end{equation}
where $B_{j}(t)$ denotes the field amplitude at the beginning of
the link (see Fig.~\ref{fig:link}).
The evolution of the active links follows instead from Eq.~(\ref{eq:active link}),
\begin{equation}
\frac{1}{\beta_{j}}\frac{dA_{j}(t)}{dt} =
-\left(1-\frac{i\Omega_{j}}{\beta_{j}}\right)
A_{j}(t)+\sqrt{\kappa_{j}}\mathrm{e}^{(1-i\alpha_j)G_{j}(t)/2}B_{j}(t-T_{j})
\label{eq:Efinal-3}
\end{equation}
\begin{equation}
\frac{1}{\gamma_{j}}\frac{dG_{j}(t)}{dt} = d_{j}-G_{j}(t)+r_{j}\left(1-\mathrm{e}^{G_{j}(t)}\right)|B_{j}(t-T_{j})|^{2},
\label{eq:Gfinal-3}
\end{equation}
$j=N_{p}+1,\dots,N$, where the subscript $j$ has been added to the
parameters $\kappa$, $\alpha$, $\beta$, $d$, $\gamma$, $r$, and $T$ to
stress that the various links differ in general from one another.
Two-way propagation in active links cannot be modeled by the above
equations, since the two fields interact with each other. In order
to avoid the additional complications associated with such an interaction,
in the following we assume, as anticipated, unidirectional propagation along
active links (such as in Fig.~\ref{fig: example}).
\subsection{The scattering matrix}
The equations can be closed by expressing the $B_{j}$ fields
as functions of the $A_{j}$ variables. This is done by including
the action of the splitters in the model. Formally speaking, the transformation
is linear and can be written in vector notations as
\begin{equation}
{\bf B}(t)=\mathsf{S}{\bf A}(t),\label{eq:splitter}
\end{equation}
where ${\bf B}=\left\{ B_{j}\right\} _{j=1}^{N}$, ${\bf A}=\left\{ A_{j}\right\} _{j=1}^{N}$,
while $\mathsf{S}$ represents the scattering matrix, encoding the
action of the splitters.
The structure of $\mathsf{S}$ can be determined from the contribution
of the single splitters. Let $k=1,\dots,K$ be the index numbering
the splitters, and let $a_{1}(k)$ be the input field from the network
$\mathcal{A}$ entering port 1 of splitter $k$. Correspondingly,
$a_{2}(k),a_{3}(k),$ and $a_{4}(k)$ are the fields entering the
ports 2, 3, and 4. Similarly, we define $b_{1}(k)$, $b_{2}(k)$,
$b_{3}(k)$, and $b_{4}(k)$ as the outgoing fields from the corresponding
ports of the splitter $k$. The indices of the output fields can be
determined by invoking the mapping between the links of
network $\mathcal{P}$ and the nodes of $\mathcal{A}$,
\begin{equation}
b_{i}(k)=J[M(a_{i}(k)),-Q(a_{i}(k))]~,~i=1,\dots,4~.
\end{equation}
The relationships resulting in the example in Fig.~\ref{fig: example}
are presented in Table~\ref{tab:splittersexample}.
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
& $a_{1}$ & $a_{2}$ & $a_{3}$ & $a_{4}$ & $b_{1}$ & $b_{2}$ & $b_{3}$ & $b_{4}$\tabularnewline
\hline
\hline
$k=1$ & 1 & 3 & 2 & 10 & 2 & 4 & 1 & -\tabularnewline
\hline
$k=2$ & - & 9 & 4 & 5 & 9 & - & 3 & 6\tabularnewline
\hline
$k=3$ & 7 & 6 & 8 & - & 8 & 5 & 7 & 10\tabularnewline
\hline
\end{tabular}\caption{\label{tab:splittersexample} Definitions of the input-output fields
for the splitters from the example in Fig.~\ref{fig: example}. $k$
numbers the splitters, $a_{j}$ is the input field into port $j$
and $b_{j}$ the outgoing field from port $j$. The entry is
not defined if the propagation is unidirectional and there is no corresponding
field.}
\end{table}
From the structure of each $\mathsf{S}_{\text{sp}}$ matrix
(see Eq.~(\ref{eq:smatrix})) and the action of all splitters, it follows
that the elements of the scattering matrix are
\begin{eqnarray}
S_{j,l} & = & \sum_{k}\Bigl\{[\delta_{j,b_{1}(k)}\delta_{l,a_{3}(k)}+\delta_{j,b_{3}(k)}\delta_{l,a_{1}(k)}]\mathsf{s}_{1,1}(k)+\nonumber \\
& & [\delta_{j,b_{1}(k)}\delta_{l,a_{4}(k)}+\delta_{j,b_{4}(k)}\delta_{l,a_{1}(k)}]\mathsf{s}_{1,2}(k)+\nonumber \\
& & [\delta_{j,b_{2}(k)}\delta_{l,a_{3}(k)}+\delta_{j,b_{3}(k)}\delta_{l,a_{2}(k)}]\mathsf{s}_{2,1}(k)+\nonumber \\
& & [\delta_{j,b_{2}(k)}\delta_{l,a_{4}(k)}+\delta_{j,b_{4}(k)}\delta_{l,a_{2}(k)}]\mathsf{s}_{2,2}(k)\Bigr\}.\label{eq:scattering}
\end{eqnarray}
In simple terms, all elements of the matrix $\mathsf{S}$ are equal
to zero, except those which appear in the $k$th splitter transformation
for some value of $k$. Since each link is assumed to end in a single
well-defined splitter, only one of the $\mathsf{s}$ coefficients
can contribute to a given element of the matrix $\mathsf{S}$.
In the case of the LANER depicted in Fig.~\ref{fig: example}, the
matrix $\mathsf{S}$ is
\[
\mathsf{S}=\frac{1}{\sqrt{2}}\begin{bmatrix}1 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0\\
0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\
0 & 0 & 0 & 0 & 0 & -1 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0
\end{bmatrix}
\]
where we have assumed equal, $50\%$ splitters with zero phase delays,
i.e., $\theta=\pi/4$ and $\phi=\psi_{1}=\psi_{2}=0$.
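As a cross-check of Eq.~(\ref{eq:scattering}), the matrix above can be reassembled numerically from Table~\ref{tab:splittersexample}. The sketch below is illustrative: it assumes the $2\times2$ splitter block $\mathsf{s}=\frac{1}{\sqrt{2}}\bigl(\begin{smallmatrix}1 & 1\\ -1 & 1\end{smallmatrix}\bigr)$, consistent with the $50\%$, zero-phase splitters used here.

```python
import numpy as np

# Rebuild S from Eq. (eq:scattering) and the splitter table of the example.
# Assumption: the 2x2 splitter block is s = [[1, 1], [-1, 1]] / sqrt(2),
# i.e., a 50% splitter with zero phase delays (theta = pi/4, phi = psi = 0).
s11, s12, s21, s22 = 1.0, 1.0, -1.0, 1.0   # in units of 1/sqrt(2)

# (a1, a2, a3, a4, b1, b2, b3, b4) for each splitter k; None = undefined field
splitters = [
    (1, 3, 2, 10, 2, 4, 1, None),     # k = 1
    (None, 9, 4, 5, 9, None, 3, 6),   # k = 2
    (7, 6, 8, None, 8, 5, 7, 10),     # k = 3
]

S = np.zeros((10, 10))
for a1, a2, a3, a4, b1, b2, b3, b4 in splitters:
    for j, l, s in [(b1, a3, s11), (b3, a1, s11), (b1, a4, s12), (b4, a1, s12),
                    (b2, a3, s21), (b3, a2, s21), (b2, a4, s22), (b4, a2, s22)]:
        if j is not None and l is not None:
            S[j - 1, l - 1] += s
S /= np.sqrt(2)

# Structural check: each node of the abstract network has at most two
# incoming and two outgoing links.
assert (np.abs(S) > 0).sum(axis=0).max() <= 2
assert (np.abs(S) > 0).sum(axis=1).max() <= 2
```

The reconstructed matrix coincides with the one displayed above; the final checks reflect the constraint that each node of $\mathcal{A}$ has at most two incoming and two outgoing links.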
In the next subsection we derive the dynamical equations determining
the behavior of the LANER network. However, the network structure
alone already provides important insight into the system properties.
A moment's reflection shows that the network depicted in Fig.~\ref{fig: example}(b)
can be decomposed into three logically concatenated subnetworks, $\mathcal{N}_{1},\mathcal{N}_{2},$
and $\mathcal{N}_{3}$. The subnetwork $\mathcal{N}_{1}$ is composed of the nodes
$A_{8}$ and $A_{5}$, and it is not influenced by the rest of the network.
Moreover, being entirely passive, its action eventually
vanishes and can be discarded, as shown in Fig.~\ref{fig: example}(c).
Analogously, but with an opposite causality motivation, $\mathcal{N}_{3}$,
composed of the nodes $A_{1}$ and $A_{3}$, does not influence the
remaining six nodes composing $\mathcal{N}_{2}$: it is only forced
by them in a master-slave configuration.
As a result, the bulk of the evolution arises from the fully
connected component $\mathcal{N}_{2}$~\footnote{The scenario would be slightly more complex in case either $\mathcal{N}_{1}$,
or $\mathcal{N}_{3}$ contain an active link.}.
\subsection{LANER equations}
Eqs.~(\ref{eq:splitter},\ref{eq:scattering}) allow expressing the
field ${\bf B}$ in terms of ${\bf A}$, thereby closing the evolution
equations~(\ref{eq:passive2}), which can be rewritten in vector
notation,
\begin{equation}
\boldsymbol{A_{p}}=\mathsf{K}_{p}\mathsf{P}\mathcal{T}\mathsf{S}\boldsymbol{A},
\label{eq:algeb}
\end{equation}
where $\boldsymbol{A_{p}}=\left[A_{1},\dots,A_{N_{p}}\right]^{T}$
refers to the fields in the passive links, $\mathsf{K}_{p}=\text{diag}\left\{ \sqrt{\kappa_{1}},\dots,\sqrt{\kappa_{N_{p}}}\right\} $
represents the losses in the passive network part, $\mathsf{P}$ is
the projector onto the passive links and, finally, $\mathcal{T}$
is the linear time-delay operator such that $\left[\mathcal{T}\boldsymbol{B}\right]_{j}=B_{j}(t-T_{j})$.
Similarly, we can eliminate ${\bf B}$ from Eqs.~(\ref{eq:Efinal-3},\ref{eq:Gfinal-3})
and rewrite them as
\begin{equation}
\mathsf{C}\frac{d\boldsymbol{A}_{a}}{dt}=-\left[\boldsymbol{\text{I}}-i\mathsf{C}\mathsf{\Omega}\right]\boldsymbol{A}_{a}+\mathsf{F}_{1}(\boldsymbol{G})(\mathsf{I}-\mathsf{P})\mathcal{T}\mathsf{S}\boldsymbol{A},\label{eq:diff1}
\end{equation}
\begin{equation}
\mathsf{D}\frac{d\boldsymbol{G}}{dt}=\boldsymbol{D}-\mathsf{F}_{2}(\boldsymbol{G})\left[\left((\mathsf{I}-\mathsf{P})\mathcal{T}\mathsf{S}\boldsymbol{A}\right)\circ\left((\mathsf{I}-\mathsf{P})\mathcal{T}\mathsf{S}\boldsymbol{A}\right)^{*}\right],\label{eq:diff2}
\end{equation}
where $\boldsymbol{A}_{a}=\left[A_{N_{p}+1},\dots,A_{N}\right]^{T}$
are the fields for the active network parts, $\boldsymbol{G}=\left[G_{N_{p}+1},\dots,G_{N}\right]$
are the gains for the active links, $\mathsf{C}=\text{diag}\left\{ \beta_{j}^{-1}\right\} _{j=N_{p}+1}^{N}$
and $\mathsf{D}=\text{diag}\left\{ \gamma_{j}^{-1}\right\} _{j=N_{p}+1}^{N}$
are photon and gain timescales for the active part, $\boldsymbol{D}=\left[d_{j}\right]_{j=N_{p}+1}^{N}$
is the vector of the rescaled injection currents, $\mathsf{\Omega}=\text{diag}\left\{ \Omega_{j}\right\} _{j=N_{p}+1}^{N}$
are frequencies of the spectral filtering for active links, $\mathsf{F}_{1}\left(\boldsymbol{G}\right)=\text{diag}\left\{ \sqrt{\kappa_{j}}e^{(1-i\alpha_j)G_{j}/2}\right\} _{j=N_{p}+1}^{N}$
and $\mathsf{F}_{2}\left(\boldsymbol{G}\right)=\text{diag}\left\{ r_{j}\left(1-e^{G_{j}}\right)\right\} _{j=N_{p}+1}^{N}$
are nonlinear gain functions. Finally, the symbol $\circ$ denotes
the component-wise multiplication.
From the representation (\ref{eq:algeb})--(\ref{eq:diff2}), we
see that the first two equations are linear in $\boldsymbol{A}$ --
the action of the passive part of the network is given by the linear
operator $\mathsf{K}_{p}\mathsf{P}\mathcal{T}\mathsf{S}$ involving
time delays and a matrix multiplication -- while the active subnetwork
is essentially nonlinear. The system also possesses the $S^{1}$ symmetry
$\boldsymbol{A}\to\boldsymbol{A}e^{i\varphi}$, $\varphi\in S^{1}$,
which is common for laser systems. The resulting system involves delay
differential equations (\ref{eq:diff1})--(\ref{eq:diff2}) as well
as algebraic conditions (\ref{eq:algeb}). Such delay-differential-algebraic
equations have recently appeared in a model for certain laser systems \citep{Schelte2019}
and are the subject of emerging theoretical research \citep{Ha2014,Unger2018}.
A peculiarity of the model is the presence of two types of ``oscillators'':
active ones, characterized by two variables, and ``passive'' ones
described by algebraic relations with time-shifts (without time derivatives).
This is reminiscent of Boolean chaos~\citep{GhilZaliapinColuzzi2008,Zhang2009,Rosin2014,LohmannDHuysHaynesEtAl2017,LueckenRosinWorlitzerEtAl2017},
where all equations involve time-shifts and no ``filtering'' by
time-derivatives. Here, only a subset of the variables follows this kind of dynamical evolution. As for the network structure,
the links in $\mathcal{A}$ are all directed: each node can have at
most two outgoing links and two (different) incoming ones.
In the zero delay limit, the evolution equations of the active links (see Eqs.~(\ref{eq:Efinal-3},\ref{eq:Gfinal-3})) reduce to ODEs,
while the passive links, determined by Eq.~(\ref{eq:algeb}), reduce to (linear) algebraic conditions, meaning that they can
be eliminated from the evolution equations. As a result, in this limit, the LANER dynamics is equivalent to a
network of $N-N_p$ oscillators (the active links) each oscillator being described by three variables (the two components of the
field amplitude, plus the population inversion).
In a sense, the model is not too dissimilar from ensembles of either Lorenz or R\"ossler oscillators, both characterized by the
same number of variables. An important difference, however, lies in the
phase-shift symmetry, which is typical of laser models \cite{Uchida2005,Soriano2013}. As a result, a single oscillator cannot be chaotic, and complex dynamics can arise only through the interactions. The properties of the zero-delay model are closest to the rate-equation models for semiconductor diodes or compound-cavity lasers \cite{Yanchuk2004,Erzgraber2008,Kominis2017,Erneux2019}.
\section{Networks with a single active link\label{sec:Networks-single-active}}
\noindent Because of the presence of nonlinear elements, LANER systems
are expected to exhibit rich and complex dynamics. In
order to clarify the role of the passive network sections, in this
section we analyse setups characterized by a single active (nonlinear)
link (see Fig.~\ref{fig:Single-active-medium}).
\begin{figure}
\centering{}\includegraphics[width=0.5\linewidth]{one-active-general.eps}
\caption{Single active medium LANER configuration. \label{fig:Single-active-medium}}
\label{1active}
\end{figure}
Since there is a single active medium, to simplify the notation in this
section we drop the subscript and write $A$ (instead
of $A_{N}$) for the field, $G$ for the population inversion, and similarly
for all other parameters of the active link. The scattering (coupling) matrix can be written
in the block form as
\begin{equation}
\mathsf{S}=\left[\begin{array}{cc}
\mathsf{S}_{p} & \mathbf{S}_{1}\\
{\bf S}_{2}^{T} & s
\end{array}\right]\label{eq:S-para1}
\end{equation}
where $\mathsf{S}_{p}$ is an $N_{p}\times N_{p}$ matrix representing the
connections between the passive links; ${\bf S}_{1}$ and ${\bf S}_{2}$
are two $N_{p}$-dimensional vectors encoding the connections between
the passive links and the active one.
Finally, the scalar $s$ accounts for the self-interaction present
when the active link is closed onto itself.
As a result, the passive (algebraic) part of the dynamical equation,
Eq.~(\ref{eq:algeb}), can be written as
\begin{equation}
{\bf A}_{p}=\mathsf{K}_{p}\mathcal{T}\left(\mathsf{S}_{p}\mathbf{A}_{p}+\mathbf{S}_{1}A\right).\label{eq:para1Ap}
\end{equation}
Analogously, the two differential equations describing the active
link are
\begin{equation}
\frac{1}{\beta}\frac{dA(t)}{dt} = -\left(1-\frac{i\Omega}{\beta}\right)A(t)+\sqrt{\kappa}\mathrm{e}^{(1-i\alpha)G(t)/2}B(t-T),\label{eq:para1A}
\end{equation}
\begin{equation}
\frac{1}{\gamma}\frac{dG(t)}{dt} = d-G(t)+r\left(1-\mathrm{e}^{G(t)}\right)|B(t-T)|^{2},\label{eq:para1G}
\end{equation}
where $B(t)$ represents the field amplitude at the beginning of
the active link and is given by
\begin{equation}
B(t)=sA(t)+{\bf S}_{2}^{T}{\bf A}_{p}(t),\label{eq:para1B}
\end{equation}
see Eqs.~(\ref{eq:splitter}) and (\ref{eq:S-para1}). Expressions
(\ref{eq:para1Ap})--(\ref{eq:para1B}) comprise the complete set
of equations determining the dynamics of the setup in Fig.~\ref{fig:Single-active-medium}.
There is a variety of possible configurations of LANERs with one active
link. Due to the nonlinearities, the time delays, and the possibly complex
coupling structure encoded in the matrix $\mathsf{S}$, a description
of their dynamical properties in complete generality is infeasible.
Therefore, here we restrict our consideration to
solutions with stationary intensity of the form
\begin{equation}
A=\bar{A}\mathrm{e}^{i\omega t},\quad G=\bar{G},\quad\mathbf{A}_{p}=\bar{{\bf A}}_{p}\mathrm{e}^{i\omega t}\label{eq:stst}
\end{equation}
with time-independent $\bar{A}$, \textbf{$\bar{G}$}, and $\bar{\boldsymbol{A}}_{p}$.
We will call them stationary (LANER) solutions. Substituting (\ref{eq:stst})
into the evolution equations (\ref{eq:para1Ap})--(\ref{eq:para1B}),
we obtain
\begin{equation}
\begin{array}{cc}
& \frac{i\omega}{\beta}\bar{A}+\left(1-\frac{i\Omega}{\beta}\right)\bar{A}=\sqrt{\kappa}\mathrm{e}^{(1-i\alpha)\bar{G}/2-i\omega T}\left(s\bar{A}+{\bf S}_{2}^{T}\bar{{\bf A}}_{p}\right),\\
& \bar{G}=d+r\left(1-\mathrm{e}^{\bar{G}}\right)|s\bar{A}+\mathbf{S}_{2}^{T}\bar{\mathbf{A}}_{p}|^{2},\\
& \bar{\boldsymbol{A}}_{p}=\mathsf{K}_{\Omega}\left(\mathbf{S}_{1}\bar{A}+\mathsf{S}_{p}\bar{\mathbf{A}}_{p}\right),
\end{array}\label{eq:cweq}
\end{equation}
where $\mathsf{K}_{\Omega}=\text{diag}\left(\sqrt{\kappa_{1}}\mathrm{e}^{-i\omega T_{1}},\dots,\sqrt{\kappa_{N_{p}}}\mathrm{e}^{-i\omega T_{N_{p}}}\right)$.
The non-lasing state is trivially identified by $\bar{A}=0$, $\bar{\mathbf{A}}_{p}=0$,
and $\bar{G}=d$. The lasing states can be determined starting from
the last equation, which gives
\[
\left(\mathsf{I}-\mathsf{K}_{\Omega}\mathsf{S}_{p}\right)\bar{\mathbf{A}}_{p}=\mathsf{K}_{\Omega}\bar{A}\mathbf{S}_{1}.
\]
For the physically relevant cases $\kappa_{j}<1$, $j=1,\dots,N_{p}$,
the matrix $\mathsf{I}-\mathsf{K}_{\Omega}\mathsf{S}_{p}$ is invertible,
hence
\begin{equation}
\bar{{\bf A}}_{p}=\bar{A}\left(\mathsf{I}-\mathsf{K}_{\Omega}\mathsf{S}_{p}\right)^{-1}\mathsf{K}_{\Omega}\mathbf{S}_{1}.\label{eq:Ap}
\end{equation}
Upon replacing $\bar{{\bf A}}_{p}$ from (\ref{eq:Ap}) in the first
two equations of (\ref{eq:cweq}), we obtain
\begin{align}
& 1+i\frac{\omega-\Omega}{\beta}=\sqrt{\kappa}\mathrm{e}^{(1-i\alpha)\bar{G}/2-i\omega T}R\left(\omega\right),\label{eq:11}\\
& \bar{G}=d+r\left(1-\mathrm{e}^{\bar{G}}\right)\left|R(\omega)\right|^{2}\left|\bar{A}\right|^{2},\label{eq:22}
\end{align}
where the transfer function
\begin{equation}
R(\omega):=s+\mathbf{S}_{2}^{T}\left(\mathsf{I}-\mathsf{K}_{\Omega}\mathsf{S}_{p}\right)^{-1}\mathsf{K}_{\Omega}\mathbf{S}_{1}\label{eq:R-general}
\end{equation}
accounts for the propagation within the passive part of the network,
including the relative time-delays.
By equating the square moduli of both sides of Eq.~(\ref{eq:11}), we obtain the expression for the stationary gain
\begin{equation}
\bar{G}(\omega)=\ln\left[1+\left(\frac{\omega-\Omega}{\beta}\right)^{2}\right]-\ln\kappa-2\ln|R(\omega)|.\label{eq:gain-1}
\end{equation}
Further, Eq.~(\ref{eq:22}) leads to the following stationary field intensity
\begin{equation}
\left|\bar{A}\right|^{2}=\frac{\bar{G}(\omega)-d}{r\left(1-\mathrm{e}^{\bar{G}}\right)\left|R(\omega)\right|^{2}}.\label{eq:A-general}
\end{equation}
Finally, the argument of Eq.~(\ref{eq:11}) yields the condition
for the frequencies
\begin{equation}
\omega=\frac{1}{T}\left(-\frac{\alpha}{2}\bar{G}(\omega)-\arg\left(1+i\frac{\omega-\Omega}{\beta}\right)+\arg R(\omega)\right)+\frac{2\pi k}{T}, \label{eq:frequency}
\end{equation}
where $k\in\mathbb{Z}$.
The dependence on the delays due to the propagation along the passive links is contained in $\arg R(\omega)$, which is a
superposition of different ``periods'' $2\pi/T_i$ corresponding to the various passive links. In the limit case of the ring
laser~\citep{Menegozzi1973} (no passive links), $R(\omega)\equiv 1$.
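In this ring-laser limit, the stationary states follow from a scalar root search. A minimal numerical sketch (the parameter values are illustrative assumptions, with $\alpha=\Omega=0$):

```python
import numpy as np

# Illustrative parameters (assumptions, not values from the text)
kappa, beta, T = 0.8, 1.0, 10.0
alpha, Omega = 0.0, 0.0

def G_bar(omega):
    # Stationary gain, Eq. (eq:gain-1), with R(omega) = 1 (no passive links)
    return np.log(1.0 + ((omega - Omega) / beta) ** 2) - np.log(kappa)

def F(omega, k):
    # F = 0 is equivalent to the frequency condition, Eq. (eq:frequency)
    return (omega * T + 0.5 * alpha * G_bar(omega)
            + np.arctan((omega - Omega) / beta) - 2.0 * np.pi * k)

def bisect(f, a, b, n_iter=100):
    fa = f(a)
    for _ in range(n_iter):
        m = 0.5 * (a + b)
        if fa * f(m) <= 0.0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# One stationary frequency per branch k; the brackets [2pi(k-1/2)/T,
# 2pi(k+1/2)/T] contain a sign change of F since |arctan| < pi/2 < pi.
ks = list(range(-3, 4))
modes = [bisect(lambda w, k=k: F(w, k),
                2 * np.pi * (k - 0.5) / T, 2 * np.pi * (k + 0.5) / T)
         for k in ks]
```

For $\alpha=\Omega=0$, the $k=0$ branch gives $\omega=0$ exactly, with gain $\bar{G}(0)=-\ln\kappa$, i.e., the gain just compensates the losses at the line center.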
In the
next section we consider the less trivial example of a double ring
configuration, illustrating its stationary states.
\section{A first non-trivial LANER: the double ring configuration\label{sec:A-first-non-trivial}}
The simplest non-trivial example of a LANER is the configuration depicted
in Fig.~\ref{fig:1para-setup}, with a single directed active link with propagation delay $T$, accompanied
by a bidirectional passive link with delay $T_2$.
\begin{figure}
\centering\includegraphics[width=0.5\linewidth]{fig-double-ring.eps} \caption{\label{fig:1para-setup} Double ring LANER with one active and one
passive physical link. Physical network is shown in (a) and the corresponding
abstract network in (b).}
\end{figure}
Similarly to the LANER with one active link in Sec.~\ref{sec:Networks-single-active},
$A(t)$ represents the field at the end of the active link and
$G$ the corresponding gain variable. The equation for the active
link is
\begin{equation}
\frac{1}{\beta}\frac{dA(t)}{dt} = -(1-i\frac{\Omega}{\beta})A(t)+\sqrt{\kappa}\mathrm{e}^{(1-i\alpha)G(t)/2}B(t-T),\label{eq:Efinal-1}
\end{equation}
\begin{equation}
\frac{1}{\gamma}\frac{dG(t)}{dt} = d-G(t)+r\left(1-\mathrm{e}^{G(t)}\right)|B(t-T)|^{2},\label{eq:Gfinal-1}
\end{equation}
while for the passive link
\begin{equation}
A_{2}=\sqrt{\kappa_{2}}B_{2}\left(t-T_{2}\right).\label{eq:passivex}
\end{equation}
The field $A_{1}$ is not coupled with the rest of the network. Since
$A_{1}(t)=\sqrt{\kappa_{2}}B_{1}\left(t-T_{2}\right)=\sqrt{\kappa_{2}/2}A_{1}\left(t-T_{2}\right)$,
its amplitude vanishes ($A_{1}(t)\to 0$) exponentially with time and it does not contribute to the stationary regime. Accordingly, this field could have been removed from the model from the very beginning.
The coupling is described by the equation
\[
\left(\begin{array}{c}
B\\
B_{2}
\end{array}\right)=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}
1 & 1\\
-1 & 1
\end{array}\right)\left(\begin{array}{c}
A\\
A_{2}
\end{array}\right),
\]
where we assume the splitter action to be 50\% without phase delays.
The final dynamical equations are
\begin{multline}
\frac{1}{\beta}\frac{dA(t)}{dt} = -(1-i\frac{\Omega}{\beta})A(t)+ \\ \sqrt{\frac{\kappa}{2}}\mathrm{e}^{(1-i\alpha)G(t)/2}\left(A(t-T)+A_{2}(t-T)\right),\label{eq:Efinal-1-2}
\end{multline}
\begin{equation}
\frac{1}{\gamma}\frac{dG(t)}{dt} = d-G(t)+\frac{r}{2}\left(1-\mathrm{e}^{G(t)}\right)\left|A(t-T)+A_{2}(t-T)\right|^{2},\label{eq:Gfinal-1-2}
\end{equation}
accompanied by the equation for the passive link
\begin{equation}
A_{2}(t)=\sqrt{\frac{\kappa_{2}}{2}}\left(-A(t-T_{2})+A_{2}(t-T_{2})\right).\label{eq:passive-1}
\end{equation}
In practice, this equation is a compact way to account for infinitely many delays.
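For completeness, Eqs.~(\ref{eq:Efinal-1-2})--(\ref{eq:passive-1}) can be integrated with an explicit Euler scheme on a grid commensurate with both delays. The sketch below uses illustrative parameter values (not taken from the text) with the pump below threshold, so that the field decays and the gain relaxes to $G=d$:

```python
import numpy as np

# Illustrative, below-threshold parameters (assumptions, not from the text)
beta, gamma = 1.0, 1.0        # field and gain bandwidths
kappa, kappa2 = 0.2, 0.2      # losses of the active and passive links
alpha, Omega = 0.0, 0.0
d, r = 0.1, 1.0               # pump (below threshold) and saturation scale
T, T2 = 3.0, 1.0              # propagation delays
dt = 0.01
m, m2 = round(T / dt), round(T2 / dt)

n_steps = 5000                # integrate up to t = 50
A = np.zeros(n_steps + 1, dtype=complex)   # active-link field
A2 = np.zeros(n_steps + 1, dtype=complex)  # passive-link field
G = np.zeros(n_steps + 1)                  # gain
A[0], G[0] = 0.01, d                       # small seed field

def delayed(x, n, lag):
    # history before t = 0 is taken to be zero (except the seed at t = 0)
    return x[n - lag] if n >= lag else 0.0

for n in range(n_steps):
    # algebraic passive link, Eq. (eq:passive-1), evaluated at t_{n+1}
    A2[n + 1] = np.sqrt(kappa2 / 2) * (-delayed(A, n + 1, m2)
                                       + delayed(A2, n + 1, m2))
    # Euler step of Eqs. (eq:Efinal-1-2)-(eq:Gfinal-1-2)
    B = delayed(A, n, m) + delayed(A2, n, m)
    amp = np.sqrt(kappa / 2) * np.exp((1 - 1j * alpha) * G[n] / 2)
    A[n + 1] = A[n] + dt * beta * (-(1 - 1j * Omega / beta) * A[n] + amp * B)
    G[n + 1] = G[n] + dt * gamma * (d - G[n]
                                    + 0.5 * r * (1 - np.exp(G[n])) * abs(B) ** 2)
```

Below threshold, the seed field dies out and $G\to d$: the system relaxes to the non-lasing state $\bar{A}=\bar{\mathbf{A}}_{p}=0$, $\bar{G}=d$.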
Following the general approach of Sec.~\ref{sec:Networks-single-active},
for the stationary solutions
\begin{equation}
A_{j}=\bar{A}_{j}\mathrm{e}^{i\omega t},\qquad G=\bar{G}\label{eq:CW}
\end{equation}
the transfer function $R(\omega)$ of the passive part is
\[
R(\omega)=\frac{1}{\sqrt{2}}\left(1-\sqrt{\frac{\kappa_{2}}{2}}e^{-i\omega T_{2}}\left(1-\sqrt{\frac{\kappa_{2}}{2}}e^{-i\omega T_{2}}\right)^{-1}\right).
\]
Given $R(\omega)$, the stationary states can be determined from
Eq.~(\ref{eq:frequency}) for the frequency $\omega$, Eq.~(\ref{eq:gain-1})
for the gain $\bar{G}$, and Eq.~(\ref{eq:A-general}) for the intensity
$|\bar{A}|^2$.
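As a consistency check, the closed-form $R(\omega)$ can be compared with the general expression~(\ref{eq:R-general}), whose blocks reduce to scalars for this configuration; our reading of the coupling matrix above gives $s=\mathsf{S}_{p}=\mathbf{S}_{2}=1/\sqrt{2}$ and $\mathbf{S}_{1}=-1/\sqrt{2}$ (the parameter values below are illustrative):

```python
import numpy as np

kappa2, T2 = 0.5, 1.0   # illustrative passive-link loss and delay

def R_closed(omega):
    # Closed-form transfer function of the double ring (displayed above)
    w = np.sqrt(kappa2 / 2) * np.exp(-1j * omega * T2)
    return (1 - w / (1 - w)) / np.sqrt(2)

def R_general(omega):
    # Eq. (eq:R-general) with scalar blocks: s = Sp = S2 = 1/sqrt(2),
    # S1 = -1/sqrt(2), and K_Omega = sqrt(kappa2) * exp(-i omega T2)
    K = np.sqrt(kappa2) * np.exp(-1j * omega * T2)
    s = Sp = S2 = 1 / np.sqrt(2)
    S1 = -1 / np.sqrt(2)
    return s + S2 * K * S1 / (1 - K * Sp)

omega = np.linspace(-10.0, 10.0, 2001)
assert np.allclose(R_closed(omega), R_general(omega))
# R(omega) inherits the periodicity of the single passive delay line
assert np.allclose(R_closed(omega), R_closed(omega + 2 * np.pi / T2))
```

The two expressions agree on the whole grid, and the check confirms the $2\pi/T_2$ periodicity used in the discussion of the mode structure below.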
A typical set of stationary states is presented in Fig.~\ref{fig:Stationary-states-para1}
for different parameter values. The blue lines show the dependence
of the intensity $|A|^{2}$ in the active link on the frequency $\omega$, as
given by Eq.~(\ref{eq:A-general}), irrespective of whether the frequency
is supported by the LANER setup. The dots denote the actual stationary
states: the corresponding frequencies are determined by numerically
finding the roots of the scalar equation (\ref{eq:frequency}).
\begin{figure*}
\centering{}\includegraphics[width=1\textwidth]{cw-double-ring.eps} \caption{Stationary states (\ref{eq:CW}) of the double-loop network for different
parameter values. The points refer to the values of the intensity
$|A|^{2}$ of the active link versus its frequency $\omega$. The
blue line shows the dependence of $|A(\omega)|^{2}$ given by Eq.~(\ref{eq:A-general}).
Upper panel (a-c): $T=30$, $T_{2}=1$, and $d=0.5$ (a), $d=1.0$
(b), and $d=1.5$ (c). Middle panel (d-f): $T=2$, $T_{2}=7$, and
$d=0.5$ (d), $d=1.0$ (e), and $d=1.5$ (f). Bottom panel (g-i):
$T=21$, $T_{2}=13$, and $d=0.5$ (g), $d=1.0$ (h), and $d=1.5$
(i). \label{fig:Stationary-states-para1}}
\end{figure*}
The various panels are arranged in the following way: rows identify
different sets of propagation (delay) times $T$ and $T_{2}$; columns
identify different values of the pump parameter $d$. More specifically,
the upper row corresponds to a relatively long propagation time along
the active link ($T=30\gg T_{2}=1$); in the middle row, the ratio
is approximately opposite ($T=2<T_{2}=7$); finally, the lower
row corresponds to comparable propagation times ($T=21$, $T_{2}=13$
-- their ratio is an approximation of the golden mean). As for the
columns, from left to right, $d=0.5$, 1, and 1.5, respectively. A
first obvious consideration is that upon increasing the pump amplitude
(i.e. moving from left to right), the number of active modes increases
(see the range of possible frequencies) as well as their amplitude.
As in the standard multimode laser, the higher amplitude field modes
are located around the optical center frequency $\Omega$. The spectrum is not symmetric around $\Omega$ if the linewidth enhancement factor $\alpha \neq 0$.
By comparing the different rows, we see that the number of
active modes grows with the time-delays.
Moreover, we notice that the passive section determines the
amplitude of the active modes, inducing a relatively high
sensitivity of their frequency when $T_{2}$ is comparable to $T$.
The overall scenario can be traced back to the structure of the transfer function $R(\omega)$.
In Fig.~\ref{fig:R}, we separately plot modulus (upper panel) and phase (bottom panel) of $R(\omega)$
for the same delays as in the first row of Fig.~\ref{fig:Stationary-states-para1}.
Since the passive part is composed of a single link, $R(\omega)$ is periodic with period $2\pi/T_2$.
Since $T\gg T_2$, the variation of $R(\omega)$ is slow over the scale $\delta \omega = 2\pi/T$, so that
the distance between the active modes is approximately equal to $2\pi/ T$.
Moreover, we see that the intensity drops in correspondence with the minima
of the transfer function, while the local maxima of the field intensity are
located in the vicinity of the maxima of $\left|R(\omega)\right|$ as well as of the zeros
of the phase $\arg R(\omega)$; this corresponds to an optimal transfer
through the passive section, with minimal attenuation and no phase delay.
For larger $T_2$, the spacing is more irregular. In particular, if $T_2 \gg T$, it is the length
of the passive link that determines the average mode spacing (see the middle row in
Fig.~\ref{fig:Stationary-states-para1}).
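This mode-spacing argument can be verified numerically by solving Eq.~(\ref{eq:frequency}) branch by branch with the double-ring $R(\omega)$. In the sketch below, the parameters are illustrative (with $\alpha=\Omega=0$, and $\kappa_{2}$ moderate enough that every branch $k$ contains a bracketed root):

```python
import numpy as np

# Illustrative parameters (assumptions): T >> T2, as in the first row of the
# figure discussed above; kappa2 is kept moderate so arg R(omega) stays small.
kappa, kappa2 = 0.8, 0.32
beta, T, T2 = 1.0, 30.0, 1.0

def R(omega):
    # Double-ring transfer function (closed form of the previous section)
    w = np.sqrt(kappa2 / 2) * np.exp(-1j * omega * T2)
    return (1 - w / (1 - w)) / np.sqrt(2)

def F(omega, k):
    # F = 0 on branch k <=> Eq. (eq:frequency) with alpha = Omega = 0
    return (omega * T + np.arctan(omega / beta)
            - np.angle(R(omega)) - 2 * np.pi * k)

def bisect(f, a, b, n_iter=100):
    fa = f(a)
    for _ in range(n_iter):
        m = 0.5 * (a + b)
        if fa * f(m) <= 0.0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# One root per branch in [2pi(k-1/2)/T, 2pi(k+1/2)/T]
ks = list(range(-8, 9))
modes = np.array([bisect(lambda w, k=k: F(w, k),
                         2 * np.pi * (k - 0.5) / T, 2 * np.pi * (k + 0.5) / T)
                  for k in ks])
spacing = np.diff(modes)
```

For these values the average spacing stays within a few percent of $2\pi/T$, while the residual modulation of the individual spacings follows $\arg R(\omega)$.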
\begin{figure}
\includegraphics[width=\linewidth]{filter-double-ring.eps}
\caption{\label{fig:R} Modulus $|R(\omega)|$ (panel (a), black line, right
axis) and phase $\arg R(\omega)$ (panel (b), red line, right axis)
of the transfer function of the passive section compared to the intensity
of the field $A$. Parameter values: $T=30$, $T_{2}=1$, and $d=1.5$. }
\end{figure}
\section{Conclusions and future perspectives}
In this paper we have introduced a formalism for the study of
networks where waves of different frequencies propagate and interfere
with one another, according to resonances implicitly selected by the
network structure. As a result, we are able to study physical networks composed
of both active and passive links, where the waves can be either amplified
or damped. The dynamical system is represented as an abstract network whose nodes correspond to
the electric fields propagating along the physical links, while its
links correspond to the splitters which couple incoming with outgoing
fields.
This formalism can be considered an extension of the methods developed
to analyse quantum graphs to a context where wave propagation is nonlinear and
occurs in the presence of both gain and losses.
Under the simplifying assumption of a linear damped propagation along
the passive links, the corresponding dynamics can be treated as delayed boundary conditions.
In this paper we restricted the analysis to setups composed of a single active link. In such cases, the action
of the passive subnetworks can be schematized through a possibly complex transfer function,
which contributes to selecting the active degrees of freedom ``offered'', in principle, by the amplification
along the single active link. The potential high dimensionality of the resulting dynamics is ensured by the
delayed character of the equations.
In the presence of multiple active links, we expect additional peculiarities to emerge because of the
interactions between them. This challenging task goes, however, beyond the scope of the present work, which
is mostly methodological.
From the point of view of the model, a relevant
assumption that has helped to simplify its mathematical structure
is the adiabatic elimination of the atomic polarization.
This is a legitimate approximation in the
context of semiconductor lasers, but much less so for, e.g., erbium-doped
optical fibers.
A relevant step forward would be the extension
of the formalism to such systems. This objective can in principle
be achieved by explicitly including the spatial dependence
along the active links; however, the corresponding computational complexity
would be so high as to make the analysis practically unfeasible.
The true question is indeed under which conditions it is possible
to keep a \textit{spaceless} mathematical structure, integrating out
the spatial dependence of the fields. This is a nontrivial step. Already
in the context of semiconductor lasers, it should be noted that our
equation of reference, used to describe an active link (see Eq.~(\ref{eq:active link}))
is partially phenomenological. As shown by Vladimirov and Turaev \citep{Vladimirov2005},
the spatial integration of the population inversion leads to an algebraic
delay equation, which exhibits time singularities in the form of
high-frequency instabilities. The proposed solution, also adopted
here, consists in the introduction of a filter (schematized by
the time derivative of the field and characterized by the bandwidth $\beta$),
which regularizes the overall evolution.
This problem is reminiscent of the difficulty which emerges in a relatively
similar class of models proposed more than ten years ago to describe
an ensemble of dynamical units (the network nodes) performing instantaneous
Boolean operations in the absence of an external clock synchronizing
the individual operations (see Ghil et al.~\citep{GhilZaliapinColuzzi2008}).
The resulting mathematical model consists of a set of standard (non-differential)
delay equations, where, like here, the delays originate from the transmission
times along the single links. In such a context, an unphysical ultraviolet
divergence appears as a consequence of the assumption of the instantaneous
response of the single devices. Also in that case, the evolution equation
was phenomenologically regularized by turning it into a differential
equation.
Besides the extension of our formalism to a still wider class of optical
networks, another direction worth exploring concerns the actual dynamical
regimes that can arise within the current context. In this paper,
we have limited ourselves to determining the (many) stationary solutions.
What about their stability and the onset of irregular dynamics
because of the mutual nonlinear interactions?
Finally, useful information and hints can arise from the analysis of
a simpler setup: the zero-delay limit, where the LANER reduces to a
network of coupled three-dimensional oscillators with phase-shift symmetry.
It would be interesting to see to what
extent its dynamics differs from that of similar dynamical systems more
extensively studied in the literature.
\section*{Acknowledgements}
Funding: SY was supported by the German Science Foundation (Deutsche Forschungsgemeinschaft, DFG) [project No.~411803875].
\section*{References}
\section*{Checklist}
\begin{enumerate}
\item For all authors...
\begin{enumerate}
\item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
\answerYes{See Sections~\ref{sec: waldo_section} and~\ref{sec: experiments}}.
\item Did you describe the limitations of your work?
\answerYes{See Section~\ref{sec: discussion}}.
\item Did you discuss any potential negative societal impacts of your work?
\answerNA
\item Have you read the ethics review guidelines and ensured that your paper conforms to them?
\answerYes
\end{enumerate}
\item If you are including theoretical results...
\begin{enumerate}
\item Did you state the full set of assumptions of all theoretical results?
\answerYes{See Appendix~A.}
\item Did you include complete proofs of all theoretical results?
\answerNo{See Appendix A; we referenced a paper with the relevant proofs.}
\end{enumerate}
\item If you ran experiments...
\begin{enumerate}
\item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?\\
\answerYes{See Section~\ref{sec: intro}, where we included a link to all the code used. The dataset used in Section~\ref{sec: muons} is referenced therein.}
\item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?
\answerYes{See Sections~\ref{sec: waldo_stat_properties} and ~\ref{sec: experiments}, and Appendix C.}
\item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
\answerNA
\item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?
\answerYes{See Appendix C.}
\end{enumerate}
\item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
\begin{enumerate}
\item If your work uses existing assets, did you cite the creators?
\answerYes{See Section~\ref{sec: experiments}.}
\item Did you mention the license of the assets?
\answerNA
\item Did you include any new assets either in the supplemental material or as a URL?
\answerYes{We included a link to our code in Section~\ref{sec: intro}.}
\item Did you discuss whether and how consent was obtained from people whose data you're using/curating?
\answerNA
\item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?
\answerNA
\end{enumerate}
\item If you used crowdsourcing or conducted research with human subjects...
\begin{enumerate}
\item Did you include the full text of instructions given to participants and screenshots, if applicable?
\answerNA
\item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?
\answerNA
\item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?
\answerNA
\end{enumerate}
\end{enumerate}
\section{Introduction}
\label{sec: intro}
The vast majority of modern machine learning
targets {\em prediction} problems, with
algorithms such as Deep Neural Networks (DNN) being particularly successful with point predictions
of a target variable $Y \in \mathbb{R}$ given complex feature vectors or image data $\mathbf{x} \in \mathcal{X}$.
In many science applications, however, one is more interested in uncertainty quantification (UQ) than in point estimation per se. Domain sciences,
particularly the physical sciences,
often seek to {\em constrain parameters of interest} using theoretical (or simulation) models together with observed (experimental) data. For example, a primary goal in cosmological analysis is to increase the discovery potential of new physics by better constraining parameters of the $\Lambda$-CDM model in Big Bang cosmology, using higher-resolution simulation models and more precise survey data \cite{abell2009lsst}. In particle physics, a central goal
of collider experiments is to constrain parameters of theoretical models of particle interactions \cite{brehmer2018guide}. Climate scientists, on the other hand, are interested in constraining uncertain climate model parameters using atmospheric observations; e.g., \cite{Johnson2020} uses aircraft, ship, and ground observations to constrain parameters describing the impact of aerosols on Earth's radiative balance. The above-mentioned inferential tasks are all {\em inverse problems}, meaning that the parameters of interest are not directly observed but are the causes of the observed data
$\mathbf{x}$. We will henceforth denote internal parameters by $\boldsymbol{\theta}$ to distinguish the problem of using $\mathbf{x}$ to infer
an internal fixed parameter $\boldsymbol{\theta} \in \boldsymbol{\Theta}$,
from the ``predictive problem''
of using $\mathbf{x}$ to predict
an observable random variable $Y$.
UQ for inverse problems entails constructing confidence regions (rather than prediction regions) for $\boldsymbol{\theta}$.
Simulations can be used to predict how systems will behave in a variety of circumstances, and can help to constrain parameters when Gaussian
or other parametric likelihood models become questionable at the level of precision of modern scientific data.
Stochastic simulators are able to produce observable data $\mathbf{x}$ under different parameter settings, but they encode the likelihood function $\mathcal{L}(\boldsymbol{\theta};\mathbf{x}) \coloneqq p(\mathbf{x|\boldsymbol{\theta}})$ only implicitly. Statistical inference when $\mathcal{L}(\boldsymbol{\theta};\mathbf{x})$ is intractable is often dubbed likelihood-free inference (LFI), and is a form of simulation-based inference (SBI). LFI has undergone a revolution in terms of the complexity of problems that can be tackled (see \cite{cranmer2020frontier} for a recent review). Many LFI methods are, however, known to be {\em overly confident}, meaning that they yield confidence sets with empirical coverage
smaller than the desired nominal coverage \citep{hermans2021averting},
hence leading to potentially misleading results.
In parallel to LFI methods, DNNs (such as convolutional neural networks \cite{lecun1995convolutional}) are now used in many domain sciences to directly estimate internal parameters of interest in statistical models (e.g., \cite{kieseler2022calorimetric, gerber2021fast, lenzi2021neural, ho2019robust}).
DNNs are often easier to train than neural density estimators (such as normalizing flows), but the problem of producing reliable uncertainty estimates of internal parameters from DNN point predictions remains unsolved.
Let $\mathcal{D} \coloneqq (\mathbf{x}_1, \dots, \mathbf{x}_n)^T$
denote observable data, where the ``sample size'' $n$ refers to the number of observations from a fixed configuration of the parameters $\boldsymbol{\theta}$.
The goal of this work is to present a practical LFI procedure that can leverage any high-capacity prediction algorithm or neural posterior estimator
to construct
confidence regions for $\boldsymbol{\theta}$
with {\em correct conditional coverage}, that is
\begin{equation}
\label{eq:conditional_coverage}
\mathbb{P}(\boldsymbol{\theta} \in \mathcal{R}(\mathcal{D})|\boldsymbol{\theta}) = 1 - \alpha , \quad \forall \boldsymbol{\theta} \in \boldsymbol{\Theta},
\end{equation}
where $\alpha \in (0,1)$ is a prespecified miscoverage level.
Correct conditional coverage implies correct marginal coverage, $\mathbb{P}(\boldsymbol{\theta} \in \mathcal{R}(\mathcal{D})) = 1 - \alpha$, but the former is a stronger requirement that checks that the confidence set is calibrated
\textit{no matter what the true parameter is}, whereas marginal coverage only requires the set to be calibrated on average over the parameter space~$\boldsymbol{\Theta}$.
The concept of conditional coverage is also more widely applicable than marginal coverage: the latter only applies when $\boldsymbol{\theta}$ is treated as a random variable, while the former also applies when $\boldsymbol{\theta}$ is a non-random fixed parameter, as in all the above-mentioned scientific applications.
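To make the distinction concrete, consider the conjugate Gaussian model $\theta \sim \mathcal{N}(0, 2)$, $X|\theta \sim \mathcal{N}(\theta, 1)$ (the same toy setting revisited in Figure~\ref{fig:coverage_toy}). The exact posterior credible interval attains 95\% marginal coverage by construction, yet its conditional coverage degrades far from the prior mean; a minimal numpy sketch (ours, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

def credible_interval(x, prior_var=2.0, lik_var=1.0):
    # Exact 95% central credible interval for the conjugate Gaussian model
    # theta ~ N(0, prior_var), x | theta ~ N(theta, lik_var), n = 1.
    post_var = 1.0 / (1.0 / prior_var + 1.0 / lik_var)
    post_mean = post_var * x / lik_var
    half = 1.959964 * np.sqrt(post_var)
    return post_mean - half, post_mean + half

# Marginal coverage (theta drawn from the prior) is 95% by construction...
thetas = rng.normal(0.0, np.sqrt(2.0), size=20_000)
lo, hi = credible_interval(rng.normal(thetas, 1.0))
marginal = np.mean((lo <= thetas) & (thetas <= hi))

# ...but conditional coverage collapses for theta far from the prior mean.
x_far = rng.normal(4.0, 1.0, size=20_000)
lo, hi = credible_interval(x_far)
conditional_far = np.mean((lo <= 4.0) & (4.0 <= hi))
```

Here \texttt{marginal} lands near $0.95$ while \texttt{conditional\_far} drops to roughly $0.65$: exactly the failure mode that correct conditional coverage rules out.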
In addition to calibration, we want to construct confidence regions that are informative and can constrain parameters as much as possible; that is, with high statistical power.
Finally, for our method to be useful in practice, we need \textit{diagnostics} to verify whether we indeed achieve nominal conditional coverage across the entire parameter space $\boldsymbol{\Theta}$. Standard diagnostic tools for SBI \citep{talts2018validating} assess marginal coverage only, and cannot easily identify parameter values for which confidence sets are unreliable.
To address these problems, we introduce \textsc{Waldo}, a novel method to construct correctly calibrated confidence regions in an LFI setting. \textsc{Waldo} reframes the Wald test \cite{wald1943tests} and uses the Neyman construction \cite{neyman1937outline} to convert point predictions and posterior distributions from any prediction algorithm or posterior estimator into confidence regions with correct conditional coverage. It does so by exploiting estimates of the conditional mean $\mathbb{E}[\boldsymbol{\theta}|\mathcal{D}]$ and conditional variance $\mathbb{V}[\boldsymbol{\theta}|\mathcal{D}]$, but it is indifferent to the source of these quantities. \textsc{Waldo} stems from a recent likelihood-free frequentist inference (LF2I) framework proposed in \cite{dalmasso2021likelihood}, which showed how to construct valid confidence regions using likelihood estimates in an SBI setting. However, the authors also demonstrated that the statistical power of the resulting sets could easily degrade due to numerical approximations when using a likelihood-based test statistic. By recalibrating predictions or posteriors, \textsc{Waldo} is able to leverage recent developments in AI, which allow us to effectively handle complex high-dimensional data. Figure~\ref{fig:waldo_framework} summarizes the different components of \textsc{Waldo}, and the full procedure is detailed in Section~\ref{sec: waldo_methodology}. This approach embraces the strengths of both the Bayesian and frequentist perspectives on statistical inference by providing confidence sets that \textit{(i)} can effectively exploit available domain-specific knowledge, further constraining parameters when the prior is correctly specified, and \textit{(ii)} are guaranteed to have the nominal conditional coverage even in finite samples, regardless of the correctness of the prior. \textsc{Waldo} is amortized, meaning that once the procedure has been trained, it can be evaluated on any number of observations.
We lay out the statistical properties of \textsc{Waldo}, providing synthetic examples to support our claims. Finally, we show the effectiveness of this method with a real-world application in particle physics: inferring the energy of muons at a future particle collider using deep convolutional neural networks for high-dimensional 3D images from a particle detector. The results we obtain for this problem (see Figure~\ref{fig:muons68}) are of scientific interest in their own right, as a rigorous and powerful estimate of the uncertainty around observed muon energies is essential in the search for new physics. The code used to produce our experiments is available on \href{https://github.com/anonymoussoftware/waldo}{GitHub}.
\begin{figure}[t!]
\centering
\includegraphics[width=0.9 \textwidth]{figures/introduction/waldo_framework.png}
\caption{
\textbf{Schematic diagram of \textsc{Waldo}.} {\em Left:} To estimate the conditional mean $\mathbb{E}[\boldsymbol{\theta}|\mathcal{D}]$ and variance $\mathbb{V}[\boldsymbol{\theta}|\mathcal{D}]$ and construct the test statistic $\hat{\tau}^{\textsc{Waldo}}$, we use a first train set $\mathcal{T}$ and a prediction algorithm (e.g., DNN) or posterior estimator (e.g., normalizing flow). {\em Center:} Then, we simultaneously estimate critical values $C_{\boldsymbol{\theta}_0,\alpha}$, for all parameter values $\boldsymbol{\theta}_0 \in \boldsymbol{\Theta}$, via quantile regression on a second train set $\mathcal{T}^{\prime}$. {\em Bottom:} For observed data $D$, the two pieces are combined to construct a confidence region via Neyman inversion. {\em Right:} We compute coverage diagnostics for constructed confidence regions of any parameter $\boldsymbol{\theta} \in \boldsymbol{\Theta}$ via spline regression on a third train set~$\mathcal{T}^{\prime\prime}$. See Section~\ref{sec: waldo_methodology} for details.}
\label{fig:waldo_framework}
\end{figure}
\paragraph{Notation} We refer to parameters of interest as $\boldsymbol{\theta} \in \boldsymbol{\Theta} \subset \mathbb{R}^p$ and to input data as $\mathcal{D} = (\mathbf{x}_1, \dots, \mathbf{x}_n)^T$, with $\mathbf{x}_i \in \mathcal{X} \subset \mathbb{R}^{d}$ and possibly $p \neq d$. Throughout the paper, $n$ denotes the size of a
sample of dimension $d$ from a specific configuration of the parameters $\boldsymbol{\theta}$, and is distinct from $B, B^\prime$ and $B^{\prime\prime}$, i.e., the number of simulations in each of the datasets $\mathcal{T}, \mathcal{T}^\prime$ and $\mathcal{T}^{\prime\prime}$ required at different steps of our method. In addition, let $F_{\boldsymbol{\theta}}$ represent the stochastic simulator that encodes the likelihood function of $\boldsymbol{\theta}$. We distinguish between observable data and actual observations by denoting the latter as $D$. The true value of a parameter is always denoted as $\boldsymbol{\theta}^*$. Finally, we refer to confidence regions as $\mathcal{R}(\mathcal{D})$. The terms ``set'', ``region'' and (when $p=1$) ``interval'' are used interchangeably.
\section{Relation To Other Work}
\label{sec: related_work}
Existing approaches for uncertainty quantification in SBI (see \cite{cranmer2020frontier} for a recent review) are either based on Bayesian posterior estimation \cite{beaumont2002approximate,marin2016abcrf, Papamakarios2016SNP, lueckamnn2017Posterior, greenberg2019posterior, chen2019gaussiancopula, izbicki2019abc, radev2020Bayesflow} or on inversion of likelihood ratio tests \cite{Brehmer2020,brehmer2018guide,dalmasso2021likelihood}. \textsc{Waldo} differs from the former by aiming for conditional coverage instead of marginal coverage, and from the latter by taking advantage of the power of predictive ML algorithms. Similarly, existing approaches for deep learning uncertainty quantification (see \cite{gawlikowski2021survey} for a recent review), such as Monte Carlo dropout \cite{Gal2016} and conformal inference \cite{papadopoulos2007conformal}, construct \emph{prediction sets} instead of \emph{confidence sets}. As such, these approaches again control marginal coverage instead of conditional coverage. Before \textsc{Waldo}, there was no straightforward way to obtain confidence sets from point predictions or estimated posteriors obtained from deep neural networks and other predictive ML algorithms.
Various domain science applications have developed post-hoc corrections to predictive or posterior inferences to reduce observed biases and to improve the calibration of uncertainties. Examples of such corrections range from particle physics \cite{dorigo2022deep} to cosmology \cite{ho2019robust} and remote sensing \cite{kiel2019bias}. Usually the goal of these corrections is to reduce the impact of the prior specification, but in contrast to \textsc{Waldo}, these approaches do not provide formal coverage guarantees.
Posterior inferences do not control conditional coverage even for correctly specified priors \cite{patil2020objective}. \textsc{Waldo} addresses this by recalibrating the posterior using Neyman inversion. An alternative approach is to choose the prior so that the conditional coverage of posterior inferences is improved \cite{Bayarri2004,Berger2006,Kass1996,Scricciolo1999,Datta2005}. However, these modified prior distributions need to be created using properties of the likelihood instead of actual prior information, a limitation that is not present in \textsc{Waldo}.
\section{\textsc{Waldo}}
\label{sec: waldo_section}
We discuss the foundational tools needed for \textsc{Waldo}, describe the method in detail, and outline its statistical properties, while offering supporting numerical examples.
\subsection{Foundations}
\label{sec: waldo_background}
\paragraph{Neyman construction} A key ingredient of \textsc{Waldo} is the equivalence between hypothesis tests and confidence sets, which was formalized by Neyman in 1937 \cite{neyman1937outline}. The idea is to invert a series of two-sided size $\alpha$ hypothesis tests of the form
\begin{equation}
\label{eq: hypothesis_test}
H_0: \boldsymbol{\theta} = \boldsymbol{\theta}_0 \quad \mathrm{vs.} \quad H_1: \boldsymbol{\theta} \neq \boldsymbol{\theta}_0,
\end{equation}
for all $\boldsymbol{\theta}_0 \in \boldsymbol{\Theta}$. After observing a sample $D$, we then define a confidence region $\mathcal{R}(D)$ by taking all $\boldsymbol{\theta}_0$'s that were not rejected by the test above. By construction, the set $\mathcal{R}(\mathcal{D})$ satisfies Equation~\eqref{eq:conditional_coverage}, i.e., has the correct conditional coverage. Albeit powerful and intuitive, the Neyman construction is hard to use in practice because it requires estimating critical values $C_{\boldsymbol{\theta}_0, \alpha}$ that define the size $\alpha$ acceptance region for each hypothesis we invert. In the context of LFI, this is usually done by resorting to Monte Carlo or bootstrap approaches \cite{mackinnon2009bootstrap, ventura2010bootstrap}, which quickly become computationally prohibitive as the dimensionality of the parameter space increases.
\paragraph{The Wald test} Since any test that controls type I error at level $\alpha$ may be used, we can resort to the one introduced by Wald \cite{wald1943tests}, which leads to a uniformly most powerful test in many settings. It measures the agreement of the data with the null hypothesis for $\boldsymbol{\theta}$ using the squared distance between an unrestricted estimator $\hat{\boldsymbol{\theta}}$ of $\boldsymbol{\theta}$ and the hypothesized value $\boldsymbol{\theta}_0$. In the one-dimensional case, it is based on the following test statistic:
\begin{equation}
\label{eq: wald_statistic}
\tau^{\textsc{Wald}}(\mathcal{D};\theta_0) \coloneqq \frac{(\hat{\theta}^{\textsc{MLE}} - \theta_0)^2}{\mathbb{V}(\hat{\theta}^{\textsc{MLE}})},
\end{equation}
where $\hat{\theta}^{\textsc{MLE}}$ is the maximum-likelihood estimator of $\theta$ and $\mathbb{V}(\hat{\theta}^{\textsc{MLE}})$ can be any consistent estimator of its variance. In our setting, where we do not have access to the likelihood and we cannot resort to assumptions on the distribution of $\tau^{\textsc{Wald}}(\mathcal{D};\theta_0)$ nor to asymptotic regimes,
it becomes hard not only to estimate $\hat{\theta}^{\textsc{MLE}}$ and its variance effectively, but also to estimate critical values $\hat{C}_{\boldsymbol{\theta}, \alpha}$.
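For intuition, a self-contained sketch of the Wald test in a tractable Gaussian location model (our toy example, not the LFI setting, where $\hat{\theta}^{\textsc{MLE}}$ is the sample mean and $\mathbb{V}(\hat{\theta}^{\textsc{MLE}}) = 1/n$):

```python
import numpy as np

rng = np.random.default_rng(2)

def wald_stat(sample, theta0):
    # Gaussian location model with known unit variance:
    # MLE = sample mean, Var(MLE) = 1 / n.
    n = sample.size
    return (sample.mean() - theta0) ** 2 / (1.0 / n)

# Under H0 the statistic here is exactly chi^2 with 1 df, so rejecting
# when it exceeds 3.8415 gives a test of size alpha = 0.05.
theta0 = 2.0
stats = np.array([wald_stat(rng.normal(theta0, 1.0, size=20), theta0)
                  for _ in range(20_000)])
rejection_rate = float(np.mean(stats > 3.8415))
```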
\subsection{Methodology}
\label{sec: waldo_methodology}
\paragraph{From Wald to \textsc{Waldo}: Recalibrating posteriors and point predictions.} \textsc{Waldo} replaces $\hat{\theta}^{\textsc{MLE}}$ and its variance in the Wald statistic with the typical output of standard LFI or prediction algorithms. We define the \textsc{Waldo} test statistic for parameters of arbitrary dimensionality $p$ as
\begin{equation}
\label{eq: waldo_statistic}
\tau^{\textsc{Waldo}}(\mathcal{D};\boldsymbol{\theta}_0) = (\mathbb{E}[\boldsymbol{\theta}|\mathcal{D}] - \boldsymbol{\theta}_0)^T \mathbb{V}[\boldsymbol{\theta}|\mathcal{D}]^{-1} (\mathbb{E}[\boldsymbol{\theta}|\mathcal{D}] - \boldsymbol{\theta}_0),
\end{equation}
where $\mathbb{E}[\boldsymbol{\theta}|\mathcal{D}]$ and $\mathbb{V}[\boldsymbol{\theta}|\mathcal{D}]$ are the conditional mean and covariance matrix of $\boldsymbol{\theta}$ given the data $\mathcal{D}$, respectively. Existing literature on the asymptotic behaviour of Bayes estimators (e.g., \cite{chao1970asymptotic, ghosh2003preliminaries, ghosh1982expansions, li2020deviance}) has shown convergence results which may explain the large-sample equivalence of \textsc{Waldo} and Wald observed empirically. Specifically, it has been proved that
\[\left( \mathbb{E}[\boldsymbol{\theta}|\mathcal{D}] - \hat{\boldsymbol{\theta}}^{\textsc{MLE}} \right) = \smallO_p(n^{-1/2}) \quad \text{and} \quad \left( \mathbb{V}[\boldsymbol{\theta}|\mathcal{D}] - \frac{1}{n}H^{-1}(\hat{\boldsymbol{\theta}}^{\textsc{MLE}}) \right) = \smallO_p(n^{-1}),\]
where $H^{-1}(\hat{\boldsymbol{\theta}}^{\textsc{MLE}})$ is the inverse Fisher information matrix evaluated at $\hat{\boldsymbol{\theta}}^{\textsc{MLE}}$. In this case, \textsc{Waldo} would enjoy the asymptotic properties typical of the Wald test, which would also make it a pivotal test statistic.
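In practice the conditional mean and covariance are approximated from draws of an estimated posterior; a hedged numpy sketch of Equation~\eqref{eq: waldo_statistic} computed from Monte Carlo samples:

```python
import numpy as np

def tau_waldo(posterior_samples, theta0):
    # Waldo statistic from Monte Carlo draws of an estimated posterior.
    # posterior_samples: (n_draws, p) array; theta0: hypothesized value, (p,).
    mean = np.atleast_1d(posterior_samples.mean(axis=0))
    cov = np.atleast_2d(np.cov(posterior_samples, rowvar=False))
    diff = mean - np.asarray(theta0, dtype=float)
    return float(diff @ np.linalg.solve(cov, diff))

rng = np.random.default_rng(3)
draws = rng.normal(size=(100_000, 2))   # stand-in for posterior samples
t_null = tau_waldo(draws, [0.0, 0.0])   # hypothesis near the posterior mean
t_far = tau_waldo(draws, [3.0, 0.0])    # hypothesis ~3 posterior sds away
```

The statistic is small near the posterior mean and grows quadratically (in Mahalanobis distance) away from it, which is what Neyman inversion later exploits.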
\textsc{Waldo} expands on the framework formalized in \cite{dalmasso2021likelihood}, which consists of a modular procedure to \textit{\textbf{(i)}} estimate a likelihood-based test statistic via odds ratios, \textit{\textbf{(ii)}} estimate critical values $C_{\boldsymbol{\theta}_0, \alpha}$ across the parameter space via quantile regression, and \textit{\textbf{(iii)}} check that the constructed confidence sets achieve the desired coverage level for all $\boldsymbol{\theta} \in \boldsymbol{\Theta}$. Here, we replace \textit{\textbf{(i)}} and instead use posteriors or point predictions to compute $\tau^{\textsc{Waldo}}$ in \eqref{eq: waldo_statistic}. We now describe in detail the entire inferential machinery that is depicted in Figure~\ref{fig:waldo_framework}. Assuming we have access to a simulator $F_{\boldsymbol{\theta}}$ that can produce high-fidelity observable data $\mathcal{D}$ from a parameter configuration $\boldsymbol{\theta}$, we break down the construction of a confidence set (including diagnostics) in three steps:
\textit{\textbf{(i)}} \textbf{Neural density estimators or prediction algorithms for the test statistic.} By simulating a train set $\mathcal{T}=\{ (\boldsymbol{\theta}^{(j)}, \mathcal{D}^{(j)}) \}_{j=1}^{B}$, where each $\boldsymbol{\theta}^{(j)}$ is itself drawn from a prior $\pi_{\boldsymbol{\theta}}$, we have two ways of estimating $\mathbb{E}[\boldsymbol{\theta}|\mathcal{D}]$ and $\mathbb{V}[\boldsymbol{\theta}|\mathcal{D}]$, which together define $\tau^{\textsc{Waldo}}$. By leveraging a modern neural density estimator, such as normalizing flows \cite{papamakarios2021normalizing}, we can compute a posterior distribution from which we can efficiently draw a large number of samples\footnote{Amortized neural density estimators are more suitable than their sequential counterparts, as we need to evaluate the posterior on more than one sample for steps \textit{\textbf{(ii)}} and \textit{\textbf{(iii)}}.}. The posterior mean and covariance matrix are then estimated via Monte Carlo sampling. Alternatively, we can leverage the fact that prediction algorithms output an estimate of the conditional mean of $\boldsymbol{\theta}$ given $\mathcal{D}$, when minimizing the squared error loss. More specifically, we can obtain $\mathbb{E}[\boldsymbol{\theta}|\mathcal{D}]$ by directly predicting $\boldsymbol{\theta}$ from $\mathcal{D}$, and $\mathbb{V}[\boldsymbol{\theta}|\mathcal{D}]$ by predicting $(\boldsymbol{\theta} - \mathbb{E}[\boldsymbol{\theta}|\mathcal{D}])^2$ from $\mathcal{D}$ again, as $\mathbb{E}[(\boldsymbol{\theta} - \mathbb{E}[\boldsymbol{\theta}|\mathcal{D}])^2|\mathcal{D}] = \mathbb{V}[\boldsymbol{\theta}|\mathcal{D}]$\footnote{Note that the prediction approach is most suitable when $p=1$ and $n=1$, as standard algorithms are usually designed to handle pairs of data points of the form $(\theta, \boldsymbol{x})$.};
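The prediction-based route above can be sketched end to end on a conjugate Gaussian toy model, where both targets are known in closed form ($\mathbb{E}[\theta|x] = \tfrac{2}{3}x$ and $\mathbb{V}[\theta|x] = \tfrac{2}{3}$ for $\theta \sim \mathcal{N}(0,2)$, $x|\theta \sim \mathcal{N}(\theta,1)$, $n=1$); plain least squares stands in for the prediction algorithm:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated train set T: theta ~ N(0, 2), x | theta ~ N(theta, 1).
B = 200_000
theta = rng.normal(0.0, np.sqrt(2.0), size=B)
x = rng.normal(theta, 1.0)

# Step 1: estimate E[theta | x] by regressing theta on x under squared
# error loss (here least squares; a DNN would play the same role).
slope, intercept = np.polyfit(x, theta, deg=1)
cond_mean = slope * x + intercept

# Step 2: estimate V[theta | x] by regressing the squared residuals on x,
# since E[(theta - E[theta|x])^2 | x] = V[theta | x].
v_slope, v_intercept = np.polyfit(x, (theta - cond_mean) ** 2, deg=1)
```

The fitted coefficients recover the closed-form answers: \texttt{slope} $\approx 2/3$, \texttt{intercept} $\approx 0$, and a flat conditional variance with \texttt{v\_intercept} $\approx 2/3$.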
\textit{\textbf{(ii)}} \textbf{Quantile regression for critical values.} Once we have the building blocks of the test statistic from step \textit{\textbf{(i)}}, we can proceed to estimate critical values for $\tau^{\textsc{Waldo}}$. We simulate a second train set $\mathcal{T}^{\prime}=\{ (\boldsymbol{\theta}^{(j)}, \mathcal{D}^{(j)}) \}_{j=1}^{B^{\prime}}$, drawing $\boldsymbol{\theta} \sim r_{\boldsymbol{\theta}}$ uniformly over $\boldsymbol{\Theta}$ ($r_{\boldsymbol{\theta}} \equiv \mathcal{U}(\boldsymbol{\Theta})$) to allow calibration across the entire parameter space. By evaluating $\tau^{\textsc{Waldo}}$ over each element of $\mathcal{T}^\prime$, we obtain pairs $\left( \tau^{\textsc{Waldo}}(\mathcal{D}^{(j)};\boldsymbol{\theta}^{(j)}), \boldsymbol{\theta}^{(j)} \right)$ that we can use to train a quantile regressor and estimate critical values $\hat{C}_{\boldsymbol{\theta}_0, \alpha}$ over a fine grid $\boldsymbol{\theta}_0 \in \boldsymbol{\Theta}$. This procedure lets us test hypotheses of the form defined in Equation~\ref{eq: hypothesis_test} at the desired level $\alpha$;
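A hedged stand-in for this step, with binned empirical quantiles replacing the neural quantile regressor (toy pivotal statistic, so every critical value should sit near the $\chi^2_1$ quantile $\approx 3.84$):

```python
import numpy as np

rng = np.random.default_rng(5)

# Calibration set T': theta drawn uniformly over Theta, one x per theta.
Bp = 200_000
theta = rng.uniform(-5.0, 5.0, size=Bp)
x = rng.normal(theta, 1.0)
stats = (x - theta) ** 2            # toy test statistic, chi^2_1 under H0

# Estimate the conditional 95% quantile of the statistic in bins of theta;
# a quantile regressor would smooth over theta instead of binning.
edges = np.linspace(-5.0, 5.0, 21)
idx = np.digitize(theta, edges) - 1
crit = np.array([np.quantile(stats[idx == b], 0.95) for b in range(20)])
```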
\textit{\textbf{(i)}} $+$ \textit{\textbf{(ii)}} \textbf{Putting it all together: Neyman inversion.} The final step is to construct the confidence set as Neyman outlined in \cite{neyman1937outline}. Once $D$ is observed, we evaluate $\hat{\tau}^{\textsc{Waldo}}(D;\boldsymbol{\theta}_0)$ over a fine grid of $\boldsymbol{\theta}_0 \in \boldsymbol{\Theta}$, and retain all $\boldsymbol{\theta}_0$ for which the corresponding test does not reject the null. That is,
\begin{equation}
\label{eq:confidence_set_neyman}
\mathcal{R}(D) = \{ \boldsymbol{\theta}_0 \in \boldsymbol{\Theta} : \tau^{\textsc{Waldo}}(D;\boldsymbol{\theta}_0) \leq \hat{C}_{\boldsymbol{\theta}_0, \alpha}\}.
\end{equation}
As we show in Appendix~\ref{sec: waldo_theory}, step \textit{\textbf{(ii)}} leads to valid size $\alpha$ hypothesis tests as long as the quantile regression model is well estimated, which then implies that $\mathcal{R}(\mathcal{D})$ satisfies conditional coverage (Eq.~\ref{eq:conditional_coverage}) at level $1-\alpha$, regardless of the true value of $\boldsymbol{\theta}$ and of the size $n$ of the observable sample $\mathcal{D}$.
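The inversion itself is a grid scan; a minimal sketch with a toy pivotal statistic and a constant critical value, so that the resulting set recovers the textbook interval $x_{\mathrm{obs}} \pm 1.96$:

```python
import numpy as np

def tau(x_obs, theta0):
    # Toy Waldo-like statistic for x ~ N(theta, 1), n = 1:
    # conditional mean = x_obs, conditional variance = 1.
    return (x_obs - theta0) ** 2

x_obs = 1.3
grid = np.linspace(-4.0, 6.0, 1001)   # fine grid over Theta
crit = 3.8415                          # chi^2_1 95% quantile; in general
                                       # C_{theta0, alpha} varies with theta0

# Neyman inversion: keep every theta0 whose test does not reject.
region = grid[tau(x_obs, grid) <= crit]
lo, hi = region.min(), region.max()
```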
\textit{\textbf{(iii)}} \textbf{Coverage diagnostics.} To check that the constructed confidence sets indeed achieve the desired level of conditional coverage, we leverage the diagnostics procedure introduced in \cite{dalmasso2021likelihood}. For a third simulated train set $\mathcal{T}^{\prime\prime}=\{ (\boldsymbol{\theta}^{(j)}, \mathcal{D}^{(j)}) \}_{j=1}^{B^{\prime\prime}}$, we construct a confidence region for each $\mathcal{D}^{(j)} \in \mathcal{T}^{\prime\prime}$ and then regress $\mathds{1}\{\boldsymbol{\theta}^{(j)} \in \mathcal{R}(\mathcal{D}^{(j)})\}$ against $\boldsymbol{\theta}^{(j)}$ adopting a suitable regression method. By definition, this estimates $\mathbb{E}[\mathds{1}\{\boldsymbol{\theta} \in \mathcal{R}(\mathcal{D})\}|\boldsymbol{\theta}] = \mathbb{P}[\boldsymbol{\theta} \in \mathcal{R}(\mathcal{D})|\boldsymbol{\theta}]$ across the whole parameter space. Note that this technique can be used to check the empirical coverage of any uncertainty estimate, as illustrated in Section~\ref{sec: experiments} for posterior credible regions and prediction sets.
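A hedged sketch of the diagnostic on a toy problem whose exact 95\% region is known ($x \pm 1.96$ for $x \sim \mathcal{N}(\theta, 1)$), with binned means of the coverage indicator standing in for the regression over $\boldsymbol{\Theta}$:

```python
import numpy as np

rng = np.random.default_rng(6)

# T'': fresh (theta, x) pairs; the region under audit is x +/- 1.96.
Bpp = 200_000
theta = rng.uniform(-5.0, 5.0, size=Bpp)
x = rng.normal(theta, 1.0)
covered = (np.abs(x - theta) <= 1.959964).astype(float)

# Regress the coverage indicator on theta (binned means here; the paper's
# procedure uses a smoother such as spline regression) to estimate
# P(theta in R(D) | theta) across the parameter space.
edges = np.linspace(-5.0, 5.0, 11)
idx = np.digitize(theta, edges) - 1
cov_curve = np.array([covered[idx == b].mean() for b in range(10)])
```

A flat curve near $0.95$ signals correct conditional coverage; dips below the nominal level flag regions of $\boldsymbol{\Theta}$ where the sets are unreliable.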
\subsection{Statistical Properties}
\label{sec: waldo_stat_properties}
\label{sec: coverage}
\begin{figure}[t!]
\floatbox[{\capbeside\thisfloatsetup{capbesideposition={right,top}}}]{figure}[\FBwidth]
{\caption{
\textbf{\textsc{Property I:} \textsc{Waldo} guarantees conditional coverage across the entire parameter space $\boldsymbol{\Theta}$, regardless of the specified prior}. \textit{Left:} median of upper/lower bounds of constructed parameter regions. \textit{Right:} empirical coverage estimated using 100{,}000 samples for each $\theta$ over a fine grid in~$\Theta$. }
\label{fig:coverage_toy}}
{\includegraphics[width=0.65 \textwidth]{figures/coverage_toy/coverage_toy.png}}
\end{figure}
\begin{figure}[b!]
\floatbox[{\capbeside\thisfloatsetup{capbesideposition={right,top}}}]{figure}[\FBwidth]
{\caption{\textbf{\textsc{Property II:} \textsc{Waldo} exploits prior information and achieves higher statistical power.} Power curves computed by sampling $\mathcal{D}$ from the true likelihood $\mathcal{N}(40, 1)$ and recording the proportion of times a wrong parameter value is correctly outside the confidence set (1{,}000 repetitions). The prior influences how $\theta$ is sampled in $\mathcal{T}$. \textit{Left:} Uninformative prior, $\text{ }\mathcal{U}(35, 45)$. \textit{Right:} Informative prior, $\text{ }\mathcal{N}(40, 1)$. $(n=1)$}
\label{fig:power_gaussian}}
{\includegraphics[width=0.6 \textwidth]{figures/power_toy/power_gaussian_new.png}}
\end{figure}
\paragraph{\textsc{Property I:} \textsc{Waldo} guarantees coverage everywhere in $\boldsymbol{\Theta}$, regardless of the specified prior.} In scientific inquiry, practitioners sometimes have domain-specific knowledge that could guide inference through the elicitation of a prior distribution over the parameters of interest. In practice, there is no way to verify whether this prior is correctly specified with respect to the data-generating process or not, a fact that could introduce a bias in the estimated posterior or point prediction. Ideally, we would want the statistical guarantees of any estimated parameter region to be preserved under this bias. Figure~\ref{fig:coverage_toy} shows that this is not the case even for a simple one-dimensional setting. Specifically, we assume $\theta \sim \mathcal{N}(0, 2), \text{ } \mathcal{D}|\theta \sim \mathcal{N}(\theta, 1),$ with $p=1$ and $n=1$. We compute exact\footnote{Here we have everything available in closed form, so we can calculate the analytical versions of the Wald test statistics and of prediction sets.} 95\% confidence sets for the mean $\theta$ by inverting a series of Wald tests, the corresponding uncorrected 95\% exact central prediction intervals around the conditional mean $\mathbb{E}[\theta|\mathcal{D}] \pm 1.96\sigma$, and the fully estimated confidence sets with \textsc{Waldo} using $B=B^\prime=20{,}000$. The left panel clearly shows the bias of the uncorrected prediction sets increasing as we go further from the prior mean, which is then reflected in the poor coverage performance on the right panel; see \cite{patil2020objective} for further analysis of this effect. \textsc{Waldo} is able to recalibrate the uncertainty around the conditional mean and output a confidence set with the correct level of coverage across $\boldsymbol{\Theta}$.
\paragraph{\textsc{Property II:} \textsc{Waldo} exploits prior information and achieves higher statistical power.} When the prior is correctly specified, we would like to retain conditional coverage while also taking advantage of the available knowledge, embracing both the Bayesian and the frequentist perspectives to statistical inference. Figure~\ref{fig:power_gaussian} shows that \textsc{Waldo} is indeed able to further constrain parameters of interest when a well-specified prior is available, relative to a standard frequentist approach which instead uses only the information contained in the likelihood function. The setting is similar to that of the previous point, but here the data is generated from a unique ``true'' Gaussian likelihood $\mathcal{D}|\theta \sim \mathcal{N}(\theta=40, 1)$. When the prior is uninformative, i.e., $\theta \sim \mathcal{U}(35, 45)$, the two approaches coincide, but when $\theta \sim \mathcal{N}(40, 1)$, \textsc{Waldo} exhibits higher statistical power.
\paragraph{\textsc{Property III:} Estimating the conditional variance matters.} Finally, recall that in principle any test statistic defined in an LFI setting could be used for our framework. One could then define a simpler ``unstandardized'' test statistic $\tau^{\textsc{Waldo-novar}}(\mathcal{D};\boldsymbol{\theta}_0) = (\mathbb{E}[\boldsymbol{\theta}|\mathcal{D}] - \boldsymbol{\theta}_0)^T(\mathbb{E}[\boldsymbol{\theta}|\mathcal{D}] - \boldsymbol{\theta}_0)$. It turns out that estimating $\mathbb{V}[\boldsymbol{\theta}|\mathcal{D}]$ and using $\tau^{\textsc{Waldo}}$ is actually of crucial importance, as it leads to confidence regions of smaller or equal expected volume, especially in settings where the conditional variance varies significantly as a function of $\boldsymbol{\theta}$. Consider, for example, the problem of estimating the shape of a Pareto distribution with fixed scale $x_\mathrm{min}=1$ and true unknown shape $\theta^*=5$, which yields a strongly right-skewed data distribution. Figure~\ref{fig:power_pareto} shows that $\tau^{\textsc{Waldo}}$ has much higher power than $\tau^{\textsc{Waldo-novar}}$ for inferring $\theta$. Dividing by the conditional variance effectively stabilizes the test statistic and makes its distribution over $\mathcal{D}$ more pivotal, i.e., less dependent on $\theta$. This implies that the critical values will be relatively constant over $\theta$ (see top right panel for \textsc{Waldo}), which yields tighter parameter regions due to the curvature of the test statistic.
\begin{figure}[t!]
\floatbox[{\capbeside\thisfloatsetup{capbesideposition={right,top}}}]{figure}[\FBwidth]
{\caption{\textbf{\textsc{Property III:} Estimating the conditional variance matters.} \textit{Left:} Power curves at 95\% confidence level when the true Pareto shape $\theta^*=5$, implying a very skewed data distribution. \textit{Right:} Test statistics and critical values as a function of $\theta$. ($n=10$).}
\label{fig:power_pareto}}
{\includegraphics[width=0.7\textwidth]{figures/power_toy/power_pareto.png}}
\end{figure}
\section{Experimental Results}
\label{sec: experiments}
We assess the performance of \textsc{Waldo} on two challenging experiments which complement the synthetic examples of Section~\ref{sec: waldo_section}. In the first example (Section~\ref{sec: mixture_bayesianLFI}), we show how to recalibrate a posterior distribution estimated via normalizing flows, especially when prior information is available. The second example (Section~\ref{sec: muons}) tackles a complex particle energy reconstruction problem in high-energy physics: we convert predictions from a custom deep learning architecture to confidence intervals with correct coverage and high power.
\subsection{Gaussian Mixture Model: Recalibrating a Posterior Estimated via Normalizing Flows}
\label{sec: mixture_bayesianLFI}
This inference task was introduced in \cite{sisson2007sequential} and has become a standard benchmark in the SBI literature \cite{clarte2021componentwise, toni2009approximate, simola2021adaptive, lueckmann2021benchmarking}. It consists of estimating the (common) mean of the components of a two-dimensional Gaussian mixture, where one component has much broader covariance than the other:
$\mathcal{D}|\boldsymbol{\theta} \sim \frac{1}{2}\mathcal{N}(\boldsymbol{\theta}, \mathbf{I}) + \frac{1}{2}\mathcal{N}(\boldsymbol{\theta}, 0.01\odot\mathbf{I}),$
where $\boldsymbol{\theta} \in \mathbb{R}^2$ and $n=1$\footnote{While \textsc{Waldo} works for a sample made of any number of observations, we had to use $n=1$ because the \texttt{SBI} Python library we used to estimate the posterior does not yet support larger sample sizes.}. We estimate $p(\boldsymbol{\theta}|D)$ using the implementation of masked autoregressive flows available in \cite{nflows} through the \texttt{SBI} software package of \cite{tejero-cantero2020sbi}, and report results obtained with two different priors: $\boldsymbol{\theta} \sim \mathcal{U}([-10, 10]^2)$ and $\boldsymbol{\theta} \sim \mathcal{N}(\mathbf{0}, 2\odot\mathbf{I})$. We estimated the critical values with a 3-layer neural network minimizing a quantile loss. Simulated data sets used for training are of the following sizes: $B=100{,}000$, $B^\prime=10{,}000$ when using a uniform prior and $B^\prime=30{,}000$ when using a Gaussian prior. Conditional mean and variance were approximated with $50{,}000$ Monte Carlo samples from the neural posterior. From the results, we conclude the following:
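The Monte Carlo approximation of the conditional mean and covariance from posterior draws can be sketched as follows (a standalone toy example: Gaussian draws stand in for samples from the estimated flow posterior $\hat{p}(\boldsymbol{\theta}|\mathcal{D})$):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for 50,000 draws from the neural posterior p_hat(theta | D).
samples = rng.normal(loc=[1.0, -2.0], scale=0.5, size=(50_000, 2))

post_mean = samples.mean(axis=0)          # Monte Carlo E[theta | D]
post_cov = np.cov(samples, rowvar=False)  # Monte Carlo V[theta | D], 1/(N-1) norm

def waldo(theta0):
    # Waldo statistic evaluated at a candidate parameter value.
    diff = post_mean - theta0
    return float(diff @ np.linalg.solve(post_cov, diff))
```

The same two summaries are all that is needed downstream, both for the critical-value step and for Neyman inversion.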
\textit{\textbf{(i)}} \textbf{Absent prior knowledge, \textsc{Waldo} corrects for estimation errors while retaining high power.} When no prior information is available, it is common to sample $\boldsymbol{\theta}$ according to a uniform distribution over the parameter space. In this case, confidence sets and posterior credible regions will largely overlap, but the latter might suffer from estimation bias and approximation errors that could hinder the statistical reliability of the estimated region. \textsc{Waldo} can correct even for this problem and guarantee conditional coverage, as we can see from the top left plot in the left panel of Figure~\ref{fig:waldo_bayesian_lfi}.
\begin{figure}[t]
\floatbox[{\capbeside\thisfloatsetup{capbesideposition={left,top}}}]{figure}[\FBwidth]
{\caption{\textbf{\textsc{Waldo} recalibrates posterior distributions to yield confidence regions with correct conditional coverage and high power}. \textit{Left:} Examples of 95\% confidence and credible regions when estimating the posterior using different prior distributions. \textit{Right:} Output of the diagnostics procedure using a Gaussian $\mathcal{N}(\mathbf{0}, 2\odot\mathbf{I})$ prior to train the posterior estimator.}
\label{fig:waldo_bayesian_lfi}}
{\includegraphics[width=0.78\textwidth]{figures/bayesian_sbi/sets_diagnostics_final.png}}
\end{figure}
\textit{\textbf{(ii)}} \textbf{\textsc{Waldo} guarantees coverage everywhere even if the prior is misspecified.} Even when domain knowledge is available, there is no way in practice to check for misspecifications with respect to the truth. If the observed data happens to derive from a true parameter that is far from the bulk of the prior distribution, then the posterior will try to use information in the data to construct a credible region consistent with the misspecified prior. As we can see from the two right plots in the left panel of Figure~\ref{fig:waldo_bayesian_lfi}, \textsc{Waldo} can correct for the bias in the estimated posterior. The right panel shows the output of the diagnostic procedure described in Section~\ref{sec: waldo_methodology}: while credible regions \textit{approximate} the desired level of conditional coverage only where the prior puts non-negligible mass, \textsc{Waldo} yields confidence regions that are guaranteed to cover the true parameter everywhere with probability $1-\alpha$.
\textit{\textbf{(iii)}} \textbf{When the prior is correctly specified, \textsc{Waldo} further constrains parameters of interest.} Finally, we highlight that when prior knowledge is accurate, \textsc{Waldo} does not trade off coverage for power and yields confidence regions of smaller expected size with respect to those estimated using a non-informative prior, as we can see from the bottom left plot in the left panel of Figure~\ref{fig:waldo_bayesian_lfi}.
\subsection{High-Dimensional 3D Image Data: Estimate of Muon Energy in a
Granular Calorimeter}
\label{sec: muons}
We now discuss the performance of \textsc{Waldo} on an application of interest to fundamental research: estimating the energy of muons at a future particle collider. Muons are a heavier replica of electrons; they are produced in sub-nuclear reactions involving electroweak interactions. Muons are also excellent probes of new phenomena: in fact, their detection and measurement have been key to several crucial discoveries in the past decades \cite{augustin1974discovery, herb1977observation, cdf1995observation, aad2012observation}. The energy of a muon can be determined from the curvature of its trajectory in a magnetic field, but at energies above a few TeV this method breaks down, as trajectories become indistinguishable from straight paths even within the strongest practically achievable fields. In the search for viable alternatives, it has been observed \cite{kieseler2022calorimetric, dorigo2022deep} that both the pattern and the size of the small radiative energy losses that muons undergo when traversing finely segmented calorimeters can be used to infer the incident muon energy. This technique may preserve the power of muons as probes of new physics in future higher-energy colliders.
\begin{figure}[t!]
{\caption{\small
\textbf{Muon energy reconstruction with \textsc{Waldo} using calorimetric measurements of increasing dimensionality.} \textsc{Waldo} (\textcolor{Blue}{blue}, \textcolor{Dandelion}{orange}, \textcolor{Maroon}{red}) guarantees nominal coverage ($68.3\%$), while $1\sigma$ prediction intervals (\textcolor{OliveGreen}{green}) under- or over-cover in different regions of $\Theta$ and are wider on average than the corresponding \textsc{Waldo} intervals. \textit{Left:} Energy deposited by a $\theta \approx 3.2$ TeV muon entering a calorimeter with $32\times32\times50$ cells. \textit{Center:} Empirical coverage estimated via \textsc{Waldo} diagnostics. \textit{Right:} Median lengths of constructed intervals.
\label{fig:muons68}}
{\includegraphics[width=1\textwidth]{figures/muons/muons_68_withHit_new.png}}
\end{figure}
We tackle this problem by applying \textsc{Waldo} within a prediction framework. We have available $886{,}716$ 3D ``images'' and scalar true muon energies $\theta$ obtained through Geant4 \cite{agostinelli2003geant4}, a high-fidelity stochastic simulator that encodes the physical process generating the calorimeter deposits. Data is available at \cite{kieseler_jan_2021_5163817}. As our interest is in constraining muon energies as much as possible while guaranteeing conditional coverage, we use three versions of the same data set, of increasing dimensionality: a 1D representation computed by summing over all calorimeter cells with deposited energy $E > 0.1$ GeV, for each muon; a 28-dimensional representation obtained by extracting 28 features from the spatial and energy information of the calorimeter cells \cite{kieseler2022calorimetric}; and the full calorimeter measurements ($\mathbf{x}_i \in \mathbb{R}^{51{,}200}$). For the first two datasets, we estimate the conditional mean and variance using Gradient Boosted Trees. For the full calorimeter data, we leverage the Deep 3D Convolutional Neural Network developed in \cite{kieseler2022calorimetric}. In each case, we use Gradient Boosted Trees for quantile regression.
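The critical-value step can be sketched with a numpy-only stand-in for the quantile regressor (binned empirical quantiles instead of gradient boosted trees; the data-generating process below is synthetic and not the Geant4 simulation):

```python
import numpy as np

rng = np.random.default_rng(1)
B_prime, alpha = 30_000, 0.317            # 1 - alpha = 68.3% intervals

theta = rng.uniform(0.0, 10.0, size=B_prime)
# Synthetic Waldo statistics whose spread grows with theta; chi2(1) is the
# large-sample reference distribution of a one-dimensional Wald statistic.
tau = (1.0 + 0.2 * theta) * rng.chisquare(df=1, size=B_prime)

# Stand-in for quantile GBT: piecewise-constant quantile regression on bins.
edges = np.linspace(0.0, 10.0, 21)
which = np.clip(np.digitize(theta, edges) - 1, 0, 19)
crit = np.array([np.quantile(tau[which == b], 1 - alpha) for b in range(20)])
```

The estimated $(1-\alpha)$ quantile of $\tau$ in each $\theta$-bin plays the role of the critical value $\hat{C}_{\theta,\alpha}$; because the spread of $\tau$ grows with $\theta$ here, the fitted critical values increase across the bins.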
Figure~\ref{fig:muons68} shows that confidence intervals constructed with \textsc{Waldo} achieve \textit{exact conditional coverage} ($68.3\%$) regardless of the data set used. The corresponding $1\sigma$ prediction intervals using full calorimeter data, instead, exhibit over- or under-coverage in different regions of the parameter space. The latter is especially problematic because it implies that prediction sets for very high energies contain the true value with much lower probability than anticipated. This is due to prediction sets being centered around the point prediction, which is downward biased at high energies because of low signal-to-noise ratio in the calorimeter data for these kinds of muons. In terms of the \textit{length of the intervals}, we make two key observations. First, using higher-dimensional representations of the particle-calorimeter interaction provides valuable information, with full calorimeter data yielding the tightest constraints on muon energies for the same level of coverage across the parameter space. Second, confidence intervals constructed with \textsc{Waldo} are on average even shorter than the corresponding prediction intervals, while also guaranteeing conditional coverage.
\section{Discussion}
\label{sec: discussion}
We presented \textsc{Waldo}, a novel method to construct correctly calibrated confidence regions for parameters in {\em inverse} problems by recalibrating predictions and posteriors from prediction or posterior estimation algorithms. To increase power, one may be able to leverage Bayesian priors, as shown in Sections~\ref{sec: waldo_stat_properties} and~\ref{sec: mixture_bayesianLFI}, or take advantage of the internal structure of the simulator as in~\cite{Brehmer2020}. Alternatively, one could adaptively simulate more data in specific regions of interest in the parameter space. The latter, and a more formal treatment of the relation between power and priors, are interesting directions for future studies. Note that the prediction approach is especially useful for settings with $n=1$, $p=1$ and large $d$ (as seen in Section~\ref{sec: muons}). The posterior approach, on the other hand, is particularly valuable for settings with multiple observations $n$ and larger values of $p$. Finally, when dealing with nuisance parameters, standard (hybrid) approaches marginalize over them \cite{cousins1992}. Hybrid methods do not formally control $\alpha$, but offer a good approximation that can lead to robust results \cite{qian2016gaussian, dalmasso2021likelihood}. This approach can be easily incorporated into \textsc{Waldo}, and using the diagnostics procedure we can shed light on whether or not the final results have adequate conditional coverage, as was done in \cite{dalmasso2021likelihood}.
\input{acknowledgements}
\newpage
\bibliographystyle{plainnat}
\section{Theoretical Properties}
\label{sec: waldo_theory}
We assume that the quantile regression estimator described in Section~\ref{sec: waldo_methodology} is consistent
in the following sense:
\begin{Assumption}[Uniform consistency]
\label{assum:quantile_consistent_simple_null}
Let $ F(\cdot|\boldsymbol{\theta})$ be the cumulative distribution function of the test statistic $\tau(\mathcal{D};\boldsymbol{\theta}_0)$ conditional on $\boldsymbol{\theta}$, where $\mathcal{D} \sim F_{\boldsymbol{\theta}}$. Let $\hat{F}_{B'}(\cdot|\boldsymbol{\theta})$ be the estimated conditional distribution function, implied by a quantile regression with a sample $\mathcal{T}^\prime$ of $B^\prime$ simulations $\mathcal{D} \sim F_{\boldsymbol{\theta}}$.
Assume that the quantile regression estimator is such that
$$\sup_{\tau \in \mathbb{R}}|\hat F_{B^\prime}(\tau|\boldsymbol{\theta}_0)- F(\tau|\boldsymbol{\theta}_0)|\xrightarrow[B^\prime \longrightarrow\infty]{\enskip \mathbb{P} \enskip} 0.$$
\end{Assumption}
Assumption~\ref{assum:quantile_consistent_simple_null} holds, for instance, for quantile regression forests \cite{meinshausen2006quantile}. Next, we show that step \textit{\textbf{(ii)}} in Section~\ref{sec: waldo_methodology} yields a valid hypothesis test as $B^\prime \rightarrow \infty$.
\begin{thm}
\label{thm:valid_tests}
Let $C_{B^\prime} \in \mathbb{R}$ be the
critical value of the test based on a strictly continuous statistic $\tau(\mathcal{D};\boldsymbol{\theta}_0)$ chosen according to step \textit{\textbf{(ii)}}
for a fixed $\alpha \in (0,1)$. If the quantile estimator satisfies Assumption~\ref{assum:quantile_consistent_simple_null},
then,
$$ \mathbb{P}_{\mathcal{D}|\boldsymbol{\theta}_0,C_{B^\prime}}(\tau(\mathcal{D};\boldsymbol{\theta}_0) \geq C_{B^\prime}) \xrightarrow[B^\prime \longrightarrow\infty]{\enskip a.s. \enskip} \alpha,$$
where $\mathbb{P}_{\mathcal{D}|\boldsymbol{\theta}_0,C_{B^\prime}}$ denotes the probability integrated over $\mathcal{D}\sim F_{\boldsymbol{\theta}_0}$ and conditional on the random variable $C_{B^\prime}$.
\end{thm}
If the convergence rate of the quantile regression estimator is known (Assumption \ref{assum:quantile_regression_rate}), Theorem \ref{thm:valid_tests_rate} provides a finite-$B^\prime$ guarantee on how far the Type-I error of the test will be from the nominal level.
\begin{Assumption}[Convergence rate of the quantile regression estimator]
\label{assum:quantile_regression_rate}
Using the notation of Assumption \ref{assum:quantile_consistent_simple_null}, assume that the quantile regression estimator is such that
$$\sup_{\tau \in \mathbb{R}}|\hat F_{B^\prime}(\tau|\boldsymbol{\theta}_0)- F(\tau|\boldsymbol{\theta}_0)|= \mathcal{O}_p\left(\left(\frac{1}{B^\prime}\right)^{r}\right)$$
for some $r>0$.
\end{Assumption}
\begin{thm}
\label{thm:valid_tests_rate}
With the notation and assumptions of Theorem \ref{thm:valid_tests}, and if Assumption~\ref{assum:quantile_regression_rate} also holds,
then,
$$ |\mathbb{P}_{\mathcal{D}|\boldsymbol{\theta}_0,C_{B^\prime}}(\tau(\mathcal{D};\boldsymbol{\theta}_0) \geq C_{B^\prime}) - \alpha| = \mathcal{O}_p\left(\left(\frac{1}{B^\prime}\right)^{r}\right).$$
\end{thm}
Proofs of these results can be found in \cite{dalmasso2021likelihood}.
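A small simulation illustrates the content of Theorem~\ref{thm:valid_tests} (a toy example, not taken from the paper): when the critical value is an empirical $(1-\alpha)$ quantile from $B^\prime$ draws of a statistic with a known tail, the realized Type-I error approaches $\alpha$ as $B^\prime$ grows.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha = 0.05

def realized_type1(B_prime):
    tau = rng.exponential(size=B_prime)   # tau | theta_0 ~ Exp(1), known tail
    C = np.quantile(tau, 1 - alpha)       # estimated critical value C_{B'}
    return np.exp(-C)                     # exact P(tau >= C) for Exp(1)

# Average absolute deviation of the realized level from alpha, over 1000
# repetitions, for a small and a large calibration sample.
gap = {Bp: abs(np.mean([realized_type1(Bp) for _ in range(1_000)]) - alpha)
       for Bp in (100, 10_000)}
```

The gap at $B^\prime = 10{,}000$ is an order of magnitude smaller than at $B^\prime = 100$, in line with the finite-$B^\prime$ rate of Theorem~\ref{thm:valid_tests_rate}.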
\newpage
\section{Algorithm to Construct Confidence Sets via \textsc{Waldo}}
\begin{algorithm}[!h]
\caption{Construct a confidence region for $\boldsymbol{\theta}$ using \textsc{Waldo}}
\algorithmicrequire{ Simulated training sets $\mathcal{T}, \mathcal{T}^{\prime}, \mathcal{T}^{\prime\prime}$; observed sample $D$; prediction algorithm or posterior estimator; number of samples $N_{\hat{p}}$ to draw for Monte Carlo sampling when using posterior estimator; quantile regressor; grid of parameter values $\boldsymbol{\Theta}_{N_{\text{grid}}}$ at which to perform Neyman inversion; desired coverage level $1- \alpha$.}
\begin{algorithmic}[1]
\State \codecomment{Estimate Conditional Mean and Variance or Posterior Distribution}
\If{prediction algorithm}
\State Use $\mathcal{T}$ to learn $\hat{\mathbb{E}}[\boldsymbol{\theta}|\mathcal{D}]$ under squared error loss
\State Use $\mathcal{T}$ again to compute $(\boldsymbol{\theta} - \hat{\mathbb{E}}[\boldsymbol{\theta}|\mathcal{D}])^2$ and learn
$\hat{\mathbb{V}}[\boldsymbol{\theta}|\mathcal{D}] = \hat{\mathbb{E}}[(\boldsymbol{\theta} - \hat{\mathbb{E}}[\boldsymbol{\theta}|\mathcal{D}])^2|\mathcal{D}]$
\ElsIf{posterior estimator}
\State Use $\mathcal{T}$ to learn posterior $\hat{p}(\boldsymbol{\theta}|\mathcal{D})$
\EndIf
\State \codecomment{Estimate Critical Values}
\If{prediction algorithm}
\State Predict $\hat{\mathbb{E}}[\boldsymbol{\theta}|\mathcal{D}]$ and $\hat{\mathbb{V}}[\boldsymbol{\theta}|\mathcal{D}]$ for each $\mathcal{D} \text{ that appears in } \mathcal{T}^{\prime}$
\ElsIf{posterior estimator}
\For{each $\mathcal{D} \text{ that appears in } \mathcal{T}^{\prime}$}
\State Draw $N_{\hat{p}}$ samples from the posterior $\hat{p}(\boldsymbol{\theta}|\mathcal{D})$
\State Approximate
$\hat{\mathbb{E}}[\boldsymbol{\theta}|\mathcal{D}] \approx \frac{1}{N_{\hat{p}}}\sum_i \boldsymbol{\theta}_i$, $\text{ }\hat{\mathbb{V}}[\boldsymbol{\theta}|\mathcal{D}] \approx \frac{1}{(N_{\hat{p}}-1)} \sum_i (\boldsymbol{\theta}_i - \hat{\mathbb{E}}[\boldsymbol{\theta}|\mathcal{D}])(\boldsymbol{\theta}_i - \hat{\mathbb{E}}[\boldsymbol{\theta}|\mathcal{D}])^T$
\EndFor
\EndIf
\State Compute $\hat{\tau}^{\textsc{Waldo}}(\mathcal{D};\boldsymbol{\theta})$ for all $(\mathcal{D},\boldsymbol{\theta}) \in \mathcal{T}^{\prime}$
\State Learn critical values $\hat{C}_{\boldsymbol{\theta},\alpha}$ using the quantile regressor
\State \codecomment{Construct Confidence Region via Neyman Inversion}
\If{prediction algorithm}
\State Predict $\hat{\mathbb{E}}[\boldsymbol{\theta}|D]$ and $\hat{\mathbb{V}}[\boldsymbol{\theta}|D]$
\ElsIf{posterior estimator}
\State Draw $N_{\hat{p}}$ samples from the posterior $\hat{p}(\boldsymbol{\theta}|D)$
\State Approximate $\hat{\mathbb{E}}[\boldsymbol{\theta}|D] \approx \frac{1}{N_{\hat{p}}}\sum_i \boldsymbol{\theta}_i$, $\text{ }\hat{\mathbb{V}}[\boldsymbol{\theta}|D] \approx \frac{1}{(N_{\hat{p}}-1)} \sum_i (\boldsymbol{\theta}_i - \hat{\mathbb{E}}[\boldsymbol{\theta}|D])(\boldsymbol{\theta}_i - \hat{\mathbb{E}}[\boldsymbol{\theta}|D])^T$
\EndIf
\State Initialize $\mathcal{R}(D) \gets \emptyset$
\For{$\boldsymbol{\theta}_0 \in \boldsymbol{\Theta}_{N_{\text{grid}}}$}
\If{$\hat{\tau}^{\textsc{Waldo}}(D;\boldsymbol{\theta}_0) \leq \hat{C}_{\boldsymbol{\theta}_0, \alpha}$}
\State $\mathcal{R}(D) \gets \mathcal{R}(D) \cup \{\boldsymbol{\theta}_0\}$
\EndIf
\EndFor
\State \textbf{return} confidence region $\mathcal{R}(D)$
\end{algorithmic}
\end{algorithm}
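For concreteness, here is a compact end-to-end sketch of the prediction branch of the algorithm on a toy problem ($\theta \sim U(-5,5)$, a single observation $D \sim \mathcal{N}(\theta,1)$; the linear fit and the constant critical value are simplified stand-ins for the learned regressors):

```python
import numpy as np

rng = np.random.default_rng(3)
alpha = 0.05

def simulate(B):                          # toy simulator
    theta = rng.uniform(-5.0, 5.0, size=B)
    return theta, theta + rng.normal(size=B)

# (i) Prediction step: linear model for E[theta|D], constant model for V.
theta_tr, D_tr = simulate(100_000)
slope, intercept = np.polyfit(D_tr, theta_tr, 1)

def cond_mean(d):
    return slope * d + intercept

var_hat = np.mean((theta_tr - cond_mean(D_tr)) ** 2)

# (ii) Critical value: (1 - alpha) quantile of tau on a second simulated set
# (a single constant, standing in for the theta-dependent quantile regression).
theta_cal, D_cal = simulate(10_000)
tau_cal = (cond_mean(D_cal) - theta_cal) ** 2 / var_hat
C = np.quantile(tau_cal, 1 - alpha)

# (iii) Neyman inversion over a parameter grid for one observed sample D.
D_obs = 1.3
grid = np.linspace(-5.0, 5.0, 2001)
region = grid[(cond_mean(D_obs) - grid) ** 2 / var_hat <= C]
```

The accepted grid points form an interval around the conditional mean prediction; in a real application the critical value would vary with $\boldsymbol{\theta}_0$, as in the algorithm above.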
\newpage
\section{Details on the Experiments}
\subsection{Gaussian Mixture Model}
See Section~\ref{sec: mixture_bayesianLFI} for a description of the experiment. Training was done on a MacBook Pro M1Pro (CPU only); it took approximately 15--20 minutes to train the posterior estimator, and an additional $\sim$2 minutes for the quantile neural network to estimate the critical values. Note that the latter step requires computing the conditional mean, the conditional variance and the \textsc{Waldo} statistic over all sample points in $\mathcal{T}'$.
The posterior was sampled multiple times for each ${\mathbf{x}} \in \mathcal{T}'$ to compute $\mathbb{E}(\boldsymbol{\theta}|{\mathbf{x}})$ and $\mathbb{V}(\boldsymbol{\theta}|{\mathbf{x}})$ via Monte Carlo integration; this procedure took a total of $\sim$45 minutes (but could potentially be optimized in the future).
Figure~\ref{fig:coverage_diagnostics_mixture_uniform} shows the output of the diagnostics procedure when using a uniform prior to train the posterior estimator (compare with Figure~\ref{fig:waldo_bayesian_lfi}, right column, which used a Gaussian prior). We achieve correct conditional coverage for \textsc{Waldo} but not for credible regions even though the prior is uniform, due to estimation and approximation errors, which \textsc{Waldo} can correct via recalibration.
\begin{figure}[!h]
{\caption{\small \textbf{Coverage diagnostics for Gaussian mixture model example with uniform prior}. We achieve correct conditional coverage for \textsc{Waldo} (left figure) but not for credible regions (right figure) even though the prior is uniform, due to estimation and approximation errors, which \textsc{Waldo} can correct via recalibration.}
\label{fig:coverage_diagnostics_mixture_uniform}}
{\includegraphics[width=1\textwidth]{figures/muons/coverage_diagnostics_mixture_uniform.png}}
\end{figure}
\subsection{Muon Energy Reconstruction}
See Section~\ref{sec: muons} for a description of the experiment. We had access to 886,716 simulated muons in total; roughly 200,000 muons were used to estimate the critical values, $\sim$24,000 muons to construct the final confidence sets and diagnostics, and the rest was used to estimate the conditional mean and variance via the custom 3D CNN from \citet{kieseler2022calorimetric}. Training the latter CNN took approximately 20 hours for the conditional mean and another 20 hours for the conditional variance, using an NVIDIA V100 GPU at an Azure cloud computing machine. Estimating the critical values via quantile gradient boosted trees took approximately 1 minute.
Figure~\ref{fig:muons68_prediction_comparison} shows a comparison of confidence sets and prediction sets for the full calorimeter data, clearly showing the bias in the prediction sets and the correction applied by \textsc{Waldo}. These results explain the observed patterns in Figure~\ref{fig:muons68}: prediction sets are centered around the point prediction, which is downward biased at high energies because of the low signal-to-noise ratio in this regime.
\begin{figure}[h!]
{\caption{\small \textbf{Confidence and prediction sets for the muon energy reconstruction experiment}. Boxplots of the upper and lower bounds of prediction sets (green) versus \textsc{Waldo} confidence sets (red) for the full calorimeter data, divided into 19 bins of true energy. We clearly see the bias occurring in the prediction sets (especially at high energies) and the correction applied by \textsc{Waldo}.}
\label{fig:muons68_prediction_comparison}}
{\includegraphics[width=0.7\textwidth]{figures/muons/muons68_prediction_comparison.png}}
\end{figure}
\section{Supplementary Material}
\subsection{The ``fluctuating probability density function'' of stochastic thermodynamics}
The theory of stochastic thermodynamics is based on the hermaphroditic notion of a fluctuating probability density function (fpdf) whose functional form results from a {\it proper} probability density function (pdf) in which the state space variable is substituted by a particular realization of the considered process~\cite{S06,S12,SE}. In the present supplemental material we would like to illustrate this construct, combining aspects of states and observables as well as forward and backward dynamics, with the example of Markovian diffusion processes.
Finally we shall specialize to the case of a Brownian oscillator.
If we denote a point in the state space of the considered process as ${\mathbf z}$, the pdf, which is a solution of the Fokker-Planck equation, by $\rho({\mathbf z},t)$ and a solution of the Langevin equation by ${\mathbf Z}({\mathbf z},t)$ with ${\mathbf Z}({\mathbf z},0)={\mathbf z}$, the fpdf $\varphi({\mathbf z},t)$ is defined according to~\cite{S06,S12,SE} as
\begin{equation}
\varphi({\mathbf z},t) = \rho({\mathbf Z}({\mathbf z},t),t)\:,
\ee{fr}
which is a function of the starting point of the considered realization and no longer of its end point. Trivially, at the initial time $t=0$ the fpdf and pdf agree with each other. In the sequel we demonstrate by means of the example of a Brownian oscillator that the fpdf does {\it not} stay normalized, in general, as its very construction does not conform with the transformation rules of probability densities. An exception is Hamiltonian dynamics, as shown below.
\subsection{Brownian harmonic oscillator}
\subsubsection{Langevin equation and the probability density function}
The process of a damped harmonic oscillator of mass $m$ and frequency $\omega$ under the influence of a Gaussian white random force $\xi(t)$ is described by the Langevin equation \cite{WU}
\begin{equation}
\begin{split}
\dot{q}(t) &= \frac{p(t)}{m} \\
\dot{p}(t) &= - \gamma p(t) - m \omega^2 q(t) +\xi(t)\:,
\end{split}
\ee{LE}
where $q(t)$ and $p(t)$ are the position and momentum, respectively, of the oscillator at the time $t$, and $\gamma$ denotes the friction constant. The average of the random force $\xi(t)$ vanishes and its auto-correlation function is given by
\begin{equation}
\langle \xi(t)\xi(s) \rangle = 2 m \gamma k_B T \delta (t-s),
\ee{db}
depending on the temperature $T$ of the bath, which causes the frictional and fluctuating forces. Here, $k_B$ denotes the Boltzmann constant. The vector ${\mathbf Z}({\mathbf z},t) = \big (Q({\mathbf z},t), P({\mathbf z},t) \big ) $ of the solutions $Q({\mathbf z},t)$ and $P({\mathbf z},t)$ of Eq.~\ref{LE} starting at ${\mathbf Z}({\mathbf z},0) = {\mathbf z} =(q,p)$ can be written as
\begin{equation}
{\mathbf Z}({\mathbf z},t) = {\mathbf Z}_h({\mathbf z},t) + {\mathbf Z}_i(t)\:,
\ee{Z}
where
\begin{equation}
\begin{split}
{\mathbf Z}_h({\mathbf z},t) &= e^{R t} {\mathbf z} \\
{\mathbf Z}_i(t) & = \int_0^t ds e^{R(t-s)} \bm\x(s)\:.
\end{split}
\ee{ZhZi}
Here, $R$ is the matrix of the coefficients of $q(t)$ and $p(t)$ on the right hand side of Eq.~\ref{LE}, hence reading
\begin{equation}
R = \left (
\begin{array}{cc}
0& 1/m\\
-m \omega^2 & -\gamma
\end{array}
\right )\:.
\ee{R}
Accordingly, the exponentiated matrix becomes
\begin{equation}
e^{R t} = e^{-\gamma t/2} \left (
\begin{array}{cc}
\cos \omega_\gamma t + \frac{\gamma}{2\omega_\gamma} \sin \omega_\gamma t &\frac{1}{m\omega_\gamma} \sin \omega_\gamma t\\
- \frac{m \omega^2}{\omega_\gamma} \sin \omega_\gamma t & \cos \omega_\gamma t - \frac{\gamma}{2 \omega_\gamma} \sin \omega_\gamma t\:
\end{array}
\right )\:,
\ee{eRt}
where $\omega_\gamma =(\omega^2 - \gamma^2/4)^{1/2}$ is the effective frequency of the oscillator. For the sake of simplicity we assume $\omega >\gamma/2$.
Finally, $\bm\x(t) = (0, \xi(t))$ denotes the vectorial random force.
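The closed form in Eq. (\ref{eRt}) can be checked numerically (an illustrative sketch with arbitrary parameter values; the matrix exponential is computed via the eigendecomposition of $R$, whose eigenvalues $-\gamma/2 \pm i\omega_\gamma$ are distinct in the underdamped regime):

```python
import numpy as np

m, omega, gamma, t = 1.0, 2.0, 0.5, 1.3      # underdamped: omega > gamma/2
og = np.sqrt(omega**2 - gamma**2 / 4.0)       # effective frequency omega_gamma

R = np.array([[0.0, 1.0 / m],
              [-m * omega**2, -gamma]])

# Numerical matrix exponential via eigendecomposition of R.
w, V = np.linalg.eig(R)
expRt_num = (V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V)).real

# Closed form from the text.
c, s = np.cos(og * t), np.sin(og * t)
expRt_closed = np.exp(-gamma * t / 2.0) * np.array(
    [[c + gamma / (2.0 * og) * s, s / (m * og)],
     [-m * omega**2 / og * s, c - gamma / (2.0 * og) * s]])
```

Both routes give the same $2\times 2$ matrix to machine precision.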
In order to construct the fpdf we need the time-dependent pdf $\rho({\mathbf z},t) $.
Due to the linearity of the process defined by the Langevin equation (\ref{LE}), an initially Gaussian pdf stays Gaussian for all later times $t$, taking the form
\begin{equation}
\rho(q,p,t)= \left ( 2 \pi M(t) \right )^{-1/2} e^{-\frac{1}{2} ({\mathbf z} -\langle {\mathbf Z}(t) \rangle )^{tr} \cdot M^{-1}(t) \cdot ({\mathbf z} -\langle {\mathbf Z}(t) \rangle ) }
\ee{gauss}
with the superscript $tr$ indicating the transposed respective vector or matrix. According to the Eqs. (\ref{Z}) and (\ref{ZhZi}), the average $\langle {\mathbf Z}(t) \rangle$ becomes
\begin{equation}
\langle {\mathbf Z}(t) \rangle = e^{R t} \langle {\mathbf z} \rangle\:,
\ee{Zt}
where $\langle {\mathbf z} \rangle$ denotes the average of ${\mathbf z}$ with respect to the initial distribution. The covariance matrix $M(t)$ of the vector ${\mathbf Z}$ obeys the equation of motion~\cite{Ludwig}
\begin{equation}
\dot{M}(t) = M(t) R^{tr} + R M(t) +2 D
\ee{dM}
with $D$ denoting the diffusion matrix reading
\begin{equation}
D = \left (
\begin{array}{cc}
0 & 0\\
0 & m \gamma k_B T
\end{array}
\right )\:,
\ee{D}
yielding for the time-dependent covariance matrix the expression
\begin{equation}
M(t) = e^{R t} \left [M(0) +2 \int_0^t ds e^{-R s} D e^{-R^{tr} s} \right ] e^{R^{tr} t}\:.
\ee{MtR}
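A numerical cross-check of Eq. (\ref{MtR}) against direct integration of Eq. (\ref{dM}) (an illustrative sketch with arbitrary parameter values; the time integral is evaluated with the trapezoidal rule and the ODE with a fine Euler step):

```python
import numpy as np

m, omega, gamma, kBT, T = 1.0, 2.0, 0.5, 1.0, 1.0
R = np.array([[0.0, 1.0 / m], [-m * omega**2, -gamma]])
Dm = np.array([[0.0, 0.0], [0.0, m * gamma * kBT]])   # diffusion matrix D
M0 = np.array([[0.3, 0.1], [0.1, 0.4]])               # initial covariance

w, V = np.linalg.eig(R)
Vinv = np.linalg.inv(V)

def expR(t):
    # e^{Rt} via the eigendecomposition of R.
    return (V @ np.diag(np.exp(w * t)) @ Vinv).real

# Closed form M(T) from Eq. (MtR), trapezoidal rule for the s-integral.
n = 4000
s = np.linspace(0.0, T, n + 1)
f = np.array([expR(-si) @ Dm @ expR(-si).T for si in s])
integral = ((f[:-1] + f[1:]) / 2.0).sum(axis=0) * (T / n)
M_closed = expR(T) @ (M0 + 2.0 * integral) @ expR(T).T

# Direct Euler integration of dM/dt = M R^T + R M + 2 D.
M = M0.copy()
steps = 40_000
dt = T / steps
for _ in range(steps):
    M = M + dt * (M @ R.T + R @ M + 2.0 * Dm)
```

The two covariance matrices agree to within the discretization error, and both remain symmetric as they must.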
\subsection{The ``Fluctuating probability density function''}
Combining the Eqs. (\ref{fr}) and (\ref{gauss}) with (\ref{Z},\ref{ZhZi}) and (\ref{MtR}) one obtains
\begin{equation}
\varphi({\mathbf z},t) = \left (2 \pi M(t) \right )^{-1/2} e^{-\frac{1}{2} (\delta {\mathbf z} - {\mathbf v}(t))^{tr} Q^{-1}(t)(\delta {\mathbf z} - {\mathbf v}(t)) }\:,
\ee{fho}
where $\delta {\mathbf z} = {\mathbf z} - \langle {\mathbf z} \rangle_0$ denotes the fluctuations of the phase space points ${\mathbf z}$ in the initial ensemble specified by $\rho({\mathbf z},0)$, ${\mathbf v}(t) = \int_0^t ds e^{-R s} \bm\x(s)$ and $Q(t) =M(0) + 2 \int_0^t ds e^{-R s} D e^{- R^{tr} s}$.
The integral of the fpdf extended over all initial phase space points then results in
\begin{equation}
\begin{split}
\int d{\mathbf z} \varphi({\mathbf z},t) &= \left [ \frac{\det Q(t) }{\det M(t)} \right ]^{1/2}\\
&= \det e^{-R t} = e^{\gamma t}\:,
\end{split}
\ee{ifpdf}
where we used Eq. (\ref{eRt}). Therefore, the fluctuating probability density does not, in general, stay normalized to unity and hence cannot be interpreted as a probability density. We note that the time dependence of the integral in Eq. (\ref{ifpdf}) is due to the dissipation, but not due to the randomness of the Brownian oscillator dynamics.
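The determinant entering Eq. (\ref{ifpdf}) is fixed by the trace of $R$ alone, $\det e^{Rt} = e^{t\,\mathrm{tr}\,R} = e^{-\gamma t}$, a Liouville-type identity that the following check confirms numerically (arbitrary parameter values):

```python
import numpy as np

m, omega, gamma, t = 1.0, 2.0, 0.5, 1.3
R = np.array([[0.0, 1.0 / m], [-m * omega**2, -gamma]])

# e^{Rt} via eigendecomposition, then its determinant.
w, V = np.linalg.eig(R)
expRt = (V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V)).real

det_num = np.linalg.det(expRt)
det_trace = np.exp(t * np.trace(R))   # = e^{-gamma t}, since tr R = -gamma
```

The growth of the integral in Eq. (\ref{ifpdf}) is therefore governed purely by the dissipation rate $\gamma$, consistent with the remark above that the noise plays no role in this time dependence.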
In the absence of friction, for purely Hamiltonian dynamics, the substitution of a time evolved trajectory in the probability density leads back to the initial pdf as can be seen by writing the time-evolved pdf $\rho({\mathbf z},t)$ in terms of the initial pdf $\rho({\mathbf z},0)$ according to the general transformation rules of pdfs as
\begin{equation}
\rho({\mathbf z},t) =\int d{\mathbf z}_0 \delta({\mathbf z}-{\mathbf Z}({\mathbf z}_0,t)) \rho({\mathbf z}_0,0) = \rho({\mathbf Z}^{-1}({\mathbf z},t), 0)\:,
\ee{rtr0}
with the Hamiltonian trajectory ${\mathbf Z}({\mathbf z},t)$, mapping the initial to the final phase space point and its uniquely defined inverse ${\mathbf Z}^{-1}({\mathbf z},t)$ acting oppositely. Hence, the replacement of ${\mathbf z}$ by the Hamiltonian trajectory ${\mathbf Z}({\mathbf z},t)$ in the time-dependent pdf leads back to the initial pdf according to
\begin{equation}
\varphi({\mathbf z},t) = \rho({\mathbf Z}({\mathbf z},t),t) = \rho({\mathbf z},0)\:.
\ee{frtr0}
\end{document}
\section{Introduction}\label{sec:intro}
A {\em shadow} is an orthogonal projection of a knot onto a plane. The {\em size} of a shadow is its number of crossings. As usual, all shadows under consideration are {\em regular}, that is, they have no triple points and no points of self-tangency. A shadow $S$ {\em resolves into} a knot $K$ if there is an over/under assignment at the crossings of $S$ that gives a diagram of $K$.
This work revolves around the following fundamental problem.
\begin{question}\label{que:que1}
Given a shadow $S$, which knots $K$ satisfy that $S$ resolves into $K$?
\end{question}
To investigate this question we must restrict our attention to {\em reduced} shadows, that is, shadows with no nugatory crossings. As in~\cite{taniyama}, we say that a crossing $x$ in a shadow $S$ is {\em nugatory} if $S{\setminus}\{x\}$ is disconnected. This restriction is crucial: it is easy to exhibit arbitrarily large non-reduced shadows that only resolve into the unknot.
In Figure~\ref{fig:one}(a) we illustrate the shadow of a minimal crossing diagram of a torus knot $T_{2,m+1}$, {and in (b) we show the shadow of a minimal crossing diagram of a twist knot $T_m$.} As proved in~\cite{fertility}, these shadows only resolve into torus knots $T_{2,n}$ and into twist knots, respectively.
\begin{figure}[ht!]
\centering
\vglue 0.3 cm
\scalebox{0.5}{\input{nt12.pdf_t}}
\caption{The shadow of a minimal crossing diagram of a torus knot $T_{2,m+1}$ (left) and of a twist knot $T_m$ (right).}
\label{fig:one}
\end{figure}
{Thus} there are arbitrarily large reduced shadows that only resolve into torus knots $T_{2,n}$ {(including the unknot $T_{2,1}$)}, and there are arbitrarily large reduced shadows that only resolve into twist knots {(including the unknot $T_0$).}
As we {illustrate} in Figure~\ref{fig:two}, {there are} arbitrarily large reduced shadows that only resolve into connected sums of (left-handed or right-handed) trefoil knots, and into the unknot.
\begin{figure}[ht!]
\centering
\vglue 0.3 cm
\scalebox{0.5}{\input{nt16.pdf_t}}
\caption{This shadow only resolves into connected sums of trefoil knots. The labelled points are indicated for future reference.}
\label{fig:two}
\end{figure}
Torus knots $T_{2,n}$ and twist knots are the simplest prime knots and connected sums of trefoil knots are the simplest composite knots. Our main result is that these three knot types lie at the core of Question~\ref{que:que1}, in the sense that every reduced shadow resolves into a knot with ``large'' crossing number in one of these families.
\begin{theorem}\label{thm:main}
For each even integer $m\ge 2$, there is an integer $n$ with the following property. Every reduced shadow with at least $n$ crossings resolves either into a torus knot $T_{2,m+1}$, or into a twist knot $T_m$, or into a connected sum of $m$ trefoil knots.
\end{theorem}
{We remark that throughout this work we do not distinguish between a knot and its mirror image. It is valid to take this license because clearly a shadow resolves into a knot if and only if it resolves into its mirror image.}
\ignore{For each odd integer $n\ge 3$, $T_{2,n}$ is $n_1$ in Rolfsen knot table~\cite{atlas,rolfsen}. For each even integer $n\ge 4$, $T_{n-2}$ is the knot $n_1$, and for each odd $n\ge 5$, $T_{n-2}$ is $n_2$.}
\subsection{Related work}
Besides proving the result mentioned above on the shadows in Figure~\ref{fig:one}, Cantarella, Henrich, Magness, O'Keefe, Perez, Rawdon, and Zimmer investigate in~\cite{fertility} several problems related to Question~\ref{que:que1}, including an exhaustive analysis of shadows of minimal crossing diagrams of knots with crossing number at most $10$.
In~\cite{hanaki1}, Hanaki investigates the following related question: given a shadow $S$, what can be said about invariants of a knot that projects to $S$? Hanaki's work illustrates very well the difficulty of Question~\ref{que:que1}, with a running example of a shadow with $9$ crossings for which it is not easy to determine whether or not it resolves into the torus knot $T_{2,7}$.
In a seminal paper, Taniyama~\cite{taniyama} proved that every nontrivial reduced shadow resolves into a trefoil knot, and characterized which shadows resolve into a figure-eight knot, or into a torus knot $T_{2,5}$, or into a twist knot $T_3$ ($5_2$ in Rolfsen's table).
In~\cite{taniyama2}, Taniyama proved the following result closely related to Theorem~\ref{thm:main}. For each even integer $m\ge 2$, there is an integer $n$ such that the following holds. If $S$ is a $2$-component link shadow, in which the projections of the components cross each other at least $n$ times, then $S$ resolves into a torus link $T_{2,m}$. The techniques and ideas in Taniyama's proof are the workhorse of the proof of one of our key lemmas.
\section{Proof of Theorem~\ref{thm:main}}\label{sec:proofmain}
{We start with an informal account of the main ideas in the proof of Theorem~\ref{thm:main}. The proof has three main ingredients. Let $m\ge 2$ be an integer. First we identify four kinds of shadows. A shadow that shares certain features with the shadows in Figure~\ref{fig:one}(a) (respectively, (b)) will be called $m$-{\em consistent} (respectively, $m$-{\em reverse}). A shadow that shares certain features with the shadow in Figure~\ref{fig:two} will be called $m$-{\em nontrivial}.}
{
We identify a fourth kind of shadow, inspired by the type of shadow illustrated in Figure~\ref{fig:fir}. A shadow will be called $m$-{\em decomposable} if it can be ``decomposed'' into $m$ shadows, each of which resolves into a trefoil knot.
}
\begin{figure}[ht!]
\centering
\scalebox{0.5}{\input{fir3.pdf_t}}
\caption{A $3$-decomposable shadow.}
\label{fig:fir}
\end{figure}
{The second main ingredient in the proof is that each of these four kinds of shadows resolves into a knot of one of the types in the statement of Theorem~\ref{thm:main}. More precisely, an $m$-consistent shadow resolves into a torus knot $T_{2,m+1}$, an $m$-reverse shadow resolves into a twist knot $T_m$, and an $m$-nontrivial or $m$-decomposable shadow resolves into a connected sum of $m$ trefoil knots.
}
{The third and final ingredient in the proof is the Structure Lemma: for each fixed even integer $m\ge 2$, every sufficiently large shadow is either $m$-consistent, or $m$-reverse, or $m$-nontrivial, or $m$-decomposable. In view of the previous paragraph, this completes the proof of Theorem~\ref{thm:main}.
}
\ignore{consists of showing that every sufficiently large reduced shadow $S$ either (i) ``resembles'' one of the shadows in Figures~\ref{fig:one} and~\ref{fig:two}, and so it resolves into a torus knot $T_{2,m+1}$, or into a twist knot $T_m$, or into a connected sum of trefoil knots; or (ii) can be ``decomposed'' into $m$ shadows, each of which resolves into a trefoil knot, and so $S$ resolves into a connected sum of trefoil knots.}
\subsection{{Subarcs and subshadows.}}
{Before we proceed to formally identify the four types of shadows mentioned in the previous subsection, we discuss the notions of a subarc and of a subshadow of a shadow.}
{We refer the reader to Figure~\ref{fig:Notions1}. Let $S$ be a shadow with a pre-assigned traversal direction. An {\em open subarc} of $S$ is obtained by taking two distinct points $x,y\in S$ that are not crossing points, and traversing $S$ from $x$ to $y$, following this direction.}
{Suppose now that $x$ is a crossing point. If we traverse $S$ starting at $x$ following its direction, until we come back to $x$, we obtain a {\em closed subarc} of $S$. If the closed subarc is a simple closed curve, then it is a {\em loop}, and $x$ is its {\em root}.}
{Note that for each crossing $x$ of $S$ there are two closed subarcs $S_1,S_2$ that start and end at $x$. Note also that $S_1$ and $S_2$ are shadows in their own right. We say that $S_1$ and $S_2$ are the {\em subshadows} of $S$ {\em based at $x$}, and write $S=S_1{\oplus_{x}} S_2$.}
\begin{figure}[ht!]
\centering
\scalebox{0.8}{\input{Not6.pdf_t}}
\caption{In (a) we have a shadow $S$. In (b) we show two closed subarcs $L$ (dotted) and $M$ (solid) of $S$, where $L$ is a loop with root $x$. {Here $L$ and $M$ are the subshadows of $S$ based at $x$, and so $S=L{\oplus_{x}}M$.} In (c) we show an open nontrivial subarc of $S$.}
\label{fig:Notions1}
\end{figure}
We remark that we assume that every shadow under consideration comes with a pre-assigned traversal direction. As illustrated in Figure~\ref{fig:Notions1}, an (open or closed) subarc of a shadow naturally inherits the traversal orientation of the shadow.
\subsection{{Two kinds of shadows inspired by Figure~\ref{fig:one}}}
{We start by identifying the feature that we capture from the shadow in Figure~\ref{fig:one}(a).}
\begin{definition}
A shadow $S$ is $m$-{\em consistent} if it has a crossing $0$ such that the subshadows $S_1,S_2$ based at $0$ cross each other at points $1,2,\ldots,m$, and as we traverse each of $S_1$ and $S_2$ starting at $0$, we encounter these crossings precisely in this order.
\end{definition}
\begin{lemma}\label{lem:third}
For each even integer $m\ge 2$, every $m$-consistent shadow resolves into a torus knot $T_{2,m+1}$.
\end{lemma}
We defer the proof of this lemma to Section~\ref{sec:thirdfourth}. We remark that in this definition it is not required that $S_1$ and $S_2$ cross each other {\em only} at these $m$ points. They may cross each other arbitrarily many times, but as long as there exist $m$ crossing points with the required property, then $S$ is $m$-consistent. A similar remark applies to the following definition, motivated by the shadow of the twist knot in Figure~\ref{fig:one}(b).
\begin{definition}
A shadow $S$ is $m$-{\em reverse} if it has a crossing $0$ such that the subshadows $S_1,S_2$ based at $0$ cross each other at points $1,2,\ldots,m$, and as we traverse $S_1$ starting at $0$ we encounter these crossings in this order, but as we traverse $S_2$ starting at $0$ we encounter them in the reverse order $m,\ldots,2,1$.
\end{definition}
\begin{lemma}\label{lem:fourth}
For each even integer $m\ge 2$, every $m$-reverse shadow resolves into a twist knot $T_m$.
\end{lemma}
We also defer the proof of this lemma to Section~\ref{sec:thirdfourth}.
\subsection{{A kind of shadow inspired by Figure~\ref{fig:two}}}
{The shadow in Figure~\ref{fig:two} is the concatenation of $m$ open subarcs: the open subarcs that start at $p_i$ and end at $p_{i+1}$, for $i=1,\ldots,m-1$, and the one that starts at $p_m$ and ends at $p_1$.}
{Each of these $m$ open subarcs has the following property, illustrated in Figure~\ref{fig:Notions1}(c): it can be written as a concatenation $\alpha L \beta$, where $L$ is a loop and $\alpha\cup\beta$ crosses $L$ at least twice. We say that an open subarc with this property is {\em nontrivial}.}
\begin{definition}
A shadow is $m$-{\em nontrivial} if it is the concatenation of $m$ nontrivial open subarcs.
\end{definition}
\begin{lemma}\label{lem:first}
For each integer $m\ge 1$, every $m$-nontrivial shadow resolves into a connected sum of $m$ trefoil knots.
\end{lemma}
\begin{proof}
In~\cite[Theorem 1]{taniyama} it is proved that every reduced shadow that is not a simple closed curve resolves into a trefoil knot. Using the same techniques, it is easily shown that if $A$ is a nontrivial open subarc of a shadow $S$, then $A$ resolves into a $1$-tangle whose closure is a trefoil knot. From this it follows that if $S$ is an $m$-nontrivial shadow, then $S$ resolves into a connected sum of $m$ trefoil knots.
\end{proof}
\subsection{{A kind of shadow inspired by Figure~\ref{fig:fir}}}
{To formally identify the fourth kind of shadow that plays a major role in the proof of Theorem~\ref{thm:main}, we start with an observation. We refer the reader back to Figure~\ref{fig:fir} for an illustration. Let $1$ be a crossing of a shadow $S$, and let $S_1,S_2'$ be the subshadows of $S$ based at $1$. That is, $S=S_1{\oplus_{1}} S_2'$. Suppose now that $S_2'$ (seen as a shadow on its own) has a crossing $2$, and let $S_2,S_3$ be the subshadows of $S_2'$ based at $2$, so that $S_2'=S_2{\oplus_{2}} S_3$.}
{Thus $S=S_1{\oplus_{1}} (S_2{\oplus_{2}}S_3)$. If we now go back to $S$, and consider the crossing $2$, we find that $S=S_1'{\oplus_{2}} S_3$, where $S_1'$ is precisely $S_1{\oplus_{1}}S_2$. Thus we can unambiguously write $S=S_1{\oplus_{1}}S_2{\oplus_{2}}S_3$, as $S_1{\oplus_{1}}(S_2{\oplus_{2}}S_3) = (S_1{\oplus_{1}}S_2){\oplus_{2}}S_3$.
}
{An iterative application of this observation yields that if $1,\ldots,m-1$ are crossings of a shadow $S$, then there exist shadows $S_1,\ldots,S_m$ such that $S=S_1{\oplus_{1}} \cdots {\oplus_{m-1}} S_m$. We say that $S$ {\em decomposes} into the shadows $S_1,\ldots,S_m$.
}
\begin{definition}
{Let $m\ge 2$ be an integer. A shadow is $m$-{\em decomposable} if it decomposes into $m$ shadows, each of which resolves into a trefoil knot.}
\end{definition}
\ignore{
\begin{definition}
For each integer $m\ge 2$, a shadow $S$ is $m$-{\em decomposable} if it has a crossing point $x$ such that one of the subshadows of $S$ based at $x$ resolves into a trefoil knot, and the other subshadow is $(m-1)$-decomposable.
\end{definition}
}
\begin{lemma}\label{lem:second}
{For each integer $m\ge 2$}, every $m$-decomposable shadow resolves into a connected sum of $m$ trefoil knots.
\end{lemma}
As we will see below, Lemma~\ref{lem:second} follows easily from the next remark.
\begin{observation}\label{obs:seci}
{Suppose that $S=S_1\oplus_{1} S_2$, and that $S_1$ resolves into a trefoil knot, and that $S_2$ resolves into a knot $K$.} Then $S$ resolves into a connected sum of $K$ with a trefoil knot.
\end{observation}
\begin{proof}As shown in Figure~\ref{fig:nef}(b), we obtain a resolution of $S$ by combining the resolution of $S_1$ into a trefoil knot $T$ and the resolution of $S_2$ into a knot $K$, prescribing that each crossing between $T$ and $K$ is an overpass for the strand in $T$.
In this resolution $K'$ of $S$, the crossing $1$ is nugatory. As illustrated in (c), twisting $T$ around $1$ shows that $K'$ is a connected sum of $K$ with a trefoil knot.
\end{proof}
\begin{figure}[ht!]
\centering
\scalebox{0.5}{\input{nef9.pdf_t}}
\caption{Illustration of the proof of Observation~\ref{obs:seci}.}
\label{fig:nef}
\end{figure}
\begin{proof}[Proof of Lemma~\ref{lem:second}]
The lemma follows immediately from the definition of an $m$-decomposable shadow and an inductive application of Observation~\ref{obs:seci}.
\end{proof}
\ignore{
\begin{figure}[ht!]
\centering
\scalebox{0.68}{\input{bt4.pdf_t}}
\caption{A shadow of a torus knot $T_{2,m+1}$ (left) and a shadow of a twist knot $T_m$ (right).}
\label{fig:overc}
\end{figure}
}
\subsection{The Structure Lemma and proof of Theorem~\ref{thm:main}}
The final ingredient for the proof of Theorem~\ref{thm:main} is the following result, whose proof is given in Section~\ref{sec:structure}.
\begin{lemma}\label{lem:structure}
For each even integer $m\ge 2$, there is an integer $n$ with the following property. Every reduced shadow with at least $n$ crossings is either $m$-nontrivial, or $m$-decomposable, or $m$-consistent, or $m$-reverse.
\end{lemma}
\begin{proof}[Proof of Theorem~\ref{thm:main}]
It follows from Lemmas~\ref{lem:third}, \ref{lem:fourth}, \ref{lem:first}, \ref{lem:second}, and~\ref{lem:structure}.
\end{proof}
\section{Proofs of Lemmas~\ref{lem:third} and~\ref{lem:fourth}}\label{sec:thirdfourth}
\ignore{
Before proceeding to the proofs of the lemmas, we give an informal overview of the main ideas behind the proof of Lemma~\ref{lem:third}; the proof of Lemma~\ref{lem:fourth} follows a very similar strategy.
We refer the reader to Figure~\ref{fig:101}(a) below. If $S$ is an $m$-consistent shadow, then using standard arguments we may assume that the layout of $S$ is as illustrated in that figure. That is, (i) with the exception of a crossing $0$, all crossings of $S$ are inside a disk $\Delta$ in the $xy$-plane; and (ii) the part $U$ of $S$ inside $\Delta$ is a {\em tangle shadow} (formal definitions will be given shortly) with the property that the arcs $L$ and $R$ cross each other at points $1,\ldots,m$, and as we traverse the arc $L$ from bottom to top, and also as we traverse the arc $R$ from bottom to top, we encounter these crossings in precisely this order.
The workhorse behind the proof of Lemma~\ref{lem:third} is the following: such a tangle shadow $U$ is a projection of a tangle isotopic to the tangle shown in Figure~\ref{fig:wh1}(a). For the final step in the proof of the lemma, we observe that the part of $S$ outside $\Delta$ is a projection of the piecewise linear strings outside the cylinder in Figure~\ref{fig:101}(b). Combining these arguments, it follows that $S$ resolves into a knot $K$ that is isotopic to the knot illustrated in Figure~\ref{fig:wh2}, which is clearly a torus knot $T_{2,m+1}$.
}
In the proofs of Lemmas~\ref{lem:third} and~\ref{lem:fourth} we make essential use of tangles. We adopt the notion that a {\em tangle} is the disjoint union of two {\em strings} (homeomorphic images of $[0,1]$) in the cylinder $\Delta\times[0,3]$, where $\Delta$ is the disk in the $xy$-plane of radius $\sqrt{2}$ centered at the origin. Admittedly, this choice of host cylinder for tangles may seem unnatural, but it will be very convenient for illustration purposes.
\begin{figure}[H]
\centering
\scalebox{0.32}{\input{yau2.pdf_t}}
\caption{In (a) we illustrate a tangle of Type I, and in (b) a tangle of Type II. Both tangles have the braid diagram in (c).}
\label{fig:wh1}
\end{figure}
All tangles we consider are $z$-{\em monotone}, that is, for each $c\in[0,3]$ the plane $z=c$ intersects each string in exactly one point. Moreover, we only consider two particular types of tangles, illustrated in Figure~\ref{fig:wh1}. A tangle is {\em of Type I} if it consists of a string $\lambda$ with endpoints $(-1,-1,0)$ and $(-1,1,3)$, and a string $\rho$ with endpoints $(1,-1,0)$ and $(1,1,3)$, and it is {\em of Type II} if it consists of a string $\lambda$ with endpoints $(-1,-1,0)$ and $(-1,1,3)$, and a string $\rho$ with endpoints $(1,1,0)$ and $(1,-1,3)$.
\ignore{Two tangles $T_1,T_2$ are {\em equivalent} if there is an isotopic deformation of the cylinder, fixed on its boundary, which deforms $T_1$ into $T_2$. We note that, evidently, a tangle of Type I cannot be equivalent to a tangle of Type II.}
The {\em shadow} $U$ of a tangle $T$ is its projection onto the $xy$-plane, without over/under information at the crossings. Thus, regardless of whether $T$ is of Type I or II, $U$ consists of an arc $L$ (the projection of $\lambda$) with endpoints $(-1,-1)$ and $(-1,1)$ and an arc $R$ (the projection of $\rho$) with endpoints $(1,-1)$ and $(1,1)$. We refer the reader to Figure~\ref{fig:101}(a) (the part contained in $\Delta$) for an illustration of a tangle shadow.
{The {\em vertical diagram} of a tangle (or of a knot) is its projection onto the plane $y=2$, with over/under information at each crossing. Since every tangle $T$ we consider is $z$-monotone, the vertical diagram of $T$ is a braid diagram. This is {\em the braid diagram of $T$}. We define the {\em linking index} ${\text{\rm lk}}(T)$ of $T$ as the linking index of its braid diagram~\cite{muras2}. The tangles in Figure~\ref{fig:wh1}(a) and (b) have the same braid diagram, shown in Figure~\ref{fig:wh1}(c), and so the linking index of each of these tangles is $m/2$.}
\medskip
\noindent{\bf Remark. } {In all illustrations of vertical diagrams, the indicated coordinates of points are the $x$- and $z$-coordinates of these points, as they all have $y$-coordinate $2$.}
\vglue 0.15 cm
Our interest in tangles lies in Proposition~\ref{pro:key} below, which is the workhorse behind the proofs of Lemmas~\ref{lem:third} and~\ref{lem:fourth}. We use the following terminology, motivated by the definition of an $m$-consistent or $m$-reverse knot shadow.
Let $m\ge 2$ be an even integer. We say that a tangle shadow $U$ has {\em rank $m$} if there exist crossings $1,\ldots,m$ between the arcs $L$ and $R$ of $U$, such that as we traverse $L$ from $(-1,-1)$ to $(-1,1)$, and also as we traverse $R$ from $(1,-1)$ to $(1,1)$, we encounter these crossings in precisely this order. In Figure~\ref{fig:101}(a) we illustrate a tangle shadow of rank $m$ (inside the disk $\Delta$). On the other hand, if as we traverse $L$ from $(-1,-1)$ to $(-1,1)$ we encounter these crossings in this order, but as we traverse $R$ from $(1,-1)$ to $(1,1)$ we encounter these crossings in the reverse order $m,\ldots,1$, then we say that $U$ has {\em rank $-m$}. In Figure~\ref{fig:c1}(a) we illustrate a tangle shadow of rank $-m$.
\begin{proposition}\label{pro:key}
Let $U$ be a tangle shadow, and let $m\ge 2$ be an even integer. If $U$ has rank $m$ (respectively, $-m$) then $U$ is the shadow of a tangle $T$ of Type I (respectively, of Type II) such that $|{\text{\rm lk}}(T)|=m/2$.
\end{proposition}
We defer the proof of this statement for the moment, and give the proofs of Lemmas~\ref{lem:third} and~\ref{lem:fourth}. We have a final observation before proceeding to the proofs. Let us say that two shadows $S,S'$ are {\em analogous} if $S$ resolves into a knot $K$ if and only if $S'$ resolves into a knot $K'$ isotopic to $K$. The observation we use is that if $x$ is a crossing in a shadow $S$, then using a standard Riemann stereographic projection argument we may turn $S$ into an analogous shadow $S'$ in which $x$ is incident with the unbounded face.
\ignore{
\begin{figure}[ht!]
\centering
\scalebox{0.38}{\input{qr1.pdf_t}}
\caption{In (a) we illustrate a tangle, and in (b) and (c) we illustrate the shadow and diagram of $T$, respectively, which result by projecting $T$ onto the $xy$-plane.}
\label{fig:att}
\end{figure}
}
\begin{proof}[Proof of Lemma~\ref{lem:third}]
Let $S$ be an $m$-consistent shadow on the $xy$-plane, for some even integer $m\ge 2$. We recall that this means that $S$ has a crossing $0$, such that the subshadows $S_1,S_2$ based at $0$ satisfy that there are crossings $1,2,\ldots,m$ between $S_1$ and $S_2$ that we encounter in this order as we traverse each of $S_1$ and $S_2$, starting at $0$. Using the observation mentioned before this proof, we may assume that $0$ is incident with the unbounded face of $S$. Performing a suitable self-homeomorphism of the plane, we may further assume that the layout of $S$ is as shown in Figure~\ref{fig:101}(a). In particular, with the exception of $0$, all crossings of $S$ are contained inside the disk $\Delta$. In this illustration, $S_1$ is the black subshadow and $S_2$ is the gray subshadow.
\begin{figure}[ht!]
\centering
\scalebox{0.26}{\input{pl7a.pdf_t}}\hglue 1 cm
\scalebox{0.27}{\input{ub7.pdf_t}}
\caption{Illustration of the proof of Lemma~\ref{lem:third}.}
\label{fig:101}
\end{figure}
The $m$-consistency of $S$ implies that the part $U$ of $S$ inside $\Delta$ is a tangle shadow of rank $m$. Thus it follows from Proposition~\ref{pro:key} that $U$ is the shadow of a tangle $T$ of Type I such that $|{\text{\rm lk}}(T)|=m/2$. We assume that ${\text{\rm lk}}(T)=m/2$, as the arguments in the case ${\text{\rm lk}}(T)=-m/2$ are totally analogous.
{It is easy to see that there exist strings $\alpha$ and $\beta$ in $3$-space, disjoint from the interior of the cylinder $\Delta\times[0,3]$, such that (i) the endpoints of $\alpha$ (respectively, $\beta$) are $(-1,-1,0)$ and $(-1,1,3)$ (respectively, $(1,-1,0)$ and $(1,1,3)$); (ii) the projection of $\alpha\cup\beta$ onto the $xy$-plane is $S\setminus U$; and (iii) the vertical projections of $\alpha$ and $\beta$ are the strands $a$ and $b$, respectively, shown in Figure~\ref{fig:101}(b).}
{Let $K$ be the knot obtained by adding $\alpha\cup\beta$ to the tangle $T$. Since $U$ is the shadow of $T$, and $S\setminus U$ is the shadow of $\alpha\cup\beta$, it follows that $S$ resolves into $K$. Consider now the vertical diagram $D$ of $K$. The part of $D$ that corresponds to $\alpha$ and $\beta$ are the strands $a$ and $b$; the rest of $D$ is the braid diagram of $T$. Since ${\text{\rm lk}}(T)=m/2$, a sequence of Reidemeister moves of Type II on this braid diagram takes this part of $D$ into the braid diagram shown in Figure~\ref{fig:101}(b). Thus $D$ is equivalent to the diagram in Figure~\ref{fig:101}(b), which is a diagram of a torus knot $T_{2,m+1}$. We conclude that $S$ resolves into a torus knot $T_{2,m+1}$.}
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:fourth}]
Let $S$ be an $m$-reverse shadow, where $m\ge 2$ is an even integer. Similarly as in the proof of Lemma~\ref{lem:third}, we may assume that the layout of $S$ is as shown in Figure~\ref{fig:c1}(a). In this case, since $S$ is $m$-reverse it follows that the part $U$ of $S$ inside $\Delta$ is a tangle shadow of rank $-m$. Thus it follows from Proposition~\ref{pro:key} that $U$ is the shadow of a tangle $T$ of Type II such that $|{\text{\rm lk}}(T)|=m/2$. As in the proof of Lemma~\ref{lem:third} we assume that ${\text{\rm lk}}(T)=m/2$, as the arguments in the case ${\text{\rm lk}}(T)=-m/2$ are totally analogous.
\begin{figure}[ht!]
\centering
\scalebox{0.25}{\input{pl6e.pdf_t}}\hglue 0.8 cm
\scalebox{0.29}{\input{try12.pdf_t}}
\caption{Illustration of the proof of Lemma~\ref{lem:fourth}.}
\label{fig:c1}
\end{figure}
{It is easy to see that there exist strings $\alpha$ and $\beta$ in $3$-space, disjoint from the interior of the cylinder $\Delta\times[0,3]$, such that (i) the endpoints of $\alpha$ (respectively, $\beta$) are $(-1,-1,0)$ and $(-1,1,3)$ (respectively, $(1,1,0)$ and $(1,-1,3)$); (ii) the projection of $\alpha\cup\beta$ onto the $xy$-plane is $S\setminus U$; and (iii) the vertical projections of $\alpha$ and $\beta$ are the strands $a$ and $b$, respectively, shown in Figure~\ref{fig:c1}(b).}
{Let $K$ be the knot obtained by adding $\alpha\cup\beta$ to $T$. Using analogous arguments as in the last part of the proof of Lemma~\ref{lem:third}, it follows that $S$ resolves into a twist knot $T_m$.}
\end{proof}
\begin{proof}[Proof of Proposition~\ref{pro:key}]
We give the proof of the proposition for tangle shadows that have rank $m$, as the proof for tangle shadows with rank $-m$ is totally analogous.
Let $m\ge 2$ be an even integer, and let $U$ be a tangle shadow of rank $m$, as illustrated in Figure~\ref{fig:to1}(a). {Let $A$ and $B$ be the arcs also illustrated in that figure. Note that $S=U\cup A\cup B$ is a shadow of a $2$-component link. }
{It is easy to see that there exist strings $\alpha$ and $\beta$ in $3$-space, disjoint from the interior of the cylinder $\Delta\times[0,3]$, such that (i) the endpoints of $\alpha$ (respectively, $\beta$) are $(-1,-1,0)$ and $(-1,1,3)$ (respectively, $(1,-1,0)$ and $(1,1,3)$); (ii) the projections of $\alpha$ and $\beta$ onto the $xy$-plane are $A$ and $B$, respectively; and (iii) the vertical projections of $\alpha$ and $\beta$ are the strands $a$ and $b$, respectively, shown in Figure~\ref{fig:to1}(b).}
\begin{figure}[ht!]
\centering
\hglue -0.7 cm \scalebox{0.27}{\input{to3.pdf_t}}\hglue 1 cm
\scalebox{0.24}{\input{an3.pdf_t}}
\caption{Illustration of the proof of Proposition~\ref{pro:key}.}
\label{fig:to1}
\end{figure}
{The strategy to prove the proposition is to show that $S$ is the shadow of a link that satisfies certain properties. We start by letting ${\mathscr M}$ be the set of all links $M$ that have $S$ as their shadow, and such that: (i) the part of $M$ that projects to $U$ is a tangle $T$ of Type I; and (ii) the part of $M$ that projects to $A\cup B$ is $\alpha\cup\beta$.}
Using that $U$ has rank $m$, a straightforward adaptation of the techniques and arguments in~\cite[Algorithm 4]{taniyama2} and~\cite[Proof of Theorem 4]{taniyama2} shows that there is a link $M_0\in{\mathscr M}$ whose linking number ${\text{\rm Lk}}(M_0)$ satisfies $|{\text{\rm Lk}}(M_0)|=m/2$. (We use ${\text{\rm Lk}}(M_0)$ to denote the linking number of a link $M_0$, to distinguish it from the linking index ${\text{\rm lk}}(T)$ of a tangle $T$).
Let $T_0$ be the part of $M_0$ that is a tangle of Type I whose shadow is $U$. Let $D$ be the vertical diagram of $M_0$ (see Figure~\ref{fig:to1}(b)). The vertical projections of $\alpha$ and $\beta$ in $D$ do not intersect each other, and they do not intersect the projection of $T_0$ (which is the braid diagram of $T_0$). Thus all the crossings in $D$ are the crossings in the braid diagram of $T_0$. Therefore ${\text{\rm Lk}}(M_0)={\text{\rm lk}}(T_0)$, and so $|{\text{\rm lk}}(T_0)|=m/2$.
\end{proof}
\section{The four relevant types of shadows in terms of Gauss codes}\label{sec:gauss}
In this section we take a first step toward the proof of Lemma~\ref{lem:structure}, finding conditions, in terms of Gauss codes, that guarantee that a shadow is $m$-nontrivial, or $m$-decomposable, or $m$-consistent, or $m$-reverse. This will turn the proof of Lemma~\ref{lem:structure} into a purely combinatorial problem.
We start with a brief review of the notion of a Gauss code of a shadow $S$. Label the crossing points of $S$, and let $p$ be an arbitrary noncrossing point of $S$. The {\em Gauss code} $\omega$ of $S$ {\em starting at $p$} is the word obtained by traversing $S$ and noting each crossing point we encounter. Thus every label occurs exactly twice in $\omega$: if the crossings of $S$ are labelled $1,\ldots,n$, then $\omega$ is a permutation of the multiset $\{1,1,\ldots,n,n\}$.
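As a quick sanity check on this definition (the helper below and its name are ours, not part of the text): a word over crossing labels can be a Gauss code only if every label occurs exactly twice, since a traversal of the shadow visits each crossing twice.

```python
from collections import Counter

def is_gauss_code(word):
    """A word over crossing labels can be a Gauss code only if
    every label occurs exactly twice (each crossing is visited twice)."""
    return all(count == 2 for count in Counter(word).values())

# A 5-crossing example: each of the labels 1..5 appears twice.
print(is_gauss_code([1, 4, 5, 2, 4, 1, 3, 5, 2, 3]))  # True
print(is_gauss_code([1, 2, 1]))                        # False
```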
\ignore{
\begin{figure}[ht!]
\centering
\hglue -0.8 cm \scalebox{0.5}{\input{aga2.pdf_t}}
\caption{The Gauss code of this shadow, starting at $p$, is $1\, 4\, 5\, 2\, 4\, 1\, 3\, 5\, 2\, 3$.}
\label{fig:1f}
\end{figure}
}
We adopt the following standard terminology. A {\em substring} of a word $a_1a_2\cdots a_t$ is a word of the form $a_i a_{i+1}\cdots a_{j-1} a_{j}$, for some $1\le i\le j\le t$. A {\em subword} of $a_1a_2\cdots a_t$ is a word of the form $a_{i_1} a_{i_2} \cdots a_{i_j}$, where $1 \le i_1 < i_2 < \cdots < i_j \le t$. We follow the convention of using $\sigma{\,\bigl|\,}\omega$ to denote that $\sigma$ is a subword of $\omega$.
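For illustration (a hypothetical helper, not from the text), the subword relation $\sigma{\,\bigl|\,}\omega$ is the usual subsequence test, and can be decided by a single greedy scan:

```python
def is_subword(sigma, omega):
    """Test the relation sigma | omega: the symbols of sigma occur in
    omega in order, though not necessarily contiguously."""
    it = iter(omega)
    # each membership test consumes the iterator up to the next match
    return all(symbol in it for symbol in sigma)

print(is_subword([1, 3, 1, 3], [1, 2, 3, 1, 2, 3]))  # True
print(is_subword([3, 3], [1, 2, 3, 1, 2]))           # False (3 occurs once)
```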
{We start by finding a condition for a Gauss code $\omega$ that guarantees that its corresponding shadow $S$ is $m$-nontrivial.} We say that a substring $\alpha$ of $\omega$ is {\em good} if it contains distinct symbols $a_i,a_j,a_k$, such that the following hold:
\smallskip
\begin{description}
\item[(1)] no symbol of $\omega$ has both occurrences in between the two occurrences of $a_i$; and
\item[(2)] $\alpha$ contains both occurrences of each of $a_i, a_j$ and $a_k$, and each of $a_j$ and $a_k$ occurs exactly once in between the occurrences of $a_i$.
\end{description}
\smallskip
Let $A$ be an open subarc of $S$, and let $\alpha$ be the substring that is the part of $\omega$ that corresponds to the traversal of $A$. Suppose that $\alpha$ is good. Then {(1)} implies that $a_i$ is the root of a loop $L$ contained in $A$, and {(2) implies} that $L$ is crossed at least twice in $A$. That is, $A$ is a nontrivial open subarc of $S$.
{We say that a Gauss code is $m$-{\em good} if it is the concatenation of $m$ good substrings. The observation in the previous paragraph implies the following.}
\begin{fact}\label{fac:non}
Let $S$ be a shadow, and let $\omega$ be a Gauss code of $S$. If $\omega$ {is $m$-good}, then $S$ is $m$-nontrivial.
\end{fact}
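As a sketch of how conditions (1) and (2) can be checked mechanically (the function names below are ours, and the substring is passed as an index range into $\omega$):

```python
def positions(word, s):
    return [i for i, x in enumerate(word) if x == s]

def is_good(omega, start, end):
    """Test whether the substring omega[start:end] is good: it must contain
    distinct symbols a_i, a_j, a_k such that (1) no symbol of omega has both
    occurrences between the two occurrences of a_i, and (2) the substring
    contains both occurrences of each of a_i, a_j, a_k, with a_j and a_k
    each occurring exactly once between the occurrences of a_i."""
    # symbols with both of their occurrences inside the substring
    alpha_symbols = [s for s in set(omega)
                     if len(positions(omega, s)) == 2
                     and all(start <= p < end for p in positions(omega, s))]
    for ai in alpha_symbols:
        p, q = positions(omega, ai)
        inner = omega[p + 1:q]
        # condition (1): a_i is the root of a loop
        if any(inner.count(s) == 2 for s in set(inner)):
            continue
        # condition (2): at least two symbols of the substring cross the loop
        crossers = [s for s in alpha_symbols if s != ai and inner.count(s) == 1]
        if len(crossers) >= 2:
            return True
    return False

# The Gauss code of a trefoil shadow is good as a substring of itself...
print(is_good([1, 2, 3, 1, 2, 3], 0, 6))  # True
# ...while a code in which every crossing is nugatory is not.
print(is_good([1, 1, 2, 2], 0, 4))        # False
```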
\ignore{
\begin{figure}[ht!]
\centering
\scalebox{0.75}{\input{al11.pdf_t}}
\caption{The Gauss code $\omega$ of the shadow $S$ on the left hand side, starting at $p$, is $4\,1\,3\,2\,1\,a\,5\,\,11\,\,10\,\,8\,9\,6\,\,11\,\,10\,\,7\,9\,a\,4\,2\,3\,8\,7\,6\,5$. Let $\alpha=4\,1\,3\,2\,1\,$, $\gamma=5\,\,11\,\,10\,\,8\,9\,6\,\,11\,\,10\,\,7\,$ $9$, and $\beta=4\,2\,3\,8\,7\,6\,5$. Then $\omega=\alpha\, a\, \gamma\, a\, \beta$. Let $S_1$ (center) and $S_2$ (right) be the subshadows of $S$ based at $x$. The subword $\mea{\alpha\beta}=4\,1\,3\,2\,1\, 4\,2\,3$ is a Gauss code of $S_1$, and $\mea{\gamma}=11\,\,10\,\,9\,\,11\,\,10\,\,9$ is a Gauss code of $S_2$.}
\label{fig:al}
\end{figure}
}
{To investigate the Gauss codes of $m$-consistent, $m$-reverse, and $m$-nontrivial shadows,} we will use the following terminology. Let $S$ be a shadow, and let $\omega$ be a Gauss code of $S$. Two symbols $a_i, a_j$ of $\omega$ form an {\em alternating pair} if either $a_i a_j a_i a_j{\,\bigl|\,}\omega$ or $a_j a_i a_j a_i{\,\bigl|\,}\omega$. A symbol $a$ of $\omega$ is {\em nugatory} if it corresponds to a nugatory crossing of $S$. It is easy to see that $a$ is a nugatory symbol if and only if $a$ does not form part of an alternating pair. Finally, if $\sigma{\,\bigl|\,}\omega$, then $\mea{\sigma}$ denotes the subword of $\sigma$ obtained by eliminating the symbols that appear only once in $\sigma$.
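The alternating-pair condition can be tested directly; here is a sketch (the function name is ours) that checks both orderings $a_i a_j a_i a_j$ and $a_j a_i a_j a_i$ by ranging over ordered pairs:

```python
def has_alternating_pair(word):
    """Test whether some pair of distinct symbols a, b satisfies
    a b a b | word; ranging over ordered pairs covers both
    a_i a_j a_i a_j and a_j a_i a_j a_i."""
    def is_subword(sigma, omega):
        it = iter(omega)
        return all(s in it for s in sigma)
    symbols = set(word)
    return any(is_subword([a, b, a, b], word)
               for a in symbols for b in symbols if a != b)

print(has_alternating_pair([1, 2, 1, 2]))  # True
print(has_alternating_pair([1, 1, 2, 2]))  # False: both crossings nugatory
```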
We make essential use of the following easy remark.
\begin{observation}\label{obs:dosmil}
Let $S$ be a shadow, let $\omega$ be a Gauss code of $S$, and let $a$ be a crossing of $S$. Write $\omega$ as a concatenation $\alpha\, a\, \gamma\, a \, \beta$. Then $\mea{\alpha\beta}$ is a Gauss code of one subshadow of $S$ based at $a$, and $\mea{\gamma}$ is a Gauss code of the other subshadow.
\end{observation}
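This observation translates into a short procedure (the name \texttt{split\_at} is ours): delete the two occurrences of $a$, split the code into its outer and inner parts, and discard the symbols that now occur only once, which is exactly the $\mea{\cdot}$ operation. The worked example below is our own.

```python
def split_at(omega, a):
    """Write omega = alpha a gamma a beta and return Gauss codes of the
    two subshadows based at a, namely <alpha beta> and <gamma>, where <.>
    discards the symbols occurring only once in the piece."""
    p, q = [i for i, x in enumerate(omega) if x == a]
    keep = lambda w: [s for s in w if w.count(s) == 2]
    return keep(omega[:p] + omega[q + 1:]), keep(omega[p + 1:q])

# A worked example, splitting at the crossing labelled 0:
outer, inner = split_at([4, 1, 3, 2, 1, 0, 5, 11, 10, 8, 9, 6,
                         11, 10, 7, 9, 0, 4, 2, 3, 8, 7, 6, 5], 0)
print(outer)  # [4, 1, 3, 2, 1, 4, 2, 3]
print(inner)  # [11, 10, 9, 11, 10, 9]
```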
We now consider $m$-consistent shadows. Let $S$ be a shadow, and let $\omega$ be a Gauss code of $S$. {We say that $\omega$ is $m$-{\em increasing} if it has symbols $a_0,a_1,\ldots,a_m$ such that $a_0 \, a_1\, \cdots\, a_m\, a_0 \, a_1\, \cdots a_m{\,\bigl|\,}\omega$.} If $\omega$ has such a subword, then it follows from Observation~\ref{obs:dosmil} that the Gauss codes of both subshadows of $S$ based at $a_0$ have $a_1a_2\ldots a_m$ as a subword. In view of the definition of an $m$-consistent shadow, this immediately implies the following.
\begin{fact}\label{fac:con}
Let $S$ be a shadow, and let $\omega$ be a Gauss code of $S$. If $\omega$ {is $m$-increasing}, then $S$ is $m$-consistent.
\end{fact}
{We say that $\omega$ is $m$-{\em decreasing} if it has symbols $a_0,a_1,\ldots,a_m$ such that $a_0 \, a_1\, \cdots$ $ a_m a_0 \, a_m\,\cdots a_1{\,\bigl|\,}\omega$.} If $\omega$ has such a subword, then by Observation~\ref{obs:dosmil} the Gauss code of one of the subshadows of $S$ based at $a_0$ has $a_1a_2\ldots a_m$ as a subword, and the Gauss code of the other subshadow has $a_m \ldots a_2a_1$ as a subword. The following is then an immediate consequence of the definition of an $m$-reverse shadow.
\begin{fact}\label{fac:rev}
Let $S$ be a shadow, and let $\omega$ be a Gauss code of $S$. If $\omega$ {is $m$-decreasing}, then $S$ is $m$-reverse.
\end{fact}
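Given candidate symbols $a_0,\ldots,a_m$, both conditions reduce to subword tests; the sketch below (with our own function names) verifies a proposed witness, which is the easy direction — finding the witnesses is the combinatorial heart of Lemma~\ref{lem:structure}.

```python
def is_subword(sigma, omega):
    it = iter(omega)
    return all(s in it for s in sigma)

def witnesses_increasing(omega, symbols):
    """Do the given symbols a_0, ..., a_m witness that omega is
    m-increasing, i.e. a_0 a_1 ... a_m a_0 a_1 ... a_m | omega?"""
    return is_subword(list(symbols) * 2, omega)

def witnesses_decreasing(omega, symbols):
    """Do the given symbols witness that omega is m-decreasing,
    i.e. a_0 a_1 ... a_m a_0 a_m ... a_1 | omega?"""
    a0, rest = symbols[0], list(symbols[1:])
    return is_subword([a0] + rest + [a0] + rest[::-1], omega)

# The code 0 1 2 0 1 2 (a trefoil shadow) is 2-increasing, while
# 0 1 2 0 2 1 is 2-decreasing but not 2-increasing.
print(witnesses_increasing([0, 1, 2, 0, 1, 2], [0, 1, 2]))  # True
print(witnesses_decreasing([0, 1, 2, 0, 2, 1], [0, 1, 2]))  # True
print(witnesses_increasing([0, 1, 2, 0, 2, 1], [0, 1, 2]))  # False
```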
{Finally, we identify a property of a Gauss code that guarantees $m$-decomposability.} We make use of the following remark. By~\cite[Theorem 3]{ams}, every shadow in which not every crossing is nugatory resolves into a trefoil knot. As we discussed above, $a$ is a nugatory crossing of a shadow $S$ if and only if $a$ does not form part of any alternating pair of symbols in the Gauss code of $S$. Thus we have the following.
\begin{observation}\label{obs:mil}
If a Gauss code $\omega$ of a shadow $S$ has an alternating pair of symbols, then $S$ resolves into a trefoil knot.
\end{observation}
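Whether a Gauss code contains an alternating pair, and hence (by Observation~\ref{obs:mil}) whether the shadow resolves into a trefoil knot, can be read off directly from the code. A minimal sketch, assuming each symbol occurs exactly twice, as in every Gauss code:

```python
def has_alternating_pair(omega):
    """True if two symbols a, b of the Gauss code interleave, i.e.
    occur in the pattern a b a b reading left to right.  A crossing is
    nugatory exactly when its symbol belongs to no such pair."""
    first, second = {}, {}
    for pos, s in enumerate(omega):
        if s in first:
            second[s] = pos
        else:
            first[s] = pos
    for a in first:
        for b in first:
            if a != b and first[a] < first[b] < second[a] < second[b]:
                return True
    return False
```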
{Let $m\ge 2$ be an integer.} A Gauss code $\omega$ is $m$-{\em nice} if it can be written as
\[
\omega=\alpha_1\, a_1 \, \alpha_2\, a_2 \, \cdots \, \alpha_{m-1} \, a_{m-1} \,\, \alpha_{m}\,\beta_{m}\, \, a_{m-1}\, \beta_{m-1}\, \cdots \, a_2\, \beta_2 \, a_1\, \beta_1,
\]
{where $a_1,\ldots,a_{m-1}$ are symbols, and for $i=1,\ldots,m$, the concatenation $\alpha_i\beta_i$ has an alternating pair of symbols.}
\begin{fact}\label{fac:dec}
{Let $S$ be a shadow, let $\omega$ be a Gauss code of $S$, {and let $m\ge 2$ be an integer. If $\omega$ is $m$-nice,} then $S$ is $m$-decomposable.}
\end{fact}
\begin{proof}
{The crossings $a_1,\ldots,a_{m-1}$ induce a decomposition $S_1{\oplus_{a_1}}S_2{\oplus_{a_2}}\cdots S_{m-1}$ ${\oplus_{a_{m-1}}}S_m$ of $S$. An iterative application of Observation~\ref{obs:dosmil} yields that $\mea{\alpha_i\beta_i}$ is a Gauss code of $S_i$, for $i=1,\ldots,m$. Since $\alpha_i\beta_i$ has an alternating pair for each $i=1,\ldots,m$, then $\mea{\alpha_i\beta_i}$ also has an alternating pair for each $i=1,\ldots,m$. Therefore, by Observation~\ref{obs:mil}, $S_i$ resolves into a trefoil knot for each $i=1,\ldots,m$.}
\end{proof}
\section{Proof of Lemma~\ref{lem:structure}}\label{sec:structure}
The following propositions are the workhorses behind the proof of Lemma~\ref{lem:structure}.
\begin{proposition}\label{pro:work4}
Let $\omega$ be a Gauss code of a reduced shadow $S$, and let $m\ge 2$ be an integer. Suppose that $1\,1\,2\,2\,\cdots {9m^5}\, {9m^5} {\,\bigl|\,} \omega$. Then {$\omega$ is either $m$-increasing, or $m$-good, or it has symbols $a_1,\ldots,a_{3m^2}$ such that $a_1 \, \cdots\, a_{3m^2}\, a_{3m^2} \, \cdots\, a_1{\,\bigl|\,}\omega$.}
\end{proposition}
\begin{proposition}\label{pro:work3}
Let $\omega$ be a Gauss code of a reduced shadow $S$, and let $m\ge 2$ be an integer. Suppose that $1\,2\,\cdots\, {3m^2}\, {3m^2}\, \cdots\, 2\,1{\,\bigl|\,}\omega$. {Then $\omega$ is either $m$-decreasing or $m$-nice.}
\end{proposition}
\begin{proof}[Proof of Lemma~\ref{lem:structure}, assuming Propositions~\ref{pro:work4} and~\ref{pro:work3}]
{Let $S$ be a reduced shadow with $n$ crossings, and let $\omega$ be a Gauss code of $S$. We show that if $n$ is at least the $3$-colour Ramsey number $R(m+1,3m^2,9m^5)$, then $\omega$ is either $m$-increasing, or $m$-decreasing, or $m$-good, or $m$-nice. This implies the lemma, using Facts~\ref{fac:non},~\ref{fac:con},~\ref{fac:rev}, and~\ref{fac:dec}.}
{We first note that we may label the symbols of $\omega$ with $1,2,\ldots,n$ so that ($*$) for each $1\le i<j\le n$, the first occurrence of $i$ is before the first occurrence of $j$.}
Let $G$ be the complete graph whose vertices are the symbols $1,2,\ldots,n$. Let $i,j\in\{1,2,\ldots,n\}$, where $i<j$. We assign colour $1$ to the edge $ij$ if the subword of $\omega$ induced by $i$ and $j$ is $ijij$, colour $2$ if this subword is $ijji$, and colour $3$ if this subword is $iijj$. By ($*$), every edge is of one of these three colours.
By Ramsey's theorem, $G$ has either (i) a complete subgraph with vertices $a_0<a_1<\cdots<a_{m}$, all of whose edges are of colour $1$; or (ii) a complete subgraph with vertices $a_1<a_2<\cdots<a_{3m^2}$, all of whose edges are of colour $2$; or (iii) a complete subgraph with vertices $a_1<a_2<\cdots<a_{9m^5}$, all of whose edges are of colour $3$.
{If (i) holds, then $a_0 a_1 \cdots a_m a_0 a_1 \cdots a_m{\,\bigl|\,}\omega$, and so $\omega$ is $m$-increasing. If (ii) holds, then $a_1 a_2\cdots a_{3m^2} a_{3m^2} \cdots a_2 a_1{\,\bigl|\,}\omega$, and so by Proposition~\ref{pro:work3} then $\omega$ is either $m$-decreasing or $m$-nice. Finally, if (iii) holds then $a_1 a_1 a_2 a_2 \cdots a_{9m^5}a_{9m^5}{\,\bigl|\,}\omega$, and so by Proposition~\ref{pro:work4} either $\omega$ is $m$-increasing or $m$-good, or it has symbols $b_1,\ldots,b_{3m^2}$ such that $b_1 \cdots b_{3m^2} b_{3m^2}\cdots b_1{\,\bigl|\,} \omega$. In this latter case, by Proposition~\ref{pro:work3} it follows that $\omega$ is either $m$-decreasing or $m$-nice.}
\end{proof}
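For concreteness, the three-colouring of pairs used in this proof can be computed directly from the code. The following sketch is illustrative only, and assumes a Gauss code labelled so that ($*$) holds:

```python
def pair_colour(omega, i, j):
    """Colour of the edge ij (i occurring first): 1 for the induced
    pattern ijij, 2 for ijji, 3 for iijj.  Assumes each of i, j occurs
    exactly twice in omega, as in any Gauss code."""
    pattern = [s for s in omega if s in (i, j)]
    if pattern == [i, j, i, j]:
        return 1
    if pattern == [i, j, j, i]:
        return 2
    if pattern == [i, i, j, j]:
        return 3
    raise ValueError("labelling does not satisfy (*)")
```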
\begin{proof}[Proof of Proposition~\ref{pro:work4}]
The hypothesis is that $1122\cdots 9m^5 9m^5 {\,\bigl|\,} \omega$. We start by noting that we may assume that ($*$) for each $i=1,\cdots,9m^5$, there is no symbol $b$ in $\omega$ such that $i\, b\, b\, i$ is a subword of $\omega$.
Write $\omega$ as a concatenation $\alpha_1 \alpha_2 \cdots \alpha_{m}$, where for each $i=1,\ldots,m$, $\alpha_i$ has ${(i-1)9m^4+1}$ ${(i-1)9m^4+1}\,\,\, \cdots \,\,\, i(9m^4)\,\,\,i(9m^4)$ as a subword. If $\alpha_i$ is a good substring for every $i=1,\ldots,m$, then $\omega$ is $m$-good, and so we are done.
Thus we may assume that there is an $i\in\{1,2,\ldots,m\}$ such that $\alpha_i$ is not a good substring. To simplify the discussion, we note that $\alpha_i\,\alpha_{i+1}\cdots\,\alpha_m\, \alpha_1\,\cdots\,\alpha_{i-1}$ is also a Gauss code of $S$, and so by relabelling, if necessary, we may assume that $\alpha_1$ is not a good substring.
Recall that $\alpha_1$ contains $1\, 1\, \cdots 9m^4\, 9m^4$ as a subword. We invoke the easy fact that for every symbol $a$ in a Gauss code of a reduced shadow, there must exist two distinct symbols that occur in between the two occurrences of $a$. Thus for each $i=1,\ldots,9m^4$, there exist symbols $b_i,c_i$ such that $ib_ic_ii{\,\bigl|\,}\omega$. Note that ($*$) implies that each of $b_i$ and $c_i$ occurs exactly once in between the two occurrences of $i$.
The hypothesis that $\alpha_1$ is not good implies that, for each $i=1,\ldots,9m^4$, there is at least one $d_i\in\{b_i,c_i\}$ that only occurs once in $\alpha_1$ (namely, in between the two occurrences of $i$). Therefore there are symbols $d_1,d_2,\ldots,d_{9m^4}$ that appear exactly once in $\alpha_1$, and so each of these symbols also appears exactly once in $\alpha_2\cdots \alpha_m$.
The Erd\H{o}s-Szekeres theorem on increasing/decreasing subsequences then implies that there are $a_1,\ldots,a_{3m^2}$ in $\{d_1,\ldots,d_{9m^4}\}$ such that either (i) $a_1a_2 \cdots $ $a_{3m^2}a_1a_2\cdots a_{3m^2}{\,\bigl|\,}\omega$ or (ii) $a_1a_2\cdots a_{3m^2} a_{3m^2} \cdots a_2 a_1{\,\bigl|\,}\omega$. {If (ii) holds then we are done, and if (i) holds then $\omega$ is $(3m^2-1)$-increasing, and so (since $3m^2-1 > m$) it is $m$-increasing.}
\end{proof}
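The Erd\H{o}s-Szekeres step at the end of this proof can be made effective with patience sorting, which finds the length of a longest strictly increasing subsequence in $O(n\log n)$ time. The following sketch is illustrative only; it assumes the symbols are numeric:

```python
import bisect

def lis_length(seq):
    """Length of a longest strictly increasing subsequence
    (patience sorting)."""
    tails = []
    for x in seq:
        k = bisect.bisect_left(tails, x)
        if k == len(tails):
            tails.append(x)
        else:
            tails[k] = x
    return len(tails)

def erdos_szekeres_witness_lengths(seq):
    """Lengths of a longest increasing and a longest decreasing
    subsequence.  By Erdos-Szekeres, for distinct values the product
    of the two lengths is at least len(seq)."""
    return lis_length(seq), lis_length([-x for x in seq])
```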
\begin{proof}[Proof of Proposition~\ref{pro:work3}]
{Since $12\cdots 3m^2 3m^2\cdots 21{\,\bigl|\,}\omega$, we can write $\omega$ as}
\medskip
\hglue -0.45cm \scalebox{0.87}{
${\alpha_{2m} (2m) \alpha_{4m} (4m) \cdots \alpha_{2m^2-2m} (2m^2-2m)\alpha_{2m^2}\beta_{2m^2} (2m^2-2m)\beta_{2m^2-2m} \cdots (4m)\beta_{4m} (2m) \beta_{2m},}$
}
\medskip
\noindent {where the substrings $\alpha_i,\beta_i$ are uniquely determined for $i=2m,4m,\ldots,2m^2-2m$, and we set $\alpha_{2m^2}$ (respectively, $\beta_{2m^2}$) so that $(2m^2-2m+1)(2m^2-2m+2)\cdots 3m^2{\,\bigl|\,} \alpha_{2m^2}$ (respectively, $3m^2\cdots (2m^2-2m+2)(2m^2-2m+1){\,\bigl|\,}\beta_{2m^2}$.}
{If $\alpha_i\beta_i$ has an alternating pair for each $i=2m,4m,\ldots,2m^2$, then this expression of $\omega$ witnesses that $\omega$ is $m$-nice, and so we are done. Thus we may assume that there is an $i\in \{2m,4m,\ldots,2m^2\}$ such that $\alpha_i\beta_i$ has no alternating pair.}
{Let $j:=i-m$, $\sigma:=(j-m+1)(j-m+2)\cdots (j-1)$, and $\tau:=(j+1)(j+2)\cdots (j+m-1)$. Note that $\sigma\,j\,\tau{\,\bigl|\,}\alpha_i$ and $\tau^{-1}\,j\,\sigma^{-1}{\,\bigl|\,}\beta_i$. We show that there is a symbol $b$ such that either (I) $b\sigma j b j\sigma^{-1}{\,\bigl|\,}\omega$; or (II) $bj\,\tau\,b \tau^{-1}\,j{\,\bigl|\,}\omega$. This will complete the proof, as each of (I) and (II) implies that $\omega$ is $m$-decreasing.}
{Since $S$ is reduced, then every symbol of $\omega$, and in particular $j$, forms part of an alternating pair. Thus there is a $b$ such that either $bjbj{\,\bigl|\,}\omega$ or $jbjb{\,\bigl|\,}\omega$. We may assume that the former possibility holds, as in the alternative we may work with $\omega^{-1}$, which is also a Gauss code of $S$.}
{Thus $bjbj{\,\bigl|\,}\omega$. If $\alpha_i$ contains both occurrences of $b$, then $bjb{\,\bigl|\,} \alpha_i$, and so (since $j$ is also in $\beta_i$) $bjbj{\,\bigl|\,}\alpha_i\beta_i$, contradicting that $\alpha_i\beta_i$ has no alternating pair. Thus $\alpha_i$ contains at most one occurrence of $b$. Therefore either (i) the first occurrence of $b$ is to the left of $\sigma$; or (ii) the second occurrence of $b$ is to the right of $\tau$. If (i) holds then we are done, since then it follows that (I) holds. Suppose finally that (ii) holds, and that (i) does not hold (this last assumption implies that $b$ occurs in $\sigma$). The second occurrence of $b$ must then be to the left of $\tau^{-1}$, as otherwise it would necessarily be in $\tau^{-1}$, implying that $bjbj{\,\bigl|\,} \alpha_i\beta_i$, again contradicting that $\alpha_i\beta_i$ has no alternating pair. Thus the second occurrence of $b$ is in between $\tau$ and $\tau^{-1}$, and so (II) holds.
}
\end{proof}
\section{Open questions}
{For each reduced shadow $S$, let $f(S)$ be the number of non-isotopic knots into which $S$ resolves. The shadows $S_m$ in Figure~\ref{fig:one}(a) have $m+1$ crossings, and it is proved in~\cite{fertility} that $S_m$ resolves into a knot $K$ if and only if $K$ is a torus knot $T_{2,n}$ with crossing number at most $m+1$. Taking into account that $T_{2,n}$ is not isotopic to its mirror image $T_{2,-n}$ if $|n|>1$, it follows that $f(S_m)$ is precisely the number of crossings in $S_m$, namely $m+1$.}
{Thus for each odd integer $n\ge 3$, there is a reduced shadow $S$ with $n$ crossings such that $f(S)=n$. Is it true that for each $n\ge 3$, every reduced shadow $S$ with $n$ crossings satisfies that $f(S)\ge n$? Here is an even easier question: is there a universal constant $c>0$ such that every reduced shadow $S$ with $n$ crossings satisfies that $f(S) > c\cdot n$?}
{What about a ``typical'' reduced shadow? Pick a shadow $S$ randomly among all reduced shadows with $n$ crossings. What is the expected value of $f(S)$? Is this number exponential, or at least superpolynomial, in $n$? The strong techniques recently developed by Chapman in~\cite{chapman} may shed light on this question.}
\ignore{
\section{Concluding remarks and open questions}\label{sec:concluding}
In~\cite{fertility}, Cantarella, Henrich, Magness, O'Keefe, Perez, Rawdon, and Zimmer proved that torus knots $T_{2,p}$ and twist knots have few {\em descendants}. In our context, they show that a shadow of a minimal crossing diagram of a torus knot $T_{2,p}$ only resolves into $(p+1)/2$ knots, and a shadow of a minimal crossing diagram of a twist knot $T_p$ (say with $p$ even) only resolves into $(p+2)/2+1$ knots. Thus there exist reduced shadows with $p$ crossings that only resolve into (roughly) $p/2$ distinct knots. Is it true that {\em every} reduced shadow with $p$ crossings resolves into at least $p/2$ distinct knots?
\ignore{
\begin{conjecture}\label{conj:conj2}
There are positive constants $c,\alpha$ such that every reduced shadow with $n$ crossings resolves into at least $cn^\alpha$ distinct knot types.
\end{conjecture}
This last conjecture is not just an effort to pose a weaker, perhaps more approachable conjecture, but instead it is motivated by the following discussion. In our proof we use Ramsey's theory, and so an explicit upper bound for $n$ would be at least exponential in $m$. To keep the discussion simple, in a best case scenario our techniques would imply that every reduced shadow with at least $n$ crossings resolves into at least $\log{n}$ distinct knots. This is far away from what we propose in the previous conjecture.
We note that a polynomial upper bound for $n$ in terms of $m$ in Theorem~\ref{thm:main} would not only be in our opinion very interesting by itself, but it would also imply Conjecture~\ref{conj:conj2}.
}
Can anything interesting be said about the fertility of a typical (random) shadow? The strong results and techniques recently developed by Chapman~\cite{chapman} may shed light on, or maybe even outright solve, the following.
\begin{question}\label{que:que3}
Pick a shadow $S$ at random among all reduced shadows with $n$ crossings. What is the expected number of distinct knots into which $S$ resolves? Is it easy to show that this expected number is at least a linear function on $n$? Is it true that this number is exponential, or at least superpolynomial, in $n$?
\end{question}
}
\ignore{
Finally, diving further into this direction, we bring up the recent paper~\cite{uni}. In this work, Even-Zohar, Hass, Linial, and Nowik investigate infinite families of shadows that are {\em universal}, in the sense that they yield diagrams for all knots. Restricting our attention to single shadows, rather than to infinite families of shadows, let us say that a shadow $S$ is {\em $m$-universal}, for some positive integer $m$, if it resolves into all knots with crossing number at most $m$.
We note the connection of this notion with the previously cited work by Cantarella et al.~\cite{fertility}. In that paper, it is reported that there is a shadow with $7$ crossings that resolves into {\em all} knots with crossing number $6$ or less. It is also mentioned that no shadow with $n=8,9$, or $10$ crossings resolves into all knots with crossing number $n-1$ or less, and they ask if the same is true for every $n\ge 8$.
Most likely the answer to this last question is no, but it does not seem easy (to say the least) to show this. Going back to the notion of an $m$-universal shadow, we have a final question.
Our guess is that it is reasonably easy to show, using Chapman's techniques, that this expected number is at least polynomial in $n$.
}
\ignore{
We can come up with many open questions around the main theme of this paper. We find the following basic question particularly interesting. Again, we feel that there should be a reasonably easy way to settle this in the affirmative, but our efforts so far have failed.
\begin{question}
Is it true that if a shadow $S$ resolves into a torus knot $T_{2,n}$ then it also resolves into a torus knot $T_{2,n,-2}$?
\end{question}
\ignore{We finally discuss a possible extension of Theorem~\ref{thm:main} for more restricted kinds of shadows. As we noted in its statement, Theorem~\ref{thm:main} is best possible, in the sense that torus knots $T_{2,n}$, twist knots, and connected sums of trefoil knots are the only knots that are guaranteed to lie above every reduced shadow. On the other hand, the shadows that show this, namely the ones in Figure~\ref{fig:tangles2}, are ``long and thin".
We purposefully leave the glaring feature ``long and thin" in this informal fashion, as one can think of more than one parameter that formally captures this kind of structure. One could for instance say that these shadows have small path-width~\cite{rspath}, or that they have small width in the sense of~\cite{uni}.
\begin{question}
Let $k$ be a positive (large enough, if needed) integer. Let $S$ be a shadow with path-width/width at least $k$. Can we characterize into which knots $S$ necessarily resolves?
\end{question}
}
}
\begin{bibdiv}
\begin{biblist}
\bib{fertility}{article}{
author={Cantarella, Jason},
author={Henrich, Allison},
author={Magness, Elsa},
author={O'Keefe, Oliver},
author={Perez, Kayla},
author={Rawdon, Eric},
author={Zimmer, Briana},
title={Knot fertility and lineage},
journal={J. Knot Theory Ramifications},
volume={26},
date={2017},
number={13},
pages={1750093, 20},
}
\bib{chapman}{article}{
author={Chapman, Harrison},
title={Asymptotic laws for random knot diagrams},
journal={J. Phys. A},
volume={50},
date={2017},
number={22},
pages={225001, 32},
}
\bib{hanaki1}{article}{
author={Hanaki, Ryo},
title={On scannable properties of the original knot from a knot shadow},
journal={Topology Appl.},
volume={194},
date={2015},
pages={296--305},
}
\bib{ams}{misc}{
author={Medina, Carolina},
author={Ram\'\i rez-Alfons\'{\i}n, Jorge},
author={Salazar, Gelasio},
title={On the number of unknot diagrams},
note={\url{https://arxiv.org/abs/1710.06470}},
}
\bib{muras2}{book}{
author={Murasugi, Kunio},
author={Kurpita, Bohdan I.},
title={A study of braids},
series={Mathematics and its Applications},
volume={484},
publisher={Kluwer Academic Publishers, Dordrecht},
date={1999},
pages={x+272},
}
\bib{taniyama}{article}{
author={Taniyama, Kouki},
title={A partial order of knots},
journal={Tokyo J. Math.},
volume={12},
date={1989},
number={1},
pages={205--229},
}
\bib{taniyama2}{article}{
author={Taniyama, Kouki},
title={A partial order of links},
journal={Tokyo J. Math.},
volume={12},
date={1989},
number={2},
pages={475--484},
}
\end{biblist}
\end{bibdiv}
\end{document}
\label{sec:intro}
The advent in recent years of extremely deep imaging surveys, designed
to probe distant galaxies through multiwavelength imaging extending
deep into the infrared \citep[e.g. GOODS,][]{2004ApJ...600L..93G}, has
made possible studies of extremely faint, and often extremely red,
sources. In order to target high redshift galaxies, colour cuts have
been developed to exclude the majority of Galactic stars - including
the low metallicity dwarfs expected in the halo of our own galaxy.
Nonetheless, it has become clear that an unexpected number of
extremely red stars remain at faint magnitudes. The spectroscopic
component of high-redshift galaxy searches have confirmed the nature
of such stars, in several cases obtaining deep spectroscopy as well as
photometry \citep[e.g.][]{2004ApJ...607..704S,2005A&A...434...53V},
but the overall population has not been systematically characterised.
Low mass stars of class M or later comprise more than 80 per cent of the
stellar population of our galaxy. Faint stars of late-M, L and T
classes probe the tail of the stellar mass function, bridging the
divide between the main sequence and low mass brown dwarfs. Such stars
have unusual colours, arising from the presence of deep molecular
absorption bands in their cool atmospheres.
The faintness of these stars has made their investigation challenging,
particularly with increasing distance from our sun. Efforts to
determine the properties of such stars in the halo and thick disk have
concentrated on nearby stars with high proper motions indicative of a
halo origin \citep[e.g.][]{1999AJ....117..508G}. While deep surveys of
the M star population in order to identify distant examples have been
undertaken with WFPC2 on the {\em Hubble Space Telescope} ({\em HST}),
reaching $I_{AB}$=23.2 \citep{2001ApJ...555..393Z}, these have been
limited to wavebands shortwards of 9000\AA\ where the coolest stars
have very little flux.
Meanwhile, the more local population - late M stars at disk
metallicities within 1.5\,kpc of the sun - have been the subject of
detailed study in recent years. The Sloan Digital Sky Survey
\citep[SDSS,][]{2000AJ....120.1579Y} is relatively shallow in the
optical bands, reaching an $I$ band limit of $i'\approx20$. However,
the addition of a redder $z'$ band in the optical to similar depths has enabled the
photometric selection of cooler, class L and T stars as
sources with large flux decrements between adjacent passbands, caused
by sharply-defined, strong absorption and transmission features. The
$z'$-band imaging also allows the characterisation of late M stars by
their $i'-z'$ colour. Such surveys have enabled templates based on
thousands of local (mainly disk) dwarf stars to be constructed
\citep{2002AJ....123.3409H,2007AJ....133..531B}.
At the same time, advancements in infrared detector technology have
made it increasingly easy to study these faint, cool stars at
wavelengths longwards of the optical. Their near and mid-infrared
bands are dominated by molecular absorption, leading to extreme
colours in infrared bands \citep{2006ApJ...651..502P}. Again, such
studies have been limited to local stars, in order to attain the
maximum possible signal to noise.
However, given the lower typical metallicity of stars in the halo when
compared to the galactic disc, the characteristics of local
disk-dominated M star samples may well be unrepresentative of fainter
sources at larger distances and the colour selection methods applied
locally might be expected to detect nothing at halo distances.
In this paper we present an analysis of faint low mass stars, selected
for their extreme colours between two adjacent bands in high spatial
resolution, deep optical imaging. Such a colour selection is expected
to select either relatively-nearby late M stars at sub-solar metallicities
or near-solar metallicity early M stars at the large distances more
normally associated with the Galactic halo, if such a population
exists. These two alternatives cannot be distinguished on the basis of
a single red colour, but can be explored using spectroscopy and photometry
probing well into the infrared.
In section \ref{sec:photometry} we present the data used in this
analysis and we discuss the selection of stellar candidates in section
\ref{sec:phot-sel}. In sections \ref{sec:halo-pop} and
\ref{sec:spitzer-properties} we discuss the optical and infrared
properties of our sample, and in section \ref{sec:cooler} we examine
the photometry of cooler stars in the same fields. In section
\ref{sec:spect} we discuss existing spectroscopy of such faint stars,
including a spectroscopic sample presented here for the first time,
and spectroscopic abundance indicators that may hint at a typical
metallicity for this population. Finally, in section
\ref{sec:halo-pop} we discuss the interpretation of the overall
properties of the faint, cool star population including their
metallicity and distance distribution.
All magnitudes in this paper (optical and
infrared) are quoted in the AB system \citep{1983ApJ...266..713O}.
\section{Datasets used in this Analysis}
\label{sec:photometry}
\subsection{Filters and Photometric Systems}
We choose to use the AB system \citep{1983ApJ...266..713O} when
expressing magnitudes in this analysis. The reason for this is twofold.
Firstly the AB system is widely used by extragalactic astronomy and
the surveys that generate much of the deepest optical imaging,
including the data discussed below. The AB magnitude scheme is also
used in more local large spectroscopic samples such as the Sloan
Digital Sky Survey (SDSS).
Secondly, AB magnitudes are also constructed on a physical basis. A
source flat in flux as a function of frequency
(f$_\nu\propto\lambda^{-2}$) has zero colour in all bands.
Colours are
calculated throughout this paper by convolving available observational
and theoretical spectra with the filter transmissions concerned
(including the instrument and detector response) to accurately account
for filter-specific effective wavelengths and full-width half-maxima.
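Such a synthetic colour calculation can be sketched as follows. This is a minimal illustration only: the actual GOODS/ACS response curves and spectral templates are not reproduced here, and a photon-counting filter response is assumed.

```python
import numpy as np

C_AA = 2.99792458e18  # speed of light in Angstrom/s

def _integrate(y, x):
    """Trapezoidal integration (avoids NumPy-version issues with trapz)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def synthetic_ab_mag(wave_aa, f_lambda, filt_wave_aa, filt_trans):
    """Synthetic AB magnitude of a spectrum through a filter.

    wave_aa   : wavelength grid [Angstrom]
    f_lambda  : flux density [erg s^-1 cm^-2 AA^-1]
    filt_*    : filter curve, taken to be a photon-counting response

    m_AB = -2.5 log10(<f_nu>) - 48.60, with <f_nu> the passband average
    of f_nu weighted by T(lambda) dlambda/lambda.
    """
    T = np.interp(wave_aa, filt_wave_aa, filt_trans, left=0.0, right=0.0)
    f_nu = f_lambda * wave_aa**2 / C_AA  # erg s^-1 cm^-2 Hz^-1
    mean_fnu = (_integrate(f_nu * T / wave_aa, wave_aa)
                / _integrate(T / wave_aa, wave_aa))
    return -2.5 * np.log10(mean_fnu) - 48.60

def synthetic_colour(wave_aa, f_lambda, filt_a, filt_b):
    """AB colour m_a - m_b for two (wave, transmission) filter tuples."""
    return (synthetic_ab_mag(wave_aa, f_lambda, *filt_a)
            - synthetic_ab_mag(wave_aa, f_lambda, *filt_b))
```

By construction, a source with constant $f_\nu$ at the AB zero point returns a magnitude of zero in every band, and hence zero colour.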
Our main reference spectra for faint stars are the observationally-derived
spectra of \citet{2007AJ....133..531B}, which were constructed for each
subtype of class M and later stars from a total sample of over 700
observed spectra taken by the SDSS. As figure \ref{fig:templates}
illustrates, these template spectra trace the same stellar locus as
those of \citet{1998PASP..110..863P}, although slight deviations are
seen between the two template sets, particularly at late spectral
types.
Use of the AB (rather than Vega) magnitude system introduces a
blueward shift of 0.43 magnitudes in the ``Standard'' Bessell $V-I$
colour, which accounts for the spectral slope of $\alpha$ Lyr at long
wavelengths. In addition the $F606W$ ($v$) and $F775W$ ($i'$) filter
set has redder central wavelengths than their Bessell filter
counterparts, and the $F606W$ filter is broader than the ``Standard''
$V$ as figure \ref{fig:filter_profiles} illustrates.
\begin{figure}
\includegraphics[width=0.95\columnwidth]{filters_star_paper.ps}
\caption{The transmission profiles of the $F606W$
($v$, dotted), $F775W$ ($i'$, dashed) and $F850LP$ ($z'$, dot-dash) wavebands, and of the
\citet{1990PASP..102.1181B} $V$ (dark grey) and $I$ (light grey)
filters. Offset and plotted below is the spectrum of an M4 star.}\label{fig:filter_profiles}
\end{figure}
\subsection{Surveys}
The analysis presented in this paper utilises two deep,
multi-wavelength imaging surveys in order to combine their different
strengths.
The photometric selection of cool dwarf stars parallels that of high
redshift galaxies in many respects. In both cases the accurate
characterisation of the sources, and their reliable separation from
other sources with similar optical colours, requires multiwavelength
imaging spanning a long baseline from 5000\AA\ to 3.6$\mu$m, and high
angular resolution impossible to obtain from the ground without
the use of adaptive optics.
The Great Observatories Origins Deep Survey\footnote{http://www.stsci.edu/science/goods/} (GOODS)
satisfies the requirements for such a selection. In addition to an
abundance of auxiliary data, the survey comprises two primary data
sets, optical and infrared, each of which surveys a field in the
northern hemisphere (GOODS-N) and a second in the south (GOODS-S). All
GOODS data is publicly released to the astronomical community in the
form of fully reduced images.
The optical (0.3-1$\mu$m) component of GOODS \citep{2004ApJ...600L..93G} consists of
deep optical imaging in the $F435W$ ($b$), $F606W$ ($v$), $F775W$
($i'$) and $F850LP$ ($z'$) bands, each reaching depths $>28$th
magnitude as shown in table \ref{tab:depths}, obtained using the wide
field channel of the Advanced Camera for Surveys (ACS) on the {\em
Hubble Space Telescope} (HST). The imaging has been drizzled to a
pixel scale of 0$.''$03 per pixel, with a point source FWHM of
0$.''$05 in the $i'$ band. In addition, the GOODS team have made
publicly available catalogues of the source photometry and properties
in each field, generated using the SExtractor software package
\citep{1996A&AS..117..393B}, and with detection parameters tuned to
the survey depth and resolution. In this analysis we utilise the
Kron-radius based automatic magnitudes reported by SExtractor (i.e.
MAG\_AUTO). These magnitudes follow the curve of growth of the source
profile, and are equivalent to a corrected aperture magnitude for an
unresolved source. We use version r1.1 of the GOODS
catalogues\footnote{http://archive.stsci.edu/prepds/goods/}.
The second primary dataset generated by the Great Observatories
Origins Deep Survey comprises deep {\em Spitzer Space Telescope}
infrared (3-10$\mu$m) imaging of the fields targeted by {\em Hubble}
\citep{2005AAS...207.9507D}. The IRAC instrument was used to survey
the GOODS fields at 3.6, 4.5, 5.8 and 8.0\,$\mu$m, with a pixel scale
of 1.2$''$/pix to a depth $>$26th magnitude in the first two
bands and $>$24th magnitude at longer wavelengths as shown in table
\ref{tab:depths}\footnote{The GOODS programme also obtained imaging
with the MIPS instrument that is not discussed here}. As discussed in
section \ref{sec:spitzer-properties}, confusion is a significant issue
for sources at the faint magnitudes of our targets, and in imaging of
this depth. As a result, the 12 arcsecond (10 pixel) apertures
recommended for IRAC photometry in sparse fields seldom escape
confusion in the GOODS images. \citet{2007astro.ph..1725V} determined
that smaller apertures with a diameter of 4.5 arcseconds were suitable
for compact sources in the GOODS images, and could be reliably
corrected for flux in the wings of the point spread function. We apply
the same prescription for aperture magnitudes as
\citeauthor{2007astro.ph..1725V}, correcting to total magnitudes using
the offsets given in table \ref{tab:depths}.
\begin{table*}
\begin{tabular}{lcccccccc}
Band & b & v & i' & z' & 3.6\,$\mu$m & 4.5\,$\mu$m & 5.8\,$\mu$m & 8.0\,$\mu$m\\
\hline\hline
Depth & 28.87 & 29.46 & 28.85 & 28.55 & 26.96 & 26.38 & 24.50 & 24.37\\
$m_{tot}-m_{ap}$ & - & - & - & - & -0.180 & -0.180 & -0.326 & -0.418\\
\end{tabular}
\caption{The 3\,$\sigma$ depth of the GOODS imaging used in this analysis,
and aperture corrections applied in the Spitzer wavebands. The limits
for optical bands are given in a 1 arcsecond aperture and those for the
Spitzer bands in a 4.5 arcsecond aperture, corrected to total
magnitude.}\label{tab:depths}
\end{table*}
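The aperture corrections in table \ref{tab:depths} are applied additively to the 4.5 arcsecond aperture magnitudes. A minimal sketch of this step, assuming the offsets of table \ref{tab:depths} (the function and dictionary names are illustrative, not part of the GOODS pipeline):

```python
# Aperture corrections (m_tot - m_ap) from table 1, in magnitudes.
# Names are illustrative only; values follow the table in the text.
APERTURE_OFFSETS = {
    "3.6um": -0.180,
    "4.5um": -0.180,
    "5.8um": -0.326,
    "8.0um": -0.418,
}

def total_magnitude(m_aperture, band):
    """Correct a 4.5 arcsec IRAC aperture magnitude to a total magnitude."""
    return m_aperture + APERTURE_OFFSETS[band]
```

Since the offsets are negative, the corrected total magnitudes are brighter (numerically smaller) than the raw aperture magnitudes, accounting for flux in the wings of the point spread function.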
Both GOODS fields look out of the plane of the galactic disk and away
from the galactic centre, with galactic coordinates of $l$= 125.865,
$b$=+54.808 for the GOODS-N field and $l$=223.57, $b$=-54.43 for the
GOODS-S. Both fields were also selected to lie in regions of low
Galactic extinction, selecting against structures in the galactic
disk.
The spectroscopic analysis of faint M stars presented in section
\ref{sec:spect} is also based in part on GOODS data. Several of
the stars identified in section \ref{sec:phot-sel} have deep optical
spectroscopy taken with FORS2 at the Very Large Telescope (VLT) as
part of the ESO GOODS survey \citep{2005A&A...434...53V}.
In order to improve the statistical significance of our analysis, we
supplement the ESO GOODS data with further M class star spectra
observed as part of the BDF project. The BDF survey comprises deep
multicolour imaging and spectroscopic follow-up of red sources in four
near-contiguous fields, applying an $R-I$ colour selection to identify
galaxies at $z>5$, and reaching a limiting depth of $I_{AB}=26.3$. The
first of these fields, BDF1, was described in
\citet{2003ApJ...593..630L} while the remaining three fields, BDF2-4,
were observed with an identical strategy and will be described in a
forthcoming paper. These data are not used in the photometric analysis
since with ground-based seeing it is impossible to separate mid-M
stars from high redshift galaxies on the basis of optical colours
alone (without $B$ band imaging reaching some four magnitudes deeper
than the $I$ band limit). However, the high spectroscopic completeness
of this survey for objects with extreme $R-I$ colours (essentially the
same selection function as the {\em HST}/ACS $V-I$ selection discussed
in section \ref{sec:phot-sel}) makes it extremely useful for the
\textit{a posteriori} analysis of stellar spectra.
The FORS2 $R$ (R\_SPECIAL, ESO number 76) and $I$ (I\_BESS, ESO number
77) band filters used for imaging in the BDF fields were selected to
be sharp sided and to have minimal overlap. Despite the different
naming, the wavelength coverage of the $R$ band at 6550\AA\
(FWHM=1650\AA) overlaps significantly with that of the $v$ band at
5907\AA\ (FWHM=2343\AA). Hence a colour cut of $(R-I)_{AB}>1.5$
reproduces the {\em HST}/ACS $v-i'>1.3$ selection function for stars
discussed in section \ref{sec:phot-sel}, selecting class M3-M4 and
later.
\section{A Photometric Sample of Faint M-dwarfs}
\label{sec:phot-sel}
Class M, L and T stars have an underlying red continuum due to
their cool temperatures. Their spectrum in the yellow and red is
dominated by jagged molecular absorption bands that further redden the
$v-i'$ and $V-I$ colours.
As a result, such cool stars can have extreme colours in
adjacent filters and are regularly selected by `dropout' surveys
searching for high redshift sources (which show a similar drop between
bands due to neutral hydrogen absorption in the intergalactic medium).
We note that although `dropout' is a historical term used by
convention in the selection of high redshift galaxies, neither high
redshift galaxies nor cool stars are expected to have zero flux in the
bluewards band. As a result, whether the sources show a finite colour
or drop entirely below the detection limit in the bluewards band
depends on the relative depth of the imaging. Hence the term `break'
(as in Lyman break galaxy) or `drop' is more technically correct.
Sources are required to satisfy the criterion of a significant
spectral break, measured through their photometry, but not to remain
undetected in any given band. Nonetheless, for historical reasons, we
continue to use the term `dropout' while cautioning readers that its
literal meaning is inaccurate.
In order to select mid-M class stars and later, we apply the same
$v$-drop selection function commonly used to select high redshift
galaxies. As figure \ref{fig:templates} illustrates, a colour cut of
$(v-i')_{AB}>1.7$\footnote{which has been used to identify the
Lyman-$\alpha$ spectral break in galaxies at $4.8<z<5.8$}
effectively selects stars of class M3 and later at solar metallicity
(see section \ref{sec:halo-pop} for discussion of extremely low
metallicity populations, which will fall outside our colour selection).
We note that we are likely to be insensitive to later spectral types:
despite their red $v-i'$ colours, such sources are generally too
faint to be detected to our survey limit in the $i'$ band.
Beyond M6, the almost-linear increase in $v-i'$ colour begins to
plateau, reducing the ability of this colour to distinguish between M
class subtypes. Adding a second, redwards colour such as $i'-z'$
improves the discrimination between cool stars. Hence in order to
examine the coolest stars in our survey fields we also consider a
$z'$-band selected sample of stars satisfying
$(i'-z')_{AB}>1.3$\footnote{which identifies galaxies at $5.6<z<6.5$
or class L and T stars}.
\begin{figure}
\includegraphics[width=0.95\columnwidth]{viz_stars_pickles.ps}
\caption{The $v-i'$ and $i'-z'$ colours of cool (class M and L) stars,
calculated from empirical template spectra and convolved with the
{\em HST}/ACS filter set. Solid points indicate the cool star sample
of \citet{2007AJ....133..531B}, diamonds indicate the M star spectra
of \citet{1998PASP..110..863P} while crosses are K stars from the
same source. M class stars increase linearly in $i'-z'$ colour with
increasing subclass, although we note that the two sets of empirical
spectra differ significantly at certain
subclasses.}\label{fig:templates}
\end{figure}
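The synthetic colours in figure \ref{fig:templates} follow from integrating each template spectrum through the ACS bandpasses. A minimal sketch of such a synthetic AB photometry calculation, assuming the spectra and filter throughputs are tabulated on a common frequency grid (the function names and the simple summation quadrature are ours, not those of any particular photometry package):

```python
import math

def ab_magnitude(f_nu, throughput, nu):
    """Bandpass-averaged AB magnitude of a spectrum f_nu (erg/s/cm^2/Hz),
    with f_nu and the filter throughput sampled on the frequency grid nu (Hz)."""
    # Photon-weighted mean flux density over the bandpass.
    weights = [t / n for t, n in zip(throughput, nu)]
    mean_f = sum(f * w for f, w in zip(f_nu, weights)) / sum(weights)
    return -2.5 * math.log10(mean_f) - 48.60

def synthetic_colour(f_nu, band_blue, band_red, nu):
    """Synthetic colour (blue minus red); zero for a flat f_nu spectrum."""
    return ab_magnitude(f_nu, band_blue, nu) - ab_magnitude(f_nu, band_red, nu)
```

By construction a spectrum that is flat in f$_\nu$ has zero colour in any filter pair, which is the defining property of the AB system.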
The resulting sample is expected to contain several distinct
populations: the targeted stars, high redshift ($z>5$) galaxies and
quasars which show a strong break at $\lambda_{rest}=$1216\AA, and old
elliptical galaxies at $1<z<2$ which break at
$\lambda_{rest}=$4000\AA. The majority of contaminant galaxies can be
identified by their extended half light radii in the {\em HST}/ACS
data. All confirmed $z>5$ galaxies selected by such a colour cut
criterion are resolved in space-based imaging, albeit barely in
several cases
\citep{2003MNRAS.342L..47B,2004MNRAS.347L...7B,2004ApJ...607..704S,2004ApJ...604L..13S,2004ApJ...611L...1B}.
Similarly, galaxies at $1<z<2$ are typically well resolved by HST. As
figure \ref{fig:fwhm_mag} illustrates, unresolved sources separate
cleanly from extended sources in the GOODS imaging for sources
brighter than $i'_{AB}=25$ (or for $z'_{AB}=26$ in the case of
$z'$-selected sources).
Clearly high redshift quasars cannot be distinguished from stars by
their half light radii, since both populations are expected to be
unresolved. The number of quasars at the faint magnitudes
probed here is expected to be small. \citet{2007astro.ph..1724D}
identified only one active galactic nucleus (AGN) in a highly
spectroscopically complete survey of dropout objects examining 450
arcmin$^2$ to a depth of 26th magnitude in $I$\footnote{Similar to the
surface densities for faint AGN found by \citet{2005ApJ...634L...9M}
and \citet{2007astro.ph..1515S}}. That source was slightly resolved
in {\em HST}/ACS data, with comparable contributions to the rest-frame
ultraviolet flux from the AGN and a starburst in the host galaxy.
Hence we might expect to detect one AGN in the 300\,arcmin$^2$ GOODS
survey area, either unresolved or marginally resolved. Fortunately,
such a source should be distinguishable based on its infrared colours
\citep{2006astro.ph..8603S} as discussed in section
\ref{sec:spitzer-properties}.
In order to select low mass stars in the available ACS imaging with
minimal contamination from extragalactic sources, we apply an
additional selection criterion requiring that FWHM$_{i'} \le 4.0$
pixels (0$.''$12), and we truncate our $v$-drop selection at
$i'_{AB}=25$ and our $i'$-drop selection at $z'_{AB}=26$.
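Combining the colour, compactness and magnitude criteria above, the stellar selection can be sketched as follows (a minimal illustration with our own function names; magnitudes are AB and the FWHM is measured in ACS pixels):

```python
def is_vdrop_star(v, i, fwhm_i):
    """v-drop stellar selection: (v-i')_AB > 1.7, unresolved, i'_AB <= 25."""
    return (v - i > 1.7) and (fwhm_i <= 4.0) and (i <= 25.0)

def is_idrop_star(i, z, fwhm_i):
    """i'-drop stellar selection: (i'-z')_AB > 1.3, unresolved, z'_AB <= 26."""
    return (i - z > 1.3) and (fwhm_i <= 4.0) and (z <= 26.0)
```

In practice each candidate passing these cuts is also visually inspected for contamination by neighbouring sources, a step not captured in this sketch.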
Each source was visually inspected to ensure that the photometry was
not contaminated by neighbouring objects. In this deep imaging, the
area of sky affected by sources above the detection limit is $2$\%.
\begin{figure}
\includegraphics[width=0.95\columnwidth]{plot_fwhm_mag.ps}
\caption{The distribution of Gaussian FWHM determined for sources
in the GOODS fields. The stellar locus is clearly separated from the
distribution of extended galaxies for $i'_{AB}<25$ with very few
ambiguous sources. We apply a criterion of FWHM $<4$\,pixels to
select stars. Note that sources saturate in the GOODS imaging at
19th magnitude.}\label{fig:fwhm_mag}
\end{figure}
The resultant number counts of dropout-selected stars are shown in
table \ref{tab:number}. In this analysis we focus on the M stars in
the sample since these are both numerous, allowing a statistical
analysis, and relatively bright, allowing spectroscopic follow-up.
While the $i'$-drop selected sample is too
small to derive reliable statistics, the magnitude distribution of the
$v$-drop sample (i.e. M stars) is illustrated in figure
\ref{fig:number}. There is no clear trend in the number density of
cool stars with apparent magnitude. The northern GOODS field is 34\%
more abundant in low mass stars than the southern field, despite their
similar areas and orientation with respect to the galactic disk.
\begin{table}
\begin{tabular}{lccc}
Field & Area/sq arcmin & N($v$-drop) & N($i'$-drop)\\
\hline\hline
GOODS-S & 150 & 53 & 7 \\
GOODS-N & 150 & 71 & 5 \\
\end{tabular}
\caption{The number of unresolved sources in the deep
{\em HST}/ACS imaging satisfying our colour selection
criteria as described in section \ref{sec:phot-sel}.}\label{tab:number}
\end{table}
\begin{figure}
\includegraphics[width=0.95\columnwidth]{numbercounts.ps}
\caption{The apparent magnitude distribution of $v$-drop stars
in the GOODS field. The decline in number counts at bright
magnitudes arises both due to the small volumes probed at bright
magnitudes and due to the potential saturation of brighter sources
in the deep GOODS imaging. There is no obvious decline in the number
of sources seen at faint magnitudes. Each field surveys an area of
150\, arcmin$^2$.}\label{fig:number}
\end{figure}
\section{The Optical Properties of Faint M-dwarfs}
\label{sec:halo-pop}
The selection of stars by photometric methods relies on a combination
of the red optical spectrum that arises from a cool blackbody with the
abrupt absorption edges of metal molecules. In the absence of
significant metal enrichment, the colours of stars at a given
temperature are likely to be less extreme since the strength of
molecular features is reduced. Hence a sample of faint stars based on
extreme colours in two adjacent bands is expected to select both
relatively nearby late M stars at low metallicities and
near-solar metallicity early M stars at the large distances more
normally associated with the Galactic halo, if such a population
exists.
The \citet{1995ApJ...445..433A} `NextGen' models and the Kurucz ATLAS9
models \citep{2004astro.ph..5087C,1996ASPC..108....2K} remain the most
recent stellar atmosphere models to cover the temperature and
metallicity regime of interest here (although the ATLAS9 models in
fact do not extend to the T$<$3500\,K late M stars and do not account
for the important CaH bands in cool stellar spectra). As the recent
analysis by \citet{2004AJ....128..829B} shows, neither provides a
satisfactory fit to the spectra of cool stars. While the MARCS spectra
produce a somewhat better fit, the publicly available library extends
only as cool as 4000\,K. In practice, no stellar models can reliably
reproduce the colours of cool stars.
On the other hand, there exist a number of published `extreme' cool
stars known to be at significantly sub-solar metallicity
\citep[e.g.][]{1997AJ....113..806G,2007ApJ...657..494B}. Unfortunately
the photometry and/or spectral coverage of these observations
inevitably covers too short a wavelength range to reliably calculate
$v$, $i'$ and $z'$ colours simultaneously from the empirical spectra.
As a result, while we tentatively consider the results of available
model templates as well as empirical spectra, we caution the reader
that such results should not be considered reliable. We note that
future interpretations of this population would benefit from improved
modelling and interpretation of these faint stars in the red spectral
region ($>$8000\AA) accessible to red-sensitive spectrographs.
We have determined the optical colours
expected for cool stars, in the same filter set employed by the GOODS
survey, over a range of metallicities. We use the M-star spectral
models of \citet{1995ApJ...445..433A}\footnote{In their `NextGen' form
as implemented by \citet{1998A&AS..130...65L}} over the range
[Fe/H]=0.0 to $-4.0$ (where such models are available) and at four
different temperatures: T=2000K, 2500K, 3000K and 3500K.
\begin{figure*}
\includegraphics[width=0.95\columnwidth]{viz_models_templates_overlay.ps}
\includegraphics[width=0.95\columnwidth]{viz_models_templates_overlay2.ps}
\caption{The $v-i'$ and $i'-z'$ colours of model dwarf star
atmospheres \citep[from][, triangles]{1995ApJ...445..433A},
convolved with the {\em HST}/ACS filter set. Points at different
temperature for the same metallicity are joined by lines and the
temperature in kelvin is labelled. Figure a (left) overlays these
tracks with the colours of moderate metallicity dwarf star templates
derived by \citet{2007AJ....133..531B}, while figure b (right)
overlays the colours of faint stars identified in section
\ref{sec:phot-sel}. Photometric errors in both colours are
$<$0.05\,mag.}\label{fig:metallicity}
\end{figure*}
Figure \ref{fig:metallicity} illustrates the colours predicted by the
`NextGen' models, and compares these with the colours of the SDSS
dwarf star templates derived by \citet{2007AJ....133..531B} which are
dominated by nearby disk stars and the photometric sample derived in
section~\ref{sec:phot-sel}. It is clear that while the $v-i'$ colour
fails to distinguish between different metallicity populations, the
addition of an $i'-z'$ colour may provide a crude measure of metallicity.
We note that a slight offset (approximately 0.1 magnitudes in $i'-z'$)
appears to be required to match the model spectra to the observed
near-solar, old disk stellar sample at bright magnitudes.
Given the uncertainties in current models of M star atmospheres, we
compare their predictions with observational data on Galactic globular
clusters. Observational constraints on the $i'-z'$ (or more generally
$I-Z$) colours of cool stars as a function of metallicity are
poor. However the $V-I$ colours of such stars (observed in the
Johnson-Cousins or Bessell systems and in Vega magnitudes) have been reported from observations of several
globular clusters. While few observations probe deep enough to
identify the base of the main sequence (as opposed to the main
sequence turn-off), some observations of low metallicity globular
clusters have characterised the main sequence ridge to very faint
magnitudes.
NGC 6397 is one such cluster, with a metallicity of [Fe/H]=-2.0 and
lying at a distance of 2.2\,kpc \citep{1996ApJ...468..655C}. The main
sequence in this source has been observed with {\em HST}/ACS down to a
limiting magnitude of $I_{F814W}\sim30$ (Vega) and is seen extending
redwards to $V_{F606W}-I_{F814W}\sim4$, the theoretical
hydrogen-burning limit at this metallicity
\citep{2006Sci...313..936R}. At our colour cut of $v-i'$=1.7
(approximately $V-I_\mathrm{Vega}=2.6$), the main sequence in this
cluster corresponds to an observed magnitude of approximately
$I_{F814W}=24.5$ (Vega), or an absolute magnitude limit of 12.4 in the
I band, while our detection limit of $i'_{AB}=25$ would only select
sources with $V-I_\mathrm{Vega}<3.0$. This cutoff in apparent
magnitude implies that we are only sensitive to approximately class
M4-M5 stars at the distance and metallicity of NGC 6397
\citep{2002AJ....123.3409H} and would not expect to select any stars
with halo metallicities at greater distances. This emphasises that
our colour cut allows only for a narrow range of low metallicity
objects which are sufficiently red and sufficiently bright to meet our
colour selection criteria.
In figure \ref{fig:globulars} we illustrate the effects of decreasing
metallicity on the main sequence colours of globular clusters with
published photometry. As discussed above, NGC 6397 is a distant
globular cluster with the low metallicity expected of halo stars. By
contrast NGC 6366 and NGC 6144 are two clusters with [Fe/H]=-0.58 and
[Fe/H]=-1.7 respectively, and published main sequence star photometry
to faint magnitudes observed with HST/ACS \citep{2007AJ....133.1658S}
as part of an ACS survey of globular clusters. While these data are
amongst the deepest available, they do not reach the red colours, and
hence the faint $V$-band magnitudes, of the M stars of interest here.
Nonetheless, it is interesting to note that the well-defined main
sequence in these globular clusters marks a gradual shift to bluer
$V-I$ colours with decreasing metallicity, and all three are bluer
than the old-disk stars measured by the SDSS.
\begin{figure}
\includegraphics[width=0.95\columnwidth]{glob_cluster.ps}
\caption{The colour-absolute magnitude sequence of three globular clusters
\citep{2006Sci...313..936R,2007AJ....133.1658S} and of the cool
stars identified by the SDSS \citep{2007AJ....133..531B}. In each
case information in the given references is used to correct sources
to absolute magnitudes on the AB system and, where necessary, to
correct for Galactic reddening. For ease of comparison with the
globular cluster work, the SDSS stars are shifted into the HST ACS
$F606W$ and $F814W$ bands. The data in NGC 6144 and NGC 6366 become
unreliable redwards of approximately $V-I=1.3$.}\label{fig:globulars}
\end{figure}
The spectra of several very low metallicity extreme subdwarfs in the
field have now been published. \citet{2007ApJ...657..494B} demonstrate
the effects of decreasing metallicity on the optical spectra of late M
dwarfs. The weakening of metal oxide bands at 7200\AA, 7900\AA\ and
8500\AA\ with decreasing metallicity boosts stellar flux in the $i'$
and $z'$ bands. In particular, the deep 7900\AA\ absorption feature
in the spectra of cool stars at disk metallicities suppresses the
$i'$-band magnitudes of these sources. Decreasing metallicity removes
this suppression, boosting the $v-i'$ colour of these sources while
decreasing their colour in $i'-z'$. While none of the published
extreme subdwarfs have published $z'$-band photometry, synthetic
colours calculated from the \citet{2007ApJ...657..494B} spectra
indicate that at these low metallicities, late M dwarfs are
approximately 0.2\,mag bluer in $i'-z'$ than typical old-disk M stars
(Bessell, private communication).
It is clear from the combination of
theoretical and observed data that most stars earlier than M3-M4 will
be omitted at low metallicities due to their blue colours, while later
M class stars are only likely to be detected relatively nearby due to
their faint magnitudes. Thus our selection preferentially identifies
stars with disk metallicities, while probing to halo distances.
This analysis of our colour selection function is supported by the
distribution of optical colours in our faint star sample as shown in
figure \ref{fig:metallicity}b. The majority of stars have colours consistent with
metallicities in the range {$-1<\mathrm{[Fe/H]}<0$} (i.e. similar to that of
the Galactic disk).
Two stars in the GOODS-N field show anomalously low $i'-z'$ colours
($i'-z'<0.5$, $v-i'=2.1$), as might be expected at [Fe/H]$<-2$, and
both of these are relatively faint ($i'>23$). Comparison with the
colours of the extreme subdwarfs of \citet{2007ApJ...657..494B}
suggests that these might be identified as very low metallicity
sources, although both are somewhat redder in $v-i'$ than might be
expected. Either spectroscopy of these two blue stars, or optical
photometry of known extreme subdwarfs will be required to make a more
quantitative comparison.
\section{The Infrared Properties of Faint M-dwarfs}
\label{sec:spitzer-properties}
Having identified the stars in HST data, we now consider the colours
of these unresolved sources in the infrared with {\em Spitzer} GOODS
data. The deep images are close to the confusion limit in the IRAC
bands, and many of the $v$-drop sources are confused in the imaging.
Of a total of 124 $v$-drop stars in the GOODS fields we classify 48
(39\%, figure \ref{fig:spitzer_examples} case a) as isolated, a
further 36 (29\%, figure \ref{fig:spitzer_examples} case b) as
confused but potentially deblendable and the remaining 40 (32\%,
figure \ref{fig:spitzer_examples} case c) as hopelessly confused. In
this analysis we use only isolated objects since the noise in
deconvolved magnitudes at this level is often dominated by residuals
from neighbouring objects rather than the target source.
\begin{figure*}
\includegraphics[height=0.6\columnwidth]{spitzer_examples_n_acs.ps}
\includegraphics[height=0.6\columnwidth]{stamps_examples_spitzer_n.ps}
\caption{Examples of GOODS/ACS selected sources in the
GOODS/IRAC imaging. Some sources are isolated or have only much
fainter neighbours (e.g. case a), while others are hopelessly
confused in the IRAC bands (e.g. case c) with neighbours of
comparable magnitude or brighter. Some sources are blended but not
beyond hope of deconvolution (e.g case b). Boxes are 20$''$ to a
side.}\label{fig:spitzer_examples}
\end{figure*}
The infrared colours of the remaining sample are shown in figures
\ref{fig:spitzer_23v12} and \ref{fig:spitzer_14v23}, with the colours
measured for M, L and T dwarfs in the same bands by
\citet{2006ApJ...651..502P} shown for comparison. Working some three
magnitudes above the 3\,$\sigma$ background limit, the errors in the
colours are dominated by uncertainty in the aperture correction and
are set to an indicative 0.1 mag in each band.
The colours of cool stars in the infrared are dominated by the
relative strengths of molecular absorption bands in their atmospheres,
in particular those of CH$_4$ in the 3.6\,$\mu$m band, CO in the
4.5\,$\mu$m band and H$_2$O in the 5.8\,$\mu$m band.
The infrared colours of our faint $v$-drop selected sources are very
similar to those of the brighter population studied by
\citeauthor{2006ApJ...651..502P}, despite being fainter by some ten
magnitudes. The GOODS $v$-drop stars extend slightly blueward of the
bright sample in [4.5]-[5.8] colour, as would be expected given that
our $v$-drop sample extends to earlier M subclasses than the bright
sample (M3/M4 as opposed to M5). It is not clear from existing models
or data whether a similar bluewards shift might be expected with
varying metallicity.
\begin{figure}
\includegraphics[height=0.9\columnwidth]{ch23_v_ch12_c_col.ps}
\caption{The [4.5]-[5.8] versus [3.6]-[4.5] colours of isolated stars
selected as $v$-drops in the IRAC GOODS fields. The cool star sample
of \citet{2006ApJ...651..502P} which comprises M5-T9 stars at bright
magnitudes is shown for reference. A typical uncertainty in the
colours (dominated by uncertainty in the aperture correction and
residual flux from nearby sources) is shown in the upper
left.}\label{fig:spitzer_23v12}
\end{figure}
\begin{figure}
\includegraphics[height=0.9\columnwidth]{ch14_v_ch23_c_col.ps}
\caption{The [3.6]-[8.0] versus [4.5]-[5.8] colours of isolated
stars selected as $v$-drops in the IRAC GOODS fields. As in figure
\ref{fig:spitzer_23v12}. Error bars are shown separately for
stars with anomalous colours in these bands (i.e. [3.6]-[8.0]$>$-1).
These fall into three categories; A, B and C as described in section
\ref{sec:spitzer-properties}.}\label{fig:spitzer_14v23}
\end{figure}
While the bulk of the population has colours consistent with brighter
M stars, a subset of seven sources are anomalous with M star optical
colours and T star colours in the {\em Spitzer} wavebands. These fall
loosely into three categories.
One source, labelled `A' on figure \ref{fig:spitzer_14v23} has colours
consistent with a late-M/early-L classification within approximately
3\,$\sigma$ of the photometric errors in both optical and infrared
bands. The colours are also consistent with those expected for a close
M/L binary system with the M star flux dominating the optical bands
and the fainter L star contributing to the flux longwards of
3\,$\mu$m. Binary M/L and M/T systems have been observed at bright
magnitudes in the past
\citep[e.g.][]{2002MNRAS.332...78L,2006AJ....131.1007B}. This source
is towards the faint end of our sample, with $i'_{AB}=24.12$. A binary
with the same physical separation as the M/L pair of
\citet{2006AJ....131.1007B} (1$.''$09 at a heliocentric distance of
35\,pc) would be unresolved at {\em HST}/ACS resolution at the
distance of 1\,kpc estimated for an M6 star at this magnitude. If it is indeed a
binary system then each component will be intrinsically fainter still
and may lie at greater distances than estimated. Radial velocity
measurements of the system may be necessary to determine whether the
star is an anomalous M dwarf or a binary pair.
A further three sources, labelled `B' on figure
\ref{fig:spitzer_14v23} have $z'$-5.8\,$\mu$m colours consistent
with those of M stars but are anomalously bright in the 8\,$\mu$m
band. In one case, the photometric errors are large and the faint star
is within 2\,$\sigma$ of the colours expected for an M star. However,
the other two stars are well detected in all bands, and show no
evidence of unreliable photometry. Both the 3.6 and 8\,$\mu$m bands
are dominated by CH$_4$ absorption with the fundamental band in the
bluewards filter. There is no clear interpretation of an 8\,$\mu$m
excess.
Finally the remaining three sources, labelled `C' on figure
\ref{fig:spitzer_14v23}, show no distinctive breaks or obvious
features in their spectral energy distributions but their flux peaks
further to the red than expected for M stars. Again these sources have
reliable photometry with no clear reason to believe photometric errors
are larger than shown. As such they have M star colours in the
optical and T star colours in the infrared. Interestingly their $i'-[3.6]$
colours are well within the distribution of the rest of the sample,
showing no evidence for M/T multiplicity (which would lead to red
colours as the T dwarf contribution becomes dominant at 3.6\,$\mu$m).
We note that all of these anomalous sources defy straightforward
explanation based on photometry alone and may benefit from
spectroscopic analysis.
One unresolved object in the GOODS-N has extremely unusual colours in
the IRAC wavebands, lying outside of the region plotted in figures
\ref{fig:spitzer_23v12} and \ref{fig:spitzer_14v23}. The colours of
this source (($z'$-[3.6])$_{AB}$=0.71, ([3.6]-[4.5])$_{AB}$=0.09) may be
consistent with identification as a high redshift quasar or compact
galaxy, and inconsistent with any cool star.
\section{The Properties of Cooler Stars in this Sample}
\label{sec:cooler}
To a limit of $z'_{AB}=26$, there are a total of 12 unresolved sources
in the GOODS fields that are identified as $i'$-drops but do not form
part of the $v$-drop sample due to their faint magnitudes in $i'$.
As figure \ref{fig:templates} indicated, an $i'$-drop selection is
sensitive to stars of spectral classes later than M8 (assuming
near-solar metallicities). However, the linear increase of $i'-z'$
colour with spectral type becomes less clear cut beyond this subclass,
due in part to the difficulty of assigning spectral types from
spectroscopic indices, making reliable classification of these sources
impossible on the basis of optical photometry alone.
Seven of these sources are unconfused in the 3.6\,$\mu$m band,
although given their faint magnitudes, one is undetected at
4.5\,$\mu$m and a further four are undetected longwards of 4.5\,$\mu$m.
Figure \ref{fig:idrops_23v12} shows the [4.5]-[5.8] versus [3.6]-[4.5]
colours or 3\,$\sigma$ limits on the colours of the six stars for
which these colours can be determined. All six are consistent with the
infrared colours of local M and L dwarf stars studied by
\citet{2006ApJ...651..502P}.
\begin{figure}
\includegraphics[height=0.9\columnwidth]{ch23_v_ch12_idrop.ps}
\caption{The [4.5]-[5.8] versus [3.6]-[4.5] colours (open circles) or 3\,$\sigma$
limits on the colours of isolated stars selected as $i'$-band
dropouts. Given their $i'-z'$ colours, these sources are expected to
be of class M8 or later. Symbols as in figure
\ref{fig:spitzer_23v12}.}\label{fig:idrops_23v12}
\end{figure}
We note that an $i'$-drop selection used to identify cool stars
here is likely to be significantly incomplete for low metallicity dwarfs if the
available stellar models provide a reasonable approximation to their
colours, as figure \ref{fig:metallicity} illustrates. A number of low
metallicity extreme subdwarfs have now been found. Unfortunately none
of these have published $z$-band photometry.
\citet{2007ApJ...657..494B} present the optical spectra and $I-J$
colours of a sample of late M and L extreme subdwarfs. The
spectroscopically confirmed subdwarfs show $I-J$ colours between 1.4
and 1.8 for M dwarfs, increasing to about 2.4 for L4 dwarfs. Given
that the spectra of M class stars are essentially flat in f$_\lambda$
over the range 8000-10000\AA, it seems unlikely that most late M
subdwarfs would have $i'-z'$ colours as extreme as 1.3. Indeed, synthetic
colours calculated from these spectra indicate that at these low
metallicities, late M dwarfs are approximately 0.2\,mag bluer in
$i'-z'$ than typical old-disk M stars (Bessell, private
communication). Nonetheless, the early L class extreme subdwarfs
of \citet{2007ApJ...657..494B} lie well redwards of the colour
selection criterion used here.
This metallicity-dependent selection effect complicates the
analysis of L and T star number counts, and larger surveys will be
needed to accurately constrain the surface density of these cool stars
at faint magnitudes.
Further progress in understanding the
incidence and properties of low metallicity class M, L and T dwarfs
may be possible in future given improved modelling of the effects of
varying metallicity on the spectrum longwards of 1\,$\mu$m and in the
{\em Spitzer}/IRAC wavebands.
\section{Spectroscopy of Distant M-dwarfs}
\label{sec:spect}
Some tens of faint low mass stars have now been observed in deep
spectroscopy, as part of the follow-up to dropout-selected high
redshift surveys.
Extremely deep 8\,m class telescope spectra of M or L class stars with
$i'>22$ have been published or publically released by
\citet{2004ApJ...607..704S} and \citet{2005A&A...434...53V}.
Additional deep stellar spectra were obtained
as part of the ESO Remote Galaxy Survey
\citep[ERGS,][Douglas 2007, in prep]{2007astro.ph..1724D} and by
\citet{2003ApJ...593..630L} as part of their BDF survey. Many such
spectra remain unpublished, being considered incidental to the search
for $z>5$ galaxies.
Six of the $v$-drop sample discussed above have deep spectroscopy
obtained in multi-object mode with FORS2 on the VLT as part of the
GOODS programme and publically released by
\citet{2005A&A...434...53V}. These sources have magnitudes in the
range $i'_{AB}=23-24.5$ and were observed at a dispersion of
3.2\AA/pixel and a spectral resolution R=660, for a typical exposure
time of 14400 seconds.
As discussed in section \ref{sec:photometry}, we supplement this small
sample with a further 18 faint M class star spectra observed as part
of the BDF project. The BDF survey comprises deep multicolour imaging
and spectroscopic follow-up of red sources in four near-contiguous
fields, applying an $R$-drop colour selection to identify galaxies at
$z>5$ and including a number of stars (which can be difficult to
separate from compact galaxies on the basis of optical colour alone).
$R$-drop sources were observed spectroscopically in multi-object mode.
Two FORS2 slitmasks were observed in each field, with a total integration
time of 13000 seconds per mask, divided into twenty exposures of 650\,s,
each offset by a different distance along the slitlets. Each slitlet
was 1 arcsecond wide. The spectra were wavelength calibrated from
arclamps and flux calibrated from standard star observations taken at
the same time. The final spectra span the spectral range
6000-10000\AA\ at a dispersion of 3.2\AA/pixel and with a spectral
resolution R=660. This configuration is virtually identical to that
used by \citet{2005A&A...434...53V}, with the two samples differing
only in the selection filters.
Examples of such faint star spectra are shown in figure
\ref{fig:spec_examples}. As expected for sources a hundred times
fainter than those observed by the SDSS, the spectra are noisy.
However, despite the low signal to noise of such spectra, they show
molecular absorption features similar to those of their brighter
counterparts. Each of the twenty-four stars in the final
spectroscopic sample discussed here has $i'_{AB}>23$
and signal to noise on the continuum around the
features of interest (i.e. around 7000\AA) of 10 or
greater (in some cases higher than 20).
\begin{figure}
\includegraphics[width=0.9\columnwidth]{figures_bdf_0.ps}
\includegraphics[width=0.9\columnwidth]{figures_bdf_1.ps}
\includegraphics[width=0.9\columnwidth]{figures_bdf_2.ps}
\caption{Examples of stars identified in the BDF spectroscopy of
an $R$-drop M star sample, as described in the text. The absorption
features of TiO 5, CaH 2 and Na I as defined in table
\ref{tab:indices} are indicated by labels and/or grey shading on an
inset magnification of the low wavelength
features.}\label{fig:spec_examples}
\end{figure}
Given these deep spectra it is possible to identify a number of
absorption features, both atomic lines and molecular bandheads.
Here we consider two diagnostic features in the spectrum of M stars:
the surface gravity dependent Na I doublet at 8183, 8195\AA\ and the
metallicity dependent TiO to CaH ratio around 7000\AA.
The Na I doublet is a strong feature in the spectrum of M
stars, but its measurement is complicated by the crowded spectrum around
8200\AA\ which makes the true continuum level difficult to determine.
\citet{1980ApJ...235..405F} introduced a systematic measurement of the
equivalent width of this line by defining a region to contain the
line, and two adjacent regions as a proxy for the continuum level.
This definition is shown in table \ref{tab:indices}.
As discussed by \citet{1997ApJ...479..902S}, this feature is a
sensitive indicator of surface gravity for stars cooler than 3900K,
and hence discriminates between dwarf stars with a surface gravity of
$\log(g)\approx5$ and giant stars of the same spectral class with a
typical surface gravity $\log(g)=1.0$.
Giant stars of class M4 and later are rare, and not expected to
substantially affect the number counts of faint stars, but the presence
of a significant subpopulation of giants could theoretically impact
our distance distributions (which are based on distance modulus,
assuming that the stars are dwarfs). As expected, the Na I equivalent
widths of all eighteen spectra are entirely consistent with dwarf stars
(having EW$>5$), and inconsistent with those of M giants (which would
have an EW$<1.5$).
\label{sec:indices}
Given their optical colours (discussed in section \ref{sec:halo-pop})
the M dwarfs described here are expected to be comparable to or
slightly lower in metallicity than M dwarfs observed in a local
sample. To test this hypothesis, we consider the ratio of two
molecular line indices. Low metallicity dwarfs have lower ratios of
CaH to TiO than higher metallicity dwarfs \citep{1997AJ....113..806G},
allowing their ratio to be used as a crude metallicity indicator.
The strength of these lines is usually gauged by a series of
narrowband indices first measured by \citet{1995AJ....110.1838R}.
These indices measure the strength of an absorption feature relative
to a pseudo-continuum flux measured nearby. The definition of the TiO
5 and CaH 2 indices used here is given in table \ref{tab:indices}.
\begin{table}
\begin{tabular}{lcc}
Band & S1 & W\\
\hline\hline
TiO 5 & 7042 - 7046\AA & 7126 - 7135\AA \\
CaH 2 & 7042 - 7046\AA & 6814 - 6846\AA \\
\\
Na I EW & 8169-8171\AA & 8172-8209\AA \\
& 8209-8211\AA & \\
\end{tabular}
\caption{The regions used in spectroscopic indices defined by
\citet{1995AJ....110.1838R} and the Na I doublet equivalent
width defined by \citet{1980ApJ...235..405F}.
Indices are defined as $R=F_W/F_{S1}$ where the spectral region
$W$ measures the line strength and the sideband $S1$ is indicative
of the continuum strength.}\label{tab:indices}
\end{table}
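The index and equivalent-width definitions in table \ref{tab:indices} translate directly into simple band-flux ratios and an integral. The following is a minimal sketch (not the reduction pipeline used here), assuming a wavelength array in \AA\ and a flux array sampled on the same grid:

```python
import numpy as np

def band_mean(wave, flux, lo, hi):
    """Mean flux in the wavelength window [lo, hi] (Angstrom)."""
    mask = (wave >= lo) & (wave <= hi)
    return flux[mask].mean()

def tio5_index(wave, flux):
    """TiO 5 index R = F_W / F_S1 using the regions of table 'indices'."""
    return band_mean(wave, flux, 7126.0, 7135.0) / band_mean(wave, flux, 7042.0, 7046.0)

def cah2_index(wave, flux):
    """CaH 2 index R = F_W / F_S1 using the regions of table 'indices'."""
    return band_mean(wave, flux, 6814.0, 6846.0) / band_mean(wave, flux, 7042.0, 7046.0)

def na_i_ew(wave, flux):
    """Na I doublet equivalent width (Angstrom); the continuum is taken as
    the average of the two narrow sidebands bracketing the feature."""
    cont = 0.5 * (band_mean(wave, flux, 8169.0, 8171.0) +
                  band_mean(wave, flux, 8209.0, 8211.0))
    mask = (wave >= 8172.0) & (wave <= 8209.0)
    # EW = integral of (1 - F/F_cont) over the feature window
    return np.trapz(1.0 - flux[mask] / cont, wave[mask])
```

A flat spectrum gives indices of unity and zero equivalent width; absorption in the $W$ window lowers the TiO 5 and CaH 2 indices below one and makes the Na I equivalent width positive.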
On figure \ref{fig:indices} we plot the ratio of line indices measured
on the 18 spectra of the combined GOODS and BDF samples. There is
little evidence for significantly sub-solar metallicities (i.e.
[Fe/H]$\approx$-2). Around 40\% of the sample (7 of 18) are consistent
with slightly sub-solar metallicities (i.e. [Fe/H]$\approx$-1), while
the same fraction are completely consistent with the indices for a
local (solar metallicity) sample. 20\% of the sample (4 stars) show
anomalously strong CaH features given the strength of their TiO
absorption bands. Of these, two also show TiO5$<0.3$ suggesting that
they are of spectral type M6 or later (consistent with their
photometric colours).
We caution the reader that it is possible that these spectral indices
may be affected by noise due to the subtraction residuals of night sky
emission, and by night sky absorption, which is not removed by our data
reduction. Spectroscopy aimed at identifying high redshift galaxies is
routinely obtained in such a manner as to optimise night sky line
subtraction. By offsetting the telescope between exposures, such that
the target moves along the slit, the frames can be combined during
reduction in such a way as to produce a simultaneous measurement of
the sky flux that can be subtracted from the object spectrum. The
measured signal to noise on the spectra accounts for the error in the
sky line subtraction, and hence, given our requirement of a signal to
noise exceeding ten, we estimate a statistical error on our spectral
indices of less than 15\%. However, we note that the indices are
measured in a spectral region affected by a weak skyline complex, and
that our spectra are of comparatively low resolution.
While we do not attempt further quantitative analysis of our spectra,
we note that there are other spectral features that may be accessible
even in these faint sources. In particular, it may be possible in
future to measure the metallicity using the KI doublet feature at
7700\AA. This doublet is observed to be extremely strong in the low
metallicity extreme subdwarfs of \citet{2007ApJ...657..494B}. While
the line is not as strong in early M dwarfs as in later subtypes, we
note that our spectra appear qualitatively similar to those of the
nearby near-solar metallicity sample, with no sign of the enhancement in
line strength that might be expected at low metallicities.
The typical metallicity of the sample as derived from 18 spectra is
consistent with that determined from their optical colours (figure
\ref{fig:metallicity}b). Given that none of the eighteen stars in our
spectroscopic sample are of very low metallicity ([Fe/H]$<-2$), the
fraction of significantly sub-solar sources in our photometric sample
is likely to be small ($<6$\%), consistent with the suggestion of two
sources out of 124 ($<2$\%) based on their optical colours.
\begin{figure}
\includegraphics[width=0.9\columnwidth]{M_star_CaH2_indices2a.ps}
\caption{The spectroscopic abundance indices of faint M-stars identified
during spectroscopic follow-up of high redshift dropout samples.
Blue circles indicate the GOODS sample of
\citet{2005A&A...434...53V} while red triangles show the BDF sample
presented in this work. Typical errors on these indices are smaller
than 10\%. The spectroscopic indices of local M and L stars are shown for
reference. Points indicate the local (solar metallicity) sample of
\citet{1996AJ....112.2799H} and \citet{1995AJ....110.1838R} while
small symbols indicate the sample of \citet{1997AJ....113..806G}
(diamonds -- solar metallicity, crosses -- [Fe/H]=-1, asterisks --
[Fe/H]=-2).}\label{fig:indices}
\end{figure}
\section{A Metal-rich Population of M-class Stars at Halo Distances}
\label{sec:phot-properties}
Given the evidence of their optical and infrared colours, combined
with photometry of a number of examples, it appears that the M star
population identified at faint magnitudes (and hence in extragalactic
surveys) is relatively metal-rich ([Fe/H]$>$-1) compared to the
metal-poor halo population that might be expected to dominate star
counts at large distance. In fact, their metallicities are more
typical of those observed locally in the thick disk. This suggests
that there is only a weak metallicity gradient in the class M and
later dwarfs identified as photometric dropout sources, although
this analysis does not exclude the possibility that a low
metallicity M-star population also exists at faint magnitudes.
Given this relatively metal-rich population, it is possible to explore
the comparison with brighter M stars further. As demonstrated by
\citet{2002AJ....123.3409H}, for dwarf stars with metallicities
[Fe/H]$>$-1, $i'-z'$ colour increases linearly with subclass between
spectral classes M3 and M9, flattens through the early L dwarfs and
then increases again. As a result, M stars identified as $v$-drops
(which are typically too blue in $i'-z'$ to be L stars) can be crudely
classified by their $i'-z'$ colours. We chose to use $i'-z'$ colour
for this classification rather than $v-i'$ for two reasons: in order
to minimise errors on the colour, and in order to provide a second
colour, thus beginning to constrain the SED for these sources. In the
deep optical imaging employed here, the typical photometric errors on
this colour are of order 0.05 mag, less than the measured variation of
colours within each subclass in a brighter sample from the SDSS
(determined by comparing spectral typing based on template fitting,
spectral indices and photometric colour), which is typically 0.2\,mag in
these filters \citep{2007AJ....133..531B,2002AJ....123.3409H}. This
scatter in $i'-z'$ colour within subclasses arises from variation in
physical properties of the stars in question. A continuum in the
metallicity and temperature of individual stars will affect their line
indices and photometry, leading to a gradual range of characteristics
in a field population rather than the discrete set of subtypes that
might be expected from a single age, single metallicity population
such as a globular cluster. This uncertainty in intrinsic colour
limits photometric classification to $\pm1$ subclass. The surface
density of faint stars decreases rapidly with increasing M star
subclass, since a smaller volume is probed to the same magnitude
limit. As a result stars in the range M4-M5 are expected to outnumber
later classes.
We make an approximate classification of our sample of $v$-drop M stars
by assigning each to the closest subclass in typical colour, based on
the templates of \citet{2007AJ....133..531B}. As figures
\ref{fig:data_n} and \ref{fig:data_s} illustrate, early M stars
(classes M3-M4) are found spanning the entire range from the
saturation limit to the faintest magnitudes at which the GOODS data
can distinguish unresolved sources from compact galaxies.
Later M stars (spectral class M5 and later) are intrinsically fainter
and so our survey probes progressively smaller volumes with increasing
subclass. These are not seen at bright magnitudes but become
increasingly common at $i'>22$. The latest stellar class seen in our
$v$-drop samples is M7. Stars of classes later than M7 are
intrinsically faint and so will only be detected in a relatively small
volume probed locally. In addition, the very red $i'-z'$ colours of
these sources are likely to lead to them dropping below the limit of
the $i'$ band used for selection. Later classes may instead be detected
as $i'$-band dropouts.
We note that M3 stars should formally be too blue to be selected given
our $v-i'$ colour cut. As discussed above, classification by $i'-z'$
colour is at best an approximate method, and the large `M3' sample may
be explained as a combination of M4 stars with bluer than average
$i'-z'$ colour and genuine M3 stars with a redder than average $v-i'$
colour. Spectral classification would be necessary to classify
individual sources. Interestingly, these `M3' sources are more numerous
than the `M4' sample, suggesting that the scatter in $i'-z'$ colour is
skewed to the blue rather than symmetrical, which may be indicative of
low metallicity as discussed in section \ref{sec:halo-pop}. If so, a
slight correction must be applied to the absolute magnitude of each
subclass, resulting in slightly lower estimated distances. However,
while this will apply a systematic shift to the distribution of
distance moduli, it will not change the shape of the distribution.
As expected, the two probable low metallicity candidate $v$-drops in
the GOODS-N field defy classification based on $i'-z'$ colour.
\begin{figure*}
\includegraphics[width=0.9\columnwidth]{dist_mod_n.ps}
\includegraphics[width=0.9\columnwidth]{dist_mod_by_class_n.ps}
\caption{(Left) The $i'-z'$ colours of $v$-drop selected low
mass stars in the GOODS-N field. M-star subclass increases linearly
with $i'-z'$ colour \citep{2002AJ....123.3409H,2005PASP..117..706W}
enabling a crude classification of stars based on their photometry.
(Right) The distance to those stars, classified into subclasses by
$i'-z'$ colour and applying the spectroscopic parallax calibrated
typical absolute magnitudes for each subclass as determined by
\citet{2005PASP..117..706W}. The maximum distance probed by our
photometric sample for each subclass is indicated by an
arrow.}\label{fig:data_n}
\end{figure*}
\begin{figure*}
\includegraphics[width=0.9\columnwidth]{dist_mod_s.ps}
\includegraphics[width=0.9\columnwidth]{dist_mod_by_class_s.ps}
\caption{(Left) The $i'-z'$ colours of $v$-drop selected low
mass stars in the GOODS-S field. (Right) The distance to those
stars, classified into subclasses by $i'-z'$ colour and applying the
typical absolute magnitudes for
each subclass as determined by \citet{2005PASP..117..706W}. As in
figure \ref{fig:data_n}.}\label{fig:data_s}
\end{figure*}
Using the typical absolute magnitude for each class \citep[as
determined by ][]{2007AJ....133..531B} we can estimate the distance
modulus, and hence distance to each star, essentially using the
relation between colour and absolute magnitude determined from the
local disk population \citep{2007AJ....133..531B,2002AJ....123.3409H}
to derive a photometric parallax for each star in the sample. While
this has traditionally been carried out using the $V$ vs $V-I$
relation, we chose instead to use $i'$ vs $i'-z'$ \citep[in common
with][]{2002AJ....123.3409H} since these sources are brighter, and
hence their magnitudes are more reliable, in the redwards bands.
The distribution of source
distances from the Sun is shown in the right-hand panels of figures
\ref{fig:data_n} and \ref{fig:data_s}. We are observing class M3-M4 stars
at distances between 1 and 14\,kpc. The distant tail of this population
is well above both the scale height of 275\,pc for the thin disk, and the
1.5\,kpc scale height of the thick disk that may be more appropriate for
disc M dwarfs \citep{2001ApJ...555..393Z}.
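The classification-plus-photometric-parallax procedure described above (assign the nearest subclass in $i'-z'$ colour, then apply that subclass's typical absolute magnitude) can be sketched as follows. The template colours and absolute magnitudes below are placeholder values for illustration only; the calibrations actually used are those of \citet{2007AJ....133..531B} and \citet{2005PASP..117..706W}.

```python
# Illustrative (placeholder) template i'-z' colours and absolute i' magnitudes
# per subclass -- NOT the published calibration values.
TEMPLATES = {
    "M3": (0.6, 10.0),  # (i'-z', M_i')
    "M4": (0.8, 11.0),
    "M5": (1.0, 12.0),
    "M6": (1.2, 13.0),
    "M7": (1.4, 14.0),
}

def classify_and_distance(i_mag, iz_colour):
    """Assign the nearest subclass in i'-z' colour and return (subclass, distance/pc)."""
    subclass, (_, abs_mag) = min(TEMPLATES.items(),
                                 key=lambda kv: abs(kv[1][0] - iz_colour))
    mu = i_mag - abs_mag                # distance modulus
    return subclass, 10.0 ** (mu / 5.0 + 1.0)
```

For example, a star with $i'=16.0$ and $i'-z'=0.82$ would be classed M4 under these placeholder templates, giving a distance modulus of 5 and hence a distance of 100\,pc.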
The most extensive survey of faint M dwarfs to date has been a survey
of high galactic latitude fields with {\em HST}/WFPC
\citep{1994ApJ...435L..51B,2001ApJ...555..393Z}. These authors probed
about 1400 stars, reaching a depth of $I_\mathrm{Vega}=23.7$
($I_{AB}=23.2$). Their finding of a sharp decline in the number of
stars, with a relatively short scale height of $<1$\,kpc above the
galactic disc \citep{2001ApJ...555..393Z}, seems inconsistent with our
flat number counts out to a heliocentric distance of $10$\,kpc.
Clearly our sample is smaller by a considerable margin, but is also
significantly deeper and hence highly complete ($>$98\%) at large
heliocentric distances. The fact that a similar distribution of
number counts is observed in two widely separated fields argues against
Galactic substructure as an explanation.
A flat number density of stars with increasing magnitude is not easily
reproduced given standard models of the galactic density profile. In
order to explore this, we constructed Monte Carlo simulations of a
sample of 50 stars randomly drawn from a population of $1 \times 10^6$
stars distributed according to a known density profile. Profiles
considered include the thick disc density law of
\cite{2001ApJ...555..393Z}, which takes the form of a
$\mathrm{sech}^2(z/h)$ law with a scale height $h=300$\,pc above the
Galactic disk, and `model B' of \cite{1996AJ....112.1472R}, which
considers a composite model of thin disk and thick disc (both with
$\mathrm{sech}^2$ density profiles and scale
heights of 350\,pc and 1.5\,kpc respectively) together with a halo density
profile (scaling as approximately r$^{-3.5}$ with radius from the
galactic centre). Variations on these models were also considered with
varied scaling between disc and halo components, and a variety of halo
power laws. In each case, number counts are predicted to increase
sharply at faint magnitudes, since the density law varies less rapidly
than the volume element at a given magnitude. The absence of such an
upturn suggests that this population is falling off more rapidly than
those expected from standard disk profiles, despite extending many
scale heights above the plane of the disk, or alternately that the
conventional components of galactic structure need to be rethought.
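A minimal sketch of the Monte Carlo described above, assuming a single absolute magnitude per star and a single line of sight: distances are drawn uniformly in volume and thinned by rejection sampling against the $\mathrm{sech}^2(z/h)$ vertical density law. Parameter values are illustrative, and the halo term and multi-component mixing used in the actual models are omitted.

```python
import numpy as np

rng = np.random.default_rng(42)

def predicted_counts(n_stars, h=300.0, abs_mag=11.0, b_deg=90.0,
                     d_max=3000.0, bins=None):
    """Monte Carlo apparent-magnitude number counts for a sech^2(z/h) disc.

    Stars of a single (illustrative) absolute magnitude are drawn uniformly
    in volume along a line of sight at galactic latitude b, then accepted
    with probability sech^2(z/h) where z = d sin(b).
    """
    if bins is None:
        bins = np.arange(16.0, 28.0, 1.0)
    d = d_max * rng.random(n_stars) ** (1.0 / 3.0)    # p(d) ~ d^2
    z = d * np.sin(np.radians(b_deg))                 # height above the plane
    keep = rng.random(n_stars) < 1.0 / np.cosh(z / h) ** 2
    m = abs_mag + 5.0 * np.log10(d[keep]) - 5.0       # apparent magnitude
    counts, _ = np.histogram(m, bins=bins)
    return counts
```

Varying $h$, the latitude, and the profile (e.g.\ adding an $r^{-3.5}$ halo term) reproduces the family of model variations considered above.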
Our results might be interpreted as suggesting that either the scale
height of the thick disk has been significantly underestimated (as
also suggested by \citet{1996AJ....112.1472R}) or that a third distant
but relatively metal-rich component is needed to explain the
population. However, we do caution that our stellar number counts in
the most distant bins are small and that analysis of larger deep
surveys is needed.
We note with interest that the presence of a large population of
massive astrophysical compact halo objects (MACHOs) that are faint in
the optical and contribute up to 20\% of the halo mass has been
inferred from microlensing analyses \citep[e.g.][and references
therein]{2007A&A...469..387T,2005ApJ...633..906B,2000ApJ...542..281A}.
These sources have been inferred to have a mass of
$\sim$0.5\,M$_\odot$, similar to the mass of an early to mid M-dwarf
star. According to the widely used `S'-model
\citep{2000ApJ...542..281A}, the MACHO objects occupy a spherical,
isothermal halo of core radius 5\,kpc. In at least one case, a lensing
star has been confirmed photometrically and spectroscopically to be an
M4-5 dwarf star, with a red colour similar to those discussed in this
analysis \citep[$V_{F555W}-I_{F814W}=3.2$
(Vega),][]{2001Natur.414..617A}. While the M dwarf population studied
here may explain only a fraction of the proposed MACHO mass
contribution to the dark halo, we probe only a small subset of the
halo M star population with the colour and flux limits applied in our
study. We note that these cool, red stars would not easily be
identified in the relatively shallow and blue monitoring imaging
employed by the MACHO \citep{2000ApJ...542..281A} and EROS
\citep{2007A&A...469..387T} collaborations.
Our survey also highlights both the utility and the difficulty of
relying on the relatively small deep fields designed for extragalactic
observations. While such surveys are extremely sensitive and benefit
from multiwavelength observations - often including coordinated
observations in the infrared and deep spectroscopic follow-up - they
also suffer from being necessarily small in size. The newer generation
of ground-based, wide-field imagers has enabled larger deep fields
to be observed, but under seeing conditions that make the effective
separation of stars and high redshift galaxies (which become
comparable in number density at faint magnitudes) impossible. The
variation of 40\% in the M star number counts between the GOODS fields
suggests that larger deep fields should be observed at {\em HST} resolution
in order to average out structure in the inner halo or outer disc.
Clearly further work in larger fields with both deep and high
resolution imaging \citep[e.g. COSMOS,][]{2006astro.ph.12305S} is
required to strengthen the observational result that the number
density of sources remains constant to faint magnitudes, which may
imply a rapid cut-off in the thick disk density profile at large
distances above the Galactic plane.
\section{Conclusions}
\label{sec:conclusions}
We have investigated the properties of a faint population of M stars
selected using a photometric drop method similar to that used to
identify high redshift galaxies. The M-dwarfs in this study are up to
five magnitudes fainter than those considered by previous authors, and
hence lie at distances up to ten times further than those investigated
before.
This population extends well above the galactic plane, with little
evidence for a decline in number at faint magnitudes/large distances.
This is difficult to reconcile with models of known disk structure.
The M-stars selected by this photometric method show similar colours
to those within a kiloparsec of the sun, both in the optical and the
infrared. Together with the evidence from line indices, this
suggests that there are significant numbers of M-stars with moderate
to high metallicities extending well above the plane of the galaxy.
Our sample of stars with $v-i' > 1.7$ should be complete for mid-M
dwarfs of near-solar metallicity, but is incomplete for low-metallicity
M dwarfs, many of which have bluer colours or are more distant.
Determining the metallicity of individual sources is difficult without
exceptionally deep spectroscopy. If one or more of our sources are at
low metallicity, they may be even cooler - and hence somewhat closer -
than we believe, although our spectroscopic results indicate that
fewer than 6\% of M stars are likely to fall into this category.
\section*{Acknowledgements}
ERS gratefully acknowledges support from the Particle Physics and
Astronomy Research Council (PPARC). The authors thank Gerry Gilmore,
Laura Douglas and Avon Huxor for useful discussions. We also thank our
referee, Mike Bessell, for comments and suggestions that have improved the paper.
Based on observations made with the NASA/ESA Hubble Space Telescope,
obtained from the Data Archive at the Space Telescope Science
Institute, which is operated by the Association of Universities for
Research in Astronomy, Inc., under NASA contract NAS 5-26555. This
work is also based in part on observations made with the Spitzer Space
Telescope, which is operated by the Jet Propulsion Laboratory,
California Institute of Technology under a contract with NASA. We
thank the GOODS team for their hard work in making these high quality
datasets public.
Results from the BDF fields are based on observations made with ESO
telescopes at the Paranal Observatory under programme IDs 69.A-0656
and 71.A-0290.
\section*{ACKNOWLEDGMENTS}
We thank Vanya Belyaev and Tomasz Skwarnicki for correspondence on LHCb
$b$-quark fragmentation measurements, and an anonymous referee for
constructive comments.
The research of M.K. was supported in part by NSFC-ISF grant No.\ 3423/19.
\section{Introduction}
Binary neutron star (BNS) and neutron star--black hole (NSBH) mergers have long been predicted to be associated with short gamma-ray bursts \citep[GRBs; e.g.,][]{Blinnikov1984}, and optical/infrared transients called kilonovae \citep[e.g.,][]{Li1998}. Along with claimed evidence for kilonovae in some short-gamma ray bursts \citep[e.g.,][]{Tanvir:2013pia,BeWo2013}, the discovery of a kilonova \citep[e.g.,][]{CoFo2017} associated with the first BNS merger detected in gravitational waves, GW170817 \citep{PhysRevLett.119.161101}, nicely confirmed these predictions. This multi-messenger source marked a watershed moment in astrophysics, with prospects to strongly constrain both the neutron star equation of state \citep[e.g.,][]{Metzger:2017wot, Radice:2017lry, Annala:2017llu, Most:2018hfd, Radice:2018ozg, Tews:2018chv, Essick:2019ldf, Capano:2019eae, DiCo2020} and the Hubble Constant \citep[e.g.,][]{Abbott:2017xzu,Fishbach:2018gjp,HoNa2018,2020ApJ...888...67D,Coughlin:2019vtv,DiCo2020}, amongst many other science cases \citep{Baker:2017hug,Ezquiaga:2017ekz}.
Dynamical ejecta \citep[e.g.,][]{Hotokezaka:2012ze,BaGo2013,Dietrich:2016fpt}, which arise from tidal stripping of the neutron star(s) and the neutron stars contact interface, and post-merger ejecta \citep[e.g.,][]{MePi2008,Fernandez:2014cna,Siegel:2017jug,Fernandez:2018kax}, which arise from accretion disk winds surrounding the remnant object, are characterized by low electron fractions. This scenario favors the production of heavy elements such as lanthanides and actinides via rapid neutron capture (known as the $r$-process), and the decay of these unstable nuclei powers the optical/infrared kilonova \citep[e.g.,][]{LaSc1974,Kasen:2013xka,Barnes:2013wka,Barnes:2016umi,Kasen:2017sxr}.
Questions about the sources of heavy element production in the Universe and diversity in the ejecta of the kilonova population can only be answered by the detection and characterization of a large sample of sources. Unveiling such a population is difficult because kilonovae are rare ($<$ 1\,\% of the core collapse supernova rate), fast (fading $\gtrsim$\,0.5\,mag per day in the optical), and faint transients ($M \gtrsim -16$ at peak), and hence are particularly hard to discover.
Signatures of kilonovae are mostly found during the follow-up of short GRBs \citep[e.g.,][]{Perley2009} and the follow-up of LIGO/Virgo candidates, although only for GW170817 has a counterpart been identified so far.
Rates of BNS mergers are still highly uncertain, with $R=80-810$\,Gpc$^{-3}$\,yr$^{-1}$ based on GW observations \citep{LIGO_GWTC2_pop_2020arXiv}; empirical limits on kilonovae rates by optical surveys are nearing the upper end of the gravitational-wave measurements \citep{AnKo2020, AnCo2021arXiv}.
The advent of Vera C. Rubin Observatory \citep{2019ApJ...873..111I} presents us with a great opportunity to identify a population of kilonovae independent of any gravitational-wave or gamma-ray burst trigger, thanks to the unprecedented volume that the Legacy Survey of Space and Time (LSST) will be able to probe \citep[see e.g.,][]{Andreoni2019PASP, 2019MNRAS.485.4260S}.
Unfortunately, due to their fast fading and intrinsically underluminous properties, ``detection'' is not enough; it is imperative that kilonova candidates found by Rubin Observatory are recognized as such in real time so that follow-up observations can confirm their nature.
Projects exist that are dedicated to fast transient discovery in current wide-field surveys such as the Zwicky Transient Facility \citep[ZTF;][]{Bellm2019PASP, Graham2019PASP}. The ``ZTF Realtime Search and Triggering'' \citep[``\texttt{ZTFReST}''\footnote{\url{github.com/growth-astro/ztfrest}},][]{AnCo2021arXiv} project, for example, employs i) alert queries, ii) forced point-spread-function photometry, and iii) nightly light curve stacking in flux space to discover fast-evolving transients such as kilonovae.
\texttt{ZTFReST} is proving to be very effective at identifying extragalactic fast transients, having already revealed seven serendipitously-discovered GRB afterglows and at least two supernova shock breakouts in 2020 and in the first three months of 2021 \citep{AnCo2021arXiv}.
In this work, we used the most recent \texttt{OpSim} simulations and a set of new metrics to assess the effectiveness of cadence options for un-triggered, or ``serendipitous,'' kilonova discovery. We employed metrics that both assess Rubin Observatory's ability to simply detect the transients, as well as metrics that are designed to identify a transient as ``fast'' based on its flux evolution. We argue that the latter is the most appropriate metric for potentially maximizing the science output from these rare objects, and we provide suggested cadences based on this metric.
\section{Methods}
\label{sec:methods}
We used the new \texttt{kneMetrics}\footnote{\url{https://github.com/LSST-nonproject/sims_maf_contrib/blob/master/mafContrib/kneMetrics.py}} to recover synthetic kilonova light curves injected in \texttt{OpSim} simulations.
The synthetic light curves are taken from \cite{DiCo2020}; they were computed with the radiative transfer code \texttt{POSSIS} \citep{Bulla:2019muo} and vary four parameters over physically viable priors: the dynamical ejecta mass $M_{\rm dyn}$, the disk wind ejecta mass $M_{\rm wind}$,
the half opening angle of the lanthanide-rich dynamical-ejecta component $\phi$, and the viewing angle $\iota$ \citep[see][for more details about the adopted geometry]{DiCo2020}.
Examples of synthetic light curves injected in the Rubin baseline cadence can be found in Fig.\,\ref{fig:lc_baseline}.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.49\textwidth]{plots/plot_wide_lc_baseline_5137_158Mpc.pdf}
\includegraphics[width=0.49\textwidth]{plots/plot_wide_lc_baseline_1686_219Mpc.pdf}
\includegraphics[width=0.49\textwidth]{plots/plot_wide_lc_baseline_4537_372Mpc.pdf}
\includegraphics[width=0.49\textwidth]{plots/plot_wide_lc_baseline_4888_414Mpc.pdf}
\includegraphics[width=0.49\textwidth]{plots/plot_wide_lc_baseline_2054_725Mpc.pdf}
\includegraphics[width=0.49\textwidth]{plots/plot_wide_lc_baseline_4684_929Mpc.pdf}
\caption{Examples of GW170817-like kilonova light curves injected in the baseline cadence (v1.7, individual exposures). Observations from 30 days before peak to 30 days after peak are presented. The light curves were uniformly distributed in volume and uniformly distributed in time throughout the 10-year survey. Circles indicate the detections, solid lines show the simulated light curves in bands where at least one detection is present, and triangles indicate $5\sigma$ upper limits. The luminosity distance at which each light curve is placed is also indicated.
}
\label{fig:lc_baseline}
\end{center}
\end{figure*}
\subsection{Metrics}
\label{subsec:metrics}
To assess kilonova detectability in different cadence simulations, we employed a number of metrics. We improved the existing \texttt{TDEsPopMetric}, designed to inject and recover diverse populations of tidal disruption event light curves, by adding the possibility to inject synthetic transients distributed uniformly in volume (with source numbers increasing as the cube of the distance), rather than placed at a fixed distance. Light curves at larger distances share the same properties as those at lower distances, but their apparent brightness is fainter, making them detectable for shorter times and only when the images' magnitude limits are particularly deep.
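The uniform-in-volume injection described above amounts to drawing distances from $p(d)\propto d^2$ and dimming each model light curve by the corresponding distance modulus. A minimal sketch (not the \texttt{kneMetrics} code), neglecting cosmological corrections:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_distances_uniform_volume(n, d_max):
    """Distances with p(d) proportional to d^2 (uniform in Euclidean volume),
    obtained by inverting the cumulative distribution (d/d_max)^3."""
    return d_max * rng.random(n) ** (1.0 / 3.0)

def dim_light_curve(abs_mags, distance_mpc):
    """Shift a model light curve from absolute to apparent magnitudes."""
    mu = 5.0 * np.log10(distance_mpc * 1.0e6) - 5.0   # distance modulus
    return np.asarray(abs_mags) + mu
```

With this sampling, half of the maximum distance encloses only $1/8$ of the injected sources, so most injections land near the survey's sensitivity limit.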
The metrics most relevant to this work, all included as functions in the \texttt{kneMetrics} code, are:
\begin{itemize}
\item \texttt{multi\_detect}: $\geq2$ detections $>5\sigma$
\item \texttt{ztfrest\_simple}: metric that reproduces a discovery algorithm similar\footnote{In \texttt{ZTFReST}, linear fitting is performed, while the \texttt{ztfrest\_simple} metric relies on a simpler estimate of the rising or fading rates, based on the time and magnitude differences between the brightest and the faintest detected ($>5\sigma$) points in the light curves.} to \texttt{ZTFReST}, in which sources found to be rising faster than 1\,mag\,day$^{-1}$ or fading faster than 0.3\,mag\,day$^{-1}$ are selected
\item \texttt{ztfrest\_simple\_red}: same as \texttt{ztfrest\_simple}, but applied only to $izy$ bands
\item \texttt{ztfrest\_simple\_blue}: same as \texttt{ztfrest\_simple}, but applied only to $ugr$ bands
\end{itemize}
The metrics were deliberately designed to range from standard transient detection (with $\geq 2$ detections)
which typically provides only spatial information on the celestial coordinates of a source, to methods more likely to lead to source characterization -- in other words, kilonova discovery. Simple detection can be crucial during gravitational-wave follow-up, but it is of little use during fast-transient searches in the regular survey, especially for transients at large distances. Importantly, the conclusions of this study can be applied to a range of fast transients including, for example, GRB afterglows and fast blue optical transients (FBOTs), for which light curve sampling with spacing between one hour and one day is crucial.
There are a variety of methods in the literature to promptly identify fast-transient candidates.
For example, methods are being developed for early transient classification via machine learning techniques \citep[e.g.,][]{Muthukrishna:2019wgc}, or as part of the Photometric LSST Astronomical Time Series Classification Challenge \citep[PLAsTiCC;][]{Kessler2019PASP}. Prior work on detecting and identifying fast transients in Rubin LSST \texttt{Opsims} include \cite{Bianco2019}.
A simple but effective strategy to identify transients with rapidly fading or rising light curves can be based on magnitude rise and decay rate measurements. In this work, we consider significant fading rates to be those faster than $0.3$\,mag\,day$^{-1}$, which is the threshold used in real time by the \texttt{ZTFReST} pipeline and is expected to be particularly suitable for the discovery of kilonovae from BNS mergers (Fig.\,\ref{fig:threshold}), or rising rates faster than 1\,mag\,day$^{-1}$, which can separate rapidly evolving transients from most supernovae. Within \texttt{ZTFReST}, this has greatly helped to separate fast transient candidates from slower, ``contaminant'' sources, with $\sim 30\%$ purity in ``archival'' data searches when considering only fade rates and thresholds tailored for each band.
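The rate-based selection described above can be sketched in a few lines (a simplified, hypothetical stand-in for the actual \texttt{ztfrest\_simple} implementation; the function names and the single-rate logic are ours):

```python
def rate_mag_per_day(times, mags):
    """Evolution rate from the brightest and faintest detected points,
    as in the simplified ztfrest_simple-style estimate: magnitude
    difference divided by time difference (positive = fading over time,
    negative = rising)."""
    bright_i = min(range(len(mags)), key=lambda i: mags[i])  # smallest mag = brightest
    faint_i = max(range(len(mags)), key=lambda i: mags[i])
    dt = times[faint_i] - times[bright_i]
    if dt == 0:
        return 0.0
    return (mags[faint_i] - mags[bright_i]) / dt

def is_fast_transient(times, mags, fade_thresh=0.3, rise_thresh=1.0):
    """Select sources fading faster than 0.3 mag/day or rising faster
    than 1 mag/day, the thresholds discussed in the text."""
    rate = rate_mag_per_day(times, mags)
    return rate > fade_thresh or -rate > rise_thresh
```

For example, two detections at 19.0 and 20.0\,mag separated by two days correspond to a fade rate of 0.5\,mag\,day$^{-1}$ and would pass the selection.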
\begin{figure*}[t]
\begin{center}
\includegraphics[width=\textwidth]{plots/evolution_lsst_ti=peak_dt=6.pdf}
\caption{
A distribution of fade rates was calculated using the kilonova model grid obtained by \cite{Bulla2019}, tailored for BNS (upper panels) and NSBH mergers (lower panels). Averaged decay rates from peak to six days later are shown in Rubin $ugrizy$ bands from left to right.
Dashed and dot-dashed vertical lines indicate the lower 99\% and 95\% decay rate for each distribution. Blue vertical lines indicate the decay rates for the GW170817 kilonova in each band. A fading-rate threshold of 0.3\,mag\,day$^{-1}$ can enable the identification of kilonovae from BNS mergers ($>99\%$ of the distribution) in all filters.}
\label{fig:threshold}
\end{center}
\end{figure*}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.49\textwidth]{plots/plot_families_v17_KNePopMetric__ztfrest_simple.pdf}
\includegraphics[width=0.49\textwidth]{plots/plot_families_v17_KNePopMetric__multi_detect.pdf}
\includegraphics[width=0.49\textwidth]{plots/plot_families_v15_KNePopMetric__ztfrest_simple.pdf}
\includegraphics[width=0.49\textwidth]{plots/plot_families_v15_KNePopMetric__multi_detect.pdf}
\caption{For each cadence family, we report the ratio between the recovery fraction of the individual cadences and the maximum recovery fraction from the best baseline cadence (baseline\_nexp1). Blue dots indicate the mean of the cadences' recovery fractions, red triangles their maximum, and black triangles their minimum. The two-snaps baseline cadence (baseline\_nexp2) performs systematically worse than the single-exposure baseline cadence. Simulations part of FBS v1.7 (top) and of FBS v1.5 (bottom) are considered separately.}
\label{fig:metric_results}
\end{center}
\end{figure*}
\subsection{Kilonova models}
\label{subsec:models}
In this work, we considered kilonova models from the grid generated with the three-dimensional radiation transfer simulation code \texttt{POSSIS} \citep{Bulla2019}. The model grid allowed us to explore a diversity of intrinsic properties, such as ejecta masses, as well as different viewing angles to the system.
First, we injected synthetic light curves using a single model: the GW170817-like kilonova (dynamical ejecta mass $M_{\text{dyn}} = 0.005 M_{\odot}$, disk wind mass $M_{\text{wind}} = 0.050 M_{\odot}$, and viewing angle $\iota = 25.8{\degree}$). A half-opening angle $\phi = 30\degree$ for the lanthanide-rich region is used for this model and all the other models considered in this work. Second, we injected a population of kilonovae with the same ejecta masses as the GW170817-like model but viewed from eleven viewing angles, uniformly distributed in $\cos(\iota)$. Third, we explored kilonova detectability in an optimistic and a pessimistic scenario, in which the kilonova properties make it particularly bright or dim, respectively. Ejecta masses were chosen to be physically realistic as determined by numerical relativity simulations, with $M_{\text{dyn}} = 0.020 M_{\odot}$, $M_{\text{wind}} = 0.090 M_{\odot}$ for the optimistic scenario and $M_{\text{dyn}} = 0.005 M_{\odot}$, $M_{\text{wind}} = 0.010 M_{\odot}$ for the pessimistic scenario.
\section{Results}
\label{sec:results}
\subsection{GW170817-like kilonova light curves}
Cadences were made available in several releases and were grouped into ``families'', each implementing ideas that deviate from the baseline cadence and encompassing parameter variations, for example in the area of the footprint. Detailed information about the simulations can be found in official Rubin Observatory online resources\footnote{for example \url{https://github.com/lsst-pst/survey_strategy}}.
Fig.\,\ref{fig:metric_results} shows results obtained by injecting $5 \times 10^5$ synthetic GW170817-like kilonova light curves, uniformly distributed in volume out to a luminosity distance of 1.5\,Gpc, in \texttt{OpSim} simulated cadences part of the v1.5\footnote{\url{https://github.com/lsst-sims/sims_featureScheduler_runs1.5}} (bottom panels) and v1.7\footnote{\url{https://github.com/lsst-sims/sims_featureScheduler_runs1.7}} and v1.7.1\footnote{\url{https://community.lsst.org/t/survey-simulations-v1-7-1-release-april-2021/4865}} releases (upper panels). The results displayed in Fig.\,\ref{fig:metric_results} were obtained using the \texttt{multi\_detect} and the \texttt{ztfrest\_simple} metrics described in \S\ref{subsec:metrics}.
In all simulations, the best baseline cadence entails individual 30\,s exposures (baseline\_nexp1).
The baseline cadence with $2\times 15$\,s snaps (baseline\_nexp2) performs consistently worse.
The numbers of recovered kilonovae in cadence families simulated as part of the v1.5 cadence release (bottom panels) are relatively similar, with results within 15\% of the best baseline cadence. When looking at v1.7 cadences, it is evident that the best baseline performs distinctly better than any other cadence in terms of kilonova detection (\texttt{multi\_detect} metric; top-right panel of Fig.\,\ref{fig:metric_results}). The baseline cadence also outperforms most cadence families according to the \texttt{ztfrest\_simple} metric. We found that rolling cadences, in which a smaller fraction of the footprint is observed in each season at higher cadence, perform significantly ($\sim 50-60\%$) worse than the baseline cadence as coded for the v1.7 release\footnote{Significant changes to the \texttt{Opsim} approach to simulating rolling cadence strategies were implemented for v1.7, such that these simulations should be considered more reliable (see Bianco et al.; front paper of this Focus Issue).}. However, rolling cadences part of the v1.7.1 release, indicated as ``new\_rolling'' in the figure, perform up to $\sim 20\%$ better than the baseline cadence (uncertainties in the figure are on the order of 5--10\%).
In order to compare baseline and rolling cadences with a higher statistical significance, we ran simulations in which the number of injected sources was increased to $10^6$, using a variety of surrogate models. A summary of the results can be found in Tab.\,\ref{tab:bas_vs_roll}.
When we injected GW170817-like kilonovae, the baseline cadence performed better than the best rolling cadence\footnote{six\_stripe\_scale0.90\_nslice6\_fpw0.9\_nrw0.0v1.7\_10yrs} at any distance with the \texttt{multi\_detect} metric (Fig.\,\ref{fig:distribution}, central panel), but the rolling cadence outperforms the baseline cadence beyond $\sim 400$\,Mpc with the \texttt{ztfrest\_simple} metric. This means that a rolling cadence could enable the identification of a few more kilonova candidates than the baseline cadence at large distances. In total, 32--334 kilonovae can be expected to be detectable with the baseline cadence and 23--238 with the rolling cadence, assuming that all kilonovae are similar to the observed GW170817. However, the number of kilonovae recognizable as fast transients (\texttt{ztfrest\_simple} metric) would be 3--29 and 3--32 for the baseline and rolling cadences, respectively.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\columnwidth]{plots/distrib_distance_all_notnorm_500k_sources.pdf}
\includegraphics[width=\columnwidth]{plots/efficiency_dist.pdf}
\includegraphics[width=\columnwidth]{plots/efficiency_dist_color.pdf}
\caption{{\it Top:} Distribution of recovered kilonovae using a simple multi-detection metric (cyan), a \texttt{ZTFReST}-like metric (orange), and a \texttt{ZTFReST}-like metric applied only to red ($izy$) bands (red lines). One million sources were injected uniformly distributed in volume between 10\,Mpc and 1.5\,Gpc. {\it Center:} Efficiency as a function of luminosity distance; $5\times 10^5$ sources were injected at regular intervals. For the \texttt{ztfrest\_simple} metric, due to the rapidly growing rate at the edge of the sensitive volume, small differences between the detection efficiencies of the rolling and the baseline cadences beyond $\sim 400$\,Mpc are enough to yield an improvement of $\sim 20\%$ in kilonova detection.
{\it Bottom:} Ratio between the detection efficiency in redder $izy$ bands and in bluer $ugr$ bands for multi-detection and \texttt{ZTFReST}-like metrics. Employing redder filters presents clear advantages at lower distances, where spectroscopic and multi-wavelength follow-up observations are possible.
}
\label{fig:distribution}
\end{center}
\end{figure}
Nearby kilonovae can be recognized sooner, associated with host galaxies at known redshifts, and characterized better with follow-up observations than distant (fainter) sources. To better explore the detectability of nearby kilonovae, we injected $10^6$ GW170817-like kilonovae uniformly distributed in volume out to 300\,Mpc, which is within the distance range where the best Wide Fast Deep (WFD) baseline cadence performs better than any of the rolling cadences simulated so far. With the best baseline cadence, we can expect up to 101 kilonovae to be detectable at least twice in this distance range, and up to 31 could be recognized as fast-fading in at least one band. In the simulations, about $68\%$ of kilonovae were found to be fast-fading in red $izy$ bands (\texttt{ztfrest\_simple\_red} metric) and 44\% in blue $ugr$ bands (\texttt{ztfrest\_simple\_blue} metric). Only 37\% of kilonovae found in $izy$ bands were detected at least 4 times in $ugr$ bands (Fig.\,\ref{fig:distribution}, bottom panel). The combination of transient detection, color information, and possible association with a catalogued nearby galaxy (which enables an estimate of the transient's absolute magnitude, expected to be fainter than $M \sim -16$ for most kilonovae) can lead to the identification of solid kilonova candidates. Events that are relatively nearby (below 300\,Mpc) can be classified spectroscopically with $\gtrsim$\,8-m class telescopes such as the Very Large Telescope (VLT), Gemini, and Keck, or with the upcoming SoXS at the ESO New Technology Telescope (NTT), which was designed specifically for LSST transient classification \citep{2016SPIE.9908E..41S}. In summary, our analysis suggests that employing more observations in redder bands is preferred to maximize scientific return.
\subsection{Exploring the kilonova light curve parameter space}
Multi-messenger observations of GW170817 constrained the viewing angle to be $\iota = 32_{-13}^{+10}$\,deg \citep{Finstad2018}, see also \citet{2020ApJ...888...67D}. Superluminal motion from radio observations suggests a lower value for the viewing angle of about 15--20\,deg \citep{Mooley2018Nat, Ghirlanda2019Sci}.
However, merging BNS systems can be oriented in any direction with respect to the observer. We therefore also compared the performance of baseline and rolling cadences by injecting synthetic light curves, from the grid of kilonova models simulated with \texttt{POSSIS}, with the same intrinsic parameters as the GW170817-like model (\S\ref{subsec:models}) but at different viewing angles. According to our simulations, up to 15 (17) kilonovae should be identified as fast transients in the baseline (rolling) cadence, while up to 176 (127) kilonovae should be detectable at least twice.
Finally, we assessed kilonova detectability for ``pessimistic'' and ``optimistic'' kilonova models, in which the ejecta masses make the optical emission particularly faint or bright (see \S\ref{subsec:models}). For the pessimistic case, only a handful of kilonovae are expected to be present in Rubin images, with at most 5 kilonovae expected to be recognizable as fast transients. On the other hand, the optimistic scenario could result in $>50$ kilonovae found to evolve rapidly with the baseline cadence and $>60$ with the currently best rolling cadence. A better understanding of the kilonova luminosity function is required to set more precise serendipitous kilonova discovery expectations.
\begin{table*}[h]
\centering
\caption{Kilonova recovery efficiencies ($\epsilon$), calculated with a number of metrics, for the best baseline cadence
and the best rolling cadence.
The efficiency was then converted into the number of expected kilonovae using the BNS merger rate $R=320_{-240}^{+490}$\,Gpc$^{-3}$\,yr$^{-1}$ from the GWTC-2 catalog \citep{LIGO_GWTC2_pop_2020arXiv}, where N$_{\text{KN}}$ corresponds to the median rate and N$_{\text{KN,min}}$, N$_{\text{KN,max}}$ correspond to the 90\% symmetric credible intervals, taking the uncertainty in $\epsilon$ into account. A duration of 10\,yr for the WFD survey was assumed. }
\label{tab:bas_vs_roll}
\begin{tabular}{llcccccccc}
\hline\hline
Kilonova & Metric & $\epsilon_{\text{baseline}}$ & $\epsilon_{\text{rolling}}$ & N$_{\text{KN}}$ & N$_{\text{KN}}$ & N$_{\text{KN,min}}$ & N$_{\text{KN,min}}$ & N$_{\text{KN,max}}$ & N$_{\text{KN,max}}$ \\
model & & $\times 10^4$ & $\times 10^4$ & baseline & rolling & baseline & rolling & baseline & rolling\\
\hline
GW170817 & multi\_detect & $60.5 \pm 0.8$ & $43.1 \pm 0.7$ & 130 & 93 & 32 & 23 & 334 & 238 \\
$M_{\text{dyn}} = 0.005 M_{\odot}$ & blue\_color\_detect & $6.8 \pm 0.3$ & $6.4 \pm 0.2$ & 15 & 14 & 4 & 3 & 38 & 36 \\
$M_{\text{wind}} = 0.050 M_{\odot}$ & red\_color\_detect & $2.9 \pm 0.2$ & $3.1 \pm 0.2$ & 6 & 7 & 1 & 2 & 17 & 18 \\
& ztfrest\_simple & $5.2 \pm 0.2$ & $5.6 \pm 0.2$ & 11 & 12 & 3 & 3 & 29 & 32 \\
& ztfrest\_simple\_blue & $3.8 \pm 0.2$ & $4.4 \pm 0.2$ & 8 & 9 & 2 & 2 & 22 & 25 \\
& ztfrest\_simple\_red & $1.7 \pm 0.1$ & $2.1 \pm 0.1$ & 4 & 4 & 1 & 1 & 10 & 12 \\
\hline
GW170817 & multi\_detect & $1306.3 \pm 3.6$ & $931.2 \pm 3.0$ & 40 & 28 & 10 & 7 & 101 & 72 \\
$<300$\,Mpc & blue\_color\_detect & $141.0 \pm 1.2$ & $143.8 \pm 1.2$ & 4 & 4 & 1 & 1 & 11 & 11 \\
& red\_color\_detect & $369.5 \pm 1.9$ & $312.0 \pm 1.8$ & 11 & 10 & 3 & 2 & 29 & 24 \\
& ztfrest\_simple & $400.3 \pm 2.0$ & $334.1 \pm 1.8$ & 12 & 10 & 3 & 3 & 31 & 26 \\
& ztfrest\_simple\_blue & $176.7 \pm 1.3$ & $173.0 \pm 1.3$ & 5 & 5 & 1 & 1 & 14 & 13 \\
& ztfrest\_simple\_red & $272.9 \pm 1.6$ & $244.5 \pm 1.6$ & 8 & 7 & 2 & 2 & 21 & 19 \\
\hline
GW170817 & multi\_detect & $31.8 \pm 0.6$ & $22.7 \pm 0.5$ & 68 & 49 & 17 & 12 & 176 & 127 \\
viewing & blue\_color\_detect & $3.4 \pm 0.2$ & $3.4 \pm 0.2$ & 7 & 7 & 2 & 2 & 20 & 19 \\
angles & red\_color\_detect & $1.4 \pm 0.1$ & $1.7 \pm 0.1$ & 3 & 4 & 1 & 1 & 8 & 10 \\
& ztfrest\_simple & $2.6 \pm 0.2$ & $2.9 \pm 0.2$ & 6 & 6 & 1 & 1 & 15 & 17 \\
& ztfrest\_simple\_blue & $1.8 \pm 0.1$ & $2.3 \pm 0.1$ & 4 & 5 & 1 & 1 & 10 & 13 \\
& ztfrest\_simple\_red & $1.0 \pm 0.1$ & $1.1 \pm 0.1$ & 2 & 2 & 0 & 1 & 6 & 7 \\
\hline
Pessimistic & multi\_detect & $8.9 \pm 0.3$ & $6.5 \pm 0.2$ & 19 & 14 & 5 & 3 & 50 & 37 \\
$M_{\text{dyn}} = 0.005 M_{\odot}$ & blue\_color\_detect & $0.8 \pm 0.1$ & $0.9 \pm 0.1$ & 2 & 2 & 0 & 0 & 5 & 5 \\
$M_{\text{wind}} = 0.010 M_{\odot}$ & red\_color\_detect & $0.3 \pm 0.1$ & $0.4 \pm 0.1$ & 1 & 1 & 0 & 0 & 2 & 3 \\
& ztfrest\_simple & $0.5 \pm 0.1$ & $0.6 \pm 0.1$ & 1 & 1 & 0 & 0 & 3 & 4 \\
& ztfrest\_simple\_blue & $0.4 \pm 0.1$ & $0.4 \pm 0.1$ & 1 & 1 & 0 & 0 & 2 & 2 \\
& ztfrest\_simple\_red & $0.2 \pm 0.1$ & $0.3 \pm 0.1$ & 0 & 1 & 0 & 0 & 1 & 2 \\
\hline
Optimistic & multi\_detect & $116.6 \pm 1.1$ & $87.0 \pm 0.9$ & 251 & 187 & 62 & 46 & 641 & 479 \\
$M_{\text{dyn}} = 0.020 M_{\odot}$ & blue\_color\_detect & $11.9 \pm 0.3$ & $12.9 \pm 0.4$ & 26 & 28 & 6 & 7 & 67 & 72 \\
$M_{\text{wind}} = 0.090 M_{\odot}$ & red\_color\_detect & $5.2 \pm 0.2$ & $5.9 \pm 0.2$ & 11 & 13 & 3 & 3 & 29 & 34 \\
& ztfrest\_simple & $9.2 \pm 0.3$ & $10.8 \pm 0.3$ & 20 & 23 & 5 & 6 & 52 & 61 \\
& ztfrest\_simple\_blue & $6.7 \pm 0.3$ & $8.3 \pm 0.3$ & 14 & 18 & 3 & 4 & 38 & 47 \\
& ztfrest\_simple\_red & $3.2 \pm 0.2$ & $4.2 \pm 0.2$ & 7 & 9 & 2 & 2 & 18 & 24 \\
\hline
\end{tabular}
\end{table*}
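The conversion from recovery efficiency $\epsilon$ to an expected number of kilonovae can be approximated as in the following sketch, which assumes a Euclidean volume and ignores cosmological corrections and other survey-specific factors, so it will not reproduce the tabulated numbers exactly:

```python
import math

def expected_kilonovae(efficiency, rate_gpc3_yr, d_max_gpc, years=10.0):
    """Expected number of detected kilonovae: recovery efficiency times
    merger rate times (Euclidean) surveyed volume times survey duration."""
    volume_gpc3 = (4.0 / 3.0) * math.pi * d_max_gpc ** 3
    return efficiency * rate_gpc3_yr * volume_gpc3 * years

# Median BNS rate of 320 Gpc^-3 yr^-1, sources out to 1.5 Gpc, and an
# illustrative efficiency of 60.5e-4 (GW170817 model, multi_detect).
n_median = expected_kilonovae(60.5e-4, 320.0, 1.5)
```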
\section{Conclusion}
\label{sec:conclusion}
Rubin Observatory has great potential to reveal a population of kilonovae during the WFD survey, in addition to discoveries made following up GW triggers. We injected synthetic kilonova light curves into simulated Rubin observations to assess which of the available cadences can maximize serendipitous kilonova discovery. We demonstrated that, for the WFD survey, the simulated baseline cadence with single 30\,s exposures should be greatly preferred over $2 \times 15$\,s consecutive snaps for kilonova discovery.
Rolling cadences are expected to be particularly suitable for fast transient discovery \citep[e.g.,][]{Andreoni2019PASP}. We found that the development of rolling cadences has improved significantly from \texttt{OpSim} version v1.7 to v1.7.1. While this indicates progress, the baseline plan may still be preferred over any other cadence family currently available because of its higher efficiency at detecting nearby (and therefore brighter) fast transients, which are easier to follow up and classify with other telescopes. We recommend simulating new rolling cadences that further optimize the algorithms used in v1.7.1, possibly maximizing the exposure time in each band (barring $u$-band) rather than using snap pairs.
We found strong evidence that red $izy$ bands are preferred for kilonova discovery at distances below 300\,Mpc, in agreement with the results of other studies such as, for example, \cite{Almualla2021} and \cite{Sagues2021}. This is expected because kilonovae appear as red and rapidly reddening transients due to heavy $r$-process elements synthesized in neutron-rich ejecta. At redder wavelengths, kilonova light curves can be brighter and longer-lived, especially if the system is viewed from equatorial viewing angles \citep[e.g.,][]{Bulla2019}. Very rapid ``blue'' kilonovae could be found at larger distances (Fig.\,\ref{fig:distribution}) due to the greater sensitivity of the $g$ and $r$ filters; however, these kilonovae might be rarer and more difficult to classify spectroscopically.
Therefore, we recommend that the number of $izy$ observations is increased in the WFD cadence plan. Such red-band observations would be particularly effective, scientifically, if coupled with at least one observation in $g$- (preferred) or $r$-band on the same night, so that kilonovae can be separated photometrically from other transients and their temperature evolution can be measured. A recommendation for same-night multi-band photometry in LSST has already been put forward for example by \cite{Andreoni2019PASP} and \cite{Bianco2019}. In particular, \cite{Bianco2019} address the advantages of acquiring sets of three exposures per field in the same night, in two filters and appropriately spaced in time, towards rapid identification of rare fast transients.
The major uncertainty in the results of this work stems from our limited understanding of the BNS merger rate and the kilonova luminosity function. Systematic kilonova searches during gravitational-wave follow-up \citep[e.g.,][]{Kasliwal2020}, short GRB follow-up \citep[e.g.,][]{Gompertz2018, Rossi2020}, and un-triggered wide-field surveys \citep[e.g.,][]{Doctor2017, AnKo2020, AnCo2021arXiv, McBrien2021MNRAS} are expected to improve those measurements significantly before Rubin Observatory's first light.
\section*{Acknowledgments}
We thank Peter Yoachim and Lynne Jones. This paper was created in the nursery of the Rubin LSST Transient and Variable Star Science Collaboration \footnote{\url{https://lsst-tvssc.github.io/}}. The authors acknowledge the support of the Vera C. Rubin Legacy Survey of Space and Time Transient and Variable Stars Science Collaboration that provided opportunities for collaboration and exchange of ideas and knowledge and of Rubin Observatory in the creation and implementation of this work.
The authors acknowledge the support of the LSST Corporation, which enabled the organization of many workshops and hackathons throughout the cadence optimization process by directing private funding to these activities.
\noindent M.~W.~C acknowledges support from the National Science Foundation with grant number PHY-2010970.
M. B. acknowledges support from the Swedish Research Council (Reg. no. 2020-03330).
A.G, A.S.C and E. C. K. acknowledge support from the G.R.E.A.T. research environment funded by {\em Vetenskapsr\aa det}, the Swedish Research Council, under project number 2016-06012, and support from The Wenner-Gren Foundations.
\noindent This research uses services or data provided by the Astro Data Lab at NSF's National Optical-Infrared Astronomy Research Laboratory. NOIRLab is operated by the Association of Universities for Research in Astronomy (AURA), Inc. under a cooperative agreement with the National Science Foundation.
\software{LSST metrics analysis framework \citep[MAF;][]{Jones2014SPIE}; Astropy \citep{2013A&A...558A..33A}; JupyterHub\footnote{\url{https://jupyterhub.readthedocs.io/en/stable/index.html}}}
\bibliographystyle{aasjournal}
\section{Introduction}
A significant amount of recent research work has addressed the problem of solving various data management problems in the cloud. The major algorithmic challenges in map-reduce computations involve balancing a multitude of factors such as the number of machines available for mappers/reducers, their memory requirements, and {\em communication cost} (total amount of data sent from mappers to reducers). Most past work provides custom solutions to specific problems, e.g., performing fuzzy joins in map-reduce~\cite{ADMPU12, VCL10}, clustering~\cite{TTLKF11}, graph analyses~\cite{AFU12, SV11, Schank07}, and so on. While some problems are amenable to very efficient map-reduce algorithms, some other problems do not lend themselves to a natural distribution, and have provable lower bounds. Clearly, the ease of ``map-reducability'' is closely related to whether the problem can be partitioned into independent pieces, which are distributed across mappers/reducers. What makes a problem distributable? Can we characterize general properties of problems that determine how easy or hard it is to find efficient map-reduce algorithms?
This is a vision paper that attempts to answer the questions described above. We define and study {\em replication rate}. Informally, the replication rate of any map-reduce algorithm gives the average number of reducers each input is sent to. There are many ways to implement nontrivial problems in a round of map-reduce; the more parallelism you want, the more overhead you face due to having to replicate inputs to many reducers. In this paper:
\begin{itemize}
\item We offer a simple model of how inputs and outputs are related, enabling us to study the replication rate of problems. We show how our model can capture a varied set of problems. (Section~\ref{model-sect})
\item We study two interesting problems---{\em Hamming Distance-1} (Section~\ref{hd1-sect}) and {\em triangle finding} (Section~\ref{other-results})---and show in each case
there is a lower bound on the replication rate that grows as the number of inputs
per reducer shrinks (and therefore as the parallelism grows).
Moreover, we present methods of mapping inputs to reducers that meet
these lower bounds for various values of inputs/reducer.
\end{itemize}
\noindent It is our long-term goal to understand how the structure of a problem, as reflected by the input-output relationship in our model, affects the degree of parallelism/replication tradeoff.
\section{The Model}
\label{model-sect}
The model looks simple~-- perhaps too simple. But with it we can discover some quite interesting and realistic insights into the range of possible map-reduce algorithms for a problem. For our purposes, a {\em problem} consists of:
\begin{enumerate}
\item
Sets of {\em inputs} and {\em outputs}.
\item
A {\em mapping} from outputs to sets of inputs. The intent is that each output depends on only the set of inputs it is mapped to.
\end{enumerate}
\noindent Note that our model essentially captures the notion of {\em provenance}~\cite{provenance}. In our context, there are two nonobvious points about this model:
\begin{itemize}
\item
Inputs and outputs are hypothetical, in the sense that they are all the possible inputs or outputs that might be present in an instance of the problem. Any {\em instance} of the problem will have a subset of the inputs. We assume that an output is never made unless at least one of its inputs is present, and in many problems, we only want to make the output if {\em all} of its associated inputs are present.
\item
We need to limit ourselves to finite sets of inputs and outputs. Thus, a finite domain or domains from which inputs and outputs are constructed is often an integral part of the problem statement, and a ``problem'' is really a family of problems, one for each choice of finite domain(s).
\end{itemize}
We hope a few examples will make these ideas clear.
\subsection{Examples of Problems}
\label{prob-ex-subsect}
\begin{example}
\label{join-ex}
Consider the natural join of relations $R(A,B)$ and $S(B,C)$. The inputs are tuples in $R$ or $S$, and the outputs are tuples with schema $(A,B,C)$. To make this problem finite, we need to assume finite domains for attributes $A$, $B$, and $C$; say there are $N_A$, $N_B$, and $N_C$ members of these domains, respectively.
Then there are $N_AN_BN_C$ outputs, each corresponding to a triple $(a,b,c)$. This output is mapped to a set of two inputs: the tuple $R(a,b)$ from relation $R$ and the tuple $S(b,c)$ from relation $S$. The number of inputs is $N_AN_B + N_BN_C$.
Notice that in an instance of the join problem, not all the inputs will be present. That is, the relations $R$ and $S$ will be subsets of all the possible tuples, and the output will be those triples $(a,b,c)$ such that both $R(a,b)$ and $S(b,c)$ are actually present in the input instance.
\end{example}
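The input/output mapping of Example~\ref{join-ex} can be spelled out explicitly for small domains (a sketch of the model itself, not of a join algorithm; the tagging of tuples by relation name is our own convention):

```python
from itertools import product

def join_problem(dom_a, dom_b, dom_c):
    """Model of the natural join of R(A,B) and S(B,C): every potential
    output (a,b,c) is mapped to its two inputs, R(a,b) and S(b,c)."""
    inputs = [('R', a, b) for a, b in product(dom_a, dom_b)] + \
             [('S', b, c) for b, c in product(dom_b, dom_c)]
    mapping = {(a, b, c): {('R', a, b), ('S', b, c)}
               for a, b, c in product(dom_a, dom_b, dom_c)}
    return inputs, mapping
```

With $N_A=2$, $N_B=1$, $N_C=3$ this yields $N_AN_B + N_BN_C = 5$ inputs and $N_AN_BN_C = 6$ outputs, as in the text.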
\begin{example}
\label{triangle-ex}
For another example, consider finding triangles. We are given a graph as input and want to find all triples of nodes such that the graph contains edges between each pair of these three nodes. To model this problem, we need to assume a domain for the nodes of the input graph, with $N$ nodes. An output is thus a set of three nodes, and an input is a set of two nodes. The output $\{u,v,w\}$ is mapped to the set of three inputs $\{u,v\}$, $\{u,w\}$, and $\{v,w\}$. Notice that, unlike the previous and next examples, here an output is mapped to more than two inputs. In an instance of the triangles problem, some of the possible edges will be present, and the outputs produced will be those such that all three edges to which the output is mapped are present.
\end{example}
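As with the join, the provenance mapping of Example~\ref{triangle-ex} is easy to enumerate for a small node domain (an illustrative sketch only):

```python
from itertools import combinations

def triangle_mapping(nodes):
    """Each potential output {u,v,w} is mapped to its three input edges
    {u,v}, {u,w}, and {v,w}."""
    return {frozenset(t): {frozenset(e) for e in combinations(t, 2)}
            for t in combinations(nodes, 3)}
```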
\begin{example}
\label{hd1-ex}
This example is a very simple case of a similarity join. The inputs are binary strings, and since we have to make things finite, we shall assume that these strings have a fixed length $b$. There are thus $2^b$ inputs. The outputs are pairs of inputs that are at Hamming distance 1; that is, the inputs differ in exactly one bit. There are thus $(b/2)2^b$ outputs, since each of the $2^b$ inputs is at Hamming distance 1 from exactly $b$ other inputs~-- those that differ in exactly one of the $b$ bits. However, that observation counts every pair of inputs at distance 1 twice, which is why we must divide by 2.
\end{example}
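The output count of Example~\ref{hd1-ex} can be checked by brute force for small $b$ (an illustrative sketch, not part of any algorithm):

```python
from itertools import combinations

def hd1_outputs(b):
    """All unordered pairs of b-bit strings at Hamming distance exactly 1."""
    strings = [format(i, '0{}b'.format(b)) for i in range(2 ** b)]
    return [(s, t) for s, t in combinations(strings, 2)
            if sum(x != y for x, y in zip(s, t)) == 1]
```

For each small $b$, the number of pairs produced equals $(b/2)2^b$, as claimed.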
\begin{example}
\label{sum-ex}
Suppose we have a relation $R(A,B)$ and we want to implement group-by-and-sum:
\begin{verbatim}
SELECT A, SUM(B)
FROM R
GROUP BY A;
\end{verbatim}
We must assume finite domains for $A$ and $B$. An output is a value of $A$, say $a$, chosen from the finite domain of $A$-values, together with the sum of all the $B$-values. This output is associated with a large set of inputs: all tuples with $A$-value $a$ and any $B$-value from the finite domain of $B$. In any instance of this problem, we do not expect that all these tuples will be present, but as long as at least one of them is present, there will be an output for this value $a$.
\end{example}
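In the model, the output for a group $a$ thus depends on every potential tuple with that $A$-value, which the following sketch makes explicit (domains are illustrative):

```python
def group_by_sum_mapping(dom_a, dom_b):
    """Each output (one per A-value a) is mapped to the set of all
    potential input tuples (a, b), for every B-value b in the domain."""
    return {a: {(a, b) for b in dom_b} for a in dom_a}
```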
\subsection{Mapping Schemas and Replication Rate}
\label{rr-subsect}
For many problems, there is a tradeoff between the number of reducers to which a given input must be sent and the number of inputs that can be sent to one reducer. It can be argued that the existence of such a tradeoff is tantamount to the problem being ``not embarrassingly parallel''; that is, the more parallelism we introduce, the greater will be the total cost of computation.
The more reducers that receive a given input, the greater the communication cost for solving an instance of a problem using map-reduce. As communication tends to be expensive, and in fact is often the dominant cost, we'd like to keep the number of reducers per input low. However, there is also a good reason to want to keep the number of inputs per reducer low. Doing so makes it likely that we can execute the Reduce task in main memory. Also, the smaller the input to each reducer, the more parallelism there can be and the lower will be the wall-clock time for executing the map-reduce job (assuming there is an adequate number of compute-nodes to execute all the Reduce tasks in parallel).
In our discussion, we shall use the convention that $p$ is the number of reducers used to solve a given problem instance, and $q$ is the maximum number of inputs that can be sent to any one reducer. We should understand that $q$ counts the number of potential inputs, regardless of which inputs are actually present for an instance of the problem. However, on the assumption that inputs are chosen independently with fixed probability, we can expect the number of actual inputs at a reducer to be $q$ times that probability, and there is a vanishingly small chance of significant deviation for large $q$. If we know the probability of an input being present in the data is $x$, and we can tolerate $q_1$ real inputs at a reducer, then we can use $q=q_1/x$ to account for the fact that not all inputs will actually be present.
With this motivation in mind, let us define a {\em mapping schema} for a given problem, with a given value of $q$, to be an assignment of a set of reducers to each input, subject to the constraints that:
\begin{enumerate}
\item
No more than $q$ inputs are assigned to any one reducer.
\item
For every output, its associated inputs are all assigned to one reducer. We say the reducer {\em covers} the output. This reducer need not be unique, and it is, of course, permitted that these same inputs are assigned also to other reducers.
\end{enumerate}
The figure of merit for a mapping schema is the {\em replication rate}, which we define to be the average number of reducers to which an input is mapped by that schema. Suppose that for a certain algorithm, the $i$th reducer is assigned $q_i \le q$ inputs, and let $I$ be the number of different inputs. Then the replication rate $r$ for this algorithm is
$$r = \sum_{i=1}^p q_i/I$$
We want to derive lower bounds on $r$, as a function of $q$, for various problems, thus demonstrating the tradeoff between high parallelism (many small reducers) and overhead (total communication cost~-- the replication rate). These lower bounds depend on counting the total number of outputs that a reducer can cover if it is given at most $q$ inputs. We let $g(q)$ denote this number of outputs that a reducer with $q$ inputs can cover.
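Concretely, the replication rate of any proposed mapping schema can be computed directly from its assignment of inputs to reducers. The following sketch (a toy example with hypothetical names, not part of the paper's development) illustrates the definition:

```python
def replication_rate(schema, num_inputs):
    """r = sum_i q_i / I: the total number of input copies over all
    reducers, divided by the number I of distinct inputs."""
    return sum(len(inputs) for inputs in schema.values()) / num_inputs

# Toy schema: 4 inputs a, b, c, d; each input is sent to 2 of the 4 reducers.
schema = {1: {"a", "b"}, 2: {"c", "d"}, 3: {"a", "c"}, 4: {"b", "d"}}
r = replication_rate(schema, 4)  # (2+2+2+2)/4 = 2.0
```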
Observe that, no matter what random set of inputs is present for an instance of the problem, the expected communication is $r$ times the number of inputs actually present, so $r$ is a good measure of the communication cost incurred during an instance of the problem. Further, the assumption that the mapping schema assigns inputs to processors without reference to what inputs are actually present captures the nature of a map-reduce computation. Normally, a map function turns input objects into key-value pairs independently, without knowing what else is in the input.
\section{The Hamming-Distance-1 Problem}
\label{hd1-sect}
We are going to begin our development of the model with the tightest result we can offer. For the problem of finding pairs of bit strings of length $b$ that are at Hamming distance 1, we have a lower bound on the replication rate $r$ as a function of $q$, the maximum number of inputs assigned to a reducer. This bound is essentially best possible, as we shall point to a number of mapping schemas that solve the problem and have exactly the replication rate stated in the lower bound.
\subsection{Bounding the Number of Outputs}
\label{output-bound-subsect}
The key to the lower bound on replication rate as a function of $q$ is a tight upper bound on the number of outputs that can be covered by a reducer assigned $q$ inputs.
\begin{lemma}
\label{hd-outputs-lemma}
For the Hamming-distance-1 problem, a reducer that is assigned $q$ inputs can cover no more than $(q/2)\log_2q$ outputs.
\end{lemma}
\begin{proof}
The proof is an induction on $b$, the length of the bit strings in the input. The basis is $b=1$. Here, there are only two strings, so $q$ is either 1 or 2. If $q=1$, the reducer can cover no outputs. But $(q/2)\log_2q$ is 0 when $q=1$, so the lemma holds in this case. If $q=2$, the reducer can cover at most one output. But $(q/2)\log_2q$ is 1 when $q=2$, so again the lemma holds.
Now let us assume the bound for $b$ and consider the case where the inputs consist of strings of length $b+1$. Let $X$ be a set of $q$ bit strings of length $b+1$. Let $Y$ be the subset of $X$ consisting of those strings that begin with 0, and let $Z$ be the remaining strings of $X$~-- those that begin with 1. Suppose $Y$ and $Z$ have $y$ and $z$ members, respectively, so $q=y+z$.
An important observation is that for any string in $Y$, there is at most one string in $Z$ at Hamming distance 1. That is, if $0w$ is in $Y$, it could be Hamming distance 1 from $1w$ in $Z$, if that string is indeed in $Z$, but there is no other string in $Z$ that could be at Hamming distance 1 from $0w$, since all strings in $Z$ start with 1. Likewise, each string in $Z$ can be distance 1 from at most one string in $Y$. Thus, the number of outputs with one string in $Y$ and the other in $Z$ is at most $\min(y,z)$.
So let's count the maximum number of outputs that can have their inputs within $X$. By the inductive hypothesis, there are at most $(y/2)\log_2y$ outputs both of whose inputs are in $Y$, at most $(z/2)\log_2z$ outputs both of whose inputs are in $Z$, and, by the observation in the paragraph above, at most $\min(y,z)$ outputs with one input in each of $Y$ and $Z$.
Assume without loss of generality that $y\le z$. Then the maximum number of strings of length $b+1$ that can be covered by a reducer with $q$ inputs is
$$\frac{y}{2} \log_2y + \frac{z}{2} \log_2z + y$$
We must show that this function is at most $(q/2)\log_2q$, or, since $q=y+z$, we need to show
\begin{equation}
\label{yz-eq}
\frac{y}{2} \log_2y + \frac{z}{2} \log_2z + y \le \frac{y+z}{2} \log_2(y+z)
\end{equation}
under the condition that $z\ge y$.
First, observe that when $y=z$, Equation~\ref{yz-eq} holds with equality. That is, both sides become $y(\log_2y + 1)$. Next, consider the derivatives, with respect to $z$, of the two sides of Equation~\ref{yz-eq}. $d/dz$ of the left side is
$$\frac{1}{2}\log_2z + \frac{\log_2e}{2}$$
while the derivative of the right side is
$$\frac{1}{2}\log_2(y+z) + \frac{\log_2e}{2}$$
Since $z\ge y\ge0$, the derivative of the left side is always less than or equal to the derivative of the right side. Thus, as $z$ grows larger than $y$, the left side remains no greater than the right. That proves the induction step, and we may conclude the lemma.
\end{proof}
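For very small $b$, the bound of Lemma~\ref{hd-outputs-lemma} can also be checked by brute force. The sketch below (verification code, not part of the proof) enumerates every $q$-subset of $\{0,1\}^3$ and confirms that none covers more than $(q/2)\log_2q$ outputs:

```python
from itertools import combinations
from math import log2

def outputs_covered(strings):
    """Count Hamming-distance-1 pairs within a set of equal-length bit strings."""
    return sum(1 for u, v in combinations(strings, 2)
               if sum(a != b for a, b in zip(u, v)) == 1)

def max_covered(b, q):
    """Exhaustive maximum of outputs_covered over all q-subsets of {0,1}^b
    (feasible only for very small b)."""
    universe = [format(i, '0{}b'.format(b)) for i in range(2 ** b)]
    return max(outputs_covered(s) for s in combinations(universe, q))

# Verify the bound (q/2) * log2(q) for b = 3 and every possible q.
for q in range(1, 9):
    assert max_covered(3, q) <= (q / 2) * log2(q) + 1e-9
```

For $q=8$ (the whole cube) the bound holds with equality: 12 covered pairs, matching $(8/2)\log_28 = 12$.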
\subsection{The Tradeoff for Hamming Distance 1}
\label{hd-trade-subsect}
We can use Lemma~\ref{hd-outputs-lemma} to get a lower bound on the replication rate as a function of $q$, the maximum number of inputs at a reducer.
\begin{theorem}
\label{hd-trade-thm}
For the Hamming distance 1 problem with inputs of length $b$, the replication rate $r$ is at least $b/\log_2q$.
\end{theorem}
\begin{proof}
Suppose there are $p$ reducers, each with $q_i \leq q$ inputs. By Lemma~\ref{hd-outputs-lemma}, there are at most $(q_i/2)\log_2q_i$ outputs covered by reducer $i$.
The total number of outputs, given that inputs are of length $b$, is $(b/2)2^b$. Thus, since every output must be covered, we have
\begin{equation}
\sum_{i=1}^p\frac{q_i}{2}\log_2q_i \ge \frac{b}{2}2^b
\end{equation}
and therefore, since $\log_2q \geq \log_2q_i$ for all $i$,
\begin{equation}
\label{hd-trade-eq}
\sum_{i=1}^p\frac{q_i}{2}\log_2q \ge \frac{b}{2}2^b
\end{equation}
The replication rate is $r = \sum_{i=1}^pq_i/2^b$, that is, the sum of the inputs at each reducer divided by the total number of inputs. Rearranging Equation~\ref{hd-trade-eq} gives $r = \sum_{i=1}^pq_i/2^b \ge b/\log_2q$, which is exactly the statement of the theorem.
\end{proof}
\subsection{Upper Bound for Hamming Distance 1}
\label{hd-upper-subsect}
There are a number of algorithms for finding pairs at Hamming distance 1 that match the lower bound of Theorem~\ref{hd-trade-thm}. First, suppose $q=2$; that is, every reducer gets exactly 2 inputs, and is therefore responsible for exactly one output. Theorem~\ref{hd-trade-thm} says the replication rate $r$ must be at least $b/\log_22 = b$. And indeed, in this case every input string $w$ of length $b$ must be sent to exactly $b$ reducers~-- those corresponding to the pairs consisting of $w$ and each of the $b$ strings at Hamming distance 1 from $w$.
There is another simple case at the other extreme. If $q = 2^b$, then we need only one reducer, which gets all the inputs. In that case, $r=1$. But Theorem~\ref{hd-trade-thm} says that $r$ must be at least $b/\log_2(2^b) = 1$.
In \cite{ADMPU12}, there is an algorithm called Splitting that, for the case of Hamming distance 1, uses $2^{1+b/2}$ reducers, assuming $b$ is even. Half of these reducers, or $2^{b/2}$ reducers, correspond to the $2^{b/2}$ possible bit strings that may be the first half of an input string. Call these {\em Group I reducers}. The second half of the reducers correspond to the $2^{b/2}$ bit strings that may be the second half of an input. Call these {\em Group II reducers}. Thus, each bit string of length $b/2$ corresponds to two different reducers.
An input $w$ of length $b$ is sent to 2 reducers: the Group-I reducer that corresponds to its first $b/2$ bits, and the Group-II reducer that corresponds to its last $b/2$ bits. Thus, each input is assigned to two reducers, and the replication rate is 2. That also matches the lower bound of $b/\log_2(2^{b/2}) = b/(b/2) = 2$. It is easy to observe that every pair of inputs at distance 1 is sent to some reducer in common. These inputs must either agree in the first half of their bits, in which case they are sent to the same Group-I reducer, or they agree on the last half of their bits, in which case they are sent to the same Group-II reducer.
We can generalize the Splitting Algorithm to give us an algorithm whose replication rate $r$ matches the lower bound, for any integer $r>2$. We must assume that $r$ divides $b$ evenly. Thus, strings of length $b$ can be split into $r$ pieces, each of length $b/r$. We will have $r$ groups of reducers, numbered 1 through $r$. In each group of reducers there is a reducer corresponding to each of the $2^{b-b/r}$ bit strings of length $b-b/r$.
To see how inputs are assigned to reducers, suppose $w$ is a bit string of length $b$. Write $w = w_1w_2\cdots w_r$, where each $w_i$ is of length $b/r$. We send $w$ to the group-$i$ reducer that corresponds to bit string $w_1\cdots w_{i-1}w_{i+1}\cdots w_r$, that is, $w$ with the $i$th substring $w_i$ removed. Thus, each input is sent to $r$ reducers, one in each of the $r$ groups, and the replication rate is $r$. The input size for each reducer is $q = 2^{b/r}$, so the lower bound says that the replication rate must be at least $b/\log_2(2^{b/r}) = b/(b/r) = r$. That is, the replication rate of our generalization of the Splitting algorithm is tight.
Finally, we need to argue that the mapping schema solves the problem. Any two strings at Hamming distance 1 will disagree in only one of the $r$ segments of length $b/r$. If they disagree in the $i$th segments, then they will be sent to the same Group~$i$ reducer, because reducers in this group ignore the value in the $i$th segment. Thus, this reducer will cover the output consisting of this pair.
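The generalized Splitting schema is straightforward to express in code. The following sketch (function names are illustrative, not from \cite{ADMPU12}) maps each input to its $r$ reducer keys and verifies exhaustively, for small $b$, that every pair at Hamming distance 1 shares a reducer:

```python
from itertools import combinations

def reducers_for(w, r):
    """Send bit string w to r reducers: the group-i reducer is keyed by
    w with its i-th length-(b/r) segment removed."""
    b = len(w)
    seg = b // r
    return [(i, w[:i * seg] + w[(i + 1) * seg:]) for i in range(r)]

def covered(b, r):
    """Check that every Hamming-distance-1 pair shares at least one reducer."""
    strings = [format(i, '0{}b'.format(b)) for i in range(2 ** b)]
    for u, v in combinations(strings, 2):
        if sum(a != c for a, c in zip(u, v)) == 1:
            if not set(reducers_for(u, r)) & set(reducers_for(v, r)):
                return False
    return True

assert covered(6, 2)   # the original Splitting algorithm, r = 2
assert covered(6, 3)   # generalization: r = 3, q = 2^{b/r} = 4 inputs per reducer
```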
\begin{figure}
\centerline{\includegraphics[width=0.4\textwidth]{tradeoff.pdf}}
\caption{Known algorithms matching the lower bound on replication rate}
\label{tradeoff-fig}
\end{figure}
Figure~\ref{tradeoff-fig} illustrates what we know. The hyperbola is the lower bound. Known algorithms that match the lower bound on replication rate are shown with dots.
\pagebreak
\subsection{An Algorithm for Large {\it \large q}}
\label{weights-subsect}
There is a family of algorithms that use reducers with large input~-- $q$ well above $2^{b/2}$, but lower than $2^b$. The simplest version of these algorithms divides bit strings of length $b$ into left and right halves of length $b/2$ and organizes them by weights, as suggested by Fig.~\ref{weights-fig}. The {\em weight} of a bit string is the number of 1's in that string. In detail, for some $k$, which we assume divides $b/2$, we partition the weights into $b/(2k)$ groups, each with $k$ consecutive weights. Thus, the first group is weights 0 through $k-1$, the second is weights $k$ through $2k-1$, and so on. The last group has an extra weight, $b/2$, and consists of weights $\frac{b}{2}-k$ through $b/2$.
\begin{figure}
\centerline{\includegraphics[width=0.4\textwidth]{weights.pdf}}
\caption{Partitioning by weight. Only the border weights need to be replicated}
\label{weights-fig}
\end{figure}
There are $(\frac{b}{2k})^2$ reducers; each corresponds to a range of weights for the first half and a range of weights for the second half. A string is assigned to reducer $(i,j)$, for $i,j=1,2,\ldots, b/2k$ if the left half of the string has weight in the range $(i-1)k$ through $ik-1$ and the right half of the string has weight in the range $(j-1)k$ through $jk-1$.
Consider two bit strings $w_0$ and $w_1$ of length $b$ that differ in exactly one bit. Suppose the bit in which they differ is in the left half, and suppose that $w_1$ has a 1 in that bit. Finally, let $w_1$ be assigned to reducer $R$. Then unless the weight of the left half of $w_1$ is the lowest weight for the left half that is assigned to reducer $R$, $w_0$ will also be at $R$, and therefore $R$ will cover the pair $\{w_0,w_1\}$. However, if the weight of $w_1$ in its left half is the lowest possible left-half weight for $R$, then $w_0$ will be assigned to the reducer with the same range for the right half, but the next lower range for the left half. Therefore, to make sure that $w_0$ and $w_1$ share a reducer, we need to replicate $w_1$ at the neighboring reducer that handles $w_0$. The same problem occurs if $w_0$ and $w_1$ differ in the right half, so any string whose right half has the lowest possible weight in its range also has to be replicated at a neighboring reducer. We suggested in Fig.~\ref{weights-fig} how the strings with weights at the two lower borders of the ranges for a reducer need to be replicated at a neighboring reducer.
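The schema just described can be sketched as follows (this is one reading of the construction; the group function, with the last group absorbing the extra weight $b/2$, and the border test are assumptions consistent with the text). For small $b$, the sketch verifies that every Hamming-distance-1 pair shares a cell:

```python
from itertools import combinations

def cells_for(w, k):
    """Weight cells for bit string w: the base cell determined by its
    half-weights, plus replicas when a half-weight sits at the low
    border of its weight group."""
    h = len(w) // 2
    g = h // k                      # groups per half; last group absorbs weight h
    def group(wt):
        return min(wt // k, g - 1)
    wl, wr = w[:h].count('1'), w[h:].count('1')
    cells = {(group(wl), group(wr))}
    if wl > 0 and group(wl - 1) != group(wl):   # left-half weight at a border
        cells.add((group(wl) - 1, group(wr)))
    if wr > 0 and group(wr - 1) != group(wr):   # right-half weight at a border
        cells.add((group(wl), group(wr) - 1))
    return cells

# Every Hamming-distance-1 pair must share a cell (checked for b = 6, k = 1).
strings = [format(i, '06b') for i in range(64)]
for u, v in combinations(strings, 2):
    if sum(a != c for a, c in zip(u, v)) == 1:
        assert cells_for(u, 1) & cells_for(v, 1)
```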
Now, let us analyze the situation, including the maximum number $q$ of inputs assigned to a reducer, and the replication rate. For the bound on $q$, note that the vast majority of the bit strings of length $n$ have weight close to $n/2$. The number of bit strings of weight exactly $n/2$ is $\binom{n}{n/2}$. Stirling's approximation \cite{feller68} gives us $2^n/\sqrt{2\pi n}$ for this quantity. That is, one in $O(\sqrt{n})$ of the strings have the average weight.
If we partition strings as suggested by Fig.~\ref{weights-fig}, then the most populous $k\times k$ cell, the one that contains strings with weight $b/4$ in the first half and also weight $b/4$ in the second half, will have no more than
$$k^2\Bigl(\frac{2^{b/2}}{\sqrt{2\pi(b/2)}}\Bigr)^2 = \frac{k^22^b}{\pi b}$$
strings assigned.\footnote{Note that many of the cells have many fewer strings assigned, and in fact a fraction close to 1 of the strings have weights within $\sqrt{b}$ of $b/4$ in both their left halves and right halves. In a realistic implementation, we would probably want to combine the cells with relatively small population at a single reducer, in order to equalize the work at each reducer.}
If $k$ is a constant, then in terms of the horizontal axis in Fig.~\ref{tradeoff-fig}, this algorithm has $\log_2q$ equal to $b - \log_2b$ plus or minus a constant. It is thus very close to the right end, but not exactly at the right end.
For the replication rate of the algorithm, if $k$ is a constant, then for any cell there is only a small ratio of variation between the numbers of strings with weights $i$ and $j$ in the left and right halves, for any $i$ and $j$ that are assigned to that cell. Moreover, when we look at the total number of strings in the borders of all the cells, the differences average out, so the average number of extra copies per string is very close to $(2k)/k^2 = 2/k$. That is, a string is replicated if either its left half has a weight divisible by $k$ or its right half does. Note that strings in the lower-left corner of a cell are replicated twice, strings at the other $2k-2$ points on the border are replicated once, and the majority of strings are not replicated at all. We conclude that the replication rate is $1+\frac{2}{k}$.
\subsection{Generalization to {\it \large d} Dimensions}
\label{weights-d-subsect}
The algorithm of Section~\ref{weights-subsect} can be generalized from 2 dimensions to $d$. Break bit strings of length $b$ into $d$ pieces of length $b/d$, where we assume $d$ divides $b$. Each string of length $b$ can thus be assigned to a cell in a $d$-dimensional hypercube, based on the weights of each of its $d$ pieces. Assume that each cell has side $k$ in each dimension, where $k$ is a constant that divides $b/d$.
The most populous cell will be the one that contains strings where each of its $d$ pieces has weight $b/(2d)$. Again using Stirling's approximation, the number of strings assigned to this cell is
$$k^d\Bigl(\frac{2^{b/d}}{\sqrt{2\pi b/d}}\Bigr)^d = \frac{k^d2^b}{b^{d/2}(2\pi/d)^{d/2}}$$
On the assumption that $k$ is constant, the value of $\log_2q$ is
$$b-(d/2)\log_2b$$
plus or minus a constant.
To compute the replication rate, observe that every point on each of the $d$ faces of the hypercube that are at the low ends of their dimension must be replicated. The number of points on one face is $k^{d-1}$, so the sum of the volumes of the faces is $dk^{d-1}$. The entire volume of a cell is $k^d$, so the fraction of points that are replicated is $d/k$, and the replication rate is $1+d/k$. Technically, we must prove that the points on the border of a cell have, on average, the same number of strings as other points in the cell. As in Section~\ref{weights-subsect}, the border points in any dimension are those whose corresponding substring has a weight divisible by $k$. As long as $k$ is much smaller than $b/d$, this number is close to $1/k$th of all the strings of that length.
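The estimate $d/k$ counts the $d$ low faces as if they were disjoint; by counting the complement, the exact fraction of border points in a $k^d$ cell is $1-((k-1)/k)^d$, which approaches $d/k$ when $k \gg d$. A quick numeric check (illustrative only):

```python
def border_fraction(k, d):
    """Exact fraction of points in a k^d cell having at least one coordinate
    at the low border of its weight range (complement counting)."""
    return 1.0 - ((k - 1) / k) ** d

# The face-counting estimate d/k is an upper bound, and tight for k >> d.
assert border_fraction(100, 3) <= 3 / 100
assert abs(border_fraction(100, 3) - 3 / 100) < 5e-4
```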
\section{Triangle Finding}
\label{other-results}
In this section, we present a brief description of other results obtained using our framework, specifically on finding triangles. The pattern that lets us investigate any problem is, we hope, clear from the analysis of Section~\ref{hd1-sect}.
\begin{enumerate}
\item
Find an upper bound, $g(q)$, on the number of outputs a reducer can cover if $q$ is the number of inputs it is given.
\item
Count the total numbers of inputs $|I|$ and outputs $|O|$.
\item
Assume there are $p$ reducers, each receiving $q_i \leq q$ inputs and covering at most $g(q_i)$ outputs. Together they must cover all the outputs; that is, $\sum_{i=1}^pg(q_i) \geq |O|$.
\item
Manipulate the inequality from (3) to get a lower bound on
the replication rate, which is $\sum_{i=1}^pq_i / |I|$.
\item
Hopefully, demonstrate that there are algorithms whose replication rate matches the formula from (4).
\end{enumerate}
\subsection{The Tradeoff}
\label{tri-subsect}
We shall briefly show how this method applies to the problem of finding triangles introduced in Example~\ref{triangle-ex}. Suppose $n$ is the number of nodes of the input graph. Following the outline just given:
\begin{enumerate}
\item
We claim that the largest number of outputs (triangles) a reducer with at most $q$ inputs can cover occurs when the reducer is assigned all the edges running between some set of $k$ nodes. This point was proved, to within an order of magnitude, in \cite{Schank07}. Suppose we assign to a reducer all the edges between a set of $k$ nodes. Then there are $\binom{k}{2}$ edges assigned to this reducer, or approximately $k^2/2$ edges. Since this quantity is $q$, we have $k = \sqrt{2q}$.
The number of triangles among $k$ nodes is $\binom{k}{3}$, or approximately $k^3/6$ outputs. In terms of $q$, the upper bound on the number of outputs is $\frac{\sqrt{2}}{3} q^{3/2}$.
\item
The number of inputs is $\binom{n}{2}$ or approximately $n^2/2$. The number of outputs is $\binom{n}{3}$, or approximately $n^3/6$.
\item
So, using the formulas from (1) and (2), if there are $p$ reducers, each with $q_i \leq q$ inputs:
$\sum_{i=1}^p\frac{\sqrt{2}}{3} q_i^{3/2} \ge n^3/6$, which implies that $\sum_{i=1}^p\frac{\sqrt{2}}{3} q_iq^{1/2} \ge n^3/6$.
\item
The replication rate is $\sum_{i=1}^pq_i$ divided by the number of inputs, which is $n^2/2$ from (2). We can manipulate the inequality from (3) to get
$$r = \frac{2\sum_{i=1}^p q_i}{n^2} \ge \frac{n}{\sqrt{2q}}$$
\item
There are known algorithms that, to within a constant factor, match the lower bound on replication rate. See \cite{SV11} and \cite{AFU12}.\footnote{It is a little tricky to relate these algorithms to the bound, since those algorithms assume the actual data graphs are sparse and calculate replication and input sizes in terms of the number of edges rather than nodes. However, on randomly chosen subsets of all possible edges, they do get us within a constant factor of the lower bound.}
\end{enumerate}
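Step (1) above can be checked numerically: a reducer holding all $\binom{k}{2}$ edges among $k$ nodes has $q=\binom{k}{2}$ inputs and covers $\binom{k}{3}$ triangles, which indeed never exceeds the stated bound $\frac{\sqrt{2}}{3}q^{3/2}$:

```python
from math import comb, sqrt

# For every clique size k, the triangles covered, C(k,3), stay below
# the bound (sqrt(2)/3) * q^{3/2}, where q = C(k,2) is the input count.
for k in range(3, 500):
    q = comb(k, 2)
    assert comb(k, 3) <= (sqrt(2) / 3) * q ** 1.5
```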
{\bf Generalizing to multiway joins.} Finding triangles is equivalent to computing
the multiway join $E(A,B) \& E(B,C) \& E(C,A)$.
Similar techniques can be used to compute lower and upper bounds for any multiway join.
In particular, when we have one relation of arity $a$ and the multiway join uses $m$ variables, we get lower and upper bounds that are both
$O(q^{1-m/a}n^{m-a})$.
\section{Summary}
\label{summary-sect}
This abstract introduced a simple model for defining map-reduce problems, enabling us to study their ``distributability'' properties. We studied the notion of {\em replication rate}, which is closely related to communication cost and to the number of machines available for mappers and reducers. We showed that our model effectively captures a multitude of map-reduce problems, and is a natural formalism for the study of replication rate. We presented a detailed treatment of the Hamming-distance-1 problem, providing tight bounds on the replication rate. We also presented a summary of some other results on multiway joins and triangle finding that we have obtained.
We believe that our formalism presents a new direction for the study of a large class of map-reduce problems, and allows us to prove results on the limits of map-reducibility for any algorithm for a problem that fits our model.
\section{Introduction\label{sec:Introduction}}
The Sun shows p-mode oscillations that are due to the stochastic excitation
by turbulent convection. Certain frequencies are damped and others
are amplified, which leads to a characteristic set of frequencies
observed on the Sun. In helioseismology, observed oscillation frequencies
are compared with theoretical model predictions to probe the solar
structure below the optical surface \citep{christensen-dalsgaard_helioseismology_2002}.
In 1D stellar structure calculations, convection is modelled with
the mixing length theory \citep{bohm-vitense_uber_1958}, hence the
mismatch in the superadiabatic region (SAR) just below the optical
surface leads to systematic residuals for the high-frequency modes.
These are termed surface effects. To account correctly for convection,
and in particular, for the turbulent pressure, we need to employ 3D
hydrodynamic models, where convection emerges from first principles
\citep{stein_simulations_1998}. \citet{rosenthal_convective_1999}
found that appending a mean stratification from a 3D model \citep{stein_simulations_1998}
to a 1D solar model \citep{1996Sci...272.1286C} can indeed reduce
the surface effects by providing additional support through turbulent
pressure and a concomitant expansion of the acoustic cavity at the
surface by 150 km, which in turn reduces the oscillation frequencies.
\citet{sonoi_surface-effect_2015} calibrated surface-effect corrections
for other stars than the Sun from 3D models. In the present work,
we repeat the efforts of \citet{rosenthal_convective_1999} with the
solar model from the \textsc{Stagger grid} \citep{magic_stagger-grid:_2013-1}
and study the solar surface effects in detail.
\section{Methods\label{sec:Methods}}
\subsection{Theoretical models\label{sub:Models}}
\begin{figure}
\includegraphics[width=88mm]{fig/sun_amdls}
\caption{\label{fig:sun_structure}Total pressure, density, and adiabatic exponent
for the 1D model combined with the $\left\langle \mathrm{3D}\right\rangle_{z}$ model. We marked the
location of the transition between $\left\langle \mathrm{3D}\right\rangle_{z}$ and 1D (\emph{black
dot}). In addition to the original 1D model (\emph{blue dashed line}),
we also show the values of the 1D model corrected for both the geometrical
depth and density (\emph{red dashed line}) and corrected for the depth
alone (\emph{red dotted line}).}
\end{figure}
\begin{figure}
\includegraphics[width=88mm]{fig/rosenthal7}\caption{\label{fig:Rosenthal_pressure}Total pressure vs. depth averaged from
the 3D models of this work and compared with \citet{rosenthal_convective_1999}.}
\end{figure}
\begin{figure}
\includegraphics[width=88mm]{fig/golf_hav2}\caption{\label{fig:sun_averages}Similar to Fig. \ref{fig:sun_structure},
but showing models with different $\left\langle \mathrm{3D}\right\rangle$ averages. The geometrically
averaged model, $z$ (solid black line), is identical to the model
shown in Fig. \ref{fig:sun_structure} (black solid line). Inward
of a depth of roughly -0.1 Mm, the averages of the upflows, $\left\langle \mathrm{3D}\right\rangle_{\mathrm{up}}$,
are the lowest curves, while the downflows, $\left\langle \mathrm{3D}\right\rangle_{\mathrm{dn}}$,
are the uppermost curves in all three panels.}
\end{figure}
We considered the solar mean $\left\langle \mathrm{3D}\right\rangle$ model taken from the \textsc{Stagger
grid} \citep[see][for more details]{magic_stagger-grid:_2013}, which
was computed with the \textsc{Stagger} code. In the 3D models, the
equation-of-state (EOS) from \citet{1988ApJ...331..815M} is used,
and the opacities are taken from the \textsc{marcs} package \citep{2008A&A...486..951G}.
The numerical resolution is $240^{3}$ and twelve opacity bins were
considered for the radiative transfer.
For the 1D model we took a standard solar calibrated model (including
atomic diffusion) computed with the \textsc{garstec} stellar evolution
code \citep[see][for more details]{weiss_garstecgarching_2008,magic_using_2010}.
In the 1D model, we used the EOS by OPAL 2005 \citep{1996ApJ...456..902R}
for higher temperatures, while for the lower temperatures we extended
the EOS by those from \citet{1988ApJ...331..815M}, for consistency
between the 3D and 1D models. The opacity in the \textsc{garstec}
model is a combination of results from OPAL \citep{1996ApJ...464..943I}
and \citet{2005ApJ...623..585F} for higher and lower temperatures,
respectively. In all models, we assumed the solar chemical composition
by \citet{asplund_chemical_2009}, except for the oscillation frequencies
in Sect. \ref{sub:Abundances}.
The pressure stratification of our $\left\langle \mathrm{3D}\right\rangle$ model shown in Fig. \ref{fig:sun_structure}
is very similar to the gas gamma model by \citet{rosenthal_convective_1999}.
Our 3D model exhibits a slightly higher pressure stratification only
above the optical surface (see Fig. \ref{fig:Rosenthal_pressure}).
This is probably due to differences in the input physics, the treatment of the radiative transfer, and the numerical resolution (theirs
is lower with $100^{2}\times82$). When we compared our solar model
with models with higher numerical resolution ($480^{3}$ and $960^{2}\times480$),
we found the differences in the stratifications of the models to be
minute. Comparisons with other numerical codes (\textsc{co5bold} and
\textsc{muram}) also result in very similar temperature and pressure
stratifications \citep[see Figs. 4 and 5 in][]{2012A&A...539A.121B}.
These comparisons mean that our temperature and pressure stratifications
are reliable.
To compute the theoretical eigenmode frequencies, we employed the
adiabatic oscillation code \textsc{adipls} \citep{christensen-dalsgaard_adiplsaarhus_2008}.
Here, the independent variables are the total pressure, $p_{\mathrm{tot}}$,
the density, $\rho$, and the adiabatic exponent, $\Gamma_{1}$, (see
Fig. \ref{fig:sun_structure}). \citet{rosenthal_convective_1999}
computed oscillation frequencies with the spatially and temporally
averaged adiabatic exponent, to which they referred as the ``gas
gamma model''. They also worked with a ``reduced gamma model''.
Based on the differences between the gas gamma and the reduced gamma
model, they found a difference of $\sim10\,\mu\mathrm{Hz}$ at higher
frequencies. However, their reduced gamma model disagreed with observations
because the adiabatic exponent was too low. Therefore, we used the
spatial and temporal averages of $\Gamma_{1}$ in the present study
(gas gamma model). We also computed the reduced gamma, $\Gamma_{1}^{\mathrm{red}}=\left<p_{\mathrm{gas}}\Gamma_{1}\right>/\left<p_{\mathrm{gas}}\right>$,
but we found only very little difference from the plain values of $\Gamma_{1}$.
Since the surface effects depend only weakly on the angular degree, we considered modes of the lowest degree ($l=0$) in comparison with the observational
data provided by GOLF measurements \citep{1997SoPh..175..227L}.
\subsection{Temporal and spatial averaging\label{sub:Averaging}}
\begin{table}
\caption{\label{tab:mean_3d}Overview of the different mean $\left\langle \mathrm{3D}\right\rangle$ models
and their reference depth scale for the horizontal or spatial averaging.}
\begin{tabular}{ll}
\hline
Symbol & Spatial averages over layers of constant \tabularnewline
\hline
\hline
$\left\langle \mathrm{3D}\right\rangle_{z}$ & geometrical depth\tabularnewline
$\left\langle \mathrm{3D}\right\rangle_{\mathrm{up}}$ & geometrical depth for the upflows\tabularnewline
$\left\langle \mathrm{3D}\right\rangle_{\mathrm{dn}}$ & geometrical depth for the downflows\tabularnewline
$\left\langle \mathrm{3D}\right\rangle_{L}$ & (pseudo-Lagrangian) geometrical depth \tabularnewline
$\left\langle \mathrm{3D}\right\rangle_{m}$ & column mass density\tabularnewline
$\left\langle \mathrm{3D}\right\rangle_{p_{\mathrm{tot}}}$ & total pressure\tabularnewline
$\left\langle \mathrm{3D}\right\rangle_{\tau}$ & acoustic depth\tabularnewline
\hline
\end{tabular}
\end{table}
Computing the spatially and temporally averaged mean $\left\langle \mathrm{3D}\right\rangle$ model is nontrivial,
as we showed in \citet{magic_stagger-grid:_2013-1} for spectroscopy.
For the application in helio- and asteroseismology, we similarly needed
to test and find a reference depth-scale to average independent variables
in the most suitable way. Therefore, we considered seven different
averages (see Table \ref{tab:mean_3d}) that we explain in the following.
The geometrical average (denoted with $z$) is used by default, since
it fulfils the hydrostatic equilibrium. We computed geometrical averages
separated into up- and downflows (denoted with $\mathrm{up}$ and
$\mathrm{dn}$), based on the sign of the vertical velocity component.
Furthermore, we also used pseudo-Lagrangian averages (denoted with
$L$) that were spatially averaged over geometrical depth and temporally
averaged by mapping to a fixed column mass scale to remove the contribution
of the p-mode oscillations from the spatial averages \citep[see][]{2014MNRAS.442..805T}.
Optical depth can be ruled out as a reference depth scale, because
the averages are then correlated to the temperature \citep[see][for more details]{magic_stagger-grid:_2013}.
We considered averages over layers of constant column mass density
\begin{eqnarray}
m & = & \int\rho dz,
\end{eqnarray}
acoustic depth
\begin{eqnarray}
\tau & = & \int\frac{1}{c_{s}}dz,
\end{eqnarray}
and total pressure $p_{\mathrm{tot}}$. In the photosphere, where
the fluctuations are strongest, we can expect differences between
the different averages that might alter the oscillation frequencies.
In Fig. \ref{fig:sun_averages} we show the total pressure, density,
and adiabatic exponent for the different $\left\langle \mathrm{3D}\right\rangle$ averages. The different
averages exhibit small differences for the independent variables,
except for the averages for the up- and downflows, which show larger
differences than the other averages. In particular, this is the case
for the density and adiabatic exponent, while the total pressure shows
only smaller differences. The averages of the upflows depict the hot,
ascending granules, while the downflows depict the cold, descending
intergranular lane at the optical surface. The pseudo-Lagrangian averages
(red dashed lines) are indistinguishable from the geometrical averages
(solid black lines). As mentioned above, the pseudo-Lagrangian averages
are spatially averaged over geometrical depth and differ only in the
temporal averaging, which seems to have very little effect on the
stratification. Furthermore, we note that the averages over the column
mass density are clearly distinct from the former because they employ
the column mass density as the reference depth scale.
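Both alternative reference scales are simple cumulative integrals over the stratification. The following sketch (our own illustration, not Stagger-grid code; the toy stratification and function name are assumptions) evaluates the column mass density $m$ and the acoustic depth $\tau$ with the trapezoidal rule:

```python
import numpy as np

def reference_depth_scales(z, rho, cs):
    """Column mass density m(z) and acoustic depth tau(z), accumulated
    downwards from the top of the grid with the trapezoidal rule.

    z   : geometrical depth, increasing downwards (cm)
    rho : density (g cm^-3)
    cs  : sound speed (cm s^-1)
    """
    dz = np.diff(z)
    m = np.concatenate(([0.0],
        np.cumsum(0.5 * (rho[1:] + rho[:-1]) * dz)))
    tau = np.concatenate(([0.0],
        np.cumsum(0.5 * (1.0 / cs[1:] + 1.0 / cs[:-1]) * dz)))
    return m, tau

# toy stratification: exponential density, constant sound speed
z = np.linspace(0.0, 1.0e8, 200)          # 1 Mm deep
rho = 1.0e-7 * np.exp(z / 2.0e7)
cs = np.full_like(z, 8.0e5)               # 8 km/s
m, tau = reference_depth_scales(z, rho, cs)
```

Averaging the 3D cube on surfaces of constant $m$ or $\tau$ then amounts to interpolating each snapshot column onto a common grid of these quantities.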
\subsection{Including the effects of the turbulent expansion\label{sub:Including-the-effects}}
The 3D models are very shallow; therefore, we need to append the $\left\langle \mathrm{3D}\right\rangle$
models to a 1D model to study the effect on the p-mode oscillations.
To do this, we employed two methods (Sects. \ref{sub:Appending-3d}
and \ref{sub:Depth-dependent-corrections}) to include the effects
of the turbulent expansion on the 1D model. The first method is similar
to the one by \citet{rosenthal_convective_1999}, and we appended
a $\left\langle \mathrm{3D}\right\rangle$ model to a 1D model. In the second method, we expanded the
geometrical depth in the top layers of the 1D model to achieve the
same total pressure stratification as for the $\left\langle \mathrm{3D}\right\rangle$ model.
\subsubsection{Appending the $\left\langle \mathrm{3D}\right\rangle$ on the 1D model \label{sub:Appending-3d}}
We appended the $\left\langle \mathrm{3D}\right\rangle_{z}$ model averaged over layers of constant
geometrical depth, where the zero-point was set to coincide with the
optical surface, that is, $\left\langle \tau_{\mathrm{Ross}}\right\rangle =0$, to
the 1D model at the bottom of the $\left\langle \mathrm{3D}\right\rangle_{z}$ model. To do this, we
considered the total pressure in the deepest layers of the $\left\langle \mathrm{3D}\right\rangle_{z}$
model, $p_{\mathrm{tot}}^{\mathrm{bot}}$, and matched the difference
in geometrical depth
\begin{eqnarray}
\Delta z^{\mathrm{bot}} & = & z^{\mathrm{3D}}(p_{\mathrm{tot}}^{\mathrm{bot}})-z^{\mathrm{1D}}(p_{\mathrm{tot}}^{\mathrm{bot}}),\label{eq:bot_shift}
\end{eqnarray}
which is a single depth-independent correction value. For $\Delta z^{\mathrm{bot}}$
we obtained a shift of 81.6 km. Next, we shifted
the geometrical depth of the $\left\langle \mathrm{3D}\right\rangle_{z}$ models for the difference,
that is, $z^{\mathrm{3D,shifted}}=z^{\mathrm{3D}}+\Delta z^{\mathrm{bot}}$.
Finally, we interpolated the independent variables $p_{\mathrm{tot}}$,
$\rho$, $\Gamma_{1}$ from the $\left\langle \mathrm{3D}\right\rangle_{z}$ to the geometrical depth
of the 1D model ($z^{\mathrm{1D}}$) and merged both models into a
single model. The appended model is expanded relative to the 1D model
and the stratifications of $p_{\mathrm{tot}}$, $\rho$ and $\Gamma_{1}$
are visibly modified at the top (see Fig. \ref{fig:sun_structure}).
The global radius is increased by the total elevation at the surface
with $\Delta z\left(0\right)=105\,\mathrm{km}$, as determined in
Sect. \ref{sub:Depth-dependent-corrections}. This value can be compared
to the findings by \citet{rosenthal_convective_1999} of 150 km. The
correction of the depth at the bottom of the $\left\langle \mathrm{3D}\right\rangle$ model, $\Delta z^{\mathrm{bot}}=81.6\,\mathrm{km}$,
is necessary, since the 1D model compensates the missing turbulent
pressure and the higher temperatures at the surface of the 3D model
with a higher pressure scale height below the optical surface. The
different temperature stratification in the 3D model is a result of
the temperature sensitivity of the opacity and inhomogeneities of
the granulation \citep[see Sect. 5.2 in][]{rosenthal_convective_1999}.
On the other hand, without the depth correction, the 3D and 1D model
would exhibit a discontinuous transition at the connection point.
Both of the latter are important for improving the oscillation frequencies,
as we show below. We refer to the $\left\langle \mathrm{3D}\right\rangle_{z}$ model appended to 1D
model as the 3D model.
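The matching procedure above can be summarised in a few lines. The sketch below is our own illustrative implementation (the sign convention is chosen so that the shifted $\left\langle \mathrm{3D}\right\rangle$ depth agrees with the 1D depth at the matching pressure; the paper's printed convention may differ): it computes $\Delta z^{\mathrm{bot}}$, shifts the averaged model, and interpolates its variables onto the 1D grid above the matching depth.

```python
import numpy as np

def append_3d_to_1d(z3, v3, z1, v1, p_bot):
    """Append an averaged <3D> model (depth grid z3, variables v3) on top
    of a 1D model (z1, v1). v3 and v1 are dicts with 'p', 'rho', 'gamma1';
    depth increases downwards and pressure increases monotonically with
    depth. p_bot is the total pressure at the bottom of the <3D> model.
    """
    # depth offset at the matching pressure (cf. Delta z^bot);
    # sign chosen so that z3 + dz_bot equals z1 at p_bot
    dz_bot = np.interp(p_bot, v1['p'], z1) - np.interp(p_bot, v3['p'], z3)
    z3s = z3 + dz_bot
    z_match = np.interp(p_bot, v1['p'], z1)
    merged = {}
    top = z1 <= z_match          # layers replaced by the <3D> structure
    for key in ('p', 'rho', 'gamma1'):
        v = np.asarray(v1[key], dtype=float).copy()
        v[top] = np.interp(z1[top], z3s, v3[key])
        merged[key] = v
    return merged, dz_bot
```

For two models whose pressure stratifications differ only by a constant depth offset, the procedure recovers that offset and the merged structure is continuous at the connection point.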
\subsubsection{Depth-dependent structure corrections\label{sub:Depth-dependent-corrections}}
Instead of appending a $\left\langle \mathrm{3D}\right\rangle$ structure to the 1D model, we employed
depth and density corrections directly on the 1D stellar structure
as well. The idea behind this is very simple: we wish to find a depth
correction that expands the geometrical depth, so that the same pressure-depth
relation of the 3D model is realised for the 1D model. To do this,
we determined the difference in geometrical depth, which is necessary
to yield the same pressure stratification. The depth-dependent correction
factor is determined by
\begin{eqnarray}
\Delta z\left(z\right) & = & z^{\mathrm{3D}}(p_{\mathrm{tot}}^{\mathrm{3D}})-z^{\mathrm{1D}}(p_{\mathrm{tot}}^{\mathrm{1D}}),\label{eq:depth_correction}
\end{eqnarray}
which can expand the geometrical depth of the 1D model to match the
the total pressure stratification of the $\left\langle \mathrm{3D}\right\rangle$ model. We note that
the geometrical depth of the 3D model, $z^{\mathrm{3D}}$, in Eq.
\ref{eq:depth_correction} is corrected for the depth shift at the
bottom with Eq. \ref{eq:bot_shift}, that is, $z^{\mathrm{3D,shifted}}=z^{\mathrm{3D}}+\Delta z^{\mathrm{bot}}$.
Furthermore, we note that in contrast to Eq. \ref{eq:bot_shift},
here $\Delta z\left(z\right)$ is depth dependent. We neglected negative
depth corrections below the optical surface for consistency. Then,
the resulting correction was applied to the geometrical depth, that
is, $z^{*}=z+\Delta z\left(z\right)$, which leads to an expansion
of the entire stellar structure at the top alone. This results in
an extension of the solar radius at the optical surface, meaning that
from Eq. \ref{eq:depth_correction} we obtain $\Delta z\left(0\right)=105\,\mathrm{km}$
(see Fig. \ref{fig:corrections}).
The corrected geometrical depth deviates from the hydrostatic equilibrium
in the SAR; therefore, we corrected the density stratification by
reducing it by the ratio of the turbulent pressure and the total pressure:
\begin{eqnarray}
\rho^{*} & = & \rho\left(1-p_{\mathrm{turb}}/p_{\mathrm{tot}}\right),\label{eq:density_correction}
\end{eqnarray}
since the missing hydrostatic support from the $p_{\mathrm{turb}}$
is balanced by higher densities in the 1D model. On the other hand,
we can also determine the density stratification, which fulfils the
hydrostatic equilibrium. Then, this correction leads to the identical
density stratification as given by the $\left\langle \mathrm{3D}\right\rangle$ model, since the corrected
geometrical depth results in an identical pressure stratification
with depth by construction, and the equation of hydrostatic equilibrium
(Eq. \ref{eq:hydrostatic_equilibrium}) is only fulfilled by a unique
density stratification.
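Both corrections can be written compactly. In the following sketch (our own illustrative code, not from the authors), the depth correction follows Eq. \ref{eq:depth_correction} with negative values clipped to zero, and the density factor follows Eq. \ref{eq:density_correction}:

```python
import numpy as np

def structure_corrections(z1, p1, z3, p3, p_turb, p_tot):
    """Depth-dependent expansion z* = z + Delta z(z) and the density
    correction factor (1 - p_turb / p_tot) for a 1D model.

    z1, p1        : 1D depth grid (increasing downwards) and total pressure
    z3, p3        : same for the (bottom-shifted) <3D> model
    p_turb, p_tot : turbulent and total pressure on the 1D grid
    """
    # Delta z(z): <3D> depth at the 1D pressure minus the 1D depth
    dz = np.interp(p1, p3, z3) - z1
    dz = np.maximum(dz, 0.0)   # neglect negative corrections at depth
    z_star = z1 + dz
    rho_factor = 1.0 - p_turb / p_tot
    return z_star, rho_factor
```

Applying `z_star` and `rho_factor * rho` to the 1D model reproduces, by construction, the pressure--depth relation of the $\left\langle \mathrm{3D}\right\rangle$ model in the corrected layers.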
\begin{figure}
\includegraphics[width=88mm]{fig/corrections}
\caption{\label{fig:corrections}Depth-dependent corrections for the geometrical
depth, $\Delta z$, and corrections of the density, $1-p_{\mathrm{turb}}/p_{\mathrm{tot}}$,
in the 1D solar model (top and bottom panel, respectively). Furthermore,
we show the turbulent elevation from Eq. \ref{eq:turb_levitation}
(\emph{dashed line} in the top panel). The location of the optical
surface is indicated (\emph{vertical dotted line}).}
\end{figure}
\begin{figure*}
\subfloat[\label{fig:3d_vs_1d}]{\includegraphics[width=88mm]{fig/golf_3dvs1d}
}\subfloat[\label{fig:rosenthal6}]{\includegraphics[width=88mm]{fig/rosenthal6}
}
\caption{\textbf{(a)} Frequency residuals vs. frequency for two solar models
in comparison with solar observations for a low-degree mode with $l=0$.
The differences are retrieved by $\delta\nu_{nl}=\nu_{nl}^{\mathrm{obs}}-\nu_{nl}^{\mathrm{mod}}$.
Shown are the 3D corrected model (\emph{black solid line}), the standard
1D solar model (\emph{dashed lines}) and a 1D model expanded with
the turbulent elevation (\emph{dotted lines}; see Eq. \ref{eq:turb_levitation}).
The height of zero differences is indicated (\emph{horizontal solid
line}). \textbf{(b)} We also show the residuals for different modes
scaled with $Q_{nl}$ resulting from a solar model corrected with
3D model (GGM: gas gamma model) published by \citet{rosenthal_convective_1999}
(black symbols) in comparison with our result for $l=0$ (red solid
line).}
\end{figure*}
In Fig. \ref{fig:corrections} we show both the depth and density
correction. The corrected 1D model including $z^{*}$ and $\rho^{*}$
is also shown in Fig. \ref{fig:sun_structure} (red dashed lines),
which is very close to the $\left\langle \mathrm{3D}\right\rangle$ stratification. The density stratification
without the correction (Eq. \ref{eq:density_correction}) is only
slightly higher, and only minor differences are visible for $\Gamma_{1}$.
As we show in Sect. \ref{sub:Corrected-structures}, both have effects
of very different magnitudes on the oscillation frequencies.
The hydrostatic equilibrium can be retrieved by averaging the momentum
equation, which results in
\begin{eqnarray}
\frac{d(\bar{p}_{\mathrm{th}}+\bar{\rho}\bar{u}_{z}^{2})}{dz} & = & \bar{\rho}g_{z},\label{eq:hydrostatic_equilibrium}
\end{eqnarray}
with $\bar{u}_{z}$ being the density-weighted average of the vertical
velocity component (often referred to as turbulent velocity) and $p_{\mathrm{turb}}=\rho u_{z}^{2}$
the turbulent pressure, which is an additional pressure support against
gravity. The density correction stems mostly from the vertical component
of the velocity field. By separating the support from $p_{\mathrm{turb}}$
and integrating for the depth, we obtain the turbulent elevation
\begin{eqnarray}
\Lambda\left(z\right) & = & -\int_{-\infty}^{z}\frac{dp_{\mathrm{turb}}}{\rho g},\label{eq:turb_levitation}
\end{eqnarray}
which depicts the expansion caused by the turbulent velocity field
that is due to convection in SAR \citep{1997MsT..........3T}. In
Fig. \ref{fig:corrections} we also show the turbulent elevation (dashed
line). This has a shape similar to $\Delta z\left(z\right)$, but
the amplitude of the total elevation at the surface is mismatched
with $\Lambda\left(0\right)=26\,\mathrm{km}$, in contrast to $\Delta z\left(0\right)=105\,\mathrm{km}$.
This difference propagates into a larger difference in oscillation
frequencies, rendering the direct use of $\Lambda\left(z\right)$
futile: the resulting frequencies are unable to match
the observations (see Fig. \ref{fig:3d_vs_1d}).
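The turbulent elevation itself is a straightforward cumulative integral. A minimal numerical sketch (our own; the constant gravity and the toy $p_{\mathrm{turb}}$ profile are assumptions for illustration only):

```python
import numpy as np

def turbulent_elevation(z, p_turb, rho, g):
    """Lambda(z) of Eq. (turb_levitation): cumulative integral of
    -dp_turb / (rho * g) from the top of the grid down to each depth.
    z increases downwards; cgs units are assumed throughout."""
    dp = np.diff(p_turb)
    rho_mid = 0.5 * (rho[1:] + rho[:-1])
    return np.concatenate(([0.0], np.cumsum(-dp / (rho_mid * g))))

# toy profile: p_turb decays linearly with depth, constant density
z = np.linspace(0.0, 1.0e8, 500)
p_turb = 1.0e5 * (1.0 - z / z[-1])
rho = np.full_like(z, 1.0e-7)
lam = turbulent_elevation(z, p_turb, rho, g=1.0e4)
```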
To append $\left\langle \mathrm{3D}\right\rangle$ models to 1D structures is straightforward in the
case of solar parameters; for other stars, such as red giants or turn-off
stars, however, this becomes more difficult, because the gradients
of 3D and 1D models often do not match. A possible solution of this
dilemma is to include the missing turbulent pressure in the 1D model.
To do this, we can use the ratio between the turbulent and total pressure,
which quickly drops close to zero below the SAR. We corrected for
the density first with Eq. \ref{eq:density_correction} and for the
ratio in pressures from the $\left\langle \mathrm{3D}\right\rangle$ model. Then, we integrated for
the geometrical depth under hydrostatic equilibrium with Eq. \ref{eq:hydrostatic_equilibrium},
which expanded the geometrical depth. To obtain good results, we needed
to apply a constant factor of $3/2$ to the ratio of pressures in
Eq. \ref{eq:density_correction}. We emphasize that the advantage
of this approach is that the $\left\langle \mathrm{3D}\right\rangle$ model does not need to be matched
at the bottom, which results in a more continuous structure; but this
is a subject of a forthcoming work.
\section{Oscillation frequencies\label{sec:Results}}
In Fig. \ref{fig:3d_vs_1d} we show the computed oscillation frequencies
in comparison with the observations. The $\left\langle \mathrm{3D}\right\rangle$ model (black solid
line) clearly exhibits smaller residuals than the 1D solar model,
thereby verifying that it renders the SAR more accurately than the
1D model. We also show a 1D model, in which we expanded the geometrical
depth with the turbulent elevation (dashed line in Fig. \ref{fig:corrections})
and corrected the density for hydrostatic equilibrium. The expansion
improves the 1D model, but it is not enough. A comparison of our $\left\langle \mathrm{3D}\right\rangle$
model results with \citet{rosenthal_convective_1999} leads to the
same residuals in the oscillation frequencies (see Fig. \ref{fig:rosenthal6})
because the pressure stratifications are almost the same, as we mentioned
above in Sect. \ref{sub:Models}.
\subsection{Corrected structures\label{sub:Corrected-structures}}
\begin{figure*}
\subfloat[\label{fig:without_shift}]{\includegraphics[width=88mm]{fig/golf_contributions}
}\subfloat[\label{fig:with_shift}]{\includegraphics[width=88mm]{fig/golf_contributions2}
}
\caption{\textbf{(a)} Similar to Fig. \ref{fig:3d_vs_1d}, but showing 1D models
without any geometrical depth corrections $\Delta z\left(z\right)$,
but with corrections in density and adiabatic exponent. \textbf{(b)}
Similar to Fig. \ref{fig:without_shift}, but here the 1D models include
the geometrical depth corrections $\Delta z\left(z\right)$. Note
the differences in the ordinates. For comparison we also show the
3D model (black solid line) in both panels. See text for more details.}
\end{figure*}
To determine the actual influences of the corrections in depth, density
and adiabatic exponent, we corrected them individually. First we consider
the models without any depth correction $\Delta z\left(z\right)$
given by Eq. \ref{eq:depth_correction} (see Fig. \ref{fig:without_shift}).
In this panel, all four of the 1D models show large, negative systematic
offsets that reach values between 12 and $28\,\mu\mathrm{Hz}$ at
the high-frequency end. When we employed only the density correction
to the 1D model (orange solid line), we found that the mismatch was
substantially worsened to $\sim28\,\mu\mathrm{Hz}$. In addition,
adopting the adiabatic exponent alone from the $\left\langle \mathrm{3D}\right\rangle$ model in the
1D model gives the best match (green solid line), but it does not
improve the mismatch significantly. The change in the adiabatic exponent
alone (between $\Gamma_{1}=1$ and $\Gamma_{1}=0$) affects the oscillation
frequencies only slightly, with a difference of $\sim5\,\mu\mathrm{Hz}$ at
higher frequencies that increases above $\sim3000\,\mu\mathrm{Hz}$, while
the density correction (between $\rho=1$ and $\rho=0$) yields a difference
of $\sim10\,\mu\mathrm{Hz}$ (see Fig. \ref{fig:without_shift}).
The adiabatic exponent for the $\left\langle \mathrm{3D}\right\rangle$ and 1D models is shown in Fig.
\ref{fig:sun_structure} (see blue dashed line in the bottom panel).
Only by adopting the depth correction $\Delta z\left(z\right)$ given
in Eq. \ref{eq:depth_correction} and the density correction (Eq.
\ref{eq:density_correction}), can we achieve a significant improvement
in the oscillation frequencies in comparison with observations (see
Fig. \ref{fig:with_shift}). The differences in the frequencies between
the 1D model and the observations are reduced by expanding the structure.
It is obvious, however, that the density correction is crucial for
a closer match (blue dashed and orange solid lines; we note that these
two lines overlap in Fig. \ref{fig:with_shift}). The correction of
$\Gamma_{1}$ alone hardly changes anything (blue dashed and green
solid line), which is consistent with the very small differences in
the $\Gamma_{1}$ stratification between the $\left\langle \mathrm{3D}\right\rangle$ and 1D model
(red lines in bottom panel of Fig. \ref{fig:sun_structure}).
\begin{figure}
\includegraphics[width=88mm]{fig/golf_fac}
\includegraphics[width=88mm]{fig/golf_fac2}
\caption{\label{fig:density}Top panel: Similar to Fig. \ref{fig:3d_vs_1d},
but showing models with different density corrections. The line with
the density correction of 1.0 is the same as shown in Fig. \ref{fig:with_shift}
(blue dashed line). Bottom panel: The density profiles are shown.
In both panels, the 3D-corrected model is also shown for comparison
(\emph{black dashed line}).}
\end{figure}
The density corrections seem small (see red dotted line in Fig. \ref{fig:sun_structure}),
but their effect on the frequencies is very strong. This illustrates
that the frequencies are very sensitive to the density stratification
in the SAR immediately below the optical surface. The density needs
to be reduced because the missing turbulent pressure is compensated
for by higher densities. We varied the density correction (Eq. \ref{eq:density_correction})
with a constant scaling factor between 0.8 and 1.4 to show its effect
on the p-mode frequencies (see Fig. \ref{fig:density}).
Again, the changes in the frequencies are very strong, indicating
the importance of the density stratification on the oscillations.
A smaller density correction increases the residuals between observations
and theoretical model, in particular around $\sim3000\,\mu\mathrm{Hz}$,
while a larger density correction decreases the residuals, but at
higher frequencies the mismatch is more pronounced. The net effect
of these changes in the residuals is to shift the frequency range
over which the residuals are small from high frequencies for the largest
density corrections to lower frequencies for the smallest density
corrections.
\begin{figure}
\includegraphics[width=88mm]{fig/golf_rad}
\caption{\label{fig:rad_exp}Similar to Fig. \ref{fig:3d_vs_1d}, but showing
1D models without any corrections. Instead, only the radii are expanded.
The 3D-corrected model is also shown for comparison (\emph{black dashed
line}).}
\end{figure}
An expansion of the global radius of the model alone will extend the
acoustic cavity and thereby increase the modelled frequencies. However,
such an isolated change to the model will not
resolve the mismatch, since the structures of the surface layers are
unchanged (see Fig. \ref{fig:rad_exp}). The relative differences
between model and observations are shifted to higher values, but the
overall shape is preserved. Therefore, none of the models with expanded
radius can provide a good match to the observations. This illustrates
that the depth-dependent corrections from the $\left\langle \mathrm{3D}\right\rangle$ model in the
SAR of the solar structure are as important as the expansion of the
radius alone. The expansion of the radius has to be included, but
by itself it is not sufficient to correct for the surface effects.
Instead of correcting the frequencies for the surface-effects, we
can also directly correct the surface layers of a 1D model. As shown
in Fig. \ref{fig:with_shift}, a depth-dependent expansion of the
geometrical depth and a correction of the density are necessary steps.
Furthermore, from asteroseismic observations, we can infer the correct
stellar structure model-independently by finding the right corrections
around the critical surface layers.
\subsection{Different averages\label{sub:Averages}}
\begin{figure}
\includegraphics[width=88mm]{fig/golf_hav}
\caption{\label{fig:averages}Similar to Fig. \ref{fig:3d_vs_1d}, but showing
models with different $\left\langle \mathrm{3D}\right\rangle$ averages.}
\end{figure}
The question arises about the effect of different averaging methods
of the 3D model structure on the p-mode frequencies. In Fig. \ref{fig:sun_averages}
we compared the radial profiles of pressure, density, and adiabatic
exponent that we obtained using seven different averaging methods
of the 3D model structure. We found above in Sect. \ref{sub:Averaging}
that most of the different averages depict only small changes, which
also results in only small changes of the oscillation frequencies
as shown in Fig. \ref{fig:averages}. The geometrical averages for
the up- and downflows exhibit the largest difference in the density
and adiabatic exponent immediately below the optical surface. The
upflows depict lower densities, because the hotter granules are lighter
than the turbulent downdrafts in the intergranular lane. The $\left\langle \mathrm{3D}\right\rangle$
model for the downflows leads to the highest mismatches with observations
compared to the other $\left\langle \mathrm{3D}\right\rangle$ models. On the other hand, the $\left\langle \mathrm{3D}\right\rangle$
model of the upflows leads to the best agreement with observations,
since the residuals are almost a straight line. In particular, the
agreement at higher frequencies is much better, which is mainly due
to the adiabatic exponent. One reason for this result might be the
adiabatic nature of the frequency calculations, since the averages
over the upflows are closer to the adiabatic stratification. The pseudo-Lagrangian
averages are indistinguishable from the plain geometrical averages.
The averages on the acoustic depth are slightly closer to observations
below $\sim3000\,\mu\mathrm{Hz}$, but at higher frequencies the mismatch
is larger. The averages on constant layers of column mass density
and total pressure are about the same as on geometrical depth.
\subsection{Chemical composition\label{sub:Abundances}}
\begin{figure}
\includegraphics[width=88mm]{fig/golf_abund}
\includegraphics[width=88mm]{fig/golf_abund2}
\caption{\label{fig:abund}Top panel: Similar to Fig. \ref{fig:3d_vs_1d},
but showing models with different abundances. Shown are uncorrected
1D models (\emph{red lines}) and 3D-corrected models (\emph{blue lines}).
The latter are the different 1D models, but appended with the same
3D model, which is why they converge towards higher frequencies and
differ towards lower frequencies. We also note the low-frequency mismatch
for the individual abundances. Bottom panel: The differences in density
compared to inferred $\rho$ from observations \citep{2009ApJ...699.1403B}. }
\end{figure}
Next we consider the influence of the chemical composition on the
oscillation frequencies. In Fig. \ref{fig:abund} we show frequencies
computed with solar 1D models assuming solar chemical compositions
different from our standard composition \citep[AGS09]{asplund_chemical_2009},
while the abundances in the $\left\langle \mathrm{3D}\right\rangle$ model were fixed (the difference
would be only minor). We considered the solar chemical compositions
by \citet[AGS05]{2005ASPC..336...25A}, \citet[GS98]{1998SSRv...85..161G},
and \citet[GN93]{1993oee..conf...15G}. The modes at low frequency
are affected by a change in chemical composition. For chemical compositions
assuming higher metallicity, the low frequencies match the solar observations
better. With the most metal-poor composition AGS05 we find a mismatch
of $2.12\,\mu\mathrm{Hz}$ at the low-frequency end ($1500\,\mu\mathrm{Hz}$),
while for the most metal-rich composition GN93 we find the best
match, with much lower residuals of $0.3\,\mu\mathrm{Hz}$. The different
chemical compositions change the structure in the solar models. The
models with higher metallicity match the interior better, which is
also known for the inferred sound speed and density in the convection
zone (see Fig. \ref{fig:abund}).
\subsection{Magnetic field\label{sub:Magnetic-field}}
\begin{figure}
\includegraphics[width=88mm]{fig/mag_amdls}
\caption{\label{fig:magnetic_struct}Comparison of total pressure, density
and adiabatic exponent for $\left\langle \mathrm{3D}\right\rangle$ models with different magnetic
field strengths. }
\end{figure}
\begin{figure}
\includegraphics[width=88mm]{fig/mag_dnu1}
\caption{\label{fig:magnetic}Frequency differences between $\left\langle \mathrm{3D}\right\rangle$ models
with different magnetic field strengths compared to non-magnetic case,
i.e. $\delta\nu=\nu_{X\mathrm{G}}-\nu_{0\mathrm{G}}$. We also performed
a modified Lorentzian fit (\emph{dotted lines}).}
\end{figure}
\begin{figure}
\includegraphics[width=88mm]{fig/mag_dnu2}
\caption{\label{fig:magnetic_shift}Frequency shifts determined at $\nu_{nl}=4000\,\mu\mathrm{Hz}$
from the modified Lorentzian fits (Fig. \ref{fig:magnetic}), shown
against the magnetic field strength (\emph{black solid line}). We
also show the linear fit (\emph{dashed line}).}
\end{figure}
\begin{table}
\caption{\label{tab:hyp_fit}Coefficients $\alpha$ and $\beta$ for the modified
Lorentzian function (Eq. \ref{eq:modified_lorentzian}) shown in Fig.
\ref{fig:magnetic}.}
\begin{tabular}{rccrcc}\hline\hline
$B_{z,0}$ & $\alpha$ & $\beta$ & $B_{z,0}$ & $\alpha$ & $\beta$\\
\hline
50G & 0.278 & 5.940 & 800G & 10.014 & 4.888 \\
100G & 0.576 & 5.885 & 900G & 9.103 & 5.290 \\
200G & 2.527 & 4.369 & 1000G & 10.242 & 5.322 \\
300G & 2.799 & 5.384 & 1100G & 10.893 & 5.254 \\
400G & 4.912 & 4.917 & 1200G & 12.018 & 5.180 \\
500G & 5.785 & 3.755 & 1300G & 11.711 & 5.503 \\
600G & 6.930 & 4.670 & 1400G & 11.391 & 5.309 \\
700G & 7.249 & 4.848 & 1500G & 15.675 & 5.076 \\
\hline\end{tabular}
\end{table}
We also computed solar 3D MHD models with vertical magnetic field
strengths of up to 1.5 kG. We included monolithic vertical fields
in initially hydrodynamic models and appointed the same vertical magnetic
field to the gas entering the numerical box at the bottom. Then, we
evolved the simulations until the MHD models reached the quasi-stationary
state. From the subsequent models we determined spatial and temporal
averages. We truncated the initial hydrodynamic solar model slightly
at the top and we reduced the numerical resolution to $120^{3}$ to
keep computational costs low. The magnetic inhibition of the flow
increases with the magnetic field strength through the coupling of
the velocity field with the magnetic field through the Lorentz force
in the SAR. This leads to significant changes in the stratification
of the solar model. Owing to the magnetic inhibition of convection,
the mean temperature, density, and pressure are reduced significantly
in the surface layers (see Fig. \ref{fig:magnetic_struct}). Furthermore,
the turbulent pressure is reduced, and the magnetic pressure increasingly
evacuates the plasma. In this way the optical surface is depressed
by 400 km in the model with 1.5 kG, which affects the frequencies
by shortening the acoustic cavity, which in turn increases the oscillation
frequencies. We appended the magnetic $\left\langle \mathrm{3D}\right\rangle$ models to the 1D solar
model to test the influence of the magnetic field on the oscillation
frequencies. We show in Fig. \ref{fig:magnetic} the corresponding
comparisons with the non-magnetic model. As a consequence of the depression
of the optical surface, the frequencies are enhanced with increasing
field strength. At high field strength with 1.5 kG, the p-mode frequencies
are increased by almost $\sim45\,\mu\mathrm{Hz}$ at the high-frequency
end. We also determined the relation between magnetic field strength
and the frequency shift, $\delta\nu_{nl}\propto B_{z}$ (see Figs.
\ref{fig:magnetic} and \ref{fig:magnetic_shift}). We fitted the
relative differences with the modified Lorentzian function
\begin{eqnarray}
\delta\nu/\nu_{\mathrm{max}} & = & \alpha\left(1-1/\left[1+(\nu/\nu_{\mathrm{max}})^{\beta}\right]\right)\label{eq:modified_lorentzian}
\end{eqnarray}
with $\nu_{\mathrm{max}}=3090\,\mu\mathrm{Hz}$ \citep[see][]{ball_new_2014}
and determined the shift at $\nu_{nl}=4000\,\mu\mathrm{Hz}$. We found
the following response relation for the magnetic field
\begin{eqnarray}
\delta\nu_{nl} & = & 26.21B_{z},\label{eq:response}
\end{eqnarray}
where the vertical magnetic field strength is given in units of kG
(see Fig. \ref{fig:magnetic_shift}). We note that the frequency shift
considered at $\nu_{\mathrm{max}}=3090\,\mu\mathrm{Hz}$ would instead
yield a smaller slope of $16.57$. The changes in oscillation frequencies
are due to both the reduction of the radius and the altered stratification
of the independent variables. The latter has the stronger effect,
in particular for the high-frequency shifts, since we tested this
by keeping the radius unchanged, which led to very similar results.
Furthermore, for the higher angular degrees modes of $l=1,2$ and
$3$, we find very similar slopes with $25.80,\,25.92$, and $26.01$.
In Table \ref{tab:hyp_fit}, we list the coefficients of the modified
Lorentzian function (Eq. \ref{eq:modified_lorentzian}) that result
from the fitting for $l=0$.
To estimate the global impact on the frequency, we determined the
probability distribution function of the magnetic field on the solar
surface, $f_{B_{z}}$, during the different phases of the solar cycle.
We determined $f_{B_{z}}$ for solar cycle 23 (Carrington rotation
numbers from 1909 to 2057, i.e. between the years 1996 and 2008)
from synoptic (radially corrected) magnetograms observed by SOHO/MDI\footnote{Retrieved from \url{http://soi.stanford.edu}.}
(see Fig. \ref{fig:mag_mdi_hist}). We computed the histograms for
100 bins from the logarithm of the absolute magnetic field strength
values. As expected, during the ascending phase of cycle 23, the probability
of the magnetic field strength increases at higher field strength,
which reduces the probabilities at lower $B_{z}$ (the total probability
is unity). The opposite is true for the declining phase.
Then, with the linear response function (Eq. \ref{eq:response}),
we can determine the total shift by integrating the probability distribution
function of the surface magnetic field, $f_{B_{z}}$, over the magnetic
field strength,
\begin{eqnarray}
\delta\nu_{nl}\left(t\right) & = & \frac{1}{B_{z,\mathrm{max}}}\int_{0}^{B_{z,\mathrm{max}}}\delta\nu_{n0}^{*}(B_{z},t)f_{B_{z}}(B_{z},t)\,dB_{z}.\label{eq:total_shift}
\end{eqnarray}
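A discretised version of this integral, keeping the $1/B_{z,\mathrm{max}}$ prefactor exactly as printed, can be written as follows (our own sketch; the uniform distribution below only checks the quadrature, it is not the MDI histogram):

```python
import numpy as np

def total_shift(B, f_B, response=lambda B: 26.21 * B):
    """Eq. (total_shift) on a discrete field-strength grid B (in kG):
    the response delta_nu*(B), weighted by the distribution f_B, is
    integrated with the trapezoidal rule and divided by B_max."""
    B = np.asarray(B, dtype=float)
    f_B = np.asarray(f_B, dtype=float)
    y = response(B) * f_B
    integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(B))
    return integral / B[-1]

# uniform distribution on [0, 1.5] kG as a quadrature check
B = np.linspace(0.0, 1.5, 300)
f_B = np.full_like(B, 1.0 / B[-1])   # normalised: integral of f_B dB = 1
shift = total_shift(B, f_B)
```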
We calculated the total shift (Eq. \ref{eq:total_shift}) for solar
cycle 23, which is shown in Fig. \ref{fig:mag_total_shift}. The total
shift is largest during the maximum (2002), leading to a shift of
$\sim0.2\,\mu\mathrm{Hz}$. \citet{2004MNRAS.352.1102C} considered
the frequency shifts for different phases during solar cycles 22 and
23. For the low-angular degree $l=0/2$ at the solar activity maximum
they found a mean shift of around $\sim0.2/0.4\,\mu\mathrm{Hz}$, which is
in very good agreement with our theoretical results. However, we
did not find a strong dependence on angular degree mode. From 3D MHD
models for stars other than the Sun, the linear response functions,
$\delta\nu_{nl}^{*}$, can be predicted. This opens the possibility
of inferring their magnetic field distribution function, $f_{B_{z}}$,
at their surface, by comparing and matching $f_{B_{z}}$ to observations
of their stellar cycles.
\section{Conclusions\label{sec:Conclusions}}
The turbulent expansion that elevates the depth-scale and reduces
the density stratification is very important for matching the observed
solar oscillation frequencies more accurately. On the other hand,
the differences in the stratification of the adiabatic exponent are
often very small; therefore, their effects on the oscillation
frequencies are accordingly less important. Furthermore, instead of
correcting for the frequencies, the surface of the solar model can
be corrected for by expanding the geometrical depth scale in a
depth-dependent way and by reducing the density by the turbulent pressure.
This leads to very similar results, as achieved with a $\left\langle \mathrm{3D}\right\rangle$ model
appended to a 1D model. Considering alternative reference depth scales
for determining the spatial averages for the 3D model leads to very
similar results, as achieved with averages over geometrical depth.
We also found that 1D solar models with higher metallicity result
in a better match of the low-frequency modes that probe the interior.
Finally, we found that strong magnetic fields have a distinct
influence by predominantly increasing the high-frequency range and
that the linear response is able to reproduce the solar activity cycles
properly.
\begin{figure*}
\subfloat[\label{fig:mag_mdi_hist}]{\includegraphics[width=88mm]{fig/mag_mdi}
}\subfloat[\label{fig:mag_total_shift}]{\includegraphics[width=88mm]{fig/mag_mdi_dnu}
}
\caption{\textbf{(a)} Histogram of magnetic field strength determined from
synoptic SOHO/MID maps for the ascending and declining phases (\emph{top}
and \emph{bottom} panel, respectively). \textbf{(b)} The integrated
frequency shift, $\delta\nu_{nl}\left(t\right)$ (Eq. \ref{eq:total_shift}),
vs. time for solar cycle 23. Both figures have the same colour-coding
for time.}
\end{figure*}
\begin{acknowledgements}
This work was supported by a research grant (VKR023406) from VILLUM
FONDEN.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}\label{intro}
\subsection{Motivation and Related Work}
The Internet of Things (IoT) is currently making a rapid transition from theory to practice. For instance, in Australia, large-scale IoT networks targeting smart cities~\cite{smartcity1},~\cite{smartcity2} and smart agriculture~\cite{agriculture} are currently being deployed. As we move towards a world filled with a large number of IoT devices, the means of sustainably powering these IoT devices is a key challenge. In this regard, far-field wireless power transfer (WPT) is a promising technology to provide convenient wireless charging to low power IoT devices~\cite{jayakody2017wireless,huang2015cutting,Bi2015,zeng2017sigdes}.\\
\indent The problem of efficient WPT from an energy transmitter (ET) to an energy receiver (ER) has received much attention in the literature~\cite{Zhang-2018survey,Alsaba2018,Liu-Zhang2014,yZeng2015a,yZeng2015b,gYang2014,xChen2014,hSon2014,park2015joint,jXu-2014}. Typically, WPT relies on highly directional beamforming to increase the end-to-end power efficiency and overcome the severe radio frequency (RF) signal attenuation over distance. In this regard, different beamforming architectures have been proposed~\cite{Zhang-2018survey,Alsaba2018,Liu-Zhang2014}. However, the implementation of these beamforming architectures requires channel state information (CSI) estimation at the ET~\cite{yZeng2015a,yZeng2015b} or at the ER~\cite{gYang2014,xChen2014,hSon2014,park2015joint}, or energy feedback from the ER to the ET~\cite{jXu-2014,Zeng-2015}. CSI estimation at the ER increases the complexity of the ER, which is undesirable. In addition, training methods suffer from high feedback overhead, which should also be avoided.\\
\indent \sugcom{Employing the concept of retrodirectivity is a promising solution to avoid the need for any CSI estimation for efficient WPT~\cite{zeng2017sigdes,Massa-2013}. Originally, retrodirective arrays, such as Van Atta array~\cite{Vanatta-1960} and Pon array~\cite{Pon-1964}, were proposed as `reflection type' arrays to reflect an incident signal back to the direction that it came from. This reflection of incoming waves is realized by their reversal in the time domain or phase conjugation in the frequency domain. Recently, more advanced versions of arrays employing the retrodirective principle have been developed for the purpose of WPT~\cite{Massa-2013,Jandhyala-2012,RodenbeckPhased-2004,Hsieh-2003,RodenbeckLimitation-2005}. The retrodirective WPT exploits channel reciprocity and provides WPT without explicit channel estimation. In particular, it involves the ET equipped with a phased array, providing WPT to an ER. This is accomplished by the ET first receiving a signal from the prospective ER, which then serves as a reference signal to steer a beam back towards the ER. This is done by conjugating this received signal and using this conjugated signal to set the phase of an energy signal such that it is emanated towards the ER~\cite{zeng2017sigdes}. In this regard, a novel massive MIMO retrodirective WPT scheme was proposed in~\cite{Lee-2018}. However, this scheme still required active signal transmission from the ER to initiate WPT, which consumes energy and may not be desirable for low power IoT devices. The active signal transmission from the ER to the ET was avoided in~\cite{Yang-2015csi} by enabling the ER to backscatter the pilots emitted by the ET. However, conventional beamforming was still employed at the ET. A WPT scheme employing monostatic backscatter at the ER and retrodirective WPT at the ET was proposed in~\cite{Krikidis-2018}. However, the charging request was initiated by the ET using active transmission.}\\
\indent Backscatter communication is a promising ultra-low power wireless communication paradigm, which eliminates the need for active transmission by the low power IoT devices~\cite{Huynh-2018,liu2019next}. Conventional monostatic backscatter systems enable a tag to transmit to the reader by reflecting the RF signal sent by the reader itself. Recently, ambient backscatter, which enables the tag to make use of ambient RF signals generated from ambient RF sources for communication, has attracted a lot of attention~\cite{DevineniBER-2019,liu-2013ambient,parks2015turbocharging,Kimionis-2014,Wang-2016,Qian-2017,DevineniNonCoh-2019,kellogg2016passive,Yang-2018air,zhang2016enabling,iyer2016inter,bharadia2015backfi,zhang2016hitchhike}. A key issue in ambient backscatter communication systems is the direct-link interference that the RF ambient source causes to the tag. This is due to the fact that the ambient signals are omnipresent and much stronger than their backscattered versions. Numerous works in the literature \sugcom{evaluate the impact of this direct-link interference on different aspects of system performance, e.g., bit error rate (BER) of ambient backscatter communication~\cite{DevineniBER-2019}} and propose different techniques to resolve this issue~\cite{kellogg2016passive,Kimionis-2014,Wang-2016,Qian-2017,DevineniNonCoh-2019,zhang2016enabling,iyer2016inter,Yang-2018air,bharadia2015backfi,zhang2016hitchhike}. One approach is to consider this direct-link interference as a component of the background noise~\cite{Kimionis-2014,Wang-2016,Qian-2017}. However, since the backscatter signal is very weak as compared to the ambient signal, such schemes do not perform well. 
\sugcom{The authors of~\cite{DevineniNonCoh-2019} demonstrated the existence of a BER floor in a single-antenna backscatter device and used multiple antennas to cancel the direct-link interference in a non-coherent receiver setup.} Other approaches involve general signal processing techniques~\cite{kellogg2016passive,Yang-2018air,zhang2016enabling,iyer2016inter} or backscatter-specific solutions such as frequency shifting~\cite{bharadia2015backfi,zhang2016hitchhike}. In this regard, to the best of our knowledge, the use of Direct Sequence Spread Spectrum (DSSS) has not been considered to date.
\subsection{Our Contributions}
\sugcom{In this paper, we consider a scenario with an ET equipped with a large phased antenna array capable of retrodirective WPT and an ER equipped with an ambient backscatter tag. The fundamental signal recovery problem at the ET is then: \textit{How to recover the weak backscattered signal in the presence of strong direct-link ambient interference?} We consider this problem assuming general Nakagami-$m$ fading and non-linear energy harvesting model.} In this context, our main contributions are:
\begin{itemize}
\item Taking inspiration from DSSS, we consider an ambient backscatter training scheme in which we vary the backscatter coefficient at the ER. This in effect multiplies the backscattered signal with a DSSS training signal and aims to capitalize on the spreading gain to boost the backscattered signal. We show that with a pseudo-noise (PN) training sequence, the average harvested power at the ER is small and it even reduces as the training period increases. This is due to the fact that the use of PN training sequence completely fails in dealing with the strong direct-link ambient interference.
\item We then propose the design of the training sequence (i.e., the pattern of varying the reflection coefficient), to completely eliminate the direct-link ambient interference. We show that when the ambient symbol duration is known, the ambient interference is cancelled as long as there are equal number of $+1$ and $-1$ chips over one ambient symbol. The number of chips or equivalently the switching rate does not matter in this case. Hence, we can use the slowest switching rate, i.e., we can switch the backscatter coefficient only twice per ambient symbol period. We analytically model the system and derive a closed-form expression for the average harvested power at the ER. We show that this deterministic training sequence scheme has superior performance as compared to the PN training sequence scheme.
\item Finally, we show that the proposed solution is robust to small timing offset mismatch at the correlator. This is because the undesired component is still perfectly eliminated. However, good synchronization is needed for the best performance. In addition, when the ambient duration is unknown, the power transfer performance under the proposed deterministic training scheme can be severely degraded. This is due to unequal durations of $+1$ and $-1$ chips in one ambient symbol. We show that in this mismatched case, the number of chips does matter, i.e., it is best to use a fast switching rate to minimize the effect of the uncancelled ambient. \redcom{In addition, we consider interference from neighbouring signals in the ambient environment, which is shown to impact the energy harvesting performance. However, the system can still harvest tens to hundreds of $\mu$W of power if these interference signals from neighbouring ambient sources are significantly weaker than the direct-link ambient signal.}
\end{itemize}
\begin{table}[t]
\centering
\caption{Summary of main mathematical symbols.}\label{tb:2}
\begin{tabular}{|c||c|l|} \hline
& Symbol & Description \\ \hline%
\parbox[t]{1mm}{\multirow{20}{*}{\rotatebox[origin=c]{90}{System Parameters}}}
&&\\
&$\alpha$& Path-loss exponent \\
&$\gamma$& Large scale channel attenuation \\
&$\beta$& Backscatter coefficient \\
&${m_g}$& Nakagami fading parameter for AS $\,\to\,$ ER link \\
&${m_h}$ & Nakagami fading parameter for AS $\,\to\,$ ET link\\
&${m_f}$& Nakagami fading parameter for ER $\,\to\,$ ET link \\
&$\sigma_n^2$& Variance of AWGN \\
& $d_1$& Distance between the AS and the ER \\
&$d_2$& Distance between the ER and the ET \\
&$d_3$& Distance between the AS and the ET\\
&$P_s$& Transmit power of the AS\\
& ${T_b}$ & Duration of backscatter phase \\
& ${T_c}$ & Chip duration (fixed backscatter coefficient)\\
& ${T_s}$ & Duration of one ambient symbol \\
& ${T_{\textrm{off}}}$ & Duration of offset mismatch at the correlator\\
& $N_c$& Number of chips during backscatter phase \\
& $M$ & Number of antennas at the ET \\
& $N_s$ & Number of ambient signals in one backscatter phase \\
& ${c_n}$ & $n$-th chip in the training sequence \\
& $P_t$& Transmit power of the ET \\
&&\\
\hline
\parbox[t]{1mm}{\multirow{8}{*}{\rotatebox[origin=c]{90}{Random Variables}}}
&&\\
& ${s_i}$ & $i$-th ambient symbol \\
& $g$ & Channel from the AS to the ER \\
& $\mathbf{h}$ & Channel from the AS to the ET \\
& $\mathbf{f}$ & Channel from the ER to the ET\\
& $\mathbf{{r}_{\textrm{ET}}}$ & Signal received at the ET during the backscatter phase\\
& ${r}_{\textrm{ER}}$ & Signal received at the ER during the power transfer phase \\
&&\\
\hline
\end{tabular}
\end{table}
\subsection{Notation and Paper Organization}
The following notation is used in this paper. Pr($\cdot$) indicates the probability measure and $\mathop{\mathbb{E}}[\cdot]$ denotes the expectation operator. $f_X (x)$ denotes the probability density function (pdf) of a random variable $X$. For a complex-valued vector $\mathbf{v}$, $\mathbf{v}^*$, $\mathbf{v}^T$ and $\mathbf{v}^H$ denote the conjugate, transpose and conjugate transpose, while the norm of the vector $\mathbf{v}$ is given by $\left\|\mathbf{v}\right\| = \sqrt{\mathbf{v}^H \mathbf{v}}$. \sugcom{Finally, $\exp(\cdot)$ is the exponential function.} A list of the main mathematical symbols employed in this paper is given in Table~\ref{tb:2}.\\
\indent \sugcom{The rest of the paper is organized as follows. Section~\ref{sys_model} describes the system model and assumptions, along with the proposed wireless power transfer scheme and its phases. Section~\ref{sig_model} presents the signal model of the system in terms of mathematical equations and defines the metric of interest. Section~\ref{PNseq} gives the analysis of the proposed scheme with a PN sequence applied at the ER. Section~\ref{detseq} proposes the design of the deterministic training sequence for the elimination of the direct-link ambient interference and also gives the analysis of the system in this scenario. Section~\ref{imperf} deals with the impact of practical system aspects like imperfect synchronization at the correlator and change in ambient symbol duration. Section~\ref{res} presents the numerical results. Finally, the last section concludes the paper}.
\section{System Model}\label{sys_model}
\indent We consider a WPT scenario with an ambient source (AS), an energy transmitter (ET) and an energy receiver (ER). The signal broadcasted from the AS is received by both the ET and the ER. We study the design of wireless power transfer (WPT) from the ET to the ER, as illustrated in Fig.~\ref{smodel}.\\
\begin{figure}
\centering
\includegraphics[width=0.5 \textwidth]{sysmodel-eps-converted-to.pdf}
\caption{Illustration of the system model.}
\label{smodel}
\end{figure}
\indent The ER is a device (e.g., a sensor) that is capable of backscatter transmissions. It is composed of a single antenna element, a micro-controller, a variable impedance and an energy harvester. We also assume that the ER is equipped with an ideal energy storage element (e.g., a supercapacitor) for storing the energy transferred by the ET. The block diagram of the ER is illustrated in Fig.~\ref{bdiag}a.\\
\indent The ET is connected to the power grid and transmits with a fixed power $P_t$ using a \sugcom{phased antenna array} with $M$ elements, where $M$ is large, which ensures that the ET forms a narrow, focused beam. The block diagram of the ET is illustrated in Fig.~\ref{bdiag}b.
\subsection{Channel Assumptions}
\indent We assume that all the channel links are composed of large-scale path loss, with exponent $\alpha$. \sugcom{The block fading for all links is modelled as independent and identically distributed (i.i.d.) Nakagami-$m$ fading, with $m_g$, $m_f$ and $m_h$ being the Nakagami-$m$ parameters of the AS to ER, ER to ET and AS to ET channels, respectively.} We denote the distances between AS $\,\to\,$ ER, ER $\,\to\,$ ET and AS $\,\to\,$ ET by $d_1$, $d_2$ and $d_3$ respectively. Thus, large-scale attenuation is modelled as $ \gamma_i = k_0 (d_i/d_0)^{-\alpha} $ where $k_0$ is the constant attenuation for path-loss at a reference distance of $d_0$ and $i \in \{1,2,3\}$.\\
\indent The ER $\,\to\,$ ET, AS $\,\to\,$ ET and AS $\,\to\,$ ER fading channel coefficients, denoted by $\mathbf{f}$, $\mathbf{h}$ and $g$ respectively, are modeled as quasi-static and frequency non-selective. In the special case of Rayleigh fading, i.e., $m_g = m_f = m_h = 1$, the complex fading channel coefficient $g$ is a circularly symmetric complex Gaussian random variable with zero mean and unit variance, and $\mathbf{f}$ and $\mathbf{h}$ are uncorrelated circularly symmetric complex Gaussian random vectors, i.e., $\mathbf{f} = [f_1, \dots , f_M]^T \sim \mathcal {CN} (0,\boldsymbol{I}_M$) and $\mathbf{h} = [h_1, \dots , h_M]^T \sim \mathcal{CN} (0,\boldsymbol{I}_M$). We make the following assumptions regarding the channels:
\begin{itemize}
\item The fading channel coefficients are assumed to be constant over the duration of one set of backscatter and power transfer phases, i.e., $T_b + T_p$ seconds and independent and identically distributed from one $T_b + T_p$ slot to the next. The use of such channels is in line with the recent work in this research field~\cite{Krikidis-2018, Lee-2018, Tao-2018}.
\item We assume channel reciprocity, i.e., the channel from ER $\,\to\,$ ET during the backscatter phase and the channel from ET $\,\to\,$ ER during the power transfer phase are constant and transpose of each other~\cite{Lee-2018,gYang2014,xChen2014,hSon2014,park2015joint,jXu-2014,Zeng-2015}.
\item In this work, we do not need to make any channel state information (CSI) assumption at the ET or the ER, as the retrodirective WPT technique precludes the need for CSI at either ET or the ER.
\end{itemize}
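To make the channel model concrete, the following Python sketch (not part of the paper; it assumes numpy, and all numerical values for $\alpha$, $k_0$, the distances and the fading parameters are illustrative) draws unit-power Nakagami-$m$ fading coefficients by sampling the channel power from a Gamma distribution:

```python
import numpy as np

def path_loss(d, alpha=2.5, k0=1e-3, d0=1.0):
    """Large-scale attenuation gamma_i = k0 * (d_i / d0)^(-alpha)."""
    return k0 * (d / d0) ** (-alpha)

def nakagami_channel(m, size, rng):
    """Complex coefficients with Nakagami-m amplitude (unit mean power)
    and i.i.d. uniform phase; |h|^2 ~ Gamma(m, 1/m) gives E[|h|^2] = 1."""
    power = rng.gamma(shape=m, scale=1.0 / m, size=size)
    phase = rng.uniform(0.0, 2.0 * np.pi, size=size)
    return np.sqrt(power) * np.exp(1j * phase)

rng = np.random.default_rng(0)
M = 64                                   # antennas at the ET (illustrative)
g = nakagami_channel(2.0, 1, rng)[0]     # AS -> ER (scalar, parameter m_g)
f = nakagami_channel(2.0, M, rng)        # ER -> ET (parameter m_f)
h = nakagami_channel(2.0, M, rng)        # AS -> ET (parameter m_h)
gamma1, gamma2, gamma3 = (path_loss(d) for d in (5.0, 8.0, 10.0))
```

Since the Gamma-distributed power has unit mean, the generated coefficients are normalized in the same way as the fading model above.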
\subsection{ Proposed Transmission Phases}
\indent The wireless power transfer from the ET to the ER takes place in two phases: (i) the backscatter phase and (ii) the power transfer phase, as shown in Fig.~\ref{smodel}. During the first backscatter phase of duration $T_b$, the ER initiates a request for WPT by sending a backscattered ambient signal to the ET. During the second power transfer phase of duration $T_p$, the ET performs retrodirective energy beamforming towards the ER. Note that in this work we will study the effect of varying the backscatter phase duration $T_b$, while we assume unit time in the power transfer phase.
\subsubsection{The Backscatter Phase}
\indent The backscattering at the ER is achieved by adapting the level of the antenna impedance mismatch, which affects the power of the reflected signal. During the backscatter phase of duration $T_b$ seconds, the switch in Fig.~\ref{bdiag}a stays in position 1 and the ER backscatters the ambient signal given by $r_b(t) = \sqrt{\gamma_1} g \beta s(t)$ where $\beta$ is the backscatter reflection coefficient and $\sqrt{\gamma_1}gs(t)$ is the ambient signal arriving at the ER to be backscattered after suffering large-scale attenuation $\gamma_1$ and channel coefficient $g$. In this work, we consider a BPSK-like backscatter coefficient having two different values, i.e., $\beta = \pm 1$.\footnote{$\beta$ can assume any pair of values $|\beta| \leq 1$. However, for simplicity we assume that $|\beta| = 1$.} The backscatter training means that the tag backscatters the ambient signal while switching the backscatter coefficient $N_c$ times\footnote{\sugcom{In practice, the switching of the backscatter coefficient would be activated using an oscillator. The state-of-the-art low power backscatter tags have internal oscillators that consume only tens of microwatts of power~\cite{zhang2016enabling} and are feasible to be employed in our system model.}} according to a pre-defined sequence between the values $+1$ and $-1$ at a rate of $\frac{1}{T_c}$, where $T_c$ is the duration for which the backscatter coefficient maintains a certain value. This is effectively equivalent to multiplying the backscattered signal with a training signal $c(t)$ of $N_c$ short duration pulses of amplitude $+1$ and $-1$. Thus, at a given time instant $t$, the backscattered signal from the ER is given by $r_b(t) = \sqrt{\gamma_1} g c(t) s(t)$, where $\gamma_1$, $g$ and $s(t)$ are as given above and $c(t)$ is the training signal composed of a sequence of $+1$ and $-1$ pulses governed by the backscatter coefficient. 
This training sequence applied at the ER is quite similar to the Direct Sequence Spread Spectrum (DSSS)~\cite{goldsmith-2005}\footnote{\sugcom{The signal backscattered from the ER is spread in frequency. However, its in-band and out-of-band interference to the licensed users is negligible since it is very weak, i.e., it is being generated by ambient backscatter and not active transmission~\cite{Huynh-2018}.}}. Henceforth, we will also refer to the short duration pulses of switching the reflection coefficient as `chips' and $T_c$ as the chip duration due to the similarity of this scenario with DSSS.\\
\indent The ET receives the composite signal consisting of the backscattered signal from the ER as well as the ambient signal and noise. The ET correlates this composite signal with the known training sequence $c(t)$. In this work, we assume perfect timing synchronization at the ET, in the baseline case. We then investigate the impact of imperfect synchronization in Section~\ref{imperf}.\\
\indent The purpose of using backscatter training is as follows. In general, the ambient signal is much stronger than the backscattered signal. This is because the latter suffers pathloss and attenuation twice and is orders of magnitude smaller than the former. The training performed at the ER before backscattering opens up a possibility for dealing with this issue of direct-link interference from the ambient signal at the ET. This is discussed in Section~\ref{detseq}.
\begin{figure}
\centering
\includegraphics[width=0.5 \textwidth]{blockdiag-eps-converted-to.pdf}
\caption{Block diagram of the energy transmitter and receiver.}
\label{bdiag}
\end{figure}
\subsubsection{The Power Transfer Phase}
During the power transfer phase, the ET provides retrodirective wireless power transfer to the ER. Specifically, the ET conjugates the phase of the de-spread signal and each antenna at the ET sends a single-tone sinusoidal waveform towards the ER as shown in Fig.~\ref{bdiag}b. The phase and amplitude of this waveform are set according to the conjugated signal, subject to the maximum total transmit power $P_t$ at the ET. The switch in the ER in Fig.~\ref{bdiag}a moves to position 2. Consequently, the ER stops backscattering and only harvests energy from the energy beam directed to it by the ET. This energy is stored in the energy storage device in the ER. Note that during the backscatter phase when the ER is backscattering the ambient signals, the energy harvester remains idle and can complete the rectification and storage of energy.
\section{Signal Model}\label{sig_model}
In this section, we present the signal equations that form the basis of analysis and design in the later sections. We adopt a continuous-time baseband signal model.
\subsection{The Ambient Signal}
For simplicity, similar to previous work~\cite{Tao-2018}, we model the ambient signal as
\begin{align}\label{ambient}
\ s(t) = \sqrt{P_s}\sum_{i=1}^{\infty} s_i p_{s}(t-iT_s),
\end{align}
where $s_i \sim \mathcal {CN} (0,1)$ and $p_{s}(t)$ is a rectangular pulse of duration $T_s$ given by
\begin{align}\label{pulse_amb}
p_s(t) =\begin{cases}
1, & 0\leq t\leq T_s\\
0, & \mathrm{otherwise}.
\end{cases}
\end{align}
Note that the power of an ambient symbol in~\eqref{ambient} is $P_s$.
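As a minimal sampled-baseband sketch of this ambient model (the discretization and parameter values are our own illustrative assumptions, not part of the paper):

```python
import numpy as np

def ambient_signal(num_symbols, samples_per_symbol, Ps, rng):
    """Sampled ambient signal: CN(0,1) symbols of power Ps, each held
    on a rectangular pulse of duration T_s (samples_per_symbol samples)."""
    s = (rng.standard_normal(num_symbols)
         + 1j * rng.standard_normal(num_symbols)) / np.sqrt(2.0)
    return np.sqrt(Ps) * np.repeat(s, samples_per_symbol), s

rng = np.random.default_rng(1)
sig, syms = ambient_signal(num_symbols=4, samples_per_symbol=8, Ps=2.0, rng=rng)
```

Each symbol is constant over its pulse, and the empirical symbol power converges to $P_s$ over many realizations.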
\subsection{The Backscatter Phase}
In the backscatter phase, as described in Section II, the backscattered signal from the ER is given by
\begin{align}\label{back_sig}
\ r_b(t) = \sqrt{ \gamma_1} g c(t)s(t),
\end{align}
where $c(t)$ is the training sequence with length $N_c$ and chip duration $T_c$. It can be modelled as
\begin{align}\label{chip_seq}
\ c(t) = \sum_{n=0}^{N_c-1} c_n p_c(t-nT_c),
\end{align}
where $c_n$ is the $n$-th chip ($+1$ or $-1$) of the training sequence and $p_c(t)$ is a rectangular pulse of duration $T_c$, i.e.,
\begin{align}\label{pulse_chip}
p_c(t) =\begin{cases}
1, & 0\leq t\leq T_c\\
0, & \mathrm{otherwise}.
\end{cases}
\end{align}
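The training signal in~\eqref{chip_seq} can be sketched numerically in the same way (an illustrative helper of our own, not from the paper):

```python
import numpy as np

def training_signal(chips, samples_per_chip):
    """Piecewise-constant training signal c(t): each chip c_n in {+1, -1}
    is held for one chip duration T_c (samples_per_chip samples)."""
    chips = np.asarray(chips, dtype=float)
    assert set(np.unique(chips)) <= {-1.0, 1.0}, "chips must be +/-1"
    return np.repeat(chips, samples_per_chip)

c = training_signal([+1, -1, -1, +1], samples_per_chip=4)
```

Note that $c(t)^2 = 1$ at every instant, a property the de-spreading step below relies on.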
The received signal at the ET is given by
\begin{align}\label{rec_sig}
\mathbf{r}_\textrm{ET}(t) &= \sqrt{\gamma_2}\mathbf{f} r_b(t)+ \sqrt{\gamma_3}\mathbf{h} s(t) + \mathbf{n}(t)\nonumber\\
&= \sqrt{\gamma_1 \gamma_2} g \mathbf{f} c(t)s(t) + \sqrt{\gamma_3} \mathbf{h} s(t) + \mathbf{n}(t) ,
\end{align}
where $\mathbf{n}(t)\sim \mathcal{CN} (0,{\sigma_n}^2 \boldsymbol{I}_M)$ is the AWGN. Note that $\mathbf{r}_\textrm{ET}(t)$ is a composite signal with three components, i.e., the backscattered signal from the ER, the ambient signal from the AS and the AWGN.
The ET correlates this composite signal with the known training sequence with perfect frame synchronization to give
{\small \begin{align}\label{despread}
\mathbf{x}_{r} &= \frac{1}{\ N_cT_c} \int \limits_{0}^{N_cT_c} \mathbf{r}_\textrm{ET} (t) c(t) dt\nonumber\\
&= \underbrace{\frac{1}{\ N_cT_c}\int \limits_{0}^{N_cT_c} \sqrt{\gamma_1\gamma_2} g \mathbf{f} c(t)s(t)c(t) dt}_{\mathbf{x}_{s}} \nonumber\\
&+ \underbrace{\frac{1}{\ N_cT_c}\int \limits_{0}^{N_cT_c} \sqrt{\gamma_3} \mathbf{h} s(t)c(t) dt}_{\mathbf{x}_{i}} + \underbrace{\frac{1}{\ N_cT_c}\int \limits_{0}^{N_cT_c} \mathbf{n}(t) c(t)dt}_ {\mathbf{\widetilde{n}}},
\end{align}} where $\mathbf{x}_{s}$ and $\mathbf{x}_{i}$ are the desired signal and the undesired ambient (i.e., interference) components at the output of the correlator. Substituting the value of $c(t)$ from~\eqref{chip_seq}, we get $\mathbf{x}_{s}$ and $\mathbf{x}_{i}$ as
{\small \begin{align}
\mathbf{x}_{s}&= \frac{\sqrt{\gamma_1 \gamma_2} g \mathbf{f}}{\ N_cT_c} \int \limits_{0}^{N_cT_c} s(t) \sum_{n=0}^{N_c-1} c_n p_c(t-nT_c)\sum_{m=0}^{N_c-1} c_m p_c(t-mT_c)
dt,\nonumber\\
&=\frac{\sqrt{\gamma_1 \gamma_2} g \mathbf{f}}{\ N_cT_c} \int \limits_{0}^{N_cT_c} \sum_{n=0}^{N_c-1}c_n^2 s(t){p_c}^2(t-nT_c)dt. \label{xs} \\
\mathbf{x}_{i} &= \frac{1}{\ N_cT_c} \int \limits_{0}^{N_cT_c} \sqrt{\gamma_3} s(t)c(t)\mathbf{h} dt, \nonumber\\
&= \frac{\sqrt{\gamma_3} \mathbf{h}}{ N_cT_c} \int \limits_{0}^{N_cT_c} s(t)\sum_{n=0}^{N_c-1} c_n p_c(t-nT_c)dt. \label{xi}
\end{align}}
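The decomposition in~\eqref{despread}--\eqref{xi} can be checked numerically. The sketch below (our own toy example with illustrative attenuations, Rayleigh fading as the $m=1$ special case, and noise omitted) builds the composite received signal, correlates it with $c(t)$, and verifies that the correlator output splits into the desired and interference components:

```python
import numpy as np

rng = np.random.default_rng(2)
M, Ns, Nc, spc = 8, 2, 16, 10        # antennas, ambient symbols, chips, samples/chip
n_samp = Nc * spc                    # one backscatter phase: T_b = Nc*Tc = Ns*Ts

# Rayleigh (m = 1) channels and illustrative attenuations; noise omitted
g = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2.0)
f = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2.0)
h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2.0)
g1, g2, g3 = 1e-4, 1e-4, 1e-3

s = np.repeat((rng.standard_normal(Ns)
               + 1j * rng.standard_normal(Ns)) / np.sqrt(2.0), n_samp // Ns)
c = np.repeat(rng.choice([-1.0, 1.0], Nc), spc)      # PN training sequence

r_ET = (np.sqrt(g1 * g2) * g * np.outer(f, c * s)    # backscattered component
        + np.sqrt(g3) * np.outer(h, s))              # direct-link ambient component
x_r = (r_ET * c).mean(axis=1)                        # (1/NcTc) * correlation integral

# since c(t)^2 = 1, the desired part collapses to the time-average of s(t)
x_s = np.sqrt(g1 * g2) * g * f * s.mean()
x_i = np.sqrt(g3) * h * (c * s).mean()
```

With these toy values the direct-link term dominates the backscattered one, which is exactly the recovery problem addressed in the following sections.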
\subsection{Power Transfer Phase}\label{pt_sig_model}
Once the received signal is correlated with the local copy of the training sequence, the phase of the signal at the output of the correlator in~\eqref{despread} is conjugated in accordance with the principle of retrodirective WPT. This conjugated signal then controls the phase and amplitude of the ET's energy signal, subject to the maximum total transmit power $P_t$ at the ET. It is given as in~\cite{Lee-2018},
\begin{align}\label{xt}
\mathbf{x}_t &= \sqrt{P_t} \frac{(\mathbf{x}_{r})^*}{\left\|\mathbf{x}_{r}\right\|},
\end{align}
where $\left\|\mathbf{x}_{r}\right\| = \sqrt{{\mathbf{x}_{r}}^H \mathbf{x}_{r}}$. Note that in~\eqref{xt}, we have dropped the time index $t$ because the baseband signal $\mathbf{x}_t$ does not vary with time.
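A one-line numerical sketch of the conjugate-and-scale step in~\eqref{xt} (the helper name is ours, and the input vector is an arbitrary example):

```python
import numpy as np

def retrodirective_weights(x_r, Pt):
    """Conjugate the correlator output and scale so that the total
    transmit power equals Pt (norm taken with the conjugate transpose)."""
    return np.sqrt(Pt) * np.conj(x_r) / np.linalg.norm(x_r)

x_r = np.array([1.0 + 1.0j, 2.0 - 1.0j, -0.5j])
x_t = retrodirective_weights(x_r, Pt=4.0)
```

The output phases are the negatives of the input phases, so the re-radiated beam retraces the incoming wavefront.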
The signal received by the ER in the power transfer phase is given by
\begin{align}\label{rER}
{r}_{\textrm{ER}} &= \sqrt{\gamma_2}\mathbf{f}^T \mathbf{x}_t, \nonumber\\
&= \sqrt{\gamma_2 P_t} \frac{\left(\mathbf{f}^T {\mathbf{x}_{s}}^*+ \mathbf{f}^T {\mathbf{x}_{i}}^*+ \mathbf{f}^T {\mathbf{\widetilde{n}}}^*\right)}{\left\|\mathbf{x}_{s}+ \mathbf{x}_{i} + \mathbf{\widetilde{n}}\right\|},
\end{align}
where $\mathbf{x}_{s}$ is given in~\eqref{xs}, $\mathbf{x}_{i}$ is given in~\eqref{xi} and $ \mathbf{\widetilde{n}}\sim \mathcal{CN} (0,\frac{{\sigma_n}^2}{N_cT_c}\boldsymbol{I}_M)$ is the noise at the output of the matched filter. Note that the receiver noise at the ER is not included in~\eqref{rER} because it is irrelevant to energy harvesting.
\subsection{Non-linear Energy Harvester}
In this work, we have assumed that the ER is equipped with a non-linear energy harvester modelled as follows~\cite{Alevizos2018nonlinear,Boshkovska-2017,Clerckx-2019WIPT}. Assuming that the incident RF power on the ER is $Q_{RF} = |{r}_{\textrm{ER}}|^2$, where ${r}_{\textrm{ER}}$ is the received signal at the ER during power transfer phase as given in~\eqref{rER}, the instantaneous harvested power by the energy harvester in the ER is given by
\begin {align}\label{qi_def}
Q = \frac{\frac{c_0}{1+\exp(-a_0(Q_{RF}-b_0))}-\frac{c_0}{1+\exp(a_0b_0)}}{1 - \frac{1}{1+\exp(a_0b_0)}},
\end{align}
where the parameters $a_0$, $b_0$ and $c_0$ respectively reflect the nonlinear charging rate with respect to the input power, the minimum turn-on voltage and the maximal harvested power when the energy harvester is drawn into saturation.
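A direct implementation of this harvester model is straightforward; the parameter values below for $a_0$, $b_0$ and $c_0$ are illustrative assumptions (in the spirit of commonly used sigmoidal harvester fits), not values prescribed by the paper:

```python
import math

def harvested_power(q_rf, a0=150.0, b0=0.014, c0=0.024):
    """Sigmoidal non-linear harvester: returns 0 at zero incident power
    and saturates at c0 for large incident power q_rf (all in watts)."""
    omega = 1.0 / (1.0 + math.exp(a0 * b0))
    logistic = c0 / (1.0 + math.exp(-a0 * (q_rf - b0)))
    return (logistic - c0 * omega) / (1.0 - omega)
```

By construction the model is monotonically increasing, vanishes at $Q_{RF}=0$, and saturates at $c_0$, capturing the turn-on and saturation behaviour described above.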
\subsection{Metric}
In this work, we use the average harvested power at the ER, $\overline{Q}$, as the figure of merit. \sugcom{It is defined as
\begin {align}\label{q_def}
\overline{Q} = E[Q],
\end{align}
where $Q$ is the instantaneous harvested power given by~\eqref{qi_def}}.\footnote{In this work, we assume unit time in the power transfer phase. Hence, we use the terms energy and power interchangeably.}
\section{Analysis of Energy Harvested with a PN Sequence}\label{PNseq}
In this section, we discuss the ambient backscatter training performed at the ER. As explained before, the ET receives a backscattered ambient signal from the ER. In addition to this, the ET also receives the original ambient signal which is orders of magnitude stronger than its backscattered version from the ER. This is due to the fact that the backscatter signal suffers attenuation twice, i.e., in going from AS to ER and then from ER to ET. As a result, it is considerably weakened and the signal received at the ET during the backscatter phase is predominantly composed of the ambient component.\\
\indent This problem of recovering the weak backscatter signal in the presence of a much stronger unwanted ambient signal is quite similar to the signal recovery problem in the direct sequence spread spectrum (DSSS). Taking inspiration from that, we consider a pseudo-noise (PN) training sequence at the ER when backscattering, i.e., the backscatter coefficient is switched between $+1$ and $-1$ in a pseudo-random fashion. By doing this, we expect to capitalize on the spreading gain and boost the backscatter signal against the direct-link ambient interference. In order to assess this technique and the impact of the spreading gain, we evaluate the power harvested at the ER during the power transfer phase of this scheme. We assume that the number of ambient symbols in the backscatter phase is $N_s$, i.e., $T_b = N_sT_s = N_cT_c$.\\
\indent We analyze the expressions for the desired signal component and the undesired ambient component to find the energy harvested by the ER in the following two cases: (i) $N_s \leq N_c$ and (ii) $N_s \geq N_c$. The main result is presented in the proposition below.\\
\begin{proposition}
For the system model considered in Section~\ref{sys_model} with Nakagami-$m$ fading channels when the number of antennas at the ET $M\,\to\, \infty$, the incident RF power on the ER is given by~\eqref{q_instant}
where
{\begin{align}
\mu &= \left|\sum_{i=1}^{N_s} s_i\right|^2 = \left|\sum_{i=1}^{N_s} s_i^*\right|^2,\label{mu}\\
\nu &= \left|\sum_{i=1}^{N_s} \sum_{n=\frac{N_c}{N_s}(i-1)}^{\frac{N_c}{N_s}i-1} c_n s_i^*\right|^2 = \left|\sum_{i=1}^{N_s} \sum_{n=\frac{N_c}{N_s}(i-1)}^{\frac{N_c}{N_s}i-1} c_n s_i\right|^2,\label{nu}
\end{align}}
for simplicity. Substituting this value of $Q_{RF}$ in~\eqref{qi_def}, we get the instantaneous harvested power at the ER, from which the average harvested power is calculated according to~\eqref{q_def}.
\vspace{5mm}
\begin{figure*}[t]
\begin{equation}\label{q_instant}
{ Q_{RF} =
\begin{cases}
\gamma_2 P_t \left(\dfrac{\gamma_1 {\gamma_2} |g|^2 \mu \left(M+\dfrac{1}{m_f}\right) + \gamma_3 \nu \left(\dfrac{N_s}{N_c}\right)^2 + \dfrac{\sigma_n^2 N_s}{T_s P_s}} { \gamma_1 \gamma_2 |g|^2 \mu + \gamma_3 \nu \left(\dfrac{N_s}{N_c}\right)^2 + \dfrac{\sigma_n^2 N_s}{T_s P_s}}\right) & \text{if $N_s \leq N_c$}\\
\gamma_2 P_t \left(\dfrac{\gamma_1 {\gamma_2} |g|^2 \mu \left(M+\dfrac{1}{m_f}\right) + \gamma_3 \nu + \dfrac{\sigma_n^2 N_s}{T_s P_s}} { \gamma_1 \gamma_2 |g|^2 \mu + \gamma_3 \nu + \dfrac{\sigma_n^2 N_s}{T_s P_s}}\right) & \text{if $N_s \geq N_c$}\\
\end{cases}}
\end{equation}
\rule{18.2cm}{0.5pt}
\vspace{-5mm}
\end{figure*}
\end{proposition}
\begin{IEEEproof}
See Appendix~\ref{a}.
\end{IEEEproof}
\indent The general expression for the instantaneous harvested power in~\eqref{q_instant} has two mutually dependent random variables $\mu$ and $\nu$, in addition to $g$, $\mathbf{f}$ and $\mathbf{h}$. In addition, due to the nonlinear nature of the energy harvester, the overall expression for $\overline{Q}$ in~\eqref{q_def} is fairly complex. Therefore, it is not possible to obtain a closed-form expression for the expected value of the harvested power. However, we can easily find the average harvested power by numerically averaging~\eqref{q_instant} substituted in~\eqref{qi_def} over a large number of Monte Carlo realizations. Our simulation results in Section~\ref{res} confirm the accuracy of this approach.\\
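The Monte Carlo averaging described above can be sketched as follows; the exponential incident-power distribution is a toy stand-in for the actual distribution of $Q_{RF}$, and the harvester constants are the same illustrative values used earlier:

```python
import math
import random

def harvester(q_rf, a0=150.0, b0=0.014, c0=0.024):
    """Sigmoidal non-linear harvester (same model as Eq. (12))."""
    omega = 1.0 / (1.0 + math.exp(a0 * b0))
    return (c0 / (1.0 + math.exp(-a0 * (q_rf - b0))) - c0 * omega) / (1.0 - omega)

def mc_average(sample_q_rf, n=50000, seed=3):
    """Monte Carlo estimate of the average harvested power E[Q]."""
    rng = random.Random(seed)
    return sum(harvester(sample_q_rf(rng)) for _ in range(n)) / n

# toy stand-in distribution for the incident RF power Q_RF (mean 20 mW)
qbar = mc_average(lambda rng: rng.expovariate(1.0 / 0.02))
```

Replacing the toy sampler with draws of~\eqref{q_instant} over channel and symbol realizations yields the averages reported in the numerical results.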
\indent We have presented the average harvested power for the two possible cases of $N_s \leq N_c$ and $N_s \geq N_c$ in~\eqref{q_instant}. However, we will show in Fig.~\ref{eh1} in Section~\ref{res} that the harvested power becomes very low with increasing values of $N_s$. As $N_s$ exceeds $N_c$, the average harvested power stays perpetually low. This is due to the fact that the proposed scheme depends upon the variation of the backscatter coefficient during each ambient symbol that is backscattered. Therefore, from this point onwards, we only consider the case $N_s \leq N_c$.\\
\indent From the results in Fig.~\ref{eh1} in Section~\ref{pn_res}, the main conclusion is that even with the training sequence at work, the value of average harvested power is very small and it actually decreases with the increase of training duration. This is due to the fact that the ambient signal is orders of magnitude stronger than the backscattered signal. The spreading gain of the training sequence employed is not sufficient to boost the backscatter signal significantly against the ambient signal. Thus, during the power transfer phase, most of the energy transmitted by the ET effectively leaks towards the AS. Since the PN-sequence approach for training design fails to boost up the backscattered signal in the presence of the strong ambient interference, another approach of training sequence design is considered in the next section, that directly looks at eliminating the ambient interference. This new scheme relies on the variation of the backscatter coefficient between $\pm 1$ during each ambient symbol.
\section{The Proposed Training Sequence Design}\label{detseq}
As mentioned in the previous section, the purpose of employing backscatter training is to enable the ET to differentiate the backscattered transmission from the ambient signal. However, since the backscattered signal is orders of magnitude weaker than the ambient interference and the DSSS approach cannot sufficiently boost the backscattered signal, the only option left is to directly cancel or significantly suppress the ambient interference. In the following, we propose a scheme to remove the direct-link interference from the AS.\\
\indent \underline{\textit{Design Criterion:}} For the system model considered in Section~\ref{sys_model}, the ambient component can be eliminated at the output of the correlator in the ET if for each ambient symbol that is backscattered from the ER during the backscatter phase, the number of $+1$ and $-1$ chips is equal, i.e., $N_{+1} = N_{-1}$ and $N_{+1} + N_{-1} = \frac{N_c}{N_s}$, where $N_{+1}$ and $N_{-1}$ are the number of positive and negative chips respectively that are multiplied per symbol of the ambient source. This means that the backscatter coefficient is switched between $+1$ and $-1$ an even number of times, i.e., $N_c = 2kN_s$ where $k$ is a positive integer.\\
\indent We justify the above design criterion as follows:
In this case, $c(t)$ is a deterministic sequence with equal numbers of $+1$ and $-1$ chips instead of a PN sequence. Any sequence with equal numbers of $+1$ and $-1$ chips applied to each backscattered ambient symbol suffices. Consider the expressions for $\mathbf{x}_\textrm{s}$ and $\mathbf{x}_\textrm{i}$, which are the expanded forms of~\eqref{xs} and~\eqref{xi} for $N_s \leq N_c$ (as derived in the Appendix), given below
\begin{align}\label{xsresolv}
\mathbf{x}_s &= \sqrt{\gamma_1 \gamma_2 P_s} \frac{ g \mathbf{f}}{N_s} \sum_{i=1}^{N_s} s_i.
\end{align}
\begin{align}\label{xiresolv}
\mathbf{x}_i &= \sqrt{\gamma_3 P_s} \frac{ \mathbf{h}}{N_c} \sum_{i=1}^{N_s} s_i \sum_{n=\frac{N_c}{N_s}(i-1)}^{\frac{N_c}{N_s}i - 1} c_n.
\end{align}
We can see from~\eqref{xsresolv} that the desired backscattered component at the output of the correlator, $\mathbf{x}_\textrm{s}$, does not depend on the attributes of the training sequence, i.e., on how the backscatter coefficient is changed. Therefore, it remains the same as in the previous case. However, with our proposed training sequence satisfying the \textit{design criterion},~\eqref{xiresolv} becomes
\begin{align}
\mathbf{x}_i &= \sqrt{\gamma_3 P_s} \frac{ \mathbf{h}}{N_c} \sum_{i=1}^{N_s} s_i \sum_{n= \frac{N_c}{N_s}(i-1)}^{\frac{N_c}{N_s}i - 1} c_n, \nonumber\\
& = \sqrt{\gamma_3 P_s} \frac{ \mathbf{h}}{N_c} \sum_{i=1}^{N_s} s_i \left[ (+1)N_{+1} + (-1)N_{-1} \right] =0
\end{align}
since $N_{+1} = N_{-1}$.
Thus, the ambient component at the output of the correlator cancels out.\\
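The cancellation argument above can be checked numerically. The following sketch (with illustrative values for the numbers of symbols and chips, not taken from the system parameters of this paper) correlates a chip-rate ambient signal with a per-symbol balanced $\pm 1$ sequence: the ambient term vanishes, while the backscattered term, which carries $c^2(t) = 1$, survives.

```python
import numpy as np

# Illustrative sketch of the design criterion: per ambient symbol, equal
# numbers of +1 and -1 chips. Parameter values are assumed, not the paper's.
rng = np.random.default_rng(0)
N_s, chips_per_sym = 4, 8                       # chips per symbol must be even
s = rng.standard_normal(N_s)                    # ambient symbols s_i
# Balanced chips per symbol: +1 for half the symbol, -1 for the other half
c = np.tile(np.repeat([1.0, -1.0], chips_per_sym // 2), N_s)
s_chips = np.repeat(s, chips_per_sym)           # ambient signal at chip rate

x_i = np.sum(s_chips * c)        # ambient component: cancels per symbol
x_s = np.sum(s_chips * c * c)    # backscatter component: c^2 = 1 survives
print(x_i, x_s)
```

Any other per-symbol balanced arrangement of the chips gives the same result, which is exactly the genericity noted in Remark~\ref{r4}.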
\indent The following remarks discuss important practical aspects related to the \textit{design criterion}.
\begin{remark}\label{r4}
The \textit{design criterion} is generic, i.e., any sequence that satisfies the two properties can serve the purpose. Moreover, we have seen that once the ambient component is removed, having a greater number of chips does not affect the harvested energy. Therefore, taking into account the hardware implementation, it is best to have the minimum number of chips per ambient symbol period, i.e., $k = 1$ and $N_c = 2N_s$ or $T_c = \frac{T_s}{2}$. This means that we can switch the backscatter coefficient only twice per ambient symbol, i.e., for each ambient symbol that is backscattered, the backscatter coefficient is kept $+1$ for half of the ambient symbol duration and $-1$ for the other half.
\end{remark}
\begin{remark}\label{r5}
It is interesting to see how this design criterion compares with the training sequences commonly used in wireless communications, i.e., maximal-length sequences, Gold sequences, Walsh-Hadamard sequences and Kasami sequences. Of these, only the Walsh-Hadamard sequences have equal numbers of $+1$ and $-1$ chips and hence satisfy the \textit{design criterion}.
\end{remark}
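As a quick check of Remark~\ref{r5}, the sketch below builds a Walsh-Hadamard matrix via the standard Sylvester recursion and verifies that every row except the all-ones row contains equal numbers of $+1$ and $-1$, i.e., satisfies the \textit{design criterion}. The matrix size used is illustrative.

```python
import numpy as np

def walsh_hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

H = walsh_hadamard(8)
# Every row but the first (all-ones) is balanced: its sum is zero,
# i.e., N_{+1} = N_{-1}, as required by the design criterion.
balanced = [int(row.sum()) == 0 for row in H[1:]]
print(all(balanced))
```

The balance of the non-trivial rows follows from their orthogonality to the all-ones first row, which also makes these sequences natural candidates for the multi-tag extension mentioned in the conclusions.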
Using the proposed sequence in the \textit{design criterion}, we find the average harvested power at the ER, which is presented in the proposition below.
\begin{proposition}\label{p2}
For the system model considered in Section~\ref{sys_model} with Nakagami-$m$ fading channels and $N_s \leq N_c$, while employing the backscatter training scheme proposed in the \textit{design criterion}, as the number of antennas at the ET $M \to \infty$, the incident RF power at the ER is given by
\begin{align}\label{q_instp}
Q_{RF} \approx \gamma_2 P_t \left(\frac{\gamma_1 \gamma_2 |g|^2 \mu \left(M+\dfrac{1}{m_f}\right) + \frac{\sigma_n^2 N_s}{T_s P_s}} {\gamma_1 \gamma_2 |g|^2 \mu + \frac{\sigma_n^2 N_s}{T_s P_s}}\right),
\end{align}
where $\mu$ is as defined in~\eqref{mu}. Substituting this value of $Q_{RF}$ in~\eqref{qi_def}, we get the instantaneous harvested power at the ER, from which the average harvested power is calculated according to~\eqref{q_def}.
\end{proposition}
\begin{IEEEproof}
The proof is similar to the procedure in Appendix A and is omitted for the sake of brevity.
\end{IEEEproof}
\indent The following insight is gained from \textit{Proposition~\ref{p2}}.
\begin{remark}\label{r3}
As the proposed scheme completely removes the direct-link ambient interference, the term involving the random variable $\nu$ disappears from the expression for $Q_{RF}$. Thus, during the power transfer phase, the ET forms a focused beam towards the ER with no energy leaking towards the AS. This leads to a significant improvement in the harvested energy, as demonstrated by the numerical results in Section~\ref{res}.
\end{remark}
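The behaviour described in Remark~\ref{r3} can be illustrated by evaluating~\eqref{q_instp} directly. The sketch below uses assumed values for the path-loss factors, channel realization and noise term (chosen for illustration only, not the paper's simulation settings) and confirms the beamforming gain: with the ambient term removed, the incident RF power grows with the antenna count $M$.

```python
# Sketch of the large-M expression in Proposition 2 for one channel
# realization. All numerical values below are assumed for illustration.
gamma1, gamma2 = 1e-6, 1e-4     # path-loss factors (assumed)
g2, mu = 1.0, 1.0               # realizations of |g|^2 and mu (assumed)
m_f, P_t = 10, 1.0
noise_term = 1e-8               # sigma_n^2 * N_s / (T_s * P_s), assumed

def q_rf(M):
    num = gamma1 * gamma2 * g2 * mu * (M + 1 / m_f) + noise_term
    den = gamma1 * gamma2 * g2 * mu + noise_term
    return gamma2 * P_t * num / den

print(q_rf(100), q_rf(500))     # incident RF power grows with M
```

Substituting such values of $Q_{RF}$ into the nonlinear harvester model of~\eqref{qi_def}, as the paper does, then yields the instantaneous harvested power.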
\section{Impact of Practical System Imperfections}\label{imperf}
In the previous section, we proposed a training design under the assumption of perfect synchronization. In practice, however, if the ambient symbol duration is unknown or differs from the one for which the system is designed, the correlator at the ET may lose timing synchronization, or the $+1$ and $-1$ values of the backscatter coefficient at the ER may have unequal durations. Consequently, the ambient signal may not be completely cancelled and the performance of the system, in terms of the average harvested power at the ER, may be affected. In this section, we study the impact of the following practical system imperfections caused by the unknown duration of the ambient symbol.
\subsection{Imperfect synchronization at the correlator}\label{imperfsynch}
The analysis in Section~\ref{detseq} assumes perfect synchronization. In this sub-section, we consider the case when an integer number of ambient symbols fit in the duration of the backscatter phase, but there is a misalignment between the received signal at ET and the locally generated training sequence during the backscatter phase. We model this misalignment as a time offset $T_{\textrm{off}}$.
\subsubsection{Effect of offset on the desired signal component}
We assume that the timing offset $T_{\textrm{off}} < T_c$. This is shown in Fig.~\ref{misal}.
In this case, the desired component at the output of the correlator in~\eqref{xs} becomes
\begin{align}\label{misxs}
\mathbf{x}_{s}&= \frac{\sqrt{\gamma_2 \gamma_1} g \mathbf{f}}{\ N_cT_c} \int_{0}^{N_cT_c} \sum_{n=0}^{N_c-1}s(t)c_n p_c(t-nT_c)\nonumber\\
& \sum_{m=0}^{N_c-1}c_m p_c(t-T_{\textrm{off}}-mT_c) dt,\nonumber\\
& \overset{(a)}{=} \sqrt{\gamma_1 \gamma_2 P_s} \frac{g \mathbf{f}}{N_cT_c} \sum_{i=1}^{N_s} s_i \frac{N_c}{N_s}\left( \int_{0}^{T_\textrm{off}}-1\, dt + \int_{T_\textrm{off}}^{T_c} 1\, dt \right), \nonumber\\
& = \sqrt{\gamma_1 \gamma_2 P_s} \frac{g \mathbf{f}}{N_cT_c} \sum_{i=1}^{N_s}s_i \frac{N_c}{N_s} \left( -2 T_\textrm{off} + T_c \right),\nonumber\\
& = \sqrt{\gamma_1 \gamma_2 P_s}\frac{g \mathbf{f}}{N_s}\sum_{i=1}^{N_s}s_i \left(1 - 2\frac{T_\textrm{off}}{T_c} \right),
\end{align}
\noindent where (a) splits the overall integration into per-chip intervals within each symbol.\\
\indent Comparing~\eqref{xs} and~\eqref{misxs} above we get for $T_\textrm{off} \leq T_c$
\begin{align}
\mathbf{x}_{s\textrm{(misaligned)}} = \left(1 - 2\frac{T_\textrm{off}}{T_c} \right) \mathbf{x}_{s\textrm{(synchronized)}}.
\end{align}
Similarly, it can be shown that for $T_c < T_\textrm{off} \leq 2T_c$,
\begin{align}
\mathbf{x}_{s\textrm{(misaligned)}} = \left(2\frac{T_\textrm{off}}{T_c} -1 \right) \mathbf{x}_{s\textrm{(synchronized)}}.
\end{align}
Thus, we can see that if the synchronization is not perfect, the desired backscatter component is a fraction of the fully synchronized case.
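The scaling factor derived above can be reproduced with a simple chip-level simulation. In the sketch below (the sampling resolution and offsets are illustrative assumptions), an alternating $\pm 1$ chip waveform is correlated with a delayed copy of itself; the normalized output matches $1 - 2T_\textrm{off}/T_c$ for $T_\textrm{off} \leq T_c$.

```python
import numpy as np

# Chip-level check of the misalignment factor (1 - 2*T_off/T_c).
# Sampling resolution and offsets below are illustrative assumptions.
samples_per_chip, n_chips = 1000, 40
# Alternating +1/-1 chip waveform, sampled at chip-fraction resolution
c = np.repeat(np.tile([1.0, -1.0], n_chips // 2), samples_per_chip)

for off_frac in (0.1, 0.25, 0.4):                  # T_off / T_c
    shift = int(off_frac * samples_per_chip)
    corr = np.mean(c * np.roll(c, shift))          # normalized correlator
    print(round(corr, 6), 1 - 2 * off_frac)        # the two agree
```

The circular shift used here is a simplification of the true (non-circular) offset, but it does not change the per-chip overlap picture for the alternating waveform.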
\subsubsection{Effect of offset on the undesired ambient component}
Again assuming that the timing offset $T_{\textrm{off}} <T_c$, the undesired ambient component from~\eqref{xi} at the output of the correlator becomes,
{\small \begin{align}\label{misxi}
\mathbf{x}_{i}&= \frac{\sqrt{\gamma_3} \textbf{h}}{\ N_cT_c} \int \limits_{0}^{N_cT_c} \sum_{m=0}^{N_c-1}c_m p_c(t-T_{\textrm{off}}-mT_c) s(t)dt,\nonumber\\
&=\frac{\sqrt{\gamma_3 P_s} \textbf{h}}{\ N_cT_c} \int \limits_{0}^{N_cT_c} \sum_{m=0}^{N_c-1}c_m p_c(t-T_{\textrm{off}}-mT_c) \sum_{i=1}^{N_s} s_i p_{s}(t-iT_s)dt,\nonumber\\
& = \sqrt{\gamma_3 P_s} \frac{\textbf{h}}{N_cT_c} \sum_{i=1}^{N_s} s_i\nonumber\\
& \left(\int \limits_{0}^{T_\textrm{off}}-1 dt + \int \limits_{T_\textrm{off}}^{T_c} +1 dt + \int \limits_{T_c}^{2T_c} -1 dt + \cdots + \int \limits_{(\frac{N_c}{N_s}-1) T_c}^{\frac{N_c}{N_s} T_c -T_\textrm{off}} -1 dt \right),\nonumber\\
& = \sqrt{\gamma_3 P_s} \frac{\textbf{h}}{N_cT_c} \sum_{i=1}^{N_s} s_i \left( -T_\textrm{off} +T_c -T_c + T_c \cdots -T_c + T_\textrm{off} \right),\nonumber\\
& = 0.
\end{align}}
Note that the same result is obtained even when $T_{\textrm{off}} > T_c$.\\
\begin{figure}
\centering
\includegraphics[width=0.5 \textwidth]{misalign_gr-eps-converted-to.pdf}
\caption{Misalignment between the backscattered signal and locally generated spreading sequence at the ET: (a) Effect on the backscatter component (b) Effect on the ambient component}
\label{misal}
\end{figure}
\indent As we have seen in the previous sub-section, the desired component is scaled down by the offset in synchronization, while the undesired component is still completely eliminated. This change in the magnitude of the desired component is reflected in the energy harvested at the ER. \textit{Therefore, we can conclude that the system can work reasonably well with a small timing offset. However, good synchronization is needed for the best performance.}\\
\indent Using the above values of $\mathbf{x}_s$ and $\mathbf{x}_i$, the incident RF power at the ER in case of misalignment at the ET can be shown to be given by
\begin{align}\label{q_mis}
Q_{RF} \approx \gamma_2 P_t \left( \frac{\left|1 - \frac{2T_{\textrm{off}}}{T_c}\right|^2\gamma_1 \gamma_2 |g|^2 \mu \left(M+\dfrac{1}{m_f}\right) + \frac{\sigma_n^2 N_s}{T_s P_s}} {\gamma_1 \gamma_2 |g|^2 \mu + \frac{\sigma_n^2 N_s}{T_s P_s}}\right),
\end{align}
which holds for all values of $T_\textrm{off}$ except when $T_\textrm{off} = k\frac{T_c}{2}$, where $k$ is an integer and $k \geq 0$. Substituting this value of $Q_{RF}$ in~\eqref{qi_def}, we get the instantaneous harvested power at the ER, from which the average harvested power is calculated according to~\eqref{q_def}.
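Complementing~\eqref{misxi}, the following sketch shows numerically that the ambient component still cancels at the correlator for arbitrary offsets, including $T_\textrm{off} > T_c$, as long as an even number of chips spans each ambient symbol. The circular shift and all sizes below are simplifying, illustrative assumptions.

```python
import numpy as np

# Toy check that the ambient term cancels under a timing offset when an
# even number of +/-1 chips fits in each ambient symbol. The circular
# shift and all parameter values are illustrative assumptions.
samples_per_chip, chips_per_sym, N_s = 200, 4, 5    # even chips per symbol
rng = np.random.default_rng(1)
# Ambient symbols held constant over their duration, sampled finely
s = np.repeat(rng.standard_normal(N_s), chips_per_sym * samples_per_chip)
# Alternating +1/-1 chips spanning all symbols
c = np.repeat(np.tile([1.0, -1.0], (chips_per_sym // 2) * N_s),
              samples_per_chip)

for shift in (0, 37, 150, 260):     # offsets below and above one chip
    x_i = np.mean(s * np.roll(c, shift))
    print(abs(x_i) < 1e-9)          # ambient component cancels
```

Intuitively, the shifted chip waveform still integrates to zero over each symbol because a symbol spans a whole number of $+1$/$-1$ chip periods, which is exactly the telescoping cancellation in~\eqref{misxi}.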
\subsection{Effect of change in ambient symbol duration}
In this subsection, we consider the case where, due to an unknown ambient symbol duration, an even number of chips (i.e., backscatter coefficient switches) does not fit in each ambient symbol. Consider the scenario in which the system is designed for an ambient symbol duration $T_s$, but when the system is actually deployed, the available ambient symbol has a different duration $T_s'$. In this situation, it is difficult to present any analytical results. Hence, we investigate its impact using simulations in Section~\ref{ts_res}.
\subsection{Effect of other interference from neighbouring ambient sources}
In this subsection, we consider the impact of interference on our system from neighbouring ambient sources. In particular, the application of the chipping sequence at the ET increases the bandwidth of the backscatter signal. Therefore, the ambient signals in neighbouring frequencies can potentially cause interference to the system.\\
\indent The interference signal can be from a variety of sources and can even include the backscattered versions of these interference signals. Compared with the interference signals directly received at the ET, their backscattered versions have much weaker strength (by several orders of magnitude) when they reach the ET. Therefore, we only consider the directly received interference signals. In this work, we have assumed the original ambient signal to follow a normal distribution, since the ambient signal may come from a variety of sources and is usually random. Therefore, we assume that the aggregate interference signal from other ambient sources in the same environment follows a zero-mean circularly symmetric complex Gaussian distribution~\cite{Qian-2017,Tao-2018}, i.e., $\mathbf{u}_i(t)\sim \mathcal{CN} (0,{\sigma_{i}}^2 \boldsymbol{I}_M)$ where ${\sigma_i}^2$ is the received interference power.\\
\indent The expression in~\eqref{rec_sig} for the received signal at the ET thus becomes,
\begin{align}\label{rec_sig2}
\mathbf{r}_\textrm{ET}(t) &= \sqrt{\gamma_1 \gamma_2} g \mathbf{f} c(t)s(t) + \sqrt{\gamma_3} \mathbf{h} s(t) + \mathbf{u}_i(t) + \mathbf{n}(t).
\end{align}
As the ET correlates this composite signal with the known training sequence, we get
{\small \begin{align}\label{despread2}
\mathbf{x}_{r} &= \underbrace{\frac{\sqrt{\gamma_2} \mathbf{f}}{\ N_cT_c}\int\displaylimits_{0}^{N_cT_c} \sqrt{\gamma_1} g c(t)s(t)c(t)dt}_{\mathbf{x}_{s}} + \underbrace{\frac{1}{\ N_cT_c}\int\displaylimits_{0}^{N_cT_c}\sqrt{\gamma_3} \mathbf{h} s(t)c(t) dt}_{\mathbf{x}_{i}}\nonumber\\
& + \underbrace{\frac{1}{\ N_cT_c}\int\displaylimits_{0}^{N_cT_c} \mathbf{u}_i(t) c(t)dt}_ {\mathbf{\widetilde{u_i}}} + \underbrace{\frac{1}{\ N_cT_c}\int\displaylimits_{0}^{N_cT_c} \mathbf{n}(t) c(t)dt}_ {\mathbf{\widetilde{n}}},
\end{align}}
where $\mathbf{x}_{s}$ and $\mathbf{x}_{i}$ are the desired signal and the undesired primary ambient component, respectively, $\mathbf{\widetilde{u_i}}$ is the interference component from the neighbouring ambient signals, with $ \mathbf{\widetilde{u_i}}\sim \mathcal{CN} (0,\frac{{\sigma_i}^2}{N_cT_c}\boldsymbol{I}_M)$, and $ \mathbf{\widetilde{n}}\sim \mathcal{CN} (0,\frac{{\sigma_n}^2}{N_cT_c}\boldsymbol{I}_M)$ is the noise at the output of the matched filter.\\
\indent Using~\eqref{despread2}, the signal received at the ER, previously given by~\eqref{rER} becomes,
\begin{align}
{r}_{\textrm{ER}} &= \sqrt{\gamma_2}\mathbf{f}^T \mathbf{x}_t = \sqrt{\gamma_2 P_t} \frac{\left(\mathbf{f}^T {\mathbf{x}_{s}}^*+ \mathbf{f}^T {\mathbf{x}_{i}}^*+ \mathbf{f}^T {\mathbf{\widetilde{u_i}}}^*+ \mathbf{f}^T {\mathbf{\widetilde{n}}}^*\right)}{\left\|\mathbf{x}_{s}+ \mathbf{x}_{i} + \mathbf{\widetilde{u_i}}+ \mathbf{\widetilde{n}}\right\|},\label{rERb}
\end{align}
where $\mathbf{x}_t = \sqrt{P_t} \frac{(\mathbf{x}_{r})^*}{\left\|\mathbf{x}_{r}\right\|}$. Thus, the incident RF power at the ER with interference present can be shown to be given by
\begin{equation}\label{qa2}
\resizebox{0.5\textwidth}{!}{$Q_{RF} \approx \gamma_2 P_t \left(\frac{P_s\gamma_1 {\gamma_2} |g|^2 \mu \left(M+\dfrac{1}{m_f}\right) + P_s \gamma_3 \nu \left(\frac{N_s}{N_c}\right)^2+ \frac{\sigma_i^2 N_s}{T_s} + \frac{\sigma_n^2 N_s}{T_s}} {P_s\gamma_1 {\gamma_2} |g|^2 \mu+ P_s \gamma_3 \nu \left(\frac{N_s}{N_c}\right)^2+ \frac{\sigma_i^2 N_s}{T_s} + \frac{\sigma_n^2 N_s}{T_s}}\right)$}
\end{equation}
Substituting this value of $Q_{RF}$ in~\eqref{qi_def}, we get the instantaneous harvested power at the ER, from which the average harvested power is calculated according to~\eqref{q_def}. Generally, a larger interference power leads to a degradation in the average harvested power, because our training design and interference cancellation only target the interference from the primary ambient signal, not the secondary interference signals from neighbouring ambient sources. This is numerically investigated in Section~\ref{intf_res}.
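The degradation trend can be seen directly from~\eqref{qa2}. The sketch below evaluates it for one channel realization with assumed parameter values (chosen for illustration, not the paper's simulation settings): as the neighbouring-source interference power $\sigma_i^2$ grows, the incident RF power shrinks toward $\gamma_2 P_t$.

```python
# Sketch of Eq. (qa2) for one channel realization. All numerical values
# are assumed for illustration; sigma_i2 is the neighbouring-source
# interference power, which the training design does not cancel.
gamma1, gamma2, gamma3 = 1e-6, 1e-4, 1e-8
g2, mu, nu = 1.0, 1.0, 1.0        # channel realizations (assumed)
P_s, P_t, m_f, M = 1.0, 1.0, 10, 500
N_s, N_c, T_s = 10, 20, 5e-6
sigma_n2 = 1e-18

def q_rf(sigma_i2):
    common = (P_s * gamma3 * nu * (N_s / N_c) ** 2
              + sigma_i2 * N_s / T_s + sigma_n2 * N_s / T_s)
    num = P_s * gamma1 * gamma2 * g2 * mu * (M + 1 / m_f) + common
    den = P_s * gamma1 * gamma2 * g2 * mu + common
    return gamma2 * P_t * num / den

print(q_rf(1e-12) > q_rf(1e-10) > q_rf(1e-8))   # stronger interference hurts
```

The monotonic loss follows because the uncancelled terms appear identically in the numerator and denominator, diluting the $M$-fold beamforming gain of the desired component.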
\section{Results}\label{res}
In this section we present the numerical and simulation results. In order to model a practical ambient backscatter scenario, we set the distances as follows: $d_1 = 200$ m, $d_2 = 10$ m, $d_3 = 200$ m~\cite{Huynh-2018}. The remaining system parameters are: $d_0 = 1$ m, $k_0 = 0.001$, $M = 500$, $P_t = 1$ W, $P_s = 1$ W, $\sigma_n^2 = 10^{-18}$, $T_s = 5~\mu$s, $T_c = 500$ ns. For the non-linear energy harvester, we set $a_0 = 1500$, $b_0 = 0.0022$ and $c_0 = 24$ mW~\cite{Boshkovska-2017}. The choice of $T_c = 500$ ns ensures that the multipath delay spread is negligible~\cite{bharadia2015backfi}. As mentioned in Section~\ref{sys_model}, we have assumed Nakagami-$m$ fading on all channel links. However, we can see from~\eqref{q_instant} and~\eqref{q_instp} that the final analytical result only depends upon $m_f$. Hence, for simplicity, we consider $m_h = m_g = 1$ for the AS to ET and AS to ER links, and $m_f = 1$ and $m_f = 10$ for the ER to ET link. \redcom{We initially ignore the impact of other interference from neighbouring ambient signals, setting $\sigma_i = 0$ in Sections~\ref{pn_res}--\ref{ts_res}, and then investigate the impact of such interference in Section~\ref{intf_res}.}
\subsection{Energy Harvested with a PN Sequence}\label{pn_res}
\indent Fig.~\ref{eh1} plots the average harvested power versus the duration of the backscatter phase $T_b$, with the ambient symbol duration $T_s = 5~\mu$s. These results are averaged over $10^4$ Monte Carlo trials, with a new pseudorandom sequence generated and used in each trial. Note that for other practical values of the system parameters, the average harvested power has very similar values and trends. Thus, we only show a single curve in Fig.~\ref{eh1}.\\
\begin{figure}
\centering
\includegraphics[width=0.5 \textwidth]{energyharv1-eps-converted-to.pdf}
\caption{\sugcom{Average harvested power at the ER as a function of $T_b$ (duration of the backscatter phase).}}
\label{eh1}
\end{figure}
\indent The figure shows that there is a very good agreement between the analytical results in~\eqref{q_instant} and the simulation for $N_s \leq N_c$\footnote{\sugcom{A similar match is observed between the analytical result and the simulation for $N_s \geq N_c$ but the corresponding plots are not presented here due to the reason discussed in Section~\ref{PNseq}.}}. The figure also shows that the average harvested power is maximum around $15~\mu$W when $N_s = 1$ and $T_b = 5~\mu$s. As $N_s$ and hence $T_b$ increase, the average harvested power quickly decreases and reaches a value of approximately $4~\mu$W. Thus, we can conclude from Fig.~\ref{eh1} that the average harvested power is very small and it reduces further as the training period increases.\\
\indent This latter observation is particularly counter-intuitive, since it is not expected when using DSSS techniques. The reason for this trend is that the ambient signal is orders of magnitude stronger than the backscattered signal, and the spreading gain of the training sequence is not sufficient to raise the backscattered signal significantly above it. To demonstrate this, Fig.~\ref{mgr} plots $\frac{|x_i|}{|x_s|}$, i.e., the ratio of the magnitudes of the undesired ambient component and the desired backscatter component at the output of the correlator, versus the duration of the backscatter phase $T_b$. We can see from the figure that even with the training sequence in use, the ambient component is much stronger than the desired backscattered signal. Moreover, as the duration of the training phase increases, the ambient component becomes increasingly stronger. Thus, when the ET performs retrodirective WPT by taking the conjugate of the composite signal at the output of the correlator, the comparative strength of the ambient component is far greater than that of the backscattered one for longer backscatter phases. Most of the energy transmitted by the ET is thus still effectively leaking towards the AS, and this situation is exacerbated for longer backscatter phases due to the comparatively higher strength of the ambient component.
\begin{figure}
\centering
\includegraphics[width=0.45 \textwidth]{magratio-eps-converted-to.pdf}
\caption{\sugcom{Ratio of magnitude of ambient and backscatter signal components at the output of the correlator plotted against $T_b$ (duration of the backscatter phase).}}
\label{mgr}
\end{figure}
\subsection{Energy Harvested with the Proposed Ambient Backscatter Training Scheme}\label{prop_res}
\sugcom{Using~\eqref{q_instp},~\eqref{qi_def} and~\eqref{q_def}, the average harvested power at the ER is calculated and plotted in Fig.~\ref{eh2} for three different values of the ambient symbol duration, i.e., $T_s = 5~\mu$s, $T_s = 10~\mu$s and $T_s = 20~\mu$s, and two different values of Nakagami-$m$ fading on the ET to ER link, i.e., $m_f = 1$ and $m_f = 10$. The values of the other system parameters are the same as stated at the beginning of this section for Fig.~\ref{eh1}. Several features of the proposed scheme are evident from Fig.~\ref{eh2}. Firstly, we can see that the results for $m_f = 1$ and $m_f = 10$ are quite similar. Thus, in this case, having a line-of-sight link between the ET and ER does not significantly impact the results. Hence, in the remaining results, we only consider $m_f = 10$.}\\
\indent Secondly, it can be observed that the energy harvested at the ER increases significantly as compared to the case when a pseudo-random sequence is employed at the ER during the backscatter phase. This is due to the fact that the proposed scheme completely eliminates the ambient component. As a result, during retrodirective WPT the ET forms a focused beam directed back at the ER alone, with no energy leaking to the AS.\\
\begin{figure}
\centering
\includegraphics[width=0.5 \textwidth]{all_nak_NL_newdoub-eps-converted-to.pdf}
\caption{Average harvested power at the ER with the proposed sequence plotted against the duration of the backscatter phase, $T_b$.}
\label{eh2}
\end{figure}
\indent Thirdly, the harvested power at the ER does not change with the increase in backscatter training duration $T_b$, but stays constant as long as the ambient symbol duration $T_s$ stays constant. Specifically, when the system is designed with a fixed value of $T_c$, then for different values of $N_s$ and hence $T_b$, the average harvested power at the ER now stays around $50~\mu$W for $T_s = 5~\mu$s, $99~\mu$W for $T_s = 10~\mu$s and $190~\mu$W for $T_s = 20~\mu$s.\\
\indent Fourthly, with the ambient component removed, the average harvested power depends largely on the duration of the ambient symbol $T_s$, as is evident from the plot: the average harvested power has a significantly larger value for $T_s = 20~\mu$s than for $T_s = 5~\mu$s.\\
\indent Lastly, it can also be inferred from the plot that for a fixed ambient source, the average harvested power in this case is independent of the number of chips $N_c$. Actually, for a fixed chip duration, the number of chips also increases with the increased backscatter period $T_b$ and as we can see from Fig.~\ref{eh2}, the average harvested power stays constant for the increased values of the backscatter period.\\
\indent Fig.~\ref{ps_plots} presents the plots of average harvested power against the duration of backscatter phase $T_b$ for different values of $P_s$, the power of the AS. We can see that the average harvested power is larger with higher values of $P_s$. This observation is consistent with the analysis in Section~\ref{detseq}. We can see from~\eqref{xsresolv} that the desired backscatter component $\mathbf{x}_\textrm{s}$ is directly proportional to the strength of the AS. Since the ambient component is now completely removed, a higher value of power is harvested on average at the ER when the AS is stronger. Similarly, Fig.~\ref{M_plots} plots the average harvested power against $M$, the number of antennas at the ET. It can be observed that there is a good agreement again between the results obtained by simulation and by numerically averaging~\eqref{q_instp} for practical values of $M$.
\begin{figure}
\centering
\includegraphics[width=.5 \textwidth]{ps_plots-eps-converted-to.pdf}
\caption{\sugcom{Average harvested power, $\bar{Q}$, \\ plotted against transmit power of AS, $P_s$.}}
\label{ps_plots}
\end{figure}%
\begin{figure}
\centering
\includegraphics[width=.5 \textwidth]{M_plots-eps-converted-to.pdf}
\caption{\sugcom{Average harvested power, $\bar{Q}$, \\ plotted against number of antennas at the ET $M$.}}
\label{M_plots}
\end{figure}
\subsection{Impact of Practical System Imperfections}\label{imperf_res}
\subsubsection{Imperfect synchronization at the correlator}\label{synch_res}
A plot of the average harvested power at the ER as a function of the time offset between the received and locally generated signals is given in Fig.~\ref{misal2}. The parameter values used are the same as for Fig.~\ref{eh2}. For this plot we have taken $T_c = \frac{T_s}{2}$, as discussed in Remark~\ref{r4} in Section~\ref{detseq}. It can be seen from Fig.~\ref{misal2} that the average harvested power decreases for $kT_c \leq T_\textrm{off} < \left(k+\frac{1}{2}\right)T_c$, reaches a local minimum at $T_\textrm{off} = \left(k+\frac{1}{2}\right)T_c$, and then increases for $\left(k+\frac{1}{2}\right)T_c < T_\textrm{off} \leq (k+1)T_c$, where $k \geq 0$ is an integer. This reiterates that the system can work reasonably well with a small offset, as discussed in Section~\ref{imperfsynch}.
\subsubsection{Effect of unknown ambient symbol duration}\label{ts_res}
Fig.~\ref{gensim} plots the average harvested power at the ER versus the number of ambient symbols that fit in the backscatter phase duration of $T_b$ seconds for training sequences that satisfy the \textit{design criterion} but have different number of chips, i.e., $\frac{N_c}{N_s} = 2, 10$ and $40$. \sugcom{This system was originally designed for the following values: $T_b = 200~\mu$s, $N_c = 400$, $T_c = 500~$ns, $T_s = 5~\mu$s, $N_s = 10$, $m_g = m_h = 1$ and $m_f = 10$.} We plot the average harvested power at the ER for a range of values of $N_s' = \{6,7,8,9,10,11,12,13,14,15\}$ and the corresponding $T_s'$. \\
\begin{figure}
\centering
\includegraphics[width=0.5 \textwidth]{misalign1-eps-converted-to.pdf}
\caption{\sugcom{Average harvested power at the ER plotted against the offset between incoming and locally generated signal at the correlator.}}
\label{misal2}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5 \textwidth]{gensim2-eps-converted-to.pdf}
\caption{\sugcom{Average harvested power at the ER plotted against the number of ambient symbols during the backscatter phase when the actual ambient symbol duration is different from the designed value.}}
\label{gensim}
\end{figure}
\indent We can see from Fig.~\ref{gensim} that the training sequence with the fewest chips per symbol gives the worst performance, i.e., as the number of ambient symbols in the backscatter phase deviates from the designed value, the average harvested power drops to a fraction of a $\mu$W. This is because the ambient component is no longer completely cancelled: the ER was designed to switch the backscatter coefficient at $\frac{T_s}{2}$, so that each ambient symbol was multiplied by $+1$ and $-1$ for alternate halves of its duration, whereas in the new scenario a switch at $\frac{T_s'}{2}$ is required. Consequently, the ambient component is not eliminated completely; rather, a fraction of each ambient symbol remains and contributes a residual ambient component at the output of the correlator. This, in turn, leads to a significant amount of power leaking to the AS. \\
\indent It can also be observed from Fig.~\ref{gensim} that as the number of chips per ambient symbol increases, better performance is obtained. For instance, the curve with the largest number of chips per symbol, i.e., $\frac{N_c}{N_s} = 40$, performs relatively better than the other two cases for a moderate mismatch in symbol duration. The reason for this behaviour is that the fraction of the ambient component that is not cancelled due to the unknown value of $T_s$ depends upon the chip duration $T_c$. Therefore, even though an even number of chips may not fit in one ambient symbol (leading to imperfect cancellation), by increasing the switching rate of the backscatter coefficient, and thereby decreasing the chip duration $T_c$, the uncancelled fraction of a chip can be reduced and hence a smaller ambient component remains at the output of the correlator. In this way, there is less leakage towards the AS and the ER is able to harvest more power. \textit{Thus, when the ambient symbol duration is unknown, a faster switching rate can help to minimize the effect of the uncancelled ambient component for a moderate mismatch of symbol duration.}\\
\subsection{Effect of other interference from neighbouring ambient sources}\label{intf_res}
\redcom{Fig.~\ref{intf} plots the average harvested power at the ER versus the ratio of the average received power from the direct-link AS to the received interference power from neighbouring sources $\sigma_i^2$, expressed in dB. For this plot, we have taken $T_s = 20~\mu$s and $N_s = 4$, while all the other system parameters are kept the same as for Fig.~\ref{eh2}. It can be seen that the average harvested power is $7.09~\mu$W when this ratio is 20 dB. However, when this ratio increases to 30 dB and 40 dB, the average harvested power jumps to tens and hundreds of $\mu$W respectively, finally exceeding $180~\mu$W at 50 dB, very close to what can be achieved when there is no interference. Therefore, if the interference signal is significantly weaker than the original ambient signal, our system can harvest tens to hundreds of $\mu$W of power.}
\begin{figure}
\centering
\includegraphics[width=0.5 \textwidth]{eh_intf_single20db-eps-converted-to.pdf}
\caption{\redcom{Average harvested power at the ER versus the ratio of the average received power from the direct-link ambient and the average interference power from neighbouring ambient sources.}}
\label{intf}
\end{figure}
\section{Conclusions and Future Work}
In this work we have presented a wireless power transfer scheme to energize an ER using retrodirective WPT at the ET and ambient backscatter at the ER. To deal with the direct-link ambient interference, we have proposed the approach of backscatter training, i.e., designing the pattern of variation of the reflection coefficient at the ER to completely eliminate the strong direct-link ambient interference. We have shown that when the ambient symbol duration is known, the switching rate does not matter and we can switch the backscatter coefficient only twice per ambient symbol period. When the ambient symbol duration is unknown, switching at a faster rate helps to minimize the effect of the uncancelled ambient component and boost the harvested power. \redcom{The best average harvested power is achieved when the interference signal from neighbouring ambient sources is significantly weaker than the original ambient signal.} The scheme proposed in this paper can be extended to multiple backscatter tags located in an area by assigning mutually orthogonal Walsh-Hadamard sequences to individual ERs and considering scheduling or collision resolution schemes. This is outside the scope of this work and can be considered in future work.
\begin{appendices}
\section{Proof of Proposition 1}\label{a}
We derive the formula for instantaneous energy harvested at the ER during the power transfer phase as given in~\eqref{q_instant}. We consider the following two cases:\\
\underline{Case 1: $N_s \leq N_c$}
In this case, we have $T_s \geq T_c$. Substituting~\eqref{ambient} in~\eqref{xs} we have
{\small \begin{subequations}
\begin{alignat}{4}
\mathbf{x}_{s} &= \frac{\sqrt{\gamma_1 \gamma_2 P_s}g \mathbf{f}}{N_cT_c} \int \limits_{0}^{N_cT_c} \sum_{n=0}^{N_c-1}c_n^2 {p_c}^2(t-nT_c) \sum_{i=1}^{N_s} s_i p_{s}(t-iT_s)dt, \label{xs1a}\\
&= \frac{\sqrt{\gamma_1 \gamma_2 P_s}g \mathbf{f}}{N_cT_c} \sum_{i=1}^{N_s} s_i \sum_{n=\frac{N_c}{N_s}(i-1)}^{\frac{N_c}{N_s}i-1} c_n^2 \int \limits_{nT_c}^{(n+1)T_c} {p_c}^2(t-nT_c) dt,\label{xs1b}\\
&= \sqrt{\gamma_1 \gamma_2 P_s} \frac{ g \mathbf{f}}{\ N_cT_c } \sum_{i=1}^{N_s} \sum_{n=\frac{N_c}{N_s}(i-1)}^{\frac{N_c}{N_s}i-1} c_n^2 s_i T_c, \label{xs1cb}\\
&= \sqrt{\gamma_1 \gamma_2 P_s} \frac{ g \mathbf{f}}{\ N_c } \frac{N_c}{N_s} \sum_{i=1}^{N_s} s_i, \label{xs1d} \\
&= \sqrt{\gamma_1 \gamma_2 P_s} \frac{ g \mathbf{f}}{N_s} \sum_{i=1}^{N_s} s_i, \label{xs1ed}
\end{alignat}
\end{subequations}}
\noindent where~\eqref{xs1b} follows because the integrand in~\eqref{xs1a} is a product of two aligned rectangular pulses $p_c(t)$ and $p_s(t)$ with $T_s \geq T_c$, so the integral over the duration $N_cT_c$ splits into chip-length integrals. Also,~\eqref{xs1d} follows from the fact that $c_n^2 = 1$ and that, for any given $i$, the inner sum over $n$ contains $\frac{N_c}{N_s}$ terms.\\
\indent Next, substituting~\eqref{ambient} in~\eqref{xi} we get
{\small \begin{subequations}
\begin{alignat}{3}
\mathbf{x}_{i} &= \frac{\sqrt{\gamma_3 P_s} \mathbf{h}}{ N_cT_c} \int \limits_{0}^{N_cT_c} \sum_{i=1}^{N_s} s_i p_{s}(t-iT_s) \sum_{n=1}^{N_c} c_n p_c(t-nT_c)dt, \label{xi1a}\\
&= \sqrt{\gamma_3 P_s} \frac{ \mathbf{h}}{N_cT_c} \sum_{i=1}^{N_s}s_i \sum_{n=\frac{N_c}{N_s}(i-1)}^{\frac{N_c}{N_s}i-1} c_n \int \limits_{nT_c}^{(n+1)T_c} p_c(t-nT_c)dt, \label{xi1b}
\end{alignat}
\begin{alignat}{3}
\mathbf{x}_{i} &= \sqrt{\gamma_3 P_s} \frac{ \mathbf{h}}{N_cT_c} \sum_{i=1}^{N_s} \sum_{n=\frac{N_c}{N_s}(i-1)}^{\frac{N_c}{N_s}i-1} c_n s_i T_c, \label{xi1c}\\
&= \sqrt{\gamma_3 P_s} \frac{ \mathbf{h}}{N_c} \sum_{i=1}^{N_s} \sum_{n=\frac{N_c}{N_s}(i-1)}^{\frac{N_c}{N_s}i-1} c_n s_i, \label{xi1d}
\end{alignat}
\end{subequations}}
\noindent where again the integration in~\eqref{xi1a} reduces to the summation in~\eqref{xi1c}, by the same reasoning as above.
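As a quick numerical sanity check of the two averaging steps above (our sketch, with hypothetical values of $N_s$, $N_c$ and the sampling resolution $K$; not part of the analysis), one can verify that the time averages reduce to the closed forms in~\eqref{xs1ed} and~\eqref{xi1d}:

```python
import numpy as np

# Sanity check (hypothetical parameters): with N_s symbols spread over N_c
# chips (N_c a multiple of N_s, N_c*T_c = N_s*T_s) and chips c_n in {+1,-1},
#   (1/(N_c T_c)) * int c(t)^2 s(t) dt = (1/N_s) * sum_i s_i
#   (1/(N_c T_c)) * int c(t)   s(t) dt = (1/N_c) * sum_i sum_{n in block i} c_n s_i
rng = np.random.default_rng(0)
N_s, N_c, K = 4, 16, 500                     # K samples per chip; T_c = 1
c = rng.choice([-1.0, 1.0], size=N_c)        # chip sequence
s = rng.standard_normal(N_s) + 1j * rng.standard_normal(N_s)  # ambient symbols

c_t = np.repeat(c, K)                        # c(t) on the sampling grid
s_t = np.repeat(s, K * N_c // N_s)           # s(t) on the same grid
avg_sq = np.mean(c_t**2 * s_t)               # time average against c(t)^2
avg_cs = np.mean(c_t * s_t)                  # time average against c(t)

blocks = c.reshape(N_s, N_c // N_s)          # chips within each symbol period
closed_sq = np.mean(s)
closed_cs = (blocks.sum(axis=1) @ s) / N_c
assert np.allclose(avg_sq, closed_sq) and np.allclose(avg_cs, closed_cs)
```

Both identities hold exactly here because the pulses are rectangular and the chip and symbol grids are aligned.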
\noindent Substituting~\eqref{xs1ed} and~\eqref{xi1d} into~\eqref{rER}, we get~\eqref{rERapprox}, which simplifies to~\eqref{rERapprox2} since ${\mathbf{f}}^T \mathbf{f}^* = {\mathbf{f}}^H \mathbf{f}$ and ${\mathbf{f}}^T \mathbf{h}^* = {\mathbf{f}}^H \mathbf{h}$, as given at the top of the page. From~\eqref{rERapprox2}, the incident RF power on the ER can be found using~\eqref{qapprox} (also given at the top of the page).\\
\vspace{5mm}
\begin{figure*}[t]
\begin{align}\label{rERapprox}
{r}_{\textrm{ER}} &= \sqrt{\gamma_2 P_t} \frac{(\frac{\sqrt{\gamma_1 \gamma_2 P_s} g^*}{N_s} \sum_{i=1}^{N_s} s_i^* {\textbf{f}}^T \mathbf{f}^* + \frac{\sqrt{\gamma_3 P_s}}{N_c} \sum_{i=1}^{N_s} \sum_{n=\frac{N_c}{N_s}(i-1)+1}^{\frac{N_c}{N_s}i} c_n s_i^* {\textbf{f}}^T \mathbf{h}^* + \mathbf{f}^T {\mathbf{n}}^* )}{\left\|\frac{\sqrt{\gamma_1 \gamma_2 P_s} g^*}{N_s} \sum_{i=1}^{N_s} s_i {\textbf{f}}+ \frac{\sqrt{\gamma_3 P_s}}{N_c} \sum_{i=1}^{N_s} \sum_{n=\frac{N_c}{N_s}(i-1)+1}^{\frac{N_c}{N_s}i} c_n s_i {\textbf{h}} + {\mathbf{n}}\right\|}.
\end{align}
\begin{align}\label{rERapprox2}
{r}_{\textrm{ER}} &= \sqrt{\gamma_2 P_t} \frac{(\frac{\sqrt{\gamma_1 \gamma_2 P_s} g}{N_s} \sum_{i=1}^{N_s} s_i^* {\textbf{f}}^H \textbf{f} + \frac{\sqrt{\gamma_3 P_s}}{N_c} \sum_{i=1}^{N_s} \sum_{n=\frac{N_c}{N_s}(i-1)+1}^{\frac{N_c}{N_s}i} c_n s_i^* {\textbf{f}}^H \textbf{h} + \mathbf{f}^H {\mathbf{n}} )}{\left\|\frac{\sqrt{\gamma_1 \gamma_2 P_s} g}{N_s} \sum_{i=1}^{N_s} s_i {\textbf{f}}+ \frac{\sqrt{\gamma_3 P_s}}{N_c} \sum_{i=1}^{N_s} \sum_{n=\frac{N_c}{N_s}(i-1)+1}^{\frac{N_c}{N_s}i} c_n s_i {\textbf{h}} + {\mathbf{n}}\right\|}.
\end{align}
\begin{align}\label{qapprox}
Q_{RF} &= |r_{\textrm{ER}}|^2 \nonumber\\
&= \left({\frac{\frac{\gamma_1 {\gamma_2}^2 P_s P_t |g|^2}{N_s^2} \left|\sum_{i=1}^{N_s} s_i\right|^2 {\left\| \mathbf{f} \right\|}^4 + \frac{\gamma_3 P_s P_t}{N_c^2} \left|\sum_{i=1}^{N_s} \sum_{n=\frac{N_c}{N_s}(i-1)+1}^{\frac{N_c}{N_s}i} c_n s_i\right|^2 {\left\|{\textbf{f}}^H \textbf{h}\right\|}^2 + \gamma_2 P_t {\left\|\mathbf{f}^H {\mathbf{n}}\right\|}^2 } {\frac{{\gamma_1 \gamma_2 P_s} |g|^2}{N_s^2} \left|\sum_{i=1}^{N_s} s_i^*\right|^2 {\left\| \mathbf{f} \right\|}^2 + \frac{\gamma_3 P_s}{N_c^2} \left|\sum_{i=1}^{N_s} \sum_{n=\frac{N_c}{N_s}(i-1)+1}^{\frac{N_c}{N_s}i} c_n s_i^*\right|^2 {\left\| \mathbf{h} \right\|}^2 + \|{\mathbf{n}}\|^2 }}\right).
\end{align}
\rule{18.2cm}{0.5pt}
\vspace{-5mm}
\end{figure*}
Let
\begin{align}\label{mu_nu}
\mu &= \left|\sum_{i=1}^{N_s} s_i\right|^2 = \left|\sum_{i=1}^{N_s} s_i^*\right|^2,\nonumber\\
\nu &= \left|\sum_{i=1}^{N_s} \sum_{n=\frac{N_c}{N_s}(i-1)}^{\frac{N_c}{N_s}i-1} c_n s_i^*\right|^2 = \left|\sum_{i=1}^{N_s} \sum_{n=\frac{N_c}{N_s}(i-1)}^{\frac{N_c}{N_s}i-1} c_n s_i\right|^2.
\end{align}
\indent Asymptotic massive MIMO expressions for Rayleigh fading channels have been presented in~\cite{Lim-2015}. Following a similar procedure for Nakagami-$m$ fading channels, we can show that $\frac{1}{M} {\left\|\mathbf{f}_i\right\|}^4 \,\to\, M +\frac{1}{m_f} $, $\frac{1}{M} {\left\|\mathbf{f}_i\right\|}^2 \,\to\, 1 $, $\frac{1}{M} {\left\|{\mathbf{f}_k}^H \mathbf{f}_i \right\|}^2 \,\to\, 1 $, $\frac{1}{M} {\mathbf{f}_k}^H \mathbf{f}_i \,\to\, 0 $ (for $k \neq i$), $\frac{1}{M} {\mathbf{f}_k}^H \widetilde {\mathbf{n}} \,\to\, 0 $, $\frac{1}{M} {\left\|{\mathbf{f}_i}^H\widetilde {\mathbf{n}} \right\|}^2 \,\to\, \frac{{\sigma_n}^2}{NT_c} $ and $\frac{1}{M} {\left\|{\widetilde {\mathbf{n}}} \right\|}^2 \,\to\, \frac{{\sigma_n}^2}{NT_c} $.
Note that only the expression for ${\left\|\mathbf{f}_i\right\|}^4$ differs for Nakagami-$m$ channels compared to Rayleigh fading, while the others remain the same. Also, only $m_f$ appears in the expression; $m_g$ and $m_h$ do not impact the results. Substituting these asymptotic results in~\eqref{qapprox} gives the result in~\eqref{q_instant} for $N_s \leq N_c$, which is reproduced below:
\sugcom{\begin{align*}
Q \approx \gamma_2 P_t \left(\frac{\gamma_1 {\gamma_2} |g|^2 \mu \left(M+\dfrac{1}{m_f}\right) + \gamma_3 \nu \left(\frac{N_s}{N_c}\right)^2 + \frac{\sigma_n^2 N_s}{T_s P_s}} {\gamma_1 \gamma_2 |g|^2 \mu + \gamma_3 \nu \left(\frac{N_s}{N_c}\right)^2 + \frac{\sigma_n^2 N_s}{T_s P_s}}\right).
\end{align*}}
When $N_s = N_c$,~\eqref{q_instant} simplifies to
\begin{align*}
Q & \approx \gamma_2 P_t \left(\frac{ \gamma_1 {\gamma_2} |g|^2 \mu \left(M+\dfrac{1}{m_f}\right) + \gamma_3 \nu + \frac{\sigma_n^2 N_s}{T_s P_s}} {\gamma_1 \gamma_2 |g|^2 \mu + \gamma_3 \nu + \frac{\sigma_n^2 N_s}{T_s P_s}}\right).
\end{align*}
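The Nakagami-$m$ fourth-moment limit $\tfrac{1}{M}\|\mathbf{f}_i\|^4 \to M + \tfrac{1}{m_f}$ quoted above can also be checked by simulation. The following Monte Carlo sketch (our illustration, with hypothetical values of $M$, $m_f$ and the trial count) uses the fact that a unit-power Nakagami-$m_f$ envelope has $|f_k|^2 \sim \mathrm{Gamma}(m_f, 1/m_f)$:

```python
import numpy as np

# Monte Carlo check (hypothetical M and m_f): for i.i.d. unit-power
# Nakagami-m_f entries, |f_k|^2 is Gamma(m_f, 1/m_f), so
# E[||f||^4]/M = M + 1/m_f  (here 64 + 0.5 = 64.5).
rng = np.random.default_rng(1)
M, m_f, trials = 64, 2.0, 50_000
g = rng.gamma(shape=m_f, scale=1.0 / m_f, size=(trials, M))  # |f_k|^2 samples
emp = np.mean(g.sum(axis=1) ** 2) / M
assert abs(emp - (M + 1.0 / m_f)) < 0.5      # within Monte Carlo error
```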
\underline{Case 2: $N_s \geq N_c$}
In this case, we have $T_s \leq T_c$. Substituting~\eqref{ambient} in~\eqref{xs}, we have
{\small \begin{subequations}
\begin{alignat}{3}
\mathbf{x}_{s}&= \frac{\sqrt{\gamma_1 \gamma_2 P_s} g^* \mathbf{f}^*}{N_cT_c} \int \limits_{0}^{N_cT_c} \sum_{n=1}^{N_c}c_n^2 {p_c}^2(t-nT_c) \sum_{i=1}^{N_s} s_i p_{s}(t-iT_s)dt, \label{xs2a}\\
&= \sqrt{\gamma_1 \gamma_2 P_s} \frac{ g^* \mathbf{f}^*}{\ N_cT_c } \sum_{n=1}^{N_c} \sum_{i=\frac{N_s}{N_c}(n-1)}^{\frac{N_s}{N_c}n-1} s_i \int \limits_{iT_s}^{(i+1)T_s}p_{s}(t-iT_s)dt, \label{xs2b}
\end{alignat}
\begin{alignat}{3}
&= \sqrt{\gamma_1 \gamma_2 P_s} \frac{ g^* \mathbf{f}^*}{\ N_cT_c } \sum_{n=1}^{N_c} \sum_{i=\frac{N_s}{N_c}(n-1)}^{\frac{N_s}{N_c}n-1} s_i T_s, \label{xs2c}\\
&= \sqrt{\gamma_1 \gamma_2 P_s} \frac{g^* \mathbf{f}^*}{N_s} \sum_{i=1}^{N_s} s_i, \label{xs2d}
\end{alignat}
\end{subequations}}
where~\eqref{xs2c} comes from the fact that $\int_{iT_s}^{(i+1)T_s}p_{s}(t-iT_s)\,dt = T_s$, and~\eqref{xs2d} is obtained using $N_cT_c = N_sT_s$.
\noindent Substituting~\eqref{ambient} in~\eqref{xi}, we obtain
{\small \begin{subequations}
\begin{alignat}{3}
\mathbf{x}_{i} &= \frac{\sqrt{\gamma_3 P_s} \textbf{h}^H}{ N_cT_c} \int \limits_{0}^{N_cT_c} \sum_{i=1}^{N_s} s_i p_{s}(t-iT_s) \sum_{n=1}^{N_c} c_n p_c(t-nT_c)dt, \label{xi2a}\\
&= \sqrt{\gamma_3 P_s} \frac{ \textbf{h}^H}{N_cT_c} \sum_{n=1}^{N_c} \sum_{i=\frac{N_s}{N_c}(n-1)}^{\frac{N_s}{N_c}n-1} c_n s_i \int \limits_{iT_s}^{(i+1)T_s}p_{s}(t-iT_s)dt, \label{xi2b}
\end{alignat}
\begin{alignat}{3}
\mathbf{x}_{i}&= \sqrt{\gamma_3 P_s} \frac{ \textbf{h}^H}{N_cT_c} \sum_{n=1}^{N_c} \sum_{i=\frac{N_s}{N_c}(n-1)}^{\frac{N_s}{N_c}n-1} c_n s_i T_s, \label{xi2c}\\
&= \sqrt{\gamma_3 P_s} \frac{ \textbf{h}^H}{N_s} \sum_{n=1}^{N_c} \sum_{i=\frac{N_s}{N_c}(n-1)}^{\frac{N_s}{N_c}n-1} c_n s_i, \label{xi2d}
\end{alignat}
\end{subequations}}
\noindent where~\eqref{xi2c} and~\eqref{xi2d} follow from the same reasoning as in~\eqref{xs2c} and~\eqref{xs2d}.\\
\indent Substituting~\eqref{xs2d} and~\eqref{xi2d} into~\eqref{rER}, we get~\eqref{rER2}, from which the incident RF power can be found as given in~\eqref{q_it3} at the top of the page.
\begin{figure*}[t]
\begin{align}\label{rER2}
{r}_{\textrm{ER}} &= \sqrt{\gamma_2 P_t} \frac{( \frac{\sqrt{\gamma_1 \gamma_2 P_s} g}{N_s} \sum_{i=1}^{N_s} s_i {\textbf{f}}^H \textbf{f} +\frac{\sqrt{\gamma_3 P_s}}{N_s} \sum_{n=1}^{N_c} \sum_{i=\frac{N_s}{N_c}(n-1)+1}^{\frac{N_s}{N_c}n} c_n s_i {\textbf{f}}^H \textbf{h} + \textbf{f}^H {\mathbf{n}})}{\left\|\frac{\sqrt{\gamma_1 \gamma_2 P_s} g^*}{N_s} \sum_{i=1}^{N_s} s_i^* {\textbf{f}}+ \frac{\sqrt{\gamma_3 P_s}}{N_s} \sum_{n=1}^{N_c} \sum_{i=\frac{N_s}{N_c}(n-1)+1}^{\frac{N_s}{N_c}n} c_n s_i^* {\textbf{h}} + {\mathbf{n}}\right\|}.
\end{align}
\begin{align}\label{q_it3}
Q_{RF} &= |r_{\textrm{ER}}|^2 \approx \left({\frac{ \frac{\gamma_1 {\gamma_2}^2 P_s P_t |g|^2}{N_s^2} \left|\sum_{i=1}^{N_s} s_i\right|^2 {\left\| \mathbf{f} \right\|}^4 + \frac{\gamma_3 P_s P_t }{N_s^2} \left|\sum_{n=1}^{N_c} \sum_{i=\frac{N_s}{N_c}(n-1)+1}^{\frac{N_s}{N_c}n} c_n s_i\right|^2 {\left\|{\textbf{f}}^H \textbf{h}\right\|}^2 + \gamma_2 P_t {\left\|\textbf{f}^H {\mathbf{n}}\right\|}^2 } { \frac{{\gamma_1 \gamma_2 P_s} |g|^2}{N_s^2} \left|\sum_{i=1}^{N_s} s_i^*\right|^2 {\left\| \mathbf{f} \right\|}^2 + \gamma_3 P_s \frac{1}{N_s^2} \left|\sum_{n=1}^{N_c} \sum_{i=\frac{N_s}{N_c}(n-1)+1}^{\frac{N_s}{N_c}n} c_n s_i^*\right|^2 {\left\| \mathbf{h} \right\|}^2 + \| {\mathbf{n}}\|^2 }}\right)
\end{align}
\rule{18.2cm}{0.5pt}
\vspace{-5mm}
\end{figure*}
Simplifying~\eqref{q_it3} using the asymptotic massive MIMO expressions~\cite{Lim-2015} gives the result for $N_s \geq N_c$ in~\eqref{q_instant}, reproduced below:
\begin{align*}
Q \approx \gamma_2 P_t \left(\frac{\gamma_1 {\gamma_2} |g|^2 \mu \left(M+\dfrac{1}{m_f}\right) + \gamma_3 \nu + \frac{\sigma_n^2 N_s}{T_s P_s}} {\gamma_1 \gamma_2 |g|^2 \mu + \gamma_3 \nu + \frac{\sigma_n^2 N_s}{T_s P_s}}\right),
\end{align*}
where $\mu $ and $\nu$ are as defined in~\eqref{mu_nu}.
\end{appendices}
\section{Introduction}
In their recent work, Plamenevskaya and Starkston \cite{ps} showed that every Stein filling of the link of a rational surface singularity with reduced fundamental cycle, equipped with its canonical contact structure, can be obtained from a configuration of symplectic graphical disks in $\mathbb{C}^2$ with marked points including all the intersection points of these disks, by removing the union of the proper transforms of these disks from the blowup of ${\mathbb{C}}^2$ at the marked points. Their purely topological proof relies on a theorem of Wendl \cite{w}, which implies that each Stein filling of the contact singularity link of the type above admits a planar Lefschetz fibration over $D^2$, since the contact link itself is supported by a planar open book (\cite{ab},\cite{nt}), and moreover, the Lefschetz fibration corresponds to a positive factorization of the monodromy of this open book. In their proof, Plamenevskaya and Starkston developed a method to reverse-engineer a braided wiring diagram producing any such factorization, and then extended this diagram to an arrangement of symplectic graphical disks which, in turn, gives the Stein filling along with the Lefschetz fibration via the method described above (see Section~\ref{sec: bwd} for more details of their construction).
In the present paper, we focus on cyclic quotient singularities---a subclass of rational surface singularities with reduced fundamental cycle. It is well-known that the oriented link of any cyclic quotient singularity is orientation preserving diffeomorphic to a lens space $L(p,q)$.
Let $\xi_{can}$ denote the canonical contact structure on the singularity link $L(p,q)$. In \cite{l}, Lisca classified the minimal symplectic fillings of the contact $3$-manifold $(L(p,q), \xi_{can})$, up to diffeomorphism, and also showed that any such filling of $(L(p,q), \xi_{can})$ is in fact a Stein filling. In \cite{bo}, we constructed a {\em planar} Lefschetz fibration $W \to D^2$ on each Stein filling $W$ of $(L(p,q), \xi_{can})$, by explicitly describing the ordered set of vanishing cycles on a disk with holes. In this paper, our primary goal is to describe an algorithm to draw a wiring diagram, which turns out to be {\em unbraided},
that corresponds to each of these planar Lefschetz fibrations via the work of Plamenevskaya and Starkston.
\begin{Thm}\label{thm: main} There is an algorithm to draw an explicit unbraided wiring diagram whose associated planar Lefschetz fibration obtained by the method of Plamenevskaya and Starkston \cite{ps} is equivalent to the planar Lefschetz fibration $W \to D^2$ constructed by the authors \cite{bo} on each Stein filling $W$ of the contact $3$-manifold $(L(p,q), \xi_{can})$.
\end{Thm}
As we mentioned in the first paragraph, Plamenevskaya and Starkston described an algorithm (see \cite[Section 5.4]{ps}) to obtain a braided wiring diagram from the ordered set of vanishing cycles of a planar Lefschetz fibration, by reverse-engineering. Their algorithm involves many choices (see \cite[Remark 5.7]{ps}) and although we do not rely on their reverse-engineering algorithm here, we show {\em indirectly} that with appropriate choices, one can obtain ``unbraided'' wiring diagrams, meaning that all the braids in the diagram can be chosen to be the identity.
The article of Plamenevskaya and Starkston was admittedly inspired by the work of de Jong and van Straten \cite{djvs}, who studied the Milnor fibers and deformation theory of sandwiched singularities---which includes rational surface singularities with reduced fundamental cycle. In their work, deformation theory of a surface singularity in the given class is reduced to deformations of the germ of a reducible plane curve representing the singularity.
In particular, to any cyclic quotient singularity germ, de Jong and van Straten associate a decorated germ of a reduced plane curve singularity $\mathcal C=C_1\cup\cdots\cup C_n\subset ({\mathbb{C}}^2,0)$ with {\em smooth} irreducible branches, where the decoration on each $C_i$ is a certain positive integer, which we omit here from the notation for simplicity. The outcome of de Jong and van Straten's construction is that there is a bijection between one-parameter deformations of the cyclic quotient singularity and ``picture deformations'' of $\mathcal C $ representing that singularity (see Section~\ref{sec: bwd} for more details of their construction).
Moreover, Plamenevskaya and Starkston \cite[Proposition 5.5]{ps} extend any given braided wiring diagram (viewed as a collection of intersecting curves in ${\mathbb{R}} \times {\mathbb{C}}$) to a collection of symplectic disks in ${\mathbb{C}} \times {\mathbb{C}}$. Consequently, as a corollary to Theorem~\ref{thm: main} coupled with \cite[Proposition 5.8]{ps}, we obtain the following result.
\begin{Cor} \label{cor: arrangement} For each Stein filling $W$ of the contact $3$-manifold $(L(p,q), \xi_{can})$, there is an explicit collection of symplectic graphical disks $\Gamma_1, \ldots , \Gamma_n$ in ${\mathbb{C}}^2$, with marked points $p_1, \ldots, p_m \subset \bigcup_i \Gamma_i$, which include all the intersection points of these disks, so that by removing the union of the proper transforms $\widetilde{\Gamma}_1, \ldots, \widetilde{\Gamma}_n$ of $\Gamma_1, \ldots, \Gamma_n$ from the blowup of ${\mathbb{C}}^2$ at these marked points we obtain $W$ along with the Lefschetz fibration mentioned in Theorem~\ref{thm: main}. Moreover, the collection of symplectic graphical disks $\Gamma_1, \ldots , \Gamma_n$ is related to the decorated plane curve germ $\mathcal C=C_1\cup\cdots\cup C_n\subset ({\mathbb{C}}^2,0)$ representing the cyclic quotient surface singularity at hand by a smooth graphical homotopy.
\end{Cor}
For each Stein filling $W$ of $(L(p,q), \xi_{can})$, the symplectic graphical disk arrangement $\Gamma:= \Gamma_1 \cup \cdots \cup \Gamma_n $ with marked points $p_1, \ldots, p_m \subset \Gamma$ in ${\mathbb{C}}^2$ described in Corollary~\ref{cor: arrangement} determines immediately the $n \times m$ {\em incidence matrix} $\mathcal{I}(\Gamma, \{p_j\})$, defined so that its $(i,j)$-entry is $1$ if $p_j \in \Gamma_i$, and $0$ otherwise. Since there is no canonical labelling of the points $p_j$, in general, the incidence matrix is only defined up to permutation of the columns.
\begin{Cor} \label{cor: incidence} For each Stein filling $W$ of $(L(p,q), \xi_{can})$, there is an iterative algorithm to obtain the incidence matrix $\mathcal{I}(\Gamma, \{p_j\})$ for the symplectic graphical disk arrangement $\Gamma:= \Gamma_1 \cup \cdots \cup \Gamma_n $ with marked points $p_1, \ldots, p_m \subset \Gamma$ in ${\mathbb{C}}^2$ described in Corollary~\ref{cor: arrangement}. \end{Cor}
As a matter of fact, one can read off the incidence matrix directly from the wiring diagram from which the symplectic disk configuration arises. As explained in \cite[Section 6]{ps} and \cite[Section 5]{djvs}, the incidence matrix $\mathcal{I}(\Gamma, \{p_j\})$ determines the fundamental group, the integral homology and the intersection form of $W$, as well as the first Chern class of the Stein structure on $W$.
Note that each Milnor fiber of the cyclic quotient singularity is a Stein filling of its boundary---which is the link $L(p,q)$ of the singularity, equipped with its canonical contact structure
$\xi_{can}$. In \cite{npp}, N\'{e}methi and Popescu-Pampu showed that there is an explicit one-to-one correspondence between Stein fillings of $(L(p,q), \xi_{can})$ and Milnor fibers of the corresponding cyclic quotient singularity, proving in particular a conjecture of Lisca \cite{l}. As another application of Theorem~\ref{thm: main}, we obtain an alternate proof of their result formulated as Corollary~\ref{cor: equiv}.
We say that two (smooth) disk arrangements $(\Gamma, \{p_j\})$ and $(\Gamma', \{p'_j\})$ in $\mathbb{C}^2$ are {\em combinatorially equivalent} if their incidence matrices coincide up to permutation of columns, i.e. up to relabelling of the marked points.
\begin{Cor} \label{cor: equiv} For each Stein filling $W$ of $(L(p,q), \xi_{can})$, the arrangement of symplectic graphical disks $(\Gamma, \{p_j\})$, described in Corollary~\ref{cor: arrangement}, is combinatorially equivalent to the arrangement of the smooth branches of a picture deformation of the decorated plane curve germ $\mathcal C$ representing the corresponding cyclic quotient surface singularity, described by de Jong and van Straten \cite{djvs}. This equivalence gives an explicit bijection between Stein fillings of $(L(p,q), \xi_{can})$ and Milnor fibers of the corresponding cyclic quotient singularity. \end{Cor}
In other words, the wiring diagrams we obtain via Theorem~\ref{thm: main} can be viewed as picture deformations of the decorated plane curve germ representing the associated cyclic quotient singularity. In Figure~\ref{fig: steintomilnor} below, we summarized the correspondence that takes each Stein filling of $(L(p,q), \xi_{can})$ given by Lisca to the Milnor fiber of the associated cyclic quotient singularity described by de Jong and van Straten via a picture deformation of the decorated plane curve representing the singularity.
\begin{figure}[ht] \relabelbox \small {\epsfxsize=4.5in
\centerline{\epsfbox{steintomilnor.eps}}}
\relabel{1}{Stein filling } \relabel{2}{Lefschetz fibration} \relabel{3}{Wiring diagram}
\relabel{4}{Configuration of symp.} \relabel{8}{graphical disks in $\mathbb{C}^2$} \relabel{5}{Incidence matrix} \relabel{6}{picture deformation} \relabel{7}{Milnor fiber}
\endrelabelbox
\caption{From Lisca to de Jong and van Straten }\label{fig: steintomilnor} \end{figure}
It is well-known that for any contact singularity link $(L(p,q), \xi_{can})$, the Milnor fiber of the Artin smoothing component of the corresponding cyclic quotient singularity gives a Stein filling which is Stein deformation equivalent to the one obtained by deforming the symplectic structure on the minimal resolution (see \cite{bd}) of the singularity. In addition, according to \cite{djvs}, there is a canonical picture deformation, called the {\em Scott deformation}, of the decorated plane curve germ which corresponds to the Artin smoothing. As a final application of Theorem~\ref{thm: main}, we obtain a wiring diagram that represents the combinatorial equivalence class of the Scott deformation.
\begin{Cor} \label{cor: scott} For the Milnor fiber of the Artin smoothing component of the cyclic quotient singularity, the arrangement of symplectic disks given in Corollary~\ref{cor: arrangement}, arising from the wiring diagram described in Theorem~\ref{thm: main}, is combinatorially equivalent to the Scott deformation of a decorated plane curve representing the singularity.
\end{Cor}
\section{Rational singularities with reduced fundamental cycle, picture deformations, and braided wiring diagrams} \label{sec: bwd}
In \cite{npp}, N\'{e}methi and Popescu-Pampu showed that there is an explicit bijective correspondence between Stein fillings of the link of a cyclic quotient singularity and Milnor fibers of smoothing components of the given singularity, as conjectured by Lisca \cite{l}. As cyclic quotient singularities are examples of sandwiched singularities, de Jong and van Straten's picture deformation construction (see \cite{djvs}) can be used to describe these Milnor fibers. We give a brief description of the construction of de Jong and van Straten here.
As the theory is easier to describe in the case of rational singularities with reduced fundamental cycle, a class which
contains cyclic quotient singularities, we will restrict attention to these.
Let $(X,x)$ be the germ of a rational singularity with reduced fundamental cycle.
De Jong and van Straten associate to $(X,x)$ a, possibly nonunique, decorated germ of a reduced plane curve singularity $\mathcal C=C_1\cup\cdots\cup C_n\subset ({\mathbb{C}}^2,0)$ with \emph{smooth} irreducible branches. Each such singularity can be resolved by a finite sequence of blowups.
For each branch $C_i$, let $m_i$ denote the number of times $C_i$, or its proper transform,
is blown up in the minimal resolution. For example, if $\mathcal C$ consists of a collection of curves intersecting
(pairwise) transversally at $0$, then $m_i=1$ for all $i$.
The decoration on $\mathcal C$ consists of an $n$-tuple $l=(l_1,\dots,l_n)$ of positive integers such that $l_i\geq m_i$
for each $i$. Given such
a decorated curve $(\mathcal C,l)$, one can recover the corresponding surface singularity as follows: Take the minimal
embedded resolution of $\mathcal C$ and iteratively blow up the proper transform of $C_i$ $(l_i-m_i)$-times on the preimage
of $0$ for each $i$. Under the condition that $l_i>m_i$ for each $i$, the set of exceptional curves that do not meet
the proper transform of $\mathcal C$ will be connected. Collapsing them then gives the corresponding surface singularity.
Given a decorated curve $(\mathcal C,l)$, let $\tilde{\mathcal C}=\tilde C_1\cup\cdots\cup\tilde C_n$ denote the normalization of $\mathcal C$ (which in our present situation is just the disjoint union of the irreducible components of $\mathcal C$).
Geometrically, one may think of the decoration $l$ as a collection of $l_i$ marked points on $\tilde C_i$, for each $i$, all concentrated on
the preimage of the singular point.
The outcome of de Jong and van Straten's construction is that there is a one-to-one correspondence between one-parameter deformations of $(X,x)$ and ``picture deformations'' of $(\mathcal C,l)$. Roughly speaking, a \emph{picture deformation} of $(\mathcal C,l)$
consists of a $\delta$-constant deformation $\mathcal C^s=C_1^s\cup\cdots\cup C_n^s$ of $\mathcal C$, which in the present situation means that the branches of $\mathcal C$ are deformed separately
and not allowed to merge, together with a redistribution $l^s$ of the marked points so that we have exactly $l_i$ marked points on $\tilde C^s_i$ for each $i$, where $\tilde C^s_1,\ldots,\tilde C^s_n$ denote the irreducible components of the normalization $\tilde{\mathcal C}^s$ of $\mathcal C^s$. Here $\mathcal C^0=\mathcal C$ and we require that for $s\neq 0$ the only singularities of $\mathcal C^s$ are ordinary
$k$-tuple points, for various $k$, that is, transversal intersections of $k$ smooth branches, and that each such multiple point
is marked. There may be additional ``free'' marked points on the branches of $\mathcal C^s$. The Milnor fiber of the smoothing
associated to $(\mathcal C^s,l^s)$ can then be constructed by blowing up all the marked points, taking the complement
of the proper transforms of $C_1^s,\ldots,C_n^s$ and smoothing corners. Here the Milnor fiber will be noncompact, but by
working in a small ball centered at the origin in ${\mathbb{C}}^2$ we can obtain compact Milnor fibers.
The topological information from picture deformations can be conveniently extracted by using the notion of braided wiring diagrams. These were introduced by
Cohen and Suciu \cite{cs} in their study of complex hyperplane arrangements and have been used fruitfully by Plamenevskaya and Starkston \cite{ps} in
their investigation of unexpected Stein fillings in the case of rational surface singularities with
reduced fundamental cycle. We briefly describe these next.
A \emph{braided wiring diagram} is a collection of curves $f_i\colon [0,1]\to{\mathbb{R}}\times{\mathbb{C}}$ for $1\leq i\leq n$, called \emph{wires},
such that $f_i(t)\in\{t\}\times{\mathbb{C}}$. At finitely many interior points $t_1,\ldots,t_m$, a subcollection of the wires may intersect with the
remaining being disjoint, but at each such point
the wires intersecting are assumed to have distinct tangent lines. We will make the further assumption that there is
a number $\varepsilon> 0$ such that the positions of the wires above the points $0,1$ and $t_i\pm\varepsilon$ take the same given values
in ${\mathbb{R}}\subset{\mathbb{C}}$ and the restriction of each wire $f_j$ to $(t_i-\varepsilon,t_i+\varepsilon)$ is linear. Any
braided wiring diagram can be made to satisfy this assumption by a homotopy of braided wiring diagrams.
Then the portions of the braided wiring diagram between $t_i-\varepsilon$ and $t_i+\varepsilon$ can be
specified by declaring which adjacent wires intersect and on the complementary intervals the wires may be braided. Moreover, any wiring diagram will be presented by its projection onto ${\mathbb{R}} \times {\mathbb{R}} \subset {\mathbb{R}} \times {\mathbb{C}}$, where the second ${\mathbb{R}}$ is the real part of ${\mathbb{C}}$.
We now describe how to obtain a braided wiring diagram from a picture deformation $(\mathcal C^s,l^s)$. By choosing
coordinates of ${\mathbb{C}}^2$ generically, we may assume that each $C_i^0$ is graphical, that is,
$C_i^0=\{(x,y)\in{\mathbb{C}}^2\,|\,x\in D, y=f_i(x)\}$ for some complex function $f_i$, where $D$ is a small disk in ${\mathbb{C}}$
centered at $0$. For $s>0$ sufficiently small, it follows that each $C_i^s$ is graphical. Let $\eta_1,\ldots,\eta_m$
denote the images of the intersection points of $C_1^s,\ldots,C_n^s$ under the map $\pi_x \colon{\mathbb{C}}^2\to{\mathbb{C}}$
given by projecting onto the first coordinate and choose a smooth curve $\gamma\colon [0,1]\to D$ whose interior passes through these points such that $\gamma^\prime(t)$ has
nonpositive real part for all $t$. Then $(C_1^s\cup\cdots\cup C_n^s)\cap\pi_x^{-1}(\gamma)$ is a braided wiring diagram.
Next we review how Plamenevskaya and Starkston constructed planar Lefschetz fibrations based on a configuration of smooth disks in ${\mathbb{C}}^2$; see \cite[Lemma 3.2]{ps}. Let $\Gamma_1, \ldots, \Gamma_n$ be smooth disks in ${\mathbb{C}}^2$ which are graphical with respect to the projection $\pi_{x}$. Assume that whenever two or more of these disks meet at a point, they intersect transversally and positively with respect
to the orientation on the graph $\Gamma_i$ induced from the natural orientation on ${\mathbb{C}}$. Let $p_1, \ldots, p_m$ be the marked points
on $\bigcup_i \Gamma_i$ which include all the intersection points, and let $ \Pi\colon {\mathbb{C}}^2 \# m \hbox{$\overline{{\Bbb C}P^2}$}
\to {\mathbb{C}}^2 $ be the blow-up at the
points $p_1, \ldots, p_m$. If $\widetilde{\Gamma}_1, \ldots, \widetilde{\Gamma}_n$
denote the proper transforms of $\Gamma_1, \ldots, \Gamma_n$,
then $$\pi_{x} \circ \Pi\colon \left({\mathbb{C}}^2 \# m \hbox{$\overline{{\Bbb C}P^2}$}\right) \setminus (\widetilde{\Gamma}_1 \cup \cdots \cup \widetilde{\Gamma}_n) \to {\mathbb{C}}$$ is a {\em Lefschetz fibration} whose regular fibers are
punctured planes, where each puncture corresponds to a component $\widetilde{\Gamma}_i$. There is one vanishing cycle for
each point $p_j$, which is a curve in the fiber enclosing the punctures that correspond to the components $\Gamma_i$
passing through $p_j$.
Moreover, restricting to an appropriate Milnor ball in ${\mathbb{C}}^2$ that contains all the points $p_1, \ldots, p_m$, one obtains a Lefschetz fibration whose fiber is a disk with holes, where the holes correspond
to the components $\Gamma_i$ and the vanishing cycles correspond to the points $p_j$ in the same way as described above.
Furthermore, if the {\em curvettas} $C_1^s,\ldots,C_n^s$
with marked points are the result of a picture deformation of a germ associated to a surface singularity, the Lefschetz fibration constructed as above is compatible
with the complex structure on the Milnor fiber of the corresponding smoothing.
\subsection{From wiring diagrams to planar Lefschetz fibrations} \label{subsec: wiringtoPLF} Here we outline the method of Plamenevskaya and Starkston that gives a set of ordered vanishing cycles associated to any braided wiring diagram, which in turn determines a planar Lefschetz fibration on the associated Stein fillings; see \cite[Section 5.2]{ps}.
In this paper, we will only deal with wiring diagrams without any braids and we will call them {\em unbraided} wiring diagrams. In the following, we describe their method for the case of unbraided wiring diagrams. {\em We should emphasize that our conventions will be different from those of \cite{ps}, for the purposes of this paper.} We denote the marked points (consisting of intersection points and free points) by $x_i$, and enumerate them according to their geometric position from right to left, as illustrated in Figure~\ref{fig: wire}.
\begin{figure}[ht] \relabelbox \small {\epsfxsize=2in
\centerline{\epsfbox{wire.eps}}}
\relabel{z}{$x_1$} \relabel{w}{$x_3$} \relabel{y}{$x_4$} \relabel{h}{$x_2$} \endrelabelbox
\caption{An example of an {\em unbraided} wiring diagram without any free points.} \label{fig: wire} \end{figure}
For each marked point $x_s$ in the wiring diagram, there is a convex curve $\delta(x_s)$ in $D_k$ enclosing a certain set of adjacent holes, which is determined as follows.
\begin{Def} \label{def: convex} {\em (Convex curve assigned to a marked point)} Suppose that the marked point $x_s$ is a simultaneous intersection point of some geometrically consecutive wires in a given wiring diagram. The convex curve $\delta} \def\D{\Delta (x_s)$ encircling the adjacent holes whose geometric order from the top in $D_k$ coincides with the local geometric order of the wires simultaneously intersecting at that marked point is called the convex curve assigned to $x_s$. If $x_s$ is a free marked point on a single wire, then the convex curve $\delta} \def\D{\Delta (x_s)$ assigned to $x_s$ is the curve which is parallel to a single interior boundary component of $D_k$ whose order from the top coincides with the local geometric order of the wire. \end{Def}
For example, in Figure~\ref{fig: wire}, the geometrically top four wires intersect at the marked point $x_4$; the geometrically top two wires intersect at the marked point $x_3$; the geometrically bottom two wires intersect at the marked point $x_2$ and the geometrically second and third wires intersect at the marked point $x_1$. It follows that the convex curves $\delta} \def\D{\Delta (x_4), \delta} \def\D{\Delta (x_3), \delta} \def\D{\Delta (x_2), \delta} \def\D{\Delta (x_1)$ depicted in Figure~\ref{fig: ex9b} are assigned to the marked points $x_4, x_3, x_2, x_1$, respectively, in Figure~\ref{fig: wire}.
\begin{figure}[ht] \relabelbox \small {\epsfxsize=1.8in
\centerline{\epsfbox{ex9b.eps}}}
\relabel{4}{$\delta(x_3)$} \relabel{5}{$\delta(x_1)$} \relabel{6}{$\delta(x_2)$} \relabel{7}{$\delta(x_4)$}
\endrelabelbox
\caption{Convex curves in $D_5$ assigned to the marked points in Figure~\ref{fig: wire}.} \label{fig: ex9b} \end{figure}
For each marked point $x_s$ in the wiring diagram, there is a counterclockwise half-twist $\Delta(x_s)\colon D_k \to D_k$, which is determined as follows.
\begin{Def} \label{def: twist} {\em (Counterclockwise half-twist corresponding to a marked point)} The counterclockwise half-twist $\Delta(x_s)$ along the subdisk in $D_k$ enclosed by the convex curve $\delta(x_s)$ is called the counterclockwise half-twist corresponding to $x_s$. \end{Def}
Suppose that a wiring diagram has $k$ wires and $r$ marked points $x_r, x_{r-1}, \ldots, x_1$, reading from left to right. According to \cite{ps}, for each $1 \leq s \leq r$, there is a vanishing cycle $V(x_s)$ in $D_k$ associated to the marked point $x_s$, which is determined as follows.
\begin{Def} \label{def: cycle} {\em (Vanishing cycle associated to a marked point)} For each $2 \leq s \leq r$, the vanishing cycle $V(x_s)$ associated to the marked point $x_s$ is the curve in $D_k$ given as $$ \Delta(x_{1}) \circ \cdots \circ \Delta(x_{s-1}) (\delta(x_s)),$$ and $V(x_1) =\delta(x_1)$. \end{Def}
For example, the vanishing cycles for the marked points in Figure~\ref{fig: wire} are calculated as follows. The curve $V(x_4) = \Delta(x_{1}) \circ \Delta(x_{2}) \circ \Delta(x_{3}) (\delta(x_4))$ is illustrated in Figure~\ref{fig: ex7}. Similarly, $V(x_3) = \Delta(x_{1}) \circ \Delta(x_{2})(\delta(x_3))$ is illustrated in Figure~\ref{fig: ex8}. Finally, the vanishing cycle $V(x_2)= \Delta(x_{1})(\delta(x_2)) = \delta(x_2)$ and $V(x_1)=\delta(x_1)$ by definition.
\begin{figure}[ht] \relabelbox \small {\epsfxsize=5in
\centerline{\epsfbox{ex7.eps}}} \relabel{1}{$\delta(x_4)$} \relabel{2}{$V(x_4)$} \relabel{a}{$\Delta(x_{3})$} \relabel{b}{$\Delta(x_{2})$} \relabel{c}{$\Delta(x_{1})$}
\endrelabelbox
\caption{Starting from $\delta(x_4)$, we apply a counterclockwise half-twist on the subdisk enclosed by the dotted curve, at each step, going from left to right. }\label{fig: ex7} \end{figure}
\begin{figure}[ht] \relabelbox \small {\epsfxsize=4in
\centerline{\epsfbox{ex8.eps}}} \relabel{1}{$\delta(x_3)$} \relabel{2}{$V(x_3)$} \relabel{b}{$\Delta(x_{1})$} \relabel{a}{$\Delta(x_{2})$}
\endrelabelbox
\caption{Starting from $\delta(x_3)$, we apply a counterclockwise half-twist on the subdisk enclosed by the dotted curve, at each step, going from left to right. }\label{fig: ex8} \end{figure}
\section{Planar Lefschetz fibrations on Stein fillings of lens spaces} \label{sec: palf}
\subsection{Symplectic fillings of lens spaces} \label{sec: lens}
In \cite{l}, Lisca classified the minimal symplectic fillings of the contact $3$-manifold $(L(p,q), \xi_{can})$, up to diffeomorphism. It turns out that any minimal symplectic filling of $(L(p,q), \xi_{can})$ is in fact a Stein filling. We first briefly review Lisca's classification \cite{l} of the
Stein fillings of $(L(p,q), \xi_{can})$, up to diffeomorphism.
\begin{Def} \label{blowup} {\em (Blowup of a tuple of positive integers)} For any integer $r \geq 2$, a blowup of an $r$-tuple of positive
integers at the $i$th term is a map $\varphi_i\colon \mathbb{Z}^r_+\to
\mathbb{Z}^{r+1}_+$ defined by
\begin{align*}
& (n_1, \ldots, n_{i}, n_{i+1}, \ldots, n_r) \mapsto (n_1, \ldots,
n_{i-1}, n_{i}+1,1,n_{i+1}+1, n_{i+2}, \ldots,
n_r)
\end{align*}
for any $1 \leq i \leq r-1$ and by
\begin{align*} & (n_1, \ldots,
n_r) \mapsto (n_1, \ldots, n_{r-1},
n_r+1,1)
\end{align*}
when $i=r$. The case when $1 \leq i \leq r-1$ is called an interior blowup, whereas the case $i=r$ is called an exterior blowup. We also say that $(0) \to (1,1)$ is the initial blowup.
\end{Def}
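To make the two operations concrete, here is a minimal Python sketch of Definition~\ref{blowup} (the function names are ours, not from \cite{l}; the index $i$ is the $1$-based index of the definition):

```python
def interior_blowup(n, i):
    # Interior blowup of the tuple n at the i-th term (1-based, 1 <= i <= len(n)-1):
    # (n_1,...,n_i,n_{i+1},...) -> (n_1,...,n_{i-1}, n_i+1, 1, n_{i+1}+1, n_{i+2},...)
    j = i - 1  # convert to 0-based indexing
    return n[:j] + (n[j] + 1, 1, n[j + 1] + 1) + n[j + 2:]

def exterior_blowup(n):
    # Exterior blowup: (n_1,...,n_r) -> (n_1,...,n_{r-1}, n_r+1, 1)
    return n[:-1] + (n[-1] + 1, 1)
```

For instance, `interior_blowup((1, 1), 1)` gives `(2, 1, 2)`, while `exterior_blowup((1, 1))` gives `(1, 2, 1)`.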
Suppose that $p > q \geq 1$ are coprime integers and let
$$\frac{p}{p-q}=[b_1, b_2, \ldots, b_k]=b_1-
\cfrac{1}{b_2- \cfrac{1}{\ddots- \cfrac{1}{b_k}}}$$ be the Hirzebruch-Jung continued fraction, where $b_i \geq 2$ for $1 \leq i \leq
k$. Note that the sequence of integers $\{ b_1, b_2, \ldots, b_k\}$ is uniquely determined by the pair $(p,q)$.
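The coefficients $b_i$ are obtained by iterated ceilings. The following Python sketch (a helper of ours, not part of the paper) computes the Hirzebruch-Jung expansion of any rational number $p/q > 1$:

```python
from fractions import Fraction

def hj_expansion(p, q):
    # Hirzebruch-Jung expansion p/q = [b_1, ..., b_k]: repeatedly take
    # b = ceil(x), then replace x by 1/(b - x), stopping when b - x = 0.
    bs, x = [], Fraction(p, q)
    while True:
        b = -(-x.numerator // x.denominator)  # ceiling of x
        bs.append(b)
        x = b - x
        if x == 0:
            return bs
        x = 1 / x
```

For instance, `hj_expansion(56, 39)` returns `[2, 2, 5, 2, 3]`, i.e. $\frac{56}{39}=[2,2,5,2,3]$.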
For any $k \geq 2$, a $k$-tuple of positive integers $(n_1, \ldots, n_k)$ is
called {\em admissible} if each of the denominators in the continued
fraction $[n_1, \ldots, n_k]$ is positive, where we do not assume that $n_i \geq 2$. For any $k \geq 2$, let $\mathcal{Z}_{k} \subset
\mathbb{Z}^k$ denote the set of admissible $k$-tuples of positive
integers $\textbf{n}=(n_1, \ldots, n_k)$ such that $[n_1, \ldots,
n_k] =0$ and let $\mathcal{Z}_{1}=\{(0)\}$. As a matter of fact, any $k$-tuple of
positive integers in $\mathcal{Z}_{k}$ can be obtained from $(0)$
by a sequence of blowups as observed by Lisca \cite[Lemma 2]{l}. Note that the only possible blowup of $(0)$ is the initial blowup $(0) \to (1,1)$. Let
$$\mathcal{Z}_{k}(\textstyle{\frac{p}{p-q}}) = \{ (n_1,
\ldots, n_k)\in\mathcal{Z}_{k}\,|\, 0 \leq n_i \leq b_i \;
\mbox{for} \; i=1, \ldots , k\}.$$
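Admissibility and membership in $\mathcal{Z}_k$ can be tested directly by evaluating the continued fraction from the right. Below is a Python sketch (our own helper, not from \cite{l}; returning `None` signals a non-admissible tuple), so that a tuple $\textbf{n}$ lies in $\mathcal{Z}_k$ exactly when the returned value is $0$:

```python
from fractions import Fraction

def nc_fraction(n):
    # Evaluate [n_1,...,n_k] = n_1 - 1/(n_2 - 1/(... - 1/n_k)) from the right.
    # Return None if some partial denominator fails to be positive,
    # i.e. if the tuple is not admissible.
    x = Fraction(n[-1])
    for a in reversed(n[:-1]):
        if x <= 0:
            return None
        x = a - 1 / x
    return x
```

For instance, `nc_fraction((1, 1))` and `nc_fraction((2, 1, 4, 1, 2))` both evaluate to $0$.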
Next, for every $k$-tuple $\textbf{n}=(n_1, \ldots, n_k)\in \mathcal{Z}_{k}(\textstyle{\frac{p}{p-q}})$, we describe a $4$-manifold $W_{p,q}(\textbf{n})$ whose boundary is orientation-preserving diffeomorphic to $L(p,q)$. We start with a chain of unknots in $S^3$ with framings $n_1, n_2,
\ldots, n_k$, respectively. It can be easily verified that the result of Dehn
surgery on this framed link, which we denote $N(\textbf{n})$, is diffeomorphic to $S^1 \times S^2$. Let
$\textbf{L}=\bigcup_{i=1}^{k} L_i$ denote the framed link in $N(\textbf{n})$ depicted in red
in Figure~\ref{han}, where each $L_i$ has $b_i -n_i$ components.
\begin{figure}[ht]
\relabelbox \small {\epsfxsize=4.5in
\centerline{\epsfbox{han.eps}}}
\relabel{a}{$n_1$} \relabel{b}{$n_2$} \relabel{d}{$n_{k-1}$} \relabel{e}{$n_k$} \relabel{1}{$b_1-n_1$}
\relabel{2}{$b_2-n_2$} \relabel{3}{$b_{k-1}-n_{k-1}$} \relabel{4}{$b_k-n_k$} \relabel{5}{$-1$} \relabel{6}{$-1$}
\relabel{7}{$-1$} \relabel{8}{$-1$} \relabel{9}{$-1$} \relabel{10}{$-1$} \relabel{a1}{$-1$} \relabel{a2}{$-1$}
\relabel{a3}{$-1$} \relabel{a4}{$-1$} \relabel{a5}{$-1$} \relabel{a6}{$-1$}
\endrelabelbox
\caption{The relative handlebody decomposition of the $4$-manifold $W_{p,q}(\textbf{n})$.}
\label{han}
\end{figure}
Since $N(\textbf{n})$ is diffeomorphic to $S^1 \times S^2$, one can fix a diffeomorphism $\phi : N(\textbf{n})\to S^1 \times S^2$. By attaching
$2$-handles to $S^1 \times D^3$ along the framed link $\phi(\textbf{L}) \subset S^1
\times S^2$, we obtain a smooth $4$-manifold $W_{p,q}(\textbf{n})$ whose boundary is orientation-preserving diffeomorphic to $L(p,q)$. As noted by Lisca, the diffeomorphism type of $W_{p,q}(\textbf{n})$ is independent of the choice of $\phi$ since any self-diffeomorphism
of $S^1 \times S^2$ extends to $S^1 \times D^3$.
According to Lisca,
any minimal symplectic filling (in fact Stein filling) of ($L(p,q), \xi_{can})$ is
orientation-preserving diffeomorphic to
$W_{p,q}(\textbf{n})$ for some $\textbf{n} \in
\mathcal{Z}_{k}(\frac{p}{p-q})$.
\subsection{Planar Lefschetz fibrations on Stein fillings} \label{sec: planar}
In \cite{bo}, we described an algorithm to construct a planar Lefschetz fibration $W_{p,q}(\textbf{n}) \to D^2$, based on any given blowup sequence $$(0) \to (1,1) \to \cdots \to \textbf{n}=(n_1, \ldots, n_k)\in
\mathcal{Z}_{k}(\frac{p}{p-q}).$$ Here we briefly review our algorithm, which consists of two parts, {\em stabilization} and {\em surgery}, and produces an ordered set of vanishing cycles on a disk with $k$ holes, which is the fiber of our Lefschetz fibration $W_{p,q}(\textbf{n}) \to D^2$. We begin by describing the first part of our algorithm, which we call the stabilization algorithm.
\subsubsection{The stabilization algorithm} \label{sec: stab} For any positive integer $r$, let $D_r$ denote the disk with $r$ holes. We assume that the holes are aligned horizontally on $D_r$ and we enumerate the holes on $D_r$ from left to right as $H_1, H_2, \ldots, H_{r}.$
The initial step of the algorithm corresponding to $(0)$ is the disk $D_1$ with no vanishing cycle, as depicted on the top in Figure~\ref{fig: stabil1}. Recall that the only blowup starting from $(0)$ is the initial blowup $(0) \to (1,1)$. The corresponding fiber is the disk $D_2$ with one vanishing cycle $\alpha_1$, which is parallel to the boundary of $H_1$, as depicted in the middle in Figure~\ref{fig: stabil1}. This is a stabilization of the previous step, where we had the annulus $D_1$ with no vanishing cycle. Depending on the type of the next blowup, we proceed as follows.
\begin{figure}[ht]
\relabelbox \small {\epsfxsize=4.5in
\centerline{\epsfbox{stabil1.eps}}}
\relabel{1}{$1$} \relabel{2}{$2$} \relabel{3}{$1$} \relabel{4}{$2$} \relabel{5}{$3$} \relabel{6}{$1$} \relabel{7}{$2$} \relabel{8}{$3$} \relabel{9}{$1$}
\relabel{a}{$(1,1)$} \relabel{b}{$(2,1,2)$} \relabel{c}{$(1,2,1)$} \relabel{d}{interior blowup} \relabel{e}{exterior blowup} \relabel{n}{initial blowup}
\relabel{f}{$\alpha_1$} \relabel{g}{$\alpha_1$} \relabel{k}{$\alpha_1$} \relabel{h}{$\alpha_2$} \relabel{j}{$\alpha_2$} \relabel{m}{$(0)$}
\endrelabelbox
\caption{Stabilizations depending on the type of the blowup.}
\label{fig: stabil1}
\end{figure}
If we have an interior blowup at the first term $(1,1) \to (2,1,2)$, then $H_2$ ``splits" into two holes, where the new hole $H_3$ is placed to the right of $H_2$. The curve $\alpha_1$ becomes a convex curve enclosing $H_2$ and $H_3$ in $D_3$. We introduce a new vanishing cycle $\alpha_2$ which encloses $H_1$ and $H_3$ in $D_3$ as shown at the bottom left in Figure~\ref{fig: stabil1}. We can view the introduction of $\alpha_2$ as a stabilization of the previous step.
On the other hand, if we have an exterior blowup $(1,1) \to (1,2,1)$, then we simply introduce a new hole $H_3$ to the right, and the new vanishing cycle $\alpha_2$ is parallel to the boundary of $H_3$ in $D_3$ as shown at the bottom right in Figure~\ref{fig: stabil1}. Again, we can view the introduction of $\alpha_2$ as a stabilization of the previous step.
Now suppose that we have a set of $r-1$ vanishing cycles $\alpha_1, \alpha_2, \ldots, \alpha_{r-1}$ on a disk with $r$ holes corresponding to some blowup sequence $$(0) \to (1,1) \to \cdots \to (n_1, \ldots, n_r).$$ Depending on the type of the next blowup we insert a new hole and introduce a new vanishing cycle $\alpha_r$ as follows.
If we have an interior blowup at the $i$th term, for $1 \leq i \leq r-1$, then the hole $H_{i+1}$ ``splits" into two holes, where the new hole $H_{i+2}$ is placed to the right of $H_{i+1}$ in the resulting disk $D_{r+1}$. We introduce a new vanishing cycle $\alpha_{r}$ which encloses the holes $H_1, H_2, \ldots, H_{i}$ and the new hole $H_{i+2}$ in $D_{r+1}$. We can view the introduction of $\alpha_r$ as a stabilization of the previous step.
On the other hand, if we have an exterior blowup, then we simply insert a new hole $H_{r+1}$ to the right, which is the last hole in the geometric order from the left in the resulting disk $D_{r+1}$ and the new vanishing cycle $\alpha_r$ is parallel to the boundary of $H_{r+1}$. Again, we can view the introduction of $\alpha_r$ as a stabilization of the previous step. \smallskip
Next, we describe the second part of our algorithm, which we call the surgery algorithm.
\subsubsection{The surgery algorithm} \label{sec: suralg} The surgery algorithm is based on the link $\textbf{L}=\bigcup_{i=1}^{k} L_i$, which is used to define $W_{p,q}(\textbf{n})$. The vanishing cycles in this subsection will be mutually disjoint and hence their order does not matter. So we can describe all the vanishing cycles as a set of curves on the disk $D_k$ with $k$ holes.
\begin{Def} \label{def: gamma} {\em (The $\gamma$-curves)} For each $1 \leq i \leq k$, let $\gamma_i$ be the convex curve on $D_k$ enclosing the holes $H_1, H_2, \ldots, H_{i}$. \end{Def}
Then the set of vanishing cycles in this part of the algorithm is $$\{ \underbrace{\gamma_1, \ldots,\gamma_1}_{b_1-n_1}, \underbrace{\gamma_2, \ldots, \gamma_2}_{b_2-n_2}, \ldots, \underbrace{\gamma_k, \ldots, \gamma_k}_{b_k -n_k}\},$$ where each $\gamma_i$ appears $b_i -n_i$ times in the set. In particular, if $b_i=n_i$, then $\gamma_i$ is not in the set of vanishing cycles.
\subsubsection{Total monodromy} The fiber of the planar Lefschetz fibration $W_{p,q}(\textbf{n}) \to D^2$ is the disk $D_k$ with $k$ holes, where $k$ is the length of the continued fraction $\frac{p}{p-q}=[b_1, b_2, \ldots, b_k]$. The set of vanishing cycles consists of the curves $\alpha_1, \alpha_2, \ldots, \alpha_{k-1}$ coming from the stabilization algorithm and $\gamma_1, \gamma_2, \ldots, \gamma_k$ (each with a multiplicity) coming from the surgery algorithm. Let $D(\alpha)$ denote the right-handed Dehn twist along a simple closed curve $\alpha$ on a surface. The total monodromy of the planar Lefschetz fibration $W_{p,q}(\textbf{n}) \to D^2$ is given as the following composition of Dehn twists along the vanishing cycles $$D(\alpha_1) D(\alpha_2) \cdots D(\alpha_{k-1}) D^{b_1 - n_1} (\gamma_1) D^{b_2 - n_2} (\gamma_2) \cdots D^{b_k - n_k} (\gamma_k).$$
In Lemma~\ref{lem: turn} below, we describe another planar Lefschetz fibration on $W_{p,q}(\textbf{n})$.
\begin{Lem} \label{lem: turn} Let $f: W_{p,q}(\textbf{n}) \to D^2$ be the planar Lefschetz fibration we constructed in \cite{bo}. The total space of the planar Lefschetz fibration obtained by reversing the order of the vanishing cycles of $f$, while taking their mirror images is diffeomorphic to $W_{p,q}(\textbf{n})$. \end{Lem}
\begin{proof} The result follows from the fact that such a transformation of the vanishing cycles can be achieved by rotating the absolute handlebody diagram inducing the planar Lefschetz fibration constructed in \cite{bo}. To see this, consider for example the handlebody diagram in \cite[Figure 7]{bo}, which is depicted on the left-hand side in Figure~\ref{fig: palf}.
\begin{figure}[ht]
\relabelbox \small {\epsfxsize=3.5in
\centerline{\epsfbox{palf.eps}}}
\relabel{a}{Rotate by $180^{\circ}$}
\endrelabelbox
\caption{By rotating the handlebody diagram $180^{\circ}$ in a direction {\em normal} to the page, we obtain the mirror images of the vanishing cycles in reverse order.}
\label{fig: palf}
\end{figure}
By rotating this handlebody diagram $180^{\circ}$ in a direction {\em normal} to the page, we get the handlebody diagram on the right-hand side whose total space is still the same. But this new handlebody diagram corresponds to a planar Lefschetz fibration, where the mirror images of the vanishing cycles appear in reverse order. Note that here we view the base disk $D_k$ ``horizontally" and the mirror image $\overline{\alpha}$ of a curve $\alpha \subset D_k$ is defined to be the reflection of $\alpha$ along the $x$-axis, once the holes in $D_k$ are aligned horizontally along the $x$-axis. This definition of mirror image, of course, coincides with the mirror image in a vertical $D_k$ by rotating the horizontal $D_k$ clockwise by $90^\circ$. \end{proof}
\subsection{An example} \label{ex: example} For $p=56$ and $q=17$, we have $\dfrac{56}{56-17} = [2,2,5,2,3]$. The $5$-tuple $\textbf{n}=(2,1,4,1,2)$ belongs to $\mathcal{Z}_{5}(\frac{56}{56-17})$ since we have the blowup sequence $$(0) \to (1,1) \to (1,2,1) \to (2,1,3,1) \to (2,1,4,1,2)$$ and hence we conclude that $W_{56,17}(2,1,4,1,2)$ is a Stein filling of the contact $3$-manifold $(L(56, 17), \xi_{can})$. The fiber of the planar Lefschetz fibration $$W_{56,17}(2,1,4,1,2) \to D^2$$ is the disk $D_5$ with $5$ holes, and to obtain the vanishing cycles $\alpha_1, \alpha_2, \alpha_3, \alpha_4$ coming from the stabilization algorithm, we start from the step $(1,2,1)$ which is already shown at the bottom right in Figure~\ref{fig: stabil1} and apply the stabilization algorithm to the interior blowups $(1,2,1) \to (2,1,3,1) \to (2,1,4,1,2)$ as depicted in Figure~\ref{fig: ex3}.
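The arithmetic in this example can be checked by machine. The following self-contained Python sketch (all helper names are ours, not from the paper) verifies the continued fraction, traces the blowup sequence, and records the multiplicities $b_i - n_i$ used in the surgery algorithm:

```python
from fractions import Fraction

# Hirzebruch-Jung expansion via iterated ceilings (helper name is ours)
def hj(p, q):
    bs, x = [], Fraction(p, q)
    while True:
        b = -(-x.numerator // x.denominator)  # ceiling of x
        bs.append(b)
        x = b - x
        if x == 0:
            return bs
        x = 1 / x

b = hj(56, 39)
assert b == [2, 2, 5, 2, 3]  # 56/(56-17) = 56/39 = [2,2,5,2,3]

# Follow the blowup sequence (0) -> (1,1) -> (1,2,1) -> (2,1,3,1) -> (2,1,4,1,2):
n = (1, 1)
n = n[:-1] + (n[-1] + 1, 1)                  # exterior blowup: (1,2,1)
n = (n[0] + 1, 1, n[1] + 1) + n[2:]          # interior blowup at i=1: (2,1,3,1)
n = n[:2] + (n[2] + 1, 1, n[3] + 1) + n[4:]  # interior blowup at i=3
assert n == (2, 1, 4, 1, 2)

# [2,1,4,1,2] = 0, so n lies in Z_5(56/39)
x = Fraction(n[-1])
for a in reversed(n[:-1]):
    x = a - 1 / x
assert x == 0

# Multiplicities b_i - n_i for the surgery algorithm
assert [bi - ni for bi, ni in zip(b, n)] == [0, 1, 1, 1, 1]
```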
\begin{figure}[ht]
\relabelbox \small {\epsfxsize=4.5in
\centerline{\epsfbox{ex3.eps}}}
\relabel{1}{$5$} \relabel{2}{$4$} \relabel{3}{$1$} \relabel{d}{$1$} \relabel{4}{$2$} \relabel{e}{$2$} \relabel{5}{$3$} \relabel{m}{$3$} \relabel{6}{$1$} \relabel{7}{$2$} \relabel{8}{$3$} \relabel{9}{$4$}
\relabel{a}{$(2,1,4,1,2)$} \relabel{b}{$(2,1,3,1)$} \relabel{c}{$(1,2,1)$}
\relabel{f}{$\alpha_2$} \relabel{g}{$\alpha_1$} \relabel{k}{$\alpha_1$} \relabel{y}{$\alpha_1$} \relabel{p}{$\alpha_2$} \relabel{h}{$\alpha_2$} \relabel{r}{$\alpha_4$} \relabel{j}{$\alpha_3$} \relabel{q}{$\alpha_3$}
\endrelabelbox
\caption{The vanishing cycles $\alpha_1, \alpha_2, \alpha_3, \alpha_4$ coming from the stabilization algorithm. }
\label{fig: ex3}
\end{figure}
Note that $b_1 - n_1=0$, whereas $b_2 - n_2=b_3 - n_3=b_4 - n_4=b_5-n_5=1$, which implies that the vanishing cycles coming from the surgery algorithm in this case are $\gamma_2, \gamma_3, \gamma_4$ and $\gamma_5$, as shown in Figure~\ref{fig: ex4}.
\begin{figure}[ht]
\relabelbox \small {\epsfxsize=3.5in
\centerline{\epsfbox{ex4.eps}}}
\relabel{1}{$5$} \relabel{d}{$1$} \relabel{e}{$2$} \relabel{m}{$3$} \relabel{9}{$4$}
\relabel{a}{$D_5$} \relabel{y}{$\gamma_2$} \relabel{p}{$\gamma_3$} \relabel{r}{$\gamma_5$} \relabel{s}{$\gamma_4$}
\endrelabelbox
\caption{The vanishing cycles $\gamma_2, \gamma_3, \gamma_4, \gamma_5$ coming from the surgery algorithm. }
\label{fig: ex4}
\end{figure}
Consequently, the total monodromy is given as follows: $$D(\alpha_1) D(\alpha_2) D(\alpha_3) D(\alpha_4) D (\gamma_2) D (\gamma_3) D (\gamma_4) D (\gamma_5).$$
\begin{Rem}\label{rem: monod} By Lemma~\ref{lem: turn}, there is a planar Lefschetz fibration $W_{56,17}(2,1,4,1,2) \to D^2$ whose monodromy factorization is given by
$$ D (\gamma_5) D (\gamma_4) D (\gamma_3) D (\gamma_2) D(\overline{\alpha}_4) D(\overline{\alpha}_3) D(\overline{\alpha}_2) D(\overline{\alpha}_1).$$ \end{Rem}
\section{Unbraided wiring diagrams} \label{palf}
\subsection{The blowup algorithm} \label{sec: blowup}
In this subsection, we describe an algorithm to construct an unbraided wiring diagram corresponding to a blowup sequence starting from the initial blowup $(0) \to (1,1)$. The wiring diagram corresponding to $(0)$ is a single wire $w_1$ without any marked points, and the wiring diagram corresponding
to $(1,1)$ consists of two parallel wires $\{w_1, w_2\}$ so that $w_1$ is on top without any marked points, and $w_2$ has a single marked point $x_1$. The next step in the algorithm depends on whether we have an interior or exterior blowup following the initial blowup $(0) \to (1,1)$.
If we have an interior blowup at the first term $(1,1) \to (2,1,2)$, we introduce a new wire $w_3$, which is initially below $w_2$ on the right-hand side of the diagram; as it moves to the left, it goes through the marked point $x_1$ on $w_2$, but otherwise remains parallel to $w_2$, and then intersects $w_1$ at a new marked point $x_2$, which is to the left of $x_1$. This diagram with three wires $\{w_1, w_2, w_3\}$ corresponds to $(2,1,2)$, as depicted in Figure~\ref{fig: initial1}.
\begin{figure}[ht] \relabelbox \small {\epsfxsize=4.5in
\centerline{\epsfbox{initial1.eps}}}
\relabel{a}{$(0)$} \relabel{b}{$(1,1)$} \relabel{c}{$(2,1,2)$} \relabel{p}{$x_1$} \relabel{r}{$x_2$} \relabel{q}{$x_1$} \relabel{0}{$w_1$} \relabel{3}{$w_1$} \relabel{1}{$w_1$} \relabel{4}{$w_2$} \relabel{2}{$w_2$} \relabel{5}{$w_3$} \endrelabelbox
\caption{Wiring diagrams corresponding to the blowup sequence $(0) \to(1,1) \to (2,1,2)$.} \label{fig: initial1} \end{figure}
On the other hand, if we have an exterior blowup $(1,1) \to (1,2,1)$, we insert into the diagram a new wire $w_3$, which is right below $w_2$ and parallel to it. We place a marked point $x_2$ on $w_3$ so that $x_2$ is to the left of $x_1$. This diagram with three wires $\{w_1, w_2, w_3\}$ corresponds to $(1,2,1)$, as depicted in Figure~\ref{fig: initial2}.
\begin{figure}[ht]
\relabelbox \small {\epsfxsize=4.5in
\centerline{\epsfbox{initial2.eps}}}
\relabel{a}{$(0)$} \relabel{b}{$(1,1)$} \relabel{c}{$(1,2,1)$} \relabel{p}{$x_1$} \relabel{r}{$x_1$} \relabel{q}{$x_2$} \relabel{0}{$w_1$} \relabel{3}{$w_1$} \relabel{1}{$w_1$} \relabel{4}{$w_2$} \relabel{2}{$w_2$} \relabel{5}{$w_3$}
\endrelabelbox \caption{Wiring diagrams corresponding to the blowup sequence $(0) \to(1,1) \to (1,2,1)$.} \label{fig: initial2} \end{figure}
Now suppose that we have an unbraided wiring diagram $\mathcal{W}$ consisting of $r$ wires $\{w_1, w_2$, $\ldots, w_{r}\}$ corresponding to some blowup sequence starting from the initial blowup $(0) \to (1,1)$ and ending with some $r$-tuple of positive integers. We would like to emphasize that the indices of the wires in the set $\mathcal{W}$ above indicate the order in which the wires are introduced into the diagram. Depending on the type of the next blowup we insert a new wire in $\mathcal{W}$ and adjust the diagram accordingly as follows.
Suppose that we have an interior blowup at the $i$th term, for some $1 \leq i \leq r-1$. Let $w_j \in \mathcal{W}$ be the $(i+1)$st wire with respect to the {\em geometric ordering} of the wires on the right-hand side of the diagram, and let $\mathcal{W}_i$ denote the subset of $\mathcal{W}$ consisting of all the wires which appear before $w_j$ in this ordering. In other words, $\mathcal{W}_i$ is the set of the top $i$ wires in the geometric ordering of the wires on the right-hand side of the diagram. Now we introduce a new wire, named $w_{r+1}$, into the diagram, which is initially right below $w_j$ on the right-hand side of the diagram; as it moves to the left, it goes through all the marked points on $w_j$ but otherwise remains parallel to $w_j$. Then we insert a new marked point $x_r$ on $w_{r+1}$, which is the simultaneous intersection of $w_{r+1}$ and all the wires in $\mathcal{W}_i$. We place the marked point $x_r$ to the left of $x_{r-1}$. For this to work, we need to know that the set $\mathcal{W}_i \cup \{w_{r+1}\}$ of wires is geometrically consecutive on the left-hand side, which we verify in Lemma~\ref{lem: consec} below, where we refer to this step in the algorithm as the {\em last twist}.
On the other hand, if we have an exterior blowup, we insert a new wire $w_{r+1}$ below all the wires in $\mathcal{W}$ with no intersection points with the other wires, and place a single marked point $x_r$ on $w_{r+1}$, which is to the left of $x_{r-1}$.
We call this procedure the blowup algorithm for wiring diagrams. Note that in the resulting wiring diagram, the wires are indexed in the order they are introduced into the diagram, but their geometric ordering on the right-hand side (or the left-hand side) of the diagram, as viewed on the page, might be different from the index ordering. Moreover, by our algorithm, $w_1$ will always be at the top on the right-hand side of the diagram.
\begin{Lem} \label{lem: consec} If $\mathcal{W}$ is an unbraided wiring diagram consisting of wires $\{w_1, w_2, \ldots, w_{r}\}$, which is obtained by the blowup algorithm with respect to some blowup sequence starting from the initial blowup $(0) \to (1,1)$, then any set of wires including $w_1$, which is consecutive with respect to the geometric ordering on the right-hand side of the diagram, is also geometrically consecutive (perhaps with a different geometric ordering) on the left-hand side of the diagram. Moreover, if any wire other than $w_1$ carries an even (resp. odd) number of marked points, then on the left-hand side it is above (resp. below) all the wires which appear before it in the geometric ordering of the wires on the right-hand side of the diagram.
\end{Lem}
\begin{proof} We prove the lemma by induction on the number of wires. The two wiring diagrams we described above corresponding to the blowup sequences $(0) \to (1,1) \to (2,1,2) $ and $(0) \to (1,1) \to (1,2,1)$, respectively, can be taken to be the initial step of our induction argument. The properties stated in Lemma~\ref{lem: consec} hold for these wiring diagrams.
Suppose that both properties stated in Lemma~\ref{lem: consec} hold when there are up to $r \geq 3$ wires in any unbraided wiring diagram obtained as a result of the blowup algorithm with respect to some blowup sequence starting from the initial blowup $(0) \to (1,1)$. We will prove that these properties continue to hold when a new wire is inserted into the diagram corresponding to a new blowup. If the new wire inserted corresponds to an exterior blowup, it is clear that both properties stated in Lemma~\ref{lem: consec} continue to hold in the new diagram with $r+1$ wires. This is because in this case, the new wire will be inserted at the bottom of the diagram with a single marked point on it and without any intersections with the other wires.
Suppose that a new wire $w_{r+1}$ is inserted into $\mathcal{W}$ with respect to an interior blowup at the $i$th term, for some $1 \leq i \leq r-1$. Let $\mathcal{W}_{s}$ be the subset of $\mathcal{W}$ consisting of the top $s$ wires in the {\em geometric ordering} of the wires on the right-hand side of the diagram. Note that $\mathcal{W}_{i+1} = \mathcal{W}_i \cup \{w_j\}$, since by definition, $w_j \in \mathcal{W}$ is the $(i+1)$st wire with respect to the geometric ordering of the wires on the right-hand side of the diagram.
Assume that $w_j$ has an odd number of marked points. By the induction hypotheses, before we insert $w_{r+1}$, the wires in the set $\mathcal{W}_{i+1}$ are geometrically consecutive (perhaps with a different geometric ordering) on the left-hand side, while $w_j$ is at the bottom of these geometrically consecutive wires. The new wire $w_{r+1}$ will be initially right below the wire $w_j$ on the right-hand side of the diagram and $w_{r+1}$ will go through all the marked points on $w_j$, and otherwise it will remain parallel to $w_j$, before the last twist in the algorithm. But since $w_j$ has an odd number of marked points, and $w_{r+1}$ is initially right below $w_j$, the wire $w_{r+1}$ will be right above $w_j$ on the left-hand side before the last twist. Therefore, before the last twist, the wires in the set $\mathcal{W}_{i+1} \cup \{w_{r+1}\}$ will be geometrically consecutive on the left-hand side, and moreover $w_{r+1}, w_j$ will be the bottom two wires in that order. Finally, when we twist once all the wires in the set $\mathcal{W}_i \cup \{w_{r+1}\} $ (to create a simultaneous intersection point of these $i+1$ wires) as part of the blowup algorithm, the wires in the set $\mathcal{W}_{i+1} \cup \{w_{r+1}\}$ will remain geometrically consecutive on the left-hand side, where $w_{r+1}$ will appear at the top, and $ w_j$ will appear at the bottom of this consecutive set of wires.
Assume that $w_j$ has an even number of marked points. By the induction hypotheses, before we insert $w_{r+1}$, the wires in the set $\mathcal{W}_{i+1}$ are geometrically consecutive (perhaps with a different geometric ordering) on the left-hand side, while $w_j$ is at the top of these geometrically consecutive wires. The new wire $w_{r+1}$ will be initially right below the wire $w_j$ on the right-hand side of the diagram and $w_{r+1}$ will go through all the marked points on $w_j$, and otherwise it will remain parallel to $w_j$, before the last twist in the algorithm. But since $w_j$ has an even number of marked points, and $w_{r+1}$ is initially right below $w_j$, the wire $w_{r+1}$ will be right below $w_j$ on the left-hand side before the last twist. Therefore, before the last twist, the wires in the set $\mathcal{W}_{i+1} \cup \{w_{r+1}\}$ will be geometrically consecutive on the left-hand side and, moreover, $w_j, w_{r+1}$ will be the top two wires in that order. Finally, when we twist once all the wires in the set $\mathcal{W}_i \cup \{w_{r+1}\} $ (to create a simultaneous intersection point of these $i+1$ wires) as part of the blowup algorithm, the wires in the set $\mathcal{W}_{i+1} \cup \{w_{r+1}\}$ will remain geometrically consecutive on the left-hand side, where $w_j$ will appear at the top, and $w_{r+1}$ will appear at the bottom of this consecutive set of wires.
The discussion above proves that, after we insert $w_{r+1}$, any set of wires in $\mathcal{W} \cup \{w_{r+1}\}$ including $w_1$, which is consecutive with respect to the geometric ordering on the right-hand side of the diagram, is also geometrically consecutive (perhaps with a different geometric ordering) on the left-hand side of the diagram.
Moreover, if $w_j$ has an odd (resp. even) number of marked points, then $w_{r+1}$ will have an even (resp. odd) number of marked points by the blowup algorithm, and it will be above (resp. below) all the wires in $\mathcal{W}_{i+1}$ on the left-hand side of the diagram. The upshot is that both properties stated in Lemma~\ref{lem: consec} hold true for the unbraided wiring diagram $\mathcal{W} \cup \{w_{r+1}\}$.
\end{proof}
\subsection{An example} Consider the blowup sequence $$(0) \to (1,1) \to (1,2,1) \to (2,1,3,1) \to (2,1,4,1,2).$$ In Figure~\ref{fig: example} below we depict the diagrams corresponding to $$(1,2,1) \to (2,1,3,1) \to (2,1,4,1,2)$$ starting from the diagram of $(1,2,1)$ already depicted in Figure~\ref{fig: initial2}.
\begin{figure}[ht] \relabelbox \small {\epsfxsize=4.5in
\centerline{\epsfbox{example.eps}}}
\relabel{a}{$(1,2,1)$} \relabel{b}{$(2,1,3,1)$} \relabel{c}{$(2,1,4,1,2)$} \relabel{p}{$x_1$} \relabel{z}{$x_1$} \relabel{t}{$x_1$} \relabel{r}{$x_3$} \relabel{w}{$x_3$} \relabel{y}{$x_4$} \relabel{q}{$x_2$} \relabel{h}{$x_2$} \relabel{s}{$x_2$} \relabel{0}{$w_1$} \relabel{3}{$w_1$} \relabel{11}{$w_1$} \relabel{2}{$w_3$} \relabel{6}{$w_3$} \relabel{8}{$w_3$} \relabel{1}{$w_2$} \relabel{4}{$w_2$} \relabel{10}{$w_2$} \relabel{5}{$w_4$} \relabel{9}{$w_4$} \relabel{7}{$w_5$} \endrelabelbox
\caption{Wiring diagrams corresponding to the blowup sequence $(1,2,1) \to (2,1,3,1) \to (2,1,4,1,2)$.}\label{fig: example} \end{figure}
\subsection{The twisting algorithm} \label{sec: twisting}
Suppose that $\mathcal{W}$ is an unbraided wiring diagram consisting of wires $\{w_1, w_2, \ldots, w_{k}\}$, which is obtained by the blowup algorithm with respect to some blowup sequence, starting from the initial blowup $(0) \to (1,1)$ and ending with some $k$-tuple of positive integers. Let $\mathcal{W}_{s}$ be the subset of $\mathcal{W}$ consisting of the top $s$ wires in the {\em geometric ordering} of the wires on the right-hand side of the diagram, as in Section~\ref{sec: blowup}. Based on any $k$-tuple $\textbf{m}=(m_1,\ldots, m_{k})$ of nonnegative integers, we describe a procedure called the {\em twisting algorithm} to extend the unbraided wiring diagram $\mathcal{W}$ to another unbraided wiring diagram $\mathcal{W}(\textbf{m})$ with the same number of wires but with more marked points, obtained by extra twists inserted on the left.
If $m_1=0$, then we do not modify $\mathcal{W}$ and move on to the next step. If $m_1> 0$, then we simply add $m_1$ extra marked points $\underbrace{y_1, y_1, \ldots, y_1}_{m_1}$ on $w_1$ to the left of $x_{k-1}$. If $m_2=0$, then we do not modify the diagram any further and move on to the next step. If $m_2 > 0$, then by Lemma~\ref{lem: consec}, we know that the wires in $\mathcal{W}_2$ are geometrically consecutive on the left-hand side of the diagram $\mathcal{W}$. We extend $\mathcal{W}$ by twisting $m_2$-times the wires in $\mathcal{W}_2$, creating consecutive simultaneous intersection points $\underbrace{y_{2}, y_{2}, \ldots, y_{2}}_{m_2}$ to the left of the last, if any, $y_{1}$. If $m_3=0$, then we do not modify the diagram any further and move on to the next step. Now suppose that $m_3 > 0$. Since the wires in $\mathcal{W}_3$ are geometrically consecutive on the left-hand side of the diagram $\mathcal{W}$ by Lemma~\ref{lem: consec}, these wires will remain geometrically consecutive after any additional twists we put into the diagram corresponding to $m_2$. We extend the diagram further by twisting $m_3$-times the wires in $\mathcal{W}_3$, creating simultaneous intersection points $\underbrace{y_{3}, y_{3}, \ldots, y_{3}}_{m_3}$ to the left of the last, if any, $y_2$. By iterating this procedure, we extend $\mathcal{W}$ to $\mathcal{W}(\textbf{m})$ with additional marked points corresponding to \textbf{m}.
\begin{Rem} Here, we think of $m_i$ as the ``multiplicity" of the point $y_i$. If $m_i=0$, then $y_i$ does not appear in the diagram, and if $m_i > 1$, then $y_i$ is repeated $m_i$-times. To avoid cumbersome notation, we do not put an extra index to distinguish between different $y_i$ type points. \end{Rem}
\subsection{An example} Here we give an example where we extend the wiring diagram $\mathcal{W}$ corresponding to the blowup sequence $$(0) \to (1,1) \to (1,2,1) \to (2,1,3,1) \to (2,1,4,1,2)$$ depicted in Figure~\ref{fig: example} to $\mathcal{W}(\textbf{m})$ by applying the twisting algorithm based on $\textbf{m}=(0,1,1,1,1)$. Note that inside the dotted square in Figure~\ref{fig: example2}, there is a copy of $\mathcal{W}$ from Figure~\ref{fig: example}.
\begin{figure}[ht] \relabelbox \small {\epsfxsize=4.5in
\centerline{\epsfbox{example2.eps}}}
\relabel{a}{$\mathcal{W}$} \relabel{p}{$y_{3}$} \relabel{z}{$x_1$} \relabel{r}{$y_{2}$} \relabel{w}{$x_3$} \relabel{y}{$x_4$} \relabel{q}{$y_{5}$} \relabel{s}{$y_{4}$} \relabel{h}{$x_2$} \relabel{11}{$w_1$} \relabel{8}{$w_3$} \relabel{10}{$w_2$} \relabel{9}{$w_4$} \relabel{7}{$w_5$} \endrelabelbox
\caption{Extending $\mathcal{W}$ to $\mathcal{W} ((0,1,1,1,1))$ by applying the twisting algorithm based on $\textbf{m}=(0,1,1,1,1)$.}\label{fig: example2} \end{figure}
In this example $m_1=0$ and $m_2=1$ and the wires $\mathcal{W}_2=\{w_1, w_2\}$ are geometrically consecutive on the left-hand side of $\mathcal{W}$. Now we twist them together once to obtain the marked point $y_{2}$, which is to the left of $x_4$. Since $m_3=1$, next we twist the wires in $\mathcal{W}_3=\{w_1, w_2,w_4\}$ (which are geometrically consecutive) together once to obtain the marked point $y_{3}$, which is to the left of $y_{2}$. Since $m_4=1$, we twist the wires in $\mathcal{W}_4=\{w_1, w_2, w_4, w_3\}$ (which are geometrically consecutive) together once to obtain the marked point $y_{4}$, which is to the left of $y_{3}$. Finally, since $m_5=1$, we twist all the wires in $\mathcal{W}_5 = \mathcal{W}$ together once to obtain the marked point $y_{5}$, which is to the left of $y_{4}$, as illustrated in Figure~\ref{fig: example2}.
\begin{Rem} \label{rem: defns} We will also speak about $\delta(y_s)$, $\D(y_s)$ and $V(y_s)$ for each marked point $y_s$ in the rest of the paper, as described in Definitions~\ref{def: convex}, \ref{def: twist}, and~\ref{def: cycle}. \end{Rem}
\section{From vanishing cycles to unbraided wiring diagrams}
We recall the main theorem from the introduction; to be more precise, we have replaced $W$ with $W_{p,q}(\textbf{n})$ below. \smallskip
\noindent {\bf Theorem~\ref{thm: main}.} {\em There is an algorithm to draw an explicit unbraided wiring diagram whose associated planar Lefschetz fibration obtained by the method of Plamenevskaya and Starkston \cite{ps} is equivalent to the planar Lefschetz fibration $W_{p,q}(\textbf{n}) \to D^2$ constructed by the authors in \cite{bo}.} \smallskip
Before we give the proof of Theorem~\ref{thm: main} below, we illustrate the statement and its proof on an example. First we introduce some notation that will be used in the following discussion. The disk $D_k$ with $k$ holes will be viewed in two different but equivalent ways as follows: (i) the holes are aligned horizontally in $D_k$ and enumerated from left to right or (ii) the holes are aligned vertically in $D_k$ and enumerated from top to bottom. Here we identify the ``horizontal" $D_k$ in (i) with the ``vertical" $D_k$ in (ii) by rotating the ``horizontal" $D_k$ clockwise by $90^\circ$. The reason why we consider these two embeddings of a disk with holes is that the vanishing cycles in \cite{bo} are described on a horizontal $D_k$, while the vanishing cycles in \cite{ps} are described on a vertical $D_k$. Here we compare them on a vertical $D_k$ via the identification given above. When we view $D_k$ vertically, the mirror image $\overline{\alpha}$ of a curve $\alpha \subset D_k$ is defined to be the reflection of $\alpha$ along the $y$-axis, once the holes in $D_k$ are aligned vertically along the $y$-axis.
\subsection{An example} \label{subsec: exa} In Section~\ref{ex: example}, we constructed a planar Lefschetz fibration $$W_{(56, 17)}((2,1,4,1,2)) \to D^2$$ whose fiber is the disk $D_5$ with $5$ holes and whose vanishing cycles are the curves $$\alpha_1, \alpha_2, \alpha_3, \alpha_4, \gamma_2, \gamma_3, \gamma_4, \gamma_5$$ in $D_5$, which are depicted in Figures~\ref{fig: ex3} and~\ref{fig: ex4}. We claim that the planar Lefschetz fibration obtained by using the method of Plamenevskaya and Starkston associated to the unbraided wiring diagram $\mathcal{W} ((0,1,1,1,1))$ in Figure~\ref{fig: example2} has exactly the same set of vanishing cycles (viewed in a vertical $D_5$), except that we have to take ``mirror images" of all the curves and {\em reverse the order} of the vanishing cycles in the total monodromy. In other words, first we rotate the disks in Figures~\ref{fig: ex3} and~\ref{fig: ex4} clockwise by $90^\circ$ and then take the mirror images of the curves. This modification of the vanishing cycles is not an issue by Lemma~\ref{lem: turn}. Note that the mirror image of a $\gamma$-curve is equal to itself, and hence we only need to take the mirror images of the $\alpha$-curves. As a matter of fact, we claim that $V(y_{j})=\gamma_j$, for $ 2 \leq j \leq 5$ and $V(x_i) = \overline{\alpha}_i$, for $ 1 \leq i \leq 4$ (see Remark~\ref{rem: defns} for notation). To verify our claim, we apply the method of Plamenevskaya and Starkston (see Section~\ref{sec: bwd}), to describe a set of ordered vanishing cycles associated to the marked points in Figure~\ref{fig: example2}, where we depicted the convex curves assigned to the marked points in Figure~\ref{fig: ex9}.
\begin{figure}[ht] \relabelbox \small {\epsfxsize=5.5in
\centerline{\epsfbox{ex9.eps}}}
\relabel{1}{$\delta(y_2)$} \relabel{2}{$\delta(y_3)$} \relabel{3}{$\delta(y_5)$} \relabel{4}{$\delta(x_3)$} \relabel{5}{$\delta(x_1)$} \relabel{6}{$\delta(x_2)$} \relabel{7}{$\delta(x_4)$} \relabel{8}{$\delta(y_4)$}
\endrelabelbox
\caption{Convex curves in $D_5$ assigned to the marked points in Figure~\ref{fig: example2}.} \label{fig: ex9} \end{figure}
Note that $$V(y_{5})= \D(x_1) \circ \cdots \circ \D(x_4)\circ \D(y_2) \circ \D(y_3) \circ \D(y_4) (\delta(y_5)) = \delta(y_5) = \gamma_5,$$ $$ V(y_{4})= \D(x_1) \circ \cdots \circ \D(x_4)\circ \D(y_2) \circ \D(y_3) (\delta(y_4)) = \gamma_4, \; \mbox{as illustrated in Figure~\ref{fig: ex5c}},$$ $$ V(y_{3})= \D(x_1) \circ \cdots \circ \D(x_4)\circ \D(y_2) (\delta(y_3)) = \gamma_3, \; \mbox{as illustrated in Figure~\ref{fig: ex5}, and}$$ $$ V(y_{2})= \D(x_1) \circ \cdots \circ \D(x_4) (\delta(y_2)) = \gamma_2, \; \mbox{as illustrated in Figure~\ref{fig: ex6}}. $$
\begin{figure}[ht] \relabelbox \small {\epsfxsize=5in
\centerline{\epsfbox{ex5c.eps}}}
\relabel{1}{$\delta(y_4)$} \relabel{2}{$\gamma_4$}
\endrelabelbox
\caption{Starting from $\delta(y_4)$, we apply a counterclockwise half-twist on the subdisk enclosed by the dotted curve, at each step from left to right.}\label{fig: ex5c} \end{figure}
\begin{figure}[ht] \relabelbox \small {\epsfxsize=5in
\centerline{\epsfbox{ex5.eps}}}
\relabel{1}{$\delta(y_3)$} \relabel{2}{$\gamma_3$}
\endrelabelbox
\caption{Starting from $\delta(y_3)$, we apply a counterclockwise half-twist on the subdisk enclosed by the dotted curve, at each step from left to right.}\label{fig: ex5} \end{figure}
\begin{figure}[ht] \relabelbox \small {\epsfxsize=5in
\centerline{\epsfbox{ex6.eps}}}
\relabel{1}{$\delta(y_2)$} \relabel{2}{$\gamma_2$}
\endrelabelbox
\caption{Starting from $\delta(y_2)$, we apply a counterclockwise half-twist on the subdisk enclosed by the dotted curve, at each step from left to right.}\label{fig: ex6} \end{figure}
\begin{Rem} In Figure~\ref{fig: ex5c}, we have not included $\delta(y_2)$ and $\delta(y_3)$ as dotted curves since $\D(y_2) \circ \D(y_3)$ would not have any effect on $\delta(y_4)$. Similarly, we have not included $\delta(y_2)$ as a dotted curve in Figure~\ref{fig: ex5} since $\D(y_2)$ would not have any effect on $\delta(y_3)$. We will generalize this observation as Lemma~\ref{lem: typex} in Section~\ref{subsec: gamma}.
\end{Rem}
Moreover, $V(x_4)= \overline{\alpha}_4$ by comparing Figure~\ref{fig: ex3} and Figure~\ref{fig: ex7}; $V(x_3)= \overline{\alpha}_3$ by comparing Figure~\ref{fig: ex3} and Figure~\ref{fig: ex8}, and finally $V(x_2)= \delta(x_2) = \overline{\alpha}_2 = \alpha_2$ and $V(x_1)= \delta(x_1)= \overline{\alpha}_1 =\alpha_1$, by comparing Figure~\ref{fig: ex3} and Figure~\ref{fig: ex9b}. Note that the total monodromy of the planar Lefschetz fibration is $$ D (\gamma_5) D (\gamma_4) D (\gamma_3) D (\gamma_2) D(\overline{\alpha}_4) D(\overline{\alpha}_3) D(\overline{\alpha}_2) D(\overline{\alpha}_1),$$ which coincides with the monodromy in Remark~\ref{rem: monod}. \smallskip
Now we are ready to give a proof of Theorem~\ref{thm: main}.
\subsection{Proof of the main result} Suppose that $p > q \geq 1$ are coprime integers and let
$$\dfrac{p}{p-q}=[b_1, b_2, \ldots, b_k]$$ be the Hirzebruch-Jung continued fraction, where $b_i \geq 2$ for $1 \leq i \leq k$. We set $$\textbf{b}= (b_1, b_2, \ldots, b_k).$$
\begin{Def} \label{def: wd} {\em (The wiring diagrams $\mathcal{W}_\textbf{n}$, and $\mathcal{W}_\textbf{n}({\textbf{m}})$)} For any $$\textbf{n}= (n_1, n_2, \ldots, n_k) \in \mathcal{Z}_{k}(\textstyle{\frac{p}{p-q}})$$ let $(0) \to (1,1) \to \cdots \to \textbf{n}$ be a blowup sequence, and let $\textbf{m}= \textbf{b}-\textbf{n}$. We denote by $\mathcal{W}_\textbf{n}$, the unbraided wiring diagram with $k$ wires $\{w_1, w_2, \ldots, w_{k}\}$ and $k-1$ marked points $x_{k-1}, x_{k-2}, \ldots, x_1$ (reading from left to right) constructed by applying the blowup algorithm in Section~\ref{sec: blowup} to the given blowup sequence. We denote by $\mathcal{W}_\textbf{n}({\textbf{m}})$ the extension of $\mathcal{W}_\textbf{n}$ to the left obtained by applying the twisting algorithm in Section~\ref{sec: twisting} based on the $k$-tuple $\textbf{m}$. Note that $\mathcal{W}_\textbf{n}({\textbf{m}})$ is obtained from $\mathcal{W}_\textbf{n}$ by inserting additional marked points $$ \underbrace{y_{k}, \ldots, y_{k}}_{m_k}, \underbrace{y_{k-1}, \ldots, y_{k-1}}_{m_{k-1}}, \dots, \underbrace{y_{1}, \ldots, y_{1}}_{m_1}, $$ reading from left to right. \end{Def}
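For concreteness, let us record this data for the running example of Section~\ref{subsec: exa}: for $(p,q)=(56,17)$ we have $p-q=39$ and $$\dfrac{p}{p-q}=\dfrac{56}{39}=[2,2,5,2,3],$$ so that $\textbf{b}=(2,2,5,2,3)$. Taking $\textbf{n}=(2,1,4,1,2)$ yields $$\textbf{m}=\textbf{b}-\textbf{n}=(0,1,1,1,1),$$ which is precisely the tuple used in the twisting algorithm of Figure~\ref{fig: example2}.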
\smallskip
\noindent {\bf Vanishing cycles associated to $\mathcal{W}_\textbf{n}({\textbf{m}})$:} Now we can apply the method of Plamenevskaya and Starkston (see Section~\ref{sec: bwd}) to the wiring diagram $\mathcal{W}_\textbf{n}({\textbf{m}})$, to obtain the associated planar Lefschetz fibration by describing a set of ordered vanishing cycles on the disk $D_k$ with $k$ holes. According to their algorithm, there is a vanishing cycle associated to each marked point in $\mathcal{W}_\textbf{n}({\textbf{m}})$. So, for each $1 \leq t \leq k-1$, there is a vanishing cycle $V(x_t)$ associated to the marked point $x_t$ in $\mathcal{W}_\textbf{n}({\textbf{m}})$, and for each $1 \leq s \leq k$, there is a vanishing cycle $V(y_s)$ associated to the marked point $y_s$ in $\mathcal{W}_\textbf{n}({\textbf{m}})$. Note that there are $k-1$ vanishing cycles associated to type $x$ marked points, and since each $y_t$ is repeated $m_t = b_t - n_t$ times, there are $$m_1+m_2+\cdots+m_k = (b_1-n_1)+(b_2-n_2)+ \cdots +(b_k-n_k)$$ vanishing cycles in total associated to type $y$ marked points. \smallskip
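For instance, in the example of Section~\ref{subsec: exa} we have $k=5$ and $\textbf{m}=(0,1,1,1,1)$, so the wiring diagram of Figure~\ref{fig: example2} carries $$\underbrace{(k-1)}_{x\text{-points}} + \underbrace{m_1+\cdots+m_5}_{y\text{-points}} = 4+4 = 8$$ vanishing cycles in total, matching the eight curves $\alpha_1, \ldots, \alpha_4, \gamma_2, \ldots, \gamma_5$ listed there (the curve $\gamma_1$ is absent since $m_1=0$). \smallskip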
\noindent {\bf Planar Lefschetz fibration $W_{p,q}(\textbf{n}) \to D^2$:} Let $W_{p,q}(\textbf{n})$ be the minimal symplectic filling of $(L(p,q), \xi_{can})$ as in Section~\ref{sec: lens}. As we described in Section~\ref{sec: planar}, there is a planar Lefschetz fibration $W_{p,q}(\textbf{n}) \to D^2$ with fiber $D_k$, which is obtained by applying the stabilization algorithm and the surgery algorithm. Note that there are $k-1$ vanishing cycles $\alpha_1, \alpha_2, \ldots, \alpha_{k-1}$ coming from the stabilization algorithm and $$(b_1-n_1)+(b_2-n_2)+ \cdots +(b_k-n_k)$$ vanishing cycles $$\{ \underbrace{\gamma_1, \ldots,\gamma_1}_{b_1-n_1}, \underbrace{\gamma_2, \ldots, \gamma_2}_{b_2-n_2}, \ldots, \underbrace{\gamma_k, \ldots \gamma_k}_{b_k -n_k}\}$$ coming from the surgery algorithm. \smallskip
Theorem~\ref{thm: main} is in fact equivalent to Proposition~\ref{prop: mirror} coupled with Lemma~\ref{lem: turn}.
\begin{Prop} \label{prop: mirror} Let $\mathcal{W}_\textbf{n}({\textbf{m}})$ be an unbraided wiring diagram with $k$ wires as described in Definition~\ref{def: wd}. Then
\begin{enumerate}[\rm(a)]
\item for any $1 \leq s \leq k$, the vanishing cycle $V(y_s)$ associated to the marked point $y_s \in \mathcal{W}_\textbf{n}({\textbf{m}})$ is isotopic to $\gamma_s$ in $D_k$, and
\item for any $1 \leq t \leq k-1$, the vanishing cycle $V(x_t)$ associated to the marked point $x_{t} \in \mathcal{W}_\textbf{n}({\textbf{m}})$ is isotopic to $\overline{\alpha}_{t}$ (the mirror image of $\alpha_t$) in $D_k$.
\end{enumerate}\end{Prop}
\smallskip
In the rest of the article we will provide a proof of Proposition~\ref{prop: mirror}. In Section~\ref{subsec: gamma}, we will first formulate Proposition~\ref{prop: lobe} (a necessarily very technical result), and then Lemma~\ref{lem: lobetomirror} will show that it implies Proposition~\ref{prop: mirror}(a). Then we will turn our attention to Proposition~\ref{prop: mirror}(b) in Section~\ref{subsec: alpha}, where we will formulate the result as Proposition~\ref{prop: alpha}.
\subsubsection{The case of $\gamma$-curves:} \label{subsec: gamma} To prove our claim in Proposition~\ref{prop: mirror}, we will verify that for $1 \leq s \leq k$, the vanishing cycle $V(y_{s})$ is isotopic to the curve $\gamma_s$ in $D_k$. We begin with a simple but crucial observation.
\begin{Lem} \label{lem: typex} For any wiring diagram $\mathcal{W}_\textbf{n}({\textbf{m}})$ with $k$ wires as in Definition~\ref{def: wd}, and for any $1 \leq s \leq k$, we have $$V(y_s) = \D(x_1) \circ \cdots \circ \D(x_{k-1}) (\delta (y_s)). $$
\end{Lem}
\begin{proof} By Definition~\ref{def: cycle} we have
$$V(y_s) = \D(x_1) \circ \cdots \circ \D(x_{k-1}) \circ (\D(y_{1})) ^{b_1-n_1} \circ \cdots \circ (\D(y_{s-1})) ^{b_{s-1}-n_{s-1}} (\delta (y_s)). $$ But $$ (\D(y_{1})) ^{b_1-n_1} \circ \cdots \circ (\D(y_{s-1})) ^{b_{s-1}-n_{s-1}} (\delta (y_s)) = \delta (y_s),$$ since the convex curves associated to the type $y$ marked points are nested, due to the construction and the order of the type $y$ marked points in the wiring diagram. \end{proof}
Therefore, to prove our claim in Proposition~\ref{prop: mirror} (a), for each $1 \leq s \leq k$, we need to verify that $$ \D(x_{1}) \circ \cdots \circ \D(x_{k-1}) (\delta(y_s)) = \gamma_s$$ by Definition~\ref{def: cycle} and Lemma~\ref{lem: typex}. Equivalently, we need to verify that for each $1 \leq s \leq k$,
$$(\D(x_{k-1}))^{-1} \circ \cdots \circ (\D(x_{1}))^{-1} ( \gamma_s) = \delta(y_s).$$ For technical reasons, we will prove a more refined
statement in Proposition~\ref{prop: lobe} from which our claim will follow by Lemma~\ref{lem: lobetomirror}. Before giving the statement we make the following definition.
\begin{Def} {\em (Right/Left-convexity)} A curve in a disk with holes enclosing two distinct sets of adjacent holes as illustrated in Figure~\ref{fig: convex} is called right-convex, and the mirror image of a right-convex curve is called left-convex. By definition any convex curve enclosing a set of adjacent holes is both right-convex and left-convex. \end{Def}
\begin{figure}[ht] \relabelbox \small {\epsfxsize=1.1in
\centerline{\epsfbox{convex.eps}}}\relabel{1}{$0 \leq $}
\endrelabelbox
\caption{A right-convex curve in a disk with holes.}\label{fig: convex} \end{figure}
{\bf Notation for the rest of the paper:} During the proof, it will be convenient to keep track of the number of wires in our wiring diagrams when talking about marked
points. Thus we will write $x_i^k$ (resp. $y_j^k$) when talking about the marked point $x_i$ (resp. $y_j$) in a wiring diagram with $k$ wires. Although this decoration makes the notation cumbersome, it is necessary for the accuracy of the arguments; nevertheless, the reader can safely ignore this superscript for the most part in the text below.
Similarly, when talking about curves and half-twists in a disk with holes, it will be convenient to keep track of the
number of holes. For example, we will write $\gamma_s^k$ when talking about the convex curve $\gamma_s$
in the disk $D_k$ with $k$ holes. Moreover, the counterclockwise half-twist $\D(x_i)$ in $D_k$ will be abbreviated by $\D^k_i$ and its inverse, the clockwise half-twist, by $(\D^k_i)^{-1}$. (Fortunately, we will not need to use $\D(y_j)$ in our discussion below by Lemma~\ref{lem: typex}, and hence the notation $\D^k_i$ will not lead to any confusion.) Furthermore, we will denote the $i$th
hole (with respect to the geometric order from top to bottom) in $D_k$ by $H_i^k$. We will also need the following definitions.
\begin{Def} \label{def: red} For $2\leq s\leq k$, we denote by $\Gamma_s^k$ the collection of red arcs in $D_k$ shown in Figure \ref{fig: figa}, where we set $\Gamma_1^k := \emptyset$. \end{Def}
\begin{figure}[ht] \relabelbox \small {\epsfxsize=1.8in
\centerline{\epsfbox{figa.eps}}}\relabel{1}{$\gamma^k_{s}$} \relabel{2}{$H^k_s$} \relabel{3}{$D_k$}
\endrelabelbox
\caption{ $\Gamma_s^k$ is the collection of red arcs. }\label{fig: figa} \end{figure}
Fix any wiring diagram $\mathcal{W}_\textbf{n}$ with $k$ wires and $k-1$
marked points $x^k_{k-1}, \ldots, x^k_{1}$ (reading from left to right), as in Definition~\ref{def: wd}.
\begin{Def} \label{def: arcs} For $2 \leq s\leq k$, let $\rho_s^k$ be the smallest $t \in \{1, \ldots, k-1\}$ such that the convex curve $\delta (x_{t}^k) \subset D_k$ assigned to $x_{t}^k$ contains
$H_s^k$. For $1\leq s\leq k$ and $1\leq r\leq k-1$, we define
\begin{align*}
\gamma_{s,r}^k&:=(\D_r^k)^{-1}\circ(\D_{r-1}^k)^{-1}\circ\cdots\circ(\D_1^k)^{-1}(\gamma_s^k), \\
\Gamma_{s,r}^k&:=\begin{cases}(\D_r^k)^{-1}\circ(\D_{r-1}^k)^{-1}\circ\cdots\circ(\D_{\rho_s^k}^k)^{-1}(\Gamma_s^k) & \text{if } \; s \geq 2\; \text{and } \;r\geq\rho_s^k \\
\Gamma_s^k & \text{otherwise,} \end{cases}
\end{align*}
and set $\gamma_{s,0}^k:=\gamma_s^k$ and $\Gamma_{s,0}^k:=\Gamma_s^k$.
\end{Def}
Note that $\Gamma_{1,r}^k =\emptyset$ for any $0 \leq r \leq k-1$, by Definition~\ref{def: arcs}.
\begin{Def} \label{def: psi} We abbreviate $$\Psi_r^k:=(\D_r^k)^{-1}\circ(\D_{r-1}^k)^{-1}\circ\cdots\circ(\D_1^k)^{-1}$$ for $r\geq 1$ and set
$\Psi_0^k=\operatorname{id}$. \end{Def}
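With this notation, the curves of Definition~\ref{def: arcs} can be written compactly as $$\gamma_{s,r}^k=\Psi_r^k(\gamma_s^k), \qquad \text{where } \; \Psi_r^k=(\D_r^k)^{-1}\circ\Psi_{r-1}^k \; \text{ for } r \geq 1,$$ and the identity to be verified for Proposition~\ref{prop: mirror}(a) becomes $\gamma_{s,k-1}^k = \Psi_{k-1}^k(\gamma_s^k) = \delta(y^k_s)$.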
\begin{Prop} \label{prop: lobe} Fix any wiring diagram $\mathcal{W}_\textbf{n}$ with $k$ wires and $k-1$
marked points $x^k_{k-1}$, $\ldots,$ $x^k_{1}$ (reading from left to right), as in Definition~\ref{def: wd}. Then the following three statements hold for any $1\leq s\leq k$, and $1\leq r\leq k-1$:
\begin{enumerate}[\rm(L1)]
\item The curve $\gamma_{s,r}^k$ is left or right-convex.
\item The sidedness of the convexity of $\gamma_{s,r}^k$ is opposite to that of $\gamma_{s,r-1}^k$ if and only if the convex
curve assigned to $x_r^k$ contains $\Psi_{r-1}^k(H_s^k)$.
\item If $s \geq 2$ and $r\geq\rho_s^k$, then $\Gamma_{s,r}^k$ has one of the two forms shown in Figure \ref{fig: figb}.
In particular, the holes enclosed by $\gamma_{s,r}^k$ can be split into two collections of adjacent holes, which we will
call ``lobes" with the lobe containing $\Psi_r^k(H_s^k)$ being called the ``primary lobe" and the other lobe the ``secondary lobe".
We require $\Psi_r^k(H_s^k)$ to be the innermost hole of the primary lobe and the
intersection of $\Gamma_{s,r}^k$ with a half-plane containing the primary lobe to have precisely one of the two forms illustrated in Figure \ref{fig: figb}. When $s=1$, we have $\Gamma_{1,r}^k =\emptyset$, for any $0 \leq r \leq k-1$.
\end{enumerate} \end{Prop}
\begin{figure}[ht] \relabelbox \small {\epsfxsize=5in
\centerline{\epsfbox{figb.eps}}} \relabel{1}{$D_k$} \relabel{2}{$D_k$} \relabel{3}{$\gamma_{s,r}^k$} \relabel{4}{$\gamma_{s,r}^k$} \relabel{5}{$0 \leq $} \relabel{6}{$\geq 0$} \relabel{a}{$\Psi_r^k(H_s^k)$} \relabel{b}{$\Psi_r^k(H_s^k)$}
\endrelabelbox
\caption{$\Gamma_{s,r}^k$ is the collection of red arcs in both forms in (L3).}\label{fig: figb} \end{figure}
\begin{Lem} \label{lem: lobetomirror} Proposition~\ref{prop: lobe} implies Proposition~\ref{prop: mirror} {\em (a)}. \end{Lem}
\begin{proof} Lemma~\ref{lem: consec} implies that, for each $1\leq s\leq k$, the top $s$ wires according to their geometric order on the right-hand side of a wiring diagram as described in Definition~\ref{def: wd} will be {\em consecutive} (perhaps with a different geometric order) on the left-hand side as well. Therefore, by definition, the convex curve $\delta(y^k_s)$ encloses the set of adjacent holes in $D_k$ each of whose order is the same as the local geometric order of one of these $s$ wires on the left-hand side of the diagram.
On the other hand, by definition, the convex curve $\gamma_s^k$ encloses the top $s$ holes in $D_k$ and the set of images of these holes under $(\D_{k-1}^k)^{-1} \circ \cdots\circ(\D_1^k)^{-1}$ will be the same as the set of adjacent holes enclosed by $\delta(y^k_s)$. To see this, imagine that each wire has a colour and that each hole in the initial copy of $D_k$ has a colour so that the $i$th hole from the top has the same colour as the $i$th wire from the top on the right-hand side. As the wires move from right to left, they will be locally reordered each time a marked point appears in the diagram. Similarly, the clockwise half-twist corresponding to that marked point will reorder the holes on the disk $D_k$. We set up our algorithm so that at each step the colour of each wire remains the same as the colour of the corresponding hole.
Moreover, for each $1\leq s\leq k$, we know by Proposition~\ref{prop: lobe} that the curve
$$\gamma_{s,k-1}^k:=(\D_{k-1}^k)^{-1} \circ \cdots\circ(\D_1^k)^{-1}(\gamma_s^k)$$ is right or left-convex, but since it encloses a set of {\em adjacent} holes, it must be {\em convex}. Therefore we conclude that for each $1\leq s\leq k$, the convex curve $\gamma_{s,k-1}^k$ is isotopic to the convex curve $\delta(y^k_s)$. \end{proof}
\begin{proof}[Proof of Proposition~\ref{prop: lobe}.] We will prove Proposition~\ref{prop: lobe} by induction on the number of wires in the wiring diagram.
These three statements are vacuously true for a wiring diagram with one wire and no marked points. Now suppose that $k\geq 2$
and these statements hold for any wiring diagram constructed using the blowup algorithm above with $k-1$ wires and
$k-2$ marked points. We will prove that they hold for any wiring diagram with $k$ wires and $k-1$
marked points $x^k_{k-1}, \ldots, x^k_{1}$ (reading from left to right), constructed as in Definition~\ref{def: wd}. Our induction argument naturally splits into several cases. \smallskip
{\bf \underline{Case I (Exterior blowup)}:} This is the easiest case. Suppose that the last wire $w_k$ is inserted into the diagram as a consequence of an exterior blowup so that $w_k$ lies {\em below} all the wires and has no ``interaction" with the other wires. Recall that $w_k$ carries a free marked point $x^k_{k-1}$ which is placed geometrically to the left of all the previous marked points in the diagram.
Consider a fixed embedding of $D_{k-1} \subset D_k$, where $D_{k-1}$ includes the top $k-1$ holes in $D_k$. In other words, $D_k$ is obtained from $D_{k-1}$ by inserting an extra hole, named $H^k_k$ by our conventions, at the bottom. Under this embedding, for any $1 \leq t \leq k-2$, the convex curve $\delta(x^{k-1}_{t})$ in $D_{k-1}$ can be identified with the convex curve $\delta(x^{k}_{t})$ in $D_k$, since $x^{k}_{t} = x^{k-1}_{t}$ in the new diagram. Similarly, $\D^k_{t} = \D^{k-1}_{t}$ for any $1 \leq t \leq k-2$, under this embedding, and hence it follows that for $1\leq s\leq k-1$ and $1\leq r\leq k-2$ we have $\gamma_{s,r}^k = \gamma_{s,r}^{k-1}$ and $\Gamma_{s,r}^k = \Gamma_{s,r}^{k-1}$, which proves, by induction, that statements (L1), (L2) and (L3) hold for the new wiring diagram with $k$ wires, for these cases.
Note that the convex curve $\delta(x^k_{k-1})$ is the curve that encloses the last hole $H^k_k \subset D_k$, by definition. Therefore, for $1\leq s\leq k-1$, the clockwise half-twist $(\D_{k-1}^k)^{-1}$ has no effect on the \emph{convex} curve $\gamma_{s,k-2}^k$ nor on the collection of arcs $\Gamma_{s,k-2}^k$. Hence statements (L1), (L2) and (L3) hold for $1\leq s\leq k-1$ and $r=k-1$ as well in the new wiring diagram with $k$ wires.
Finally, we observe that $\gamma_{k,r}^k = \gamma_{k}^k$ is convex for each $r$, hence (L1) and (L2) automatically hold for $s=k$. Also, $\rho_k^k=k-1$ and
$\Gamma_{k,k-1}^k = (\D_{k-1}^k)^{-1}(\Gamma_k^k)$ has the form shown in Figure \ref{fig: figc}, thus (L3) also holds for $s=k$. \smallskip
\begin{figure}[ht] \relabelbox \small {\epsfxsize=1.5in
\centerline{\epsfbox{figc.eps}}} \relabel{1}{$\gamma_{k,k-1}^k$} \relabel{2}{$D_k$}
\endrelabelbox
\caption{$\Gamma_{k,k-1}^k$ is the collection of red arcs.}\label{fig: figc} \end{figure}
{\bf \underline{Case II (Interior blowup)}:} Suppose that the last wire $w_k$ is introduced into the diagram as a consequence of an interior blowup at the $i$th term so that $w_k$ is initially right below the $(i+1)$st wire with respect to the {\em geometric ordering} of the wires on the right-hand side of the diagram. Suppose that this $(i+1)$st wire is $w_j$.
Now imagine that we take a step back in our blowup algorithm. In other words, we delete the last wire $w_k$ (and the last marked point $x^k_{k-1}$ and the associated last twisting) from the diagram. At the same time we remove the {\em corresponding hole} from $D_k$ as follows. First we remove the $(i+2)$nd hole from $D_k$ to obtain the rightmost copy of $D_{k-1}$. As we move from right to left in the wiring diagram, every time we pass through a marked point, we have a new copy of $D_{k-1}$ obtained by removing from $D_k$ the hole whose order is the same as the local geometric order of the wire $w_k$. All these copies of $D_{k-1}$ can of course be identified with the rightmost copy of $D_{k-1}$ and we use this observation in our induction argument below,
where we proceed according to three possible cases. \smallskip
{\bf \underline{Case II.A (Interior blowup, $1\leq s \leq i$)}:} Suppose that $1\leq s \leq i$. By hypothesis, $\gamma_{s,r}^{k-1}$ and $\Gamma_{s,r}^{k-1}$ satisfy statements (L1), (L2) and (L3) on $D_{k-1}$ for $1 \leq r \leq k-2$. Now since $\gamma^{k-1}_s$ does not contain the hole $H^{k-1}_{i+1}$, by the assumption that $1\leq s \leq i$, the image $\Psi_{r-1}^{k-1}(H_{i+1}^{k-1})$ is not contained in $\gamma_{s,r}^{k-1}$ for $1 \leq r \leq k-2$. Therefore, we can insert back the hole we deleted by splitting the image $\Psi_{r-1}^{k-1}(H_{i+1}^{k-1})$ into two adjacent holes. The new hole will be inserted right below $H^{k-1}_{i+1}$ in the rightmost copy of $D_{k-1}$ and it will be inserted right below or above $\Psi_{r-1}^{k-1}(H_{i+1}^{k-1})$ in an alternating fashion every time we pass a marked point that belongs to the intersection $w_j \cap w_k$. As a result, the superscript $k-1$ can be promoted to $k$, meaning that the curve $\gamma_{s,r}^{k-1}$ can be viewed as $\gamma_{s,r}^{k}$ and $\Gamma_{s,r}^{k-1}$ can be viewed as $\Gamma_{s,r}^{k}$, since we have not modified them by the insertion of the new hole. Hence $\gamma_{s,r}^{k}$ and $\Gamma_{s,r}^{k}$ satisfy the statements (L1), (L2) and (L3) on $D_{k}$, for $1 \leq r \leq k-2$, as well.
To finish the proof of this case, we only need to argue that $\gamma_{s,k-1}^{k}$ and $\Gamma_{s,k-1}^{k}$ satisfy the statements (L1), (L2) and (L3). But by the discussion
above, $\gamma_{s,k-2}^{k}$ is right or left-convex and lies in the subdisk of $D_k$ along which we apply $(\D^{k}_{k-1})^{-1}$ corresponding to the new marked point $x^k_{k-1}$, by our algorithm. Therefore, it is easy to see that $ \gamma_{s,k-1}^{k} = (\D^{k}_{k-1})^{-1}(\gamma_{s,k-2}^{k})$ and $\Gamma_{s,k-1}^{k} = (\D^{k}_{k-1})^{-1}(\Gamma_{s,k-2}^{k})$ satisfy (L1), (L2) and (L3) as well. \smallskip
{\bf \underline{Case II.B (Interior blowup, $i+2 \leq s \leq k$)}:} Suppose that $i+2 \leq s \leq k$. We check that $\gamma_{s,r}^k$ and $\Gamma_{s,r}^k$ satisfy
statements (L1), (L2) and (L3) for $1\leq r\leq k-1$. We will proceed by induction on $r$. In the case $r<\rho_s^k$,
the statements are trivial since $\gamma_{s,r}^k=\gamma_s^k$ and $\Gamma_{s,r}^k=\Gamma_s^k$ in this case. If $r=\rho_s^k$, then, after applying the clockwise half-twist $(\D_r^k)^{-1}$ to $\gamma_{s,r-1}^k=\gamma_s^k$ and $\Gamma_{s,r-1}^k=\Gamma_s^k$, we see that $\gamma_{s,r}^k$ and
$\Gamma_{s,r}^k$ have the form given in Figure \ref{fig: figd}. Thus statements (L1), (L2) and (L3) hold in this case also.
\begin{figure}[ht] \relabelbox \small {\epsfxsize=2in
\centerline{\epsfbox{figd.eps}}} \relabel{2}{$\gamma_{s,\rho_s^k}^k$} \relabel{1}{$D_k$}
\endrelabelbox
\caption{$\Gamma_{s,\rho_s^k}^k$ is the collection of red arcs. }\label{fig: figd} \end{figure}
Now suppose that statements (L1), (L2) and (L3) hold for $k-3\geq r=p\geq\rho_s^k$. We check that the statements
continue to hold for $r=p+1$. For this, first note that the convex curve $\delta(x_{p+1}^k)$ encloses the hole
$\Psi_p^k(H_s^k)$ if and only if the convex curve $\delta(x_{p+1}^{k-1})$ encloses the hole $\Psi_p^{k-1}(H_{s-1}^{k-1})$.
Indeed, if $s>i+2$, then the wire in geometric position $s$ on the right hand side of the wiring diagram
is wire $w_l$ for some $l<k$, since wire $w_k$ is in geometric position $i+2$ on the right hand side. If we take a step back in
our blowup algorithm, then wire $w_l$ will have geometric position $s-1$ on the right hand side of the wiring
diagram, now with $k-1$ wires. For $s>i+2$, the result claimed now follows from the fact that $\delta(x_{p+1}^m)$ encloses the hole
$\Psi_p^m(H_t^m)$ if and only if the wire in geometric position $t$ on the right hand side of the diagram
passes through the marked point $x_{p+1}^m$.
If $s=i+2$, then the wire in geometric position $s$ on the right hand side of the wiring diagram
will be wire $w_k$. Since wire $w_k$ passes through each marked point $x_q^k$, $1\leq q\leq k-2$, that wire $w_j$ passes through, and otherwise remains parallel to $w_j$, arguing as above we obtain the same result for $s=i+2$.
By the induction hypothesis on $r$, it now
follows that the curve $\gamma_{s,p}^k$ will be right or left-convex according to whether $\gamma_{s-1,p}^{k-1}$ is
right or left-convex. Furthermore, the holes that $\gamma_{s,p}^k$ encloses can be obtained from the holes that
$\gamma_{s-1,p}^{k-1}$ encloses by splitting the hole $\Psi_p^{k-1}(H_{i+1}^{k-1})$ into two adjacent holes.
In a similar way, the holes that $\delta(x_{p+1}^k)$ encloses can be obtained from the holes that $\delta(x_{p+1}^{k-1})$ encloses by splitting the hole $\Psi_p^{k-1}(H_{i+1}^{k-1})$ into two adjacent holes. As a consequence, the curve
$\gamma_{s,p+1}^k=(\D^k_{p+1})^{-1}(\gamma_{s,p}^k)$ satisfies (L1) and (L2) and the holes that $\gamma_{s,p+1}^k$ encloses can be obtained from the holes that $\gamma_{s-1,p+1}^{k-1}$ encloses by splitting the hole $\Psi_{p+1}^{k-1}(H_{i+1}^{k-1})$ into two adjacent holes.
We now show that $\Gamma_{s,p+1}^k$ satisfies (L3) by considering the cases that $\delta(x_{p+1}^k)$ encloses and does not enclose the hole
$\Psi_p^k(H_s^k)$ separately. First suppose that $\delta(x_{p+1}^k)$ does not enclose
the hole $\Psi_p^k(H_s^k)$. Then $\delta(x_{p+1}^{k-1})$
does not enclose the hole $\Psi_p^{k-1}(H_{s-1}^{k-1})$. Assume that $\gamma_{s-1,p}^{k-1}$
is right-convex; the case that $\gamma_{s-1,p}^{k-1}$ is left-convex is similar. Then $\gamma_{s-1,p+1}^{k-1}$ is also
right-convex and we have the following possibilities for $\delta(x_{p+1}^{k-1})$:
(i) $\delta(x_{p+1}^{k-1})$ is disjoint from $\gamma_{s-1,p}^{k-1}$. In this case $\delta(x_{p+1}^{k-1})$ cannot enclose a subset of holes from the primary lobe of
$\gamma_{s-1,p}^{k-1}$, since otherwise $\Gamma_{s-1,p+1}^{k-1}$ would fail to satisfy (L3). It follows that $\delta(x_{p+1}^k)$ is disjoint
from $\gamma_{s,p}^k$ and that it does not enclose any subset of holes from the primary lobe of $\gamma_{s,p}^k$.
Thus $\Gamma_{s,p+1}^k$ satisfies (L3).
(ii) $\delta(x_{p+1}^{k-1})$ intersects $\gamma_{s-1,p}^{k-1}$ and encloses at least one hole below the primary lobe of $\gamma_{s-1,p}^{k-1}$.
In this case $\gamma_{s-1,p+1}^{k-1}$ would fail to be right-convex, which contradicts our induction hypothesis. Hence this case cannot occur.
(iii) $\delta(x_{p+1}^{k-1})$ intersects $\gamma_{s-1,p}^{k-1}$ and encloses at least one hole above the secondary lobe of $\gamma_{s-1,p}^{k-1}$.
In this case also $\gamma_{s-1,p+1}^{k-1}$ would fail to be right-convex, and hence this case also cannot occur.
(iv) $\delta(x_{p+1}^{k-1})$ intersects $\gamma_{s-1,p}^{k-1}$ and encloses at least one hole below the secondary lobe of $\gamma_{s-1,p}^{k-1}$.
If $\delta(x_{p+1}^{k-1})$ does not enclose all the holes contained in the secondary lobe of $\gamma_{s-1,p}^{k-1}$, then either $\gamma_{s-1,p+1}^{k-1}$
would have more than two ``lobes'' or $\Psi_{p+1}^{k-1}(H_{s-1}^{k-1})$ would not be the innermost hole of the primary lobe. Both of these cases contradict the induction hypothesis, hence they cannot occur. Thus the only
possibility in this case is that $\delta(x_{p+1}^{k-1})$ encloses all the holes contained in the secondary lobe and at least one
hole below it.
This case is illustrated in Figure~\ref{fig: fige}(a). In this case $\delta(x_{p+1}^k)$ will enclose all holes in the secondary lobe of $\gamma_{s,p}^k$
and enclose at least one hole below it. Hence $\Gamma_{s,p+1}^k$ will satisfy (L3) in this case also.
This concludes the analysis for the case $\delta(x_{p+1}^k)$ not enclosing the hole $\Psi_p^k(H_s^k)$.
Now suppose that $\delta(x_{p+1}^k)$ encloses the hole $\Psi_p^k(H_s^k)$. Then $\delta(x_{p+1}^{k-1})$ encloses the hole $\Psi_p^{k-1}(H_{s-1}^{k-1})$.
Suppose that $\gamma_{s,p}^k$ is right-convex; again the left-convex case is similar. Then $\gamma_{s-1,p}^{k-1}$ is also right-convex.
By the induction hypothesis, the image of $\gamma_{s-1,p}^{k-1}$ under the clockwise half-twist about $\delta(x_{p+1}^{k-1})$
must be left-convex and the image of $\Gamma_{s-1,p}^{k-1}$ must continue to satisfy (L3). In this case it can be checked that if $\delta(x_{p+1}^{k-1})$ does not enclose $\gamma_{s-1,p}^{k-1}$, then it must enclose all the holes of the secondary lobe (and no
holes above it) and any number of holes below $\Psi_p^{k-1}(H_{s-1}^{k-1})$; this situation is illustrated in
Figure~\ref{fig: fige}(b).
Since, by splitting vertically the hole $\Psi_p^{k-1}(H_{i+1}^{k-1})$ into two adjacent holes, we obtain $\gamma_{s,p}^k$ and $\delta(x_{p+1}^k)$
from $\gamma_{s-1,p}^{k-1}$ and $\delta(x_{p+1}^{k-1})$, respectively, it follows that $\gamma_{s,p}^k$ and $\delta(x_{p+1}^k)$ will have the same form as
$\gamma_{s-1,p}^{k-1}$ and $\delta(x_{p+1}^{k-1})$, that is, $\delta(x_{p+1}^k)$ will enclose $\gamma_{s,p}^k$ or it will enclose all the holes of the secondary
lobe (and no holes above it) and any number of holes below $\Psi_p^k(H_s^k)$.
It is now clear that, after applying a
clockwise half-twist about $\delta(x_{p+1}^k)$ to $\Gamma_{s,p}^k$, $\Gamma_{s,p+1}^k$ will continue to satisfy (L3).
We have thus checked that $\gamma_{s,r}^k$ and $\Gamma_{s,r}^k$ satisfy (L1), (L2) and (L3) for $r\leq k-2$. We now check
that they satisfy (L1), (L2) and (L3) for $r=k-1$ also.
\begin{figure}[ht] \relabelbox \small {\epsfxsize=3.5in
\centerline{\epsfbox{fige.eps}}}
\relabel{1}{(a)} \relabel{2}{(b)} \relabel{3}{$D_k$} \relabel{4}{$D_k$} \relabel{5}{$\delta(x_{p+1}^{k-1})$} \relabel{6}{$\delta(x_{p+1}^{k-1})$}
\endrelabelbox
\caption{Some possibilities for $\delta(x_{p+1}^{k-1})$. }\label{fig: fige} \end{figure}
As $\gamma_{s-1,k-2}^{k-1}$ will be in the left-most copy of $D_{k-1}$, by
Lemma \ref{lem: consec}, all the holes enclosed
by $\gamma_{s-1,k-2}^{k-1}$ will be adjacent and the hole $\Psi_{k-2}^{k-1}(H_{s-1}^{k-1})$ will be either at the top or the
bottom of these holes. Being one-sided convex (by the induction hypothesis), the curve $\gamma_{s-1,k-2}^{k-1}$ must be
convex and hence the pair $(\gamma_{s-1,k-2}^{k-1},\Gamma_{s-1,k-2}^{k-1})$ must be as in Figure~\ref{fig: figf}. Thus the curve $\gamma_{s,k-2}^k$ must also be convex as the holes it encloses are obtained from the holes that $\gamma_{s-1,k-2}^{k-1}$ encloses by splitting $\Psi_{k-2}^{k-1}(H_{i+1}^{k-1})$ into two adjacent holes. As the holes enclosed by $\gamma_{s,k-1}^k$ must be adjacent with the hole $\Psi_{k-1}^k(H_s^k)$ at one end
(again, by Lemma \ref{lem: consec}), it follows that $\delta(x_{k-1}^k)$ must be disjoint from
$\gamma_{s,k-2}^k$. Hence the curve $\gamma_{s,k-1}^k$ will be convex and statements (L1) and (L2) will hold for $r=k-1$ also.
To see that (L3) also holds for $r=k-1$, first suppose that $s>i+2$. Then $\Psi_{k-2}^k(H_s^k)$ will already be at the top or bottom of the holes enclosed by $\gamma_{s,k-2}^k$. As the last marked point $x_{k-1}^k$ only involves the wires having the geometric positions $\{t\,|\,1\leq t\leq i+2, t\neq i+1\}$ on the right hand side, the last clockwise half-twist $(\D_{k-1}^k)^{-1}$ will be about a subdisk that does not enclose $\Psi_{k-2}^k(H_s^k)$. It easily follows that $\Gamma_{s,k-1}^k$ will satisfy (L3).
Now suppose that $s=i+2$. Then the hole $\Psi_{k-2}^k(H_s^k)$ will be either one below the top hole or one above the
bottom hole of $\gamma_{s,k-2}^k$. The last clockwise half-twist will be about a subdisk that encloses all the holes of $\gamma_{s,k-2}^k$ except the hole
$\Psi_{k-2}^k(H_{i+1}^k)$, which will be at the top or the bottom. Again it follows that $\Gamma_{s,k-1}^k$ will satisfy (L3).
This completes the proof of {Case~II.B}.
\begin{figure}[ht] \relabelbox \small {\epsfxsize=3.5in
\centerline{\epsfbox{figf.eps}}} \relabel{3}{$\gamma_{s-1,k-2}^{k-1}$} \relabel{4}{$\gamma_{s-1,k-2}^{k-1}$} \relabel{1}{$D_{k-1}$} \relabel{2}{$D_{k-1}$}
\endrelabelbox
\caption{One arc of $\Gamma_{s-1,k-2}^{k-1}$ is shown in red. }\label{fig: figf} \end{figure}
\smallskip
{\bf \underline{Case II.C (Interior blowup, $s=i+1$)}:} Suppose that $s=i+1$. By the proof of {Case~II.B}, we know that $\gamma_{i+2,r}^k$ and $\Gamma_{i+2,r}^k$ satisfy
(L1), (L2) and (L3) for $1\leq r\leq k-1$. Note that for $1\leq r\leq k-2$, the holes $\Psi_r^k(H_{i+1}^k)$ and $\Psi_r^k(H_{i+2}^k)$
will be adjacent, since the wires $w_j$ and $w_k$, having geometric positions $i+1$ and $i+2$, respectively, on the
right hand side of the diagram, will remain consecutive up to the marked point $x_{k-1}$. Thus the hole $\Psi_r^k(H_{i+1}^k)$
will be in the primary lobe of $\gamma_{i+2,r}^k$ for $1\leq r\leq k-2$. It is now clear that $\gamma_{i+1,r}^k$ is given by isotoping $\gamma_{i+2,r}^k$
over the hole $\Psi_r^k(H_{i+2}^k)$ from the side dictated by $\Gamma_{i+2,r}^k$ for $1\leq r\leq k-2$; see Figure~\ref{fig: figg}.
It follows that statements (L1), (L2) and (L3) hold for $\gamma_{i+1,r}^k$ and $\Gamma_{i+1,r}^k$ for $1\leq r\leq k-2$.
\begin{figure}[ht] \relabelbox \small {\epsfxsize=3.5in
\centerline{\epsfbox{figg.eps}}} \relabel{1}{$\gamma_{i+2,r}^k$} \relabel{2}{$\gamma_{i+2,r}^k$} \relabel{3}{$\gamma_{i+1,r}^k$} \relabel{4}{$\gamma_{i+1,r}^k$}
\endrelabelbox
\caption{The case when $1 \leq r\leq k-2$.}\label{fig: figg} \end{figure}
For $r=k-1$, we argue as follows: Since $\gamma_{i+1,k-2}^{k-1}$ will be convex with the hole $\Psi_{k-2}^{k-1}(H_{i+1}^{k-1})$
at one end and the two holes $\Psi_{k-2}^k(H_{i+1}^k)$ and $\Psi_{k-2}^k(H_{i+2}^k)$ will be adjacent and both in the primary
lobe of $\gamma_{i+2,k-2}^k$, the curve $\gamma_{i+1,k-2}^k$ must have one of the two forms given in Figure~\ref{fig: figh}. It easily follows that
$\gamma_{i+1,k-1}^k$ and $\Gamma_{i+1,k-1}^k$ satisfy statements (L1), (L2) and (L3). This completes the proof of {Case~II.C}. \end{proof}
\begin{figure}[ht] \relabelbox \small {\epsfxsize=4.5in
\centerline{\epsfbox{figh.eps}}} \relabel{1}{$\gamma_{i+2,k-2}^k$} \relabel{2}{$\gamma_{i+2,k-2}^k$} \relabel{3}{$D_k$} \relabel{4}{$D_k$} \relabel{5}{$\gamma_{i+1,k-2}^k$} \relabel{6}{$\gamma_{i+1,k-2}^k$}
\endrelabelbox
\caption{The case when $r = k-1$.}\label{fig: figh} \end{figure}
\subsubsection{The case of $\alpha$-curves:} \label{subsec: alpha} We reformulate our claim in Proposition~\ref{prop: mirror} (b) about $\alpha$-curves as Proposition~\ref{prop: alpha} below, where we have replaced $\mathcal{W}_\textbf{n} (\textbf{m})$ by $\mathcal{W}_\textbf{n}$, since the extension from $\mathcal{W}_\textbf{n}$ to $\mathcal{W}_\textbf{n} (\textbf{m})$ is irrelevant. Recall that $\overline{\alpha}$ denotes the mirror image of a given curve $\alpha$. In the following, we will decorate each curve with a superscript to indicate the number of holes in the disk in which they are embedded. For example, we will use $\alpha^k_t$ to indicate that we are talking about the curve $\alpha_t$ (see Section~\ref{sec: stab}) in $D_k$. Similarly, we will decorate each marked point in a wiring diagram with a superscript to indicate the number of wires in the diagram.
\begin{Prop} \label{prop: alpha} Let $\mathcal{W}_\textbf{n}$ be a wiring diagram with $k \geq 2$ wires and $k-1$
marked points $x^k_{k-1},\ldots,x^k_{1}$ (reading from left to right) constructed as in Definition~\ref{def: wd} with respect to some blowup sequence. Then, for each $1 \leq t \leq k-1$, the vanishing cycle $V(x^k_{t})$ associated to the marked point $x^k_{t}$ via the method of Plamenevskaya and Starkston, is the mirror image $\overline{\alpha}^k_t$ of the curve ${\alpha}^k_t$ obtained by the stabilization algorithm described in Section~\ref{sec: stab} with respect to the same blowup sequence. \end{Prop}
\begin{proof} According to Definition~\ref{def: cycle}, $V(x^k_{1}) =\delta(x^k_1)$ and $V(x^k_{t}) = \D^k_1 \circ \cdots \circ \D^k_{t-1} (\delta(x^k_t))$ for $2 \leq t \leq k-1$. Therefore, we need to verify that $\delta(x^k_1) = {\alpha}^k_1$ (both curves are convex and $ \overline{\alpha}^k_1= {\alpha}^k_1$) and $\D^k_1 \circ \cdots \circ \D^k_{t-1} (\delta(x^k_t)) = \overline{\alpha}^k_t, $ for $2 \leq t \leq k-1$.
For $k=2$, the wiring diagram has only two parallel wires and one marked point $x^2_1$ at the bottom wire, corresponding to the initial blowup $(0) \to (1,1)$. The statement holds for this case since $\delta(x^2_1) = {\alpha}^2_1$. Now suppose that $k\geq 3$
and the statement holds for any wiring diagram with $k-1$ wires and $k-2$ marked points, constructed as in Definition~\ref{def: wd} with respect to some blowup sequence. We will prove that the statement holds for any wiring diagram with $k$ wires and
$k-1$ marked points obtained by inserting one more wire and a marked point corresponding to the new blowup. \smallskip
{\bf \underline{Case I (Exterior blowup)}:} This is the easiest case. Suppose that the new wire $w_k$ is inserted into the diagram with $k-1$ wires as a consequence of an exterior blowup so that $w_k$ lies {\em below} all the wires and has no ``interaction'' with the other wires. Recall that $w_k$ carries a free marked point $x^k_{k-1}$ which is placed geometrically to the left of all the previous marked points in the diagram.
Consider a fixed embedding of $D_{k-1} \subset D_k$, where $D_{k-1}$ includes the top $k-1$ holes in $D_k$. In other words, $D_k$ is obtained from $D_{k-1}$ by inserting an extra hole, named $H^k_k$ by our conventions, at the bottom. It is clear that for any $1 \leq t \leq k-2$, the convex curve $\delta (x^{k-1}_t) \subset D_{k-1}$ can be identified with the convex curve $\delta (x^{k}_t) \subset D_k$ and hence $\D^k_{t} = \D^{k-1}_{t}$ for any $1 \leq t \leq k-2$, under this embedding. Similarly, for any $1 \leq t \leq k-2$, ${\alpha}^k_t$ can be identified with ${\alpha}^{k-1}_t$, by our stabilization algorithm in Section~\ref{sec: stab}.
By induction, the property we want to verify holds for the wiring diagram with $k-1$ wires before we insert $w_k$, i.e., we have $\delta(x^{k-1}_1) = {\alpha}^{k-1}_1$ and $\D^{k-1}_1 \circ \cdots \circ \D^{k-1}_{t-1} (\delta(x^{k-1}_t)) = \overline{\alpha}^{k-1}_t, $ for $2 \leq t \leq k-2$. Under the embedding above, these can be upgraded to $\delta(x^k_1) = {\alpha}^k_1$ and $\D^k_1 \circ \cdots \circ \D^k_{t-1} (\delta(x^k_t)) = \overline{\alpha}^k_t, $ for $2 \leq t \leq k-2$, by simply replacing the superscript $k-1$ with $k$. The key point here is that the composition $\D^k_1 \circ \cdots \circ \D^k_{t-1}$ takes place in the fixed embedded disk $D_{k-1} \subset D_k$.
To finish the proof of this case, we only have to verify the statement for $t=k-1$. Note that $\delta (x^k_{k-1}) = \alpha^k_{k-1}$, which, by definition, is the convex curve that encloses the last hole $H^k_k \subset D_k$. We observe that $$\D^k_1 \circ \cdots \circ \D^k_{k-2} (\delta (x^k_{k-1})) = \alpha^k_{k-1}$$ so that the ``last'' vanishing cycle is $\alpha^k_{k-1} = \overline{\alpha}^k_{k-1}$, which is consistent with our stabilization algorithm in Section~\ref{sec: stab}. \smallskip
{\bf \underline{Case II (Interior blowup)}:} Now, suppose that the new wire $w_k$ is introduced into the diagram with $k-1$ wires as a consequence of an interior blowup at the $i$th term so that $w_k$ is initially right below the $(i+1)$st wire with respect to the {\em geometric ordering} of the wires on the right-hand side of the diagram. Suppose this $(i+1)$st wire is $w_j$. Our proof below splits into two subcases: the case where $1 \leq t \leq k-2$, and the case $t = k-1$. \smallskip
{\bf \underline{Case II.A (Interior blowup, $1 \leq t \leq k-2$)}:} Now, imagine that we take a step back in our blowup algorithm. In other words, we delete the last wire $w_k$ (and the last marked point $x^k_{k-1}$ and the associated last twisting) from the diagram. At the same time we remove the {\em corresponding hole} from $D_k$ as follows. First we remove the $(i+2)$nd hole from $D_k$ to obtain the rightmost copy of $D_{k-1}$. As we move from right to left in the wiring diagram, every time we pass through a marked point, we have a new copy of $D_{k-1}$ obtained by removing from $D_k$ the hole whose order is the same as the local geometric order of the wire $w_k$. All these copies of $D_{k-1}$ can of course be identified with the rightmost copy of $D_{k-1}$, in which, by induction, we have $$V(x^{k-1}_{t}) := \D^{k-1}_1 \circ \cdots \circ \D^{k-1}_{t-1} (\delta(x^{k-1}_t)) = \overline{\alpha}^{k-1}_t $$ for $2 \leq t \leq k-2$ and $\delta(x^{k-1}_1) = {\alpha}^{k-1}_1$. We claim that we can upgrade the superscript $k-1$ to $k$ in the previous sentence by splitting the $(i+1)$st hole in (the rightmost copy of) $D_{k-1}$, to create a new hole right below it, and hence identifying the new disk as $D_{k}$ and promoting all the relevant curves from $D_{k-1}$ to $D_{k}$. Note that this is precisely how ${\alpha}^{k}_t$ is obtained from ${\alpha}^{k-1}_t$ by the stabilization algorithm in Section~\ref{sec: stab}. We just need to show that $V(x^{k-1}_t)$ can be promoted to $V(x^{k}_t)$ in the same manner, for each $1 \leq t \leq k-2$. This is easy to see for $t=1$ since the convex curve $\delta (x^{k-1}_1)$ can be promoted to the convex curve $\delta (x^{k}_1)$ by inserting a hole in $D_{k-1}$ either enclosed by $\delta (x^{k}_1)$ or not, depending on whether $x^{k}_1$ (which is in fact the same marked point $x^{k-1}_1$) belongs to $w_j$ or not.
For $2 \leq t \leq k-2$, we argue as follows. In the aforementioned copies of $D_{k-1}$, there is a hole whose order is the same as the local geometric order of the wire $w_j$. Since, by the blowup algorithm, the wires $w_j$ and $w_k$ are geometrically consecutive throughout the diagram (except when they intersect), the corresponding two holes will be adjacent in $D_k$, interchanging their relative order at each intersection point of $w_j$ and $w_k$ (which is indeed a marked point in the diagram). Moreover, for each $1 \leq t \leq k-2$, the convex curve $\delta(x^k_{t})$ assigned to $x^k_{t}$ will either enclose both or neither of these holes, depending on whether $x^k_t$ belongs to $w_j$ or not. Therefore, for each $2 \leq t \leq k-2$, any one of the curves $$\delta(x^k_t),\; \D^k_{t-1} (\delta(x^k_t)), \; \ldots, \; \D^k_1 \circ \cdots \circ \D^k_{t-1} (\delta(x^k_t))$$ obtained iteratively by applying counterclockwise half-twists starting from the convex curve $\delta(x^k_t)$, will either enclose both or neither of these two holes. This implies that we could just upgrade the curves $$\delta(x^{k-1}_t),\; \D^{k-1}_{t-1} (\delta(x^{k-1}_t)), \; \ldots, \; \D^{k-1}_1 \circ \cdots \circ \D^{k-1}_{t-1} (\delta(x^{k-1}_t))$$ from $D_{k-1}$ to $D_k$ by inserting a new hole (corresponding to the local geometric position of the new wire $w_k$) in each copy of $D_{k-1}$. Note that the new hole corresponding to the new wire $w_k$ can be viewed as being obtained by splitting the hole corresponding to $w_j$ at each step. We observe that this new hole will appear right below the $(i+1)$st hole in the rightmost copy of $D_{k-1}$ giving $D_k$, and these two holes in question are enumerated as $H^k_{i+1}$ and $H^k_{i+2}$ in the rightmost copy of $D_k$.
This ``splitting'' of the $(i+1)$st hole is the crux of the matter in our stabilization algorithm in Section~\ref{sec: stab}.
The upshot of this discussion is that for $1 \leq t \leq k-2$, the vanishing cycle $V( x^k_{t})$ is the curve $\overline{\alpha}^k_t$, which is in fact nothing but $\overline{\alpha}^{k-1}_t$ upgraded to $D_k$ from $D_{k-1}$, in a manner consistent with our stabilization algorithm in Section~\ref{sec: stab}. Finally, the case $t = k-1$ will be treated separately below. \smallskip
{\bf \underline{Case II.B (Interior blowup, $t=k-1$)}:} To finish the proof of Proposition~\ref{prop: alpha}, we just have to verify that the ``last'' vanishing cycle $V(x^k_{k-1})$ is the curve $\overline{\alpha}^k_{k-1}$. Equivalently, we need to verify that $\Psi^k_{k-2} ( \overline{\alpha}^k_{k-1}) = \delta(x^k_{k-1})$, where $\Psi^k_{k-2} = (\D_{k-2}^{k})^{-1} \circ \cdots \circ (\D_1^{k})^{-1}$, by Definition~\ref{def: psi}. Here, {\em instead of induction}, we will rather use the statement in Proposition~\ref{prop: mirror} (a) for $\gamma_{i+1}$, which we already proved.
The standing assumption in {Case II} is that the new wire $w_k$ is introduced into the diagram with $k-1$ wires as a consequence of an interior blowup at the $i$th term so that $w_k$ is initially right below the $(i+1)$st wire with respect to the {\em geometric ordering} of the wires on the right-hand side of the diagram. Suppose that this $(i+1)$st wire, labelled $w_j$, has an odd number of marked points. The case with an even number of points is very similar and is left to the interested reader.
On the left in Figure~\ref{fig: ex10a}, we depict the convex curves $\delta(y^k_{i+1})$ and $\delta(x^k_{k-1})$. This configuration can be deduced from the proof of Lemma~\ref{lem: consec} for the case when $w_j$ has an odd number of marked points. For $s=i+1$, Proposition~\ref{prop: mirror} (a) implies that
$$ \delta(y^k_{i+1})= \Psi^k_{k-1} (\gamma^k_{i+1}) = (\D^k_{k-1})^{-1} \Psi^k_{k-2} (\gamma^k_{i+1})$$ and we set $(\gamma^k_{i+1})' = \D^k_{k-1}(\delta(y^k_{i+1}) ) = \Psi^k_{k-2} (\gamma^k_{i+1})$, which is depicted on the right in Figure~\ref{fig: ex10a}.
\begin{figure}[ht] \relabelbox \small {\epsfxsize=4.5in
\centerline{\epsfbox{ex10a.eps}}}
\relabel{2}{$\delta(y^k_{i+1})$} \relabel{a}{$\D^k_{k-1}$} \relabel{1}{$(\gamma^k_{i+1})'$} \relabel{3}{$\delta(x^k_{k-1})$} \relabel{4}{$\beta^k_{\psi(i+2)}$} \relabel{5}{$D_k$} \relabel{6}{$D_k$}
\endrelabelbox
\caption{The curve $(\gamma^k_{i+1})' = \D^k_{k-1}(\delta(y^k_{i+1}))$. }\label{fig: ex10a} \end{figure}
Let $\beta^k_{i+1}$ be the convex curve in Figure~\ref{fig: ex11} enclosing the holes $H^k_{i+1}$ and $H^k_{i+2}$ and let $\D^k_{\beta_{i+1}}$ denote the counterclockwise half-twist in the subdisk bounded by $\beta^k_{i+1}$. Note that in $D_k$, we have $\overline{\alpha}^k_{k-1}=\D^k_{\beta_{i+1}} (\gamma^k_{i+1})$ as one can see from Figure~\ref{fig: ex11}.
\begin{figure}[ht] \relabelbox \small {\epsfxsize=2in
\centerline{\epsfbox{ex11.eps}}} \relabel{1}{$\overline{\alpha}^k_{k-1}$} \relabel{2}{$\gamma^k_{i+1}$} \relabel{3}{$\beta^k_{i+1}$} \relabel{4}{$D_k$} \relabel{5}{$H^k_{i+2}$}
\endrelabelbox
\caption{The curves $\gamma^k_{i+1}$, $\beta^k_{i+1}$, and $\overline{\alpha}^k_{k-1}$ in $D_k$.}\label{fig: ex11} \end{figure}
Hence we have
\begin{equation}\label{eq1}
\Psi^k_{k-2}(\overline{\alpha}^k_{k-1} ) = \Psi^k_{k-2} \circ \D^k_{\beta_{i+1}} (\gamma^k_{i+1})
= \D^k_{\beta_{\psi(i+2)}} \circ \Psi^k_{k-2} (\gamma^k_{i+1}) = \D^k_{\beta_{\psi(i+2)}} ((\gamma^k_{i+1})'), \end{equation}
where $\beta_{\psi(i+2)}^k$ is defined as follows: The image of the curve $\beta^k_{i+1}$ under $\Psi^k_{k-2}$ is the convex curve $\beta^k_{\psi(i+2)}$ (depicted on the right in Figure~\ref{fig: ex10a}) enclosing two adjacent holes whose geometric orders are given as $ \{\psi(i+2), \psi(i+2)+1\}$. Here $\D^k_{\beta_{\psi(i+2)}}$ denotes the counterclockwise half-twist in the subdisk bounded by $\beta^k_{\psi(i+2)}$.
Note that the order of the purple coloured hole depicted in the copy of $D_k$ carrying $(\gamma^k_{i+1})'$ in Figure~\ref{fig: ex10a} is $\psi (i+2)$, and the blue hole right below it has order $\psi (i+2) +1$. In fact, $\psi(i+2)$ is the local geometric order of the wire $w_k$, and $\psi(i+2)+1$ is the local geometric order of the wire $w_j$, before the last twisting. Recall that these two wires are always geometrically consecutive and they swap their order each time they intersect.
In the second equality in \eqref{eq1} above, we used the fact that
$$\Psi^k_{k-2} \circ \D^k_{\beta_{i+1}} \circ (\Psi^k_{k-2})^{-1} = \D^k_{\beta_{\psi(i+2)}}$$ which can be easily verified.
Finally, applying $\D^k_{\beta_{\psi(i+2)}}$ (on the subdisk bounded by $\beta^k_{\psi(i+2)}$ enclosing only the blue and the purple holes) to $(\gamma^k_{i+1})'$ in Figure~\ref{fig: ex10a} we get the dashed curve $\delta(x^k_{k-1})$. Therefore, coupled with \eqref{eq1}, we conclude that $ \Psi^k_{k-2}(\overline{\alpha}^k_{k-1} )= \delta(x^k_{k-1})$. \end{proof}
\subsection{Proofs of the corollaries and some examples}
\begin{proof}[Proof of Corollary~\ref{cor: arrangement}] According to Theorem~\ref{thm: main}, for each Stein filling $W_{p,q}(\textbf{n})$ of the contact $3$-manifold $(L(p,q), \xi_{can})$, there is an (unbraided) wiring diagram $\mathcal{W}_\textbf{n}({\textbf{m}})$, constructed as in Definition~\ref{def: wd}, based on the blowup sequence $(0) \to (1,1) \to \cdots \to \textbf{n}$ and $\textbf{m} = \textbf{b}-\textbf{n}$, whose associated planar Lefschetz fibration $f \colon W \to D^2$ obtained by the method of Plamenevskaya and Starkston \cite[Section 5.2]{ps} is equivalent to the one constructed by the authors \cite{bo}. Moreover, using \cite[Proposition 5.5]{ps}, the wiring diagram $\mathcal{W}_\textbf{n}({\textbf{m}})$, viewed as a collection of intersecting curves in ${\mathbb{R}} \times {\mathbb{C}}$, can be extended to an explicit collection of symplectic graphical disks $\Gamma_1, \ldots , \Gamma_n$ in ${\mathbb{C}} \times {\mathbb{C}}$, with marked points $p_1, \ldots, p_m \in \bigcup_i \Gamma_i$ including all the intersection points of these disks. Note that we can assume that each intersection of these disks is positive and transverse, and non-intersection points are allowed as free marked points. Furthermore, by \cite[Proposition 5.6]{ps}, the Stein filling $W_{p,q}(\textbf{n})$ is supported by the restriction of the Lefschetz fibration $$\pi_{x} \circ \Pi \colon {\mathbb{C}}^2 \# m \hbox{$\overline{{\Bbb C}P^2}$} \setminus (\widetilde{\Gamma}_1 \cup\cdots\cup \widetilde{\Gamma}_n) \to {\mathbb{C}}$$ to a Milnor ball (to get compact fibers), where $\pi_x \colon {\mathbb{C}}^2 \to {\mathbb{C}}$ denotes the projection onto the first coordinate, $\widetilde{\Gamma}_1, \ldots, \widetilde{\Gamma}_n$ are the proper transforms of $\Gamma_1, \ldots, \Gamma_n$, and $\Pi$ is the blowup map. Finally, the Lefschetz fibrations $f$ and $\pi_{x} \circ \Pi$ are equivalent by the discussion in \cite[Section 5.4]{ps}.
Note that the last statement in Corollary~\ref{cor: arrangement} follows immediately from \cite[Proposition 5.8]{ps}. \end{proof}
\begin{proof}[Proof of Corollary~\ref{cor: incidence}] Let $W_{p,q}(\textbf{n})$ be the Stein filling of $(L(p,q), \xi_{can})$, and let $\mathcal{W}_\textbf{n}({\textbf{m}})$ be the (unbraided) wiring diagram constructed as in Definition~\ref{def: wd} based on the blowup sequence $(0) \to (1,1) \to \cdots \to \textbf{n}$ and $\textbf{m} = \textbf{b}-\textbf{n}$. Let $\Gamma = \Gamma_1 \cup \cdots \cup \Gamma_n$ be the collection of symplectic graphical disks, with marked points $p_1, \ldots, p_m \in \Gamma$ as in Corollary~\ref{cor: arrangement}. Note that the incidence matrix $\mathcal{I}_\textbf{n}({\textbf{m}}): = \mathcal{I}(\Gamma, \{p_j\})$ can be read off from the wiring diagram $\mathcal{W}_\textbf{n}({\textbf{m}})$, where each wire $w_i$ is identified with the disk $\Gamma_i$ and the $x$- and $y$-type marked points are identified with $p_1, \ldots, p_m$.
To compute the incidence matrix $\mathcal{I}_\textbf{n}({\textbf{m}})$, we enumerate the wires from top to bottom with respect to their {\em geometric} order on the right-hand side of the diagram $\mathcal{W}_\textbf{n}({\textbf{m}})$, and we enumerate the $x$- and $y$-type marked points using their natural geometric order from right to left. The matrix $\mathcal{I}_\textbf{n}({\textbf{m}})$ can be viewed as an extension of the incidence matrix $\mathcal{I}_\textbf{n}$ corresponding to the wiring diagram $\mathcal{W}_\textbf{n}$, which carries only the $x$-type marked points. In the following, first we inductively construct $\mathcal{I}_\textbf{n}$ depending on the blowup sequence $(0) \to (1,1) \to \cdots \to \textbf{n}$, starting from the matrix
\[
\begin{blockarray}{cc}
& x_1\\
\begin{block}{c(c)}
w_1 & 0 \\
w_2 & 1 \\
\end{block}
\end{blockarray} \] corresponding to the wiring diagram with two wires $w_1, w_2$ and one marked point $x_1$, obtained by the initial blowup $(0) \to (1,1)$. Suppose that we have constructed the $r \times (r-1)$ incidence matrix for a wiring diagram with $r$ wires and $r-1$ marked points $x_{1}, \ldots, x_{r-1}$. Assume that we insert the next wire $w_{r+1}$ into the diagram according to an exterior blowup. Then we ``blow up the incidence matrix'' at hand by adding a last row and a last column which consist of all zeros except a $1$ at the lower right corner, to obtain the $(r+1) \times r$ incidence matrix.
Now assume that the last wire $w_{r+1}$ is inserted into the diagram according to an interior blowup at the $i$th term. Then we blow up the incidence matrix at hand by {\em inserting} a new row below the $(i+1)$st row and a last column so that the new row is copied from the row above and the last column is of the form $$[\underbrace{1,\ldots, 1}_{i}, 0,1, \underbrace{0, \ldots, 0}_{\geq 0}]^{T}$$ to obtain the $(r+1) \times r$ incidence matrix.
The matrix $\mathcal{I}_\textbf{n}({\textbf{m}})$ can be obtained from $\mathcal{I}_\textbf{n}$ in a standard way based only on the $k$-tuple $\textbf{m}$. To extend $\mathcal{I}_\textbf{n}$ to $\mathcal{I}_\textbf{n}({\textbf{m}})$, for each $1 \leq s \leq k$, we just insert a column labelled with $y_s$, so that the first $s$ entries from the top of the column labelled with $y_s$ are $1$, and the rest are $0$. If $y_s$ has multiplicity $m_s$, then we repeat the column labelled with $y_s$ $m_s$ times. \end{proof}
Here we illustrate the proof of Corollary~\ref{cor: incidence} for the wiring diagram in Figure~\ref{fig: example2}. The incidence matrix $\mathcal{I}_\textbf{n}$ can be obtained algorithmically from the blowup sequence $(1,1) \to (1,2,1) \to (2,1,3,1) \to (2,1,4,1,2)=\textbf{n}$ used to construct $\mathcal{W}_{\textbf{n}}$ as follows.
\[
\begin{blockarray}{cc}
& x_1\\
\begin{block}{c(c)}
w_1 & 0 \\
w_2 & 1 \\
\end{block}
\end{blockarray} \; \xrightarrow[\text{blowup}]{\text{exterior}}
\begin{blockarray}{ccc}
& x_1 & x_2 \\
\begin{block}{c(cc)}
w_1 & 0 & \blue{0} \\
w_2 & 1 & \blue{0} \\
w_3 & \blue{0} & \blue{1} \\
\end{block}
\end{blockarray} \; \xrightarrow[\text{blowup}]{\text{interior}}
\begin{blockarray}{cccc}
& x_1 & x_2 & x_3 \\
\begin{block}{c(ccc)}
w_1 & 0 & 0 & \blue{1} \\
w_2 & 1 & 0 & \blue{0} \\
w_4 & \blue{1} & \blue{0} & \blue{1} \\
w_3 & 0 & 1 & \blue{0} \\
\end{block}
\end{blockarray} \; \xrightarrow[\text{blowup}]{\text{interior}}
\begin{blockarray}{ccccc}
& x_1 & x_2 & x_3 & x_4\\
\begin{block}{c(cccc)}
w_1 & 0 & 0 & 1& \blue{1} \\
w_2 & 1 & 0 & 0 &\blue{1} \\
w_4 & 1 & 0 & 1 & \blue{1} \\
w_3 & 0 & 1 & 0& \blue{0} \\
w_5 & \blue{0} & \blue{1} & \blue{0} & \blue{1}\\
\end{block}
\end{blockarray}
\]
The first arrow above corresponds to an exterior blowup, where we insert the last row and the last column, which has a $1$ in the corner and $0$ everywhere else. The second arrow corresponds to an interior blowup at the first term, and we insert the third row and the last column so that the first two entries in the third row are copied from the row above and the entries in the last column are $1,0,1,0$ from the top. The last arrow corresponds to an interior blowup at the third term, and we insert the fifth row and the last column so that the first three entries in the fifth row are copied from the row above and the entries in the last column are $1,1,1,0,1$ from the top.
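The two blowup moves on incidence matrices are purely combinatorial and can be sketched in a few lines of Python (a sketch only; the function names are ours, rows are stored as lists, and the argument $i$ of the interior blowup is the 1-indexed term from the text):

```python
def exterior_blowup(M):
    # append a zero column, then the new last row [0, ..., 0, 1]
    M = [row + [0] for row in M]
    M.append([0] * (len(M[0]) - 1) + [1])
    return M

def interior_blowup(M, i):
    # copy the (i+1)-st row just below itself, then append the last
    # column [1]*i + [0, 1] + [0]*... described in the text
    M = [row[:] for row in M]
    M.insert(i + 1, M[i][:])        # M[i] is the (i+1)-st row, 0-indexed
    col = [1] * i + [0, 1] + [0] * (len(M) - i - 2)
    for row, c in zip(M, col):
        row.append(c)
    return M
```

Starting from the $2\times 1$ matrix for $(1,1)$ and applying the moves of the three arrows above reproduces the final $5\times 4$ matrix.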
Let $\textbf{m}=(0,1,1,1,1)$. To extend the $5 \times 4$ incidence matrix $\mathcal{I}_\textbf{n}$ for $\mathcal{W}_\textbf{n}$ to the $5 \times 8$ incidence matrix $\mathcal{I}_\textbf{n}({\textbf{m}})$ for $\mathcal{W}_{\textbf{n}}(\textbf{m})$, we insert the columns labelled with $y_2, y_3, y_4, y_5$ where the first $s$ entries from the top of the column labelled with $y_s$ are $1$, and the rest are $0$.
\[
\begin{blockarray}{ccccccccc}
& x_1 & x_2 & x_3 & x_4 & y_2 & y_3 & y_4 & y_5 \\
\begin{block}{c(cccc|cccc)}
w_1 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 \\
w_2 & 1 & 0 & 0 & 1 & 1 & 1 & 1 & 1 \\
w_4 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 1 \\
w_3 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 1 \\
w_5 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 1 \\
\end{block}
\end{blockarray} = \mathcal{I}_\textbf{n}({\textbf{m}})
\]
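The column-insertion step that extends $\mathcal{I}_\textbf{n}$ to $\mathcal{I}_\textbf{n}(\textbf{m})$ can be sketched as follows (the name is ours; each $y_s$ column has its first $s$ entries equal to $1$ and is repeated $m_s$ times):

```python
def extend_incidence(M, m):
    # append m[s-1] copies of the y_s column (first s entries 1, rest 0)
    out = [row[:] for row in M]
    r = len(out)
    for s, mult in enumerate(m, start=1):
        for _ in range(mult):
            for t in range(r):
                out[t].append(1 if t < s else 0)
    return out
```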
\begin{proof}[Proof of Corollary~\ref{cor: equiv}] A matrix
$M$ with $r\geq 2$ rows $$v_i =(v_{ij}), \quad v_{ij} \in \{0,1\}$$ is called a CQS matrix (see \cite[Definition 6.8]{djvs}) if $ \langle v_i, v_j \rangle = \langle v_i, v_i \rangle -1$ holds for all $1\leq i < j \leq r$, where $\langle \cdot,\cdot \rangle$ denotes the standard inner product, and CQS stands for cyclic quotient singularity.
Let $X$ be a cyclic quotient singularity whose singularity link is $(L(p,q), \xi_{can})$. According to \cite[Theorem 6.18]{djvs}, there is a bijection between the smoothing components of $X$ and incidence matrices of the picture deformations of the decorated curve $(\mathcal C,l)$ with smooth branches representing $X$. Moreover, the incidence matrices are in one-to-one correspondence with CQS matrices, up to permutations of the columns.
Let $\mathcal{I}^R_{\textbf{n}}(\textbf{m})$ be the matrix obtained from $\mathcal{I}_{\textbf{n}}(\textbf{m})$ by reversing the order of the rows. As in
\cite[Lemma 6.11]{djvs}, one can check that $\mathcal{I}^R_{\textbf{n}}(\textbf{m})$ is a CQS matrix and that there is a bijection between CQS matrices and the set of matrices of the form $\mathcal{I}^R_{\textbf{n}}(\textbf{m})$. The reason that we had to reverse the order of the rows of the matrix $\mathcal{I}_{\textbf{n}}(\textbf{m})$ is simply because in the present paper, to construct $\mathcal{I}_{\textbf{n}}(\textbf{m})$, we enumerated the wires in $\mathcal{W}_{\textbf{n}}(\textbf{m})$ from top to bottom, as opposed to bottom to top, with respect to their geometric order on the right-hand side of the diagram.
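The CQS condition is immediate to test; a sketch (our naming), checked here against the $5\times 8$ matrix $\mathcal{I}_\textbf{n}(\textbf{m})$ from the example above with its rows reversed:

```python
def is_cqs(M):
    # <v_i, v_j> == <v_i, v_i> - 1 for all 1 <= i < j <= r
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return all(dot(M[i], M[j]) == dot(M[i], M[i]) - 1
               for i in range(len(M)) for j in range(i + 1, len(M)))
```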
Finally, we give an explicit one-to-one correspondence between the Stein fillings of the contact singularity link $(L(p,q), \xi_{can})$ and the Milnor fibers of the associated cyclic quotient singularity. Let $W_{p,q}(\textbf{n})$ be the Stein filling given by Lisca obtained by the blowup sequence $(0) \to (1,1) \to \cdots \to \textbf{n}$. Using the same blowup sequence and $\textbf{m} = \textbf{b} - \textbf{n}$, we can construct a CQS matrix $\mathcal{I}^R_{\textbf{n}}(\textbf{m})$ as in the proof of Corollary~\ref{cor: incidence}, which corresponds to a picture deformation, and hence gives a Milnor fiber as in \cite{djvs}. The Stein filling $W_{p,q}(\textbf{n})$ is diffeomorphic to the Milnor fiber, because while the wiring diagram $\mathcal{W}_\textbf{n}({\textbf{m}})$ determines $W_{p,q}(\textbf{n})$, the picture deformation, which is in the same combinatorial equivalence class as the configuration of symplectic graphical disks arising from $\mathcal{W}_\textbf{n}({\textbf{m}})$, determines the Milnor fiber.
Conversely, for any Milnor fiber, which is obtained from a picture deformation whose incidence matrix is a CQS matrix, one can read off the pair $(\textbf{n}, \textbf{m})$ as in \cite[Proposition 6.12]{djvs}, and therefore construct the wiring diagram $\mathcal{W}_\textbf{n}({\textbf{m}})$, so that the configuration of symplectic graphical disks arising from $\mathcal{W}_\textbf{n}({\textbf{m}})$, is in the same combinatorial equivalence class as the picture deformation.
\end{proof}
\begin{proof}[Proof of Corollary~\ref{cor: scott}] It is well-known that for any contact singularity link $(L(p,q), \xi_{can})$, the Milnor fiber of the Artin smoothing component of the corresponding cyclic quotient singularity gives the Stein filling $W_{(p, q)}((1,2,\ldots,2,1))$, which is Stein deformation equivalent to the one obtained by deforming the symplectic structure on the minimal resolution of the singularity; see \cite{bd}.
We describe a decorated germ $(\mathcal C,l)$ associated to the pair of coprime integers $p>q\geq 1$ such that $(\mathcal C,l)$
determines the cyclic quotient singularity with link $L(p,q)$ via the construction of de Jong and van Straten \cite{djvs}.
We follow the description given in \cite{npp}.
Suppose that $\frac pq=[a_1,\ldots,a_m]$. Let $G$ be the decorated linear graph having $m$ vertices $v_1,\ldots,v_m$ with the vertex $v_i$ weighted by the integer $-a_i$. Then $G$ is the dual resolution graph of a cyclic quotient
singularity with link $L(p,q)$. Let $G'$ be the simple graph obtained from $G$ by attaching $a_1-1$ new vertices to $v_1$
and $a_i-2$ new vertices to each vertex $v_i$ for $i>1$. Assign weight $-1$ to each new vertex. Finally, let $G''$
denote the graph obtained from $G'$ by attaching an arrowhead to each new vertex. Then $G''$ is an embedded
resolution graph of a germ of a plane curve $\mathcal C=\bigcup_i C_i$ with \emph{smooth} irreducible components corresponding to
the arrowheads in $G''$. The order of intersection of $C_i$ and $C_j$ corresponding to two distinct arrows is the number
of vertices on the intersection of the geodesics between the arrows and $v_m$. Also $l=(l_i)$, where the weight $l_i$ is the number of vertices on the geodesic
from the arrowhead corresponding to $C_i$ to $v_m$.
Now every continued fraction expansion $[c_1,\ldots,c_m]$, with $c_i\geq 2$ for all $i$, can be obtained from $[2]$ by repeated
applications of the following two operations:
\begin{enumerate}[(i)]
\item $[a_1,\ldots,a_r]\mapsto [a_1,\ldots,a_r+1]$,
\item $[a_1,\ldots,a_r]\mapsto [a_1,\ldots,a_r,2]$.
\end{enumerate}
Let $[b_1,\ldots,b_s]$ denote the dual string of $[a_1,\ldots,a_r]$, i.e. if $\frac pq=[a_1,\ldots,a_r]$, then
$\frac p{p-q}=[b_1,\ldots,b_s]$. Then the dual string changes for each of these two operations in the following way:
\begin{enumerate}[(i)]
\item $[b_1,\ldots,b_s]\mapsto [b_1,\ldots,b_s,2]$,
\item $[b_1,\ldots,b_s]\mapsto [b_1,\ldots,b_s+1]$.
\end{enumerate}
This follows immediately from a consideration of Riemenschneider's point diagrams. We check that if the statement of
the corollary holds for a pair $(p,q)$ with $\frac pq=[a_1,\ldots,a_r]$, then it continues to hold if we replace
$(p,q)$ by $(p',q')$, where $\frac {p'}{q'}=[a_1,\ldots,a_r+1]$ or $\frac {p'}{q'}=[a_1,\ldots,a_r,2]$. As the statement
holds trivially if $\frac pq=[2]$, by induction, it will follow that the statement holds for every pair of coprime integers $p>q\geq 1$.
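The bookkeeping with dual strings can be checked by computing negative (Hirzebruch--Jung) continued fraction expansions directly; a sketch (our naming), verified on the pair $(56,17)$ used in the example following this proof:

```python
def hj(p, q):
    # negative continued fraction: p/q = a1 - 1/(a2 - 1/( ... ))
    expansion = []
    while q > 0:
        a = -(-p // q)              # ceil(p / q)
        expansion.append(a)
        p, q = q, a * q - p
    return expansion
```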
Suppose first that $\frac {p'}{q'}=[a_1,\ldots,a_r+1]$. Then we have $\frac {p'}{p'-q'}=[b_1,\ldots,b_s,2]$, where $[b_1,\ldots,b_s]=\frac {p}{p-q}$. Notice that in the wiring diagram for the planar Lefschetz fibration $$ W_{(p,q)}((1,2,\ldots,2,1)) \to D^2$$
constructed according to Theorem \ref{thm: main}, there is a final marked point $y_s$ through which all the wires
pass. It is easy to see that the wiring diagram for the planar Lefschetz fibration
$W_{(p',q')}((1,2,\ldots,2,1))\to D^2$ is given by taking the wiring diagram for the planar Lefschetz fibration $W_{(p,q)}((1,2,\ldots,2,1)) \to D^2$ and inserting a new
wire $w_{s+1}$ at the bottom of the diagram on the right hand side, which runs under the wiring diagram up to the point
that it passes through the marked point $y_s$ (turning it into a $y_{s+1}$-type marked point) and then runs above the diagram. In addition, a single new marked point is placed on the new wire $w_{s+1}$.
Let $(\mathcal C,l)$ denote the decorated curve associated to the pair of integers $(p,q)$ constructed as above. Then
$\mathcal C$ will consist of $s$ curvettas by the induction hypothesis.
Let $(\mathcal C',l')$ denote the decorated curve associated to the pair $(p',q')$. Then it is easy to see that $\mathcal C'$ can
be obtained from $\mathcal C$ by adding an extra curvetta $C_{s+1}$ which has intersection multiplicity $1$ with each curvetta
$C_i,i\leq s$. In addition, $l'$ is given by setting $l'_i=l_i$ for $i\leq s$ and $l'_{s+1}=2$. Since the Scott
deformation of $(\mathcal C,l)$ is combinatorially equivalent to the symplectic disk arrangement associated to the
wiring diagram for $W_{(p,q)}((1,2,\ldots,2,1))$ (by the induction hypothesis), it is easy to see that the Scott
deformation of $(\mathcal C',l')$ is combinatorially equivalent to the symplectic disk arrangement associated to the
wiring diagram for $W_{(p',q')}((1,2,\ldots,2,1))$.
Now suppose that $\frac {p'}{q'}=[a_1,\ldots,a_r,2]$. Then we have $\frac {p'}{p'-q'}=[b_1,\ldots,b_s+1]$.
In this case, the wiring diagram associated to $W_{(p',q')}((1,2,\ldots,2,1))$ can be obtained from the wiring diagram
associated to $W_{(p,q)}((1,2,\ldots,2,1))$ by adding one new simultaneous intersection point $y_s$ of all the wires.
Also the decorated curve $(\mathcal C',l')$ associated to the pair $(p',q')$ can be obtained from the decorated curve
$(\mathcal C,l)$ by increasing by $1$ the order of intersection between each pair of distinct curvettas $C_i$ and $C_j$
and by increasing each $l_i$ by $1$. Again it is easy to check that the Scott
deformation of $(\mathcal C',l')$ is combinatorially equivalent to the symplectic disk arrangement associated to the
wiring diagram for $W_{(p',q')}((1,2,\ldots,2,1))$.
\end{proof}
In the following we give an example to illustrate the proof of Corollary~\ref{cor: scott}. Note that the Stein filling $W_{(p, q)}((1,2,\ldots,2,1))$ is obtained via the sequence of {\em exterior} blowups $$(0) \to (1,1) \to (1,2,1) \to \cdots \to (1,2,\ldots,2,1),$$ and the corresponding unbraided wiring diagram can be drawn easily. The wiring diagram corresponding to $W_{(56, 17)}((1,2,2,2,1))$, for instance, is depicted in Figure~\ref{fig: example3}.
\begin{figure}[ht] \relabelbox \small {\epsfxsize=4.3in
\centerline{\epsfbox{example3.eps}}} \relabel{a}{$y_3$} \relabel{b}{$y_3$} \relabel{c}{$y_3$} \relabel{d}{$y_5$} \relabel{e}{$y_5$} \relabel{3}{$x_3$} \relabel{2}{$x_2$} \relabel{1}{$x_1$} \relabel{4}{$x_4$} \relabel{5}{$y_1$} \relabel{p}{$w_1$} \relabel{q}{$w_2$} \relabel{r}{$w_3$} \relabel{s}{$w_4$} \relabel{t}{$w_5$}
\endrelabelbox
\caption{Wiring diagram for the Stein filling $W_{(56, 17)}((1,2,2,2,1))$---the symplectic resolution. }\label{fig: example3} \end{figure}
\begin{figure}[ht] \relabelbox \small {\epsfxsize=5.7in
\centerline{\epsfbox{scott.eps}}} \relabel{3}{$-2$} \relabel{2}{$-2$} \relabel{1}{$-4$} \relabel{4}{$-4$} \relabel{5}{$-2$} \relabel{7}{$-2$} \relabel{8}{$-2$}
\relabel{9}{$-2$} \relabel{6}{$-1$} \relabel{11}{$-2$}
\relabel{12}{$-2$} \relabel{10}{$-1$} \relabel{13}{$-2$} \relabel{14}{$-1$} \relabel{a}{$\boxed{2}$} \relabel{b}{$\boxed{2}$} \relabel{c}{$\boxed{2}$} \relabel{d}{$\boxed{3}$} \relabel{e}{$\boxed{3}$} \relabel{f}{$\boxed{3}$} \relabel{g}{$\boxed{4}$} \relabel{h}{$\boxed{4}$} \relabel{i}{$\boxed{4}$} \relabel{j}{$\boxed{6}$} \relabel{k}{$\boxed{6}$} \relabel{l}{$\boxed{6}$} \relabel{m}{$\boxed{3}$} \relabel{n}{$\boxed{3}$} \relabel{p}{$-1$} \relabel{q}{$-1$} \relabel{r}{$-1$} \relabel{s}{$-1$} \relabel{t}{$-1$} \relabel{x}{$\circled{3}$} \relabel{y}{$\circled{5}$} \relabel{z}{$\circled{2}$} \relabel{15}{$-4$} \relabel{16}{$-2$} \relabel{17}{$-2$} \relabel{18}{$-4$} \relabel{19}{$-2$} \relabel{20}{$-4$} \relabel{21}{$-2$}
\relabel{22}{$-2$} \relabel{23}{$-4$} \relabel{24}{$-2$} \relabel{dg}{dual resolution graph} \relabel{a1}{Scott} \relabel{a2}{deformation}
\relabel{b1}{$-1$} \relabel{b2}{$-1$} \relabel{b3}{$-1$} \relabel{b4}{$-1$} \relabel{b5}{$-1$} \endrelabelbox
\caption{From dual resolution graph of a cyclic quotient singularity to the Scott deformation of the decorated plane curve representing the singularity. }\label{fig: scott} \end{figure}
Next, we show that the arrangement of symplectic disks arising from the wiring diagram in Figure~\ref{fig: example3} is combinatorially equivalent to the Scott deformation of $(\mathcal{C},l)$. First we note that $\frac{56}{17} = [4,2,2,4,2]$ and construct the curve $(\mathcal{C},l)$, as depicted in Figure~\ref{fig: scott}.
Note that $(\mathcal{C},l)$ consists of five curvettas with indicated weights and tangencies using the notation in \cite{ps}, where boxed numbers indicate the weights, circled numbers indicate the tangencies, and the other numbers are the self-intersection numbers of the rational curves.
We also depicted the Scott deformation of $(\mathcal{C},l)$ at the bottom in Figure~\ref{fig: scott}. It follows that, after enumerating the smooth branches of the Scott deformation of $(\mathcal{C},l)$ from top to bottom as they appear on the right-hand side, the incidence matrix arising from the Scott deformation coincides with the incidence matrix arising from the wiring diagram in Figure~\ref{fig: example3}. Note that each branch of the Scott deformation $(\mathcal{C},l)$ includes a free marked point. The $5 \times 10$ incidence matrix $\mathcal{I}_{Artin} : = \mathcal{I}_{\textbf{n}}(\textbf{m})$, where $\textbf{n}=(1,2,2,2,1)$ and $\textbf{m}=(1,0,3,0,2)$, is given as follows.
\[
\begin{blockarray}{ccccccccccc}
& x_1 & x_2 & x_3 & x_4 & y_1 & y_3 & y_3 & y_3 & y_5 & y_5\\
\begin{block}{c(cccc|cccccc)}
w_1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 \\
w_2 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 \\
w_3 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 \\
w_4 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 1 \\
w_5 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 1 \\
\end{block}
\end{blockarray}= \mathcal{I}_{Artin}
\]
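Since $\textbf{n}=(1,2,\ldots,2,1)$ arises from exterior blowups only, the matrix $\mathcal{I}_{Artin}$ above can be rebuilt mechanically; the following self-contained sketch (our naming) reproduces it from the blowup sequence and $\textbf{m}=(1,0,3,0,2)$:

```python
def exterior_blowup(M):
    # append a zero column, then the new last row [0, ..., 0, 1]
    M = [row + [0] for row in M]
    M.append([0] * (len(M[0]) - 1) + [1])
    return M

def extend(M, m):
    # append m[s-1] copies of the y_s column (first s entries 1)
    for s, mult in enumerate(m, start=1):
        for _ in range(mult):
            for t, row in enumerate(M):
                row.append(1 if t < s else 0)
    return M

M = [[0], [1]]                 # incidence matrix for (1, 1)
for _ in range(3):             # (1,1) -> (1,2,1) -> (1,2,2,1) -> (1,2,2,2,1)
    M = exterior_blowup(M)
I_artin = extend(M, (1, 0, 3, 0, 2))
```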
Finally, one can easily obtain the {\em disjoint} vanishing cycles in $D_5$ of the planar Lefschetz fibration $W_{(56, 17)}((1,2,2,2,1)) \to D^2$, which we depicted in Figure~\ref{fig: ex4b}. As observed in \cite{ps}, this Lefschetz fibration agrees with the Lefschetz fibration constructed by Gay and Mark \cite{gm}, using the plumbing description $W_{(56, 17)}((1,2,2,2,1))$ given by the dual resolution graph.
\begin{figure}[ht]
\relabelbox \small {\epsfxsize=1.8in
\centerline{\epsfbox{ex4b.eps}}}
\endrelabelbox
\caption{The vanishing cycles of the planar Lefschetz fibration $W_{(56, 17)}((1,2,2,2,1)) \to D^2$. }
\label{fig: ex4b}
\end{figure}
Note that the total monodromy of the planar Lefschetz fibration $W_{(56, 17)}((1,2,2,2,1)) \to D^2$, the product of Dehn twists along the disjoint vanishing cycles depicted in Figure~\ref{fig: ex4b}, is the monodromy of the open book compatible with $(L(56,17), \xi_{can})$. This monodromy has another positive factorization $$ D (\gamma_5) D (\gamma_4) D (\gamma_3) D (\gamma_2) D(\overline{\alpha}_4) D(\overline{\alpha}_3) D(\overline{\alpha}_2) D(\overline{\alpha}_1),$$ which is the total monodromy of the planar Lefschetz fibration $W_{(56, 17)}((2,1,4,1,2)) \to D^2$ we discussed in Section~\ref{subsec: exa}.
\section{Introduction}
\label{section-introduction}
Many real-world data exhibit multi-dimensional form and can be naturally represented by tensors. Tensor decomposition is an essential tool for multi-way data analysis \cite{GuhaniyogiR2017BTR, liu2021low, JiLiu2013TCfE, feng2020robust}. As a generalization of the matrix product, tensor contraction is widely used in tensor factorization based methods \cite{AnandkumarA2014TDfL, anandkumar2014guaranteed, wang2015fast, huang2020provable}. However, tensor contraction is time-consuming and costs a lot of memory, especially when the data size is large. To this end, some randomization methods are developed to accelerate tensor decomposition algorithms \cite{ReynoldsMatthewJ.2016RALS, battaglino2018practical, YuanLonghao2019RTRD, MinsterRachel2020RAfL, rakhshan2020tensorized}.
Sketching adopts this randomization strategy, which succinctly maps the input data into a low-dimensional sketched space while preserving certain data properties. Unlike general random sampling, the sketching techniques commonly use random sparse matrices with certain structure, which are more efficient for computation with guaranteed performance \cite{sarlos2006improved, ClarksonKenneth2017LAaR}.
Sketching methods are successfully applied in low-rank approximation \cite{ClarksonKenneth2017LAaR, clarkson2009numerical, WoodruffDavidP.2014SaaT}, regression \cite{sarlos2006improved, ClarksonKenneth2017LAaR, chowdhury2018iterative}, etc.
Charikar et al. \cite{CharikarM2002Ffii} propose a simple and effective sketching method termed Count sketch (CS) to estimate the frequency of items in a data stream. They use a random Hash function pair to map the input vector into a low-dimensional sketched space. Pagh \cite{pagh2013compressed} combines CS of the outer product of two vectors with fast Fourier transform (FFT) to accelerate matrix multiplication compression. Despite the effectiveness of CS, it only applies to vector-valued data. When the input data is a high-order tensor, it needs to vectorize the tensor and generate a long pair of Hash functions to match the dimension of the vectorized tensor. Therefore, the storage cost for the Hash functions is high.
Pham et al. \cite{PhamNinh2013Fasp} propose tensor sketch (TS) by extending CS to high-dimensional space for polynomial kernel approximation. TS is widely used in multi-dimensional data processing tasks such as tensor decomposition \cite{wang2015fast, yang2018parasketch, malik2018low, sun2020low}, Kronecker product regression \cite{diao2018sketching, diao2019optimal}, and neural network compression \cite{kasiviswanathan2018network, han2020polynomial}.
Although TS is suitable for tensor data, it inadequately exploits the multi-dimensional structure information of the input tensor by directly sketching it into a vector. To fully exploit the multi-dimensional structure information within tensors, Shi et al. \cite{shi2019efficient} propose higher-order count sketch (HCS), which sketches the input tensor into a lower-dimensional one of the same order. However, HCS suffers from low accuracy and slow speed in some tensor based applications. Therefore, it is essential to develop a new sketching method that can achieve good approximation quality with competitive running speed.
In this paper, we propose a fast count sketch (FCS) which combines the advantages of these three sketching methods. For a general tensor $\mathcal{T}$, FCS applies multiple short Hash functions based CS on ${\rm vec}(\mathcal{T})$ instead of generating a long Hash function pair directly as done by ${\rm CS}({\rm vec}(\mathcal{T}))$. In this way, FCS costs less storage for Hash functions, since only the short ones are stored. The computational complexity of FCS for general tensors is $\operatorname{O}({\rm nnz}(\mathcal{T}))$, where ${\rm nnz(\cdot)}$ represents the number of non-zero elements. Meanwhile, compared with TS, the integrated Hash functions allow FCS to exploit the spatial information within tensors more sufficiently, which results in more accurate estimation, especially when the Hash length of Hash functions is short. Specially, suppose a tensor $\mathcal{T}$ admits CANDECOMP/PARAFAC decomposition (CPD) \cite{CarrollJ.1970Aoid, harshman1970foundations}, FCS can be accelerated by FFT, which exhibits a computational complexity asymptotically identical to TS for low-order tensors, and is much faster than HCS.
To verify the effectiveness of FCS, we apply it to two CPD algorithms named robust tensor power method (RTPM) \cite{AnandkumarA2014TDfL} and alternating least squares (ALS) \cite{KoldaTamaraG.2009TDaA}, which involve two specific tensor contractions $\mathcal{T}(\mathbf{u},\mathbf{u},\mathbf{u})$ and $\mathcal{T}(\mathbf{I},\mathbf{u},\mathbf{u})$. Theoretical proofs guarantee that the estimation by FCS is more accurate than TS when the same Hash functions are used. Experiments on both synthetic and real-world datasets demonstrate FCS achieves better approximation performance with competitive running speed, compared to various counterparts. We also apply FCS to compress the weight tensor of a tensor regression network (TRN) \cite{kossaifi2020tensor} tailed with a CP layer. Experimental results demonstrate FCS achieves better classification performance than other counterparts under various compression ratios. Finally, we compress the Kronecker product and tensor contraction using FCS. Experimental results show that compared to CS, FCS takes less compressing time and Hash memory; compared to HCS, FCS has faster decompressing speed and lower approximation error when the compression ratio is small.
\section{Background}
\label{section-background}
\subsection{Notations and basic tensor operations}
\label{notes}
Scalars, vectors, matrices and tensors are represented by lowercase, bold lowercase, bold capital and calligraphic letters respectively. Symbols ``$*$'', ``$\circ$'', and ``$\circledast$'' denote Hadamard product, vector outer product, and convolution, respectively. An $N$-th order tensor is rank-$1$ if it can be represented by the outer product of $N$ vectors. Given a tensor $\mathcal{T}\in\mathbb{R}^{I_1\times\cdots\times I_N}$, its vectorization is denoted by ${\rm vec}(\mathcal{T})\in\mathbb{R}^{\prod_{n=1}^{N}I_n}$, its mode-$n$ matricization is denoted by $\mathbf{T}_{(n)}\in\mathbb{R}^{I_n\times \prod_{i\ne n}I_i}$. The Frobenius norm of $\mathcal{T}$ is represented by $\Vert\mathcal{T}\Vert_{\operatorname{F}}=\Vert{\rm vec}(\mathcal{T})\Vert_{\operatorname{F}}$. For any $N\in\mathbb{N}^+$, we denote $[N]:=\left\{1, \cdots, N\right\}$. For any two tensors $\mathcal{M}, \mathcal{N}$ with the same size, their tensor inner product is denoted by $\left \langle \mathcal{M},\mathcal{N}\right \rangle={\rm vec}(\mathcal{M})^{\operatorname{T}}{\rm vec}(\mathcal{N})$. The CPD of $\mathcal{T}\in\mathbb{R}^{I_1\times\cdots\times I_N}$ is a sum of rank-$1$ tensors, i.e. $\mathcal{T}\approx\sum_{r=1}^R\lambda_r\mathbf{u}^{(1)}_r\circ\cdots\circ\mathbf{u}^{(N)}_r:=[\![\bm{\lambda}; \mathbf{U}^{(1)},\cdots,\mathbf{U}^{(N)}]\!]$, where $R$ is the CP rank, $\bm{\lambda}\in\mathbb{R}^R$, $\mathbf{U}^{(n)}=[\mathbf{u}^{(n)}_1,\cdots,\mathbf{u}^{(n)}_R]\in\mathbb{R}^{I_n\times R}$ for $n\in[N]$.
Given any two tensors $\mathcal{X}\in\mathbb{R}^{I_1\times I_2\times\cdots\times I_P}$, $\mathcal{Y}\in\mathbb{R}^{J_1\times J_2\times\cdots J_Q}$, their contraction is denoted and computed by $[\mathcal{X}\circledcirc\mathcal{Y}]_{\mathbb{L}}=\mathcal{X}_{\mathbb{P}}\mathcal{Y}_{\mathbb{Q}}=\sum_{\mathbb{O}}\mathcal{X}_{:,\mathbb{O}}\otimes\mathcal{Y}_{\mathbb{O},:}$, where $\mathbb{P}:=\left\lbrace i_1\times\cdots\times i_P\right\rbrace$ for $i_n\in[I_n]$, $\mathbb{Q}:=\left\lbrace j_1\times\cdots\times j_Q\right\rbrace$ for $j_n\in[J_n]$, $\mathbb{O}=\mathbb{P}\cap\mathbb{Q}$ are the specified contraction indices, $\mathbb{L}=(\mathbb{P}\cup\mathbb{Q})\backslash \ \mathbb{O}$ are the free indices, symbol ``$\otimes$'' denotes tensor product. When applied to two vectors or matrices, ``$\otimes$'' degrades to Kronecker product of them. Moreover, given two vectors $\mathbf{u}$ and $\mathbf{v}$, we have ${\rm vec}(\mathbf{u}\circ\mathbf{v})=\mathbf{v}\otimes\mathbf{u}$. Given a series of matrices $\mathbf{M}_n\in\mathbb{R}^{I_n\times J_n}$ for $n\in[N]$, their contraction with a tensor $\mathcal{T}\in\mathbb{R}^{I_1\times\cdots\times I_N}$ is denoted and computed by $[\mathcal{T}(\mathbf{M}_1,\cdots,\mathbf{M}_N)]_{j_1,\cdots,j_N}=\sum_{i_1=1}^{I_1}\cdots\sum_{i_N=1}^{I_N}\mathcal{T}_{i_1,\cdots,i_N}\mathbf{M}_1(i_1,j_1)\cdots\mathbf{M}_N(i_N,j_N)$ for $j_n\in[J_n]$. Specially, for a $3$rd-order tensor $\mathcal{T}\in\mathbb{R}^{I\times I\times I}$ and a vector $\mathbf{u}\in\mathbb{R}^I$, $\mathcal{T}(\mathbf{u},\mathbf{u},\mathbf{u})=\left \langle \mathcal{T},\mathbf{u}\circ\mathbf{u}\circ\mathbf{u}\right \rangle$, $\mathcal{T}(\mathbf{I},\mathbf{u},\mathbf{u})_i=\left \langle \mathcal{T},\mathbf{e}_i\circ\mathbf{u}\circ\mathbf{u}\right \rangle$ where $\mathbf{I}\in\mathbb{R}^{I\times I}$ is the identity matrix, $\mathbf{e}_i\in\mathbb{R}^I$ is the $i$th standard basis vector.
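The special contractions $\mathcal{T}(\mathbf{u},\mathbf{u},\mathbf{u})$ and $\mathcal{T}(\mathbf{I},\mathbf{u},\mathbf{u})$ are easy to check numerically; a sketch (variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
I = 4
T = rng.standard_normal((I, I, I))
u = rng.standard_normal(I)

# T(u, u, u) = <T, u o u o u>
lhs = np.einsum('ijk,i,j,k->', T, u, u, u)
rhs = np.dot(T.reshape(-1), np.einsum('i,j,k->ijk', u, u, u).reshape(-1))

# T(I, u, u): modes 2 and 3 contracted with u, mode 1 left free
v = np.einsum('ijk,j,k->i', T, u, u)
```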
\subsection{Related works}
In this section, we briefly introduce some sketching techniques that are closely related to our work.
\emph{Definition 1 (Count Sketch \cite{CharikarM2002Ffii}):} Let $\mathbf{h}:[I]\mapsto [J]$ and $\mathbf{s}:[I]\mapsto\left\{\pm1\right\}$ be two $2$-wise independent Hash functions. The CS of a vector $\mathbf{x}\in\mathbb{R}^I$ takes $\operatorname{O}({\rm nnz}(\mathbf{x}))$ time \cite{ClarksonKenneth2017LAaR} and produces a projection ${\rm CS}(\mathbf{x})\in\mathbb{R}^J$ with each element computed by:
\begin{equation}
{\rm CS}(\mathbf{x;\mathbf{h}, \mathbf{s}})_j=\sum_{\mathbf{h}(i)=j}\mathbf{s}(i)\mathbf{x}(i).
\label{eq:CS}
\end{equation}When the input is a matrix $\mathbf{X}\in\mathbb{R}^{I\times K}$, CS can be applied on each column of $\mathbf{X}$ and produces a matrix $\mathbf{Y}\in\mathbb{R}^{J\times K}$. Essentially, CS is a sketch for vectors.
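A minimal sketch of (\ref{eq:CS}), assuming the Hash pair $(\mathbf{h},\mathbf{s})$ is supplied as 0-indexed integer arrays:

```python
import numpy as np

def count_sketch(x, h, s, J):
    # CS(x)_j = sum over i with h(i) = j of s(i) * x(i)
    y = np.zeros(J)
    np.add.at(y, h, s * x)    # unbuffered scatter-add over the buckets
    return y
```

For a matrix input, the same call applied column by column yields the $J\times K$ sketch.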
\emph{Definition 2 (Tensor Sketch \cite{pagh2013compressed}):} Given an $N$th-order tensor $\mathcal{T}\in\mathbb{R}^{I_1\times\cdots\times I_N}$, $N$ pairs of $2$-wise independent Hash functions $\mathbf{h}_n:[I_n]\mapsto [J], \mathbf{s}_n:[I_n]\mapsto \left\{ \pm1 \right\}$, ${\rm TS}(\mathcal{T})\in\mathbb{R}^J$ is computed in $\operatorname{O}({\rm nnz}(\mathcal{T}))$ time by:
\begin{equation}
{\rm TS}(\mathcal{T};\left\{\mathbf{h}_n, \mathbf{s}_n\right\}_{n=1}^N)_j=\sum_{\mathcal{H}_{i_1,\cdots,i_N}=j}\mathcal{S}_{i_1,\cdots,i_N}\mathcal{T}_{i_1, \cdots, i_N},
\label{TS_orig}
\end{equation}where $\mathcal{H}_{i_1,\cdots,i_N}=(\mathbf{h}_1(i_1)+\cdots+\mathbf{h}_N(i_N)-N)\ {\rm mod}\ J + 1$, ${\rm mod}$ denotes the modulo operator, $\mathcal{S}_{i_1,\cdots,i_N}=\mathbf{s}_1(i_1)\mathbf{s}_2(i_2)\cdots\mathbf{s}_N(i_N)$. Specially, when $\mathcal{T}\approx\sum_{r=1}^R\lambda_r\mathbf{u}^{(1)}_r\circ\cdots\circ\mathbf{u}^{(N)}_r:=[\![\bm{\lambda}; \mathbf{U}^{(1)},\cdots,\mathbf{U}^{(N)}]\!]$ is a CP rank-$R$ tensor, ${\rm TS}(\mathcal{T})$ can be computed by the mode-$J$ circular convolution of the CS of each factor vector as:
\begin{equation}
\begin{aligned}
\label{TS}
&{\rm TS}(\mathcal{T};\left\{\mathbf{h}_n, \mathbf{s}_n\right\}_{n=1}^N)=\sum_{r=1}^{R}\lambda_r{\rm CS}_1(\mathbf{u}_r^{(1)})\circledast_J\cdots\circledast_J{\rm CS}_N(\mathbf{u}_r^{(N)})\\
=&\sum_{r=1}^{R}\lambda_r{\operatorname{F}}^{-1}({\operatorname{F}}({\rm CS}_1(\mathbf{U}^{(1)})(:,r))*\cdots*{\operatorname{F}}({\rm CS}_N(\mathbf{U}^{(N)})(:,r))),
\end{aligned}
\end{equation}where ${\rm CS}_n(\mathbf{U}^{(n)})\in\mathbb{R}^{J\times R}$ is based on $(\mathbf{h}_n$, $\mathbf{s}_n)$ for $n\in[N]$, $\operatorname{F}$ and ${\operatorname{F}}^{-1}$ denote FFT and its inverse, respectively. By using FFT, (\ref{TS}) can be computed in $\operatorname{O}(\max_n{{\rm nnz}(\mathbf{U}^{(n)})}+RJ{\rm log}J)$ time.
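The agreement between the direct definition (\ref{TS_orig}) and the FFT formula (\ref{TS}) can be verified numerically for a rank-$1$ third-order tensor (a sketch with 0-indexed Hash buckets, so the bucket of $(i_1,i_2,i_3)$ is $(\mathbf{h}_1(i_1)+\mathbf{h}_2(i_2)+\mathbf{h}_3(i_3))\bmod J$):

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, N = 4, 5, 3
u = [rng.standard_normal(I) for _ in range(N)]
h = [rng.integers(0, J, I) for _ in range(N)]   # 0-indexed Hash buckets
s = [rng.choice([-1, 1], I) for _ in range(N)]

# direct definition: hash every entry of T = u1 o u2 o u3
ts_direct = np.zeros(J)
for i1 in range(I):
    for i2 in range(I):
        for i3 in range(I):
            b = (h[0][i1] + h[1][i2] + h[2][i3]) % J
            ts_direct[b] += (s[0][i1] * s[1][i2] * s[2][i3]
                             * u[0][i1] * u[1][i2] * u[2][i3])

def cs(v, hn, sn):
    y = np.zeros(J)
    np.add.at(y, hn, sn * v)
    return y

# FFT formula: mod-J circular convolution of the three count sketches
spec = np.ones(J, dtype=complex)
for n in range(N):
    spec *= np.fft.fft(cs(u[n], h[n], s[n]))
ts_fft = np.fft.ifft(spec).real
```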
\emph{Definition 3 (Higher-order Count Sketch \cite{shi2019efficient}):} Given $\mathcal{T}\in\mathbb{R}^{I_1\times\cdots\times I_N}$, $N$ pairs of $2$-wise independent Hash functions $\mathbf{h}_n:[I_n]\mapsto [J_n]$, $\mathbf{s}_n:[I_n]\mapsto \left\{ \pm1\right\}$, ${\rm HCS}(\mathcal{T})\in\mathbb{R}^{J_1\times\cdots\times J_N}$ is computed by
\begin{equation}
{\rm HCS}(\mathcal{T};\left\{\mathbf{h}_n, \mathbf{s}_n\right\}_{n=1}^N)_{j_1,\cdots,j_N}
=\sum_{\substack{\mathbf{h}_1(i_1)=j_1\\ \cdots\\ \mathbf{h}_N(i_N)=j_N}} \mathcal{S}_{i_1,\cdots,i_N}\mathcal{T}_{i_1, \cdots, i_N},
\label{HCS-orig}
\end{equation}where $\mathcal{S}_{i_1,\cdots,i_N}$ is defined the same as in (\ref{TS_orig}). The computational complexity of (\ref{HCS-orig}) is $\operatorname{O}({\rm nnz}(\mathcal{T}))$. When $\mathcal{T}\approx\sum_{r=1}^{R}\lambda_r\mathbf{u}^{(1)}_r\circ\cdots\circ\mathbf{u}^{(N)}_r=[\![\bm{\lambda}; \mathbf{U}^{(1)}, \cdots, \mathbf{U}^{(N)}]\!]\in\mathbb{R}^{I_1\times\cdots\times I_N}$ is a CP rank-$R$ tensor, ${\rm HCS}(\mathcal{T})$ can be computed by
\begin{equation}
{\rm HCS}(\mathcal{T};\left\{\mathbf{h}_n, \mathbf{s}_n\right\}_{n=1}^N)=\sum_{r=1}^{R}\lambda_r{\rm CS}_1(\mathbf{U}^{(1)})(:,r)\circ\cdots\circ{\rm CS}_N(\mathbf{U}^{(N)})(:,r)
\label{eq:HCS-rank1}.
\end{equation}Notice that the vector outer product should be materialized when $R>1$. Hence, the computational complexity of (\ref{eq:HCS-rank1}) is $\operatorname{O}(\max_n{{\rm nnz}(\mathbf{U}^{(n)})}$ $+R\prod_{n=1}^{N}J_n)$.
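Likewise, for a rank-$1$ matrix the direct definition (\ref{HCS-orig}) and the outer-product shortcut (\ref{eq:HCS-rank1}) can be compared numerically (a sketch with 0-indexed Hash buckets):

```python
import numpy as np

rng = np.random.default_rng(3)
I, J1, J2 = 4, 3, 3
u1, u2 = rng.standard_normal(I), rng.standard_normal(I)
h1, h2 = rng.integers(0, J1, I), rng.integers(0, J2, I)
s1, s2 = rng.choice([-1, 1], I), rng.choice([-1, 1], I)

# direct definition applied entrywise to T = u1 o u2
T = np.outer(u1, u2)
hcs_direct = np.zeros((J1, J2))
for i in range(I):
    for j in range(I):
        hcs_direct[h1[i], h2[j]] += s1[i] * s2[j] * T[i, j]

def cs(v, h, s, J):
    y = np.zeros(J)
    np.add.at(y, h, s * v)
    return y

# rank-1 shortcut: outer product of the two count sketches
hcs_rank1 = np.outer(cs(u1, h1, s1, J1), cs(u2, h2, s2, J2))
```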
\section{The proposed method}
As noted in Section \ref{section-background}, HCS sketches an input tensor into a lower-dimensional one. Therefore, the multi-dimensional structure information can be well preserved. However, for CP rank-$R$ tensors, HCS computes the outer product of CS of each factor vector, which is much slower than TS. Below we will propose FCS, which can achieve a balance between the approximation accuracy and computational complexity.
{\em Definition 4 (Fast Count Sketch):} Given $\mathcal{T}\in\mathbb{R}^{I_1\times\cdots\times I_N}$, $N$ pairs of $2$-wise independent Hash functions $\mathbf{h}_n:[I_n]\mapsto [J_n]$, $\mathbf{s}_n:[I_n]\mapsto \left\{ \pm1\right\}$, ${\rm FCS}(\mathcal{T})\in\mathbb{R}^{\tilde{J}}$ is defined as:
\begin{equation}
{\rm FCS}(\mathcal{T};\left\{\mathbf{h}_n, \mathbf{s}_n\right\}_{n=1}^N):={\rm CS}({\rm vec}(\mathcal{T});\mathbf{h}_{N+1},\mathbf{s}_{N+1}),
\label{eq:FCS}
\end{equation}where $\tilde{J}=\sum_{n=1}^{N}J_n-N+1$. $\mathbf{h}_{N+1}:[\tilde{I}]\mapsto [\tilde{J}]$ and $\mathbf{s}_{N+1}: [\tilde{I}]\mapsto \left\{ \pm1 \right\}$ ($\tilde{I}=\prod_{n=1}^{N}I_n$) satisfy:
\begin{equation}
\begin{aligned}
\mathbf{s}_{N+1}(\sum_{n=1}^{N}(i_n-1)\prod_{j=1}^{n-1}I_j+1)&=\prod_{n=1}^{N}\mathbf{s}_n(i_n)\\
\mathbf{h}_{N+1}(\sum_{n=1}^{N}(i_n-1)\prod_{j=1}^{n-1}I_j+1)&=\sum_{n=1}^{N}\mathbf{h}_n(i_n)-N+1.
\end{aligned}
\end{equation}As shown in (\ref{eq:FCS}), FCS of the original tensor equals a CS of the vectorized tensor whose long Hash function pair is composed of multiple shorter ones.
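Definition 4 can be checked numerically for a rank-$1$ matrix: the CS of the vectorized tensor under the composed Hash pair coincides with the linear (zero-padded) convolution of the factor sketches, anticipating the FFT formula of the next subsection (a sketch with 0-indexed buckets, so the composed bucket is $\mathbf{h}_1(i_1)+\mathbf{h}_2(i_2)$):

```python
import numpy as np

rng = np.random.default_rng(1)
I1, I2, J1, J2 = 3, 4, 4, 5
u1, u2 = rng.standard_normal(I1), rng.standard_normal(I2)
h1, h2 = rng.integers(0, J1, I1), rng.integers(0, J2, I2)
s1, s2 = rng.choice([-1, 1], I1), rng.choice([-1, 1], I2)

def cs(v, h, s, J):
    y = np.zeros(J)
    np.add.at(y, h, s * v)
    return y

# linear convolution of the factor sketches: length J1 + J2 - 1
fcs = np.convolve(cs(u1, h1, s1, J1), cs(u2, h2, s2, J2))

# CS of vec(u1 o u2) under the composed Hash pair (h_{N+1}, s_{N+1})
vecT = np.outer(u1, u2).reshape(-1, order='F')   # vec = u2 kron u1
i1, i2 = np.meshgrid(np.arange(I1), np.arange(I2), indexing='ij')
H = (h1[i1] + h2[i2]).reshape(-1, order='F')
S = (s1[i1] * s2[i2]).reshape(-1, order='F')
direct = cs(vecT, H, S, J1 + J2 - 1)
```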
In the next subsections, we will present the computation details of FCS for the CP rank-$R$ tensor and the general tensor.
\subsection{CP rank-$R$ tensor}
\label{sec:cp rank-R tensor}
Consider the case where $\mathcal{T}$ admits CPD $\mathcal{T}=\sum_{r=1}^{R}\lambda_r\mathbf{u}^{(1)}_r\circ\cdots\circ\mathbf{u}^{(N)}_r=[\![\bm{\lambda}; \mathbf{U}^{(1)}, \cdots, \mathbf{U}^{(N)}]\!]\in\mathbb{R}^{I_1\times\cdots\times I_N}$. Given $N$ pairs of $2$-wise independent Hash functions $\mathbf{h}_n:[I_n]\mapsto [J_n]$ and $\mathbf{s}_n:[I_n]\mapsto \left\{ \pm1 \right\}$, ${\rm FCS}(\mathcal{T})$ is computed by
\begin{equation}
\begin{aligned}
\label{FCS_syn}
&{\rm FCS}(\mathcal{T};\left\{\mathbf{h}_n, \mathbf{s}_n\right\}_{n=1}^{N})={\rm CS}(\sum_{r=1}^{R}\lambda_r\overset{1}{\underset{n=N}{\otimes}}(\mathbf{u}^{(n)}_r); \mathbf{h}_{N+1},\mathbf{s}_{N+1})\\
=&\sum_{r=1}^{R}\lambda_r{\rm CS}_1(\mathbf{U}^{(1)})(:, r)\circledast \cdots\circledast {\rm CS}_N(\mathbf{U}^{(N)})(:, r)\\
=&\sum_{r=1}^{R}\lambda_r{\operatorname{F}}^{-1}({\operatorname{F}}({\rm CS_1}(\mathbf{U}^{(1)})(:,r), \tilde{J})*\cdots*{\operatorname{F}}({\rm CS}_N(\mathbf{U}^{(N)})(:,r), \tilde{J})),
\end{aligned}
\end{equation}where $\overset{1}{\underset{n=N}{\otimes}}(\mathbf{u}^{(n)}_r):=\mathbf{u}^{(N)}_r\otimes\mathbf{u}^{(N-1)}_r\otimes\cdots\otimes\mathbf{u}^{(1)}_r$, and $\operatorname{F}(\cdot, \tilde{J})$ and ${\operatorname{F}}^{-1}(\cdot)$ denote the zero-padded $\tilde{J}$-point FFT and its inverse. The proof of (\ref{FCS_syn}) is as follows:
\begin{proof*}[Proof of (\ref{FCS_syn})]
Given a rank-$1$ tensor $\mathcal{T}=\mathbf{u}^{(1)}\circ\mathbf{u}^{(2)}\circ\cdots\circ\mathbf{u}^{(N)}\in\mathbb{R}^{I_1\times I_2\times\cdots\times I_N}$, if we assign $l=\sum_{n=1}^{N}(i_n-1)\prod_{j=1}^{n-1}I_j+1$, then ${\rm vec}(\mathcal{T})_l=\mathbf{u}^{(1)}_{i_1}\cdots\mathbf{u}^{(N)}_{i_N}$. We have
\begin{equation}
\begin{aligned}
&\sum_{i_1=1}^{I_1}\cdots\sum_{i_N=1}^{I_N}{\rm vec}(\mathcal{T})_l\mathbf{s}_1(i_1)\cdots\mathbf{s}_N(i_N){\rm w}^{\mathbf{h}_1(i_1)+\cdots+\mathbf{h}_N(i_N)-N}\\
=&\sum_{i_1=1}^{I_1}\cdots\sum_{i_N=1}^{I_N}\mathbf{u}^{(1)}_{i_1}\cdots\mathbf{u}^{(N)}_{i_N}\mathbf{s}_1(i_1)\cdots\mathbf{s}_N(i_N){\rm w}^{\mathbf{h}_1(i_1)+\cdots+\mathbf{h}_N(i_N)-N}\\
=&\sum_{i_1}\mathbf{u}^{(1)}_{i_1}\mathbf{s}_1(i_1){\rm w}^{\mathbf{h}_1(i_1)-1}\cdots\sum_{i_N}\mathbf{u}^{(N)}_{i_N}\mathbf{s}_N(i_N){\rm w}^{\mathbf{h}_N(i_N)-1}\\
=&\mathcal{P}_{{\rm CS}_1(\mathbf{u}^{(1)})}({\rm w})\cdots\mathcal{P}_{{\rm CS}_N(\mathbf{u}^{(N)})}({\rm w}),
\end{aligned}
\end{equation}where ${\rm w}$ is a symbolic variable and $\mathcal{P}_{{\rm CS}_n(\mathbf{u}^{(n)})}({\rm w})$ is the polynomial form of ${\rm CS}_n(\mathbf{u}^{(n)})$ for $n\in[N]$. Letting $\mathbf{s}_{N+1}(l)=\mathbf{s}_1(i_1)\cdots\mathbf{s}_N(i_N)$ and $\mathbf{h}_{N+1}(l)=\mathbf{h}_1(i_1)+\cdots+\mathbf{h}_{N}(i_N)-N+1$, we have
\begin{equation}
\begin{aligned}
&\sum_{i_1=1}^{I_1}\cdots\sum_{i_N=1}^{I_N}{\rm vec}(\mathcal{T})_l\mathbf{s}_1(i_1)\cdots\mathbf{s}_N(i_N){\rm w}^{\mathbf{h}_1(i_1)+\cdots+\mathbf{h}_N(i_N)-N}\\
=&\sum_l{\rm vec}(\mathcal{T})_l\mathbf{s}_{N+1}(l){\rm w}^{\mathbf{h}_{N+1}(l)-1}\\
=&\mathcal{P}_{{\rm CS}({\rm vec(\mathcal{T})}; \mathbf{h}_{N+1}, \mathbf{s}_{N+1})}({\rm w}):=\mathcal{P}_{{\rm FCS}(\mathcal{T};\left\{\mathbf{h}_n, \mathbf{s}_n\right\}_{n=1}^N)}({\rm w}).
\end{aligned}
\end{equation}Therefore,
\begin{equation}
\mathcal{P}_{{\rm FCS}(\mathcal{T};\left\{\mathbf{h}_n, \mathbf{s}_n\right\}_{n=1}^N)}({\rm w})=\mathcal{P}_{{\rm CS}_1(\mathbf{u}^{(1)})}({\rm w})\cdots\mathcal{P}_{{\rm CS}_N(\mathbf{u}^{(N)})}({\rm w}).
\end{equation}Since polynomial multiplication corresponds to the convolution of the coefficient sequences, we have
\begin{equation}
\begin{aligned}
{\rm FCS}(\mathcal{T};\left\{\mathbf{h}_n, \mathbf{s}_n\right\}_{n=1}^N)&={\rm CS}_1(\mathbf{u}^{(1)})\circledast\cdots\circledast{\rm CS}_N(\mathbf{u}^{(N)})\\
&=\operatorname{F}^{-1}(\operatorname{F}({\rm CS}_1(\mathbf{u}^{(1)}), \tilde{J})*\cdots*\operatorname{F}({\rm CS}_N(\mathbf{u}^{(N)}), \tilde{J})).
\end{aligned}
\end{equation}By linearity, the conclusion extends immediately to CP rank-$R$ tensors, which completes the proof.
\end{proof*}
The computational complexity of (\ref{FCS_syn}) is $\operatorname{O}(\max_n{{\rm nnz}(\mathbf{U}^{(n)})}+R\tilde{J}\log \tilde{J})$, which is asymptotically identical to TS for low-order tensors, and is much faster than the vector outer product of HCS shown in (\ref{eq:HCS-rank1}). Since the vectorization form of the CPD is equivalent to the original form after reordering the elements, the multi-dimensional structure of the original tensor can be well preserved.
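As a sanity check on (\ref{FCS_syn}), the following NumPy sketch (illustrative sizes, 0-based hashes) builds FCS of a CP tensor from the per-mode factor sketches via zero-padded FFTs, and compares it with the composed-hash CS of the dense vectorized tensor:

```python
import numpy as np
from numpy.fft import fft, ifft

rng = np.random.default_rng(1)

def cs(x, h, s, J):
    # Count Sketch: scatter-add signed entries into J buckets.
    out = np.zeros(J)
    np.add.at(out, h, s * x)
    return out

I, J, R = (6, 7, 8), (4, 5, 5), 3
h = [rng.integers(0, Jn, In) for In, Jn in zip(I, J)]
s = [rng.choice([-1.0, 1.0], In) for In in I]
Jt = sum(J) - len(J) + 1

lam = rng.standard_normal(R)
U = [rng.standard_normal((In, R)) for In in I]

# FCS of the CP tensor: per-rank circular convolution of the per-mode
# Count Sketches, computed via zero-padded Jt-point FFTs.
fcs = np.zeros(Jt)
for r in range(R):
    spec = np.ones(Jt, dtype=complex)
    for n in range(3):
        spec *= fft(cs(U[n][:, r], h[n], s[n], J[n]), Jt)
    fcs += lam[r] * ifft(spec).real

# Reference: composed-hash Count Sketch of the dense vectorized tensor.
T = np.einsum('r,ir,jr,kr->ijk', lam, U[0], U[1], U[2])
H = (h[0][:, None, None] + h[1][None, :, None] + h[2][None, None, :]).ravel()
S = (s[0][:, None, None] * s[1][None, :, None] * s[2][None, None, :]).ravel()
ref = cs(T.ravel(), H, S, Jt)
print(np.allclose(fcs, ref))   # True
```

The two routes agree exactly because the linear convolution of the $N$ length-$J_n$ sketches has length $\sum_n J_n - N + 1 = \tilde{J}$, so the circular FFT convolution never wraps around.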
\subsection{General tensor}
For most real-world low-rank tensor data, the rank-$1$ factors are not known in advance.
FCS for general tensors can be computed in $\operatorname{O}({\rm nnz}(\mathcal{T}))$ time by:
\begin{equation}
\begin{aligned}
{\rm FCS}(\mathcal{T}; \left\{\mathbf{h}_n, \mathbf{s}_n\right\}_{n=1}^{N})_j&=\sum_{i\in[\tilde{I}]:\,\mathbf{h}_{N+1}(i)=j}\mathbf{s}_{N+1}(i)\,{\rm vec}(\mathcal{T})_i\\
&=\sum_{\mathcal{H}_{i_1,\cdots,i_N}=j}\mathcal{S}_{i_1,\cdots,i_N}\mathcal{T}_{i_1, \cdots, i_N},
\end{aligned}
\label{FCS_asyn}
\end{equation}where $\mathcal{H}_{i_1,\cdots,i_N}=\mathbf{h}_1(i_1)+\cdots+\mathbf{h}_N(i_N)- N + 1$ and $\mathcal{S}_{i_1,\cdots,i_N}=\mathbf{s}_1(i_1)\mathbf{s}_2(i_2)\cdots\mathbf{s}_N(i_N)$ for $j\in[\tilde{J}]$. Notice that $\mathcal{H}$ and $\mathcal{S}$ need not be built explicitly, so only $\left\{\mathbf{h}_n, \mathbf{s}_n\right\}_{n=1}^{N}$ have to be stored.
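A one-pass NumPy sketch of (\ref{FCS_asyn}) for a sparse tensor (illustrative sizes; only the $N$ short Hash pairs are stored, and the dense $\mathcal{H}$, $\mathcal{S}$ are never materialized):

```python
import numpy as np

rng = np.random.default_rng(2)

I, J = (5, 6, 7), (4, 4, 4)
h = [rng.integers(0, Jn, In) for In, Jn in zip(I, J)]
s = [rng.choice([-1.0, 1.0], In) for In in I]
Jt = sum(J) - len(J) + 1

# A sparse test tensor: ~10% of the entries are nonzero.
T = rng.standard_normal(I) * (rng.random(I) < 0.1)
idx = np.nonzero(T)
vals = T[idx]

# One pass over the nonzeros: per-entry bucket is the sum of per-mode
# buckets (0-based), per-entry sign is the product of per-mode signs.
bucket = sum(h[n][idx[n]] for n in range(3))
sign = s[0][idx[0]] * s[1][idx[1]] * s[2][idx[2]]
fcs = np.zeros(Jt)
np.add.at(fcs, bucket, sign * vals)
```

The cost is one hash lookup per mode per nonzero, i.e., $\operatorname{O}({\rm nnz}(\mathcal{T}))$ overall.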
We summarize the main differences between FCS and CS, TS, and HCS below:
\begin{enumerate}[(1)]
\item Compared with CS which applies on ${\rm vec}(\mathcal{T})$, FCS is based on multiple shorter Hash function pairs $\left\{\mathbf{h}_n, \mathbf{s}_n\right\}_{n=1}^N$, while CS generates a long Hash function pair $\mathbf{h}:[\tilde{I}]\mapsto [\tilde{J}]$ and $\mathbf{s}: [\tilde{I}]\mapsto \left\{ \pm1 \right\}$ to match the dimensionality of ${\rm vec}(\mathcal{T})$. Therefore, only $\left\{\mathbf{h}_n, \mathbf{s}_n\right\}_{n=1}^N$ are stored for FCS, which reduces the storage complexity from $\operatorname{O}(\tilde{I})$ to $\operatorname{O}(\sum_{n=1}^{N}I_n)$. Besides, as shown in (\ref{FCS_syn}), FCS for CP rank-$R$ tensors can be computed by FFT, which accelerates the original CS.
\item Compared with TS, FCS avoids the modular operation in (\ref{TS_orig}) and (\ref{TS}). Intuitively, the multiple Hash functions of FCS are integrated in a way that preserves the spatial structure of the input tensor more faithfully, whereas the modular operation in TS breaks the spatial relationships within the tensor. We will show in Proposition $1$ that FCS provides a more accurate estimator of the tensor inner product than TS under the same Hash functions.
\item Compared with HCS, the vector outer product in (\ref{eq:HCS-rank1}) is also avoided. Besides, as shown in Table \ref{tab:compare sketches}, the FCS approximation of $\mathcal{T}(\mathbf{I},\mathbf{u},\mathbf{u})$ and $\mathcal{T}(\mathbf{u},\mathbf{u},\mathbf{u})$ is more efficient than that of HCS under the same Hash length. Finally, we will show in the experiments that FCS decompresses faster than HCS for Kronecker product and tensor contraction compression.
\end{enumerate}
\subsection{Tensor contraction approximation}
We approximate two specific tensor contractions $\mathcal{T}(\mathbf{u},\mathbf{u},\mathbf{u})$ and $\mathcal{T}(\mathbf{I},\mathbf{u},\mathbf{u})$ using FCS. For brevity, we omit the Hash function pairs in (\ref{TS_orig}) and (\ref{eq:FCS}). The following proposition states the validity of the approximation:
{\em Proposition $1$}: For any two tensors $\mathcal{M}, \mathcal{N}\in\mathbb{R}^{I_1\times I_2\times I_3}$, $\left \langle {\rm FCS}(\mathcal{M}), {\rm FCS}(\mathcal{N})\right \rangle$ provides a consistent estimator of $\left \langle \mathcal{M},\mathcal{N}\right \rangle$ with the variance satisfying
\begin{equation}
\label{theorem}
\operatorname{V}_{\mathbf{h},\mathbf{s}}[\left \langle {\rm FCS}(\mathcal{M}), {\rm FCS}(\mathcal{N})\right \rangle]\le\operatorname{V}_{\mathbf{h},\mathbf{s}}[\left \langle {\rm TS}(\mathcal{M}), {\rm TS}(\mathcal{N})\right \rangle],
\end{equation}provided the Hash functions for TS and FCS are the same. The validity is guaranteed from two aspects: the consistency ensures the approximation is feasible, and (\ref{theorem}) shows that FCS yields a more accurate estimator of the tensor inner product than TS \cite{wang2015fast}. The proof is deferred to the supplementary materials due to the space limit.
Based on Proposition $1$, the following corollary gives the approximation error:
{\em Corollary 1}: Given any $3$rd-order tensor $\mathcal{T}\in\mathbb{R}^{I\times I\times I}$ and a unit vector $\mathbf{u}\in\mathbb{R}^I$, by Chebyshev's inequality, if we run the sketch $D=\Omega(\log(1/\delta))$ times, then for any $\epsilon>0$, with probability at least $1-\delta$ we have
\begin{equation}
\label{cheby}
\begin{aligned}
&\operatorname{P}_{\mathbf{h}, \mathbf{s}}[|\left \langle {\rm FCS}(\mathcal{T}), {\rm FCS}(\mathbf{u}\circ\mathbf{u}\circ\mathbf{u})\right \rangle-\mathcal{T}(\mathbf{u},\mathbf{u},\mathbf{u})|\ge\epsilon]\le \operatorname{O}(\frac{\Vert\mathcal{T}\Vert_\text{F}^2}{J\epsilon^2})\\
&\operatorname{P}_{\mathbf{h}, \mathbf{s}}[|\left\langle {\rm FCS}(\mathcal{T}), {\rm FCS}(\mathbf{e}_i\circ\mathbf{u}\circ\mathbf{u})\right \rangle-\mathcal{T}(\mathbf{e}_i,\mathbf{u},\mathbf{u})|\ge\epsilon]\le \operatorname{O}(\frac{\Vert\mathcal{T}\Vert_\text{F}^2}{J\epsilon^2}),
\end{aligned}
\end{equation}where the symbol $\operatorname{P}$ stands for probability. The Hash lengths of all Hash functions are set to $J$ for simplicity. The proof can be found in the supplementary materials.
\section{Experiments}
In this section, we verify the effectiveness of the proposed FCS by comparing it with CS, TS and HCS in various numerical experiments, including CPD, TRN compression, and Kronecker product and tensor contraction compression. To make the estimation more robust, we compute $D$ independent sketches and return the median for all sketching methods.
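The median-of-$D$ device can be sketched as follows, here on a toy vector inner product with plain uniform hashes (sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

def cs(x, h, s, J):
    # Count Sketch of a vector into J buckets.
    out = np.zeros(J)
    np.add.at(out, h, s * x)
    return out

I, J, D = 500, 201, 11
x = rng.standard_normal(I)
y = rng.standard_normal(I)
exact = x @ y

# Each <CS(x), CS(y)> is an unbiased estimate of <x, y>; taking the
# median over D independent sketch pairs damps occasional outliers.
est = []
for _ in range(D):
    h = rng.integers(0, J, I)
    s = rng.choice([-1.0, 1.0], I)
    est.append(cs(x, h, s, J) @ cs(y, h, s, J))
print(np.median(est), exact)
```

The same median-over-$D$ wrapper is applied to every sketching method in the experiments below.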
\subsection{CP decomposition}
We consider two CPD algorithms, the robust tensor power method (RTPM) and alternating least squares (ALS), both of which involve computing $\mathcal{T}(\mathbf{u},\mathbf{u},\mathbf{u})$ and $\mathcal{T}(\mathbf{I},\mathbf{u},\mathbf{u})$. By Corollary $1$, the two tensor contractions can be approximated by:
\begin{equation}
\label{eq:T_uvw}
\begin{aligned}
&\mathcal{T}(\mathbf{u},\mathbf{u},\mathbf{u})\approx\left\langle{\rm FCS}(\mathcal{T}), {\rm FCS}(\mathbf{u}\circ\mathbf{u}\circ\mathbf{u})\right \rangle\\
=&\left\langle{\rm FCS}(\mathcal{T}), {\operatorname{F}}^{-1}({\operatorname{F}}({\rm CS}_1(\mathbf{u}))*{\operatorname{F}}({\rm CS}_2(\mathbf{u}))*{\operatorname{F}}({\rm CS}_3(\mathbf{u})))\right\rangle
\end{aligned}
\end{equation}
\begin{equation}
\label{eq:T_Ivw}
\begin{aligned}
&\mathcal{T}(\mathbf{I},\mathbf{u},\mathbf{u})_i\approx\left\langle{\rm FCS}(\mathcal{T}), {\rm FCS}(\mathbf{e}_i\circ\mathbf{u}\circ\mathbf{u})\right \rangle\\
=&\left\langle{\operatorname{F}}^{-1}({\operatorname{F}}({\rm FCS}(\mathcal{T}))*\overline{{\operatorname{F}}({\rm CS}_2(\mathbf u))}*\overline{{\operatorname{F}}({\rm CS}_3(\mathbf u))}),{\rm CS}_1(\mathbf{e}_i)\right \rangle\\
:=&\left \langle \mathbf{z},{\rm CS}_1(\mathbf{e}_i) \right \rangle
=\mathbf{s}_1(i)\mathbf{z}(\mathbf{h}_1(i)).
\end{aligned}
\end{equation}(\ref{eq:T_Ivw}) holds due to the unitary property of the FFT. Both ${\rm FCS}(\mathcal{T})$ and $\mathbf{z}$ are independent of $i$ and can be computed beforehand. We summarize the computational and storage complexity of RTPM and ALS with and without sketching (termed ``plain'') in Table \ref{tab:compare sketches}. All Hash lengths are set to $J$ for simplicity.
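The two estimates can be sketched as below (NumPy, 0-based hashes, single sketch $D=1$; the noise-free rank-$1$ test tensor and all sizes are illustrative assumptions). The first estimate follows (\ref{eq:T_uvw}); the second precomputes $\mathbf{z}$ once and then reads off each entry as $\mathbf{s}_1(i)\mathbf{z}(\mathbf{h}_1(i))$ as in (\ref{eq:T_Ivw}):

```python
import numpy as np
from numpy.fft import fft, ifft

rng = np.random.default_rng(4)

def cs(x, h, s, J):
    out = np.zeros(J)
    np.add.at(out, h, s * x)
    return out

I, J = 40, 1000                       # one Hash length J per mode
h = [rng.integers(0, J, I) for _ in range(3)]
s = [rng.choice([-1.0, 1.0], I) for _ in range(3)]
Jt = 3 * J - 2

u_true = rng.standard_normal(I)
u_true /= np.linalg.norm(u_true)
T = np.einsum('i,j,k->ijk', u_true, u_true, u_true)   # so T(u,u,u) = 1

# FCS(T) via the one-pass composed-hash rule.
Hc = (h[0][:, None, None] + h[1][None, :, None] + h[2][None, None, :]).ravel()
Sc = (s[0][:, None, None] * s[1][None, :, None] * s[2][None, None, :]).ravel()
fcsT = cs(T.ravel(), Hc, Sc, Jt)

u = u_true
# T(u,u,u) ~ <FCS(T), FCS(u o u o u)> via zero-padded FFTs.
spec = (fft(cs(u, h[0], s[0], J), Jt) * fft(cs(u, h[1], s[1], J), Jt)
        * fft(cs(u, h[2], s[2], J), Jt))
t_uuu = fcsT @ ifft(spec).real

# T(I,u,u)_i ~ s_1(i) * z[h_1(i)], with z precomputed once for all i.
z = ifft(fft(fcsT) * np.conj(fft(cs(u, h[1], s[1], J), Jt))
                   * np.conj(fft(cs(u, h[2], s[2], J), Jt))).real[:J]
t_Iuu = s[0] * z[h[0]]

print(t_uuu, np.linalg.norm(t_Iuu - u_true))
```

Note that conjugating the FFTs of ${\rm CS}_2(\mathbf{u})$ and ${\rm CS}_3(\mathbf{u})$ turns the convolution into the correlation needed to move the sketch of $\mathbf{e}_i$ out of the inner product.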
\begin{table*}[htbp]
\centering
\caption{Computational and storage complexity of the plain, CS, TS, HCS, and FCS based RTPM and ALS for a $3$rd-order tensor $\mathcal{T}\in\mathbb{R}^{I\times I\times I}$. $R$ denotes the target CP rank. }
\scalebox{0.47}{
\begin{tabular}{cccccc}
\toprule
RTPM/ALS & plain & CS & TS & HCS & FCS\\
\midrule
preprocessing and sketch building for $\mathcal{T}\!=\![\![\bm{\lambda};\!\mathbf{U}\!,\!\mathbf{U}\!,\!\mathbf{U}]\!]$ &$\operatorname{O}(RI^3)$ &$\operatorname{O}(RI^3+{\rm nnz}(\mathcal{T}))$ &$\operatorname{O}({\rm nnz}(\mathbf{U})+RJ{\rm log}J)$ &$\operatorname{O}({\rm nnz}(\mathbf{U})+RJ^3)$ &$\operatorname{O}({\rm nnz}(\mathbf{U})+RJ\log J)$ \\
\specialrule{0pt}{2pt}{2pt}
preprocessing and sketch building for general tensor $\mathcal{T}$ &- &$\operatorname{O}({\rm nnz}(\mathcal{T}))$ &$\operatorname{O}({\rm nnz}(\mathcal{T}))$ &$\operatorname{O}({\rm nnz}(\mathcal{T}))$ &$\operatorname{O}({\rm nnz}(\mathcal{T}))$\\
\specialrule{0pt}{2pt}{2pt}
$\mathcal{T}(\mathbf{I},\mathbf{u},\mathbf{u})$ or its approximation & $\operatorname{O}(I^3)$ & $\operatorname{O}({\rm nnz}(\mathbf{u})^2I)$ &$\operatorname{O}({\rm nnz}(\mathbf{u})+J\log J + I)$ & $\operatorname{O}({\rm nnz}(\mathbf{u})+IJ^2)$ &$\operatorname{O}({\rm nnz}(\mathbf{u})+J\log J+I)$\\
\specialrule{0pt}{2pt}{2pt}
$\mathcal{T}(\mathbf{u},\mathbf{u},\mathbf{u})$ or its approximation &$\operatorname{O}(I^3)$ &$\operatorname{O}({\rm nnz}(\mathbf{u})^3)$ &$\operatorname{O}({\rm nnz}(\mathbf{u})+J\log J)$ &$\operatorname{O}({\rm nnz}(\mathbf{u})+J^3)$ &$\operatorname{O}({\rm nnz}(\mathbf{u})+J\log J)$\\
\specialrule{0pt}{2pt}{2pt}
storage for Hash functions &- &$\operatorname{O}(I^3)$ &$\operatorname{O}(I)$ &$\operatorname{O}(I)$ &$\operatorname{O}(I)$\\
\bottomrule
\end{tabular}
}
\label{tab:compare sketches}
\end{table*}
\subsubsection{Robust tensor power method}
RTPM decomposes a noisy input tensor into rank-$1$ components with orthogonal basis vectors. For each rank-$1$ factor, it runs the power iteration $\mathbf{u}=\frac{\mathcal{T}(\mathbf{I}, \mathbf{u}, \mathbf{u})}{\Vert \mathcal{T}(\mathbf{I}, \mathbf{u}, \mathbf{u})\Vert_{\operatorname{F}}}$ from random initializations and obtains the eigenvalue as $\mathcal{T}(\mathbf{u}, \mathbf{u}, \mathbf{u})$.
Note that most real-world low-rank tensor data are asymmetric. The power iteration of asymmetric RTPM can be performed similarly to the symmetric case via alternating rank-$1$ updates \cite{AnandkumarA2017ATPM}. By (\ref{FCS_asyn}), we can accelerate RTPM on real-world data using FCS.
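For reference, a plain (unsketched) symmetric RTPM fits in a few lines; a sketched variant would replace the two einsum contractions by their FCS estimates. The noise-free setting, tensor sizes, and iteration count here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Noise-free symmetric CP rank-3 tensor with orthonormal factors.
I, R = 30, 3
Q, _ = np.linalg.qr(rng.standard_normal((I, R)))
lam_true = np.array([3.0, 2.0, 1.0])
T = np.einsum('r,ir,jr,kr->ijk', lam_true, Q, Q, Q)

# Plain RTPM: power iteration u <- T(I,u,u)/||T(I,u,u)||, eigenvalue
# T(u,u,u), then deflation of the recovered rank-1 component.
lam_est = []
for _ in range(R):
    u = rng.standard_normal(I)
    u /= np.linalg.norm(u)
    for _ in range(100):
        u = np.einsum('ijk,j,k->i', T, u, u)   # T(I,u,u)
        u /= np.linalg.norm(u)
    lam = np.einsum('ijk,i,j,k->', T, u, u, u)  # T(u,u,u)
    lam_est.append(lam)
    T = T - lam * np.einsum('i,j,k->ijk', u, u, u)

print(sorted(lam_est, reverse=True))
```

With orthonormal factors and a random start, each run converges to one of the true components, and deflation removes it before the next run.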
The effectiveness of FCS is verified by accelerating RTPM on synthetic and real-world datasets. For the synthetic case, we generate a symmetric CP rank-$10$ tensor $\mathcal{T}=\sum_{r=1}^{10}\mathbf{u}_r\circ\mathbf{u}_r\circ\mathbf{u}_r\in\mathbb{R}^{100\times 100 \times 100}$, where $\left\{\mathbf{u}_r\right\}_{r=1}^{10}$ forms a random orthonormal basis. $\mathcal{T}$ is then perturbed by zero-mean Gaussian noise with standard deviation $\sigma=0.01$. The numbers of independent sketches $D$, initial vectors $L$, and power iterations $T$ are set to $2$, $15$, and $20$, respectively. The Hash functions for TS and FCS are the same so as to produce identical initializations. We choose the residual norm as the performance metric.
We compare the performance of FCS against the plain, CS, and TS based RTPM with Hash lengths ranging from $1000$ to $10000$; the results are shown in Fig. \ref{fig:compare-CS-TS-FCS}. Clearly, FCS-RTPM achieves better approximation accuracy than CS- and TS-RTPM. Although FCS-RTPM is slower than TS-RTPM, it is still much faster than CS-based and plain RTPM under most Hash lengths. Notice that CS-RTPM is even slower than the plain RTPM, so we exclude it from the real-world comparison with FCS-RTPM.
\begin{figure}[H]
\centerline{\includegraphics[width=\columnwidth]{figures/RTPM.pdf}}
\caption{Performance comparison on a synthetic symmetric CP rank-$10$ tensor $\mathcal{T}\in\mathbb{R}^{100\times100\times100}$ for plain, CS, TS and FCS based RTPM.}
\label{fig:compare-CS-TS-FCS}
\end{figure}
We compare the performance of HCS and FCS based RTPM on a synthetic symmetric CP rank-$10$ tensor $\mathcal{T}\in\mathbb{R}^{50\times50\times50}$. Denote the Hash lengths of HCS and FCS as $J_1$ and $J_2$, respectively. Given a $3$rd-order tensor with dimension $I$, $J_1$ should be smaller than $I$ to sketch the tensor into a lower-dimensional one. However, the sketched dimension of FCS is required to be $\operatorname{O}(I^3)$ for a provable approximation \cite{wang2015fast}. As a result, it is not fair to set $J_1\approx J_2$ in this experiment. We therefore choose $J_1$ and $J_2$ such that the sketched dimensions are similar ($J_1^3\approx3J_2-2$). The experimental results are displayed in Table \ref{tab:HCS-FCS-same-memory}. It can be seen that FCS outperforms HCS in terms of both approximation accuracy and running speed under various Hash lengths, noise intensities, and numbers of independent sketches.
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\begin{table*}[htbp]
\begin{center}
\caption{Performance comparison on a synthetic symmetric CP rank-$10$ tensor $\mathcal{T}\in\mathbb{R}^{50\times50\times50}$ by HCS and FCS based RTPM under similar sketched dimension.}\label{tab:HCS-FCS-same-memory}
\scalebox{0.58}{
\begin{tabular}{ccccccccccccc}
\toprule
$\sigma$ & & & \multicolumn{5}{c}{Residual norm} & \multicolumn{5}{c}{Running time (s)}\\
\midrule
\specialrule{0pt}{2pt}{2pt}
\multirow{11}{*}{0.01} & & $J_1$ & 14 & 18 & 21 & 23 & 25 & 14 & 18 & 21 & 23 & 25 \\
\specialrule{0pt}{2pt}{2pt}
& \multirow{3}{*}{\tabincell{c}{HCS-\\RTPM}} & $D=10$ & {1.3020} & {0.8305} & {0.7744} & {0.7727} & {0.7719} & {12.2329} & {30.7566} & {46.7719} & {58.4108} & {67.7999}\\
\specialrule{0pt}{2pt}{2pt}
& & $D=15$ & {0.8237} & {0.6583} & {0.6089} & {0.5938} & {0.5699} & {18.2351} & {45.5026} & {70.3092} & {88.6696} & {104.4252}\\
\specialrule{0pt}{2pt}{2pt}
& & $D=20$ & {0.6904} & {0.6702} & {0.5229} &{0.4738} & {0.4733} & {24.5617} & {63.1505} & {95.1202} & {119.8723} & {140.5284}\\
\specialrule{0pt}{2pt}{2pt}
& & $J_2$ & 200 & 250 & 300 & 350 & 400 & 200 & 250 & 300 & 350 & 400 \\
\specialrule{0pt}{2pt}{2pt}
& \multirow{3}{*}{\tabincell{c}{FCS-\\RTPM}} & $D=10$ & \textbf{0.3304} & \textbf{0.2033} & \textbf{0.1701} & \textbf{0.1525} & \textbf{0.1375} & \textbf{4.9459} & \textbf{8.8239} & \textbf{16.7354} & \textbf{21.9275} & \textbf{28.4605}\\
\specialrule{0pt}{2pt}{2pt}
& & $D=15$ & \textbf{0.2440} & \textbf{0.1794} & \textbf{0.1472} & \textbf{0.1280} & \textbf{0.1179} & \textbf{6.9054} & \textbf{13.9064} & \textbf{26.1508} & \textbf{34.1260} & \textbf{42.7100}
\\
\specialrule{0pt}{2pt}{2pt}
& & $D=20$ &\textbf{0.2135} & \textbf{0.1544} & \textbf{0.1226} & \textbf{0.1050} & \textbf{0.0899} & \textbf{8.7913} & \textbf{18.2077} & \textbf{34.4879} & \textbf{44.7941} & \textbf{56.2395}
\\
\specialrule{0pt}{2pt}{2pt}
\midrule
\specialrule{0pt}{2pt}{2pt}
\multirow{11}{*}{0.1} & & $J_1$ & 14 & 18 & 21 & 23 & 25 & 14 & 18 & 21 & 23 & 25 \\
\specialrule{0pt}{2pt}{2pt}
& \multirow{4}{*}{\tabincell{c}{HCS-\\RTPM}} & $D=10$ &{0.8941} & {0.8414} & {0.7493} & {0.6796} & {0.6334} & 12.1630 & 28.7772 & 42.9283 & 54.9871 & 65.9807
\\
\specialrule{0pt}{2pt}{2pt}
& & $D=15$ &{0.7692} & {0.6618} & {0.6112} & {0.5766} & {0.5738} & 18.0038 & 43.0994 & 65.3426 & 85.6515 & 103.2082
\\
\specialrule{0pt}{2pt}{2pt}
& & $D=20$ &0.6814 & {0.6155} & {0.5256} & {0.4697} & {0.5409} & 24.2827 & 57.0479 & 88.6810 & 122.2182 & 149.7220
\\
\specialrule{0pt}{2pt}{2pt}
& & $J_2$ & 200 & 250 & 300 & 350 & 400 & 200 & 250 & 300 & 350 & 400 \\
\specialrule{0pt}{2pt}{2pt}
& \multirow{4}{*}{\tabincell{c}{FCS-\\RTPM}} & $D=10$ &\textbf{0.3123} & \textbf{0.2052} & \textbf{0.1648} & \textbf{0.1386} & \textbf{0.1277} & \textbf{4.8647} & \textbf{8.7281} & \textbf{16.5311} & \textbf{21.5956} & \textbf{28.6445}
\\
\specialrule{0pt}{2pt}{2pt}
& & $D=15$ &\textbf{0.2613} & \textbf{0.1665} & \textbf{0.1568} & \textbf{0.1289} & \textbf{0.1063} & \textbf{6.8684} & \textbf{14.6268} & \textbf{27.2106} & \textbf{35.0255} & \textbf{44.4975}
\\
\specialrule{0pt}{2pt}{2pt}
& & $D=20$ &\textbf{0.2102} & \textbf{0.1550} & \textbf{0.1173} & \textbf{0.0987} & \textbf{0.0939} & \textbf{8.9151} & \textbf{18.8304} & \textbf{34.3682} & \textbf{48.4351} & \textbf{59.3935}
\\
\specialrule{0pt}{2pt}{2pt}
\bottomrule
\end{tabular}
}
\end{center}
\end{table*}
For the real-world data based experiment, we compute the plain, TS and FCS based RTPM using the same Hash functions on a hyperspectral imaging (HSI) dataset Watercolors and a light field dataset Buddha. Below we briefly introduce the two datasets and the preprocessing methods:
\begin{enumerate}[(1)]
\item Watercolors is an HSI dataset captured by a cooled CCD camera at resolution $512\times512\times3$, with wavelengths ranging from $400$ to $700$ nm at $10$ nm intervals ($31$ bands in total) \cite{yasuma2010generalized}. We convert the raw images to gray scale and represent them as a $512\times512\times31$ tensor. The target CP rank is set to $15$.
\item The Buddha dataset is captured by a resolution $768\times768\times3$ camera at $9\times9$ views \cite{wanner2013datasets}, so the raw data can be represented by a $768\times768\times3\times9\times9$ tensor. We convert the raw images to gray scale and resize them into a $192\times192\times81$ tensor. The target CP rank is set to $30$.
\end{enumerate}
We vary the compared Hash lengths from $5000$ to $8000$. The number of independent sketches $D$ is set to $10$ and $15$, respectively. We choose peak signal-to-noise ratio (PSNR) as the performance metric. We display the $31$st frame of the approximation results on Watercolors in Fig. \ref{fig:Watercolors} and the $1$st frame of the approximation results on Buddha in Fig. \ref{fig:Buddha}. Clearly, FCS-RTPM achieves better approximation quality than TS-RTPM, especially when the Hash length is small. Although FCS-RTPM is slower than TS-RTPM, it is much faster than the plain RTPM.
\begin{figure}[H]
\centering
\includegraphics[width=\columnwidth]{figures/Watercolors-short.pdf}
\caption{Rank-$15$ approximation of TS-RTPM and FCS-RTPM on HSI dataset Watercolors. PSNR (dB) and running time (s) are tagged under each approximation figure.}
\label{fig:Watercolors}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\columnwidth]{figures/Buddha-short.pdf}
\caption{Rank-$30$ approximation of TS-RTPM and FCS-RTPM on light field dataset Buddha. PSNR (dB) and running time (s) are tagged under each approximation figure.}
\label{fig:Buddha}
\end{figure}
\subsubsection{Alternating least squares}
ALS is another efficient method for CPD, which requires computing \cite{wang2015fast}
\begin{equation}
\begin{aligned}
\mathbf{T}_{(1)}(\mathbf{C}\odot\mathbf{B})&= [\mathcal{T}(\mathbf{I},\mathbf b_1,\mathbf c_1),\cdots,\mathcal{T}(\mathbf{I},\mathbf b_R,\mathbf c_R)]\\
\mathbf{T}_{(2)}(\mathbf{A}\odot\mathbf{C})&= [\mathcal{T}(\mathbf c_1,\mathbf{I},\mathbf a_1),\cdots,\mathcal{T}(\mathbf c_R,\mathbf{I},\mathbf a_R)]\\
\mathbf{T}_{(3)}(\mathbf{B}\odot\mathbf{A})&= [\mathcal{T}(\mathbf a_1,\mathbf b_1,\mathbf{I}),\cdots,\mathcal{T}(\mathbf a_R,\mathbf b_R,\mathbf{I})],
\end{aligned}
\end{equation}iteratively. Hence we can approximate them as in (\ref{eq:T_Ivw}). In the experiment, we generate a synthetic asymmetric CP rank-$10$ tensor $\mathcal{T}=\sum_{r=1}^{10}\mathbf{u}_r\circ\mathbf{v}_r\circ\mathbf{w}_r\in\mathbb{R}^{400\times 400 \times 400}$ in the same way as in the last subsection. We compare the plain, TS and FCS based ALS using the same Hash functions; the results are shown in Table \ref{tab:400synth}. Clearly, FCS is more accurate than TS under various conditions. Notice that as the Hash length gets smaller, the accuracy gap between TS and FCS widens, while the speed gap narrows.
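The column structure of these unfoldings can be checked directly in NumPy (illustrative sizes; the Khatri-Rao factor below is ordered to match NumPy's row-major mode-$1$ unfolding, a transposed variant of the usual $\mathbf{C}\odot\mathbf{B}$):

```python
import numpy as np

rng = np.random.default_rng(6)

I1, I2, I3, R = 5, 6, 7, 4
A = rng.standard_normal((I1, R))
B = rng.standard_normal((I2, R))
C = rng.standard_normal((I3, R))
T = np.einsum('ir,jr,kr->ijk', A, B, C)

# Column-wise Kronecker of B and C, ordered for numpy's C-order unfolding.
KR = np.einsum('jr,kr->jkr', B, C).reshape(I2 * I3, R)
lhs = T.reshape(I1, -1) @ KR          # mode-1 unfolding times Khatri-Rao

# Column r equals the contraction T(I, b_r, c_r).
rhs = np.stack([np.einsum('ijk,j,k->i', T, B[:, r], C[:, r])
                for r in range(R)], axis=1)
print(np.allclose(lhs, rhs))   # True
```

Each of the $R$ columns is thus a $\mathcal{T}(\mathbf{I},\mathbf{b}_r,\mathbf{c}_r)$-type contraction, which is exactly the quantity the sketched estimate replaces.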
\begin{table*}[htbp]
\begin{center}
\caption{Performance comparison on a synthetic asymmetric CP rank-$10$ tensor $\mathcal{T}\in\mathbb{R}^{400\times400\times400}$ by plain, TS and FCS based ALS.}\label{tab:400synth}
\scalebox{0.58}{
\begin{tabular}{ccccccccccccc}
\toprule
$\sigma$ & & & \multicolumn{5}{c}{Residual norm} & \multicolumn{ 5}{c}{Running time (s)}\\
\midrule
\specialrule{0pt}{2pt}{2pt}
\multirow{11}{*}{0.01} & & $J$ & 3000 & 4000 & 5000 & 6000 & 7000 & 3000 & 4000 & 5000 & 6000 & 7000 \\
\specialrule{0pt}{2pt}{2pt}
& \multirow{4}{*}{\tabincell{c}{TS-\\ALS}} & $D=10$ & 1.1898 & 0.9063 & 0.7684 & 0.6888 & 0.6198
& \textbf{16.0213} & \textbf{15.9789} & \textbf{15.9756} & \textbf{16.6813} & \textbf{16.9187}\\
\specialrule{0pt}{2pt}{2pt}
& & $D=15$ & 0.9721 & 0.7961 & 0.6981 & 0.6272 & 0.5763 & \textbf{16.358} & \textbf{16.4966} & \textbf{17.2223} & \textbf{17.4917} &\textbf{17.8075}\\
\specialrule{0pt}{2pt}{2pt}
& & $D=20$ & 0.6959 & 0.5899 & 0.5179 & 0.4684 & 0.4337 & \textbf{16.4243} & \textbf{17.3973} & \textbf{17.664} & \textbf{17.9625} & \textbf{18.3088}\\
\specialrule{0pt}{2pt}{2pt}
& \multirow{4}{*}{\tabincell{c}{FCS-\\ALS}} & $D=10$ & \textbf{0.7801} &\textbf{0.6429} &\textbf{0.5618} &\textbf{0.4915} &\textbf{0.4547} &18.3248 &19.6939 &20.9603 &23.3214 &24.7461\\
\specialrule{0pt}{2pt}{2pt}
& & $D=15$ & \textbf{0.6949} &\textbf{0.5851} &\textbf{0.5168} &\textbf{0.4643} &\textbf{0.4311} &20.2062 &22.2532 &23.453 &26.2442 &28.9429
\\
\specialrule{0pt}{2pt}{2pt}
& & $D=20$ &\textbf{0.5122} &\textbf{0.4342} &\textbf{0.3877} &\textbf{0.351} &\textbf{0.3288} &22.2656 &24.2863 &26.7042 &30.677 &33.9163
\\
\specialrule{0pt}{2pt}{2pt}
& plain ALS & & \multicolumn{5}{c}{0.1000} &\multicolumn{5}{c}{52.2921}\\
\specialrule{0pt}{2pt}{2pt}
\midrule
\specialrule{0pt}{2pt}{2pt}
\multirow{9}{*}{0.1}
& \multirow{4}{*}{\tabincell{c}{TS-\\ALS}} & $D=10$ &1.2927 &1.0001 &0.8641 &0.7749 &0.7119 &\textbf{15.597} &\textbf{16.2406} &\textbf{15.908} &\textbf{16.1767} &\textbf{17.4988}
\\
\specialrule{0pt}{2pt}{2pt}
& & $D=15$ &1.0632 &0.8798 &0.7978 &0.7235 &0.6728 &\textbf{16.1131} &\textbf{16.0404} &\textbf{16.7699} &\textbf{17.8976} &\textbf{17.3802}
\\
\specialrule{0pt}{2pt}{2pt}
& & $D=20$ &0.7951 &0.6911 &0.6223 &0.5725 &0.5416 &\textbf{16.3257} &\textbf{17.7667} &\textbf{18.1851} &\textbf{18.3393} &\textbf{18.5008}
\\
\specialrule{0pt}{2pt}{2pt}
& \multirow{4}{*}{\tabincell{c}{FCS-\\ALS}} & $D=10$ &\textbf{0.8283} &\textbf{0.7012} &\textbf{0.6326} &\textbf{0.5796} &\textbf{0.5510} &18.9016 &19.3911 &20.3741 &22.9006 &24.1979
\\
\specialrule{0pt}{2pt}{2pt}
& & $D=15$ &\textbf{0.7505} &\textbf{0.6546} &\textbf{0.5997} &\textbf{0.5523} &\textbf{0.5232} &19.7293 &21.1838 &23.2389 &26.2481 &28.1806
\\
\specialrule{0pt}{2pt}{2pt}
& & $D=20$ &\textbf{0.5989} &\textbf{0.5291} &\textbf{0.4921} &\textbf{0.4637} &\textbf{0.4424} &22.3568 &23.8313 &26.1499 &29.7723 &31.3613
\\
\specialrule{0pt}{2pt}{2pt}
& plain ALS & & \multicolumn{5}{c}{0.3162} & \multicolumn{5}{c}{53.2564}\\
\bottomrule
\end{tabular}
}
\end{center}
\end{table*}
\subsection{Tensor regression network compression}
TRNs replace the last flattening and fully-connected layers of convolutional neural networks with a tensor regression layer (TRL) \cite{kossaifi2020tensor}. Here, we show that the weight tensor of the TRL, $\mathcal{W}\in\mathbb{R}^{I_1\times\cdots\times I_N\times C}$ ($C$ denotes the number of classes), can be compressed by FCS. Denote by $\mathcal{X}\in\mathbb{R}^{B\times I_1\times\cdots\times I_N}$ ($B$ denotes the batch size) the input activation tensor. Then the output of the TRL is
\begin{equation}
\label{FCS_CP_TRL}
\mathbf{Y}(i,j)=\left\langle \mathbf{X}_{(1)}(i,:)^{\operatorname{T}}, \mathbf{W}_{(N+1)}(j,:)^{\operatorname{T}} \right \rangle+b
\end{equation}for $i\in[B]$ and $j\in[C]$. Hence we can approximate (\ref{FCS_CP_TRL}) by
\begin{equation}
\hat{\mathbf{Y}}(i,j)=\left\langle {\rm FCS}(\mathbf{X}_{(1)}(i,:)^{\operatorname{T}}), {\rm FCS}(\mathbf{W}_{(N+1)}(j,:)^{\operatorname{T}}) \right \rangle+b,
\end{equation}which equals the compact form
\begin{equation}
\hat{\mathbf{Y}}={{\rm FCS}(\mathbf{X}_{(1)}^{\operatorname{T}})}^{\operatorname{T}}{\rm FCS} (\mathbf{W}_{(N+1)}^{\operatorname{T}}) + \mathbf{b}.
\label{eq:net_compress}
\end{equation}The compression ratio (CR) for FCS based TRL is
$(\prod_{n=1}^{N}I_n) / (\sum_{n=1}^{N}J_n-N+1)=\tilde{I} / \tilde{J}$.
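A minimal sketch of (\ref{eq:net_compress}) with random stand-ins for the activation and weight matricizations (the $7\times7\times32$ feature shape follows the experiment below; the Hash lengths, normalizations, and uniform hashes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)

# Feature modes of the activation/weight tensors and per-mode hashes.
I, J = (7, 7, 32), (40, 40, 80)
h = [rng.integers(0, Jn, In) for In, Jn in zip(I, J)]
s = [rng.choice([-1.0, 1.0], In) for In in I]
Jt = sum(J) - len(J) + 1
It = int(np.prod(I))
print(It / Jt)                         # compression ratio I~/J~

# Composed hash pair, shared by the rows of X_(1) and W_(N+1).
H = (h[0][:, None, None] + h[1][None, :, None]
     + h[2][None, None, :]).ravel()
S = (s[0][:, None, None] * s[1][None, :, None]
     + 0 * s[2][None, None, :]).ravel() * np.tile(s[2], I[0] * I[1])

def fcs_rows(M):
    """Sketch each row of M (rows are vectorized feature tensors)."""
    out = np.zeros((M.shape[0], Jt))
    for r in range(M.shape[0]):
        np.add.at(out[r], H, S * M[r])
    return out

B, C = 8, 10
X = rng.standard_normal((B, It)) / np.sqrt(It)   # stands in for X_(1)
W = rng.standard_normal((C, It)) / np.sqrt(It)   # stands in for W_(N+1)
b = rng.standard_normal(C)

Y = X @ W.T + b                        # exact TRL output
Y_hat = fcs_rows(X) @ fcs_rows(W).T + b
print(np.abs(Y_hat - Y).max())
```

Only the short Hash pairs and the $B\times\tilde{J}$ and $C\times\tilde{J}$ sketches are kept at inference time.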
In this experiment, we compare the performance of CS, TS and FCS based TRL on the FMNIST dataset \cite{xiao2017fashion} ($C=10$). The TRL model is composed of two convolutional and max-pooling layers. We assume the regression weight tensor admits a low-rank CPD (i.e., CP-TRL \cite{cao2017tensor}) with the target CP rank set to $5$. By default, the input activation $\mathcal{X}$ fed to the TRL is of size $B\times7\times7\times32$.
We show the network structure in Fig. \ref{fig:network-structure} and comparison results in Table \ref{tab:network-comp}. It can be seen that FCS achieves better classification performance than CS and TS under almost all CRs.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{figures/network.pdf}
\caption{The network structure illustration for the sketching based CP-TRL. The batch size is set to $1$ for convenience of display.}
\label{fig:network-structure}
\end{figure}
\begin{table}[H]
\centering
\caption{Classification accuracy by CS, TS and FCS based CP-TRL on dataset FMNIST under various CRs.}
\scalebox{0.72}{
\begin{tabular}{ccccccccccc}
\toprule
CR &20 &22.22 &25 &28.57 &33.33 &40 &50 &66.67 &100 &200 \\
\midrule
\specialrule{0pt}{2pt}{2pt}
CS &\textbf{0.7862} & 0.7529 & 0.7743 &0.7748 & 0.7451 &0.7333 &0.7555 &0.7322 &0.7732 &0.7104 \\
\specialrule{0pt}{2pt}{2pt}
TS &0.7461 &0.7587 &0.7161 &0.7665 &0.7438 &0.6446 &0.6871 &0.6735 &0.7123 &0.6999 \\
\specialrule{0pt}{2pt}{2pt}
FCS &0.7829 &\textbf{0.8011} &\textbf{0.7874} &\textbf{0.7815} &\textbf{0.7881} &\textbf{0.7697} &\textbf{0.7706} &\textbf{0.7865} &\textbf{0.7830} &\textbf{0.7696} \\
\bottomrule
\end{tabular}
}
\label{tab:network-comp}
\end{table}
\subsection{Kronecker product and tensor contraction compression}
We further compare the performance of FCS against CS and HCS for Kronecker product and tensor contraction compression under the same CRs. We set the number of independent sketches to $D=20$ for all sketching methods.
\subsubsection{Kronecker product compression}
For two matrices
$\mathbf{A}\in\mathbb{R}^{I_1\times I_2}$, $\mathbf{B}\in\mathbb{R}^{I_3\times I_4}$, we can compress the Kronecker product $\mathbf{A}\otimes\mathbf{B}\in\mathbb{R}^{I_1I_3\times I_2I_4}$ using FCS by:
\begin{equation}
\nonumber
\begin{aligned}
&{\rm FCS}(\mathbf{A}\otimes\mathbf{B}; \left\{\mathbf{h}_n, \mathbf{s}_n\right\}_{n=1}^{4})\\
=&\operatorname{F}^{-1}(\operatorname{F}({\rm CS}({\rm vec}(\mathbf{A}); \left\{\mathbf{h}_n, \mathbf{s}_n\right\}_{n=1}^{2}), \tilde{J})*\operatorname{F}({\rm CS}({\rm vec}(\mathbf{B}); \left\{\mathbf{h}_n, \mathbf{s}_n\right\}_{n=3}^{4}), \tilde{J})),
\end{aligned}
\end{equation}
and the decompressing rule is
\begin{equation}
\nonumber
\begin{aligned}
&\widehat{\mathbf{A}\otimes\mathbf{B}}_{I_3(i_1-1)+i_3, I_4(i_2-1)+i_4}\\
=&\mathbf{s}_1(i_1)\mathbf{s}_2(i_2)\mathbf{s}_3(i_3)\mathbf{s}_4(i_4){\rm FCS}(\mathbf{A}\otimes\mathbf{B})_{{\rm mod}(\mathbf{h}_1(i_1)+\mathbf{h}_2(i_2)+\mathbf{h}_3(i_3)+\mathbf{h}_4(i_4)-4, 4J-3)+1},
\end{aligned}
\end{equation}where $i_n\in[I_n]$, $J$ is the Hash length, and $\tilde{J}=4J-3$. We generate two matrices $\mathbf{A}\in\mathbb{R}^{30\times40}$ and $\mathbf{B}\in\mathbb{R}^{40\times50}$ with each entry randomly drawn from the uniform distribution on $[-5, 5]$. We compare the compressing time, decompressing time, relative error, and memory cost for Hash functions of CS, HCS and FCS under various CRs. The comparison results are shown in Fig. \ref{fig:kronecker_product}. Clearly, the compressing time of FCS is shorter than that of CS when the CR is small. Although the compressing time of FCS is longer than that of CS when the CR equals $16$, we argue this is acceptable, since the relative errors of all compared methods exceed $1$ at that CR, meaning the accuracy is worse than the all-zero recovery; in practice one should therefore focus on lower compression ratios with relative errors well below $1$. Besides, although HCS compresses faster, it has a larger relative error and a much longer decompressing time. Finally, the Hash memory required by FCS is only about ten percent of that of CS.
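The compression rule can be checked numerically: convolving the two factor sketches via zero-padded FFTs reproduces exactly the composed-hash CS of ${\rm vec}(\mathbf{A}\otimes\mathbf{B})$, without ever forming the Kronecker product. Sizes and the 0-based indexing (which removes the modular shift) are illustrative assumptions:

```python
import numpy as np
from numpy.fft import fft, ifft

rng = np.random.default_rng(8)

def cs(x, h, s, J):
    out = np.zeros(J)
    np.add.at(out, h, s * x)
    return out

I1, I2, I3, I4 = 6, 7, 8, 9
J = 20                                # common Hash length
h = [rng.integers(0, J, In) for In in (I1, I2, I3, I4)]
s = [rng.choice([-1.0, 1.0], In) for In in (I1, I2, I3, I4)]
Jt = 4 * J - 3

A = rng.standard_normal((I1, I2))
B = rng.standard_normal((I3, I4))

# Compress from the two factor sketches; A kron B is never formed.
HA = (h[0][:, None] + h[1][None, :]).ravel()
SA = (s[0][:, None] * s[1][None, :]).ravel()
HB = (h[2][:, None] + h[3][None, :]).ravel()
SB = (s[2][:, None] * s[3][None, :]).ravel()
csA = cs(A.ravel(), HA, SA, 2 * J - 1)
csB = cs(B.ravel(), HB, SB, 2 * J - 1)
fcs = ifft(fft(csA, Jt) * fft(csB, Jt)).real

# Reference: composed-hash Count Sketch of all products A[p] * B[q].
bucket = (HA[:, None] + HB[None, :]).ravel()
sign = (SA[:, None] * SB[None, :]).ravel()
ref = cs(np.outer(A.ravel(), B.ravel()).ravel(), bucket, sign, Jt)
print(np.allclose(fcs, ref))   # True

# Decompress one entry: product of the four signs times the bucket value
# (0-based bucket sums stay below Jt, so no modular shift is needed).
i1, i2, i3, i4 = 2, 3, 4, 5
est = (s[0][i1] * s[1][i2] * s[2][i3] * s[3][i4]
       * fcs[h[0][i1] + h[1][i2] + h[2][i3] + h[3][i4]])
```

A single sketch gives a noisy entry estimate; the experiments use the median over $D=20$ independent sketches.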
\begin{figure}[H]
\centerline{\includegraphics[width=0.85\columnwidth]{figures/kron-small.pdf}}
\caption{Compressing time, decompressing time, relative error, and memory for Hash functions for CS, HCS and FCS based Kronecker product.}
\label{fig:kronecker_product}
\end{figure}
\subsubsection{Tensor contraction compression}
Given two tensors $\mathcal{A}\in\mathbb{R}^{I_1\times I_2\times L}$ and $\mathcal{B}\in\mathbb{R}^{L\times I_3\times I_4}$, the tensor contraction over the third mode of $\mathcal{A}$ and the first mode of $\mathcal{B}$ produces a tensor $\mathcal{A}\circledcirc_{3, 1}\mathcal{B}\in\mathbb{R}^{I_1\times I_2\times I_3\times I_4}$. Applying FCS, we approximately compress the contraction by:
\begin{footnotesize}
\begin{equation}
\nonumber
\begin{aligned}
&{\rm FCS}(\mathcal{A}\circledcirc_{3, 1}\mathcal{B}; \left\{\mathbf{h}_n, \mathbf{s}_n\right\}_{n=1}^{4})\\
=&\sum_{l=1}^{L}\operatorname{F}^{-1}(\operatorname{F}({\rm CS}({\rm vec}(\mathbf{A}(:,:,l)); \left\{\mathbf{h}_n, \mathbf{s}_n\right\}_{n=1}^{2}), \tilde{J})*\operatorname{F}({\rm CS}({\rm vec}(\mathbf{B}(l,:,:)); \left\{\mathbf{h}_n, \mathbf{s}_n\right\}_{n=3}^{4}), \tilde{J})),\\
\end{aligned}
\end{equation}
\end{footnotesize}
The decompressing rule is
\begin{equation}
\nonumber
\begin{aligned}
&\widehat{\mathcal{A}\circledcirc_{3, 1}\mathcal{B}}_{i_1, i_2, i_3, i_4}\\
=&\mathbf{s}_1(i_1)\mathbf{s}_2(i_2)\mathbf{s}_3(i_3)\mathbf{s}_4(i_4){\rm FCS}(\mathcal{A}\circledcirc_{3, 1}\mathcal{B})_{{\rm mod}(\mathbf{h}_1(i_1)+\mathbf{h}_2(i_2)+\mathbf{h}_3(i_3)+\mathbf{h}_4(i_4)-4, 4J-3)+1},
\end{aligned}
\end{equation}where $i_n\in[I_n]$, $J$ is the Hash length, and $\tilde{J}=4J-3$. We generate $\mathcal{A}\in\mathbb{R}^{30\times40\times50}$ and $\mathcal{B}\in\mathbb{R}^{50\times40\times30}$ with each entry drawn uniformly from $[0, 10]$. The comparison results are shown in Fig. \ref{fig:tensor_contraction}. Again, when the CR is small, FCS compresses faster than CS, decompresses faster than HCS, and is more accurate than HCS. Moreover, its Hash memory is only about $5$ percent of that of CS.
\begin{figure}[H]
\centerline{\includegraphics[width=0.85\columnwidth]{figures/contraction-new.pdf}}
\caption{Compressing time, decompressing time, relative error, and memory for Hash functions for CS, HCS and FCS based tensor contraction.}
\label{fig:tensor_contraction}
\end{figure}
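The compressing rule above hinges on the identity that the fused sketch of the contraction equals the sum over $l$ of the convolutions of the two $2$-mode count sketches: since $(2J-1)+(2J-1)-1 = 4J-3$, a length-$(4J-3)$ FFT computes this convolution exactly. A minimal NumPy illustration (a $0$-indexed sketch with made-up small sizes, not the paper's code) checking the FFT-based rule against a brute-force sketch of $\mathcal{A}\circledcirc_{3,1}\mathcal{B}$:

```python
import numpy as np

rng = np.random.default_rng(0)
I1, I2, L, I3, I4, J = 3, 4, 2, 4, 3, 5   # made-up small sizes
A = rng.uniform(0, 10, (I1, I2, L))
B = rng.uniform(0, 10, (L, I3, I4))
# 0-indexed surrogates for the Hash pairs (h_n, s_n), n = 1..4
h = [rng.integers(0, J, size) for size in (I1, I2, I3, I4)]
s = [rng.choice([-1.0, 1.0], size) for size in (I1, I2, I3, I4)]
n = 4 * J - 3                              # fused sketch length

def cs2(M, ha, sa, hb, sb):
    """Count sketch of vec(M): bucket ha(i)+hb(j), sign sa(i)*sb(j); length 2J-1."""
    out = np.zeros(2 * J - 1)
    for i in range(M.shape[0]):
        for j in range(M.shape[1]):
            out[ha[i] + hb[j]] += sa[i] * sb[j] * M[i, j]
    return out

# FFT-based compressing rule: sum over l of the convolved 2-mode sketches
fcs = np.zeros(n)
for l in range(L):
    ca = cs2(A[:, :, l], h[0], s[0], h[1], s[1])
    cb = cs2(B[l, :, :], h[2], s[2], h[3], s[3])
    fcs += np.fft.irfft(np.fft.rfft(ca, n) * np.fft.rfft(cb, n), n)

# brute-force sketch of the full contraction, for comparison
T = np.einsum('abl,lcd->abcd', A, B)       # the contraction of A and B
ref = np.zeros(n)
for i1 in range(I1):
    for i2 in range(I2):
        for i3 in range(I3):
            for i4 in range(I4):
                t = h[0][i1] + h[1][i2] + h[2][i3] + h[3][i4]
                ref[t] += s[0][i1] * s[1][i2] * s[2][i3] * s[3][i4] * T[i1, i2, i3, i4]

assert np.allclose(fcs, ref)
```

Because the Hash sums never exceed $4J-4$, the zero-padded FFT of length $4J-3$ performs the linear convolution with no wrap-around, so the two computations agree up to floating-point error.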
\section{Conclusion}
A novel sketching method dubbed FCS is presented, which applies CS with multiple shorter Hash functions to the vectorized input tensor. We apply FCS to approximate tensor contraction, with its validity guaranteed by theoretical analysis. Numerical experiments on CPD, TRN compression, Kronecker product and tensor contraction compression confirm that FCS achieves competitive approximation quality and running speed compared with various sketching methods.
\begin{comment}
\section{Proof of Proposition $1$}
\label{appendix1}
\begin{proof*}
First, we prove that for any $3$rd-order tensors $\mathcal{M}$, $\mathcal{N}$ of the same size, $\left \langle {\rm FCS}(\mathcal{M}), {\rm FCS}(\mathcal{N})\right \rangle$ is a consistent estimator of $\left \langle \mathcal{M},\mathcal{N}\right \rangle$. To this end, we prove that it is unbiased with bounded variance.
\subsection{Unbiasedness}
Denote $\mathcal{H}(i_1,i_2,i_3)=\mathbf{h}_1(i_1)+\mathbf{h}_2(i_2)+\mathbf{h}_3(i_3)$, $\mathcal{S}(i_1,i_2,i_3)=\mathbf{s}_1(i_1)\mathbf{s}_2(i_2)\mathbf{s}_3(i_3)$, we have
\begin{equation}
\begin{aligned}
&\left \langle {\rm FCS}(\mathcal{M}), {\rm FCS}(\mathcal{N})\right \rangle
=\sum_t{\rm FCS}(\mathcal{M})_t{\rm FCS}(\mathcal{N})_t\\
=&\sum_t(\sum_{\mathcal{H}(i_1,i_2,i_3)=t}\mathcal{S}(i_1,i_2,i_3)\mathcal{M}_{i_1,i_2,i_3})(\sum_{\mathcal{H}(j_1,j_2,j_3)=t}\mathcal{S}(j_1,j_2,j_3)\mathcal{N}_{j_1,j_2,j_3}),
\end{aligned}
\end{equation}
\begin{footnotesize}
\begin{equation}
\begin{aligned}
&\operatorname{E}_{\mathbf{h},\mathbf{s}}\left \langle {\rm FCS}(\mathcal{M}), {\rm FCS}(\mathcal{N})\right \rangle\\
=&\sum_{\substack{i_1,i_2,i_3\\j_1,j_2,j_3}}\operatorname{E}\left[ \delta(\mathcal{H}(i_1,i_2,i_3),\mathcal{H}(j_1,j_2,j_3)) \right]\operatorname{E}[\mathcal{S}(i_1,i_2,i_3)\mathcal{S}(j_1,j_2,j_3)]\mathcal{M}_{i_1,i_2,i_3}\mathcal{N}_{j_1,j_2,j_3},
\end{aligned}
\end{equation}
\end{footnotesize}where $\operatorname{E}$ denotes the expectation. Given that $\mathbf{h}_1,\mathbf{h}_2,\mathbf{h}_3,\mathbf{s}_1,\mathbf{s}_2,\mathbf{s}_3$ are $2$-wise independent, we have
\begin{equation}
\operatorname{E}[\mathcal{S}(i_1,i_2,i_3)\mathcal{S}(j_1,j_2,j_3)]=\left\{
\begin{aligned}
1, \ &i_1=j_1, i_2=j_2,i_3=j_3 \\
0, \ &{\rm otherwise}.
\end{aligned}
\right.
\end{equation}
Obviously
$\operatorname{E}[\delta(\mathcal{H}(i_1,i_2,i_3),\mathcal{H}(j_1,j_2,j_3))]|_{\substack{i_1=j_1\\i_2=j_2\\i_3=j_3}}=1$. Hence
\begin{equation}
\label{expectation}
\operatorname{E}_{\mathbf{h},\mathbf{s}}\left \langle {\rm FCS}(\mathcal{M}), {\rm FCS}(\mathcal{N})\right \rangle=\sum_{i_1,i_2,i_3}\mathcal{M}_{i_1,i_2,i_3}\mathcal{N}_{i_1,i_2,i_3}=\left \langle \mathcal{M}, \mathcal{N}\right \rangle.
\end{equation}
\subsection{Variance boundedness}
\begin{equation}
\begin{aligned}
\label{variance}
&\left \langle {\rm FCS}(\mathcal{M}), {\rm FCS}(\mathcal{N})\right \rangle^2\\
=&\!\!\!\!\sum_{\substack{i_1,i_2,i_3\\j_1,j_2,j_3\\i_1',i_2',i_3'\\j_1',j_2',j_3'}}\!\!\!\delta(\mathcal{H}(i_1,i_2,i_3),\mathcal{H}(j_1,j_2,j_3))\delta(\mathcal{H}(i_1',i_2',i_3'),\mathcal{H}(j_1',j_2',j_3'))\mathcal{S}(i_1,i_2,i_3)\\
&\mathcal{S}(j_1,j_2,j_3)\mathcal{S}(i_1',i_2',i_3')\mathcal{S}(j_1',j_2',j_3')\mathcal{M}_{i_1,i_2,i_3}\mathcal{N}_{j_1,j_2,j_3}\mathcal{M}_{i_1',i_2',i_3'}\mathcal{N}_{j_1',j_2',j_3'}.
\end{aligned}
\end{equation}
Denote $\mathfrak{G}_1=\left\{i_1,j_1,i_1',j_1'\right\}, \mathfrak{G}_2=\left\{i_2,j_2,i_2',j_2'\right\}, \mathfrak{G}_3=\left\{i_3,j_3,i_3',j_3'\right\}$. We say that $\mathfrak{G}_k$ has a \emph{best match} if $i_k=j_k$ and $i_k'=j_k'$, for $k\in[3]$. Then (\ref{variance}) can be grouped into the following cases:\\
\textbf{Case 1} $\mathfrak{G}_1, \mathfrak{G}_2, \mathfrak{G}_3$ all have best matches, then
\begin{equation}
{\rm S}_1=\sum_{\substack{i_1,i_2,i_3\\i_1',i_2',i_3'}}\mathcal{M}_{i_1,i_2,i_3}\mathcal{N}_{i_1,i_2,i_3}\mathcal{M}_{i_1',i_2',i_3'}\mathcal{N}_{i_1',i_2',i_3'}=\left \langle \mathcal{M}, \mathcal{N}\right \rangle^2.
\end{equation}
\textbf{Case 2} Exactly one of $\mathfrak{G}_1, \mathfrak{G}_2, \mathfrak{G}_3$ does not have a best match; take $\mathfrak{G}_1$ as an example. Suppose $i_1=i_1'\ne j_1=j_1'$, then
\begin{equation}
\begin{aligned}
{\rm S}_2&=\dbinom{3}{1}\dbinom{2}{1}\sum_{\substack{i_1,i_2,i_3\\j_1,i_2',i_3'}}\operatorname{E}[\delta(\mathbf{h}_1(i_1),\mathbf{h}_1(j_1))^2]\mathcal{M}_{i_1,i_2,i_3}\mathcal{N}_{j_1,i_2,i_3}\mathcal{M}_{i_1,i_2',i_3'}\mathcal{N}_{j_1,i_2',i_3'}\\
&=\frac{6}{J}\sum_{\substack{i_1,i_2,i_3\\j_1,i_2',i_3'}}\mathcal{M}_{i_1,i_2,i_3}\mathcal{N}_{j_1,i_2,i_3}\mathcal{M}_{i_1,i_2',i_3'}\mathcal{N}_{j_1,i_2',i_3'}\\
&=\frac{6}{J}\sum_{i_1,j_1} \left \langle \mathcal{M}(\mathbf{e}_{i_1}, \mathbf{I}, \mathbf{I}), \mathcal{N}(\mathbf{e}_{j_1}, \mathbf{I}, \mathbf{I}) \right \rangle^2\\
&\le\frac{6}{J}\sum_{i_1,j_1} \Vert \mathcal{M}(\mathbf{e}_{i_1}, \mathbf{I}, \mathbf{I}) \Vert_{\operatorname{F}}^2\Vert\mathcal{N}(\mathbf{e}_{j_1}, \mathbf{I}, \mathbf{I})\Vert_{\operatorname{F}}^2\\
&=\frac{6}{J}\Vert\mathcal{M}\Vert_{\operatorname{F}}^2\Vert\mathcal{N}\Vert_{\operatorname{F}}^2
\end{aligned}
\end{equation}
holds due to the Cauchy--Schwarz inequality.\\
\textbf{Case 3} Exactly one of $\mathfrak{G}_1, \mathfrak{G}_2, \mathfrak{G}_3$ has a best match; take $\mathfrak{G}_1$ as an example. Suppose $i_2=i_2'\ne j_2=j_2'$, $i_3=i_3'\ne j_3=j_3'$, and denote $\mathbf{H}(i_2, i_3) = \mathbf{h}_2(i_2)+\mathbf{h}_3(i_3)$; then
\begin{small}
\begin{equation}
\nonumber
\begin{aligned}
{\rm S}_3&=\dbinom{3}{1}\dbinom{2}{1}\dbinom{2}{1}\sum_{\substack{i_1,i_2,i_3\\j_2,j_3,i_1'}}\operatorname{E}[\delta(\mathbf{H}(i_2, i_3),\mathbf{H}(j_2, j_3))^2]\mathcal{M}_{i_1,i_2,i_3}\mathcal{N}_{i_1,j_2,j_3}\mathcal{M}_{i_1',i_2,i_3}\mathcal{N}_{i_1',j_2,j_3}
\end{aligned}
\end{equation}
\end{small}From the $2$-wise independence of $\mathbf{h}_2$ and $\mathbf{h}_3$, we have
\begin{footnotesize}
\begin{equation}
\operatorname{P}[\mathbf{H}(i_2, i_3)=t]=\left\{
\begin{aligned}
\sum_{k=0}^{t}\operatorname{P}[\mathbf{h}_2(i_2)=k, \mathbf{h}_3(i_3)=t-k]&=\frac{t+1}{J^2}, &t\le J-1 \\
\sum_{k=t-J+1}^{J-1}\operatorname{P}[\mathbf{h}_2(i_2)=k, \mathbf{h}_3(i_3)=t-k]&=\frac{2J-1-t}{J^2}, &t\ge J
\end{aligned}
\right.
\end{equation}
\end{footnotesize}
Obviously
\begin{equation}
\begin{aligned}
&\operatorname{E}[\delta(\mathbf{H}(i_2, i_3),\mathbf{H}(j_2, j_3))^2]=\operatorname{E}[\delta(\mathbf{H}(i_2, i_3),\mathbf{H}(j_2, j_3))]\\
=&\operatorname{P}[\mathbf{H}(i_2, i_3)=\mathbf{H}(j_2, j_3)]\\
=&(\frac{1}{J^2})^2+(\frac{2}{J^2})^2+\cdots+(\frac{J}{J^2})^2+(\frac{J-1}{J^2})^2+\cdots+(\frac{1}{J^2})^2=\frac{2J^2+1}{3J^3}.
\end{aligned}
\label{eq:h2_i2+h3_i3=h2_j2+h3_j3}
\end{equation}
Therefore
\begin{equation}
\begin{aligned}
{\rm S}_3&=\frac{8J^2+4}{J^3}\sum_{\substack{i_1,i_2,i_3\\j_2,j_3,i_1'}}\mathcal{M}_{i_1,i_2,i_3}\mathcal{N}_{i_1,j_2,j_3}\mathcal{M}_{i_1',i_2,i_3}\mathcal{N}_{i_1',j_2,j_3}\\
&=\frac{8J^2+4}{J^3}\sum_{\substack{i_2,i_3\\j_2,j_3}}\left \langle \mathcal{M}(\mathbf{I}, \mathbf{e}_{i_2}, \mathbf{e}_{i_3}), \mathcal{N}(\mathbf{I}, \mathbf{e}_{j_2}, \mathbf{e}_{j_3}) \right \rangle^2\\
&\le\frac{8J^2+4}{J^3}\sum_{\substack{i_2,i_3\\j_2,j_3}}\Vert\mathcal{M}(\mathbf{I}, \mathbf{e}_{i_2}, \mathbf{e}_{i_3})\Vert_{\operatorname{F}}^2\Vert\mathcal{N}(\mathbf{I}, \mathbf{e}_{j_2}, \mathbf{e}_{j_3})\Vert_{\operatorname{F}}^2\\
&=\frac{8J^2+4}{J^3}\Vert\mathcal{M}\Vert_{\operatorname{F}}^2\Vert\mathcal{N}\Vert_{\operatorname{F}}^2.
\end{aligned}
\end{equation}
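The collision probability $\operatorname{P}[\mathbf{H}(i_2, i_3)=\mathbf{H}(j_2, j_3)]=\frac{2J^2+1}{3J^3}$ computed above can be confirmed by exact enumeration over the four independent uniform Hash values; a short Python check (illustrative only, using exact rational arithmetic):

```python
from fractions import Fraction
from itertools import product

def p_sum_collision(J):
    # P[h2(i2) + h3(i3) = h2(j2) + h3(j3)] for four independent
    # uniform Hash values on {0, ..., J-1}
    hits = sum(a + b == c + d for a, b, c, d in product(range(J), repeat=4))
    return Fraction(hits, J ** 4)

for J in (2, 3, 5, 8):
    assert p_sum_collision(J) == Fraction(2 * J * J + 1, 3 * J ** 3)
```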
\textbf{Case 4} None of $\mathfrak{G}_1, \mathfrak{G}_2, \mathfrak{G}_3$ has a best match. Suppose $i_1=i_1'\ne j_1=j_1', i_2=i_2'\ne j_2=j_2', i_3=i_3'\ne j_3=j_3'$, then
\begin{equation}
\nonumber
\begin{aligned}
{\rm S}_4&=\dbinom{2}{1}\dbinom{2}{1}\dbinom{2}{1}\sum_{\substack{i_1,i_2,i_3\\j_1,j_2,j_3}}\operatorname{E}[\delta(\mathcal{H}(i_1,i_2,i_3),\mathcal{H}(j_1,j_2,j_3))]\mathcal{M}_{i_1,i_2,i_3}^2\mathcal{N}_{j_1,j_2,j_3}^2.
\end{aligned}
\end{equation}
From \textbf{Case 3}:
\begin{equation}
\operatorname{P}[\mathbf{H}(i_2, i_3)=t]=\left\{
\begin{aligned}
\frac{t+1}{J^2}, t\le J-1 \\
\frac{2J-1-t}{J^2}, t\ge J
\end{aligned}
\right.
\end{equation}
Since $\operatorname{P}[\mathbf{h}_1(i_1)=t]=\frac{1}{J}$ for $t=0,\cdots,J-1$, we have
\begin{scriptsize}
\begin{equation}
\nonumber
\begin{aligned}
&\operatorname{P}[\mathcal{H}(i_1,i_2,i_3)=t]=\operatorname{P}[\mathbf{h}_1(i_1)+\mathbf{H}(i_2, i_3)=t]\\
&=\left\{
\begin{aligned}
&\sum_{k=0}^{t}\operatorname{P}[\mathbf{H}(i_2, i_3)=k, \mathbf{h}_1(i_1)=t-k]=\frac{(t+1)(t+2)}{2J^3}, &0\le t\le J-1 \\
&\sum_{k=t-J+1}^{t}\operatorname{P}[\mathbf{H}(i_2, i_3)=k, \mathbf{h}_1(i_1)=t-k]=\frac{-2t^2+6(J-1)t+9J-3J^2-4}{2J^3}, &J\le t\le 2J-2 \\
&\sum_{k=t-J+1}^{J-1}\operatorname{P}[\mathbf{H}(i_2, i_3)=k, \mathbf{h}_1(i_1)=t-k]=\frac{(3J-1-t)(3J-2-t)}{2J^3}, &2J-1\le t\le 3J-3
\end{aligned}
\right.
\end{aligned}
\end{equation}
\end{scriptsize}Therefore,
\begin{small}
\begin{equation}
\begin{aligned}
&\operatorname{P}[\mathcal{H}(i_1,i_2,i_3)=\mathcal{H}(j_1,j_2,j_3)]\\
=&\frac{1}{(2J^3)^2}(\sum_{t=0}^{J-1}(t+1)^2(t+2)^2+\sum_{t=J}^{2J-2}(-2t^2+6(J-1)t+9J-3J^2-4)^2\\
+&\sum_{t=2J-1}^{3J-3}(3J-1-t)^2(3J-2-t)^2)\\
=&\frac{11J^4+5J^2+4}{20J^5}.
\end{aligned}
\label{eq:h1_i1+h2_i2+h3_i3=h1_j1+h2_j2+h3_j3}
\end{equation}
\end{small}Hence,
\begin{small}
\begin{equation}
\nonumber
{\rm S}_4=8\cdot\frac{11J^4+5J^2+4}{20J^5}\sum_{\substack{i_1,i_2,i_3\\j_1,j_2,j_3}}\mathcal{M}_{i_1,i_2,i_3}^2\mathcal{N}_{j_1,j_2,j_3}^2=\frac{22J^4+10J^2+8}{5J^5}\Vert\mathcal{M}\Vert_{\operatorname{F}}^2\Vert\mathcal{N}\Vert_{\operatorname{F}}^2.
\end{equation}
\end{small}
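Similarly, the three-Hash collision probability $\frac{11J^4+5J^2+4}{20J^5}$ from (\ref{eq:h1_i1+h2_i2+h3_i3=h1_j1+h2_j2+h3_j3}) can be confirmed by exact enumeration of the distribution of the Hash sum (illustrative only):

```python
from fractions import Fraction
from itertools import product

def p_triple_collision(J):
    # distribution of h1 + h2 + h3 for independent uniform Hashes on {0, ..., J-1}
    counts = {}
    for t in product(range(J), repeat=3):
        counts[sum(t)] = counts.get(sum(t), 0) + 1
    # collision probability of two i.i.d. such sums
    return Fraction(sum(c * c for c in counts.values()), J ** 6)

for J in (2, 3, 5, 8):
    assert p_triple_collision(J) == Fraction(11 * J**4 + 5 * J * J + 4, 20 * J**5)
```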
As a result, we have
\begin{equation}
\begin{aligned}
&\operatorname{E}_{\mathbf{h},\mathbf{s}}[\left \langle {\rm FCS} (\mathcal{M}), {\rm FCS} (\mathcal{N})\right \rangle^2]\\
=&{\rm S}_1+{\rm S}_2+{\rm S}_3+{\rm S}_4\\
\le&\left \langle \mathcal{M}, \mathcal{N}\right \rangle^2+\frac{92J^4+30J^2+8}{5J^5}\Vert\mathcal{M}\Vert_{\operatorname{F}}^2\Vert\mathcal{N}\Vert_{\operatorname{F}}^2.
\end{aligned}
\end{equation}
Therefore
\begin{equation}
\begin{aligned}
&\operatorname{V}_{\mathbf{h},\mathbf{s}}[\left \langle {\rm FCS}(\mathcal{M}), {\rm FCS}(\mathcal{N})\right \rangle]\\
=&\operatorname{E}_{\mathbf{h},\mathbf{s}}[\left \langle {\rm FCS}(\mathcal{M}), {\rm FCS}(\mathcal{N})\right \rangle^2]-\operatorname{E}_{\mathbf{h},\mathbf{s}}[\left \langle{\rm FCS} (\mathcal{M}), {\rm FCS}(\mathcal{N})\right \rangle]^2\\
\le&\frac{92J^4+30J^2+8}{5J^5}\Vert\mathcal{M}\Vert_{\operatorname{F}}^2\Vert\mathcal{N}\Vert_{\operatorname{F}}^2.
\end{aligned}
\label{eq:var_fcs}
\end{equation}
Combining (\ref{expectation}) and (\ref{eq:var_fcs}), the consistency is proved.\\
We further prove that FCS yields a better estimator of the tensor inner product than TS.
\begin{small}
\begin{equation}
\label{TS_var}
\begin{aligned}
&\left \langle {\rm TS}(\mathcal{M}), {\rm TS}(\mathcal{N})\right \rangle^2=\sum_{\substack{i_1,i_2,i_3\\j_1,j_2,j_3\\i_1',i_2',i_3'\\j_1',j_2',j_3'}}\delta(\mathcal{H}(i_1, i_2, i_3) \ {\rm mod} \ J, \mathcal{H}(j_1, j_2, j_3) \ {\rm mod} \ J)\\
&\delta(\mathcal{H}(i_1', i_2', i_3') \ {\rm mod} \ J, \mathcal{H}(j_1', j_2', j_3') \ {\rm mod} \ J)\mathcal{S}(i_1, i_2, i_3)\mathcal{S}(j_1, j_2, j_3)\\
&\mathcal{S}(i_1', i_2', i_3')\mathcal{S}(j_1', j_2', j_3')\mathcal{M}_{i_1,i_2,i_3}\mathcal{N}_{j_1,j_2,j_3}\mathcal{M}_{i_1',i_2',i_3'}\mathcal{N}_{j_1',j_2',j_3'}.
\end{aligned}
\end{equation}
\end{small}Define $\mathfrak{G}_1, \mathfrak{G}_2, \mathfrak{G}_3$ as before. Then (\ref{TS_var}) can be grouped into the following cases:\\
\textbf{Case 1} $\mathfrak{G}_1, \mathfrak{G}_2, \mathfrak{G}_3$ all have best matches, then
\begin{equation}
{\rm S}_1=\sum_{\substack{i_1,i_2,i_3\\i_1',i_2',i_3'}}\mathcal{M}_{i_1,i_2,i_3}\mathcal{N}_{i_1,i_2,i_3}\mathcal{M}_{i_1',i_2',i_3'}\mathcal{N}_{i_1',i_2',i_3'}=\left \langle \mathcal{M}, \mathcal{N}\right \rangle^2.
\end{equation}
\textbf{Case 2} Exactly one of $\mathfrak{G}_1, \mathfrak{G}_2,\mathfrak{G}_3$ does not have a best match; take $\mathfrak{G}_1$ as an example. Suppose $i_1=i_1'\ne j_1=j_1'$, then
\begin{equation}
\nonumber
\begin{aligned}
{\rm S}_2&=\dbinom{3}{1}\dbinom{2}{1}\sum_{\substack{i_1,i_2,i_3\\j_1,i_2',i_3'}}\operatorname{E}[\delta(\mathcal{H}(i_1, i_2, i_3)\ {\rm mod} \ J ,\mathcal{H}(j_1, i_2, i_3)\ {\rm mod} \ J)\\
&\delta(\mathcal{H}(i_1, i_2', i_3')\ {\rm mod} \ J ,\mathcal{H}(j_1, i_2', i_3')\ {\rm mod} \ J)]\mathcal{M}_{i_1,i_2,i_3}\mathcal{N}_{j_1,i_2,i_3}\mathcal{M}_{i_1,i_2',i_3'}\mathcal{N}_{j_1,i_2',i_3'}.
\end{aligned}
\end{equation}
Obviously, $\delta(\mathcal{H}(i_1, i_2, i_3)\ {\rm mod} \ J ,\mathcal{H}(j_1, i_2, i_3)\ {\rm mod} \ J)=1$ only if $\mathcal{H}(i_1, i_2, i_3)\ {\rm mod} \ J=\mathcal{H}(j_1, i_2, i_3)\ {\rm mod} \ J$.
Denote
\begin{equation}
\begin{aligned}
\mathbf{h}_1(i_1)+\mathbf{h}_2(i_2)+\mathbf{h}_3(i_3) &:=k_1J+m\\ \mathbf{h}_1(j_1)+\mathbf{h}_2(i_2)+\mathbf{h}_3(i_3)&:=k_2J+m,
\end{aligned}
\end{equation}
we have
\begin{equation}
\mathbf{h}_1(i_1)-\mathbf{h}_1(j_1)=(k_1-k_2)J:=kJ.
\label{eq:h1_i1-h1_j1}
\end{equation}
Given $\mathbf{h}_1(i_1), \mathbf{h}_1(j_1)\in[J]$, (\ref{eq:h1_i1-h1_j1}) holds iff $k=0$, i.e. $\mathbf{h}_1(i_1)=\mathbf{h}_1(j_1)$. On the other hand, when $\mathbf{h}_1(i_1)=\mathbf{h}_1(j_1)$, obviously
\begin{equation}
\delta(\mathcal{H}(i_1, i_2', i_3')\ {\rm mod} \ J ,\mathcal{H}(j_1, i_2', i_3') \ {\rm mod} \ J)=1.
\end{equation}
Hence
\begin{equation}
\nonumber
\begin{aligned}
&\operatorname{E}[\delta(\mathbf{h}_1(i_1)+\mathbf{h}_2(i_2)+\mathbf{h}_3(i_3)\ {\rm mod} \ J ,\mathbf{h}_1(j_1)+\mathbf{h}_2(i_2)+\mathbf{h}_3(i_3)\ {\rm mod} \ J)\\
&\delta(\mathbf{h}_1(i_1)+\mathbf{h}_2(i_2')+\mathbf{h}_3(i_3')\ {\rm mod} \ J ,\mathbf{h}_1(j_1)+\mathbf{h}_2(i_2')+\mathbf{h}_3(i_3')\ {\rm mod} \ J)]\\
=&\operatorname{P}[\mathbf{h}_1(i_1)=\mathbf{h}_1(j_1)]=\frac{1}{J}.
\end{aligned}
\end{equation}
Therefore
\begin{equation}
\nonumber
{\rm S}_2=\dbinom{3}{1}\dbinom{2}{1}\frac{1}{J}\sum_{\substack{i_1,i_2,i_3\\j_1,i_2',i_3'}}\mathcal{M}_{i_1,i_2,i_3}\mathcal{N}_{j_1,i_2,i_3}\mathcal{M}_{i_1,i_2',i_3'}\mathcal{N}_{j_1,i_2',i_3'}\le\frac{6}{J}\Vert\mathcal{M}\Vert_{\operatorname{F}}^2\Vert\mathcal{N}\Vert_{\operatorname{F}}^2.
\end{equation}
\textbf{Case 3} Exactly one of $\mathfrak{G}_1, \mathfrak{G}_2, \mathfrak{G}_3$ has a best match; take $\mathfrak{G}_1$ as an example. Suppose $i_2=i_2'\ne j_2=j_2', i_3=i_3'\ne j_3=j_3'$, then
\begin{equation}
\nonumber
\begin{aligned}
{\rm S}_3&=\dbinom{3}{1}\dbinom{2}{1}\dbinom{2}{1}\sum_{\substack{i_1,i_2,i_3\\j_2,j_3,i_1'}}\operatorname{E}[\delta(\mathcal{H}(i_1, i_2, i_3) \ {\rm mod} \ J,\mathcal{H}(i_1, j_2, j_3) \ {\rm mod} \ J)\\
&\delta(\mathcal{H}(i_1', i_2, i_3) \ {\rm mod} \ J,\mathcal{H}(i_1', j_2, j_3) \ {\rm mod} \ J)
]\mathcal{M}_{i_1,i_2,i_3}\mathcal{N}_{i_1,j_2,j_3}\mathcal{M}_{i_1',i_2,i_3}\mathcal{N}_{i_1',j_2,j_3}
\end{aligned}
\end{equation}
Denote
\begin{equation}
\begin{aligned}
\mathcal{H}(i_1, i_2, i_3) &:=k_1J+m\\
\mathcal{H}(i_1, j_2, j_3) &:=k_2J+m,
\end{aligned}
\end{equation}
we have
\begin{equation}
\mathbf{H}(i_2, i_3)-\mathbf{H}(j_2, j_3)=(k_1-k_2)J:=kJ.
\end{equation}
Similar to (\ref{eq:h1_i1-h1_j1}), we have $k\in\{0, \pm 1\}$:
\begin{description}
\item[(a)] When $k=0$, we have $\operatorname{P}[\mathbf{H}(i_2, i_3)=\mathbf{H}(j_2, j_3)]=\frac{2J^2+1}{3J^3}$ from (\ref{eq:h2_i2+h3_i3=h2_j2+h3_j3}).
\item[(b)] When $k=\pm 1$, we have $\operatorname{P}[\mathbf{H}(i_2, i_3)=\mathbf{H}(j_2, j_3)\pm J]=\sum_{t=J}^{2J-2}\operatorname{P}[\mathbf{H}(i_2, i_3)=t]\operatorname{P}[\mathbf{H}(j_2, j_3)=t-J]=\sum_{t=1}^{J-1}\frac{t(J-t)}{J^4}=\frac{J^2-1}{6J^3}$.
\end{description}
Therefore, we have
\begin{small}
\begin{equation}
\nonumber
\begin{aligned}
{\rm S}_3&=\dbinom{3}{1}\dbinom{2}{1}\dbinom{2}{1}(\frac{2J^2+1}{3J^3}+2\cdot\frac{J^2-1}{6J^3})\sum_{\substack{i_1,i_2,i_3\\j_2,j_3,i_1'}}\mathcal{M}_{i_1,i_2,i_3}\mathcal{N}_{i_1,j_2,j_3}\mathcal{M}_{i_1',i_2,i_3}\mathcal{N}_{i_1',j_2,j_3}\\
&=\frac{12}{J}\sum_{\substack{i_1,i_2,i_3\\j_2,j_3,i_1'}}\mathcal{M}_{i_1,i_2,i_3}\mathcal{N}_{i_1,j_2,j_3}\mathcal{M}_{i_1',i_2,i_3}\mathcal{N}_{i_1',j_2,j_3}\\
&\le \frac{12}{J}\Vert\mathcal{M}\Vert_{\operatorname{F}}^2\Vert\mathcal{N}\Vert_{\operatorname{F}}^2.
\end{aligned}
\end{equation}
\end{small}
\textbf{Case 4} None of $\mathfrak{G}_1, \mathfrak{G}_2, \mathfrak{G}_3$ has a best match. Suppose $i_1=i_1'\ne j_1=j_1', i_2=i_2'\ne j_2=j_2', i_3=i_3'\ne j_3=j_3'$, then
\begin{footnotesize}
\begin{equation}
\nonumber
\begin{aligned}
{\rm S}_4&=\dbinom{2}{1}\dbinom{2}{1}\dbinom{2}{1}\sum_{\substack{i_1,i_2,i_3\\j_1,j_2,j_3}}\operatorname{E}[\delta(\mathcal{H}(i_1, i_2, i_3)\ {\rm mod} \ J, \mathcal{H}(j_1, j_2, j_3)\ {\rm mod} \ J)]\mathcal{M}_{i_1,i_2,i_3}^2\mathcal{N}_{j_1,j_2,j_3}^2.
\end{aligned}
\end{equation}
\end{footnotesize}Denote
\begin{equation}
\begin{aligned}
\mathcal{H}(i_1, i_2, i_3)&:=k_1J+m\\
\mathcal{H}(j_1, j_2, j_3)&:=k_2J+m,
\end{aligned}
\end{equation}
we have
\begin{equation}
\mathcal{H}(i_1, i_2, i_3)=\mathcal{H}(j_1, j_2, j_3)+kJ,
\end{equation}
which breaks into the following cases:
\begin{description}
\item[(a)] When $k=0$, from (\ref{eq:h1_i1+h2_i2+h3_i3=h1_j1+h2_j2+h3_j3}): $\operatorname{P}[\mathcal{H}(i_1, i_2, i_3)=\mathcal{H}(j_1, j_2, j_3)]=\frac{11J^4+5J^2+4}{20J^5}$.
\item[(b)] When $k=\pm 1$, we have
\begin{equation}
\begin{aligned}
&\operatorname{P}[\mathcal{H}(i_1, i_2, i_3)=\mathcal{H}(j_1, j_2, j_3)\pm J]\\
=&\sum_{t=J}^{2J-2}\operatorname{P}[\mathcal{H}(i_1, i_2, i_3)=t, \mathcal{H}(j_1, j_2, j_3)=t-J]\\
+&\operatorname{P}[\mathcal{H}(i_1, i_2, i_3)=2J-1, \mathcal{H}(j_1, j_2, j_3)=J-1]\\
+&\sum_{t=2J}^{3J-3}\operatorname{P}[\mathcal{H}(i_1, i_2, i_3)=t, \mathcal{H}(j_1, j_2, j_3)=t-J]\\
=&\frac{13J^4-5J^2-8}{60J^5}.
\end{aligned}
\end{equation}
\item[(c)] When $k=\pm 2$, we have
\begin{equation}
\begin{aligned}
&\operatorname{P}[\mathcal{H}(i_1, i_2, i_3)=\mathcal{H}(j_1, j_2, j_3)\pm 2J]\\
=&\sum_{t=2J}^{3J-3}\operatorname{P}[\mathcal{H}(i_1, i_2, i_3)=t, \mathcal{H}(j_1, j_2, j_3)=t-2J]\\
=&\frac{J^4-5J^2+4}{120J^5}.
\end{aligned}
\end{equation}
\end{description}
Therefore,
\begin{footnotesize}
\begin{equation}
\nonumber
\begin{aligned}
{\rm S}_4&=8\cdot(\frac{11J^4+5J^2+4}{20J^5}+2\cdot\frac{13J^4-5J^2-8}{60J^5}+2\cdot\frac{J^4-5J^2+4}{120J^5})\sum_{\substack{i_1,i_2,i_3\\j_1,j_2,j_3}}\mathcal{M}_{i_1,i_2,i_3}^2\mathcal{N}_{j_1,j_2,j_3}^2\\
&=\frac{8}{J}\sum_{\substack{i_1,i_2,i_3\\j_1,j_2,j_3}}\mathcal{M}_{i_1,i_2,i_3}^2\mathcal{N}_{j_1,j_2,j_3}^2\le\frac{8}{J}\Vert\mathcal{M}\Vert_{\operatorname{F}}^2\Vert\mathcal{N}\Vert_{\operatorname{F}}^2.
\end{aligned}
\end{equation}
\end{footnotesize}
As a result, we have
\begin{equation}
\operatorname{E}_{\mathbf{h},\mathbf{s}}[\left \langle {\rm TS} (\mathcal{M}), {\rm TS} (\mathcal{N})\right \rangle^2]={\rm S}_1+{\rm S}_2+{\rm S}_3+{\rm S}_4\le\left \langle \mathcal{M}, \mathcal{N}\right \rangle^2+\frac{26}{J}\Vert\mathcal{M}\Vert_{\operatorname{F}}^2\Vert\mathcal{N}\Vert_{\operatorname{F}}^2.
\nonumber
\end{equation}
Therefore
\begin{equation}
\label{eq:var_TS}
\begin{aligned}
&\operatorname{V}_{\mathbf{h},\mathbf{s}}[\left \langle {\rm TS}(\mathcal{M}), {\rm TS}(\mathcal{N})\right \rangle]\\
=&\operatorname{E}_{\mathbf{h},\mathbf{s}}[\left \langle {\rm TS}(\mathcal{M}), {\rm TS}(\mathcal{N})\right \rangle^2]-\operatorname{E}_{\mathbf{h},\mathbf{s}}[\left \langle{\rm TS} (\mathcal{M}), {\rm TS}(\mathcal{N})\right \rangle]^2\\
\le&\frac{26}{J}\Vert\mathcal{M}\Vert_{\operatorname{F}}^2\Vert\mathcal{N}\Vert_{\operatorname{F}}^2.\\
\end{aligned}
\end{equation}
Comparing (\ref{eq:var_fcs}) and (\ref{eq:var_TS}), clearly we have
\begin{equation}
\operatorname{V}_{\mathbf{h},\mathbf{s}}[\left \langle {\rm FCS}(\mathcal{M}), {\rm FCS}(\mathcal{N})\right \rangle]\le\operatorname{V}_{\mathbf{h},\mathbf{s}}[\left \langle {\rm TS}(\mathcal{M}), {\rm TS}(\mathcal{N})\right \rangle],
\end{equation}which means that FCS provides a more accurate estimator of the tensor inner product than TS, especially when the Hash length $J$ is small.
\end{proof*}
\section{Proof of Corollary $1$}
\label{appendix2}
\begin{proof*}
(\ref{eq:var_fcs}) can be written as $\operatorname{V}_{\mathbf{h},\mathbf{s}}[\left \langle {\rm FCS}(\mathcal{M}), {\rm FCS}(\mathcal{N})\right \rangle]=\operatorname{O}(\frac{\Vert\mathcal{M}\Vert_{\operatorname{F}}^2\Vert\mathcal{N}\Vert_{\operatorname{F}}^2}{J})$. By substituting $\mathcal{M}=\mathcal{T}$, $\mathcal{N}=\mathbf{u}\circ\mathbf{u}\circ\mathbf{u}$, and $\mathcal{M}=\mathcal{T}$, $\mathcal{N}=\mathbf{e}_i\circ\mathbf{u}\circ\mathbf{u}$, respectively, the corollary follows immediately from Chebyshev's inequality, since $\mathbf{u}$ is a unit vector and hence both $\Vert\mathbf{u}\circ\mathbf{u}\circ\mathbf{u}\Vert_{\operatorname{F}}$ and $\Vert\mathbf{e}_i\circ\mathbf{u}\circ\mathbf{u}\Vert_{\operatorname{F}}$ equal $1$.
\end{proof*}
\end{comment}
\end{document}
\endinput
A relevant problem that has received considerable attention from the predictive control community is the robust regulation of disturbed linear discrete-time systems towards a desired equilibrium point.
However, in spite of the potential benefits of this paradigm, its adoption in industry is deterred by the typically high complexity of the resulting controllers, especially for medium to large-scale systems \cite{Mayne_ARC_16}.
The classical approach for robust control laws, introduced in \cite{Wit68}, is to minimize a cost function for the worst possible disturbance realization. While this may lead to an optimal robust control law, it requires solving min-max optimization problems \cite{Bemporad_TAC_03}, \cite{SM98}, \cite{alamo2005constrained}, which may be very computationally demanding even for average-sized systems.
In order to overcome this, robust model predictive controllers (RMPC) \cite{Camacho_S_2013} based on nominal predictions and tightened constraints, typically referred to as \textit{tube-based} MPC, have been proposed in the literature.
As discussed in \cite{Zanon_ECC_2021}, ``the many variants [of tube-based MPC] developed over the years can essentially be classified as variations of two approaches [...] \cite{ChisciAUT01} and \cite{MayneAUT05}".
Other formulations, variations and approaches for RMPC have been proposed since then, such as non-linear RMPC formulations \cite{Kohler_ACC_2018}; the approach of \cite{Villanueva_AUT_2017}, which uses ellipsoidal robust forward invariant sets, resulting in an optimization problem with linear matrix inequality (LMI) constraints; formulations that combine tube-based and multi-stage MPC, such as \cite{Subramanian_Wiley_2021}; and self-triggered tube-based MPC \cite{Brunner_AUT_2016}.
However, we center our attention on the formulations from \cite{ChisciAUT01} and \cite{MayneAUT05}, since this paper proposes a similar formulation that provides several benefits over these two main tube-based RMPC formulations.
In \cite{ChisciAUT01}, the authors propose a RMPC controller where the effect of the disturbance is rejected with the use of a control gain $F$ taken from the solution of the linear quadratic regulator (LQR) corresponding to the weighting matrices of the cost function.
The system constraints are tightened by means of \textit{reachable sets}, which are computable for medium to large-scale systems (see \cite[Eq. (7)]{ChisciAUT01}).
The disadvantage is that $F$ is derived from the weighting matrices of the cost function. Thus, its effect on the constraint tightening \cite[Eq. (22)]{ChisciAUT01} cannot be tuned independently of the performance of the controller.
In \cite{MayneAUT05}, the authors propose a RMPC controller where the constraints are tightened using the minimal robust positive invariant set of the system, or an approximation of it \cite{RakovicTAC05}, for a given stabilizing control gain. This set is typically computationally demanding to obtain (in many cases prohibitively so), even for average-sized systems. The advantage is that the disturbance rejection (i.e., the constraint tightening) and the performance of the controller are decoupled, thus potentially leading to better performance.
This approach has been successfully applied to several applications, such as reference tracking \cite{LimonJPC10} and distributed control \cite{TroddenACC06}.
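To give intuition for this kind of tightening: in the scalar case $x^+ = a x + w$ with $|w|\le \bar w$ and $|a|<1$, the $k$-step reachable-set radius under the disturbance is $\sum_{i=0}^{k-1}|a|^i\bar w$, which increases monotonically and converges geometrically to the minimal-RPI radius $\bar w/(1-|a|)$; this limit is what \cite{RakovicTAC05} approximates for general polytopic sets. A toy numerical check (hypothetical values $a=0.5$, $\bar w = 1$):

```python
a, wbar = 0.5, 1.0
r, radii = 0.0, []
for k in range(60):
    radii.append(r)            # radius of the k-step reachable set
    r += abs(a) ** k * wbar    # add the k-th disturbance contribution
mrpi = wbar / (1.0 - abs(a))   # minimal-RPI radius, here 2.0
assert all(radii[k] <= radii[k + 1] <= mrpi for k in range(59))  # monotone, bounded
assert abs(r - mrpi) < 1e-12   # geometric convergence to the mRPI radius
```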
As previously mentioned, other similar constraint tightening approaches have been proposed. In \cite{RezaTAC12}, an initial feasible trajectory is calculated, and in the following sample times a control gain that keeps the system close to this initial trajectory is computed. This avoids having to solve an optimization problem online, since the deviation from the feasible trajectory is corrected using the linear feedback control gain. As such, this approach has a low computational burden; but in general provides worse performance than other robust MPC approaches \cite{GoulartAUT2006}.
In \cite{Subramanian_ACC_2017} the state estimation error is explicitly taken into consideration to provide a formulation that is independent of the employed estimation method.
Finally, \cite{RossiterIJC08} proposes a RMPC for constrained linear systems described by polytopic uncertainty models. This approach significantly enlarges the feasibility region for small control horizons by introducing a new set of variables that have to be computed offline, thus increasing the complexity of the controller.
In this paper, we present a novel RMPC formulation based on nominal predictions and constraint tightening, guaranteeing input-to-state stability (ISS) (see \cite[\S 3]{LimonLNCIS09} for a definition of ISS) and robust satisfaction of the constraints.
Similarly to \cite{ChisciAUT01}, a robust control gain is used to tighten the constraints throughout the prediction horizon by taking into account the possible effect of the disturbances on the resulting closed-loop system.
However, this gain does not have to be the one corresponding to the LQR, as in \cite{ChisciAUT01}.
Instead, it can be freely tuned to enlarge the domain of attraction.
Moreover, the terminal set does not need to be robustly invariant for all the possible disturbances.
Instead, it only needs to be robust for a reduced set of disturbances within a certain set $\mathcal{L}(N)$ whose size diminishes with the length of the prediction horizon $N$ of the controller, thus potentially leading to a larger terminal set than in other RMPC formulations.
An additional advantage of this is that if the prediction horizon is long enough to make the size of $\mathcal{L}(N)$ negligible, then a positive invariant set of the nominal system (i.e., one that does not take into account the disturbance) can be used as the terminal set, significantly simplifying the design of the controller.
In fact, in this case a terminal equality constraint could be used, which would make the proposed controller applicable to large-scale systems.
The key points of the proposed formulation are:
\begin{itemize}
\setlength\itemsep{-0.2em}
\item The use of two decoupled design parameters (the constraint tightening robust control law, on one hand, and the terminal set, on the other), allows for a more flexible design of the controller, thus allowing for more opportunity to improve its performance.
\item The tightened constraints share the same complexity as the nominal ones. That is, if the nominal constraints are box constraints, then the tightened ones are also box constraints. Moreover, they do not require the computation of the minimal robust positive invariant set.
\item The design procedure seeks a robust control gain such that $\mathcal{L}(N)$ decreases rapidly with $N$, thus increasing the size of the robust terminal set and the likelihood of being able to use a positive invariant set of the nominal system as the terminal set for reasonable values of $N$.
\item We present design procedures for the ingredients of the controller that are tractable for average-sized systems, since they require solving optimization problems subject to LMI constraints.
\item Its decision variables are the same as the ones of the nominal MPC variant.
This, in addition to the previous points, results in an optimization problem with a complexity similar to that of nominal MPC (as is typical in tube-based RMPC \cite{Mayne_AUT_2006}).
\end{itemize}
Given its computationally tractable design procedure and the fact that its resulting optimization problem is not particularly complex, we argue that the proposed approach simplifies the design of the controller, when compared with other robust predictive controllers, and may be applicable to many average-sized systems. To illustrate this, we show the results of controlling a simulated 12-state, 6-input chemical plant.
The remainder of this paper is structured as follows.
The problem statement is described in Section \ref{sec:problem}.
The proposed RMPC controller is detailed in Section \ref{sec:RMPC}.
We present design procedures for the computation of its main ingredients in Section \ref{sec:synthesis}.
Section \ref{sec:practical} discusses the use of a positive invariant set of the nominal system as the terminal constraint.
Section \ref{sec:case:study} shows two case studies, one controlling the chemical plant and one comparing the proposed formulations with \cite{ChisciAUT01} and \cite{MayneAUT05}.
We conclude with Section \ref{sec:conclusions}.
\vspace{0.5em}
\noindent\textbf{Notation:} Given matrices $T$ and $P$, $T {\succ}{(\succeq)} 0$ indicates that $T$ is a positive (semi)definite matrix and $T {\succ}{(\succeq)} P$ indicates $T - P {\succ}{(\succeq)} 0$.
For $x\in \R^{n}$ and $P \succ 0$, $\|x\| \doteq \sqrt{x\T x}$, $\|x\|_P \doteq \sqrt{ x\T P x}$, and $\| x \|_1 \doteq \sum\limits_{i=1}^{n} | x_{(i)} |$, where $x_{(i)}$ is the $i$-th component of $x$.
We denote by $(x_{1}, x_{2}, \dots, x_{N})$ the column vector formed by the concatenation of column vectors $x_{1}$ to $x_{N}$.
Given two integers $i$ and $j$ with ${j \geq i}$, $\Z_i^j$ denotes the set of integer numbers from $i$ to $j$, i.e. ${\Z_i^j \doteq \{i, i+1, \dots, j-1, j\}}$.
Given two sets $\mathcal{U} \subset \R^n$ and $\mathcal{V} \subset \R^n$, their Minkowski sum is defined by $\mathcal{U} \oplus \mathcal{V} \doteq \{u+v: \, u \in \mathcal{U},\, v \in \mathcal{V}\}$, and their Pontryagin set difference is $\mathcal{U} \ominus \mathcal{V} \doteq \{u :\, u \oplus \mathcal{V} \subseteq \mathcal{U}\}$.
$I_n \in \R^{n \times n}$ denotes the identity matrix of dimension $n$.
For a symmetric matrix $M$, $\lambda_{\text{max}}(M)$ and $\lambda_{\text{min}}(M)$ denote its maximum and minimum eigenvalues, respectively.
Given $P \succ 0 \in \R^{n \times n}$, we denote the ellipsoid $\mathcal{E}(P) \doteq \{ x \in \R^n : x\T P x \leq 1\}$.
We define the mapping of a set $\mathcal{U} \subset \R^n$ with matrix $M \in \R^{m \times n}$ as $M \mathcal{U} \doteq \{M u : u \in \mathcal{U}\}$.
A function $f:\R_{\geq 0}\rightarrow\R_{\geq 0}$ is of class $\mathcal{K}$ if it is continuous, strictly increasing and $f(0) = 0$, and is of class $\mathcal{K}_\infty$ if it is a $\mathcal{K}$-function and $f(x) \rightarrow + \infty$ as $x \rightarrow + \infty$.
The unitary box of dimension $n$ is denoted by $\mathcal{B}_n \doteq \{x \in \R^n : \max\limits_i |x_{(i)}| \leq 1\}$, where $x_{(i)}$ is the $i$-th component of $x$.
For a given sequence of sets $\{\mathcal{V}(i)\}_{i = 1}^N$, $\bigoplus\limits_{i=1}^{N}\mathcal{V}(i) \doteq \mathcal{V}(1) \oplus \mathcal{V}(2) \oplus \dots \oplus \mathcal{V}(N)$.
Given scalars and/or matrices $M_1, \dots, M_N$ (not necessarily of the same dimensions), we denote by $\texttt{diag}(M_1, \dots, M_N)$ the block diagonal matrix formed by their diagonal concatenation.
\section{Problem statement} \label{sec:problem}
We consider a plant described by the following controllable uncertain discrete-time linear time-invariant state-space model
\begin{equation} \label{eq:model}
x(k+1) = A x(k) + B u(k) + w(k),
\end{equation}
where $x(k)\in \R^n$, $u(k)\in \R^m$ and $w(k)\in\R^n$ are the state, input and disturbance of the system at sampling time $k$, respectively.
Additionally, we consider that the state and input trajectories must satisfy the constraints $x(k)\in \mathcal{X}$ and $u(k)\in \mathcal{U}$ for any possible disturbance $w(k) \in \mathcal{W}$, where the sets $\mathcal{X}$ and $\mathcal{U}$ are compact (convex) polytopes
\begin{subequations} \label{eq:sets:XU}
\begin{align}
\mathcal{X} &= \{x \in \R^n: A_{x}x\leq b_{x}\}, \label{eq:set:X}\\
\mathcal{U} &= \{u \in \R^m: A_{u}u\leq b_{u}\}, \label{eq:set:U}
\end{align}
\end{subequations}
with $A_x \in \R^{p_x \times n}$, $A_u \in \R^{p_u \times m}$, and which we assume contain the origin in their interiors; and the set $\mathcal{W}$ is a (convex) zonotope, i.e., a linear mapping of the unitary box of a certain dimension $M$
\begin{equation} \label{eq:set:W}
\mathcal{W} = H_W \mathcal{B}_M,
\end{equation}
where $H_W \in \R^{n \times M}$.
We note that we consider $\mathcal{W}$ as a zonotope to simplify future developments.
The control objective is to regulate the system to a neighborhood of the origin while fulfilling the constraints for all possible disturbances.
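As an illustrative sketch of model \eqref{eq:model} with a zonotopic disturbance set \eqref{eq:set:W}, the following Python snippet simulates a hypothetical 2-state, 1-input system under a simple stabilizing feedback. All numerical values (system matrices, $H_W$, feedback gains) are ours, chosen only for illustration.

```python
import numpy as np

# Hedged sketch: a hypothetical instance of x(k+1) = A x(k) + B u(k) + w(k),
# with w(k) in the zonotope W = H_W B_M of (3). Values are illustrative.
rng = np.random.default_rng(0)
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
H_W = 0.02 * np.eye(2)          # W is a small box around the origin (M = 2)

def step(x, u):
    """One step of (1): sample v in the unitary box B_M, take w = H_W v."""
    v = rng.uniform(-1.0, 1.0, size=H_W.shape[1])
    w = H_W @ v                 # w lies in W by construction
    return A @ x + B @ u + w

x = np.array([1.0, 0.0])
for _ in range(50):
    # an illustrative stabilizing state feedback, not part of the paper
    x = step(x, np.array([-0.5 * x[0] - 1.0 * x[1]]))
```

The closed-loop state remains in a neighborhood of the origin, which is precisely the control objective stated above.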
\section{Proposed robust MPC} \label{sec:RMPC}
For a given prediction horizon $N$, the proposed robust MPC (RMPC) control law for a given state $x$ is derived from the solution of the following convex optimization problem, which we label by $\mathcal{P}_N(x)$,
\begin{subequations} \label{eq:RMPC}
\begin{align}
\moveEq{-4} \mathcal{P}_N(x) \, : \, \min\limits_{\bar \vu} \; &V_N(x, \bar \vu) \\
s.t.\;& \, \bar x(i+1)=A\bar x(i)+B\bar u(i), \; i \in \Z_0^{N-1} \label{eq:RMPC:model}\\
& \, \bar x(0)=x \label{eq:RMPC:initial} \\
& \, \bar x(i) \in \mathcal{X} \ominus \mathcal{H}(i), \; i \in \Z_0^{N-1} \label{eq:RMPC:const:x} \\
& \, \bar u(i) \in \mathcal{U} \ominus K\mathcal{H}(i), \; i \in \Z_0^{N-1} \label{eq:RMPC:const:u}\\
& \, \bar x(N) \in \Omega_{K_t} \ominus \mathcal{L}(N), \label{eq:RMPC:terminal}
\end{align}
\end{subequations}
where $\bar\vu = (\bar u(0), \dots, \bar u(N{-}1))$; the cost function is
\begin{equation} \label{eq:RMPC:cost_function}
V_N(x,\bar\vu) = \sum\limits_{i=0}^{N-1} \Big( \|\bar x(i) \|^2_Q + \|\bar u(i)\|^2_R \Big) + \|\bar x(N)\|^2_P
\end{equation}
for the cost function matrices $Q$, $R$ and $P$, which penalize the deviation between the predicted nominal evolution of the plant, i.e. \eqref{eq:model} with $w(i) = 0$ for all $i$, and the origin throughout the prediction horizon $N$; the sets $\mathcal{H}(i)$ and $\mathcal{L}(i)$ for $i \geq 1$ are given by
\begin{equation} \label{eq:sets:HL}
\mathcal{H}(i) = \bigoplus_{j = 0}^{i - 1} {A_K^j \mathcal{W}}, \quad \mathcal{L}(i) = A_K^{i-1} \mathcal{W},
\end{equation}
where $A_K \doteq A + B K$, $K \in \R^{m \times n}$ is a linear feedback control gain and $\mathcal{H}(0)$, $\mathcal{L}(0)$ are taken as the singleton $\{0\}$, so that the constraints at $i = 0$ are not tightened;
and $\Omega_{K_t}$ is a robust positive invariant set of the system with the terminal feedback control gain $K_t$ for the disturbances contained in $\mathcal{L}(N)$ (see Assumption \ref{ass:RMPC}.(iv) below).
Constraints \eqref{eq:RMPC:const:x} and \eqref{eq:RMPC:const:u} do not merely require the predicted states and inputs to satisfy the constraints $\mathcal{X}$ and $\mathcal{U}$, respectively, but rather to lie within \textit{tightened constraints} that depend on the sets $\mathcal{H}(i)$ and, therefore, on the feedback control gain $K$.
As shown in Appendix \ref{app:RMPC:feasibility:proof}, set $\mathcal{L}(i)$ is a bound of the possible deviation at time instant $i$ that a disturbance $w \in \mathcal{W}$ at the initial time instant, i.e. at $i = 0$, can create between model \eqref{eq:model} and the nominal model \eqref{eq:RMPC:model} if the control law $u = K(x - \bar x) + \bar u$ is used to reject it.
Notice that, if $A + B K$ is Schur stable, the size of $\mathcal{L}(i)$ monotonically decreases with $i$, whereas the sets $\mathcal{H}(i)$ monotonically increase, converging to a bounded set (the minimal robust positive invariant set).
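This behavior can be illustrated numerically. The Python sketch below (with an illustrative $A_K$ and $H_W$ of our choosing, not from the paper) propagates a single initial disturbance $w \in \mathcal{W}$ through the error dynamics $e(i) = A_K^{i-1} w$ and tracks the size of $\mathcal{L}(i)$, measured by the spectral norm of its generator matrix $A_K^{i-1} H_W$.

```python
import numpy as np

# Hedged sketch: deviation e(i) = A_K^{i-1} w caused by a disturbance at
# i = 0 under u = K(x - x_bar) + u_bar, and the size of L(i) = A_K^{i-1} W.
# A_K, H_W and w below are illustrative.
A_K = np.array([[0.9, 0.2],
                [0.0, 0.7]])            # an illustrative Schur-stable A + B K
H_W = 0.05 * np.eye(2)
w = H_W @ np.array([1.0, -1.0])         # some disturbance w in W

e = w                                   # e(1) = w after the step at i = 0
sizes = [np.linalg.norm(H_W, 2)]        # size of L(1) = W
for i in range(2, 21):
    e = A_K @ e                         # e(i) = A_K^{i-1} w, inside L(i)
    sizes.append(np.linalg.norm(np.linalg.matrix_power(A_K, i - 1) @ H_W, 2))
```

Since the chosen $A_K$ is Schur stable, both the deviation and the size of $\mathcal{L}(i)$ shrink as $i$ grows.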
\begin{assumption} \label{ass:RMPC}
We make the following assumptions on the ingredients of optimization problem \eqref{eq:RMPC}:
\begin{enumerate} \renewcommand{\labelenumi}{(\roman{enumi})}
\item $Q, R \succ 0$.
\item The control gain $K_t$ and $P \succ 0$ satisfy
\begin{equation} \label{eq:ass:RMPC:P}
P-(A+B K_t)^\top P(A+B K_t) \succeq Q+K_t^\top R K_t.
\end{equation}
\item The control gain $K$ is such that $A+BK$ is Schur stable and $\mathcal{X} \ominus \mathcal{H}(N)$ and $\mathcal{U} \ominus K\mathcal{H}(N)$ are non-empty.
\item The set $\Omega_{K_t}$ is a compact convex set satisfying
\vspace*{-0.5em}
\begin{align}
&\moveEq{-22}(A+BK_t)\Omega_{K_t} \oplus \mathcal{L}(N) \subseteq \Omega_{K_t}, \label{eq:ass:RMPC:omega:invariant} \\
&\moveEq{-22}\Omega_{K_t} \subseteq \{x \in \mathcal{X} \ominus \mathcal{H}(N) : K_t x \in \mathcal{U} \ominus K \mathcal{H}(N{-}1)\}. \label{eq:ass:RMPC:omega:admissible}
\end{align}
\end{enumerate} \renewcommand{\labelenumi}{\arabic{enumi}.}
\end{assumption}
One of the advantages of this formulation, when compared to other robust MPC approaches, such as \cite{ChisciAUT01}, is that it offers an extra degree of freedom. Indeed, the gain $K$, which is used to compute the sets $\mathcal{H}(i)$ and $\mathcal{L}(i)$, can be tuned in order to enlarge the region of attraction, whereas the gain $K_t$, which is used to compute the terminal cost function matrix $P$ and the set $\Omega_{K_t}$, is affected by the choice of $Q$ and $R$, which can be tuned to improve the performance of the controller (as is standard in MPC).
As stated in Assumption \ref{ass:RMPC}.(iv), $\Omega_{K_t}$ must be a robust positive invariant set of the system controlled with the terminal control law $K_t$ for the additive disturbances contained in $\mathcal{L}(N) = A_K^{N-1} \mathcal{W}$ (which is typically much smaller than $\mathcal{W}$).
The prediction horizon $N$ can be chosen to obtain a small set $\mathcal{L}(N)$, and therefore a larger terminal set and (generally) a larger domain of attraction of the controller.
In the following, we denote by $V_N^*(x)$ the optimal cost, $\bar \vu^*(x) = \{\bar u^*(0; x), \dots, \bar u^*(N-1; x)\}$ the optimal value of the decision variable and $\bar \vx^*(x) = \{\bar x^*(0; x), \dots, \bar x^*(N; x)\}$ the corresponding optimal value of the nominal state trajectory of problem $\mathcal{P}_N(x)$. The control law at each sample time $k$ is given by the receding horizon control law $u(k) = \bar u^*(0;x(k))$, where $x(k)$ is the state of the plant.
The domain of attraction of the RMPC controller, denoted by $\mathcal{X}_N$, is the feasibility region of $\mathcal{P}_N(x)$, i.e., the set of states that can be steered to $\Omega_{K_t} {\ominus} \mathcal{L}(N)$ in $N$ steps while fulfilling the tightened constraints \eqref{eq:RMPC:const:x} and \eqref{eq:RMPC:const:u}.
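A candidate decision variable $\bar\vu$ of $\mathcal{P}_N(x)$ induces the nominal trajectory through \eqref{eq:RMPC:model} and \eqref{eq:RMPC:initial}. The following Python sketch (with illustrative system matrices and input sequence of our choosing) shows this nominal rollout, which is the basic building block for checking feasibility of a candidate solution.

```python
import numpy as np

# Hedged sketch: rollout of the nominal model (7b)-(7c). The system matrices
# and the candidate input sequence are illustrative, not from the paper.
def nominal_rollout(A, B, x0, u_seq):
    """States (x_bar(0), ..., x_bar(N)) of x_bar(i+1) = A x_bar(i) + B u_bar(i)."""
    xs = [np.asarray(x0, dtype=float)]
    for u in u_seq:
        xs.append(A @ xs[-1] + B @ u)
    return xs

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
xs = nominal_rollout(A, B, [0.5, 0.0], [np.array([-0.1])] * 5)   # N = 5
```

Feasibility of the candidate then amounts to checking that each $\bar x(i)$ and $\bar u(i)$ lies in the corresponding tightened polytope of \eqref{eq:RMPC:const:x}--\eqref{eq:RMPC:terminal}.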
The following two theorems state the recursive feasibility and input-to-state stability of the RMPC controller for all initial states~${x \in \mathcal{X}_N}$.
\begin{theorem}[Recursive feasibility] \label{theo:RMPC:feasibility}
Consider a system \eqref{eq:model} as described in Section \ref{sec:problem} controlled with the robust MPC formulation $\mathcal{P}_N(x)$. Suppose that the ingredients of the controller satisfy Assumption \ref{ass:RMPC} and that the system state at sample time $k$ satisfies $x(k) \in \mathcal{X}_N$. Then, the successor state $x(k+1) = A x(k) + B \bar u^*(0; x(k)) + w(k)$ satisfies $x(k+1) \in \mathcal{X}_N$ for any $w(k) \in \mathcal{W}$.
\end{theorem}
\vspace*{-1.2em}
\begin{proof} \renewcommand{\qedsymbol}{}
See Appendix \ref{app:RMPC:feasibility:proof}.
\end{proof}
\begin{theorem}[Input-to-state stability] \label{theo:RMPC:ISS}
Consider a system \eqref{eq:model} as described in Section \ref{sec:problem} controlled with the robust MPC formulation $\mathcal{P}_N(x)$. Suppose that the ingredients of the controller satisfy Assumption \ref{ass:RMPC} and that the system state at sample time $k$ satisfies $x(k) \in \mathcal{X}_N$. Then, the closed-loop system is ISS with respect to any disturbance signal $w(i) \in \mathcal{W}$, $i \geq k$.
\end{theorem}
\vspace*{-1.2em}
\begin{proof} \renewcommand{\qedsymbol}{}
See Appendix \ref{app:RMPC:ISS:proof}.
\end{proof}
\section{Synthesis of the RMPC ingredients} \label{sec:synthesis}
The proposed controller requires the design of the ancillary control gains $K$ and $K_t$, the matrix $P$, and the sets $\mathcal{H}(i)$, $\mathcal{L}(i)$, $\mathcal{X} \ominus \mathcal{H}(i)$, $\mathcal{U} \ominus K \mathcal{H}(i)$ and $\Omega_{K_t}$.
An appropriate design of these ingredients, which is not immediate, must ensure the satisfaction of the stabilizing assumptions (Assumption \ref{ass:RMPC}), reject the effect of the uncertainty and seek to maximize the domain of attraction.
This section describes tractable procedures for the computation of these ingredients satisfying the stability conditions and guaranteeing robust constraint satisfaction whilst seeking to increase the domain of attraction. The procedures and results we show follow from prior results from the control literature, including \cite{LimonIWC08}, \cite{KothareAUT96}, \cite{Boyd_SIAM_1994_LMI} and \cite{Chen_ACC_2001}.
However, we present them here in a unified format and particularized to our proposed formulation.
\subsection{Computation of $K$} \label{sec:synthesis:K}
The control gain $K$ is used to compensate the deviation from the nominal predictions due to the disturbances. In this paper, we follow the approach from \cite{LimonIWC08}, in which the robustness criterion is to find a control gain $K$ and a matrix $\tilde{P} \succ 0$ such that the ellipsoid $\mathcal{E}(\tilde{P})$ is a robust positive invariant set of system \eqref{eq:model} for the state feedback control law $u = K x$ satisfying the constraints $\mathcal{X}$ and $\mathcal{U}$. Additionally, we require the sets $\mathcal{X} \ominus \mathcal{H}(i)$ and $\mathcal{U} \ominus K \mathcal{H}(i)$ to be non-empty.
Moreover, we wish to minimize the size of $\mathcal{E}(\tilde{P})$. Note that, since $\mathcal{E}(\tilde{P})$ is a robust positive invariant set of the system, and $\mathcal{H}(i)$ is contained in the minimal robust positive invariant set, a reduction of the size of $\mathcal{E}(\tilde{P})$ (typically) leads to a reduction of the sets $\mathcal{H}(i)$, which is desirable since this translates into an enlargement of the tightened constraints. However, this reduction may come at the cost of increasing the gain of $K$, which may result in a reduction of the constraints $\mathcal{U} \ominus K \mathcal{H}(i)$. To avoid this, we impose that $K x \in \rho \, \mathcal{U}$, $\forall x \in \mathcal{E}(\tilde{P})$, where the role of the scalar $\rho$ satisfying $0 < \rho \leq 1$ is to limit the control action of $K$ in order to ensure a certain control authority.
The computation of $K$ and $\tilde{P}$ satisfying these criteria was posed in \cite[\S 4.3]{LimonIWC08} as an optimization problem involving LMI constraints (see also \cite{KothareAUT96}).
In the following, we detail how this optimization problem and LMI restrictions are posed, where in this paper we include an additional requirement with the objective of obtaining a control gain $K$ that provides a fast convergence to the origin of the autonomous system $x(k+1) = (A + B K) x(k) = A_K x(k)$.
The reason for doing so is to obtain a matrix $K$ such that the size of $\mathcal{L}(N)$ quickly decreases with $N$, thus leading to a larger terminal set and domain of attraction for reasonable values of $N$.
Robust positive invariance of $\mathcal{E}(\tilde{P})$ can be formulated as
\begin{equation} \label{eq:synthesis:K:cond:invariant}
(A_K x {+} w)\T \tilde{P} (A_K x {+} w) \leq 1, \, \forall x \in \mathcal{E}(\tilde{P}), \forall w \in \mathcal{W},
\end{equation}
which, considering the convexity with respect to $w$, only needs to be checked for all $w \in \text{vert}(\mathcal{W})$, where $\text{vert}(\mathcal{W})$ denotes the vertices of $\mathcal{W}$.
Applying the S-procedure, we have that the implication
\begin{equation*}
x\T \tilde{P} x \leq 1 \Rightarrow (A_K x + w)\T \tilde{P} (A_K x + w) \leq 1
\end{equation*}
is satisfied if there exists a scalar $\lambda \geq 0$ such that
\begin{equation*}
(A_K x {+} w)\T \tilde{P} (A_K x {+} w) + \lambda (1 - x\T \tilde{P} x) < 1,
\end{equation*}
which can be expressed as
\begin{equation*}
\bmat{c} x \\ 1 \emat\T \bmat{cc} \lambda \tilde{P} - A_K\T \tilde{P} A_K & -A_K\T \tilde{P} w \\ -w\T \tilde{P} A_K & 1 - \lambda - w\T \tilde{P} w \emat \bmat{c} x \\ 1 \emat {>} 0.
\end{equation*}
Therefore, \eqref{eq:synthesis:K:cond:invariant} is satisfied if there exists $\lambda \geq 0$ such that
\begin{equation*}
\bmat{cc} \lambda \tilde{P} - A_K\T \tilde{P} A_K & -A_K\T \tilde{P} w \\ -w\T \tilde{P} A_K & 1 - \lambda - w\T \tilde{P} w \emat \succ 0, \, \forall w {\in} \text{vert}(\mathcal{W}).
\end{equation*}
This expression can be rewritten as
\begin{equation*}
\bmat{cc} \lambda \tilde{P} & 0 \\ 0 & 1-\lambda \emat - \bmat{c} A_K\T \\ w\T \emat \tilde{P} \left[ A_K \; w \right] \succ 0,\, \forall w {\in} \text{vert}(\mathcal{W}),
\end{equation*}
which, applying the Schur complement, leads to
\begin{equation*}
\bmat{ccc} \lambda \tilde{P} & 0 & A_K\T \\ 0 & 1 - \lambda & w\T \\ A_K & w & \tilde{P}^{-1} \emat \succ 0, \, \forall w {\in} \text{vert}(\mathcal{W}).
\end{equation*}
Finally, by pre- and post-multiplying by $\texttt{diag}(\tilde{P}^{-1}, 1, I_n)$
and taking the transformations $S \doteq \tilde{P}^{-1}$ and $Y \doteq K \tilde{P}^{-1}$, we obtain the LMIs
\begin{equation}
\bmat{ccc} \lambda S & 0 & S A\T {+} Y\T B\T \\ 0 & 1 {-} \lambda & w\T \\ A S {+} B Y & w & S \emat {\succ} 0, \, \forall w {\in} \text{vert}(\mathcal{W}). \label{eq:synthesis:K:cond:invariant:LMI}\\
\end{equation}
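The Schur-complement step used to arrive at \eqref{eq:synthesis:K:cond:invariant:LMI} can be checked numerically: with a positive definite lower-right block, positive definiteness of the block matrix is equivalent to positive definiteness of its Schur complement. The matrices in the Python sketch below are small illustrative examples of ours, unrelated to the plant.

```python
import numpy as np

# Hedged numeric check of the Schur complement: with C > 0 (definite), the
# block matrix [[A, B], [B^T, C]] is positive definite iff A - B C^{-1} B^T
# is positive definite. All matrices are illustrative.
def is_pd(M):
    return bool(np.all(np.linalg.eigvalsh((M + M.T) / 2) > 0))

A = np.array([[2.0, 0.3],
              [0.3, 1.5]])
B = np.array([[0.5, 0.1],
              [0.2, 0.4]])
C = np.array([[1.0, 0.0],
              [0.0, 2.0]])

M_block = np.block([[A, B], [B.T, C]])          # block-matrix test
M_schur = A - B @ np.linalg.inv(C) @ B.T        # Schur-complement test

assert is_pd(M_block) == is_pd(M_schur)         # the two tests agree
```

This equivalence is what turns the quadratic condition in $\tilde{P}$ into an LMI in the transformed variables $S$ and $Y$.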
For all $x \in \mathcal{E}(\tilde{P})$, the state constraints $x \in \mathcal{X}$ must be satisfied, i.e., $A_x x \leq b_x$, $\forall x \in \mathcal{E}(\tilde{P})$.
It is well known that $\max_{x \in \mathcal{E}(\tilde{P})} c\T x = \sqrt{c\T \tilde{P}^{-1} c}$. Therefore, the previous condition can be posed as
\begin{equation} \label{eq:synthesis:K:X:init}
A_{x,j} \tilde{P}^{-1} A_{x,j}\T \leq b_{x,j}^2, \; j \in \Z_1^{p_x},
\end{equation}
where $A_{x,j}$ and $b_{x, j}$ are the $j$-th row/component of $A_x$ and $b_x$, respectively.
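The support-function identity $\max_{x \in \mathcal{E}(\tilde{P})} c\T x = \sqrt{c\T \tilde{P}^{-1} c}$ used above can be verified numerically: the maximizer is $x^* = \tilde{P}^{-1} c / \sqrt{c\T \tilde{P}^{-1} c}$, which lies on the boundary of the ellipsoid. The matrix $P$ and vector $c$ in the Python sketch below are illustrative.

```python
import numpy as np

# Hedged check of max_{x in E(P)} c^T x = sqrt(c^T P^{-1} c).
# P and c are illustrative examples of ours.
P = np.array([[4.0, 1.0],
              [1.0, 2.0]])
c = np.array([1.0, -1.0])

Pinv_c = np.linalg.solve(P, c)
val = np.sqrt(c @ Pinv_c)         # claimed maximum of c^T x over E(P)
x_star = Pinv_c / val             # maximizer, on the boundary of E(P)

assert abs(x_star @ P @ x_star - 1.0) < 1e-9   # x* satisfies x^T P x = 1
assert abs(c @ x_star - val) < 1e-9            # x* attains the claimed value
```

This is the fact that turns containment of the ellipsoid in a half-space $\{x : A_{x,j} x \leq b_{x,j}\}$ into the scalar inequality above.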
However, since we are interested in minimizing the size of $\mathcal{E}(\tilde{P})$, we impose the condition $\mathcal{E}(\tilde{P}) \subseteq \sqrt{\gamma} \mathcal{X}$, where admissibility of the solution requires that the scalar $\gamma$ satisfy $0 < \gamma \leq 1$. This condition can be imposed by scaling the right-hand side of the previous inequality by $\gamma$, leading to
\begin{equation*}
A_{x,j} \tilde{P}^{-1} A_{x,j}\T \leq \gamma b_{x,j}^2, \; j \in \Z_1^{p_x}.
\end{equation*}
Using the definition of $S$, this can be expressed as the LMIs
\begin{equation} \label{eq:synthesis:K:cond:X}
A_{x,j} S A_{x,j}\T \leq \gamma b_{x,j}^2, \; j \in \Z_1^{p_x}.
\end{equation}
For all $x \in \mathcal{E}(\tilde{P})$, the input constraints $u = K x \in \mathcal{U}$ must be satisfied, i.e., $A_u K x \leq b_u$, $\forall x \in \mathcal{E}(\tilde{P})$.
However, as discussed at the beginning of this subsection, we instead impose $K x \in \rho \, \mathcal{U}$, $\forall x \in \mathcal{E}(\tilde{P})$, where the scalar $\rho$ must satisfy $0 < \rho \leq 1$. Therefore, we impose $A_u K x \leq \rho b_u$, $\forall x \in \mathcal{E}(\tilde{P})$, which, once again, can be posed as
\begin{equation*}
A_{u,j} K \tilde{P}^{-1} K\T A_{u,j}\T \leq (\rho b_{u,j})^2, \, j \in \Z_1^{p_u}.
\end{equation*}
Then, from the definition of $S$ and $Y$, and applying the Schur complement, we have
\begin{align}
&A_{u,j} K \tilde{P}^{-1} \tilde{P} \tilde{P}^{-1} K\T A_{u,j}\T \leq (\rho b_{u,j})^2, \, j \in \Z_1^{p_u}, \nonumber \\
&A_{u,j} Y S^{-1} Y\T A_{u,j}\T \leq (\rho b_{u,j})^2, \, j \in \Z_1^{p_u}, \nonumber \\
&\bmat{cc} (\rho b_{u, j})^2 & A_{u,j} Y \\ Y\T A_{u,j}\T & S \emat \succ 0, \, j \in \Z_1^{p_u}. \label{eq:synthesis:K:cond:U}
\end{align}
Finally, we want to find a matrix $K$ such that the autonomous system $x(k+1) = (A + B K) x(k)$ has a fast convergence to the origin.
To do this, we impose the following condition, where the contraction factor $\mu$ is a scalar selected in the range $0 < \mu \leq 1$,
\begin{equation*} \label{eq:synthesis:LMI:constraints}
\mu \tilde{P} \succ A_K\T \tilde{P} A_K,
\end{equation*}
which, following similar procedures, leads to
\begin{align} \label{eq:synthes:K:cond:contraction}
&\mu \tilde{P}^{-1} \tilde{P} \tilde{P}^{-1} - \tilde{P}^{-1} A_K\T \tilde{P} A_K \tilde{P}^{-1} \succ 0, \nonumber \\
&\mu S - (S A\T + Y\T B\T) S^{-1} (A S + B Y) \succ 0, \nonumber \\
&\bmat{cc} \mu S & S A\T + Y\T B\T \\ A S + B Y & S \emat \succ 0.
\end{align}
Matrices $K$ and $\tilde{P}$ satisfying the above criteria can be recovered from the solution of the following optimization problem involving the LMIs \eqref{eq:synthesis:K:cond:invariant:LMI}, \eqref{eq:synthesis:K:cond:X}, \eqref{eq:synthesis:K:cond:U}, and \eqref{eq:synthes:K:cond:contraction},
\begin{equation} \label{eq:synthesis:K:LMI}
\begin{aligned}
&\min\limits_{Y, S, \gamma} \quad \gamma \\
&s.t. \; \eqref{eq:synthesis:K:cond:invariant:LMI},\; \eqref{eq:synthesis:K:cond:X},\; \eqref{eq:synthesis:K:cond:U}, \;\eqref{eq:synthes:K:cond:contraction}.
\end{aligned}
\end{equation}
The procedure is to select values of $\rho$ and $\mu$, and to then solve the resulting problem \eqref{eq:synthesis:K:LMI} for increasing values of $\lambda$ until a feasible problem is found. If no feasible solution is found, then less restrictive values of $\rho$ and/or $\mu$ should be selected. An easy way to do this is to first fix $\mu = 1$ and reduce $\rho$, and to then fix $\rho$ and reduce $\mu$.
\begin{remark} \label{rem:we:compute:RPIS}
We note that problem \eqref{eq:synthesis:K:LMI} is a convex optimization problem that is solved offline. Additionally, it can be solved for (relatively) large-sized systems, guaranteeing a good design of the controller. We remark, however, that the problem is not guaranteed to be feasible, since a gain $K$ for which the tightened constraints are non-empty may not exist if $\mathcal{W}$ is too large.
\end{remark}
\begin{remark} \label{rem:synthesis:K:numerical:issues}
We note that elements $b_{u, j}$ and $b_{x, j}$ from \eqref{eq:synthesis:K:cond:X} and \eqref{eq:synthesis:K:cond:U} can cause numerical issues when solving problem \eqref{eq:synthesis:K:LMI}, since they appear squared in the LMI constraints. To avoid this, sets $\mathcal{X}$ and $\mathcal{U}$ should be rewritten so that the components of $b_x$ and $b_u$ only contain the value $1$, which is possible since we assume that the origin is an interior point of $\mathcal{X}$ and $\mathcal{U}$.
\end{remark}
\subsection{Computation of the tightened constraints} \label{sec:synthesis:constraints}
The computation of the tightened constraints requires the computation of sets $\mathcal{H}(i)$ and $\mathcal{L}(N)$ \eqref{eq:sets:HL}.
Set $\mathcal{L}(N)$ is straightforward, since it is the zonotope given by
\begin{equation*}
\mathcal{L}(N) = A_K^{N-1} H_W \mathcal{B}_M,
\end{equation*}
and sets $\mathcal{H}(i)$ can be computed recursively by making use of the following proposition.
\begin{proposition}[\cite{Kuhn1998zonotopes}, Lemma 2(3)] \label{prop:Minkowsky:zonotope}
Let $\mathcal{C} \doteq H_C \mathcal{B}_{M_C}$, with $H_C \in \R^{n \times M_C}$, and $\mathcal{D} \doteq H_D \mathcal{B}_{M_D}$, with $H_D \in \R^{n \times M_D}$, be two zonotopes. Then, $\mathcal{C} \oplus \mathcal{D} = \left[ H_C, \; H_D \right] \mathcal{B}_{M_C + M_D}$.
\end{proposition}
Indeed, $\mathcal{H}(i) = H_{i} \mathcal{B}_{iM}$, where $H_1 = H_W$ and
\begin{equation*}
H_i = [A_K^{i-1}H_W, \; H_{i-1}], \; i > 1.
\end{equation*}
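This generator recursion is straightforward to implement. The Python sketch below (with an illustrative $A_K$ and $H_W$ of our choosing) builds $H_i$ by concatenating generator matrices as in Proposition \ref{prop:Minkowsky:zonotope}.

```python
import numpy as np

# Hedged sketch of the recursion H_i = [A_K^{i-1} H_W, H_{i-1}], so that
# H(i) = H_i B_{iM}. A_K and H_W are illustrative (n = 2, M = 2).
A_K = np.array([[0.9, 0.2],
                [0.0, 0.7]])
H_W = 0.05 * np.eye(2)

def generators_H(i):
    """Generator matrix H_i of H(i), of size n x (i*M)."""
    H = H_W                            # H_1 = H_W
    Ap = A_K                           # running power A_K^{j-1}
    for _ in range(2, i + 1):
        H = np.hstack([Ap @ H_W, H])   # H_j = [A_K^{j-1} H_W, H_{j-1}]
        Ap = A_K @ Ap
    return H

H3 = generators_H(3)                   # a 2 x 6 generator matrix for H(3)
```

Note that the number of generators grows linearly with $i$, which keeps the offline computation cheap even for long horizons.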
The tightened constraints $\mathcal{X} \ominus \mathcal{H}(i)$ and $\mathcal{U} \ominus K \mathcal{H}(i)$ are easily computed by using the following well-known result, for which we include a proof for completeness.
\begin{proposition} \label{prop:Pontryagin:zonotope}
Let ${\mathcal{D} \doteq H_D \mathcal{B}_M}$, with $H_D \in \R^{n \times M}$, and let $\mathcal{C} \doteq \{ x \in \R^n : F x \leq f \}$. Then, ${\mathcal{Z} \doteq \mathcal{C} \ominus \mathcal{D}}$ is given by $\mathcal{Z} = \{ z \in \R^n : F z \leq f - g \}$,
where, denoting by $F_i$ the $i$-th row of $F$, each component $i$ of $g$ is given by $g_i = \|F_i H_D\|_1$.
\end{proposition}
\begin{proof}
$z \in \mathcal{Z}$ if and only if $z + d \in \mathcal{C}$ for all $d \in \mathcal{D}$. This can be posed as $F (z + d) \leq f$ for all $d \in \mathcal{D}$. From the definition of set $\mathcal{D}$, this is equivalent to $F z + F H_D v \leq f$ for all $v \in \mathcal{B}_M$. The most restrictive value of $v$ for each linear inequality is given by $\max_{v \in \mathcal{B}_M} F_i H_D v = \| F_i H_D \|_1$, which yields the claim.
\end{proof}
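Proposition \ref{prop:Pontryagin:zonotope} translates directly into a few lines of code. The Python sketch below (function name and example sets are ours) tightens the right-hand side of a polytope by a zonotope.

```python
import numpy as np

# Hedged implementation of Proposition 3: the Pontryagin difference of the
# polytope {x : F x <= f} and the zonotope H_D B_M is {z : F z <= f - g},
# with g_i = ||F_i H_D||_1. Example data is illustrative.
def tighten(F, f, H_D):
    """Right-hand side of the tightened polytope."""
    g = np.sum(np.abs(F @ H_D), axis=1)   # g_i = ||F_i H_D||_1, row by row
    return f - g

# Example: the box [-1, 1]^2 shrunk by the box 0.2 * B_2
F = np.vstack([np.eye(2), -np.eye(2)])
f = np.ones(4)
f_tight = tighten(F, f, 0.2 * np.eye(2))   # each bound shrinks by 0.2
```

This is the computation behind the tightened constraints $\mathcal{X} \ominus \mathcal{H}(i)$ and $\mathcal{U} \ominus K \mathcal{H}(i)$: only the right-hand-side vectors change.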
\begin{remark} \label{rem:tightened:constraint:complexity}
Note that Proposition \ref{prop:Pontryagin:zonotope} shows that the tightened constraints $\mathcal{X} \ominus \mathcal{H}(i)$ and $\mathcal{U} \ominus K \mathcal{H}(i)$ have the same complexity as the nominal ones \eqref{eq:sets:XU}, since they are polytopes with the same matrices $A_x$ and $A_u$.
Additionally, since the computation of the sets are done offline and they only require vector norms and vector-matrix multiplications, this procedure can be applied to large-scale systems.
\end{remark}
\subsection{Computation of $K_\MakeLowercase{t}$, $P$ and $\Omega_{K_\MakeLowercase{t}}$} \label{sec:synthesis:omega}
We follow a procedure similar to that of Section \ref{sec:synthesis:K}. That is, we compute matrices $K_t$ and $P \succ 0$ satisfying \eqref{eq:ass:RMPC:P}, and such that $\Omega_{K_t} \doteq \mathcal{E}(P)$ satisfies \eqref{eq:ass:RMPC:omega:invariant} and \eqref{eq:ass:RMPC:omega:admissible}. These conditions can be imposed as LMIs as follows, where we define $\tilde{S} {\doteq} P^{-1}$ and $\tilde{Y} {\doteq} K_t P^{-1}$.
\begin{remark} \label{rem:selection:Omega}
The design procedure that we detail below provides the values of $K_t$ and $P$.
Additionally, it shows the existence of a robust positive invariant set of the form $\Omega_{K_t} = \mathcal{E}(P)$.
The use of an ellipsoidal terminal invariant set is useful because it is typically simpler to compute than the more common polyhedral invariant set \cite[\S 4.1]{Blanchini_A_1999}, \cite{Wan_AUT_03}, and because it results in the addition of fewer constraints in the optimization problem \cite[\S 5]{Blanchini_A_1999}.
However, a polyhedral robust invariant set $\Omega_{K_t}$ can be computed by other means \cite{Fiacchini_AUT_2010}, \cite{Blanchini2008set}, and used instead of the ellipsoidal one obtained from the following design procedure.
\end{remark}
Condition \eqref{eq:ass:RMPC:P} can be posed as an LMI as follows,
\begin{align}
&\tilde{S} {-} (\tilde{S} A\T {+} \tilde{Y}\T B\T) P (A \tilde{S} {+} B \tilde{Y}) \succ \tilde{S} Q \tilde{S} + \tilde{Y}\T R \tilde{Y}, \nonumber \\
&\tilde{S} - \bmat{c} A \tilde{S} {+} B \tilde{Y} \\ Q^{1/2} \tilde{S} \\ R^{1/2} \tilde{Y} \emat\T \bmat{ccc} P & 0 & 0 \\ 0 & I & 0 \\ 0 & 0 & I \emat \bmat{c} A \tilde{S} {+} B \tilde{Y} \\ Q^{1/2} \tilde{S} \\ R^{1/2} \tilde{Y} \emat {\succ} 0, \nonumber \\
&\bmat{cccc} \tilde{S} & * & * & * \\ A \tilde{S} + B \tilde{Y} & \tilde{S} & * & * \\ Q^{1/2} \tilde{S} & 0 & I & * \\ R^{1/2} \tilde{Y} & 0 & 0 & I \emat \succ 0, \label{eq:synthesis:omega:LQR}
\end{align}
where, due to space considerations, we use asterisks to represent the transposes of the elements shown in the lower half of the matrix.
Condition \eqref{eq:ass:RMPC:omega:invariant} can be posed as
\begin{equation*}
((A {+} B K_t) x {+} d)\T P ((A {+} B K_t) x {+} d) \leq 1, \, \forall x \in \mathcal{E}(P), \forall d \in \mathcal{L}(N),
\end{equation*}
which, using the arguments used to derive \eqref{eq:synthesis:K:cond:invariant:LMI}, leads to
\begin{equation} \label{eq:synthesis:omega:invariant}
\bmat{ccc} \tilde \lambda \tilde{S} & * & * \\ 0 & 1 {-} \tilde \lambda & * \\ A \tilde{S} {+} B \tilde{Y} & d & \tilde{S} \emat \succ 0, \, \forall d {\in} \text{vert}(\mathcal{L}(N)),
\end{equation}
for some $\tilde \lambda \geq 0$.
Finally, we pose condition \eqref{eq:ass:RMPC:omega:admissible} as two LMIs. Note that, from Section \ref{sec:synthesis:constraints} (see Remark \ref{rem:tightened:constraint:complexity}), we have that the tightened constraints $\mathcal{X} \ominus \mathcal{H}(N)$ and $\mathcal{U} \ominus K \mathcal{H}(N-1)$ are compact (convex) polytopes given by
\begin{align*}
\mathcal{X} \ominus \mathcal{H}(N) &= \{x \in \R^n: A_{x}x\leq \tilde b_{x}\}, \\
\mathcal{U} \ominus K \mathcal{H}(N-1) &= \{u \in \R^m: A_{u}u\leq \tilde b_{u}\},
\end{align*}
where $\tilde b_x$ and $\tilde b_u$ are computed as described in Proposition \ref{prop:Pontryagin:zonotope}. Therefore, condition \eqref{eq:ass:RMPC:omega:admissible} can be posed as LMIs following the same procedure used to derive \eqref{eq:synthesis:K:X:init} and \eqref{eq:synthesis:K:cond:U}:
\begin{align}
&A_{x,j} \tilde{S} A_{x,j}\T \leq \tilde b_{x,j}^2, \; j \in \Z_1^{p_x}, \label{eq:synthesis:omega:cond:X} \\
&\bmat{cc} \tilde b_{u, j}^2 & A_{u,j} \tilde{Y} \\ \tilde{Y}\T A_{u,j}\T & \tilde{S} \emat \succ 0, \, j \in \Z_1^{p_u}, \label{eq:synthesis:omega:cond:U}
\end{align}
where we note that $\rho$ is not used in this case. Remark \ref{rem:synthesis:K:numerical:issues} also applies to the two above LMIs.
The computation of $K_t$, $P$ and $\Omega_{K_t}$ can therefore be recovered by finding a feasible solution of the LMIs \eqref{eq:synthesis:omega:LQR}, \eqref{eq:synthesis:omega:invariant}, \eqref{eq:synthesis:omega:cond:X} and \eqref{eq:synthesis:omega:cond:U} for some value of $\tilde \lambda \geq 0$.
\section{Using a positive invariant set of the nominal system as the terminal set} \label{sec:practical}
As discussed in \cite{Mayne_ARC_16}, the application of RMPC to real systems is hindered by its complexity, particularly for medium to large-scale systems. A major contributor to this complexity is the use of a robust positive invariant terminal set, particularly in the case of a polyhedral one.
As shown in Assumption \ref{ass:RMPC}.(iv) and \eqref{eq:RMPC:terminal}, our proposed formulation requires a terminal set that must be robust for the disturbances contained in $\mathcal{L}(N)$, which will typically satisfy $\mathcal{L}(N) \subset \mathcal{W}$.
This is by itself a useful property of the formulation, since it simplifies the computation of $\Omega_{K_t}$ and results in a larger terminal set than in other RMPC formulations.
However, note that if $\mathcal{L}(N) = \{0\}$, then the problem is simplified further.
Indeed, first note that the terminal set in \eqref{eq:RMPC:terminal} reduces to $\Omega_{K_t}$, and, most importantly, that the conditions stated in Assumption \ref{ass:RMPC}.(iv) reduce to
\begin{subequations} \label{eq:omega:non:robust}
\begin{align}
&(A+BK_t)\Omega_{K_t} \subseteq \Omega_{K_t}, \label{eq:omega:non:robust:invariant} \\
&\Omega_{K_t} \subseteq \{x \in \mathcal{X} \ominus \mathcal{H}(N) : K_t x \in \mathcal{U} \ominus K \mathcal{H}(N{-}1)\}. \label{eq:omega:non:robust:constraints}
\end{align}
\end{subequations}
Thus, $\Omega_{K_t}$ has to be a positive invariant set for the system $x(k+1) = (A + B K_t) x(k)$ subject to the constraints $x(k) \in \mathcal{X} \ominus \mathcal{H}(N)$ and $K_t x(k) \in \mathcal{U} \ominus K \mathcal{H}(N-1)$.
That is, $\Omega_{K_t}$ must be a positive invariant set of the nominal (non-disturbed) system \eqref{eq:model} controlled with the state feedback gain $K_t$ for the constraints in \eqref{eq:omega:non:robust:constraints}.
This is a significant benefit over having to compute a robust positive invariant set, since we can take the positive invariant set used in nominal MPC formulations, which in general is much simpler to compute than one that has to be robust to some uncertainty.
Additionally, we can use very simple terminal sets, such as the ellipsoidal set $\Omega_{K_t} = \mathcal{E}(P)$ obtained following the method in Section \ref{sec:synthesis:omega}, or a terminal equality constraint, since in this case the set $\Omega_{K_t} = \{0\}$ satisfies \eqref{eq:omega:non:robust}.
In these two cases the proposed formulation could be applied even to medium to large-scale systems.
The reader may note that the condition $\mathcal{L}(N) = \{0\}$ is impractical, since in general $\mathcal{L}(N) \rightarrow \{0\}$ only as $N \rightarrow +\infty$. If $K$ is taken as the gain of a dead-beat controller, then $N$ can be selected such that $A_K^{N-1} = 0$. The problem with this approach is that, typically, the resulting matrix $K$ will have a large gain, and therefore the tightened constraints $\mathcal{U} \ominus K \mathcal{H}(i)$, $i \in \Z_0^{N-1}$, will be too restrictive (they might even be empty).
Therefore, instead of forcing $\mathcal{L}(N) = \{0\}$, we propose to relax this condition to $\mathcal{L}(N)$ being very small by forcing the norm of $A_K^{N-1}$ to be sufficiently small. In particular, we take the spectral norm of $A_K^{N-1}$, i.e., its maximum singular value.
If this norm is smaller than a certain tolerance, then $\mathcal{L}(N)$ can be taken as $\{0\}$ for all practical purposes.
Note that the design procedure we present in Section \ref{sec:synthesis:K} seeks a control gain $K$ that provides a fast convergence of the autonomous system $x(k+1) = A_K x(k)$.
Thus, it is expected to provide a $K$ such that a negligible $\mathcal{L}(N)$ is attained for reasonable values of $N$.
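One possible way to apply this criterion is sketched below in Python: pick the smallest $N$ for which the spectral norm of $A_K^{N-1}$ falls below a user-chosen tolerance. The function name, the tolerance and the matrix $A_K$ are ours, chosen for illustration.

```python
import numpy as np

# Hedged sketch: smallest N with ||A_K^{N-1}||_2 below a tolerance, so that
# L(N) = A_K^{N-1} W can be treated as {0} for practical purposes.
def horizon_for_negligible_L(A_K, tol=1e-6, N_max=500):
    Ap = np.eye(A_K.shape[0])          # A_K^0, i.e. the case N = 1
    for N in range(1, N_max + 1):
        if np.linalg.norm(Ap, 2) < tol:
            return N
        Ap = A_K @ Ap
    return None                        # convergence too slow (or A_K unstable)

A_K = np.array([[0.9, 0.2],
                [0.0, 0.7]])           # illustrative Schur-stable A + B K
N = horizon_for_negligible_L(A_K, tol=1e-6)
```

The faster the convergence induced by $K$, the smaller the resulting $N$, which is precisely why the design of Section \ref{sec:synthesis:K} includes the contraction condition.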
A more practical approach is to relax this condition even further. The optimization problem of the RMPC controller will be solved, in real-time, using some iterative optimization algorithm with an exit condition determined by an exit tolerance set by the user. In practice, the exit tolerance is typically in the range of $10^{-3}$ to $10^{-6}$. Therefore, we argue that, in practice, it is enough to select $N$ and compute $K$ so that the norm of $A_K^{N-1}$ is significantly smaller than the exit tolerance of the solver. In this case, for all practical purposes, the terminal equality constraint $\bar x(N) = 0$ can also be used.
\begin{remark} \label{rem:control:horizon}
The control gain $K$ computed as described in Section \ref{sec:synthesis:K} may still lead to a large $N$ in order for $\mathcal{L}(N)$ to be sufficiently small. In this case, to reduce the complexity of optimization problem \eqref{eq:RMPC}, a control horizon $N_c$ may be used alongside the prediction horizon $N$, as is typically done in MPC, by adding constraint $\bar u(i) = K_t \bar x(i)$, $i \in \Z_{N_c}^{N-1}$, to~\eqref{eq:RMPC}.
Even if $N$ has to be large, in many cases the resulting optimization problem will still be implementable, since many solvers for linear MPC can exploit the structure of the optimization problem, leading to a linear memory and computational complexity growth with the prediction horizon $N$ \cite{Krupa_arXiv_ellipMPC_21}, \cite{Quirynen_OCAM_2020}, \cite{Domahidi_FORCES}.
\end{remark}
\begin{remark} \label{rem:practical:eliminate}
Another approach to deal with the issues that arise from the need to compute and implement a terminal set is to eliminate it altogether. This can be done by increasing the penalization of the terminal cost with an additional scalar penalty parameter $\lambda \geq 0$ \cite{LimonLNCIS09}, i.e. to take $\lambda \| \bar x(N) \|^2_P$ in \eqref{eq:RMPC:cost_function}. However, the issue is that it is not clear how to select $\lambda$, since the region in which the controller remains robust is not known a priori \cite{LimonTAC06}.
\end{remark}
\begin{remark} \label{rem:synthesis:omega:equality}
If $\Omega_{K_t}$ is taken as the singleton $\Omega_{K_t} = \{0\}$, as discussed above, then the ingredients $K_t$ and $P$ only need to satisfy \eqref{eq:ass:RMPC:P}.
In this case, the most straightforward choice is to take \eqref{eq:ass:RMPC:P} as an equality, i.e., to solve the synthesis problem of the discrete-time LQR problem, as is standard in MPC.
The reason we consider \eqref{eq:ass:RMPC:P} as an inequality is to provide a wider selection range for $K_t$ and $P$ when $\Omega_{K_t}$ is not taken as a singleton.
\end{remark}
\section{Case study} \label{sec:case:study}
We present two case studies: one on a multivariable chemical plant, showing that the proposed formulation is applicable to medium-sized systems, and one on an academic example in which we compare the terminal sets and domains of attraction of the proposed formulation with those of \cite{ChisciAUT01} and \cite{MayneAUT05}.
\subsection{Multivariable chemical plant} \label{sec:case:study:plant}
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{Reactors.eps}
\caption{Double reactor and separator system.}
\label{fig:Reactors}
\end{figure}
\begin{figure*}[t]
\centering
\begin{subfigure}[ht]{0.48\textwidth}
\includegraphics[width=\linewidth]{reactors_test_RMPC_r1_Review_1.eps}
\caption{RMPC.}
\label{fig:reactors:tests:r1:RMPC}
\end{subfigure}%
\quad
\begin{subfigure}[ht]{0.48\textwidth}
\includegraphics[width=\linewidth]{reactors_test_MPC_r1_Review_1.eps}
\caption{Nominal MPC.}
\label{fig:reactors:tests:r1:MPC}
\end{subfigure}%
\caption{Closed-loop results of $T_3$ for the RMPC and nominal MPC controllers with the double reactor and separator plant.}
\label{fig:reactors:tests}
\end{figure*}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.96\linewidth]{computation_times.eps}
\caption{Computation times of the nominal and robust controllers.}
\label{fig:times}
\end{center}
\end{figure}
This section presents a case study where the multivariable chemical plant described in \cite[\S 5.7.1]{Krupa_Thesis_21} and depicted in Figure~\ref{fig:Reactors} is controlled using the proposed RMPC formulation.
This plant is a 12-state, 6-input and 4-output system consisting of two consecutive reactors and a separator where two first-order reactions, ${A \rightarrow B}$ and ${B \rightarrow C}$, take place.
We take the parameters, constraints and operating point shown in \cite[\S 5.7.1]{Krupa_Thesis_21} and the reference as the one shown in \cite[Table 5.5]{Krupa_Thesis_21}.
We include disturbances $w$ acting on the heights and temperatures of the three volumes, given by a uniform distribution on the intervals $\pm0.01$ for the heights and $\pm0.25$ for the temperatures.
We obtain a model \eqref{eq:model} of the system by linearizing around the operating point and taking a sampling time of $3$s.
We design the RMPC controller following the procedures described in Section \ref{sec:synthesis} taking $Q = 5 I_n$, $R = 0.5 I_m$, $\rho = 1$ and $\mu = 0.9$.
Optimization problem \eqref{eq:synthesis:K:LMI} is constructed using the YALMIP package \cite{Lofberg2004} for Matlab, and solved using the SDPT3 solver \cite{SDPT3_99} for increasing values of $\lambda$.
A feasible solution is found for $\lambda = 0.701$, where the optimal value of $\gamma$ is $0.3358$. Taking $N = 60$, the square of the spectral norm of $A_K^{N-1}$ is $4.06 \cdot 10^{-6}$, which is sufficiently small for practical purposes.
Therefore, we consider a terminal equality constraint $\bar x(N) = 0$, as discussed in Section \ref{sec:practical}, and compute $P$ by solving the LQR synthesis problem for our choice of $Q$ and $R$ (see Remark \ref{rem:synthesis:omega:equality}).
We use this model to simulate the system, since we are interested in determining if the proposed formulation does, indeed, robustly control model \eqref{eq:model} for our choice of $w$, as stated in Theorems \ref{theo:RMPC:feasibility} and \ref{theo:RMPC:ISS}.
To this end, we compare our proposed formulation with a nominal MPC controller using the same formulation and ingredients (including the terminal equality constraint), but considering the nominal constraints instead of the tightened ones.
Notice that the optimization problems of the RMPC and the nominal MPC are nearly identical, differing only in that the RMPC uses tightened state and input constraints.
\begin{figure*}[t]
\centering
\begin{subfigure}[ht]{0.48\textwidth}
\includegraphics[width=\columnwidth, height=5cm]{terminal_set.eps}
\caption{Terminal sets.}
\label{fig:set:comparison:terminal}
\end{subfigure}%
\quad
\begin{subfigure}[ht]{0.48\textwidth}
\includegraphics[width=\columnwidth, height=5cm]{domain_attraction.eps}
\caption{Domain of attraction.}
\label{fig:set:comparison:domain}
\end{subfigure}%
\caption{Terminal set and domain of attraction of \cite{ChisciAUT01}, \cite{MayneAUT05} and \eqref{eq:RMPC}.}
\label{fig:set:comparison}
\end{figure*}
Figure \ref{fig:reactors:tests} shows the trajectories of the temperature of the separator, $T_3$, using the RMPC and MPC formulations to control model \eqref{eq:model}.
Each figure shows the result of $100$ tests with different realizations of the disturbances, where the same realizations are used for both MPC controllers.
We highlight the maximum and minimum temperature attained at each iteration in magenta and blue, respectively; the average temperature of the tests at each sample time in dash/dotted black; the reference in dashed green; and the upper bound of $T_3$ in red.
As can be seen in the figures, the MPC controller sometimes violates the constraint, whereas the RMPC controller does not, at the expense of not being able to track the reference with zero offset.
For references sufficiently far away from the constraints both controllers have the same behaviour.
We focus on the evolution of $T_3$ since it is the only state that shows active (and violated) constraints during the simulations.
Appendix \ref{app:extended} shows the evolution of the other states and control inputs.
Both formulations are solved using the ADMM-based solver for the MPC formulation with terminal equality constraint presented in \cite{Krupa_TCST_20} from version \texttt{v0.3.4} of the SPCIES toolbox \cite{Spcies}, where we take the exit tolerance as $10^{-4}$ and the penalty parameter of the ADMM algorithm as $\rho = 15$.
Figure \ref{fig:times} shows the average computation times of both solvers at each sample time of the simulation using an Intel Core i5-8250U CPU operating at $1.60$ GHz.
The difference between the two is due to the fact that the RMPC solver tends to have more active constraints, which typically increase the number of iterations of first order methods.
The maximum and minimum computation times, in milliseconds, for each solver are $7.76$ and $4.43$ for the RMPC, and $7.49$ and $3.04$ for the MPC, respectively.
\subsection{Comparison with other RMPC formulations} \label{sec:case:study:comparison}
This section compares the terminal region and domain of attraction of the proposed formulation with the ones of the RMPC formulations from \cite{ChisciAUT01} and \cite{MayneAUT05}.
We consider the example from \cite[\S 5]{ChisciAUT01} taking the constraints as $| x | \leq 10$, $| u | \leq 1$ and $| w | \leq 0.16$.
We take $Q = I_2$, $R = 0.01$ and $N = 10$ in the three RMPC formulations.
We use the LQR gain for these cost function matrices as the $K_t$ gain used to compute the set $\Omega_{K_t}$, which we take as the maximal polytopic robust invariant set.
We also use this gain to compute the tightening of the constraints and the terminal set of the formulation \cite{ChisciAUT01} as well as for the constraint tightening for the formulation \cite{MayneAUT05}.
We use the LQR gain for the matrices $Q = I_2$ and $R = 100$ to compute the tightened constraints of our proposed formulation.
For the formulation from \cite{MayneAUT05}, we take the LQR gain for the matrices $Q = \texttt{diag}(100, 0)$ and $R = 0.01$ to compute the minimal robust positive invariant set.
The gains for the proposed formulation and for the formulation from \cite{ChisciAUT01} were hand tuned to increase their domain of attraction and terminal sets, whereas the gain used to compute the minimal robust positive invariant set for \cite{MayneAUT05} was hand tuned to produce the smallest one possible, thus reducing the size of the associated tube.
All the sets are computed using the MPT3 toolbox \cite{MPT3}.
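The LQR gains used above can be computed, for instance, with a plain Riccati value iteration, sketched below. The matrices $A$ and $B$ are placeholders for illustration, not the system from \cite{ChisciAUT01}.

```python
import numpy as np

# Placeholder double-integrator-like system, used only to illustrate
# the gain computation (NOT the example from the referenced paper).
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.5],
              [1.0]])

def dlqr_gain(A, B, Q, R, iters=1000):
    """LQR gain via Riccati value iteration, with the paper's
    u = K x convention (so A + B K is Schur stable)."""
    P = Q.copy()
    for _ in range(iters):
        G = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ G)
    return -G, P

K, P = dlqr_gain(A, B, np.eye(2), np.array([[0.01]]))
print(np.abs(np.linalg.eigvals(A + B @ K)).max())  # spectral radius < 1
```

In practice one may prefer a dedicated DARE solver (e.g., SciPy's \texttt{solve\_discrete\_are}); the iteration above is only a self-contained sketch.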
Figure \ref{fig:set:comparison} shows the comparison between the three formulations, where Figure \ref{fig:set:comparison:terminal} shows the terminal sets and Figure \ref{fig:set:comparison:domain} the domains of attraction.
As can be seen, the proposed formulation provides a larger terminal region and domain of attraction. This is partly due to the fact that we have two degrees of freedom, one for the tightened constraints ($K$) and one for the set $\Omega_{K_t}$, and partly due to the fact that the terminal set only has to be robust for a subset of the disturbances $\mathcal{W}$, i.e., $\mathcal{L}(N)$.
\section{Conclusions} \label{sec:conclusions}
This paper presents a robust MPC formulation based on nominal predictions that \textit{(i)} uses two independent control gains, one, derived from the MPC cost function matrices, related to the performance, and one related to the constraint tightening, thus providing a good trade-off between performance and domain of attraction, \textit{(ii)} simplifies the computation of the terminal set, since it does not have to be robust for all possible system uncertainties, but only for a reduced-size set, and \textit{(iii)} is recursively feasible and stable in the ISS sense.
Additionally, we provide tractable procedures for the computation of its ingredients.
This, along with the possibility of using a positive invariant set of the nominal system as the terminal set, i.e., of the system without the disturbance, results in a formulation that can be applied to relatively large-sized systems.
In particular, in this case the resulting optimization problem would share the same complexity as its equivalent nominal MPC formulation, although, as is typical in tube-based robust MPC formulations, the closed-loop performance of the proposed controller may be significantly worse than its nominal counterpart.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\columnwidth]{P_feasibility2b_v3.eps}
\caption{Visual representation of the feasible state trajectories. Black dashed line and dots represents the optimal trajectory at sample time $k$, magenta dashed-dotted line and squares the feasible trajectory for the successor state, blue ellipsoids the sets $\mathcal{H}(i)$ and red ellipsoids the sets $\mathcal{L}(i)$. Solid blue and red lines are for visual aid.}
\label{fig:feasibility:trajectory}
\end{center}
\end{figure}
\section*{Declaration of competing interest}
\noindent The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
\section*{Acknowledgments}
\noindent This work was supported in part by Grant PID2019-106212 RB-C41 funded by MCIN/AEI/10.13039/501100011033,
in part by Grant P20\_00546 funded by the Junta de Andalucía and the ERDF A way of making Europe,
and in part by Grant PDC2021-121120-C21 funded by MCIN/AEI/10.13039/501100011033 and by the ``European Union NextGenerationEU/PRTR''.
\begin{appendix}
\gdef\thesection{\Alph{section}}
\makeatletter
\renewcommand\@seccntformat[1]{Appendix \csname the#1\endcsname.\hspace{0.5em}}
\makeatother
\section{Proof of the recursive feasibility of the RMPC controller} \label{app:RMPC:feasibility:proof}
\noindent The proof of Theorem \ref{theo:RMPC:feasibility} makes use of the following property, taken from claims (ii) and (v) of \cite[Theorem 2.1]{KolmanovskyMPE98}.
\begin{property} \label{prop:sets}
Let $\mathcal{A}, \mathcal{B}, \mathcal{C} \subset \R^n$. Then,
\renewcommand{\labelenumi}{(\roman{enumi})} \begin{enumerate}
\item $( \mathcal{A} \ominus \mathcal{B} ) \oplus \mathcal{B} \subseteq \mathcal{A}$,
\item $\mathcal{A} \ominus ( \mathcal{B} \oplus \mathcal{C} ) \subseteq ( \mathcal{A} \ominus \mathcal{B}) \ominus \mathcal{C}$.
\end{enumerate} \renewcommand{\labelenumi}{\arabic{enumi}.}
\end{property}
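Property \ref{prop:sets} can be illustrated on small finite 1D sets, where the Minkowski sum and Pontryagin difference are easy to enumerate. The sets below are arbitrary examples; note that inclusion (i) is strict here.

```python
def msum(A, B):
    """Minkowski sum of two finite 1D integer point sets."""
    return {a + b for a in A for b in B}

def pdiff(A, B):
    """Pontryagin difference A - B = {x : x + B subset of A},
    enumerated over a bounded integer range."""
    lo = min(A) - max(map(abs, B)) - 1
    hi = max(A) + max(map(abs, B)) + 1
    return {x for x in range(lo, hi + 1) if all(x + b in A for b in B)}

A = {0, 1, 2, 3, 4, 6}
B = {0, 1}
C = {0, 1}

# (i)  (A - B) + B is a subset of A (strict here: the point 6 is lost)
assert msum(pdiff(A, B), B) <= A
# (ii) A - (B + C) is a subset of (A - B) - C
assert pdiff(A, msum(B, C)) <= pdiff(pdiff(A, B), C)
print(sorted(msum(pdiff(A, B), B)))  # → [0, 1, 2, 3, 4]
```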
\begin{proof}[Proof of Theorem \ref{theo:RMPC:feasibility}]
Let the optimal solution of $\mathcal{P}_N(x(k))$ be $\bar \vu^* = \left( \bar u^*(0; x(k)), \bar u^*(1; x(k)), \dots, \bar u^*(N-1; x(k)) \right)$ and let $\bar \vx^* = \left( \bar x^*(0; x(k)), \bar x^*(1; x(k)), \dots, \bar x^*(N; x(k)) \right)$ be the corresponding optimal values of the predicted states.
Let us denote a candidate solution of $\mathcal{P}_N(x(k+1))$, where $x(k+1) = A x(k) + B \bar u^*(0; x(k)) + w(k)$ is the successor state, by $\bar \vu = \left( \bar u(0; x(k+1)), \dots, \bar u(N-1; x(k+1)) \right)$ and by $\bar \vx = \left( \bar x(0; x(k+1)), \dots, \bar x(N; x(k+1)) \right)$ its corresponding predicted states. Additionally, let
\begin{equation} \label{eq:def:fi}
\delta(i) \doteq \bar x(i;x(k+1))-\bar x^*(i+1;x(k)), \; i \in \Z_0^{N-1}.
\end{equation}
We construct the candidate solution as follows. First,
\begin{subequations} \label{eq:proof:feasibility:candidate}
\begin{equation} \label{eq:proof:feasibility:candidate:x0}
\bar x(0; x(k+1)) = x(k+1).
\end{equation}
Then, take
\begin{equation} \label{eq:proof:feasibility:candidate:ui}
\bar u(i; x(k+1)) = \bar u^*(i+1; x(k)) + K \delta(i), \; i \in \Z_0^{N-2},
\end{equation}
\begin{equation} \label{eq:proof:feasibility:candidate:xi}
\bar x(i; x(k+1)) = A \bar x(i-1; x(k+1)) + B \bar u(i-1; x(k+1)),
\end{equation}
for $i \in \Z_1^N$, where, finally,
\begin{equation} \label{eq:proof:feasibility:candidate:uN}
\bar u(N-1; x(k+1)) = K_t \bar x(N-1; x(k+1)).
\end{equation}
\end{subequations}
Figure \ref{fig:feasibility:trajectory} illustrates the resulting trajectory $\bar x(i; x(k+1))$.
In the following, we show that the candidate solution \eqref{eq:proof:feasibility:candidate} is a feasible solution of $\mathcal{P}_N(x(k+1))$ for any $w(k) \in \mathcal{W}$. Indeed, \eqref{eq:RMPC:initial} and \eqref{eq:RMPC:model} are trivially satisfied from \eqref{eq:proof:feasibility:candidate:x0} and \eqref{eq:proof:feasibility:candidate:xi}.
Next, from the definition \eqref{eq:def:fi} we have that
\begin{align} \label{eq:proof:feasibility:fi_next}
\delta(i+1) &= \bar x(i+1; x(k+1)) - \bar x^*(i + 2; x(k)) \nonumber\\
&= A \bar x(i; x(k+1)) + B \bar u(i; x(k+1)) \nonumber\\
&\quad - A \bar x^*(i+1; x(k)) - B \bar u^*(i+1; x(k)) \nonumber \\
&= (A + B K) \left( \bar x(i; x(k+1)) - \bar x^*(i+1; x(k)) \right) \nonumber \\
&= (A + B K) \delta(i) = A_K \delta(i)
\end{align}
for $i \in \Z_0^{N-2}$. Then, noting that
\begin{equation*}
\delta(0) \numeq{\eqref{eq:proof:feasibility:candidate:x0}} x(k+1) - (A x(k) + B u(k)) = w(k),
\end{equation*}
and recursively applying \eqref{eq:proof:feasibility:fi_next}, we obtain
\begin{equation} \label{eq:proof:feasibility:x_diff}
\bar x(i; x(k+1)) - \bar x^* (i+1; x(k)) = A_K^i w(k), \; i \in \Z_0^{N-1},
\end{equation}
which by definition of $\mathcal{L}(\cdot)$ implies that
\begin{equation} \label{eq:proof:feasibility:relation:x}
\bar x(i; x(k+1)) \in \bar x^* (i+1; x(k)) \oplus \mathcal{L}(i+1), \; i \in \Z_0^{N-1}.
\end{equation}
Therefore, the satisfaction of \eqref{eq:RMPC:const:x} follows from
\begin{align} \label{eq:proof:feasibility:satisfaction:x:N_1}
\bar x(i;x(k+1)) & \in \bar x^*(i+1;x(k)) \oplus \mathcal{L}(i+1) \nonumber \\
& \numeq[\subset]{(*)} ( \mathcal{X} \ominus \mathcal{H}(i+1) ) \oplus \mathcal{L}(i+1) \nonumber \\
& \numeq[\subseteq]{(**)} \big( ( \mathcal{X} \ominus \mathcal{H}(i) ) \ominus \mathcal{L}(i+1) \big) \oplus \mathcal{L}(i+1) \nonumber \\
& \moveEq{-18} \numeq[\subseteq]{Prop. \ref{prop:sets}(i)} \mathcal{X} \ominus \mathcal{H}(i), \; i \in \Z_0^{N-1},
\end{align}
where step (*) follows from the fact that $\bar x^*(i;x(k))$ satisfies \eqref{eq:RMPC:const:x} for $i \in \Z_0^{N-1}$ and
\begin{align*}
&\bar x^*(N;x(k)) \numeq[\in]{\eqref{eq:RMPC:terminal}} \Omega_{K_t} \ominus \mathcal{L}(N) \\
&\moveEq{-6}\numeq[\subseteq]{\eqref{eq:ass:RMPC:omega:admissible}} (\mathcal{X} \ominus \mathcal{H}(N)) \ominus \mathcal{L}(N) \subseteq \mathcal{X} \ominus \mathcal{H}(N),
\end{align*}
and step (**) follows from
\begin{align*}
&\mathcal{X} \ominus \mathcal{H}(i + 1) = \mathcal{X} \ominus \left( \bigoplus_{j = 0}^i A_K^j \mathcal{W} \right) \\
&= \mathcal{X} \ominus \left( \mathcal{H}(i) \oplus A_K^i \mathcal{W} \right) \numeq[\subseteq]{Prop. \ref{prop:sets}(ii)} (\mathcal{X} \ominus \mathcal{H}(i) ) \ominus A_K^i \mathcal{W}.
\end{align*}
From \eqref{eq:proof:feasibility:candidate:ui}, taking into account \eqref{eq:def:fi} and \eqref{eq:proof:feasibility:x_diff}, we have
\begin{equation} \label{eq:proof:feasibility:u_diff}
\moveEq{-10}\bar u(i; x(k+1)) {=} \bar u^*(i+1; x(k)) {+} K A_K^i w(k), \; i \in \Z_0^{N-2},
\end{equation}
which, following the same procedure used before, leads to
\begin{align} \label{eq:proof:feasibility:satisfaction:u:N_2}
\bar u(i;x(k+1)) & \in \bar u^*(i+1;x(k)) \oplus K\mathcal{L}(i+1) \nonumber\\
& \subset ( \mathcal{U} \ominus K\mathcal{H}(i+1) ) \oplus K\mathcal{L}(i+1) \nonumber\\
& \subseteq \big( ( \mathcal{U} \ominus K\mathcal{H}(i) ) \ominus K\mathcal{L}(i{+}1) \big) \oplus K\mathcal{L}(i{+}1) \nonumber \\
& \subseteq \mathcal{U} \ominus K\mathcal{H}(i), \; i \in \Z_0^{N-2}.
\end{align}
Next, taking $i = N-1$ in \eqref{eq:proof:feasibility:relation:x} leads to
\begin{align} \label{eq:proof:feasibility:x:N_1:in:Omega}
\bar x(N-1;x(k+1)) &\in \bar x^*(N;x(k)) \oplus \mathcal{L}(N) \nonumber \\
&\numeq[\subset]{\eqref{eq:RMPC:terminal}} ( \Omega_{K_t} \ominus \mathcal{L}(N) ) \oplus \mathcal{L}(N) \nonumber \\
&\moveEq{-20}\numeq[\subseteq]{Prop. \ref{prop:sets}(i)} \Omega_{K_t}.
\end{align}
Therefore, taking into account the definition of the Pontryagin difference, we have that
\begin{align*}
\bar x(N; x(k{+}1)) &= A \bar x(N{-}1; x(k{+}1)) {+} B \bar u(N{-}1; x(k{+}1)) \\
&\moveEq{-5}\numeq{\eqref{eq:proof:feasibility:candidate:uN}} (A + B K_t) \bar x(N-1; x(k+1)) \\
&\moveEq{-1}\numeq[\subseteq]{\eqref{eq:proof:feasibility:x:N_1:in:Omega}} (A + B K_t) \Omega_{K_t} \\
&\moveEq{2}\numeq[\subseteq]{\eqref{eq:ass:RMPC:omega:invariant}} \Omega_{K_t} \ominus \mathcal{L}(N),
\end{align*}
which shows the satisfaction of \eqref{eq:RMPC:terminal}.
Finally, since $\bar x(N-1; x(k+1)) \in \Omega_{K_t}$ by \eqref{eq:proof:feasibility:x:N_1:in:Omega}, and taking into account Assumption \ref{ass:RMPC}.(iv), we have that
\begin{align*}
\bar u(N-1; x(k+1)) &\moveEq{-6}\numeq{\eqref{eq:proof:feasibility:candidate:uN}} K_t \bar x(N-1; x(k+1)) \\
&\numeq[\in]{\eqref{eq:ass:RMPC:omega:admissible}} \mathcal{U} \ominus K \mathcal{H}(N-1)
\end{align*}
which, alongside \eqref{eq:proof:feasibility:satisfaction:u:N_2}, proves the satisfaction of \eqref{eq:RMPC:const:u}. \qedhere
\end{proof}
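The central algebraic step of the proof, relation \eqref{eq:proof:feasibility:x_diff}, uses only linearity, so it holds for any gain $K$ and any disturbance. A numerical sanity check with random placeholder matrices (the sequence \texttt{u\_star} below merely stands in for the optimal inputs; their optimality is irrelevant to the identity):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, N = 3, 2, 8
A = rng.normal(size=(n, n)) * 0.3
B = rng.normal(size=(n, m))
K = rng.normal(size=(m, n)) * 0.1   # any gain works for this identity
A_K = A + B @ K

x0 = rng.normal(size=n)
u_star = rng.normal(size=(N, m))     # stand-in for the optimal inputs

def rollout(x0, u_seq):
    """Nominal rollout x(i+1) = A x(i) + B u(i)."""
    xs = [x0]
    for u in u_seq:
        xs.append(A @ xs[-1] + B @ u)
    return np.array(xs)

x_star = rollout(x0, u_star)
w = rng.normal(size=n) * 0.05        # disturbance at time k
x_plus = A @ x0 + B @ u_star[0] + w  # successor state

# Candidate inputs: shifted optimal inputs plus K A_K^i w
u_cand = np.array([u_star[i + 1] + K @ np.linalg.matrix_power(A_K, i) @ w
                   for i in range(N - 1)])
x_cand = rollout(x_plus, u_cand)

# Check x_cand(i) - x_star(i+1) == A_K^i w for i = 0..N-1
for i in range(N):
    diff = x_cand[i] - x_star[i + 1]
    assert np.allclose(diff, np.linalg.matrix_power(A_K, i) @ w)
print("candidate-trajectory identity verified")
```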
\section{Proof of the input-to-state stability of the RMPC controller} \label{app:RMPC:ISS:proof}
\noindent The proof of Theorem \ref{theo:RMPC:ISS} makes use of the following proposition.
\begin{proposition} \label{prop:quadratic:bound:dif}
Let $\mathcal{C} \subset \R^n$ be a compact set and let $a, b, c \in \R^n$ satisfy $a = b + M c$ with $M \in \R^{n \times n}$ and $b \in \mathcal{C}$. Then, for any given $S \succ 0 \in \R^{n \times n}$ there exists a $\mathcal{K}_\infty$ function $\rho(\cdot)$ such that
\begin{equation*}
\| a \|_S^2 - \| b \|_S^2 \leq \rho \left( \| c \| \right).
\end{equation*}
\end{proposition}
\begin{proof}
Denote $\tau = \max_{b \in \mathcal{C}} \| M\T S b \|$. We have that,
\begin{align*}
\| a \|_S^2 - \| b \|_S^2 &= \| b + M c \|_S^2 - \| b \|_S^2 \\
&= 2 b\T S M c + c\T M\T S M c \\
&\numeq[\leq]{(*)} 2 \| M\T S b \| \|c \| + \lambda_{\text{max}}( M\T S M ) \| c \|^2 \\
&\leq 2 \tau \| c \| + \lambda_{\text{max}}( M\T S M ) \| c \|^2,
\end{align*}
where $(*)$ is due to the Cauchy-Schwarz inequality and the Rayleigh quotient bound $c\T M\T S M c \leq \lambda_{\text{max}}( M\T S M ) \| c \|^2$.
Thus, the claim of the proposition holds with
\begin{equation*}
\rho(\| c \| ) = 2 \tau \| c \| + \lambda_{\text{max}}( M\T S M ) \| c \|^2. \qedhere
\end{equation*}
\end{proof}
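Proposition \ref{prop:quadratic:bound:dif} can also be checked numerically. Below, the compact set $\mathcal{C}$ is taken as the unit ball, for which $\tau = \|M\T S\|_2$; the matrices are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
M = rng.normal(size=(n, n))
T = rng.normal(size=(n, n))
S = T.T @ T + np.eye(n)              # S positive definite

# C = unit ball, so tau = max_{b in C} ||M^T S b|| = ||M^T S||_2
tau = np.linalg.norm(M.T @ S, 2)
lam = np.linalg.eigvalsh(M.T @ S @ M).max()

def rho(c_norm):
    """K-infinity bound from the proposition."""
    return 2 * tau * c_norm + lam * c_norm ** 2

for _ in range(1000):
    b = rng.normal(size=n)
    b /= max(1.0, np.linalg.norm(b))  # project b into the unit ball
    c = rng.normal(size=n)
    a = b + M @ c
    lhs = a @ S @ a - b @ S @ b
    assert lhs <= rho(np.linalg.norm(c)) + 1e-9
print("bound holds on all samples")
```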
\begin{proof}[Proof of Theorem \ref{theo:RMPC:ISS}]
In the following, we will prove that the optimal cost function of $\mathcal{P}_N(x(k))$, $V^*_N(x(k))$, is an ISS Lyapunov function of the closed-loop system for any ${w(k) \in \mathcal{W}}$.
Let the optimal solution of problem $\mathcal{P}_N(x(k))$ be given by $\bar \vu^* = \left( \bar u^*(0), \dots, \bar u^*(N{-}1) \right)$ and $\bar \vx^* = \left( \bar x^*(0), \dots, \bar x^*(N) \right)$ be the corresponding optimal values of the predicted states. Additionally, consider the successor state
\begin{equation*}
x(k+1) = A x(k) + B \bar u^*(0; x(k)) + w(k),
\end{equation*}
and let us denote by $\bar \vu$ and $\bar \vx$ the trajectories given by \eqref{eq:proof:feasibility:candidate} for this successor state, which by Theorem \ref{theo:RMPC:feasibility} are a feasible solution of $\mathcal{P}_N(x(k+1))$.
We note that, to improve readability, we shorten the notation $\bar u^*(i; x(k))$ to $\bar u^*(i)$ in the above definitions of $\bar \vu^*$ and $\bar \vx^*$. In fact, we drop the $(k)$ notation entirely. Instead, the variables referring to the successor state are indicated with a superscript $+$, as in $x^+ \equiv x(k+1)$.
To prove the claim we follow \cite[Theorem 3]{LimonLNCIS09}, which states that the existence of $\mathcal{K}_\infty$-functions $\alpha_1$, $\alpha_2$, $\alpha_3$ and a $\mathcal{K}$-function $\alpha_4$ such that
\begin{subequations} \label{eq:ISS:Lyapunov:conditions}
\begin{align}
\moveEq{-8}&V^*_N(x) \geq \alpha_1(\|x\|),\, \forall x\in \mathcal{X}, \label{eq:ISS:Lyapunov:conditions:lower} \\
\moveEq{-8}&V^*_N(x) \leq \alpha_2(\|x\|),\, \forall x\in \Omega_{K_t}, \label{eq:ISS:Lyapunov:conditions:upper} \\
\moveEq{-8}&V_N^*(x^+) {-} V_N^*(x) {\leq} {-}\alpha_3(\|x\|) {+} \alpha_4(\|w\|), \forall x{\in}\mathcal{X}, w{\in}\mathcal{W}, \label{eq:ISS:Lyapunov:conditions:diff}
\end{align}
\end{subequations}
proves that $V^*_N(x)$ is an ISS Lyapunov function of the closed-loop system.
First, we show that the lower and upper bounds \eqref{eq:ISS:Lyapunov:conditions:lower} and \eqref{eq:ISS:Lyapunov:conditions:upper} can be obtained following standard procedures, see \cite{MayneAUT00}. Indeed, the lower bound \eqref{eq:ISS:Lyapunov:conditions:lower} can be obtained by taking into account that $Q \succ 0$ as follows,
\begin{equation*}
V^*_N(x) \geq \|\bar x^*(0) \|^2_Q = x\T Q x \geq \lambda_{\text{min}}(Q) \|x\|^2, \forall x \in \mathcal{X}.
\end{equation*}
The upper bound \eqref{eq:ISS:Lyapunov:conditions:upper} can be obtained in the terminal region as follows,
\begin{equation*}
V^*_N(x)\leq x^TPx \leq \lambda_{\text{max}}(P)\|x\|^2, \forall x \in \Omega_{K_t},
\end{equation*}
where the left-hand side inequality is a well-known result in the field of MPC, valid for all states $x \in \Omega_{K_t}$.
To prove the satisfaction of \eqref{eq:ISS:Lyapunov:conditions:diff}, let us first note the following inequality,
\begin{align} \label{eq:proof:stability:bound:N_1_usefull}
& \| \bar x^+(N-1)\|^2_Q + \| \bar u^+(N-1) \|^2_R + \| \bar x^+(N) \|^2_P \nonumber \\
&\numeq{\eqref{eq:proof:feasibility:candidate:xi}} \| \bar x^+(N-1)\|^2_Q + \| \bar u^+(N-1) \|^2_R \nonumber \\
&\quad\;\;+ \| A \bar x^+(N-1) + B \bar u^+(N-1) \|^2_P \nonumber \\
&\numeq{\eqref{eq:proof:feasibility:candidate:uN}} \| \bar x^+(N-1) \|^2_{Q + K_t\T R K_t + (A + B K_t)\T P (A + B K_t)} \nonumber \\
&\;\numeq[\leq]{\eqref{eq:ass:RMPC:P}} \| \bar x^+(N-1) \|^2_P.
\end{align}
Then, we have that
\begin{align} \label{eq:proof:stability:cost:x_plus}
V_N(x^+) &= \sum\limits_{i=0}^{N-2} \big( \| \bar x^+(i) \|^2_Q + \| \bar u^+(i) \|^2_R \big) {+} \| \bar x^+(N) \|^2_P\nonumber \\
&\quad+ \| \bar x^+(N{-}1) \|_Q^2 + \| \bar u^+(N{-}1) \|^2_R \nonumber \\
&\numeq[\leq]{\eqref{eq:proof:stability:bound:N_1_usefull}} \sum\limits_{i=0}^{N-2} \big( \| \bar x^+(i) \|^2_Q + \| \bar u^+(i) \|^2_R \big) {+} \| \bar x^+(N{-}1) \|^2_P.
\end{align}
Additionally, eliminating $\| \bar u^*(0) \|^2_R$ from $V_N^*(x)$ leads to
\begin{align} \label{eq:proof:stability:cost:x}
V_N^*(x) \geq &\sum\limits_{i = 0}^{N-2} \big( \| \bar x^*(i+1) \|^2_Q + \| \bar u^*(i+1) \|^2_R \big) \nonumber \\
&+ \| \bar x^*(0) \|^2_Q + \| \bar x^*(N) \|^2_P.
\end{align}
Therefore, from \eqref{eq:proof:stability:cost:x_plus} and \eqref{eq:proof:stability:cost:x}, and noting that $\bar x^*(0) = x$, we have that
\begin{align} \label{eq:proof:stability:cost:original}
\moveEq{-0}V_N(x^+) {-} V_N^*(x)\, {\leq} & \sum\limits_{i=0}^{N-2} \big( \| \bar x^+(i) \|^2_Q - \| \bar x^*(i{+}1) \|^2_Q \big) \nonumber \\
&+ \sum\limits_{i = 0}^{N-2} \big( \| \bar u^+(i) \|^2_R - \| \bar u^*(i{+}1) \|^2_R \big) \nonumber \\
&- \| x \|^2_Q + \| \bar x^+(N{-}1) \|^2_P - \| \bar x^*(N) \|^2_P.
\end{align}
Next, from \eqref{eq:proof:feasibility:x_diff} we know that
\begin{equation*}
\bar x^+(i) = \bar x^*(i+1) + A_K^i w, \, i \in \Z_0^{N-1},
\end{equation*}
for any $w \in \mathcal{W}$. Then, since $\bar x^*(i+1)$ belongs to a compact set, we have from Proposition \ref{prop:quadratic:bound:dif} that,
\begin{subequations} \label{eq:proof:stability:bound:x_i}
\begin{align}
&\|\bar x^+(i)\|^2_Q-\|\bar x^*(i+1)\|^2_Q \leq \alpha_i ( \| w \| ), \, i \in \Z_0^{N-2}, \\
&\|\bar x^+(N-1)\|^2_P-\|\bar x^*(N)\|^2_P \leq \alpha_{N-1} ( \| w \| ),
\end{align}
\end{subequations}
for some $\mathcal{K}_\infty$ functions $\alpha_i(\cdot)$, $i \in \Z_0^{N-1}$.
Similarly, from \eqref{eq:proof:feasibility:u_diff}, and taking into account that $\bar u^*(i+1)$ belongs to a compact set, we have that
\begin{equation} \label{eq:proof:stability:bound:u_i}
\| \bar u^+(i) \|^2_R - \| \bar u^*(i+1) \|^2_R \leq \gamma_i( \| w \| ), \, i \in \Z_0^{N-2},
\end{equation}
for some $\mathcal{K}_\infty$ functions $\gamma_i(\cdot)$, $i \in \Z_0^{N-2}$.
Therefore, using \eqref{eq:proof:stability:bound:x_i} and \eqref{eq:proof:stability:bound:u_i} in \eqref{eq:proof:stability:cost:original} leads to
\begin{equation*}
V_N(x^+) - V_N^*(x) \leq - \| x \|^2_Q + \sigma( \| w \| ),
\end{equation*}
where $\sigma(\| w \|)$ is given by
\begin{equation*}
\sigma(\| w \|) = \sum\limits_{i = 0}^{N-1} \alpha_i(\| w \|) + \sum\limits_{i = 0}^{N-2} \gamma_i(\|w\|),
\end{equation*}
which is, by construction, a $\mathcal{K}_\infty$-function.
Finally, by optimality we have $V_N^*(x^+)\leq V_N(x^+)$. Thus,
\begin{equation*}
V_N^*(x^+) - V_N^*(x) \leq - \| x \|^2_Q + \sigma( \| w \| ). \qedhere
\end{equation*}
\end{proof}
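The terminal ingredient inequality used in step \eqref{eq:proof:stability:bound:N_1_usefull}, $Q + K_t\T R K_t + (A + B K_t)\T P (A + B K_t) \preceq P$, holds with equality when $(K_t, P)$ is the LQR pair. A numerical check with placeholder matrices (plain Riccati value iteration; the minus sign converts the usual LQR gain to the paper's $u = K_t x$ convention):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 3, 2
A = rng.normal(size=(n, n)) * 0.4
B = rng.normal(size=(n, m))
Q = np.eye(n)
R = np.eye(m)

# Riccati value iteration for the LQR solution P.
P = Q.copy()
for _ in range(2000):
    G = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ G)
    P = 0.5 * (P + P.T)              # keep P numerically symmetric
K_t = -G                             # u = K_t x convention

M = Q + K_t.T @ R @ K_t + (A + B @ K_t).T @ P @ (A + B @ K_t)
# For the LQR pair (K_t, P) the inequality holds with equality.
print(np.abs(np.linalg.eigvalsh(P - M)).max())
```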
\onecolumn
\section{Extended case study results for the double reactor and separator plant} \label{app:extended}
\begin{figure*}[h!]
\centering
\begin{subfigure}[ht]{0.32\textwidth}
\includegraphics[width=\linewidth]{H1.eps}
\end{subfigure}%
\
\begin{subfigure}[ht]{0.32\textwidth}
\includegraphics[width=\linewidth]{H2.eps}
\end{subfigure}%
\
\begin{subfigure}[ht]{0.32\textwidth}
\includegraphics[width=\linewidth]{H3.eps}
\end{subfigure}%
\vspace{1em}
\begin{subfigure}[ht]{0.32\textwidth}
\includegraphics[width=\linewidth]{cA1.eps}
\end{subfigure}%
\
\begin{subfigure}[ht]{0.32\textwidth}
\includegraphics[width=\linewidth]{cA2.eps}
\end{subfigure}%
\
\begin{subfigure}[ht]{0.32\textwidth}
\includegraphics[width=\linewidth]{cA3.eps}
\end{subfigure}%
\vspace{1em}
\begin{subfigure}[ht]{0.32\textwidth}
\includegraphics[width=\linewidth]{cB1.eps}
\end{subfigure}%
\
\begin{subfigure}[ht]{0.32\textwidth}
\includegraphics[width=\linewidth]{cB2.eps}
\end{subfigure}%
\
\begin{subfigure}[ht]{0.32\textwidth}
\includegraphics[width=\linewidth]{cB3.eps}
\end{subfigure}%
\vspace{1em}
\begin{subfigure}[ht]{0.32\textwidth}
\includegraphics[width=\linewidth]{T1.eps}
\end{subfigure}%
\
\begin{subfigure}[ht]{0.32\textwidth}
\includegraphics[width=\linewidth]{T2.eps}
\end{subfigure}%
\
\begin{subfigure}[ht]{0.32\textwidth}
\includegraphics[width=\linewidth]{T3.eps}
\end{subfigure}%
\caption{States during the test shown in Section \ref{sec:case:study:plant} for the double reactor and separator plant.
Reference in solid black line and state constraint in dashed blue line. Otherwise, the lines and shaded areas represent the same as in Figure \ref{fig:reactors:tests}, but with the results for the proposed RMPC shown in green and the results for the nominal MPC in red.
We only represent the upper constraint of $T_3$ because all other constraints are inactive during all the simulations.}
\label{fig:extended:state}
\vspace{2em}
\begin{subfigure}[ht]{0.32\textwidth}
\includegraphics[width=\linewidth]{Q1.eps}
\end{subfigure}%
\
\begin{subfigure}[ht]{0.32\textwidth}
\includegraphics[width=\linewidth]{Q2.eps}
\end{subfigure}%
\
\begin{subfigure}[ht]{0.32\textwidth}
\includegraphics[width=\linewidth]{Q3.eps}
\end{subfigure}%
\vspace{1em}
\begin{subfigure}[ht]{0.32\textwidth}
\includegraphics[width=\linewidth]{Ff1.eps}
\end{subfigure}%
\
\begin{subfigure}[ht]{0.32\textwidth}
\includegraphics[width=\linewidth]{Ff2.eps}
\end{subfigure}%
\
\begin{subfigure}[ht]{0.32\textwidth}
\includegraphics[width=\linewidth]{FR.eps}
\end{subfigure}%
\caption{Control inputs during the test shown in Section \ref{sec:case:study:plant} for the double reactor and separator plant.
Reference in solid black line.
Otherwise, the lines and shaded areas represent the same as in Figures \ref{fig:reactors:tests} and \ref{fig:extended:state}, where green represents the results using the proposed RMPC and red those using the nominal MPC.
We do not represent the constraints because they are inactive during all the simulations.}
\end{figure*}
\vspace*{-2em}
\twocolumn
\end{appendix}
\bibliographystyle{elsarticle-num}
In many image processing applications we are faced with data containing a large number of outliers. One example is denoising and edge-preserving smoothing of low-level image
features, but the outlier problem also occurs in high-level operations such as object
recognition and stereo vision. A wide range of robust techniques for different
applications have been presented, with RANSAC \cite{hartley} and the Hough transform
\cite{sonka} as two classical examples.
In this paper, we focus on the particular problem of calculating a mean value which is
robust against outliers. An efficient method for the special case of one-dimensional
features is presented and compared to the \emph{channel averaging} \cite{scia2003}
approach.
\section{Problem Formulation}
\label{sec:prob} Given a sample set $\mathbf{X} = [\mathbf{x}^{(1)} \ldots
\mathbf{x}^{(n)}]$, we seek to minimize an error function given by
\begin{equation}
\label{eq:pf1}
\mathcal{E}(\mathbf{x}) = \sum_{k=1}^n \rho(\| \mathbf{x}^{(k)} - \mathbf{x} \|)
\end{equation}
If we let $\rho$ be a quadratic function, the minimizing $\mathbf{x}$ is the standard
mean value. To achieve the desired robustness against outliers, $\rho$ should be a
function that saturates for large argument values. Such functions are called \emph{robust
error norms}. Some popular choices are the \emph{truncated quadratic} and \emph{Tukey's
biweight} shown in figure \ref{fig:pf1}. A simple 1D data set together with its error
function is shown in figure \ref{fig:pf2}. The $\mathbf{x}$ which minimizes
\eqref{eq:pf1} belongs to a general class of estimators called \emph{M-estimators}
\cite{Winkler02}, and will in this text be referred to as the \emph{robust mean value}.
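As a concrete illustration of the problem (not the efficient method of Section \ref{sec:fast}), the robust mean can be approximated by brute force: evaluate $\mathcal{E}(x)$ on a grid of candidate positions and keep the minimizer. A minimal Python sketch for one-dimensional data with the truncated quadratic norm (the grid resolution is an arbitrary choice here):

```python
def robust_error(x, samples, c):
    """Error function of Eq. (1) with the truncated quadratic norm."""
    return sum(min((xk - x) ** 2, c ** 2) for xk in samples)

def robust_mean_bruteforce(samples, c, steps=2000):
    """Approximate the robust mean by scanning candidate positions on a grid.
    Illustrative only: O(n * steps) work, and only grid-level accuracy."""
    lo, hi = min(samples), max(samples)
    best_x, best_e = lo, float("inf")
    for i in range(steps + 1):
        x = lo + (hi - lo) * i / steps
        e = robust_error(x, samples, c)
        if e < best_e:
            best_x, best_e = x, e
    return best_x
```

On the data of Fig.~\ref{fig:pf2}-type sets, the minimizer is attracted to the inlier cluster while distant outliers contribute only the constant $c^2$ each.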
\begin{figure}
\center
\includegraphics[width=0.2\textwidth]{norm_trunc.eps}
\includegraphics[width=0.2\textwidth]{norm_tukey.eps}
\caption{Error norms: Truncated quadratic (left), Tukey's biweight (right)}\label{fig:pf1}
\end{figure}
\begin{figure}
\center
\includegraphics[width=0.3\textwidth]{errfunc1.eps}
\caption{A simple 1D data set together with the error function generated using the truncated
quadratic error norm with cutoff distance 1}\label{fig:pf2}
\end{figure}
\section{Previous Work}
\label{sec:prev}
Finding the robust mean is a non-convex optimization problem, and a unique global minimum
is not guaranteed. The problem is related to clustering, and the well-known \emph{mean
shift} iteration has been shown to converge to a local minimum of a robust error function
\cite{cheng95}.
Another approach is to use the channel representation (soft histograms) \cite{f04a,
scia2003, fg04, Scharr03}. Each sample $\mathbf{x}$ can be encoded into a channel vector
$\mathbf{c}$ by the nonlinear transformation
\begin{equation}
\label{eq:channel1}
\mathbf{c} = [K(\|\mathbf{x}-\xi_1\|), \ldots, K(\|\mathbf{x}-\xi_m\|)]
\end{equation}
where $K$ is a localized kernel function and $\xi_k$ the \emph{channel centers},
typically located uniformly and such that the kernels overlap (fig \ref{fig:channel1}).
By averaging the channel representations of the samples, we get something which resembles
a histogram, but with overlapping and ``smooth'' bins. Depending on the choice of kernel,
the representation can be decoded to obtain an approximate robust mean. The distance
between neighboring channels corresponds to the scale of the robust error norm.
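A minimal sketch of the encoding in Eq.~\eqref{eq:channel1} for scalar samples, assuming unit-spaced channel centers and a $\cos^2$ kernel (one common choice in the channel literature; the kernel width of $1.5$ channel spacings is an illustrative assumption, chosen so that each sample activates at most three channels):

```python
import math

def cos2_kernel(d, width=1.5):
    """cos^2 kernel: nonzero for |d| < width; with unit channel spacing and
    width = 1.5, each sample activates at most three channels."""
    if abs(d) >= width:
        return 0.0
    return math.cos(math.pi * d / (2.0 * width)) ** 2

def encode(x, centers):
    """Channel encoding of a scalar sample x, Eq. (2)."""
    return [cos2_kernel(x - xi) for xi in centers]

def average_channels(samples, centers):
    """Soft histogram: the average of the channel vectors of all samples."""
    vecs = [encode(x, centers) for x in samples]
    n = len(samples)
    return [sum(v[i] for v in vecs) / n for i in range(len(centers))]
```

With this kernel the channel values of a sample sum to a constant ($3/2$) wherever three channels cover the sample, which is one reason overlapping "smooth bins" behave better than a hard histogram.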
\begin{figure}[t]
\center
\includegraphics[width=0.4\textwidth, height=0.2\textwidth]{channels1.eps}\\
\caption{Example of channel kernel functions located at the integers}\label{fig:channel1}
\end{figure}
\section{Efficient 1D Method
\protect\footnote{This section has been slightly revised since the
original SSBA paper, as it contained some minor errors.}} \label{sec:fast}
This section will cover the case where the $\mathbf{x}$'s are one-dimensional,
\emph{e.g.} intensities in an image, and the truncated quadratic error norm is used. In
this case, there is a very efficient method, which we have not encountered in the
literature. For clarity, we describe the case where all samples have equal weight, but
the extension to weighted samples is straightforward.
First, some notation. We assume that our data is sorted in ascending order and numbered
from $1 \ldots n$. Since the $\mathbf{x}$'s are one dimensional, we drop the vector
notation and write simply $x_k$. The error norm is truncated at $c$, and can be written
as
\begin{equation}
\rho(x) = \min\{x^2, c^2\}
\end{equation}
The method works as follows: we keep track of indices $a,b$ and a window $w = [a,b]$
of samples $[x_a, \ldots, x_b]$. The window $[a,b]$ is said to be
\begin{itemize}
\item[-] \emph{feasible} if $|x_b - x_a| < 2c$
\item[-] \emph{maximal} if the samples are contained in a continuous window of length $2 c$,
i.e. if $[a, b]$ is feasible and $[a-1, b+1]$ is infeasible.
\end{itemize}
Now define for a window $w = [a,b]$
\begin{eqnarray}
\mu_w & = & \frac{1}{b-a+1} \sum_{k=a}^b x_k \\
n_o & = & (a-1) + (n-b) \quad \\
q_w & = & \sum_{k=a}^b (\mu_w - x_k)^2 \\
\hat{\mathcal{E}}_w & = & q_w + n_o c^2
\end{eqnarray}
Note that $n_o$ is the number of samples outside the window. Consider the global minimum
$x_0$ of the error function and the window $w$ of samples $x_k$ that fall within the
quadratic part of the error function centered around $x_0$, i.e. the samples $x_k$ such
that $|x_k - x_0| \le c$. Either this window is located close to the boundary ($a=1$ or
$b=n$) or constitutes a maximal window. In both cases, $x_0 = \mu_w$, and
$\hat{\mathcal{E}}_w = \mathcal{E}(\mu_w)$. This is not necessarily true for an arbitrary
window, e.g. if $\mu_w$ is located close to the window boundary. However, for an
arbitrary window $w$, we have
\begin{eqnarray}
\hat{\mathcal{E}}_w & = & \sum_{k=a}^b (\mu_w - x_k)^2 + n_o c^2 \nonumber \\
& \ge & \sum_{k=1}^n \min\{(\mu_w - x_k)^2, c^2\} \\
& = & \sum_{k=1}^n \rho(\mu_w - x_k) = \mathcal{E}(\mu_w)
\end{eqnarray}
The strategy is now to enumerate all maximal and boundary windows, evaluate
$\hat{\mathcal{E}}_w$ for each and take the minimum, which is guaranteed to be the global
minimum of $\mathcal{E}$. Note that it does not matter if some non-maximal windows are
included, since we always have $\hat{\mathcal{E}}_w \ge \mathcal{E}(\mu_w)$.
The following iteration does the job: Assume that we have a feasible window $[a, b]$, not
necessarily maximal. If $[a, b+1]$ is feasible, take this as the new window. Otherwise,
$[a, b]$ was the largest maximal window starting at $a$, and we should go on looking for
maximal windows starting at $a+1$. Take $[a+1, b]$ as the first candidate, then keep
increasing $b$ until the window becomes infeasible, etc. If proper initialization and
termination of the loop is provided, this iteration will generate all maximal and
boundary windows.
The last point to make is that we do not need to recompute $q_w$ from scratch as the
window size is changed. Similarly to the treatment of mean values and variances in
statistics, expanding the quadratic expression gives
\begin{eqnarray}
q_w & = & \sum_{k=a}^b (\mu_w - x_k)^2 = \nonumber \\
& = & \sum_{k=a}^b x_k^2 - (b-a+1) \mu_w^2 = \nonumber \\
& = & S_2 - (b-a+1)^{-1} S_1^2
\end{eqnarray}
where we have defined
\begin{eqnarray}
S_1 & = & \sum_{k=a}^b x_k = (b-a + 1) \mu_w \\
S_2 & = & \sum_{k=a}^b x_k^2
\end{eqnarray}
$S_1$ and $S_2$ can easily be updated in constant time as the window size is increased or
decreased, giving the whole algorithm complexity $O(n)$. The algorithm is summarized as
follows: \footnote{The check $a \le b$ is required to avoid zero division if $a$ was
increased beyond $b$ in the previous iteration.}
\begin{algorithm}
\caption{Fast 1D robust mean calculation} \label{alg:fast1}
\begin{algorithmic}
\STATE Initialize $a \gets 1$, $b \gets 1$, $S_1 \gets x_1$, $S_2 \gets x_1^2$
\WHILE{$a \le n$}
\IF{$a \le b$}
\STATE Calculate candidate $\hat{\mathcal{E}}_w$ and $\mu_w$:
\STATE $\mu_w \gets (b-a+1)^{-1} S_1$
\STATE $n_o \gets (a-1) + (n-b)$
\STATE $\hat{\mathcal{E}}_w \gets S_2 - \mu_w S_1 + n_o c^2$
\STATE If $\hat{\mathcal{E}}_w$ is the smallest so far, store $\hat{\mathcal{E}}_w$, $\mu_w$.
\ENDIF
\IF{$b < n$ and $|x_{b+1} - x_a| < 2c$}
\STATE $b \leftarrow b + 1$
\STATE $S_1 \leftarrow S_1 + x_{b}$
\STATE $S_2 \leftarrow S_2 + x_{b}^2$
\ELSE
\STATE $S_1 \leftarrow S_1 - x_{a}$
\STATE $S_2 \leftarrow S_2 - x_{a}^2$
\STATE $a \leftarrow a + 1$
\ENDIF
\ENDWHILE
\STATE The $\mu_w$ corresponding to the smallest $\hat{\mathcal{E}}_w$ is now the robust mean.
\end{algorithmic}
\end{algorithm}
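Algorithm \ref{alg:fast1} translates directly into code. The following Python sketch uses 0-based indexing but otherwise mirrors the pseudocode; the input must be sorted in ascending order:

```python
def robust_mean_1d(samples, c):
    """O(n) robust mean for the truncated quadratic norm (Algorithm 1).
    `samples` must be sorted in ascending order."""
    x = samples
    n = len(x)
    a, b = 0, 0                      # current window [a, b], 0-based
    s1, s2 = x[0], x[0] ** 2         # running sums S_1 and S_2 over the window
    best_err, best_mu = float("inf"), x[0]
    while a < n:
        if a <= b:                   # guard against an empty window
            mu = s1 / (b - a + 1)
            n_out = a + (n - 1 - b)  # number of samples outside the window
            err = s2 - mu * s1 + n_out * c ** 2
            if err < best_err:
                best_err, best_mu = err, mu
        if b < n - 1 and x[b + 1] - x[a] < 2 * c:
            b += 1                   # grow the window to the right
            s1 += x[b]
            s2 += x[b] ** 2
        else:
            s1 -= x[a]               # shrink the window from the left
            s2 -= x[a] ** 2
            a += 1
    return best_mu
```

Each sample enters and leaves the window exactly once, so the loop body runs at most $2n$ times, confirming the $O(n)$ complexity (after sorting).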
Note that it is straightforward to introduce a weight for each sample, such that a
weighted mean value is produced. We should then let $n_o$ be the total weight of the
samples outside the window, $\mu_w$ the weighted mean value of the window $w$, and $S_1$ and
$S_2$ weighted sums.
\section{Properties of the Robust \\ Mean Value}
In this section, some properties of the robust mean values generated by the truncated
quadratic method and the channel averaging will be examined. In figure \ref{fig:comp1},
we show the robust mean of a sample set consisting of some values (inliers) with mean
value $3.0$ and an outlier at varying positions. As the outlier moves sufficiently far
away from the inliers, it is completely rejected, and when it is close to $3.0$, it is
treated as an inlier. As expected, the truncated quadratic method makes a hard decision
about whether the outlier should be included or not, whereas the channel averaging
implicitly assumes a smoother error norm.
Another effect is that the channel averaging overcompensates for the outlier at some
positions (around $x=6.0$ in the plot). Also, the exact behavior of the method can vary
at different absolute positions due to the \emph{grid effect} illustrated in figure
\ref{fig:comp2}. We calculated the robust mean of two samples $x_1, x_2$, symmetrically
placed around some point $x_0$ with $|x_1-x_0| = |x_2-x_0| = d$. The channels were placed
with unit distance, and the displacement of the estimated mean $m$ compared to the
desired value $x_0$ is shown for varying $x_0$'s in the range between two neighboring
channel centers. The figure shows that the method makes some (small) systematic errors
depending on the position relative to the channel grid. No such grid effect occurs using
the method from section \ref{sec:fast}.
When the robust mean algorithm is applied on sliding spatial windows of an image, we get
an edge-preserving image smoothing method. In figure \ref{fig:lenna}, we show the 256x256
Lenna image smoothed with the truncated quadratic method using a spatial window of 5 x 5
and $c = 0.1$ in the intensity domain, where intensities are in the range $[0,1]$. The
pixels are weighted with a Gaussian function.
\begin{figure}[t]
\center
\includegraphics[width=0.4\textwidth]{influence.eps}
\caption{The influence of an outlier on the mean value}\label{fig:comp1}
\end{figure}
\begin{figure}[t]
\center
\includegraphics[width=0.4\textwidth]{grid.eps}
\caption{The grid effect}\label{fig:comp2}
\end{figure}
\section{Discussion}
We have shown an efficient way to calculate the robust mean value for the special case of
one-dimensional features and the truncated quadratic error. The advantage of this method
is that it is simple, exact and global. The disadvantage is of course its limitation to
one-dimensional feature spaces.
One example of data for which the method could be applied is image features like
intensity or orientation. If the number of samples is high, e.g. in robust smoothing of a
high resolution image volume, the method might be suitable. If a convolution-like
operation is to be performed, the overhead of sorting the samples could be reduced
significantly, since the data is already partially sorted when moving to a new spatial
window, leading to an efficient edge-preserving smoothing algorithm.
\begin{figure}[t]
\center
\includegraphics[width=0.4\textwidth]{lenna_smooth.eps}
\caption{Lenna, robustly smoothed with the truncated quadratic method}\label{fig:lenna}
\end{figure}
\section*{Acknowledgment}
This work has been supported by EC Grant IST-2003-004176 COSPAL.
\section{Introduction}
There is a growing demand to include partial spatial coherence in optical design,
not only because of classical areas of application such as microscopy and projection
lithography, but also because of the increasing importance of partially coherent solid-state
sources such as multimode lasers (including excimers) and LEDs.
Methods based on ray optics cannot adequately describe all of the relevant issues related to
coherence and polarization, which are intimately connected in electromagnetic coherence theory.
Thus wave-optical methods are required, which must be capable of dealing with non-paraxial fields
and systems that may also contain micro- and nano\-structures in
objects and interfaces. To this end, one needs computationally efficient physical-optics-based
representations of spatially partially coherent electromagnetic fields.
Apart from some specific models that allow analytic solutions, propagating spatially partially coherent
light even in free space is a formidable numerical problem involving four-dimensional integrals~\cite{Mandel}.
The dimensionality of the propagation integrals can be decreased to two (for planar sources) if the partially coherent field is represented as an incoherent superposition of fully spatially coherent fields. The classical way to do this is the coherent-mode expansion of the cross-spectral density (CSD) function by means of Mercer's expansion~\cite{Wolf82}, which has recently been extended to electromagnetic fields~\cite{Gori03,Tervo04}. In this representation the coherent modes are uniquely defined by the CSD through a Fredholm integral equation; they form a complete and orthonormal set, in which the effective number of modes $N$ increases as the degree of coherence of the field is reduced~\cite{Gori80a,Gori80b,Starikov1,Starikov2}. Thus, in free-space propagation, the original four-dimensional integral for CSD is replaced by $N$ two-dimensional integrals for the coherent modes. In light-matter interaction analysis one solves the diffraction or scattering problem for $N$ coherent fields of different functional forms~\cite{Huttunen,Vahimaa97}.
There is also an alternative representation of a partially coherent field in terms of uncorrelated, fully coherent fields~\cite{Gori78,Gori80,Vahimaa,Gori07,Turunen08}. Here all coherent fields or `elementary modes' are of identical functional form but spatially (or angularly) shifted with respect to each other and weighted by a function determined by the CSD. Unlike the Mercer expansion, the shifted-elementary-mode representation is applicable only to a specific class of genuine CSDs~\cite{Gori07}. However, this class contains many of the fields of practical significance in optical design, including all quasihomogeneous fields (LEDs, excimers, illumination in microscopy and projection lithography, etc.). Since the elementary modes are identical, only a single 2D integral needs to be evaluated in free-space propagation problems. In interaction problems one needs to scan the elementary mode across the object and perform a set of independent diffraction calculations for coherent light. Typically the elementary mode has a smooth functional form, at least compared to the higher-order modes in the Mercer expansion, and is therefore easy to propagate numerically. Another advantage of the shifted-mode model is that there is no need for numerical solution of the Fredholm integral equation: the elementary mode and the associated weight function can be determined, e.g., from far-zone properties of the field~\cite{Vahimaa}.
In this paper we generalize the scalar shifted-elementary-mode representation to the vectorial case, which is necessary to adequately model partially spatially coherent, partially polarized sources. There are two major reasons why such a generalization is necessary: first, non-paraxial fields cannot be adequately described by a scalar model and, second, optical components in the system can modify the polarization state of the field. It turns out that the vectorial nature of the field does not fundamentally complicate the numerical procedure. Two elementary modes are needed to specify the state of polarization, but also in the electromagnetic case the modes and their weight functions can be determined from far-zone properties of the field.
We begin the discussion by briefly reviewing the scalar model in Sect.~\ref{s:scalar} to establish the notation and to simplify the interpretation of the main results. The extension to the electromagnetic case is outlined in Sect.~\ref{s:emext} and the rigorous mathematical formulation is presented in Sections \ref{s:far} and \ref{s:elementary}. The important special case of rotationally symmetric and quasihomogeneous fields is discussed in Sections \ref{s:rotsym} and~\ref{s:quasihom}, respectively. Some numerical results are provided in Sect.~\ref{s:cosine}. In Sect.~\ref{s:LED} we apply the model to a simple LED geometry. Finally, issues such as the measurements required to determine the elementary modes and their weight functions are discussed in Sect.~\ref{s:discussion}.
\section{The scalar model}
\label{s:scalar}
Using the notations of Fig.~\ref{f:notations}, we may write the well-known relationship~\cite{Mandel} between the cross-spectral density function $W(\mathbf{r}_1,\mathbf{r}_2)$ and the angular correlation function $A(\boldsymbol{\kappa}_1,\boldsymbol{\kappa}_2)$ as (we omit the dependence on the angular frequency $\omega$ for brevity throughout the paper)
\begin{align}
W(\mathbf{r}_1,\mathbf{r}_2) &=\frac{1}{(2\pi)^4}
\iiiint_{-\infty}^\infty
A(\boldsymbol{\kappa}_1,\boldsymbol{\kappa}_2)\nonumber\\
&\quad\times \exp\left[{\rm
i}(\mathbf{k}_2\cdot\mathbf{r}_2-\mathbf{k}_1^*\cdot\mathbf{r}_1)\right]
\mathrm{d}^2\kappa_1\,\mathrm{d}^2\kappa_2. \label{angspecscalar}
\end{align}
Here the asterisk indicates complex conjugation, $\mathbf{r}_j = \left(x_j,y_j,z_j\right)$ with $j = 1,2$ are position vectors, $\mathbf{k}_j = \left(k_{jx},k_{jy},k_{jz}\right) = \left(\boldsymbol{\kappa}_{j},k_{jz}\right)$ represent wave vectors, and $\boldsymbol{\kappa}_j=\left(k_{jx},k_{jy}\right)$ are their transverse projections.
\begin{figure}
\psfrag{a}{(a)}\psfrag{b}{(b)}
\psfrag{x}{$x$}\psfrag{y}{$y$}\psfrag{z}{$z$}
\psfrag{4}{$k_x$}\psfrag{q}{$k_y$}\psfrag{2}{$k_z$}
\psfrag{3}{$\psi$}\psfrag{r}{$r$}\psfrag{2}{$k_z$}
\psfrag{w}{$\kappa$}\psfrag{h}{$\mathbf{\hat{x}}$}\psfrag{o}{$\mathbf{\hat{z}}$}
\psfrag{e}{$\rho$}\psfrag{k}{$k$}\psfrag{t}{$\theta$}
\psfrag{g}{$\phi$}\psfrag{n}{$\mathbf{\hat{s}}$}
\psfrag{l}{$\mathbf{\hat{r}}$} \psfrag{i}{$\mathbf{\hat{\phi}}$}
\psfrag{j}{$\boldsymbol{\hat{\rho}}$}
\psfrag{6}{$\boldsymbol{\hat{\theta}}$}
\psfrag{8}{$\boldsymbol{\hat{\kappa}}$}
\psfrag{5}{$\boldsymbol{\hat{\psi}}$} \psfrag{u}{$\mathbf{\hat{k}}$}
\psfrag{v}{$\boldsymbol{\kappa}$}\psfrag{c}{$\mathbf{\hat{y}}$}\psfrag{7}{$\mathbf{k}$}
\psfrag{f}{$\boldsymbol{\rho}$}\psfrag{m}{$\mathbf{r}$}
\psfrag{d}{$\boldsymbol{\sigma}$} \psfrag{0}{$z=0$}
\begin{center}
\includegraphics[width=\columnwidth]{elementary-em-1.eps}
\end{center}
\caption{Notations used for (a) position and (b) wave-vector
coordinates.} \label{f:notations}
\end{figure}
Restricting now to the specific class of fields mentioned in the introduction, we assume that the angular correlation
function is of the Schell-model form~\cite{Vahimaa}
\begin{equation}
A(\boldsymbol{\kappa}_1,\boldsymbol{\kappa}_2) = g(\Delta\boldsymbol{\kappa})f^\ast(\boldsymbol{\kappa}_1)f(\boldsymbol{\kappa}_2).
\label{Aschell}
\end{equation}
The radiant intensity of a scalar field is defined as~\cite{Mandel}
\begin{equation}
J(r\hat{\mathbf{s}}) = 2 n \pi^2k^2\cos^2\theta A(k\boldsymbol{\sigma},k\boldsymbol{\sigma}),
\label{Jdefscalar}
\end{equation}
where $n$ is the refractive index of the medium, $\mathbf{\hat{s}}=\mathbf{r}/r$ with $r=\|\mathbf{r}\|$
is a unit direction vector, $\boldsymbol{\sigma}$ is its transverse projection, $\theta$ is the angle between
$\mathbf{\hat{s}}$ and the $z$ axis, and $k=\|\mathbf{k}\|$ is the wave number. Using Eq.~\eqref{Aschell}, we have
\begin{equation}
J(r\hat{\mathbf{s}}) = 2 n \pi^2k^2\cos^2\theta \left|f(k\boldsymbol{\sigma})\right|^2.
\label{Jexprscalar}
\end{equation}
Thus the radiant intensity of a Schell-model partially coherent field defined by Eq.~\eqref{Aschell} is the same as that produced by a coherent field with angular spectrum $f(\boldsymbol{\kappa})$.
Let us introduce two-dimensional Fourier-transform relations
\begin{equation}
e(\mathbf{r}) = \frac{1}{(2\pi)^2}\iint_{-\infty}^\infty f(\boldsymbol{\kappa})\exp\left({\rm i}\mathbf{k}\cdot\mathbf{r}\right){\rm d}^2\kappa
\label{IFT-e}
\end{equation}
and
\begin{equation}
p(\boldsymbol{\rho}) = \frac{1}{(2\pi)^2}\iint_{-\infty}^\infty g(\Delta\boldsymbol{\kappa})\exp\left({\rm i}\Delta \boldsymbol{\kappa}\cdot\boldsymbol{\rho}\right){\rm d}^2\Delta\kappa,
\label{IFT-p}
\end{equation}
where $\Delta\boldsymbol{\kappa} = \boldsymbol{\kappa}_2-\boldsymbol{\kappa}_1$ and $\boldsymbol{\rho}=x\mathbf{\hat{x}}+y\mathbf{\hat{y}}$ is the transverse projection of the position vector.
Inserting Eq.~\eqref{Aschell} into Eq.~\eqref{angspecscalar} and using the Fourier representation of $g(\Delta\boldsymbol{\kappa})$ obtained by inverting Eq.~\eqref{IFT-p}, we get the expression
\begin{equation}
W(\mathbf{r}_1,\mathbf{r}_2) = \iint_{-\infty}^\infty p(\boldsymbol{\rho}^\prime)e^\ast(\mathbf{r}_1-\boldsymbol{\rho}^\prime)e(\mathbf{r}_2-\boldsymbol{\rho}^\prime)
{\rm d}^2\rho^\prime
\label{Scalarrep}
\end{equation}
for the CSD~\cite{Vahimaa}. This result applies, in particular, at $z = 0$. Thus $e(\boldsymbol{\rho},0)$ is the coherent source-plane field with angular spectrum $f(\boldsymbol{\kappa})$. The representation in Eq.~\eqref{Scalarrep} expresses the partially coherent field as a weighted linear superposition of spatially shifted but identical fully coherent elementary (scalar) fields $e(\boldsymbol{\rho},0)$.
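The superposition in Eq.~\eqref{Scalarrep} is straightforward to discretize. The following one-dimensional Python sketch uses an illustrative Gaussian elementary mode and Gaussian weight function (both arbitrary choices, not taken from a particular source) and evaluates the CSD as a Riemann sum; since the chosen mode is real, no conjugation is needed:

```python
import math

def elementary(x):
    """Gaussian elementary mode e(x) at z = 0 (an illustrative choice)."""
    return math.exp(-x ** 2)

def weight(x):
    """Gaussian weight function p(x') (an illustrative choice)."""
    return math.exp(-x ** 2 / 4)

def csd(x1, x2, half_width=6.0, steps=400):
    """Riemann-sum version of the shifted-mode superposition in 1D:
    W(x1, x2) ~ sum_k p(x'_k) e(x1 - x'_k) e(x2 - x'_k) dx'."""
    dx = 2 * half_width / steps
    total = 0.0
    for k in range(steps + 1):
        xp = -half_width + k * dx
        total += weight(xp) * elementary(x1 - xp) * elementary(x2 - xp) * dx
    return total
```

The construction automatically yields a genuine CSD: it is Hermitian (here, symmetric) and the normalized correlation $|W(x_1,x_2)|/\sqrt{W(x_1,x_1)W(x_2,x_2)}$ stays bounded by one, as the discrete sum is an inner product of shifted modes weighted by the non-negative $p$.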
\section{The electromagnetic extension}
\label{s:emext}
As is evident from Eq.~\eqref{Jdefscalar}, the mathematical form of the elementary field mode $e(\boldsymbol{\rho},0)$ can be determined from the knowledge of the radiant intensity (at least apart from a phase factor, which can in fact be employed to model volume sources~\cite{Turunen08}). In general, the weight function $p(\boldsymbol{\rho}^\prime)$ can be determined from far-field coherence measurements using Eqs.~\eqref{Aschell} and \eqref{IFT-p}, but often there are simpler ways to at least approximate it~\cite{Vahimaa}. This is the case, in particular, if the field is quasi\-homogeneous, i.e., if the coherence area in the source plane is much smaller than the source area; in this case the weight function comes out of the integral in Eq.~\eqref{Scalarrep}. When $e(\boldsymbol{\rho},0)$ and $p(\boldsymbol{\rho}^\prime)$ are known, coherent propagation techniques for $e(\boldsymbol{\rho},0)$ and a linear superposition according to Eq.~\eqref{Scalarrep} suffice to propagate the entire spatially partially coherent field.
The extension of the scalar shifted-mode model to the electromagnetic case is not trivial. One might be tempted to use some superpositions of, e.g., locally linearly polarized modes of the scalar functional form to model partially polarized or unpolarized sources. However, such constructions seem hard to justify mathematically. The approach taken here is based on far-field information: in the far zone the field behaves as an outgoing spherical wave, and therefore it has a well-defined local polarization state. We employ this fact to separate the angular correlation tensor, which is the electromagnetic extension of the function $A(\boldsymbol{\kappa}_1,\boldsymbol{\kappa}_2)$, into two orthogonal parts [see Eq.~\eqref{A:StartingPoint} in sect.~\ref{s:far}]. These represent the electromagnetic elementary modes (or polarization modes) in the far field. Then the source-plane modes can be determined by Fourier-transform techniques in analogy with the scalar case.
As a result of the construction process (to be described mathematically in the following sections), we obtain a vectorial shifted-mode expansion for the spatially partially coherent field everywhere in space [see Eqs.~\eqref{FieldSource}, \eqref{Elementary}, and \eqref{ElementaryB} in Sect.~\ref{s:elementary}]. Instead of propagating one fully coherent mode as in the scalar case, we now need to propagate two well-defined fully coherent vectorial field modes, and to form the generalized shifted-mode superposition, to govern the propagation of the electromagnetic spatially partially coherent field. Thus the increase in computational complexity, compared to the scalar case, is essentially fourfold.
\section{Field representation in the far-zone}
\label{s:far}
A statistically stationary random electromagnetic field in
the space--frequency domain is described by a CSD matrix
$\boldmatrix{W}(\mathbf{r}_1,\mathbf{r}_2)$, which may be
expressed as~\cite{Tervo04,Alonso08}
\begin{align}
\boldmatrix{W}(\mathbf{r}_1,\mathbf{r}_2) = \langle
\mathbf{E}^\ast(\mathbf{r}_1)\mathbf{E}^\mathrm{T}(\mathbf{r}_2)\rangle.
\label{Wpqdef}
\end{align}
Here $\mathrm{T}$ indicates the transpose, the brackets denote ensemble
averaging, and the electric-field realizations
$\mathbf{E}(\mathbf{r})$ are understood as appropriate random linear
superpositions of the eigenfunctions of the Fred\-holm integral
equation satisfied by
$\boldmatrix{W}(\mathbf{r}_1,\mathbf{r}_2)$~\cite{Tervo04,Gori03}.
The position-dependent spectral density of the field can be written,
in analogy with scalar theory of partial coherence~\cite{Wolf82}, as
$S(\mathbf{r}) = {\rm tr}\,\boldmatrix{W}(\mathbf{r},\mathbf{r})$,
where tr stands for trace.
The relation between the field at the (secondary) source plane $z=0$
and the far-field can be found, for example, using the angular
spectrum representation of the CSD matrix~\cite{Tervo02}. We thus have, at any plane $z>0$,
\begin{align}
\boldmatrix{W}(\mathbf{r}_1,\mathbf{r}_2) &=\frac{1}{(2\pi)^4}
\iiiint_{-\infty}^\infty
\boldmatrix{A}(\boldsymbol{\kappa}_1,\boldsymbol{\kappa}_2)\nonumber\\
&\quad\times \exp\left[{\rm
i}(\mathbf{k}_2\cdot\mathbf{r}_2-\mathbf{k}_1^*\cdot\mathbf{r}_1)\right]
\mathrm{d}^2\kappa_1\,\mathrm{d}^2\kappa_2, \label{angspecdef}
\end{align}
where the angular correlation matrix (ACM)
\begin{align}
\boldmatrix{A}(\boldsymbol{\kappa}_1,\boldsymbol{\kappa}_2)
&=\iiiint_{-\infty}^\infty
\boldmatrix{W}(\boldsymbol{\rho}_1,\boldsymbol{\rho}_2,0)\nonumber\\
&\quad\times \exp\left[{\rm
i}(\boldsymbol{\kappa}_1\cdot\boldsymbol{\rho}_1-\boldsymbol{\kappa}_2\cdot\boldsymbol{\rho}_2)\right]
\mathrm{d}^2\rho_1\,\mathrm{d}^2\rho_2 \label{defangspec}
\end{align}
describes the correlations between vectorial plane-wave components. In the far zone~\cite{Tervo02}
\begin{align}
\boldmatrix{W}^\infty(r_1\mathbf{\hat{s}}_1,r_2\mathbf{\hat{s}}_2)
&=(2\pi k)^2\cos\theta_1\cos\theta_2\boldmatrix{A}(k\boldsymbol{\sigma}_1, k\boldsymbol{\sigma}_2) \nonumber\\
&\quad\times \frac{\exp\left[{\rm i}k(r_2-r_1)\right]}{r_1r_2}
\label{farfield}
\end{align}
and the Poynting vector takes the form~\cite{Tervo02}
\begin{align}
\mathbf{P}^\infty(r\mathbf{\hat{s}})
&=\frac{n\mathbf{\hat{s}}}{2}\sqrt{\frac{\epsilon_0}{\mu_0}}
\mathrm{tr}\,\boldmatrix{W}^\infty(r\mathbf{\hat{s}},r\mathbf{\hat{s}})\nonumber\\
&= \mathbf{\hat{s}}\cos^2\theta\frac{2n\pi^2 k^2}{r^2}
\sqrt{\frac{\epsilon_0}{\mu_0}}\,\mathrm{tr}\,\boldmatrix{A}(k\boldsymbol{\sigma},
k\boldsymbol{\sigma}), \label{Poynting}
\end{align}
where $n$ is the refractive index of the material, and $\epsilon_0$ and
$\mu_0$ are the vacuum permittivity and permeability, respectively.
Furthermore, the radiant intensity is
\begin{align}
J(r\mathbf{\hat{s}})&=\lim_{r\to\infty}[r^2\|\mathbf{P}^\infty(r\mathbf{\hat{s}})\|]\nonumber\\
&= 2n \pi^2 k^2 \cos^2\theta
\sqrt{\frac{\epsilon_0}{\mu_0}}\,\mathrm{tr}\,\boldmatrix{A}(k\boldsymbol{\sigma},
k\boldsymbol{\sigma}). \label{RadiantIntensity}
\end{align}
Owing to Eqs.~\eqref{Wpqdef} and \eqref{defangspec}, the ACM also has a representation as a correlation
matrix:
\begin{align}
\boldmatrix{A}(\boldsymbol{\kappa}_1, \boldsymbol{\kappa}_2) =
\langle\mathbf{A}^\ast(\boldsymbol{\kappa}_1)
\mathbf{A}^\mathrm{T}(\boldsymbol{\kappa}_2)\rangle,
\label{angspeccorr}
\end{align}
where the components of $\mathbf{A}(\boldsymbol{\kappa})$ represent, componentwise, the angular
spectra~\cite{Mandel} of the electric-field realizations. It follows
from Eq.~\eqref{angspeccorr} that
$\boldmatrix{A}(\boldsymbol{\kappa}, \boldsymbol{\kappa})$ is
Hermitian, satisfying
\begin{align}
\boldmatrix{A}^\dagger(\boldsymbol{\kappa}_1, \boldsymbol{\kappa}_2)
=\boldmatrix{A}(\boldsymbol{\kappa}_2, \boldsymbol{\kappa}_1),
\label{A:Hermitian}
\end{align}
where the dagger denotes the adjoint matrix. It is also non-negative
definite in the sense that
\begin{align}
\iiiint\mathbf{a}^\dagger(\boldsymbol{\kappa}_1)
\boldmatrix{A}(\boldsymbol{\kappa}_1, \boldsymbol{\kappa}_2)
\mathbf{a}(\boldsymbol{\kappa}_2) \mathrm{d}^2\kappa_1
\mathrm{d}^2\kappa_2\ge0, \label{A:Nonneg}
\end{align}
where $\mathbf{a}(\boldsymbol{\kappa})$ is an arbitrary,
sufficiently well-behaved vector function of the same size as the
electric-field vector.
The following argument is essential for the conclusions of this paper:
the field in the far zone is well known to be a modulated outgoing spherical wave.
Thus, in spherical polar coordinates, the angular-spectrum vector is
two-dimensional, i.e.,
$\hat{\mathbf{s}}\cdot\mathbf{A}(k\boldsymbol{\sigma})=0$ for
every $\hat{\mathbf{s}}$. Hence $\boldmatrix{A}(\boldsymbol{\kappa}_1, \boldsymbol{\kappa}_2)$
is expressible as a $2\times 2$ matrix in these coordinates.
Since $\boldmatrix{A}(\boldsymbol{\kappa}_1, \boldsymbol{\kappa}_2)$
is a Hermitian, non-negative definite $2\times 2$
matrix, it has two non-negative real-valued eigenvalues.
In view of Eqs.~\eqref{A:Hermitian} and~\eqref{A:Nonneg}, we have at $\boldsymbol{\kappa}_1=\boldsymbol{\kappa}_2=\boldsymbol{\kappa}$
the decomposition
\begin{align}
\boldmatrix{A}(\boldsymbol{\kappa}, \boldsymbol{\kappa})
&=\sum_{j=1}^2I_j(\boldsymbol{\kappa})\mathbf{F}_j^*
(\boldsymbol{\kappa})\mathbf{F}_j^\mathrm{T}(\boldsymbol{\kappa}),
\label{decomposition}
\end{align}
where $I_j(\boldsymbol{\kappa})$ are
the eigenvalues and $\mathbf{F}_j(\boldsymbol{\kappa})$ are the eigenvectors of
$\boldmatrix{A}(\boldsymbol{\kappa}, \boldsymbol{\kappa})$. The
eigenvectors may be assumed orthonormal, i.e.,
\begin{align}
\mathbf{F}_p^\dagger(\boldsymbol{\kappa})\mathbf{F}_q(\boldsymbol{\kappa})=\delta_{pq}.
\label{Orthogonality}
\end{align}
In other words, the matrix $\boldmatrix{A}(\boldsymbol{\kappa}, \boldsymbol{\kappa})$ can be
diagonalized. Evaluation of the trace in Eq.~\eqref{RadiantIntensity}
yields the explicit form for the radiant intensity:
\begin{equation}
J(r\mathbf{\hat{s}})=J_0 \cos^2\theta\left[I_1(k\boldsymbol{\sigma})
+I_2(k\boldsymbol{\sigma})\right],\label{RadiantModes}
\end{equation}
where $J_0 = 2n \pi^2 k^2\sqrt{\epsilon_0/\mu_0}$. Thus
it is proportional to the sum of the eigenvalues of the ACM at
$\boldsymbol{\kappa}_1=\boldsymbol{\kappa}_2$.
The decomposition in Eq.~\eqref{decomposition} is, in fact, valid
regardless of the chosen coordinate system; the eigenvalues remain invariant
in all unitary transformations, including simple coordinate transformations
between, e.g., Cartesian and spherical polar coordinate systems. As explicitly expressed in
Eq.~\eqref{decomposition}, the polarization decomposition of the ACM is generally direction-dependent:
the eigenvalues and/or the eigenvectors depend on $\hat{\mathbf{s}}$.
The physical meaning of Eq.~\eqref{decomposition} is clear:
In each direction in the far-zone, we may decompose the single-point
ACM into two mutually uncorrelated, orthogonal
polarization components. Moreover, owing to the factorized form
$I_j(\boldsymbol{\kappa})\mathbf{F}_j^*(\boldsymbol{\kappa})\mathbf{F}_j^\mathrm{T}(\boldsymbol{\kappa})$
of these components, they both represent fully polarized
fields. This can be verified from the well-known formula for
the (space--frequency domain) degree of polarization:
\begin{align}
P(\mathbf{r})=\left\{1-\frac{4\,\mathrm{det}\,\boldmatrix{W}(\mathbf{r},\mathbf{r})}
{[\mathrm{tr}\,\boldmatrix{W}(\mathbf{r},\mathbf{r})]^2}\right\}^{1/2}.
\end{align}
In the far zone we have, using Eq.~(\ref{farfield}),
\begin{align}
P(r\mathbf{\hat{s}})=\left\{1-\frac{4\,\mathrm{det}\,\boldmatrix{A}(k\boldsymbol{\sigma},k\boldsymbol{\sigma})}
{[\mathrm{tr}\,\boldmatrix{A}(k\boldsymbol{\sigma},k\boldsymbol{\sigma})]^2}\right\}^{1/2}.
\label{farzoneD}
\end{align}
With the aid of Eqs.~(\ref{decomposition}) and (\ref{Orthogonality}),
we then obtain
\begin{align}
P(r\mathbf{\hat{s}})=\left|\frac{I_1(k\boldsymbol{\sigma})-I_2(k\boldsymbol{\sigma})}
{I_1(k\boldsymbol{\sigma})+I_2(k\boldsymbol{\sigma})}\right|.\label{Idegpol}
\end{align}
Thus $P(r\mathbf{\hat{s}})=1$ for each individual polarization mode,
i.e., if either $I_1(\boldsymbol{\kappa})=0$ or
$I_2(\boldsymbol{\kappa})=0$. It is worth stressing that the
polarization decomposition presented above is analogous with the
well-known decomposition of a partially polarized plane wave into
two polarization modes: in particular, Eq.~(\ref{Idegpol}) is
analogous with Eq.~(6.3--31) in Ref.~\cite{Mandel}. In our case,
however, the direction-dependent eigenvalues are those of a
partially polarized and partially coherent field in the far zone.
To be able to include partial coherence (in addition to partial polarization) in the analysis,
we now assume that also the two-point ACM can be expressed
in the form
\begin{align}
\boldmatrix{A}(\boldsymbol{\kappa}_1, \boldsymbol{\kappa}_2)
=\boldmatrix{A}_1(\boldsymbol{\kappa}_1, \boldsymbol{\kappa}_2)+
\boldmatrix{A}_2(\boldsymbol{\kappa}_1, \boldsymbol{\kappa}_2),
\label{assumption}
\end{align}
where $\boldmatrix{A}_j(\boldsymbol{\kappa}_1,
\boldsymbol{\kappa}_2)$ have diagonal values $\boldmatrix{A}_j(\boldsymbol{\kappa},
\boldsymbol{\kappa})
=I_j(\boldsymbol{\kappa})\mathbf{F}_j^*(\boldsymbol{\kappa})\mathbf{F}_j^\mathrm{T}
(\boldsymbol{\kappa})$. In other words,
$\boldmatrix{A}(\boldsymbol{\kappa}_1, \boldsymbol{\kappa}_2)$ is
assumed to be expressible as a sum of two mutually uncorrelated, but
fully coherent and polarized modes even if $\boldsymbol{\kappa}_1\neq
\boldsymbol{\kappa}_2$. We stress that, while Eq.~(\ref{assumption})
does not follow from Eq.~(\ref{decomposition}), it typically holds.
We do not enter into a detailed discussion of this point here, but
note that it is a nontrivial task to find counterexamples.
We proceed to investigate the angular correlation properties of the
class of fields described by Eq.~\eqref{assumption}. Without loss of
generality, we may employ spherical polar coordinates in the far
field, in which case $\boldmatrix{A}_j(\boldsymbol{\kappa}_1,
\boldsymbol{\kappa}_2)$ is a $2\times 2$ matrix. Let us denote by
$\boldmatrix{U}(\boldsymbol{\kappa})$ a unitary matrix whose columns
are the eigenvectors $\mathbf{F}_j(\boldsymbol{\kappa})$ of
$\boldmatrix{A}(\boldsymbol{\kappa}, \boldsymbol{\kappa})$ and by
$\boldmatrix{D}(\boldsymbol{\kappa},\boldsymbol{\kappa})$ a diagonal
matrix with elements equal to the eigenvalues
$I_j(\boldsymbol{\kappa})$ of $\boldmatrix{A}(\boldsymbol{\kappa},
\boldsymbol{\kappa})$. Then, equivalently with
Eq.~\eqref{decomposition}, we may write
\begin{align}
\boldmatrix{A}(\boldsymbol{\kappa}, \boldsymbol{\kappa})
=\boldmatrix{U}^*(\boldsymbol{\kappa})
\boldmatrix{D}(\boldsymbol{\kappa},\boldsymbol{\kappa})
\boldmatrix{U}^\mathrm{T}(\boldsymbol{\kappa})
\end{align}
and consequently
\begin{align}
\boldmatrix{U}^\mathrm{T}(\boldsymbol{\kappa})\boldmatrix{A}(\boldsymbol{\kappa},
\boldsymbol{\kappa})\boldmatrix{U}^*(\boldsymbol{\kappa}) =
\boldmatrix{D}(\boldsymbol{\kappa},\boldsymbol{\kappa}).
\label{singular11}
\end{align}
In view of Eq.~(\ref{assumption}) and the associated discussion,
$\boldmatrix{A}_j(\boldsymbol{\kappa}_1, \boldsymbol{\kappa}_2)$
represents the angular correlation between two plane-wave components
whose polarization states are described by deterministic vectors
$\mathbf{F}_j(\boldsymbol{\kappa}_1)$ and
$\mathbf{F}_j(\boldsymbol{\kappa}_2)$. Thus we may write, in analogy
with Eq.~(\ref{decomposition}),
\begin{align}
\boldmatrix{A}(\boldsymbol{\kappa}_1, \boldsymbol{\kappa}_2)
&=\sum_{j=1}^2G_j(\boldsymbol{\kappa}_1,\boldsymbol{\kappa}_2)
\mathbf{F}_j^*(\boldsymbol{\kappa}_1)
\mathbf{F}_j^\mathrm{T}(\boldsymbol{\kappa}_2),\label{decomposition2}
\end{align}
where $G_j(\boldsymbol{\kappa}_1,\boldsymbol{\kappa}_2)$ are scalar
(angular correlation) functions. It is seen by direct calculation
that $\boldmatrix{A}(\boldsymbol{\kappa}_1, \boldsymbol{\kappa}_2)$
has a representation similar to
Eq.~(\ref{singular11}),
\begin{align}
\boldmatrix{U}^\mathrm{T}(\boldsymbol{\kappa}_1)\boldmatrix{A}(\boldsymbol{\kappa}_1,
\boldsymbol{\kappa}_2)\boldmatrix{U}^*(\boldsymbol{\kappa}_2) =
\boldmatrix{D}(\boldsymbol{\kappa}_1, \boldsymbol{\kappa}_2),
\end{align}
where $\boldmatrix{D}(\boldsymbol{\kappa}_1, \boldsymbol{\kappa}_2)$
is a diagonal matrix with elements
$G_j(\boldsymbol{\kappa}_1,\boldsymbol{\kappa}_2)$. Thus
$\mathbf{F}_j(\boldsymbol{\kappa})$ and $G_j(\boldsymbol{\kappa}_1,\boldsymbol{\kappa}_2)$
can be determined by singular value decomposition of
$\boldmatrix{A}(\boldsymbol{\kappa}_1,\boldsymbol{\kappa}_2)$.
It can be shown, e.g., using Eq.~\eqref{A:Nonneg}
that the singular values $G_j(\boldsymbol{\kappa}_1,\boldsymbol{\kappa}_2)$ satisfy
\begin{align}
|G_j(\boldsymbol{\kappa}_1,\boldsymbol{\kappa}_2)|^2\le
I_j(\boldsymbol{\kappa}_1)I_j(\boldsymbol{\kappa}_2).
\end{align}
As a result, we may define the normalized angular correlation
functions
\begin{align}
g_j(\boldsymbol{\kappa}_1,\boldsymbol{\kappa}_2)
=\frac{G_j(\boldsymbol{\kappa}_1,\boldsymbol{\kappa}_2)}
{\left[I_j(\boldsymbol{\kappa}_1)I_j(\boldsymbol{\kappa}_2)\right]^{1/2}}
\label{normG}
\end{align}
satisfying the inequalities
$0\le\left|g_j(\boldsymbol{\kappa}_1,\boldsymbol{\kappa}_2)\right|
\le 1$. If the correlations in the far zone are of
(generalized) Schell-model form, i.e.,
$g_j(\boldsymbol{\kappa}_1,\boldsymbol{\kappa}_2)
=g_j(\Delta\boldsymbol{\kappa})$, it follows from Eqs.~(\ref{decomposition2}) and (\ref{normG}) that
\begin{align}
\boldmatrix{A}(\boldsymbol{\kappa}_1, \boldsymbol{\kappa}_2)
&=\sum_{j=1}^2 g_j(\Delta\boldsymbol{\kappa})
\mathbf{f}_j^*(\boldsymbol{\kappa}_1)
\mathbf{f}_j^\mathrm{T}(\boldsymbol{\kappa}_2)
\label{A:StartingPoint}
\end{align}
with
\begin{equation}
\mathbf{f}_j(\boldsymbol{\kappa})
=[I_j(\boldsymbol{\kappa})]^{1/2}\mathbf{F}_j (\boldsymbol{\kappa}).
\label{HF}
\end{equation}
This is the electromagnetic extension of Eq.~\eqref{Aschell}.
\section{Elementary electric-field modes}
\label{s:elementary}
It follows from Eq.~\eqref{assumption} and the linearity of
Eq.~\eqref{defangspec} that the CSD has
the decomposition
\begin{align}
\boldmatrix{W}(\mathbf{r}_1,\mathbf{r}_2)
=\boldmatrix{W}_1(\mathbf{r}_1,\mathbf{r}_2)
+\boldmatrix{W}_2(\mathbf{r}_1,\mathbf{r}_2). \label{FieldSource}
\end{align}
Let us define the inverse Fourier transforms $\mathbf{e}_j(\mathbf{r})$ and $p_j(\boldsymbol{\rho})$
of the functions $\mathbf{f}_j(\boldsymbol{\kappa})$ and $g_j(\Delta\boldsymbol{\kappa})$ in
analogy with Eqs.~\eqref{IFT-e} and \eqref{IFT-p}. With a procedure similar to that used in
derivation of Eq.~\eqref{Scalarrep}, we can express the two terms in Eq.~(\ref{FieldSource}) in the form
\begin{align}
\boldmatrix{W}_j(\mathbf{r}_1,\mathbf{r}_2) =\iint_{-\infty}^\infty
p_j(\boldsymbol{\rho}')
\mathbf{e}_j^*(\mathbf{r}_1-\boldsymbol{\rho}' )
\mathbf{e}_j^\mathrm{T}(\mathbf{r}_2-\boldsymbol{\rho}' )
\mathrm{d}^2\rho' \label{Elementary}
\end{align}
in the half-space $z\geq 0$. In particular, Eq.~(\ref{Elementary})
is valid at the source plane $z = 0$, where
$\mathbf{e}_j(\mathbf{r}) = \mathbf{e}_j(\boldsymbol{\rho},0)$. This
result is the electromagnetic extension of the scalar elementary-mode
decomposition in Eq.~\eqref{Scalarrep}.
Let us next examine some general properties of Eq.~(\ref{Elementary}).
Equations \eqref{Orthogonality} and \eqref{HF}
together with
\begin{equation}
\mathbf{f}_j(\boldsymbol{\kappa}) = \iint_{-\infty}^\infty
\mathbf{e}_j(\mathbf{r}) \exp\left(-{\rm i}
\mathbf{k}\cdot\mathbf{r}\right)\,\mathrm{d}^2\rho
\label{AngspecGinv}
\end{equation}
lead to
\begin{equation}
\iiiint_{-\infty}^\infty
\mathbf{e}_1^\dagger(\boldsymbol{\rho}-\boldsymbol{\rho}',0)\mathbf{e}_2(\boldsymbol{\rho},0)
\exp\left(-{\rm
i}\boldsymbol{\kappa}\cdot\boldsymbol{\rho}'\right)\,\mathrm{d}^2\rho\,\mathrm{d}^2\rho'=0.
\end{equation}
Because of the exponential factor in the integral, $\mathbf{e}_1(\boldsymbol{\rho},0)$ and
$\mathbf{e}_2(\boldsymbol{\rho},0)$ are generally not orthogonal in
a point\-wise sense. For many paraxial fields, though, the functions
$\mathbf{f}_1(\boldsymbol{\kappa})$ and
$\mathbf{f}_2(\boldsymbol{\kappa})$ are globally orthogonal, i.e.,
$\mathbf{f}_1^\dagger(\boldsymbol{\kappa}_1)\mathbf{f}_2(\boldsymbol{\kappa}_2)=0$
for all $\boldsymbol{\kappa}_1$ and $\boldsymbol{\kappa}_2$. In
this special case the point\-wise orthogonality of
$\mathbf{e}_1(\boldsymbol{\rho},0)$ and
$\mathbf{e}_2(\boldsymbol{\rho},0)$ follows from
Eqs.~\eqref{Orthogonality}, \eqref{HF}, and \eqref{AngspecGinv}.
This property holds also for significant classes of non-paraxial
fields, as will be demonstrated in Sections \ref{s:rotsym} and
\ref{s:cosine}. However, it is not essential for practical implementation of
the propagation algorithm developed in this paper.
Since the functions $\mathbf{e}_j(\mathbf{r})$ represent fully coherent fields, they
obey the Helmholtz equation
\begin{align}
\nabla^2\mathbf{e}_j(\mathbf{r}) +k^2\mathbf{e}_j(\mathbf{r})=0.
\label{HelmholtzG}
\end{align}
Furthermore, the CSD matrix obeys the divergence
equation~\cite{Tervo04}
\begin{align}
\nabla_1^\mathrm{T}\boldmatrix{W}(\mathbf{r}_1,\mathbf{r}_2)=\mathbf{0},
\label{DivergenceW}
\end{align}
where the subscript 1 denotes differentiation with respect to
$\mathbf{r}_1$. Together with Eqs.~\eqref{FieldSource} and
\eqref{Elementary}, this implies that
\begin{align}
\nabla\cdot\mathbf{e}_j(\mathbf{r})=0. \label{DivergenceG}
\end{align}
In view of Eqs.~\eqref{HelmholtzG} and \eqref{DivergenceG}, the
functions $\mathbf{e}_j(\mathbf{r})$ behave exactly as the electric
field vector in the space--frequency domain. Hence the
matrix-functions
\begin{align}
\boldmatrix{W}_j^{\rm e}(\mathbf{r}_1,\mathbf{r}_2)
=\mathbf{e}_j^*(\mathbf{r}_1)\mathbf{e}_j^\mathrm{T}(\mathbf{r}_2)
\label{elfact}
\end{align}
may be called the \emph{elementary electric-field modes} of the
field. Using these modes, we can write Eqs.~\eqref{FieldSource} and
\eqref{Elementary} in a more compact form
\begin{equation}
\boldmatrix{W}(\mathbf{r}_1,\mathbf{r}_2)=\sum_{j=1}^2\iint_{-\infty}^\infty
p_j(\boldsymbol{\rho}')\boldmatrix{W}_j^{\rm
e}(\mathbf{r}_1-\boldsymbol{\rho}',\mathbf{r}_2-\boldsymbol{\rho}')
\mathrm{d}^2\rho' \label{ElementaryB}
\end{equation}
and express the spectral density of the field as
\begin{equation}
S(\mathbf{r})=\sum_{j=1}^2\iint_{-\infty}^\infty
p_j(\boldsymbol{\rho}')S_j^{\rm e} (\mathbf{r}-\boldsymbol{\rho}')
\mathrm{d}^2\rho', \label{SDensity}
\end{equation}
where $S_j^{\rm e}(\mathbf{r})=\mathrm{tr}\,\boldmatrix{W}_j^{\rm
e}(\mathbf{r},\mathbf{r})$ are the spectral densities of the two
elementary electric-field modes.
In analogy with scalar theory~\cite{Vahimaa}, the field in
Eq.~\eqref{ElementaryB} is understood to consist of a weighted
continuum of identical, laterally shifted elementary modes. The main
difference between the scalar and electromagnetic descriptions is
that the electromagnetic field consists of two (uncorrelated) sets
of elementary modes, whose polarization states in the far-zone are
orthogonal and whose radiant-intensity distributions are in general
different. Owing to the factorized form of $\boldmatrix{W}_j^{\rm
e}(\mathbf{r}_1,\mathbf{r}_2)$ in Eq.~\eqref{elfact}, each
elementary mode is completely coherent~\cite{Setala2004} in the
sense of the space--frequency analog of the degree of coherence for
electromagnetic fields put forward in Ref.~\cite{Tervo2003}:
\begin{align}
\mu(\mathbf{r}_1,\mathbf{r}_2)
=\{\mathrm{tr}\left[\boldsymbol{\mu}(\mathbf{r}_1,\mathbf{r}_2)
\boldsymbol{\mu}(\mathbf{r}_2,\mathbf{r}_1)\right]\}^{1/2}.
\label{scalarmu}
\end{align}
Here
\begin{align}
\boldsymbol{\mu}(\mathbf{r}_1,\mathbf{r}_2)=
\frac{\boldmatrix{W}(\mathbf{r}_1,\mathbf{r}_2)}
{\left[S(\mathbf{r}_1)S(\mathbf{r}_2)\right]^{1/2}} \label{boldmu}
\end{align}
is the normalized CSD matrix. In general, all
$3\times 3$ matrix elements of $\boldmatrix{W}_j^{\rm
e}(\mathbf{r}_1,\mathbf{r}_2)$ are non-zero and spatially varying.
Analogously with Eqs.~\eqref{scalarmu} and \eqref{boldmu}, we can
define the degree of angular coherence
\begin{equation}
\alpha(\boldsymbol{\kappa}_1,\boldsymbol{\kappa}_2)
=\{\mathrm{tr}\left[\boldsymbol{\alpha}(\boldsymbol{\kappa}_1,\boldsymbol{\kappa}_2)
\boldsymbol{\alpha}(\boldsymbol{\kappa}_2,\boldsymbol{\kappa}_1)\right]\}^{1/2},
\label{scalaralpha}
\end{equation}
where
\begin{equation}
\boldsymbol{\alpha}(\boldsymbol{\kappa}_1,\boldsymbol{\kappa}_2)=
\frac{\boldmatrix{A}(\boldsymbol{\kappa}_1,\boldsymbol{\kappa}_2)}
{\left[{\rm tr}\,\boldmatrix{A}(\boldsymbol{\kappa}_1,\boldsymbol{\kappa}_1)
{\rm tr}\,\boldmatrix{A}(\boldsymbol{\kappa}_2,\boldsymbol{\kappa}_2)\right]^{1/2}}
\label{boldalpha}
\end{equation}
is the normalized angular correlation matrix.
To conclude this section we provide a convenient, fully general
series representation for the electric-field modes. In the far zone,
where the field is transverse to the local propagation direction and
thus fluctuates locally in the plane defined by the spherical polar
unit vectors $\boldsymbol{\hat{\theta}}$ and
$\boldsymbol{\hat{\psi}}$, it is natural to express the basis
vectors in the form
\begin{equation}
\mathbf{f}_j(\boldsymbol{\kappa})=
f_{j,\theta}(\boldsymbol{\kappa})\boldsymbol{\hat{\theta}}+
f_{j,\psi}(\boldsymbol{\kappa})\boldsymbol{\hat{\psi}}.
\label{GeneralSphericalAzimuthal}
\end{equation}
Using spherical
polar coordinates $\left(\theta,\psi\right)$ for the spatial
frequencies and circular cylindrical coordinates
$\left(\rho,\phi,z\right)$ for the position vector (see
Fig.~\ref{f:notations}), we then have (after somewhat lengthy
calculations outlined in Appendix \ref{A1})
\begin{align}
\mathbf{e}_j(\mathbf{r})
&=\frac{k}{2\pi }\sum_{m=-\infty}^\infty {\rm i}^m \exp({\rm
i}m\phi)\int_0^{\pi/2}
\exp\left({\rm i} kz\cos\theta\right)\nonumber\\
&\quad\times \biggl\{ \boldsymbol{\hat{\rho}} \left[
-\frac{m}{\rho}f_{j,\psi,m}(\theta) -{\rm i} \cos\theta
f_{j,\theta,m}(\theta) \frac{\mathrm{d}}{\mathrm{d}\rho}
\right]\nonumber\\
&\qquad+ \boldsymbol{\hat{\phi}} \left[ \frac{m}{\rho}\cos\theta
f_{j,\theta,m}(\theta) -{\rm i} f_{j,\psi,m}(\theta)
\frac{\mathrm{d}}{\mathrm{d}\rho}
\right]\nonumber\\
&\qquad- \mathbf{\hat{z}} k \sin^2\theta f_{j,\theta,m}(\theta)
\biggr\}J_m(k\rho\sin\theta)\,\mathrm{d}\theta, \label{Cylinder}
\end{align}
where
\begin{equation}
f_{j,\xi,m}(\theta) =\frac{1}{2\pi}\int_0^{2\pi}
f_{j,\xi}(\theta,\psi)\exp\left(-{\rm i}m\psi\right)\,\mathrm{d}\psi
\label{fouriercoeff}
\end{equation}
are the azimuthal Fourier coefficients of $f_{j,\xi}(\theta,\psi)$
and $\xi$ stands for either $\theta$ or $\psi$. Note that the upper
limit of the integral in Eq.~\eqref{Cylinder} is set to $\pi/2$.
Thus the elementary electric-field modes are taken to contain only
propagating waves, i.e., information that can be gathered from
far-zone measurements. As a result, the propagation method
considered here is not suitable for modeling near-field phenomena
(fields at distances of the order of one wavelength from the plane
$z=0$).
\section{Rotationally symmetric fields}
\label{s:rotsym}
Let us assume that the CSD at $z = 0$ is rotationally symmetric about the $z$ axis.
Then also the ACM is rotationally symmetric
and the polarization basis vectors
$\mathbf{f}_1(\boldsymbol{\kappa})$ and
$\mathbf{f}_2(\boldsymbol{\kappa})$ are rotationally invariant,
i.e., their $\theta$ and $\psi$ components
$f_{j,\theta}(\theta,\psi)$ and $f_{j,\psi}(\theta,\psi)$ are
independent of the azimuthal angle $\psi$. Hence only the
zeroth-order Fourier coefficients $f_{j,\xi,0}(\theta)$ in
Eq.~\eqref{fouriercoeff} are non-zero and it follows from
Eqs.~\eqref{Cylinder}
and \eqref{BesselDiff} that
\begin{align}
\mathbf{e}_j(\mathbf{r})
&=\frac{k^2}{2\pi}\int_0^{\pi/2}
\sin\theta\exp\left({\rm i} kz\cos\theta \right)\nonumber\\
&\quad\times \biggl\{ {\rm i}J_1(k\rho\sin\theta) \left[ \cos\theta
f_{j,\theta,0}(\theta)\boldsymbol{\hat{\rho}}
+f_{j,\psi,0}(\theta)\boldsymbol{\hat{\phi}}
\right]\nonumber\\
&\qquad- \sin\theta
J_0(k\rho\sin\theta)f_{j,\theta,0}(\theta)\mathbf{\hat{z}} \biggr\}
\,\mathrm{d}\theta.
\end{align}
Thus, as expected, the elementary electric-field modes are also
rotationally symmetric about the $z$ axis. In particular, if the
basis vectors are parallel to $\boldsymbol{\hat{\theta}}$ and
$\boldsymbol{\hat{\psi}}$,
\begin{subequations}
\begin{align}
\mathbf{e}_1(\mathbf{r})
&=\frac{k^2}{2\pi}\int_0^{\pi/2}f_{1,\theta,0}(\theta)
\sin\theta \nonumber\\
&\quad\times \left[ {\rm
i}\boldsymbol{\hat{\rho}}J_1(k\rho\sin\theta) \cos\theta -
\mathbf{\hat{z}}\sin\theta J_0(k\rho\sin\theta) \right]
\nonumber\\
&\quad\times\exp\left({\rm i} kz\cos\theta \right)\,\mathrm{d}\theta,\\
\mathbf{e}_2(\mathbf{r}) &=\boldsymbol{\hat{\phi}}\frac{{\rm
i}k^2}{2\pi}\int_0^{\pi/2}f_{2,\psi,0}(\theta) \sin\theta
J_1(k\rho\sin\theta)
\nonumber\\
&\quad\times\exp\left({\rm i} kz\cos\theta
\right)\,\mathrm{d}\theta.
\end{align}
\label{GeneralRadialAzimuthal}
\end{subequations}
The elementary modes $\mathbf{e}_1(\mathbf{r})$ and
$\mathbf{e}_2(\mathbf{r})$ are now radially and azi\-muthally
polarized fields, respectively. Hence they are point\-wise
orthogonal, regardless of whether the field is paraxial or not.
Since these modes have no azimuthal phase variation, they possess no
phase singularity (vortex) at $\rho=0$. Thus, even if the on-axis
field vanishes at $z=0$ (as is often the case for some of the field
components), such a zero does not propagate.
\section{Quasi-homogeneous sources}
\label{s:quasihom}
Assume next that the variations of the spectral density
$S(\boldsymbol{\rho},0)$ at the source plane are slow compared to
the variations of the degree of coherence
$\mu(\boldsymbol{\rho}_1,0,\boldsymbol{\rho}_2,0)$. Moreover, let
the correlations at $z = 0$ be of the
Schell-model form, i.e., depend only on
$\Delta\boldsymbol{\rho}=\boldsymbol{\rho}_2-\boldsymbol{\rho}_1$.
Such a planar source is said to be quasi-homogeneous~\cite{Mandel}. We may then approximate
$S(\boldsymbol{\rho}_1,0)\approx S(\boldsymbol{\rho}_2,0)\approx
S(\boldsymbol{\bar{\rho}},0)$, where
$\boldsymbol{\bar{\rho}}=(\boldsymbol{\rho}_1+\boldsymbol{\rho}_2)/2$,
and write
\begin{align}
\boldmatrix{W}(\boldsymbol{\rho}_1,\boldsymbol{\rho}_2,0)\approx
S(\boldsymbol{\bar{\rho}},0)
\boldsymbol{\mu}(\Delta\boldsymbol{\rho},0). \label{quasi}
\end{align}
Inserting Eq.~\eqref{quasi} into Eq.~\eqref{defangspec} and defining $\boldsymbol{\bar{\kappa}}=(\boldsymbol{\kappa}_1
+\boldsymbol{\kappa}_2)/2$ yields
\begin{align}
\boldmatrix{A}(\boldsymbol{\kappa}_1,\boldsymbol{\kappa}_2)
=\tilde{S}(\Delta\boldsymbol{\kappa})
\tilde{\boldsymbol{\mu}}(\boldsymbol{\bar{\kappa}}),
\label{AngSpecQuasi}
\end{align}
where
\begin{equation}
\tilde{S}(\Delta\boldsymbol{\kappa}) = \iint_{-\infty}^\infty
S(\boldsymbol{\bar\rho},0)\exp\left(-{\rm
i}\Delta\boldsymbol{\kappa}\cdot\boldsymbol{\bar\rho}\right){\mathrm
d}^2\bar\rho,
\end{equation}
and
\begin{equation}
\tilde{\boldsymbol{\mu}}(\boldsymbol{\bar\kappa}) =
\iint_{-\infty}^\infty
\boldsymbol{\mu}(\Delta\boldsymbol{\rho},0)\exp\left(-{\rm
i}\boldsymbol{\bar\kappa}\cdot\Delta\boldsymbol{\rho}\right){\mathrm
d}^2\Delta\rho.
\end{equation}
Since
\begin{eqnarray}
\lefteqn{\mathrm{tr}\,\boldmatrix{A}(\boldsymbol{\kappa},\boldsymbol{\kappa})=
\tilde{S}(\boldsymbol{0})\,\mathrm{tr}\,\tilde{\boldsymbol{\mu}}(\boldsymbol{\kappa})}\nonumber\\
& & = \tilde{S}(\boldsymbol{0})\iint_{-\infty}^\infty
\mathrm{tr}\,\boldsymbol{\mu}(\Delta\boldsymbol{\rho},0)\exp\left(-{\rm
i}\boldsymbol{\kappa}\cdot\Delta\boldsymbol{\rho}\right){\mathrm
d}^2\Delta\rho,\nonumber\\
\end{eqnarray}
it follows from Eq.~(\ref{RadiantIntensity}) that the
radiant intensity produced by a quasi\-homogeneous electromagnetic
source depends only on the correlation properties of the source
field, in complete analogy with the scalar case~\cite{Mandel}.
Inserting Eq.~\eqref{AngSpecQuasi} into Eq.~\eqref{boldalpha} and recalling that $\tilde{\boldsymbol{\mu}}$ is a wide function compared to $\tilde{S}$, we have
\begin{equation}
\boldsymbol{\alpha}(\boldsymbol{\kappa}_1,\boldsymbol{\kappa}_2) = \frac{\tilde{S}(\Delta\boldsymbol{\kappa})}{\tilde{S}(\boldsymbol{0})}\frac{\tilde{\boldsymbol{\mu}}(\bar{\boldsymbol{\kappa}})}{\mathrm{tr}\,\tilde{\boldsymbol{\mu}}(\bar{\boldsymbol{\kappa}})}.
\end{equation}
Using Eq.~\eqref{scalaralpha} and noting that $\tilde{S}$ is real because $S$ is non-negative, we obtain
\begin{equation}
\alpha(\boldsymbol{\kappa}_1,\boldsymbol{\kappa}_2) = \frac{\tilde{S}(\Delta\boldsymbol{\kappa})}{\tilde{S}(\boldsymbol{0})}\frac{\left\{{\rm tr}\left[\tilde{\boldsymbol{\mu}}(\bar{\boldsymbol{\kappa}})\right]^2\right\}^{1/2}}{{\rm tr}\,\tilde{\boldsymbol{\mu}}(\bar{\boldsymbol{\kappa}})}.
\label{alpha-2}
\end{equation}
This expression can be cast into a more transparent form using the far-zone degree of polarization defined in Eq.~\eqref{farzoneD}, which can be written equivalently in the form
\begin{equation}
P(\boldsymbol{\kappa}) = \left\{\frac{2\, {\rm tr}\left[\tilde{\boldsymbol{\mu}}(\boldsymbol{\kappa})\right]^2}{
\left[{\rm tr}\,\tilde{\boldsymbol{\mu}}(\boldsymbol{\kappa})\right]^2}-1\right\}^{1/2}.
\end{equation}
We then have, from Eq.~\eqref{alpha-2},
\begin{equation}
\alpha(\boldsymbol{\kappa}_1,\boldsymbol{\kappa}_2) = \frac{\tilde{S}(\Delta\boldsymbol{\kappa})}{\tilde{S}(\boldsymbol{0})}
\left[\frac{P^2(\bar{\boldsymbol{\kappa}}) +1}{2}\right]^{1/2}.
\label{qhffD}
\end{equation}
The first factor in this expression is equal to the scalar degree of angular coherence, whereas the second is a polarization-dependent modulating term determined by the far-zone degree of polarization.
The assumption that the source is quasi\-homogeneous simplifies
decisively the elementary-mode decomposition of the field in the
scalar case~\cite{Vahimaa}, and the same is true in the
electromagnetic case. It follows from Eqs.~\eqref{decomposition}, \eqref{HF},
and~\eqref{AngspecGinv} that
\begin{align}
&\sum_{j=1}^2\iint_{-\infty}^\infty
\mathbf{e}_j^*(\boldsymbol{\rho}_1-\boldsymbol{\rho}',0)
\mathbf{e}_j^\mathrm{T}(\boldsymbol{\rho}_2-\boldsymbol{\rho}',0)\,
\mathrm{d}^2\rho'\nonumber\\
&=\frac{1}{(2\pi)^2}\iint_{-\infty}^\infty
\boldmatrix{A}(\boldsymbol{\kappa}, \boldsymbol{\kappa})
\exp(i\boldsymbol{\kappa}\cdot\Delta\boldsymbol{\rho})\,
\mathrm{d}^2\kappa\nonumber\\
&=\tilde{S}(\mathbf{0})
\boldsymbol{\mu}(\Delta\boldsymbol{\rho},0),\label{quasirelation}
\end{align}
where, in the last step, we have used Eq.~\eqref{AngSpecQuasi}.
Comparing Eqs.~\eqref{quasi} and~\eqref{quasirelation} yields
\begin{align}
&\boldmatrix{W}(\boldsymbol{\rho}_1,\boldsymbol{\rho}_2,0)\approx
\frac{S(\boldsymbol{\bar{\rho}},0)}
{\tilde{S}(\mathbf{0})}\nonumber\\
&\times\sum_{j=1}^2\iint_{-\infty}^\infty
\mathbf{e}_j^*(\boldsymbol{\rho}_1-\boldsymbol{\rho}',0)
\mathbf{e}_j^\mathrm{T}(\boldsymbol{\rho}_2-\boldsymbol{\rho}',0)\,
\mathrm{d}^2\rho'.
\end{align}
Thus the weight function no longer appears inside the integral and,
irrespective of the spatial distribution of the spectral density at
the source plane, the field characteristics can be determined by
propagating a convolution integral that involves the elementary modes
only.
\section{Sources with cosine-power radiant intensity}
\label{s:cosine}
Let us consider the rotationally symmetric case with radially
and azimuthally polarized basis vectors $\mathbf{F}_1(\theta,\psi) = \boldsymbol{\hat{\theta}}$,
$\mathbf{F}_2(\theta,\psi) = \boldsymbol{\hat{\psi}}$, and corresponding eigenvalues
$I_1(\theta,\psi) = A_1^2\cos^{a-2}\theta$, $I_2(\theta,\psi) = A_2^2\cos^{b-2}\theta$,
where $A_1$ and $A_2$ are arbitrary (real) functions of frequency. Then, in view of Eq.~\eqref{RadiantModes},
the radiant intensity is a superposition of two $\cos^n\theta$ type contributions, one radially and
the other azimuthally polarized:
\begin{equation}
J(\theta,\psi) = J_0\left[A_1^2\cos^a\theta+A_2^2\cos^b\theta\right].
\end{equation}
The elementary electric-field modes in the far zone are, according to Eq.~\eqref{HF},
\begin{subequations}
\begin{align}
\mathbf{f}_1(\theta,\psi) & = \boldsymbol{\hat{\theta}}A_1\cos^{a/2-1}\theta,\\
\mathbf{f}_2(\theta,\psi) & = \boldsymbol{\hat{\psi}}A_2\cos^{b/2-1}\theta.
\end{align}
\end{subequations}
Using Eqs.~\eqref{GeneralSphericalAzimuthal} and \eqref{fouriercoeff} we see that the non-vanishing
Fourier coefficients are $f_{1,\theta,0} = A_1\cos^{a/2-1}\theta$ and
$f_{2,\psi,0} = A_2\cos^{b/2-1}\theta$. Inserting these into Eqs.~\eqref{GeneralRadialAzimuthal}
and applying~\eqref{IntegralSolution2} derived in Appendix \ref{A2}, we obtain the source-plane
elementary field modes in the form
\begin{subequations}
\begin{align}
\mathbf{e}_1(\rho,0) &=A_1\frac{{\rm i}k^2}{8\sqrt{\pi}}\left[
\boldsymbol{\hat{\rho}}\frac{k\rho}{2}\Gamma\left(\frac{1}{2}+\frac{a}{4}\right)\right.\nonumber\\
&\quad\times
\sideset{_1}{_2}\ER\left(\frac{3}{2};2,2+\frac{a}{4};-\frac{k^2\rho^2}{4}\right)\nonumber\\
&\quad-\left.\mathbf{\hat{z}}\Gamma\left(\frac{a}{4}\right)
\sideset{_1}{_2}\ER\left(\frac{3}{2};1,\frac{3}{2}+\frac{a}{4};-\frac{k^2\rho^2}{4}\right)\right],\\
\mathbf{e}_2(\rho,0) &=\boldsymbol{\hat{\phi}}A_2\frac{{\rm
i}k^3\rho}{16\sqrt{\pi}}
\Gamma\left(\frac{b}{4}\right)\nonumber\\
&\quad\times
\sideset{_1}{_2}\ER\left(\frac{3}{2};2,\frac{3}{2}+\frac{b}{4};-\frac{k^2\rho^2}{4}\right),
\end{align}
\label{spcosmodes}
\end{subequations}
where $\Gamma$ is the Gamma function and $\sideset{_1}{_2}\ER$ is the
regularized hypergeometric function (see Appendix \ref{A2}).
Figures~\ref{Cos1}--\ref{Cos3} illustrate the radial dependence of
the elementary-field components and the function
\begin{equation}
w(\rho,0)=\left[\|\mathbf{e}_1(\rho,0)\|^2+\|\mathbf{e}_2(\rho,0)\|^2\right]^{1/2}
\end{equation}
for different values of $a = b = n$, with $A_1=A_2=-{\rm i}k^{-2}$.
\begin{figure}[!h]
\psfrag{r}{\hspace{-1mm}$k\rho$}\psfrag{e}{\hspace{-7mm}amplitude}
\psfrag{0}{$0$}\psfrag{5}{$5$}\psfrag{10}{$10$}\psfrag{15}{$15$}\psfrag{20}{$20$}
\psfrag{0.1}{\hspace{2mm}$0.1$}\psfrag{0.2}{\hspace{2mm}$0.2$}\psfrag{0.3}{\hspace{2mm}$0.3$}
\psfrag{-0.1}{\hspace{2mm}$-0.1$}\psfrag{-0.2}{\hspace{2mm}$-0.2$}\psfrag{-0.3}{\hspace{2mm}$-0.3$}
\centering
\includegraphics[width=\columnwidth]{elementary-em-2.eps}
\caption{Relative amplitudes of the radial (solid line), azimuthal
(dashed line), and longitudinal (dotted line) components of the
elementary fields as a function of the normalized radial coordinate
$k\rho$, as well as the function $w(k\rho)$ (thick solid line) for
$n=1$, which corresponds to a Lambertian source.} \label{Cos1}
\end{figure}
\begin{figure}[!h]
\psfrag{r}{\hspace{-1mm}$k\rho$}\psfrag{e}{\hspace{-7mm}amplitude}
\psfrag{0}{$0$}\psfrag{5}{$5$}\psfrag{10}{$10$}\psfrag{15}{$15$}\psfrag{20}{$20$}
\psfrag{0.1}{\hspace{2mm}$0.1$}\psfrag{0.05}{\hspace{2mm}$0.05$}
\psfrag{-0.1}{\hspace{2mm}$-0.1$}\psfrag{-0.05}{\hspace{2mm}$-0.05$}
\centering
\includegraphics[width=\columnwidth]{elementary-em-3.eps}
\caption{Same as Fig.~\ref{Cos1}, but with $n=2$, which corresponds
to an incoherent source in scalar theory.} \label{Cos2}
\end{figure}
\begin{figure}[!h]
\psfrag{r}{\hspace{-1mm}$k\rho$}\psfrag{e}{\hspace{-7mm}amplitude}
\psfrag{0}{$0$}\psfrag{5}{$5$}\psfrag{10}{$10$}\psfrag{15}{$15$}\psfrag{20}{$20$}
\psfrag{0.02}{\hspace{2mm}$0.02$}\psfrag{0.04}{\hspace{2mm}$0.04$}
\psfrag{-0.02}{\hspace{2mm}$-0.02$}\psfrag{-0.04}{\hspace{2mm}$-0.04$}
\centering
\includegraphics[width=\columnwidth]{elementary-em-4.eps}
\caption{Same as Fig.~\ref{Cos1}, but with $n=5$. Thus the source
has a somewhat directional radiation pattern.} \label{Cos3}
\end{figure}
\psfrag{0.0025}{\hspace{4mm}$0.0025$}\psfrag{0.005}{\hspace{3.5mm}$0.005$}
\psfrag{0.0075}{\hspace{4mm}$0.0075$}\psfrag{0.01}{\hspace{3mm}$0.01$}
\psfrag{-0.0025}{\hspace{4mm}$-0.0025$}\psfrag{-0.005}{\hspace{3.5mm}$-0.005$}
\begin{figure}[!h]
\psfrag{r}{\hspace{-1mm}$k\rho$}\psfrag{e}{\hspace{-7mm}amplitude}
\psfrag{0}{$0$}\psfrag{5}{$5$}\psfrag{10}{$10$}\psfrag{15}{$15$}\psfrag{20}{$20$}
\centering
\includegraphics[width=\columnwidth]{elementary-em-5.eps}
\caption{Same as Fig.~\ref{Cos3}, but with $n=20$. The radiation
pattern is increasingly directional and could be produced
approximately by a LED with an integrated collimating lens.}
\label{Cos4}
\end{figure}
\section{Illustration: LED model}
\label{s:LED}
Let us consider a simple model for a rotationally symmetric surface-emitting LED
illustrated in Fig.~\ref{f:LEDgeom}a, where the primary light-emitting region is
planar (such as a quantum well) and buried inside a semiconductor
material of refractive index $n_{\rm s}$. Each primary source point is assumed to radiate
(independently) a spherical wave. If we denote the propagation angle inside the semiconductor material
by $\theta^\prime$, the radiant intensity may be expressed as a sum of radially
and azimuthally polarized contributions $J^{({\rm i})}_j(\theta^\prime)$, $j = 1,2$:
\begin{equation}
J^{({\rm i})}_j(\theta^\prime) = J_{0,j}^{({\rm i})}\cos^2\theta^\prime I_j^{({\rm i})}(\theta^\prime) = J_{0,j}^{({\rm i})}\cos^2\theta^\prime
\left|A_j^{({\rm i})}(\theta^\prime)\right|^2,
\end{equation}
where $A_j^{({\rm i})}(\theta^\prime)$ is the complex amplitude of the plane wave in direction $\theta^\prime$.
If the distance between the pn plane and the semiconductor-air interface is large compared to
$\lambda$, the local plane wave approximation is valid at the semiconductor-air interface.
Thus the output radiant intensity takes the form
\begin{equation}
J_j(\theta) = J_{0,j}\cos^2\theta I_j(\theta) = J_{0,j}\cos^2\theta
\left|A_j(\theta)\right|^2.
\end{equation}
Here $J_{0,j} = J_{0,j}^{({\rm i})}/n_{\rm s}$, the angles $\theta$ and $\theta^\prime$ are related by Snell's law
$\sin\theta = n_{\rm s}\sin\theta^\prime$, and $A_j(\theta) = t_j(\theta,\theta^\prime)A_j^{({\rm i})}(\theta^\prime)$, where $t_j(\theta,\theta^\prime)$ are given by Fresnel's equations
\begin{equation}
t_1(\theta,\theta^\prime) = \frac{2n_{\rm s}\cos\theta}{\cos\theta^\prime + n_{\rm s}\cos\theta},
\end{equation}
\begin{equation}
t_2(\theta,\theta^\prime) = \frac{2\cos\theta}{n_{\rm s}\cos\theta^\prime + \cos\theta}
\end{equation}
for radial (TM) and azimuthal (TE) polarizations.
Because of the large refractive index $n_{\rm s}$ of a semiconductor, only a narrow cone of plane waves,
with incidence angles $\theta^\prime$ in the range $0\leq \theta^\prime < \arcsin\left(1/n_{\rm s}\right)$, is transmitted into air.
We may thus assume that the radial and azimuthal contributions to the radiant intensity of the primary source
are equal, i.e., $I_1^{({\rm i})} = I_2^{({\rm i})}$, and hence we may denote $J_0^{(\rm i)}=2J_{0,j}^{(\rm i)}$ and $J_0=2J_{0,j}$. Then the degree of polarization given by Eq.~\eqref{Idegpol}
is
\begin{equation}
P(\theta) = \frac{\left|t_1(\theta,\theta^\prime)\right|^2 - \left|t_2(\theta,\theta^\prime)\right|^2}{\left|t_1(\theta,\theta^\prime)\right|^2 + \left|t_2(\theta,\theta^\prime)\right|^2}
\end{equation}
and the radiant intensity transforms at the interface according to
\begin{equation}
\frac{J(\theta)}{J^{({\rm i})}(\theta)} = \frac{1}{2n_{\rm s}}\frac{\cos^2\theta}{\cos^2\theta^\prime}\left[\left|t_1(\theta,\theta^\prime)\right|^2+\left|t_2(\theta,\theta^\prime)\right|^2\right].
\end{equation}
If we assume that the radiation pattern produced by the primary source is Lambertian, with
\begin{equation}
J^{({\rm i})}_j(\theta^\prime) = \frac{1}{2} J_0^{({\rm i})}\cos\theta^\prime,
\end{equation}
the radial and azimuthal contributions to the radiant intensity of the secondary source become
\begin{equation}
J_j(\theta) = \frac{1}{2} J_0\frac{\cos^2\theta}{\cos\theta^\prime}\left|t_j(\theta,\theta^\prime)\right|^2.
\end{equation}
These contributions $J_1(\theta)$ and $J_2(\theta)$ are shown by the dotted and dashed lines,
respectively, in Fig.~\ref{f:LEDgeom}b, where we have taken $n_{\rm s} = 3.5$. The curves are well fitted by functions of the form $\cos^n\theta$: we obtain $n\approx 3.4 = b$
for the azimuthally polarized contribution, $n\approx 2.4 = a$ for the radially polarized
contribution, and $n\approx 2.9$ for an equally weighted sum of the two contributions.
Thus the elementary electric-field modes given by Eq.~\eqref{spcosmodes} provide good
approximations of the modes of the structure in Fig.~\ref{f:LEDgeom}a.
The degree of polarization, also plotted in Fig.~\ref{f:LEDgeom}b, increases
from a zero on-axis value (unpolarized radiation in the paraxial
domain) to $P(\theta)\approx 0.85$ when $\theta\rightarrow \pi/2$,
indicating partially polarized radiation in the non-paraxial domain.
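As a numerical sanity check of these limiting values, the short Python sketch below evaluates $P(\theta)$ on axis and near grazing emission for $n_{\rm s}=3.5$, assuming the standard Fresnel amplitude transmission coefficients for the TM (radial, $t_1$) and TE (azimuthal, $t_2$) components:

```python
import math

def transmission(n_s, theta):
    """Fresnel amplitude transmission coefficients for a ray leaving a
    medium of index n_s into air at external angle theta; the internal
    angle theta' follows from Snell's law n_s*sin(theta') = sin(theta)."""
    tp = math.asin(math.sin(theta) / n_s)      # internal angle theta'
    ci, ce = math.cos(tp), math.cos(theta)
    t1 = 2.0 * n_s * ci / (ci + n_s * ce)      # TM / radial
    t2 = 2.0 * n_s * ci / (n_s * ci + ce)      # TE / azimuthal
    return t1, t2

def degree_of_polarization(n_s, theta):
    t1, t2 = transmission(n_s, theta)
    return (t1**2 - t2**2) / (t1**2 + t2**2)

print(degree_of_polarization(3.5, 1e-6))              # ~0 on axis
print(degree_of_polarization(3.5, math.pi/2 - 1e-4))  # ~0.85 near grazing
```

The on-axis value vanishes because the two Fresnel coefficients coincide at normal incidence, while near $\theta=\pi/2$ the TM transmission dominates, reproducing the value $P(\theta)\approx 0.85$ quoted above.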
\begin{figure}[!h]
\psfrag{n}{\hspace{-1mm}$n_{\rm s}$}\psfrag{x}{$\theta$}
\psfrag{y}{\hspace{-7mm}$J(\theta)$, $P(\theta)$}
\psfrag{p}{pn}\psfrag{a}{(a)}\psfrag{b}{(b)}\psfrag{z}{$z$}
\psfrag{r}{$J_{\rm azi}(\theta)$}\psfrag{c}{$J_{\rm rad}(\theta)$}
\psfrag{0}{$0$}\psfrag{0.25}{$0.25$}\psfrag{0.5}{$0.5$}\psfrag{0.75}{$0.75$}\psfrag{1}{$1$}
\psfrag{1.25}{$1.25$}\psfrag{1.5}{$1.5$}\psfrag{0.2}{\hspace{2mm}$0.2$}\psfrag{0.4}{\hspace{2mm}$0.4$}
\psfrag{0.6}{\hspace{2mm}$0.6$}\psfrag{0.8}{\hspace{2mm}$0.8$}
\centering
\includegraphics[width=0.8\columnwidth]{elementary-em-led.eps}
\includegraphics[width=\columnwidth]{ledpatterns.eps}
\caption{(a) A generic geometry of a broad-area surface-emitting
LED: pn is the active emitting area and $n$ is the refractive index
of the semiconductor material. The solid and dashed curves
illustrate the azimuthally and radially polarized contributions to
the radiant intensity distribution $J(\theta)$.
(b)~Geometrical-optics predictions of the azimuthally (dashed curve)
and radially (dotted curve) polarized contributions to the radiant
intensity, their average (solid curve), and the degree of
polarization $P(\theta)$ in the far zone (thick solid curve).}
\label{f:LEDgeom}
\end{figure}
\section{Final remarks}
\label{s:discussion}
The electromagnetic elementary-mode decomposition presented in this paper should prove useful
in optical system modeling by field tracing methods. To this end, it is necessary to determine the
elementary field modes and the weight functions of the source. The example presented above illustrates
elementary field modes and the weight functions of the source. The example presented above illustrates
the possibility of doing this if there is sufficient a priori information about the structure of
the source. If, however, the source properties are not known, it is necessary to determine the modal
decomposition experimentally. As in the scalar case~\cite{Vahimaa}, this can in principle be accomplished
by far-field measurements. In general, the polarization basis vectors $\mathbf{f}_j$ and the eigenvalues
$I_j$ can be determined from the polarization matrix $\boldmatrix{A}(\boldsymbol{\kappa}, \boldsymbol{\kappa})$.
This matrix can be determined by measuring, e.g., the angular dependence of the Stokes parameters of the field
in the far zone. Thus only single-point measurements across the radiation pattern are needed. Determination
of the weight functions requires, in general, two-point correlation measurements in the far zone, but this
is avoided if the field is known to be quasihomogeneous.
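To illustrate the single-point step, the polarization matrix can be assembled from measured Stokes parameters and diagonalized to recover the eigenvalues $I_j$ and basis vectors $\mathbf{f}_j$. A minimal numpy sketch, using one common Stokes convention (the sign of $S_3$ varies between texts), with hypothetical measurement values:

```python
import numpy as np

def polarization_matrix(S0, S1, S2, S3):
    """2x2 polarization matrix reconstructed from the Stokes parameters
    (one common convention; the sign of S3 depends on the handedness choice)."""
    return 0.5 * np.array([[S0 + S1, S2 - 1j * S3],
                           [S2 + 1j * S3, S0 - S1]])

# Hypothetical partially polarized measurement:
A = polarization_matrix(1.0, 0.3, 0.2, 0.1)
I_j, f_j = np.linalg.eigh(A)   # eigenvalues I_j; columns of f_j are the basis
print(I_j.sum())               # trace recovers S0, the total intensity
```

The two eigenvalues give the intensities of the modes, and the usual degree of polarization follows from their difference over their sum.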
\section*{Acknowledgments}
This work was supported by the Academy of Finland (118951, 129155,
and 209806).
\section{Introduction}
The {\it IJCAI--21 Proceedings} will be printed from electronic
manuscripts submitted by the authors. These must be PDF ({\em Portable
Document Format}) files formatted for 8-1/2$''$ $\times$ 11$''$ paper.
\subsection{Length of Papers}
All paper {\em submissions} must have a maximum of six pages, plus at most one for references. The seventh page cannot contain {\bf anything} other than references.
The length rules may change for final camera-ready versions of accepted papers and will differ between tracks. Some tracks may include only references in the last page, whereas others allow for any content in all pages. Similarly, some tracks allow you to buy a few extra pages should you want to, whereas others don't.
If your paper is accepted, please carefully read the notifications you receive, and check the proceedings submission information website\footnote{\url{https://proceedings.ijcai.org/info}} to know how many pages you can finally use. That website holds the most up-to-date information regarding paper length limits at all times. Please notice that if your track allows for a special references-only page, the {\bf references-only page(s) cannot contain anything else than references} (i.e.: do not write your acknowledgments on that page or you will be charged for it).
\subsection{Word Processing Software}
As detailed below, IJCAI has prepared and made available a set of
\LaTeX{} macros and a Microsoft Word template for use in formatting
your paper. If you are using some other word processing software, please follow the format instructions given below and ensure that your final paper looks as much like this sample as possible.
\section{Style and Format}
\LaTeX{} and Word style files that implement these instructions
can be retrieved electronically. (See Appendix~\ref{stylefiles} for
instructions on how to obtain these files.)
\subsection{Layout}
Print manuscripts two columns to a page, in the manner in which these
instructions are printed. The exact dimensions for pages are:
\begin{itemize}
\item left and right margins: .75$''$
\item column width: 3.375$''$
\item gap between columns: .25$''$
\item top margin---first page: 1.375$''$
\item top margin---other pages: .75$''$
\item bottom margin: 1.25$''$
\item column height---first page: 6.625$''$
\item column height---other pages: 9$''$
\end{itemize}
All measurements assume an 8-1/2$''$ $\times$ 11$''$ page size. For
A4-size paper, use the given top and left margins, column width,
height, and gap, and modify the bottom and right margins as necessary.
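For instance, keeping the prescribed left and top margins, column widths, gap, and column height, the A4 adjustment reduces to simple arithmetic (the values below are approximate):

```python
# A4 paper is 210 mm x 297 mm; convert to inches and recompute the
# right and bottom margins while keeping the other measurements fixed.
a4_width, a4_height = 210 / 25.4, 297 / 25.4

right_margin = a4_width - (0.75 + 3.375 + 0.25 + 3.375)   # ~0.52"
bottom_margin_other = a4_height - (0.75 + 9.0)            # ~1.94"
print(round(right_margin, 2), round(bottom_margin_other, 2))
```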
\subsection{Format of Electronic Manuscript}
For the production of the electronic manuscript, you must use Adobe's
{\em Portable Document Format} (PDF). A PDF file can be generated, for
instance, on Unix systems using {\tt ps2pdf} or on Windows systems
using Adobe's Distiller. There is also a website with free software
and conversion services: \url{http://www.ps2pdf.com}. For reasons of
uniformity, use of Adobe's {\em Times Roman} font is strongly suggested.
In \LaTeX2e{} this is accomplished by writing
\begin{quote}
\mbox{\tt $\backslash$usepackage\{times\}}
\end{quote}
in the preamble.\footnote{You may want also to use the package {\tt
latexsym}, which defines all symbols known from the old \LaTeX{}
version.}
Additionally, it is of utmost importance to specify the {\bf
letter} format (corresponding to 8-1/2$''$ $\times$ 11$''$) when
formatting the paper. When working with {\tt dvips}, for instance, one
should specify {\tt -t letter}.
\subsection{Title and Author Information}
Center the title on the entire width of the page in a 14-point bold
font. The title must be capitalized using Title Case. Below it, center author name(s) in 12-point bold font. On the following line(s) place the affiliations, each affiliation on its own line using 12-point regular font. Matching between authors and affiliations can be done using numeric superindices. Optionally, a comma-separated list of email addresses follows the affiliation(s) line(s), using 12-point regular font.
\subsubsection{Blind Review}
In order to make blind reviewing possible, authors must omit their
names and affiliations when submitting the paper for review. In place
of names and affiliations, provide a list of content areas. When
referring to one's own work, use the third person rather than the
first person. For example, say, ``Previously,
Gottlob~\shortcite{gottlob:nonmon} has shown that\ldots'', rather
than, ``In our previous work~\cite{gottlob:nonmon}, we have shown
that\ldots'' Try to avoid including any information in the body of the
paper or references that would identify the authors or their
institutions. Such information can be added to the final camera-ready
version for publication.
\subsection{Abstract}
Place the abstract at the beginning of the first column 3$''$ from the
top of the page, unless that does not leave enough room for the title
and author information. Use a slightly smaller width than in the body
of the paper. Head the abstract with ``Abstract'' centered above the
body of the abstract in a 12-point bold font. The body of the abstract
should be in the same font as the body of the paper.
The abstract should be a concise, one-paragraph summary describing the
general thesis and conclusion of your paper. A reader should be able
to learn the purpose of the paper and the reason for its importance
from the abstract. The abstract should be no more than 200 words long.
\subsection{Text}
The main body of the text immediately follows the abstract. Use
10-point type in a clear, readable font with 1-point leading (10 on
11).
Indent when starting a new paragraph, except after major headings.
\subsection{Headings and Sections}
When necessary, headings should be used to separate major sections of
your paper. (These instructions use many headings to demonstrate their
appearance; your paper should have fewer headings.) All headings should be capitalized using Title Case.
\subsubsection{Section Headings}
Print section headings in 12-point bold type in the style shown in
these instructions. Leave a blank space of approximately 10 points
above and 4 points below section headings. Number sections with
arabic numerals.
\subsubsection{Subsection Headings}
Print subsection headings in 11-point bold type. Leave a blank space
of approximately 8 points above and 3 points below subsection
headings. Number subsections with the section number and the
subsection number (in arabic numerals) separated by a
period.
\subsubsection{Subsubsection Headings}
Print subsubsection headings in 10-point bold type. Leave a blank
space of approximately 6 points above subsubsection headings. Do not
number subsubsections.
\paragraph{Titled paragraphs.} You should use titled paragraphs if and
only if the title covers exactly one paragraph. Such paragraphs should be
separated from the preceding content by at least 3pt, and no more than
6pt. The title should be in 10pt bold font and ended with a period.
After that, a 1em horizontal space should follow the title before
the paragraph's text.
In \LaTeX{} titled paragraphs should be typeset using
\begin{quote}
{\tt \textbackslash{}paragraph\{Title.\} text}.
\end{quote}
\subsubsection{Acknowledgements}
You may include an unnumbered acknowledgments section, including
acknowledgments of help from colleagues, financial support, and
permission to publish. If present, acknowledgements must be in a dedicated,
unnumbered section appearing after all regular sections but before any
appendices or references.
Use
\begin{quote}
{\tt \textbackslash{}section*\{Acknowledgements\}}
\end{quote}
to typeset the acknowledgements section in \LaTeX{}.
\subsubsection{Appendices}
Any appendices directly follow the text and look like sections, except
that they are numbered with capital letters instead of arabic
numerals. See this document for an example.
\subsubsection{References}
The references section is headed ``References'', printed in the same
style as a section heading but without a number. A sample list of
references is given at the end of these instructions. Use a consistent
format for references. The reference list should not include publicly unavailable work.
\subsection{Citations}
Citations within the text should include the author's last name and
the year of publication, for example~\cite{gottlob:nonmon}. Append
lowercase letters to the year in cases of ambiguity. Treat multiple
authors as in the following examples:~\cite{abelson-et-al:scheme}
or~\cite{bgf:Lixto} (for more than two authors) and
\cite{brachman-schmolze:kl-one} (for two authors). If the author
portion of a citation is obvious, omit it, e.g.,
Nebel~\shortcite{nebel:jair-2000}. Collapse multiple citations as
follows:~\cite{gls:hypertrees,levesque:functional-foundations}.
\nocite{abelson-et-al:scheme}
\nocite{bgf:Lixto}
\nocite{brachman-schmolze:kl-one}
\nocite{gottlob:nonmon}
\nocite{gls:hypertrees}
\nocite{levesque:functional-foundations}
\nocite{levesque:belief}
\nocite{nebel:jair-2000}
\subsection{Footnotes}
Place footnotes at the bottom of the page in a 9-point font. Refer to
them with superscript numbers.\footnote{This is how your footnotes
should appear.} Separate them from the text by a short
line.\footnote{Note the line separating these footnotes from the
text.} Avoid footnotes as much as possible; they interrupt the flow of
the text.
\section{Illustrations}
Place all illustrations (figures, drawings, tables, and photographs)
throughout the paper at the places where they are first discussed,
rather than at the end of the paper.
They should be floated to the top (preferred) or bottom of the page,
unless they are an integral part
of your narrative flow. When placed at the bottom or top of
a page, illustrations may run across both columns, but not when they
appear inline.
Illustrations must be rendered electronically or scanned and placed
directly in your document. They should be cropped outside \LaTeX{}; otherwise, portions of the image could reappear during the post-processing of your paper. All illustrations should be understandable when printed in black and
white, although you can use colors to enhance them. Line weights should
be 1/2-point or thicker. Avoid screens and superimposing type on
patterns, as these effects may not reproduce well.
Number illustrations sequentially. Use references of the following
form: Figure 1, Table 2, etc. Place illustration numbers and captions
under illustrations. Leave a margin of 1/4-inch around the area
covered by the illustration and caption. Use 9-point type for
captions, labels, and other text in illustrations. Captions should always appear below the illustration.
\section{Tables}
Tables are considered illustrations containing data. Therefore, they should also appear floated to the top (preferably) or bottom of the page, and with the captions below them.
\begin{table}
\centering
\begin{tabular}{lll}
\hline
Scenario & $\delta$ & Runtime \\
\hline
Paris & 0.1s & 13.65ms \\
Paris & 0.2s & 0.01ms \\
New York & 0.1s & 92.50ms \\
Singapore & 0.1s & 33.33ms \\
Singapore & 0.2s & 23.01ms \\
\hline
\end{tabular}
\caption{Latex default table}
\label{tab:plain}
\end{table}
\begin{table}
\centering
\begin{tabular}{lrr}
\toprule
Scenario & $\delta$ (s) & Runtime (ms) \\
\midrule
Paris & 0.1 & 13.65 \\
& 0.2 & 0.01 \\
New York & 0.1 & 92.50 \\
Singapore & 0.1 & 33.33 \\
& 0.2 & 23.01 \\
\bottomrule
\end{tabular}
\caption{Booktabs table}
\label{tab:booktabs}
\end{table}
If you are using \LaTeX, you should use the {\tt booktabs} package, because it produces better tables than the standard ones. Compare Tables \ref{tab:plain} and~\ref{tab:booktabs}. The latter is clearly more readable for three reasons:
\begin{enumerate}
\item The styling is better thanks to using the {\tt booktabs} rulers instead of the default ones.
\item Numeric columns are right-aligned, making it easier to compare the numbers. Make sure to also right-align the corresponding headers, and to use the same precision for all numbers.
\item We avoid unnecessary repetition, both between lines (no need to repeat the scenario name in this case) as well as in the content (units can be shown in the column header).
\end{enumerate}
\section{Formulas}
IJCAI's two-column format makes it difficult to typeset long formulas. A usual temptation is to reduce the size of the formula by using the {\tt small} or {\tt tiny} sizes. This doesn't work correctly with the current \LaTeX{} versions, breaking the line spacing of the preceding paragraphs and title, as well as the equation number sizes. The following equation demonstrates the effects (notice that this entire paragraph looks badly formatted):
\begin{tiny}
\begin{equation}
x = \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \prod_{i=1}^n \sum_{j=1}^n j_i
\end{equation}
\end{tiny}%
Reducing formula sizes this way is strictly forbidden. We {\bf strongly} recommend authors to split formulas in multiple lines when they don't fit in a single line. This is the easiest approach to typeset those formulas and provides the most readable output:%
\begin{align}
x =& \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j \nonumber\\
+ & \prod_{i=1}^n \sum_{j=1}^n j_i
\end{align}%
If a line is just slightly longer than the column width, you may use the {\tt resizebox} environment on that equation. The result looks better and doesn't interfere with the paragraph's line spacing: %
\begin{equation}
\resizebox{.91\linewidth}{!}{$
\displaystyle
x = \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \prod_{i=1}^n \sum_{j=1}^n j_i
$}
\end{equation}%
This last solution may have to be adapted if you use different equation environments, but it can generally be made to work. Please notice that in any case:
\begin{itemize}
\item Equation numbers must be in the same font and size as the main text (10pt).
\item Your formula's main symbols should not be smaller than {\small small} text (9pt).
\end{itemize}
For instance, the formula
\begin{equation}
\resizebox{.91\linewidth}{!}{$
\displaystyle
x = \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j
$}
\end{equation}
would not be acceptable because the text is too small.
\section{Examples, Definitions, Theorems and Similar}
Examples, definitions, theorems, corollaries and similar must be written in their own paragraph. The paragraph must be separated by at least 2pt and no more than 5pt from the preceding and succeeding paragraphs. They must begin with the kind of item written in 10pt bold font followed by their number (e.g.: Theorem 1), optionally followed by a title/summary between parentheses in non-bold font and ended with a period. After that the main body of the item follows, written in 10 pt italics font (see below for examples).
In \LaTeX{} we strongly recommend you to define environments for your examples, definitions, propositions, lemmas, corollaries and similar. This can be done in your \LaTeX{} preamble using \texttt{\textbackslash{}newtheorem} -- see the source of this document for examples. Numbering for these items must be global, not per-section (e.g.: Theorem 1 instead of Theorem 6.1).
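A minimal preamble along these lines (the environment names are illustrative) could be:

```latex
% In the preamble: one globally numbered counter per item kind
\newtheorem{theorem}{Theorem}
\newtheorem{definition}{Definition}
\newtheorem{example}{Example}
```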
\begin{example}[How to write an example]
Examples should be written using the example environment defined in this template.
\end{example}
\begin{theorem}
This is an example of an untitled theorem.
\end{theorem}
You may also include a title or description using these environments as shown in the following theorem.
\begin{theorem}[A titled theorem]
This is an example of a titled theorem.
\end{theorem}
\section{Proofs}
Proofs must be written in their own paragraph separated by at least 2pt and no more than 5pt from the preceding and succeeding paragraphs. Proof paragraphs should start with the keyword ``Proof." in 10pt italics font. After that the proof follows in regular 10pt font. At the end of the proof, an unfilled square symbol (qed) marks the end of the proof.
In \LaTeX{} proofs should be typeset using the \texttt{\textbackslash{}proof} environment.
\begin{proof}
This paragraph is an example of what a proof looks like using the \texttt{\textbackslash{}proof} environment.
\end{proof}
\section{Algorithms and Listings}
Algorithms and listings are a special kind of figures. Like all illustrations, they should appear floated to the top (preferably) or bottom of the page. However, their caption should appear in the header, left-justified and enclosed between horizontal lines, as shown in Algorithm~\ref{alg:algorithm}. The algorithm body should be terminated with another horizontal line. It is up to the authors to decide whether to show line numbers or not, how to format comments, etc.
In \LaTeX{} algorithms may be typeset using the {\tt algorithm} and {\tt algorithmic} packages, but you can also use one of the many other packages for the task.
\begin{algorithm}[tb]
\caption{Example algorithm}
\label{alg:algorithm}
\textbf{Input}: Your algorithm's input\\
\textbf{Parameter}: Optional list of parameters\\
\textbf{Output}: Your algorithm's output
\begin{algorithmic}[1]
\STATE Let $t=0$.
\WHILE{condition}
\STATE Do some action.
\IF {conditional}
\STATE Perform task A.
\ELSE
\STATE Perform task B.
\ENDIF
\ENDWHILE
\STATE \textbf{return} solution
\end{algorithmic}
\end{algorithm}
\section*{Acknowledgments}
The preparation of these instructions and the \LaTeX{} and Bib\TeX{}
files that implement them was supported by Schlumberger Palo Alto
Research, AT\&T Bell Laboratories, and Morgan Kaufmann Publishers.
Preparation of the Microsoft Word file was supported by IJCAI. An
early version of this document was created by Shirley Jowell and Peter
F. Patel-Schneider. It was subsequently modified by Jennifer
Ballentine and Thomas Dean, Bernhard Nebel, Daniel Pagenstecher,
Kurt Steinkraus, Toby Walsh and Carles Sierra. The current version
has been prepared by Marc Pujol-Gonzalez and Francisco Cruz-Mencia.
\section{Introduction}
With the rapid development of E-commerce over the years, recommender systems play an increasingly important role in E-commerce platforms.
In general, recommender systems consist of two stages, the matching stage and the ranking stage.
The matching stage mainly matches users with relevant items, quickly retrieving a fraction of items that users are potentially interested in from the massive inventory, and then hands them to the ranking stage.
The ranking stage assigns a score to each item according to a desired objective function. Intuitively, the main purpose of the two stages is to learn user and item representations to support efficient retrieval and ranking of items for users.
Some recent works leverage the algorithms to learn how to represent the user interest vector.
Collaborative filtering-based methods \cite{DBLP:conf/www/SarwarKKR01}\cite{DBLP:journals/computer/KorenBV09} extract user interests from historical behaviors; they are preferred for new recommender systems in most cases but may suffer from sparsity problems during computation.
Thus, deep learning-based methods are introduced in recommender systems to model user interests with low-dimensional embedding vectors. For example, the deep neural network proposed for YouTube video recommendation (YouTube DNN) \cite{DBLP:conf/recsys/CovingtonAS16} generates one fixed-length vector for each user from the historical behaviors of users.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{figs/introduction17_1.png}
\caption{A single vector cannot express multiple interests.}
\label{fig:model_part}
\end{figure}
However, using one vector to represent the user assumes that the user only has a single preferred interest within a session.
As can be seen in Figure \ref{fig:model_part}, if a user clicks dresses multiple times and keyboards a few times within a session, the learned user vector is likely to lie very close to dresses. During the matching stage, the nearest neighbor algorithm then matches items related to dresses. However, keyboards are also of interest to the user. Multiple vectors that represent different interests of users are thus necessary.
As user interests broaden, user modeling also needs to be more expressive.
Deep Interest Network (DIN) \cite{DBLP:journals/corr/ZhouSZMYDZJLG17} applies local activation unit on user historical behaviors to adaptively capture the diversity of user interests. However, since the candidate size for the matching stage is on the scale of billions, recalculating user representation for each item is computationally infeasible.
Multi-interest modeling in the matching stage requires greater model expressiveness. Greater model expressiveness entails more parameters, which introduces huge costs for computation, storage, and model optimization. MIND \cite{DBLP:conf/cikm/LiLWXZHKCLL19} applies a capsule network to generate multiple user vectors, not only reducing the additional cost but also improving the performance. Moreover, ComiRec \cite{DBLP:conf/kdd/CenZZZYT20} modifies MIND to consider the order information. However, neither MIND nor ComiRec considers diversity constraints on the multiple user interests. In extreme cases, the vectors generated by MIND and ComiRec are all the same.
In this paper, we propose a novel diversity regularized interest model for recommender systems, called DRIM. We first generate multiple interest vectors through a capsule network. To prevent the generated interest vectors from converging to the same one, the different interest vectors should be regularized. Thus, we introduce three strategies as diversity regularized separators to discriminate the multiple interests of a user.
The diversity of the multiple user vectors is controllable in our model through the three diversity regularized strategies, so that each interest of the user has a certain degree of distinction.
These user vectors are used in the matching stage to retrieve relevant items from billion-scale inventories.
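As a generic illustration of what such a diversity separator measures (an illustrative penalty, not one of the paper's exact losses), the mean pairwise cosine similarity of the interest vectors is maximal precisely when they all collapse to one direction:

```python
import numpy as np

def diversity_penalty(V):
    """Mean pairwise cosine similarity of the K interest vectors (rows
    of V): 1.0 when all interests collapse to one direction, 0.0 when
    they are mutually orthogonal."""
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    G = V @ V.T                                   # K x K cosine similarities
    off_diag = G[~np.eye(V.shape[0], dtype=bool)]
    return off_diag.mean()

collapsed = np.ones((3, 4))     # all three interests identical
spread = np.eye(3, 4)           # mutually orthogonal interests
print(diversity_penalty(collapsed))  # 1.0
print(diversity_penalty(spread))     # 0.0
```

Minimizing a penalty of this kind alongside the recommendation loss keeps the $K$ interest vectors from degenerating into copies of each other.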
To summarize, the main contributions of this work are as follows:
\begin{itemize}
\item We propose a comprehensive framework that generates multiple interests for a user and integrates the diversity-regularizing mechanism of multi-interest components.
\item We introduce three different diversity regularized separators to model the multiple interests of users, improving both efficiency and accuracy.
\item Our framework achieves state-of-the-art performance on two real-world challenging datasets for the recommender systems.
\end{itemize}
\textbf{Organization:} The rest of the paper is organized as follows. Section \ref{section:back} summarizes work related to our model. Section \ref{model_section} introduces our proposed framework
in detail. In Section \ref{experimental_section}, we conduct extensive experiments and case studies. Finally, we conclude in Section \ref{conclusion}.
\section{Background and Related Work} \label{section:back}
In this section, we review related work on traditional models and deep learning methods for recommender systems, as well as the capsule networks used in this paper.
\subsection{Traditional Model for Recommendation}
Collaborative filtering methods \cite{DBLP:conf/www/SarwarKKR01}\cite{DBLP:journals/computer/KorenBV09} are the main traditional methods used in recommender systems. Collaborative filtering methods make recommendations based on user-item similarity. Rendle \textit{et al.} \cite{DBLP:conf/www/RendleFS10} combine matrix factorization with a personalized Markov chain to model both the long-term intents of users and the sequence effects. Following this work, Liang \textit{et al.} \cite{DBLP:conf/recsys/LiangACB16} propose a co-factor model, combining matrix factorization with item embedding to improve the performance of standard matrix factorization and to model the sequence pattern. Factorization Machines (FMs) \cite{DBLP:conf/icdm/Rendle10} model
all interactions between variables using factorized parameters and thus can resolve sparsity problems in recommender systems.
\subsection{Deep Learning for Recommendation}
Due to the significant improvement in performance compared to traditional models, deep learning has been integrated into many industry-scale recommender systems. Neural Collaborative Filtering
(NCF) \cite{DBLP:conf/www/HeLZNHC17} uses a multi-layer neural network to model the interaction between users and items. Neural Factorization Machines (NFM) \cite{DBLP:conf/sigir/0001C17} fully combines the second-order linear feature extracted by FM and the higher-order nonlinear feature extracted by the neural network. Furthermore, the low-order and high-order combination features can be extracted at the same time by DeepFM \cite{DBLP:conf/ijcai/GuoTYLH17}. Deep \& Cross Network (DCN)\cite{DBLP:conf/kdd/WangFFW17} has higher computational efficiency and can extract higher-order crossover features.
\subsection{Capsule Network}
The concept of ``capsules'' is first proposed
by Hinton \textit{et al.} \cite{DBLP:conf/icann/HintonKW11} in 2011. They consider a capsule as a group of neurons whose activity vectors represent the instantiation parameters of a specific type of entity such as an object or an object part. The length of the output vector of a capsule represents the probability that the entity represented by the capsule is present in the current input. Next, the dynamic routing
method \cite{DBLP:conf/nips/SabourFH17} is introduced to learn the weights on the connections between capsules. Afterwards, Hinton \textit{et al.} \cite{DBLP:conf/iclr/HintonSF18} propose expectation-maximization algorithm to overcome several deficiencies. Stacked Capsule Autoencoders (SCAE) \cite{DBLP:conf/nips/KosiorekSTH19} uses geometric relationships between parts to reason about objects. The capsule network has been applied in recommender systems recently.
MIND \cite{DBLP:conf/cikm/LiLWXZHKCLL19} utilizes dynamic routing mechanisms in recommender systems to capture multiple interests of users in the matching stage. Following this, ComiRec \cite{DBLP:conf/kdd/CenZZZYT20} modifies MIND for considering the order information to apply the capsule network in the sequential recommendation.
The main difference from existing methods is that our diversity regularized interest model adapts MIND by adding a diversity separator, so that each interest of the user has a certain degree of distinction, better modeling multiple interests.
\begin{figure*}[!tb]
\centering
\includegraphics[width=1\textwidth]{figs/model19.png}
\caption{The Diversity Regularized Interests Model framework. The extractor takes user behaviors as input and outputs user multiple vectors. Id features from the input layer are transformed into embeddings. User behavior
embeddings are fed into the multi-interest extractor layer, which produces interest capsules. The red, yellow, and green circles represent different interests. The diversity separator (one of \textcircled{1}, \textcircled{2}, or \textcircled{3}) then applies a strategy to the interest capsules to make the user vectors diverse to some extent. \textcircled{1} represents the max entropy loss, \textcircled{2} represents the mean square loss, and \textcircled{3} represents the diverse loss. By concatenating interest capsules with the user profile embedding and transforming the concatenated capsules by a multilayer perceptron, user representation
vectors are obtained. An extra select function is introduced to guide the training process. Furthermore, strategy and softmax loss are jointly applied to the training process.}
\label{fig:model}
\end{figure*}
\section{Model} \label{model_section}
In this section, we formulate the problem and introduce the diversity regularized interests model in detail.
\subsection{Problem Formulation}
The recommender system consists of two stages, matching and ranking. The purpose of our work is to generate user vectors for the matching stage to model multiple interests. We have a set of users $u \in U$, a set of items $i \in I$, and a sequence of historical behaviors $h_u^i$ for each user. Given the user historical behaviors $h_u^i$ as input, our model generates multiple vectors representing user interests, $V_u = (v_u^1, v_u^2, \cdots, v_u^K) \in \mathbb{R}^{d\times K}$. Notations are summarized in Table \ref{notation}.
\subsection{Incorporating the User Profile}
As shown in Figure \ref{fig:model}, the input of our model is the user profile and the user history behaviors. The history behaviors consist of a list of item IDs. The item IDs are transformed into item embeddings through an embedding layer. To enrich the feature space, the user profile (user ID, gender, etc.) is also fed into the embedding layer. A multi-interest extractor module and a diversity separator module receive the embeddings of user history behaviors and generate multiple diverse interests for each user.
In this paper, we apply a clustering process to aggregate the user's historical behaviors into several clusters, where a cluster of items represents a particular interest of the user. We design not only the multi-interest extractor layer to generate multiple user interest vectors but also diversity separators to regularize the diversity of the multiple interests.
\begin{table}[htbp]
\caption{Notations}
\begin{center}
\begin{tabular}{lrr}
\toprule
\multicolumn{2}{c}{Notation}&{Description} \\
\midrule
\multicolumn{2}{c}{$u$}& a user\\
\multicolumn{2}{c}{$i$}& an item\\
\multicolumn{2}{c}{$U$}& the set of users\\
\multicolumn{2}{c}{$I$}& the set of items\\
\multicolumn{2}{c}{$h_u^i$}& user historical behaviors\\
\multicolumn{2}{c}{$d$}& the dimension of user/item embeddings\\
\multicolumn{2}{c}{$K$}& the number of interest embeddings\\
\multicolumn{2}{c}{$V_u$}& the matrix of interest embeddings of user $u$\\
\multicolumn{2}{c}{$c_{ij}$} & the routing logit \\
\multicolumn{2}{c}{$S$} & the bilinear mapping matrix \\
\multicolumn{2}{c}{$b_{ij}$} & the coupling coefficients\\
\bottomrule
\end{tabular}
\label{notation}
\end{center}
\end{table}
\subsection{Multi-Interest Extractor}
Our multi-interest extractor builds on the dynamic routing method used for representation learning in capsule networks.
\subsubsection{Dynamic Routing.}
We utilize a dynamic routing method to capture multiple interests for users. We consider a two-layer capsule structure consisting of a history layer, which represents the user history behaviors, and an interest layer, which represents the user interests. We adopt the dynamic routing method from CapsNet for computing the vector inputs and outputs of capsules. Let $h_i \in R^d$ denote capsule $i$ of the history layer and $v_u^j \in R^d$ denote interest capsule $j$. Each capsule $j$ of the interest layer is computed from the history layer. The routing logit $c_{ij}$ between history capsule $i$ and interest capsule $j$ is calculated by
\begin{equation} \label{logit}
c_{ij} = v_j^TSh_i,
\end{equation}
where $S \in R^{d*d}$ denotes the bilinear mapping matrix parameter shared across each pair of history and interest capsules.
The coupling coefficients $b_{ij}$ connect history capsule $i$ and interest capsule $j$. For a particular history capsule $i$, the coupling coefficients between it and all the interest capsules sum to 1. They are calculated by applying a softmax to the routing logits as
\begin{equation} \label{ coupling coefficient}
b_{ij} = \frac{\exp(c_{ij})}{\sum_t \exp(c_{it})},
\end{equation}
Once the coupling coefficients are calculated, the candidate vector for interest capsule $j$ is computed as
\begin{equation} \label{candidate vector}
z_j= \sum_ib_{ij}Sh_i,
\end{equation}
Then, the embedding of interest capsule $j$ is obtained with the non-linear \textit{squash} function as
\begin{equation} \label{interest capsule}
v_j = \operatorname{squash}(z_j)=\frac{\|z_j\|^2}{1+\|z_j\|^2}\frac{z_j}{\|z_j\|},
\end{equation}
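The routing loop above can be sketched in a few lines of NumPy. This is a minimal illustration, not the training implementation: the random initialization of $S$ and of the interest capsules, the number of routing iterations, and the scale constants are assumptions.

```python
import numpy as np

def squash(z):
    # Squash non-linearity: keeps the direction of z, maps its norm into [0, 1).
    norm_sq = float(np.dot(z, z))
    return (norm_sq / (1.0 + norm_sq)) * z / (np.sqrt(norm_sq) + 1e-9)

def dynamic_routing(H, K, iters=3, seed=0):
    """H: (n, d) history capsules -> (K, d) interest capsules."""
    rng = np.random.default_rng(seed)
    n, d = H.shape
    S = rng.normal(scale=0.1, size=(d, d))   # shared bilinear mapping matrix
    V = rng.normal(scale=0.1, size=(K, d))   # initial interest capsules (assumed random init)
    SH = H @ S.T                             # row i holds S h_i
    for _ in range(iters):
        C = V @ SH.T                         # routing logits c_{ij}, shape (K, n)
        C = C - C.max(axis=0)                # numerical stability
        B = np.exp(C) / np.exp(C).sum(axis=0)  # softmax over interests per history capsule
        Z = B @ SH                           # candidate vectors z_j
        V = np.vstack([squash(z) for z in Z])
    return V
```

Each output row has norm below 1 by construction of the squash function.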
\subsubsection{Argmax operator to select one particular interest for the target item in training}
After obtaining the interest embeddings through the multi-interest extraction layer based on user history behaviors, we adopt an argmax operator to choose the corresponding user interest embedding vector for a target item $i$, since a particular target item typically belongs to a single interest:
\begin{equation} \label{select interest}
v= V_u[:,\operatorname{argmax}_{1\leq j\leq K}( v_j^Te_i)],
\end{equation}
where $e_i$ denotes the embedding of the target item $i$ and $K$ is the number of interest embeddings.
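The selection above is just an inner product with each interest vector followed by an argmax. A small sketch (storing interests as rows rather than the column layout $V_u[:,\cdot]$ is a layout assumption):

```python
import numpy as np

def select_interest(V_u, e_i):
    """Return the interest vector with the largest inner product with the target item."""
    scores = V_u @ e_i              # one score per interest capsule
    return V_u[int(np.argmax(scores))]
```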
\subsection{Diversity Regularized Separators} \label{sepa}
To obtain diverse user interests extracted from the history capsules, we propose a separator layer that makes the interest capsules distinct. The main idea of the diversity separator layer is to introduce three loss functions that increase the distance among the interest clusters and thus regularize the diversity.
\subsubsection{Max Entropy Loss}
Following SCAE \cite{DBLP:conf/nips/KosiorekSTH19} and the intuition that entropy grows with the degree of disorder, we maximize the entropy across interest capsules to regularize the diversity of the interest capsule vectors.
\begin{equation} \label{loss}
loss_{entropy} =-\sum_{k=1}^KH(v_k)
\end{equation}
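The text does not spell out how $H(v_k)$ is computed for a real-valued vector; one common reading, used in this sketch as an assumption, is the Shannon entropy of a softmax-normalized view of each capsule:

```python
import numpy as np

def entropy_loss(V, eps=1e-12):
    """-sum_k H(v_k), with H(v_k) taken as the entropy of softmax(v_k) (an assumption)."""
    logits = V - V.max(axis=1, keepdims=True)            # numerical stability
    P = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    H = -(P * np.log(P + eps)).sum(axis=1)               # Shannon entropy per capsule
    return -H.sum()                                      # minimizing this maximizes entropy
```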
\subsubsection{Mean Square Loss}
To increase the diversity of the multiple interests of a user $u$, we push each interest vector farther away from their mean vector; that is, we enlarge the mean squared error between each specific user interest vector and the mean interest vector of user $u$. This makes the separation between interests more pronounced, as shown in Figure \ref{fig:model}, which helps regularize the diversity of the multi-interest vectors.
\begin{equation} \label{loss_}
loss_{mean} = -\sum_{k=1}^K\left\|v_k-\bar{v}\right\|^2, \quad \bar{v}=\frac{1}{K}\sum_{k=1}^K v_k
\end{equation}
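Reading the squared term as the squared Euclidean distance from the mean interest vector (the norm is left implicit in the text), the loss can be sketched as:

```python
import numpy as np

def mean_square_loss(V):
    """Negated squared distance of each interest vector from the mean interest vector."""
    v_bar = V.mean(axis=0)
    return -np.sum((V - v_bar) ** 2)   # minimizing this pushes interests apart from the mean
```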
\subsubsection{Diverse Loss}
Inspired by Yu \textit{et al.} \cite{DBLP:conf/ijcai/YuLZ11}, we utilize the sum of a pairwise difference to measure the total diversity. This is a metric to measure the effectiveness of multiple interests separation.
Thus, for a pair of vectors $v_i$ and
$v_j$, we measure their diversity using the angle between them
\begin{equation} \label{loss_1}
loss_{div} = 1-\frac{v^T_iv_j}{\|v_i\|\cdot\|v_j\|}
\end{equation}
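For more than two interests, the text measures total diversity as the sum of the pairwise terms; a sketch (summing over all unordered pairs is an assumption, since the equation shows a single pair):

```python
import numpy as np

def pair_diversity(v_i, v_j, eps=1e-9):
    # 1 - cosine similarity: 0 for parallel vectors, up to 2 for opposite ones.
    cos = float(v_i @ v_j) / (np.linalg.norm(v_i) * np.linalg.norm(v_j) + eps)
    return 1.0 - cos

def total_diversity(V):
    # Sum the pairwise measure over all unordered interest pairs ("total diversity").
    K = len(V)
    return sum(pair_diversity(V[i], V[j]) for i in range(K) for j in range(i + 1, K))
```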
\subsection{Joint Training}
A joint training framework optimizes two losses together: a softmax loss for the matching objective and one of the three diversity separator losses for interest capsule differentiation. The two objectives are combined for the final prediction.
\subsubsection{Softmax Loss}
After obtaining the user particular interest embedding vector $ v_u$ and the target item embedding $e_i$, we can compute the probability of the user $u$ interacting with the target item $i$ as
\begin{equation} \label{softmax}
P(i|u)=\frac{\exp(v_u^Te_i)}{\sum_{k \in I}\exp(v_u^Te_k)},
\end{equation}
The objective function of our model is to minimize the following negative log-likelihood
\begin{equation} \label{loss_softmax}
loss_{softmax} = -\sum_{u \in U}\sum_{i \in I}\log P(i|u).
\end{equation}
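A sketch of the matching objective with a full softmax over the item embedding table; in practice the model is trained with a sampled softmax (see the parameter configuration), so this exact form is for clarity only:

```python
import numpy as np

def log_prob(v_u, E, i):
    """log P(i|u) with a full softmax over the item embedding table E of shape (|I|, d)."""
    logits = E @ v_u
    logits = logits - logits.max()           # numerical stability
    return float(logits[i] - np.log(np.exp(logits).sum()))

def softmax_loss(v_u, E, targets):
    # Negative log-likelihood summed over the user's interacted items.
    return -sum(log_prob(v_u, E, i) for i in targets)
```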
\subsubsection{Separator Loss}
The separator losses are introduced in Section \ref{sepa}. The three loss functions are independent.
Joint loss training enables the network to simultaneously train for user vector accuracy and interest vector differentiation:
\begin{equation} \label{loss_entropy}
loss_{DRIM-entropy} = loss_{softmax} + \lambda loss_{entropy}
\end{equation}
\begin{equation} \label{loss_mean}
loss_{DRIM-mean} = loss_{softmax} + \lambda loss_{mean}
\end{equation}
\begin{equation} \label{loss_div}
loss_{DRIM-div} = loss_{softmax} + \lambda loss_{div}
\end{equation}
We call them DRIM-entropy, DRIM-mean, DRIM-div for short. These loss functions balance the accuracy
and diversity of the recommendation by a controllable factor $\lambda$ $\geq$ 0.
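The three joint objectives share one shape, so a single helper covers all variants; a trivial sketch:

```python
def drim_loss(loss_softmax, loss_separator, lam):
    """Joint objective: matching loss plus a lambda-weighted separator loss (lambda >= 0)."""
    if lam < 0:
        raise ValueError("lambda must be non-negative")
    return loss_softmax + lam * loss_separator
```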
\subsection{Serving}
At serving time, the user's behavior sequence and user profile are fed into the model, producing multiple representation vectors for each user. Then, these representation vectors are used to retrieve the top $N$ items via an approximate nearest neighbor approach.
These items constitute the final set of candidate
items for the matching stage of recommender systems.
Note that when a user takes new actions, his/her behavior sequence changes, and so do the corresponding user representation vectors; thus our model enables real-time inference for the recommendation matching stage.
\section{Experimental and Evaluation} \label{experimental_section}
In this section, we evaluate the performance of our methods against existing methods on two datasets. The statistics of the datasets are shown in Table \ref{datasets}.
\subsection{Datasets}
We use two datasets for evaluating performance, Amazon Books \footnote{http://jmcauley.ucsd.edu/data/amazon/} and OURS respectively.
Amazon Books is one of the most widely-used public datasets for e-commerce recommendations.
OURS is a dataset generated from the logs of an international e-commerce app, containing the historical behaviors of 1,100,000 randomly sampled users over 2 weeks. For Amazon Books, we only keep items that have been reviewed at least 20 times and users who have reviewed at least 20 items. For OURS, we only keep items that have been reviewed at least 10 times and users who have reviewed at least 10 items.
Since the main task of the matching stage is the next-item prediction problem, we choose it to evaluate the methods' performance. Each user's behavior sequence is sorted by time. We use the first 80\% of each user's sorted behavior sequence as the training set and the rest as the test set. In Amazon Books, each training sample is truncated at length 10; in OURS, each training sample is truncated at length 50.
\subsection{Evaluation Metrics}
We use the following metric to evaluate the performance of our proposed model. The hit rate (HR), a commonly used evaluation criterion, measures the percentage of users for whom the recommended items contain at least one correct item the user interacted with.
\begin{equation} \label{HR}
HR@N = \frac{\sum_{u \in U}I(N)}{|U|}
\end{equation}
where $|U|$ denotes the number of users in the test set and $I(N)$ denotes the indicator function of whether the target item occurs in the top $N$.
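HR@$N$ reduces to counting hits over users; a sketch (lists of per-user top-$N$ item IDs and a single target item per user are representational assumptions):

```python
def hit_rate_at_n(recommendations, targets, n):
    """HR@N: fraction of users whose target item appears in their top-n list."""
    hits = sum(1 for recs, t in zip(recommendations, targets) if t in recs[:n])
    return hits / len(targets)
```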
\begin{table}[!htbp]
\caption{Statistics of datasets}
\begin{center}
\begin{tabular}{cccccccc}
\toprule
\multicolumn{2}{c}{Datasets}&{\# users}&{\# items}& {\# interactions}\\
\midrule
\multicolumn{2}{c}{Amazon Books}&173,901&163,328&4,910,406\\
\multicolumn{2}{c}{OURS}& 1,100,000&531,830&18,221,613\\
\bottomrule
\end{tabular}
\label{datasets}
\end{center}
\end{table}
\subsection{Parameter Configuration}
The number of dimensions $d$ for user and item embeddings is set to 36. We draw 5 negative samples for the sampled softmax loss.
We use Adam optimizer \cite{DBLP:journals/corr/KingmaB14} with learning rate
lr = 0.001 for optimization.
\begin{table*}[htbp]
\centering
\caption{Performance comparison on the two datasets}
\begin{tabular}{ccccccccccccc}
\toprule
\multicolumn{2}{c}{Dataset} & Metric & \multicolumn{2}{c}{Most Popular} & \multicolumn{2}{c}{YoutubeDNN} & \multicolumn{1}{c}{MIND} &
\multicolumn{1}{c}{COMIREC} &
\multicolumn{1}{c}{DRIM-entropy} &
\multicolumn{1}{c}{DRIM-mean} &
\multicolumn{1}{c}{DRIM-div} & \\
\midrule
\multicolumn{2}{c}{Amazon Books} & HR@50 & \multicolumn{2}{c}{0.0155}&
\multicolumn{2}{c}{0.4749} &
\multicolumn{1}{c}{0.5796} &0.7892&\textbf{ 0.8062} & 0.6463 & 0.7073 & \\
&& HR@100 &\multicolumn{2}{c}{0.0219}& \multicolumn{2}{c}{0.5848} & \multicolumn{1}{c}{0.7136} &0.8785& \textbf{0.8957 }& 0.7275 & 0.8298 &\\
\midrule
\multicolumn{2}{p{4.25em}}{OURS} & HR@50 &\multicolumn{2}{c}{0.0136}& \multicolumn{2}{c}{0.2240} & \multicolumn{1}{c}{0.3436} & 0.3542 &0.3540& 0.3672 & \textbf{0.4145} & \\
&& HR@100&\multicolumn{2}{c}{0.0173} & \multicolumn{2}{c}{0.4030} & \multicolumn{1}{c}{0.4235} &0.4573& 0.4351 & 0.4630 &\textbf{ 0.5663} &\\
\bottomrule
\end{tabular}%
\label{tab:results}%
\end{table*}%
\begin{figure*}[!tb]
\centering
\subfigure[MIND]{
\includegraphics[width=0.23\textwidth]{figs/fig_mind.pdf} }
\subfigure[DRIM-entropy]{
\includegraphics[width=0.23\textwidth]{figs/fig_entropy.pdf}
}
\subfigure[DRIM-mean]{
\includegraphics[width=0.23\textwidth]{figs/fig_mean.pdf}
}
\subfigure[DRIM-div]{
\includegraphics[width=0.23\textwidth]{figs/fig_div.pdf}
}
\caption{The dimensionality reduction figures of Amazon Books on our proposed DRIM and MIND.}
\label{fig:tsne_amazon}
\end{figure*}
\begin{figure*}[!tb]
\centering
\subfigure[MIND]{
\includegraphics[width=0.23\textwidth]{figs/ae_fig_mind.pdf}
}
\subfigure[DRIM-entropy]{
\includegraphics[width=0.23\textwidth]{figs/ae_fig_entropy.pdf}
}
\subfigure[DRIM-mean]{
\includegraphics[width=0.23\textwidth]{figs/ae_fig_mean.pdf}
}
\subfigure[DRIM-div]{
\includegraphics[width=0.23\textwidth]{figs/ae_fig_div.pdf}
}
\caption{The dimensionality reduction figures of OURS on our proposed DRIM and MIND.}
\label{fig:tsne_ali}
\end{figure*}
\subsection{Comparing Methods}
We compare our method with the following baselines.
We compare our proposed models, DRIM-entropy, DRIM-mean, and DRIM-div, with state-of-the-art models. In our experimental setting, each model produces predictions for the users in the test set.
\begin{itemize}
\item \textbf{MostPopular} is a traditional recommendation method that only recommends items to a user according to the popularity.
\item \textbf{YouTube DNN \cite{DBLP:conf/recsys/CovingtonAS16}} is one of the most successful deep learning model used for industrial recommender systems.
\item \textbf{MIND \cite{DBLP:conf/cikm/LiLWXZHKCLL19}} is related to our model. It designs a multi-interest extractor layer based on a capsule network, for clustering past behaviors and extracting interests.
\item \textbf{ComiRec \cite{DBLP:conf/kdd/CenZZZYT20}} is a recent state-of-the-art model. This model integrates the multi-interest components and controllable aggregation module in unified recommender systems.
\end{itemize}
\subsection{Experimental Results}
For multiple interest models, each user representation vector independently retrieves top-$N$ candidate items. Thus, our model retrieves a total $K*N$ items for each user. We sort the items by the inner product of the item embedding and the corresponding user interest representation vector. After sorting, top-$N$ items from these $K*N$ items are viewed as the final candidate items of the models.
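The retrieval-then-merge step can be sketched with exact inner-product search standing in for the ANN index; scoring each candidate by its best-matching interest is an assumption consistent with the sorting rule above:

```python
import numpy as np

def final_candidates(V_u, E, n):
    """Retrieve top-n items per interest vector, then keep the overall top-n of the union."""
    scored = {}
    for v in V_u:
        scores = E @ v
        for idx in np.argsort(-scores)[:n]:      # exact search as a stand-in for ANN
            idx = int(idx)
            scored[idx] = max(scored.get(idx, float("-inf")), float(scores[idx]))
    return sorted(scored, key=lambda i: -scored[i])[:n]
```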
Table \ref{tab:results} summarizes the performance of our model and all baselines on the two datasets in terms of HR@$N$ ($N$=50, 100). Our model surpasses all of the baselines by a wide margin on both datasets. The non-personalized method, MostPopular, is beaten by all other methods, revealing the power of personalized features for improving the matching stage of recommender systems. We also observe that methods employing multiple user representation vectors perform better than those employing a single user representation vector. Therefore, multi-interest modeling is effective for modeling a user's diverse interests as well as boosting recommendation accuracy. Moreover, ComiRec outperforms MIND due to its modified dynamic routing method, which makes user representation vectors more distinguishable.
\subsection{Model Visualization}
We take a closer look at some trained user interest vectors.
For demonstration purposes, we set the number of interests to 2 in this part.
We first apply t-Distributed Stochastic Neighbor Embedding (t-SNE) \cite{DBLP:journals/symmetry/HusnainMMLCO19} to reduce user interest vector dimension into 2 so each user has 2 points mapped from interest vectors.
We connect the 2 points of the same user by a line segment and plot the points and segments in Figures \ref{fig:tsne_amazon} and \ref{fig:tsne_ali}. The greater the segment length, the better the differentiation between the interest vectors. Most line segments in Figures \ref{fig:tsne_amazon}(a) and \ref{fig:tsne_ali}(a) are short, which means the embeddings from MIND are not well separated. In stark contrast, there are many more long segments in panels (b), (c), and (d) of Figures \ref{fig:tsne_amazon} and \ref{fig:tsne_ali}, which means the embeddings generated by DRIM are better separated.
\section{Conclusion}
\label{conclusion}
In this paper, we propose a novel method of Diversity Regularized Interests Modeling for recommender systems, namely DRIM, to explore users' diverse interests and regularize the diversity of the multiple interests for the matching stage in e-commerce recommendation. Specifically, we design a multi-interest extractor layer with a variant of dynamic routing to extract users' diverse interests and a diversity regularized separator layer with three regularization strategies to regularize the diversity of interests. Empirical study indicates that DRIM achieves superior performance on public benchmarks.
\bibliographystyle{named}